Linear Matrix Inequalities in

System and Control Theory

SIAM Studies in Applied Mathematics

This series of monographs focuses on mathematics and its applications to problems
of current concern to industry, government, and society. These monographs will be of
interest to applied mathematicians, numerical analysts, statisticians, engineers, and
scientists who have an active need to learn useful methodology.

Series List
Vol. 1 Lie-Bäcklund Transformations in Applications
Robert L. Anderson and Nail H. Ibragimov
Vol. 2 Methods and Applications of Interval Analysis
Ramon E. Moore
Vol. 3 Ill-Posed Problems for Integrodifferential Equations in Mechanics and
Electromagnetic Theory
Frederick Bloom
Vol. 4 Solitons and the Inverse Scattering Transform
Mark J. Ablowitz and Harvey Segur
Vol. 5 Fourier Analysis of Numerical Approximations of Hyperbolic Equations
Robert Vichnevetsky and John B. Bowles
Vol. 6 Numerical Solution of Elliptic Problems
Garrett Birkhoff and Robert E. Lynch
Vol. 7 Analytical and Numerical Methods for Volterra Equations
Peter Linz
Vol. 8 Contact Problems in Elasticity: A Study of Variational Inequalities and
Finite Element Methods
N. Kikuchi and J. T. Oden
Vol. 9 Augmented Lagrangian and Operator-Splitting Methods in Nonlinear
Mechanics
Roland Glowinski and P. Le Tallec
Vol. 10 Boundary Stabilization of Thin Plates
John E. Lagnese
Vol. 11 Electro-Diffusion of Ions
Isaak Rubinstein
Vol. 12 Mathematical Problems in Linear Viscoelasticity
Mauro Fabrizio and Angelo Morro
Vol. 13 Interior-Point Polynomial Algorithms in Convex Programming
Yurii Nesterov and Arkadii Nemirovskii
Vol. 14 The Boundary Function Method for Singular Perturbation Problems
Adelaida B. Vasil’eva, Valentin F. Butuzov, and Leonid V. Kalachev
Vol. 15 Linear Matrix Inequalities in System and Control Theory
Stephen Boyd, Laurent El Ghaoui, Eric Feron, and Venkataramanan
Balakrishnan

Stephen Boyd, Laurent El Ghaoui,
Eric Feron, and Venkataramanan Balakrishnan

Linear Matrix Inequalities in
System and Control Theory

Society for Industrial and Applied Mathematics
Philadelphia

Copyright © 1994 by the Society for Industrial and Applied Mathematics.

All rights reserved. No part of this book may be reproduced, stored, or transmitted in any
manner without the written permission of the Publisher. For information, write to the Society
for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia,
Pennsylvania 19104-2688.

The royalties from the sales of this book are being placed in a fund to help students
attend SIAM meetings and other SIAM related activities. This fund is administered by
SIAM and qualified individuals are encouraged to write directly to SIAM for guidelines.

Library of Congress Cataloging-in-Publication Data

Linear matrix inequalities in system and control theory / Stephen Boyd
. . . [et al.].
p. cm. -- (SIAM studies in applied mathematics ; vol. 15)
Includes bibliographical references and index.
ISBN 0-89871-334-X
1. Control theory. 2. Matrix inequalities. 3. Mathematical
optimization. I. Boyd, Stephen P. II. Series: SIAM studies in
applied mathematics : 15.
QA402.3.L489 1994
515’.64--dc20 94-10477

Contents

Preface vii
Acknowledgments ix

1 Introduction 1
1.1 Overview 1
1.2 A Brief History of LMIs in Control Theory 2
1.3 Notes on the Style of the Book 4
1.4 Origin of the Book 5

2 Some Standard Problems Involving LMIs 7
2.1 Linear Matrix Inequalities 7
2.2 Some Standard Problems 9
2.3 Ellipsoid Algorithm 12
2.4 Interior-Point Methods 14
2.5 Strict and Nonstrict LMIs 18
2.6 Miscellaneous Results on Matrix Inequalities 22
2.7 Some LMI Problems with Analytic Solutions 24
Notes and References 27

3 Some Matrix Problems 37
3.1 Minimizing Condition Number by Scaling 37
3.2 Minimizing Condition Number of a Positive-Definite Matrix 38
3.3 Minimizing Norm by Scaling 38
3.4 Rescaling a Matrix Positive-Definite 39
3.5 Matrix Completion Problems 40
3.6 Quadratic Approximation of a Polytopic Norm 41
3.7 Ellipsoidal Approximation 42
Notes and References 47

4 Linear Differential Inclusions 51
4.1 Differential Inclusions 51
4.2 Some Specific LDIs 52
4.3 Nonlinear System Analysis via LDIs 54
Notes and References 56

5 Analysis of LDIs: State Properties 61
5.1 Quadratic Stability 61
5.2 Invariant Ellipsoids 68

Notes and References 72

6 Analysis of LDIs: Input/Output Properties 77
6.1 Input-to-State Properties 77
6.2 State-to-Output Properties 84
6.3 Input-to-Output Properties 89
Notes and References 96

7 State-Feedback Synthesis for LDIs 99
7.1 Static State-Feedback Controllers 99
7.2 State Properties 100
7.3 Input-to-State Properties 104
7.4 State-to-Output Properties 107
7.5 Input-to-Output Properties 109
7.6 Observer-Based Controllers for Nonlinear Systems 111
Notes and References 112

8 Lur'e and Multiplier Methods 119
8.1 Analysis of Lur'e Systems 119
8.2 Integral Quadratic Constraints 122
8.3 Multipliers for Systems with Unknown Parameters 124
Notes and References 126

9 Systems with Multiplicative Noise 131
9.1 Analysis of Systems with Multiplicative Noise 131
9.2 State-Feedback Synthesis 134
Notes and References 136

10 Miscellaneous Problems 141
10.1 Optimization over an Affine Family of Linear Systems 141
10.2 Analysis of Systems with LTI Perturbations 143
10.3 Positive Orthant Stabilizability 144
10.4 Linear Systems with Delays 144
10.5 Interpolation Problems 145
10.6 The Inverse Problem of Optimal Control 147
10.7 System Realization Problems 148
10.8 Multi-Criterion LQG 150
10.9 Nonconvex Multi-Criterion Quadratic Problems 151
Notes and References 152

Notation 157
List of Acronyms 159
Bibliography 161
Index 187

Preface

The basic topic of this book is solving problems from system and control theory using convex optimization. We show that a wide variety of problems arising in system and control theory can be reduced to a handful of standard convex and quasiconvex optimization problems that involve matrix inequalities. For a few special cases there are "analytic solutions" to these problems, but our main point is that they can be solved numerically in all cases. These standard problems can be solved in polynomial-time (by, e.g., the ellipsoid algorithm of Shor, Nemirovskii, and Yudin), and so are tractable, at least in a theoretical sense. Recently developed interior-point methods for these standard problems have been found to be extremely efficient in practice. Therefore, we consider the original problems from system and control theory as solved.

This book is primarily intended for the researcher in system and control theory, but can also serve as a source of application problems for researchers in convex optimization. Although we believe that the methods described in this book have great practical value, we should warn the reader whose primary interest is applied control engineering. This is a research monograph: We present no specific examples or numerical results, and we make only brief comments about the implications of the results for practical control engineering. To put it in a more positive light, we hope that this book will later be considered as the first book on the topic, not the most readable or accessible.

The background required of the reader is knowledge of basic system and control theory and an exposure to optimization. Sontag's book Mathematical Control Theory [Son90] is an excellent survey. Further background material is covered in the texts Linear Systems [Kai80] by Kailath, Nonlinear Systems Analysis [Vid92] by Vidyasagar, Optimal Control: Linear Quadratic Methods [AM90] by Anderson and Moore, and Convex Analysis and Minimization Algorithms I [HUL93] by Hiriart-Urruty and Lemaréchal.

We also highly recommend the book Interior-point Polynomial Algorithms in Convex Programming [NN94] by Nesterov and Nemirovskii as a companion to this book. The reader will soon see that their ideas and methods play a critical role in the basic idea presented in this book.


Acknowledgments

We thank A. Nemirovskii for encouraging us to write this book. We are grateful to S. Hall, M. Gevers, and A. Packard for introducing us to multiplier methods, realization theory, and state-feedback synthesis methods, respectively. It is a special pleasure to thank V. Yakubovich for his comments and suggestions.

This book was greatly improved by the suggestions of J. Abedor, P. Dankoski, J. Doyle, G. Franklin, T. Kailath, R. Kosut, I. Petersen, E. Pyatnitskii, M. Rotea, M. Safonov, S. Savastiouk, A. Tits, L. Vandenberghe, and J. C. Willems.

The research reported in this book was supported in part by AFOSR (under F49620-92-J-0013), NSF (under ECS-9222391), and ARPA (under F49620-93-1-0085). L. El Ghaoui and E. Feron were supported in part by the Délégation Générale pour l'Armement. E. Feron was supported in part by the Charles Stark Draper Career Development Chair at MIT. V. Balakrishnan was supported in part by the Control and Dynamical Systems Group at Caltech and by NSF (under NSFD CDR-8803012).

This book was typeset by the authors using LaTeX, and many macros originally written by Craig Barratt.

Stephen Boyd, Stanford, California
Laurent El Ghaoui, Paris, France
Eric Feron, Cambridge, Massachusetts
Venkataramanan Balakrishnan, College Park, Maryland


Linear Matrix Inequalities in System and Control Theory


Chapter 1

Introduction

1.1 Overview

The aim of this book is to show that we can reduce a very wide variety of problems arising in system and control theory to a few standard convex or quasiconvex optimization problems involving linear matrix inequalities (LMIs). Since these resulting optimization problems can be solved numerically very efficiently using recently developed interior-point methods, our reduction constitutes a solution to the original problem, certainly in a practical sense, but also in several other senses as well. In comparison, the more conventional approach is to seek an analytic or frequency-domain solution to the matrix inequalities.

The types of problems we consider include:

• matrix scaling problems, e.g., minimizing condition number by diagonal scaling
• construction of quadratic Lyapunov functions for stability and performance analysis of linear differential inclusions
• joint synthesis of state-feedback and quadratic Lyapunov functions for linear differential inclusions
• synthesis of state-feedback and quadratic Lyapunov functions for stochastic and delay systems
• synthesis of Lur'e-type Lyapunov functions for nonlinear systems
• optimization over an affine family of transfer matrices, including synthesis of multipliers for analysis of linear systems with unknown parameters
• positive orthant stability and state-feedback synthesis
• optimal system realization
• interpolation problems, including scaling
• multicriterion LQG/LQR
• inverse problem of optimal control

In some cases, we are describing known, published results; in others, we are extending known results. In many cases, however, it seems that the results are new.

By scanning the list above or the table of contents, the reader will see that Lyapunov's methods will be our main focus. Here we have a secondary goal, beyond showing that many problems from Lyapunov theory can be cast as convex or quasiconvex problems. This is to show that Lyapunov's methods, which are traditionally applied to the analysis of system stability, can just as well be used to find bounds on system performance, provided we do not insist on an "analytic solution".

1.2 A Brief History of LMIs in Control Theory

The history of LMIs in the analysis of dynamical systems goes back more than 100 years. The story begins in about 1890, when Lyapunov published his seminal work introducing what we now call Lyapunov theory. He showed that the differential equation

    (d/dt) x(t) = A x(t)                                            (1.1)

is stable (i.e., all trajectories converge to zero) if and only if there exists a positive-definite matrix P such that

    A^T P + P A < 0.                                                (1.2)

The requirement P > 0, A^T P + P A < 0 is what we now call a Lyapunov inequality on P, which is a special form of an LMI. Lyapunov also showed that this first LMI could be explicitly solved. Indeed, we can pick any Q = Q^T > 0 and then solve the linear equation A^T P + P A = -Q for the matrix P, which is guaranteed to be positive-definite if the system (1.1) is stable. In summary, the first LMI used to analyze stability of a dynamical system was the Lyapunov inequality (1.2), which can be solved analytically (by solving a set of linear equations).

The next major milestone occurs in the 1940's. Lur'e, Postnikov, and others in the Soviet Union applied Lyapunov's methods to some specific practical problems in control engineering, especially, the problem of stability of a control system with a nonlinearity in the actuator. Although they did not explicitly form matrix inequalities, their stability criteria have the form of LMIs. These inequalities were reduced to polynomial inequalities which were then checked "by hand" (for, needless to say, small systems). Nevertheless they were justifiably excited by the idea that Lyapunov's theory could be applied to important (and difficult) practical problems in control engineering. From the introduction of Lur'e's 1951 book [Lur57] we find:

    This book represents the first attempt to demonstrate that the ideas expressed 60 years ago by Lyapunov, which even comparatively recently appeared to be remote from practical application, are now about to become a real medium for the examination of the urgent problems of contemporary engineering.

In summary, Lur'e and others were the first to apply Lyapunov's methods to practical control engineering problems. The LMIs that resulted were solved analytically, by hand. Of course this limited their application to small (second, third order) systems.

The next major breakthrough came in the early 1960's, when Yakubovich, Popov, Kalman, and other researchers succeeded in reducing the solution of the LMIs that arose in the problem of Lur'e to simple graphical criteria, using what we now call the positive-real (PR) lemma (see §2.7.2). This resulted in the celebrated Popov criterion, circle criterion, Tsypkin criterion, and many variations. These criteria could be applied to higher order systems, but did not gracefully or usefully extend to systems containing more than one nonlinearity. From the point of view of our story (LMIs in control theory), the contribution was to show how to solve a certain family of LMIs by graphical methods.
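As an aside, the analytic solution described at the start of this section — pick any Q = Q^T > 0 and solve the linear equation A^T P + P A = -Q — is easy to carry out with standard numerical software. The following minimal sketch is not part of the original text; it uses Python with SciPy's Lyapunov solver, and the matrix A is an arbitrary stable example chosen only for illustration.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # An example stable matrix (eigenvalues -1 and -2); illustration only.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])

    # Pick any Q = Q^T > 0; the identity is the simplest choice.
    Q = np.eye(2)

    # solve_continuous_lyapunov(M, C) solves M X + X M^T = C, so calling it
    # with M = A^T returns P satisfying A^T P + P A = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)

    print("eigenvalues of P:", np.linalg.eigvalsh(P))        # all positive when A is stable
    print("residual:", np.linalg.norm(A.T @ P + P @ A + Q))  # approximately zero

If A were not stable, the computed P (when it exists) would fail to be positive-definite, which is exactly Lyapunov's criterion.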

The important role of LMIs in control theory was already recognized in the early 1960's, especially by Yakubovich [Yak62, Yak64, Yak67]. This is clear simply from the titles of some of his papers from 1962–5, e.g., The solution of certain matrix inequalities in automatic control theory (1962), and The method of matrix inequalities in the stability theory of nonlinear control systems (1965, English translation 1967).

The PR lemma and extensions were intensively studied in the latter half of the 1960s, and were found to be related to the ideas of passivity, the small-gain criteria introduced by Zames and Sandberg, and quadratic optimal control. By 1970, it was known that the LMI appearing in the PR lemma could be solved not only by graphical means, but also by solving a certain algebraic Riccati equation (ARE). In a 1971 paper [Wil71b] on quadratic optimal control, J. C. Willems is led to the LMI

    [ A^T P + P A + Q    P B + C^T ]
    [ B^T P + C          R         ]  >= 0,                        (1.3)

and points out that it can be solved by studying the symmetric solutions of the ARE

    A^T P + P A - (P B + C^T) R^{-1} (B^T P + C) + Q = 0,

which in turn can be found by an eigendecomposition of a related Hamiltonian matrix. (See §2.7.2 for details.) (Most control researchers and engineers consider the Riccati equation to have an "analytic" solution, since the standard algorithms that solve it are very predictable in terms of the effort required, which depends almost entirely on the problem size and not the particular problem data. Of course it cannot be solved exactly in a finite number of arithmetic steps for systems of fifth and higher order.) This connection had been observed earlier in the Soviet Union, where the ARE was called the Lur'e resolving equation (see [Yak88]).

In Willems' 1971 paper we find the following striking quote:

    The basic importance of the LMI seems to be largely unappreciated. It would be interesting to see whether or not it can be exploited in computational algorithms, for example.

(Here Willems refers to the specific LMI (1.3), and not the more general form that we consider in this book.) So by 1971, researchers knew several methods for solving special types of LMIs: direct (for small systems), graphical methods, and by solving Lyapunov or Riccati equations. From our point of view, these methods are all "closed-form" or "analytic" solutions that can be used to solve special forms of LMIs. Still, Willems' suggestion that LMIs might have some advantages in computational algorithms (as compared to the corresponding Riccati equations) foreshadows the next chapter in the story.

The next major advance (in our view) was the simple observation that:

    The LMIs that arise in system and control theory can be formulated as convex optimization problems that are amenable to computer solution.

Although this is a simple observation, it has some important consequences, the most important of which is that we can reliably solve many LMIs for which no "analytic solution" has been found (or is likely to be found).

This observation was made explicitly by several researchers. Pyatnitskii and Skorodinskii [PS82] were perhaps the first researchers to make this point, clearly and completely. They reduced the original problem of Lur'e (extended to the case of multiple nonlinearities) to a convex optimization problem involving LMIs, which they then solved using the ellipsoid algorithm. (This problem had been studied before, but the "solutions" involved an arbitrary scaling matrix.) Pyatnitskii and Skorodinskii were the first, as far as we know, to formulate the search for a Lyapunov function as a convex optimization problem, and then apply an algorithm guaranteed to solve the optimization problem.
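To make this observation concrete with present-day tools (which, of course, postdate this book), here is a minimal sketch of the simultaneous Lyapunov problem — find a single P = P^T > 0 with A_i^T P + P A_i < 0 for several matrices A_i — posed directly as a convex feasibility problem. It uses the CVXPY modeling package as an assumed stand-in for a generic LMI solver; the matrices A1, A2 are invented for illustration, and a small margin eps replaces the strict inequalities.

    import cvxpy as cp
    import numpy as np

    # Two example stable matrices (illustration only); a common Lyapunov
    # matrix for both is the kind of LMI problem that arises from Lur'e
    # systems with multiple nonlinearities.
    A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
    A2 = np.array([[0.0, 1.0], [-3.0, -2.0]])

    n = 2
    eps = 1e-6
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n)]
    for A in (A1, A2):
        L = A.T @ P + P @ A
        # Symmetrize explicitly (L is already symmetric in exact arithmetic,
        # since P is symmetric) so the modeling layer accepts the constraint.
        constraints.append(0.5 * (L + L.T) << -eps * np.eye(n))

    # Pure feasibility problem: the objective is irrelevant.
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)   # "optimal" here means the LMIs are feasible
    print(P.value)

Any semidefinite-programming solver can be used behind such a formulation; the ellipsoid algorithm would also work, only far more slowly.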

We should also mention several precursors. In a 1976 paper, Horisberger and Belanger [HB76] had remarked that the existence of a quadratic Lyapunov function that simultaneously proves stability of a collection of linear systems is a convex problem involving LMIs. And of course, the idea of having a computer search for a Lyapunov function was not new—it appears, for example, in a 1965 paper by Schultz et al. [SSHJ65].

The final chapter in our story is quite recent and of great practical importance: the development of powerful and efficient interior-point methods to solve the LMIs that arise in system and control theory. In 1984, N. Karmarkar introduced a new linear programming algorithm that solves linear programs in polynomial-time, like the ellipsoid method, but in contrast to the ellipsoid method, is also very efficient in practice. Karmarkar's work spurred an enormous amount of work in the area of interior-point methods for linear programming (including the rediscovery of efficient methods that were developed in the 1960s but ignored). Essentially all of this research activity concentrated on algorithms for linear and (convex) quadratic programs. Then in 1988, Nesterov and Nemirovskii developed interior-point methods that apply directly to convex problems involving LMIs, and in particular, to the problems we encounter in this book. Although there remains much to be done in this area, several interior-point algorithms for LMI problems have been implemented and tested on specific families of LMIs that arise in control theory, and found to be extremely efficient.

A summary of key events in the history of LMIs in control theory is then:

• 1890: First LMI appears; analytic solution of the Lyapunov LMI via Lyapunov equation.
• 1940's: Application of Lyapunov's methods to real control engineering problems. Small LMIs solved "by hand".
• Early 1960's: PR lemma gives graphical techniques for solving another family of LMIs.
• Late 1960's: Observation that the same family of LMIs can be solved by solving an ARE.
• Early 1980's: Recognition that many LMIs can be solved by computer via convex programming.
• Late 1980's: Development of interior-point algorithms for LMIs.

It is fair to say that Yakubovich is the father of the field, and Lyapunov the grandfather of the field.

1.3 Notes on the Style of the Book

We use a very informal mathematical style, e.g., we often fail to mention regularity or other technical conditions. Every statement is to be interpreted as being true modulo appropriate technical conditions (that in most cases are trivial to figure out). We are very informal, perhaps even cavalier, in our reduction of a problem to an optimization problem. We sometimes skip "details" that would be important if the optimization problem were to be solved numerically.

As an example, it may be necessary to add constraints to the optimization problem for normalization or to ensure boundedness. We do not discuss initial guesses for the optimization problems, even though good ones may be available. Therefore, the reader who wishes to implement an algorithm that solves a problem considered in this book should be prepared to make a few modifications or additions to our description of the "solution".

The list of problems that we consider is meant only to be representative, and certainly not exhaustive. To avoid excessive repetition, our treatment of problems becomes more terse as the book progresses. In the first chapter on analysis of linear differential inclusions, we describe many variations on problems (e.g., computing bounds on margins and decay rates); in later chapters, we describe fewer and fewer variations, assuming that the reader can work out the extensions.

We do not pursue any theoretical aspects of reducing a problem to a convex problem involving matrix inequalities. For example, once we reduce a problem arising in control theory to a convex program, we can consider various dual optimization problems, lower bounds for the problem, and then interpret in system or control theoretic terms the optimality conditions for the resulting convex problem. Presumably these dual problems and lower bounds can be given interesting system-theoretic interpretations. Another fascinating topic that could be explored is the relation between system and control theory duality and convex programming duality.

We mostly consider continuous-time systems, and assume that the reader can translate the results from the continuous-time case to the discrete-time case. We switch to discrete-time systems when we consider system realization problems (which almost always arise in this form) and also when we consider stochastic systems (to avoid the technical details of stochastic differential equations).

To lighten the notation, we use the standard convention of dropping the time argument from the variables in differential equations. Thus, ẋ = Ax is our short form for dx/dt = Ax(t). Here A is a constant matrix; when we encounter time-varying coefficients, we will explicitly show the time dependence, as in ẋ = A(t)x. In a similar way, we drop the dummy variable from definite integrals, writing for example ∫_0^T u^T y dt for ∫_0^T u(t)^T y(t) dt. To reduce the number of parentheses required, we adopt the convention that the operators Tr (trace of a matrix) and E (expected value) have lower precedence than multiplication, transpose, and so on. Thus, Tr A^T B means Tr(A^T B).

The appendix contains a list of notation and a list of acronyms used in the book. We apologize to the reader for the seven new acronyms we introduce. Each chapter concludes with a section entitled Notes and References, in which we hide proofs, precise statements, elaborations, and bibliography and historical notes.

1.4 Origin of the Book

This book started out as a section of the paper Method of Centers for Minimizing Generalized Eigenvalues, by Boyd and El Ghaoui [BE93], but grew too large to be a section. For a few months it was a manuscript (that presumably would have been submitted for publication as a paper) entitled Generalized Eigenvalue Problems Arising in Control Theory. Then Feron, and later Balakrishnan, started adding material, and soon it was clear that we were writing a book, not a paper. The order of the authors' names reflects this history.

.

. The inequality symbol in (2.e. it can represent a wide variety of convex constraints on x. . linear inequalities. . . “the LMI F (1) (x) > 0.1) is a convex constraint on x.1) means that F (x) is positive- definite.3) S(x)T R(x) 7 .1) is equivalent to a set of n polynomial inequalities in x. i. are given. . Nonlinear (convex) inequalities are converted to LMI form using Schur complements. When the matrices Fi are diagonal... F (p) (x)) > 0. Therefore we will make no distinction between a set of LMIs and a single LMI. but a precise statement of the relation is a bit involved. the LMI (2. F (p) (x)) > 0”. .1 Linear Matrix Inequalities A linear matrix inequality (LMI) has the form m ∆ X F (x) = F0 + xi Fi > 0. . Multiple LMIs F (1) (x) > 0.e. the set {x | F (x) > 0} is convex. (convex) quadratic inequalities. .1) may seem to have a specialized form. . the LMI F (x) > 0 is just a set of linear inequalities. F (p) (x) > 0” will mean “the LMI diag(F (1) (x). Although the LMI (2. the leading principal minors of F (x) must be positive. such as Lyapunov and convex quadratic matrix inequalities. i = 0. .. i. (2.Chapter 2 Some Standard Problems Involving LMIs 2. (2. ..5. . can all be cast in the form of an LMI.e. . The basic idea is as follows: the LMI " # Q(x) S(x) > 0. We will also encounter nonstrict LMIs. . The LMI (2. In particular. . i. F (p) (x) > 0 can be expressed as the single LMI diag(F (1) (x).2) are closely related.2) The strict LMI (2. .e. In the next few sections we consider strict LMIs. . (2. Of course. which have the form F (x) ≥ 0. . .1) i=1 where x ∈ Rm is the variable and the symmetric matrices Fi = FiT ∈ Rn×n . so we defer it to §2. uT F (x)u > 0 for all nonzero u ∈ Rn .1) and the nonstrict LMI (2. . matrix norm inequalities. and constraints that arise in control theory. m. i.

. is equivalent to R(x) > 0.8 Chapter 2 Some Standard Problems Involving LMIs where Q(x) = Q(x)T . Pm be a basis for symmetric n × n matrices (m = n(n + 1)/2). the (maximum singular value) matrix norm constraint kZ(x)k < 1. as follows.5) is readily put in the form (2. Copyright ° .4. (Of course. (2. is handled by introducing a new (slack) matrix variable X = X T ∈ Rp×p . where c(x) ∈ Rn and P (x) = P (x)T ∈ Rn×n depend affinely on x. is represented as the LMI " # I Z(x) >0 Z(x)T I (since kZk < 1 is equivalent to I − ZZ T > 0). As an example. The phrase “the LMI AT P + P A < 0 in P ” means that the matrix P is a variable. > 0. (2. where P (x) = P (x)T ∈ Rn×n and S(x) ∈ Rn×p depend affinely on x. P (x) > 0. is expressed as the LMI " # P (x) c(x) > 0. P (x) > 0. the Lyapunov inequality (2.3).4. S(x) P (x) Many other convex constraints on x can be expressed in the form of an LMI.4) can be represented as the LMI (2. and S(x) depend affinely on x. Q(x) − S(x)R(x)−1 S(x)T > 0. and the LMI (in x and X): " # X S(x)T Tr X < 1.5) where A ∈ Rn×n is given and P = P T is the variable. .1 Matrices as variables We will often encounter problems in which the variables are matrices. see the Notes and References. As another related example.g. Note that the case q = 1 reduces to a general convex quadratic inequality on x. e. see §2. but instead make clear which matrices are the variables. R(x) = R(x)T . (2. the constraint Tr S(x)T P (x)−1 S(x) < 1. in addition to saving notation.1). The constraint c(x)T P (x)−1 c(x) < 1. the Lya- punov inequality AT P + P A < 0. 2. c(x)T 1 More generally.5).) Leaving LMIs in a condensed form such as (2. where Z(x) ∈ Rp×q and depends affinely on x.. In this case we will not write out the LMI explicitly in the form F (x) > 0. . Let P1 . consider the quadratic matrix inequality AT P + P A + P BR−1 B T P + Q < 0.6) c 1994 by the Society for Industrial and Applied Mathematics. may lead to more efficient computation.1. the set of nonlinear inequalities (2.4) In other words. . Then take F0 = 0 and Fi = −AT Pi − Pi A.

2 Linear equality constraints In some problems we will encounter linear equality constraints on the variables. consider the “simultaneous Lyapunov stability prob- lem” (which we will see in §5. This electronic version is for personal use and may not be duplicated or distributed. e. QL ≥ 0. the corresponding LMI Problem (LMIP) is to find xfeas such that F (xfeas ) > 0 or determine that the LMI is infeasible. Q = QT . Of course we can eliminate the equality constraint to write (2. −AT Pi − Pi A) for i = 1. Then take F0 = diag(P0 . P > 0. . . L. (2.2. .) Of course. .7) where P ∈ Rk×k is the variable. . . . leaving any required elimination of equality constraints to the reader. which is not obvious. Let P1 . L. Pm be a basis for symmetric k×k matrices with trace zero (m = (k(k + 1)/2) − 1) and let P0 be a symmetric k × k matrix with Tr P0 = 1. B. m and Tr GF0 ≤ 0. this is a convex feasibility problem. . . . ¡ ¢ Q0 ≥ 0.6) is convex in P . 2. . i = 1. such that L X Qi ATi + Ai Qi .1. . AT P + P A < 0. m. . . or determine that no such P exists.1 LMI problems Given an LMI F (x) > 0. . . this means: Find a nonzero G ≥ 0 such that Tr GFi = 0 for i = 1. . We will say “solving the LMI F (x) > 0” to mean solving the corresponding LMIP. . . 2. i = 1. As an example of an LMIP.7) in the form F (x) > 0. QL . −AT P0 − P0 A) and Fi = diag(Pi . (By duality. not all zero.g. . Tr P = 1. and P = P T is the variable.1): We are given Ai ∈ Rn×n . .2 Some Standard Problems 9 where A.2 Some Standard Problems Here we list some common convex and quasiconvex problems that we will encounter in the sequel. ATi P + P Ai < 0. . . see the Notes and References.8) i=1 which is another (nonstrict) LMIP. We will refer to constraints such as (2. R = RT > 0 are given matrices of appropriate sizes.7) as LMIs. .2. . Determining that no such P exists is equivalent to finding Q0 . . Q0 = (2. BT P R This representation also clearly shows that the quadratic matrix inequality (2. It can be expressed as the linear matrix inequality " # −AT P − P A − Q P B > 0. . 2. . . Note that this is a quadratic matrix inequality in the variable P . and need to find P satisfying the LMI P > 0.

.3. and C ∈ Rm×n are given. subject to an LMI constraint (or determine that the constraint is infeasible). From our remarks above.e. this problem reduces to the general linear programming problem: minimizing the linear function cT x subject to a set of linear inequalities on x. B(x) > 0. i.2. Another equivalent form for the EVP is: minimize λ subject to A(x. this EVP can also be expressed in terms of the associated quadratic matrix inequality: minimize γ subject to AT P + P A + C T C + γ −1 P BB T P < 0. and P and γ are the optimization variables... B(x)) subject to B(x) > 0. λ) > 0 where A is affine in (x. Copyright ° .3 Generalized eigenvalue problems The generalized eigenvalue problem (GEVP) is to minimize the maximum generalized eigenvalue of a pair of matrices that depend affinely on a variable. i.e. B(x) > 0 where A and B are symmetric matrices that depend affinely on the optimization variable x.e. minimize λ subject to λI − A(x) > 0. In the special case when the matrices Fi are all diagonal.9) subject to F (x) > 0 with F an affine function of x. We leave it to the reader to verify that these forms are equivalent. This is a convex optimization problem. We can express this as minimize λmax (A(x). EVPs can appear in the equivalent form of minimizing a linear function subject to an LMI. consider the problem (which appears in §6. subject to an LMI constraint. i. P >0 2.10 Chapter 2 Some Standard Problems Involving LMIs 2. P >0 BT P γI where the matrices A ∈ Rn×n . C(x) > 0 where A. λ). minimize cT x (2. C(x) > 0 c 1994 by the Society for Industrial and Applied Mathematics.2): minimize γ " # −AT P − P A − C T C PB subject to > 0. any can be transformed into any other. The general form of a GEVP is: minimize λ subject to λB(x) − A(x) > 0. B ∈ Rn×p .2.2 Eigenvalue problems The eigenvalue problem (EVP) is to minimize the maximum eigenvalue of a matrix that depends affinely on a variable. B and C are symmetric matrices that are affine functions of x. As an example of an EVP.

Y ) denotes the largest generalized eigenvalue of the pencil λY − X with Y > 0.4 A convex problem Although we will be concerned mostly with LMIPs. minimiz- ing a linear-fractional function subject to a set of linear inequalities. consider the problem maximize α subject to −AT P − P A − 2αP > 0. viT P vi ≤ 1. B(θx + (1 − θ)x̃)) ≤ max {λmax (A(x). the largest eigenvalue of the matrix Y −1/2 XY −1/2 . .) Remark: Problem CP can be transformed into an EVP. and satisfies the monotonicity condition λ > µ =⇒ A(x.3]. In addition. (This problem arises in §5. µ). see the Notes and References.e.) 2. We will encounter several variations of this problem.11) subject to P > 0. and the optimization variables are the symmetric matrix P and the scalar α. we will also encounter the following convex problem. log det A−1 is a convex function of A. EVPs. x̃ and 0 ≤ θ ≤ 1. This means that for feasible x. λ) is affine in x for fixed λ and affine in λ for fixed x. B(x)).4. i = 1. λmax (A(θx + (1 − θ)x̃). λ) > 0 where A(x. and GEVPs. i. B(x̃))} . (Note that when A > 0. B(x) > 0 where A and B are symmetric matrices that depend affinely on x. As an example of CP. see [NN94. This electronic version is for personal use and may not be duplicated or distributed. this problem reduces to the general linear-fractional programming problem. As an example of a GEVP. consider the problem: minimize log det P −1 (2.1.. λmax (A(x). λ) ≥ A(x. is quasiconvex. Let E denote the ellipsoid centered at the origin determined by P . Note that when the matrices are all diagonal and A(x) and B(x) are scalar.e. . . many nonlinear quasiconvex functions can be represented in the form of a GEVP with appropriate A.2. we leave these problems in the more natural form. L here vi ∈ Rn are given and P = P T ∈ Rn×n is the variable. B. which we will abbreviate CP: minimize log det A(x)−1 (2. This GEVP is a quasiconvex optimization problem since the constraint is convex and the objective.2 Some Standard Problems 11 where λmax (X. and C..2. As we do with variables which are matrices. B(x)). λmax (A(x̃). . . §6. An equivalent alternate form for a GEVP is minimize λ subject to A(x.10) subject to A(x) > 0. which has the following interpretation. i. since det A(x) > λ can be represented as an LMI in x and λ. P >0 where the matrix A is given.

12 Chapter 2 Some Standard Problems Involving LMIs ∆ © ¯ E = z ¯ z T P z ≤ 1 .3 Ellipsoid Algorithm We first describe a simple ellipsoid algorithm that. ¯ c 1994 by the Society for Industrial and Applied Mathematics.) The basic idea of the algorithm is as follows. We describe it here because it is very simple and. Copyright ° . . .. . and CPs) are tractable. .2. compute a feasible point with an objective value that exceeds the global minimum by less than some prespecified accuracy. the interior-point algorithms described in the next section are much more efficient. we will assume that the problem we are solving has at least one optimal point. . centered at the ori- gin. The minimum volume ellipsoid that contains the half-ellipsoid z ¯ (z − a)T A−1 (z − a) ≤ 1.) We then know that the sliced half-ellipsoid n ¯ o E (0) ∩ z ¯ g (0)T (z − x(0) ) ≤ 0 ¯ contains an optimal point. however. (We will explain how to do this for each of our problems later. vL }. from both theoretical and practical viewpoints: • They can be solved in polynomial-time (indeed with a variety of interpretations for the term “polynomial-time”). We now describe the algorithm more explicitly. where Co denotes the convex hull. In practice. that contains the points v1 .11). minimizing log det P −1 is the same as minimizing the volume of E. efficient (polynomial-time). 2. The constraints are simply vi ∈ E.5 Solving these problems The standard problems (LMIPs. . we consider any feasible point as being optimal. (0) This means©that ¯ we find a nonzeroªvector g such that an© optimal ¯ point lies in the half-space z ¯ g (0)T (z − x(0) ) ≤ 0 (or in the half-space z ¯ g (0)T (z − x(0) ) < 0 . E (1) is then guaranteed to contain an optimal point. We start with an ellipsoid E (0) that is guaranteed to contain an optimal point. vL . GEVPs. (In the feasibility problem. An ellipsoid E can be described as E = z ¯ (z − a)T A−1 (z − a) ≤ 1 © ¯ ª where A = AT > 0. or equivalently. the polytope Co{v1 . the constraints are feasible. By “solve the problem” we mean: Determine whether or not the problem is fea- sible. is guaranteed to solve the standard problems. 2. ª depending on the situation). from a theoretical point of view.e. The process is then repeated. Since the volume of E is pro- ª portional to (det P )−1/2 . EVPs. and if it is. g T (z − a) ≤ 0 © ¯ ª is given by n ¯ o Ẽ = z ¯ (z − ã)T Ã−1 (z − ã) ≤ 1 . . • They can be solved in practice very efficiently. . So by solving (2. we find the minimum volume ellipsoid. We then compute a cutting plane for our problem that passes through the center x (0) of E (0) . Now we compute the ellipsoid E (1) of minimum volume that contains this sliced half-ellipsoid. Although more sophisticated versions of the ellipsoid algorithm can detect in- feasible constraints. i. roughly speaking.

) We now show how to compute cutting planes for each of our standard problems. there exists a nonzero u such that m à ! X T T u F (x)u = u F0 + xi Fi u ≤ 0. and this fact can be used to prove polynomial-time convergence of the algorithm. in this case. i. In the case of one variable. . i = 1. (We refer the reader to the papers cited in the Notes and References for precise statements of what we mean by “polynomial-time” and indeed. . i=1 If x does not satisfy this LMI. LMIPs: Consider the LMI m X F (x) = F0 + xi Fi > 0. compute a g (k) that defines a cutting plane at x(k) ¢−1/2 (k) g̃ := g (k)T A(k) g (k) ¡ g 1 x(k+1) := x(k) − m+1 A (k) g̃ ³ ´ m2 2 A(k+1) := m2 −1 A(k) − m+1 A (k) g̃g̃ T A(k) This recursion generates a sequence of ellipsoids that are guaranteed to contain an optimal point. Then for any z satisfying g T (z − x) ≥ 0 we have m à ! X T T u F (z)u = u F0 + zi Fi u = uT F (x)u − g T (z − x) ≤ 0. . . Ã = A− Ag̃g̃ T A . . . The algorithm then proceeds as follows: for k = 1. m. “convergence.3 Ellipsoid Algorithm 13 where m2 µ ¶ Ag̃ 2 ã = a − . . i=1 It follows that every feasible point lies in the half-space {z | g T (z − x) < 0}.2.) The ellipsoid algorithm is initialized with x(0) and A(0) such that the corresponding ellipsoid contains an optimal point. . m+1 m2 − 1 m+1 p and g̃ = g/ g T Ag. reduces to the familiar bisection algorithm. this g defines a cutting plane for the LMIP at the point x.” as well as proofs. the ellipsoid algorithm. We have k vol(E (k) ) ≤ e− 2m vol(E (0) ). It turns out that the volume of these ellipsoids decreases geometrically.e. EVPs: Consider the EVP minimize cT x subject to F (x) > 0 This electronic version is for personal use and may not be duplicated or distributed. (We note that these formulas hold only for m ≥ 2.. i=1 Define g by gi = −uT Fi u. the minimum length interval containing a half-interval is the half-interval itself. 2.

if g T (z − x) ≥ 0 we find that λmax (A(z). . . 2. and hence cannot be optimal. B(x)) which establishes our claim. To see this. B(x))Bi − Ai )u. In this case.e. Suppose now that x is feasible... i. When the given point x is infeasible we already know how to generate a cutting plane. B(x))B(z) − A(z)) u = −g T (z − x). In this case. B(x) = B0 + i=1 xi Bi and C(x) = C0 + i=1 xi Ci . i. So we assume that x is feasible. CPs: We now consider our standard CP (2. the vector g = c defines a cutting plane for the EVP at the point x. Pick any u 6= 0 such that (λmax (A(x). B(x)) subject to B(x) > 0. The references describe more sophisticated interior-point methods for the other problems (including the problem of computing a feasible starting point for an EVP). Now suppose we are given a point x. and hence z cannot be optimal. given a feasible starting point. a cutting plane is given by the gradient of the objective m à ! X − log det A(x) = − log det A0 + xi Ai i=1 −1 at x.e. . F (x) > 0. In this section we describe a simple interior-point method for solving an EVP. Hence. i. c 1994 by the Society for Industrial and Applied Mathematics. A(x) = A0 + i=1 xi Ai . uT (λmax (A(x).. we are discarding the half-space {z | g T (z − x) ≥ 0} because all such points are infeasible. Then we can con- struct a cutting plane for this problem using the method described above for LMIPs. In particular. m. Since the objective function is convex we have for all z. C(x) > 0 Pm Pm Pm Here. Copyright ° . We claim g defines a cutting plane for this GEVP at the point x. interior-point methods have been developed for the standard problems. . GEVPs: We consider the formulation minimize λmax (A(x). If the constraints are violated.14 Chapter 2 Some Standard Problems Involving LMIs Suppose first that the given point x is infeasible. g T (z − x) > 0 implies log det A(z)−1 > log det A(x)−1 . log det A(z)−1 ≥ log det A(x)−1 + g T (z − x). Now assume that the given point x is feasible.e.10). gi = − Tr Ai A(x) . B(x))B(x) − A(x)) u = 0. Here. F (x) 6> 0. we use the method described for LMIPs to generate a cutting plane at x. B(z)) ≥ λmax (A(x).4 Interior-Point Methods Since 1988. Define g by gi = −uT (λmax (A(x). we note that for any z. we are discarding the half-space {z | g T (z − x) > 0} because all such points (whether feasible or not) have an objective value larger than x. In this case. i = 1.

respectively. We now turn to the problem of computing the analytic center of an LMI. the function φ. . .e. It can be shown that φ is strictly convex on the feasible set. (2.16) 1/(1 + δ(x )) otherwise. Remark: The analytic center depends on the LMI (i.4 Interior-Point Methods 15 2.2. Equivalently. Their damping factor is: ( (k) 1 if δ(x(k) ) ≤ 1/4. and g(x(k) ) and H(x(k) ) denote the gradient and Hessian of φ. α := (k) (2. however. . This electronic version is for personal use and may not be duplicated or distributed. which we denote x? : ∆ x? = arg min φ(x). . so it has a unique minimizer. The function ( ∆ log det F (x)−1 F (x) > 0 φ(x) = (2.13) x We refer to x? as the analytic center of the LMI F (x) > 0. and becomes infinite as x approaches the boundary of the feasible set {x|F (x) > 0}. and is important in its own right. . §2.2]. Nesterov and Nemirovskii give a simple step length rule appropriate for a general class of barrier functions (called self-concordant).12) ∞ otherwise. and in particular.1 Analytic center of an LMI The notion of the analytic center of an LMI plays an important role in interior-point methods. i=1 where Fi = FiT ∈ Rn×n .15) where α(k) is the damping factor of the kth iteration.14) F (x) > 0 that is. is finite if and only if F (x) > 0.) We suppose now that the feasible set is nonempty and bounded. at x(k) . the data F0 . with appropriate step length selection. invariant under congruence of the LMI: F (x) > 0 and T T F (x)T > 0 have the same analytic center provided T is nonsingular.4. can be used to efficiently compute x? . We consider the algorithm: x(k+1) := x(k) − α(k) H(x(k) )−1 g(x(k) ). The analytic center is. (2. (This is a special form of our problem CP. Fm ) and not its feasible set: We can have two LMIs with different analytic centers but identical feasible sets. . . i.. (2. which implies that the matrices F1 . Fm are linearly independent (otherwise the feasible set will contain a line). starting from a feasible initial point.) Newton’s method.e. . F (x? ) has maximum determinant among all positive-definite matrices of the form F (x). In [NN94. . it is a barrier function for the feasible set. (We have already encountered this function in our standard problem CP. Consider the LMI m X F (x) = F0 + xi Fi > 0.. x? = arg max det F (x).

..16 Chapter 2 Some Standard Problems Involving LMIs where q ∆ δ(x) = g(x)T H(x)−1 g(x) is called the Newton decrement of φ at x. We will also assume that the LMI (2..17) depends on how “centered” the initial point is. cT x < λ (2.2 The path of centers Now consider the standard EVP: minimize cT x subject to F (x) > 0 Let λopt denote its optimal value. x? (λ) is feasible and satisfies cT x? (λ) < λ. They prove that φ(x(k) ) − φ(x? ) ≤ ² whenever ³ ´ k ≥ c1 + c2 log log(1/²) + c3 φ(x(0) ) − φ(x? ) . The second term grows so slowly with required accuracy ² that for all practical purposes it can be lumped together with the first and considered an absolute constant.16).4.e. Copyright ° .e.e.19) c i = − Tr F (x? (λ))−1 Fi + = 0. . The point xopt is optimal (or more precisely.18) has a bounded feasible set. and c3 are three absolute constants. . Indeed. F (x(k+1) ) > 0. given a feasible starting point. . the limit of a minimizing sequence) since for λ > λopt . i. λ − cT x x The curve given by x? (λ) for λ > λopt is called the path of centers for the EVP. m. The algorithm is initialized with λ(0) and x(0) . c2 . and convergence of x(k) to x? . Fm . The first and second terms on the right-hand side do not depend on any problem data. The last term on the right-hand side of (2. the matrices F0 . The point x? (λ) is characterized by ¯ µ ¶ ∂ ¯¯ −1 1 log det F (x) + log ∂xi ¯x? (λ) λ − cT x (2.4.18) is feasible. . 2. i. i. and proceeds as follows: λ(k+1) := (1 − θ)cT x(k) + θλ(k) (k+1) x := x? (λ(k+1) ) c 1994 by the Society for Industrial and Applied Mathematics. . . which we denote xopt . (2.. so for each λ > λopt the LMI F (x) > 0. λ − cT x? (λ) 2. and therefore has an analytic center which we denote x? (λ): µ ¶ ∆ 1 x? (λ) = arg min log det F (x)−1 + log . Nesterov and Nemirovskii show that this step length always results in x(k+1) feasible. specific numbers. i = 1.17) where c1 .3 Method of centers The method of centers is a simple interior-point algorithm that solves an EVP. and the numbers m and n. . with F (x(0) ) > 0 and cT x(0) < λ(0) . they give sharp bounds on the number of steps required to compute x ? to a given accuracy using Newton’s method with the step length (2. It can be shown that it is analytic and has a limit as λ → λopt .

cT x < λ.. (k) We now give a simple proof of convergence. the optimal value has been found within ². we can view the method of centers as a way of solving an EVP by solving a sequence of CPs (which is done using Newton’s method). the method of centers is not the most efficient. but more sophisticated versions are.19) by (xi − xopt i ) and summing over i yields: ³ ´ 1 Tr F (x(k) )−1 F (x(k) ) − F (xopt ) = (k) cT (x(k) − xopt ). In other words we use µ ¶ ? ∆ −1 1 x (λ) = arg min log det F (x) + q log . Multiplying (2.4 Interior-Point Methods 17 where θ is a parameter with 0 < θ < 1. λ − cT x x Using q > 1.2. This electronic version is for personal use and may not be duplicated or distributed. λ − cT x(k) Since Tr F (x(k) )−1 F (xopt ) ≥ 0. see the Notes and References. say q ≈ n where F (x) ∈ Rn×n . Second. λ − cT x(k) Replacing cT x(k) by (λ(k+1) − θλ(k) )/(1 − θ) yields n + θ (k) (λ(k+1) − λopt ) ≤ (λ − λopt ). n+1 which proves that λ(k) approaches λopt with at least geometric convergence. the current iterate x(k) is feasible for the inequality cT x < λ(k+1) . First. a step length chosen to mini- mize φ along the Newton search direction (i. and therefore can be used as the initial point for computing the next iterate x? (λ(k+1) ) via Newton’s method. In this case. and refer the reader to the Notes and References for further elaboration. We make a few comments here. we note that two simple modifications of the method of centers as described above yield an algorithm that is fairly efficient in practice. this variation on the method of centers is not polynomial- time. which shows that the stopping criterion ³ ´ until λ(k) − cT x(k) < ²/n guarantees that on exit. we define it to be the analytic center of the LMI F (x) > 0 along with q copies of cT x < λ. x (k) does not (quite) satisfy the new inequality cT x < λ(k+1) . The modifications are: • Instead of the Nesterov–Nemirovskii step length. . where q > 1 is an integer. Note that we can also express the inequality above in the form cT x(k) − λopt ≤ n(λ(k) − cT x(k) ). Since computing an analytic center is itself a special type of CP.e. however. The most efficient algorithms developed so far appear to be primal-dual algorithms (and variations) and the projective method of Nemirovskii. The classic method of centers is obtained with θ = 0. we conclude that 1 n ≥ (k) (cT x(k) − λopt ). • Instead of defining x? (λ) to be the analytic center of the LMI F (x) > 0. an exact line-search) will yield faster convergence to the analytic center. With θ > 0. Among interior-point methods for the standard problems. yields much faster reduction of λ per iteration. F (x) > 0. and perhaps more important. however.

20) to a given accuracy in O(m 3. GEVP. . the ellipsoid method solves (2. the constant hidden in the O(·) notation is much larger for the ellipsoid algorithm). c 1994 by the Society for Industrial and Applied Mathematics. Copyright ° . with a mixture of strict and nonstrict LMIs. It turns out that this Newton direction can be expressed as the solution of a weighted least-squares problem of the same size as the original problem. . . . or more generally. . the computational effort required to solve the least- squares problems is much smaller than by standard “direct” methods such as QR or Cholesky factorization. Using conjugate-gradient and other related methods to solve these least-squares systems gives two advantages. We consider the EVP minimize Tr EP (2. For example. An example will demonstrate the efficiencies that can be obtained using the tech- niques sketched above. i = 1. The idea is very roughly as follows. E ∈ n×n R .1 and β ≈ 1. . In comparison.5 L) arithmetic operations (moreover. compared to the cost of solving L independent Lyapunov inequalities is O(m0. In interior-point methods most of the computational effort is devoted to computing the Newton direc- tion of a barrier or similar function. The problem is an EVP that we will encounter in §6.4 Interior-point methods and problem structure An important feature of interior-point methods is that problem structure can be ex- ploited to increase efficiency.18 Chapter 2 Some Standard Problems Involving LMIs 2. we find that F (x) ∈ RN ×N with N = Ln. DL .2.2. This example illustrates one of our points: The computational cost of solving one of the standard problems (which has no “analytic solution”) can be comparable to the computational cost of evaluating the solution of a similar problem that has an “analytic solution”. the cost of solving L Lyapunov equations of the same size is O(m1. We will also encounter these problems with nonstrict LMIs. . the relative cost of solving L coupled Lyapunov inequalities. . so the dimension of the optimization variable is m = n(n + 1)/2. . First. We are given matrices A1 . They prove the worst-case esti- mate of O(m2. symmetric matrices D1 . it is possible to terminate the conjugate-gradient iter- ations before convergence. Vandenberghe and Boyd have developed a (primal-dual) interior-point method that solves (2. AL ∈ Rn×n . This LMI has much structure: It is block-diagonal with each block a Lyapunov inequality. Second.75 L1. .1. Therefore.2 ). exploiting the problem structure.4. 2. with α ≈ 2.6 L0.5 ) arithmetic operations to solve the problem to a given accuracy. and CP involve strict LMIs. . .20) subject to ATi P + P Ai + Di < 0.20). EVP.5 Strict and Nonstrict LMIs We have so far assumed that the optimization problems LMIP. L In this problem the variable is the matrix P . by exploiting problem structure in the conjugate-gradient iterations. Numerical experiments on families of problems with randomly generated data and families of problems arising in system and control theory show that the actual performance of the interior-point method is much better than the worst-case estimate: O(mα Lβ ) arithmetic operations. This complexity estimate is remarkably small.5 L). and still obtain an approximation of the Newton direction suitable for interior-point methods. When the Lyapunov inequalities are combined into one large LMI F (x) > 0. See the Notes and References for more discussion.

9). . −x) with x ∈ R. the nonstrict LMI F (x) ≥ 0 is equivalent (in the sense of defining equal feasible sets) to the “reduced” LMI F̃ (x) = x ≥ 0. and symmetric matrices F̃0 . is always 0. say. and CP in order to solve them. 0) with x ∈ R and c = −1.e. we can replace nonstrict inequality with strict inequality in the problems EVP. . It follows that inf cT x | F (x) ≥ 0 = inf cT x | F (x) > 0 . .1. This is true for the problems GEVP and CP as well. .21) need not hold..22) is +∞ since the strict LMI F (x) > 0 is infeasible. i. The right-hand side of (2.5 Strict and Nonstrict LMIs 19 As an example consider the nonstrict version of the EVP (2. . the feasible set of the nonstrict LMI is the closure of the feasible set of the strict LMI.2. This is correct in most but not all cases. When an LMI is feasible but not strictly feasible. The problem here is that F (x) is always singular. the requirement of strict feasibility is a (very strong) constraint qualification. . i.. consider F (x) = diag(x. Once again. . Then there is a matrix A ∈ Rm×p with p ≤ m. Fm ∈ Rn×n be symmetric. and the EVPs with the strict and nonstrict LMIs can be very different.1 Reduction to a strictly feasible LMI It turns out that any feasible nonstrict LMI can be reduced to an equivalent LMI that is strictly feasible. . then we have {x ∈ Rm | F (x) ≥ 0 } = {x ∈ Rm | F (x) > 0 }. a vector b ∈ Rm .e. the strict LMI is infeasible so the right-hand side of (2.e. We have just seen that when an LMI is strictly feasible. . © ª © ª (2. If the strict LMI F (x) > 0 is feasible. We will say that the LMI F (x) ≥ 0 is strictly feasible if its strict version is feasible. however. As a simple example. Of course. i. we can solve the strict EVP to obtain a solution of the nonstrict EVP. since the LMI F (x) ≥ 0 has the single feasible point x = 0. In the language of optimization theory. This example shows one of the two pathologies that can occur: The nonstrict inequality contains an implicit equality constraint (in contrast with an explicit equality constraint as in §2. The left-hand side. The precise statement is: Let F0 .2). The other pathology is demonstrated by the example F (x) = diag(x.22) is +∞. F̃p ∈ Rq×q with q ≤ n such that: x = Az + b for some z ∈ Rp with p F (x) ≥ 0 ⇐⇒ ∆ X F̃ (z) = F̃0 + zi F̃i ≥ 0 i=1 This electronic version is for personal use and may not be duplicated or distributed. Note that this reduced LMI satisfies the constraint qualification. is strictly feasible. GEVP. The feasible set for the nonstrict LMI is the interval [0. minimize cT x subject to F (x) ≥ 0 Intuition suggests that we could simply solve the strict EVP (by. ∞) so the right-hand side is −∞..5.22) So in this case. an interior-point method) to obtain the solution of the nonstrict EVP. (2. 2.21) i. if there is some x0 ∈ Rm such that F (x0 ) > 0. by eliminating implicit equality constraints and then reducing the resulting LMI by removing any constant nullspace. (2.e.

where the LMI F~(z) >= 0 is either infeasible or strictly feasible. See the Notes and References for a proof. Roughly speaking, the matrix A and vector b describe the implicit equality constraints for the LMI F(x) >= 0, and the LMI F~(z) >= 0 can be interpreted as the original LMI with its constant nullspace removed (see the Notes and References).

Using this reduction we can, at least in principle, always deal with strictly feasible LMIs. For example we have

    inf { c^T x | F(x) >= 0 } = inf { c^T (Az + b) | F~(z) >= 0 } = inf { c^T (Az + b) | F~(z) > 0 }

since the LMI F~(z) >= 0 is either infeasible or strictly feasible. In most of the problems encountered in this book there are no implicit equality constraints or nontrivial common nullspace for F, so we can just take A = I, b = 0, and F~ = F.

2.5.2 Example: Lyapunov inequality

To illustrate the previous ideas we consider the simplest LMI arising in control theory, the Lyapunov inequality:

    A^T P + P A <= 0,   P > 0,     (2.23)

where A in R^{k x k} is given, and the symmetric matrix P is the variable. Note that this LMI contains a strict inequality as well as a nonstrict inequality. We know from system theory that the LMI (2.23) is feasible if and only if all trajectories of xdot = Ax are bounded, or equivalently, the eigenvalues of A have nonpositive real part, and those with zero real part are nondefective, i.e., correspond to Jordan blocks of size one. Similarly, the LMI (2.23) is strictly feasible if and only if all trajectories of xdot = Ax converge to zero, or equivalently, all the eigenvalues of A have negative real part.

Consider the interesting case where the LMI (2.23) is feasible but not strictly feasible, i.e., the eigenvalues of A have nonpositive real part and those with zero real part are nondefective. From the remarks above, we see that by a change of coordinates we can put A in the form

    A~ = T^{-1} A T = diag( [ 0              omega_1 I_{k_1} ]     [ 0              omega_r I_{k_r} ]
                            [ -omega_1 I_{k_1}       0       ], ..., [ -omega_r I_{k_r}       0       ],  0_{k_{r+1}},  A_stab ),

where 0 < omega_1 < ... < omega_r, 0_{k_{r+1}} denotes the zero matrix in R^{k_{r+1} x k_{r+1}}, and all the eigenvalues of the matrix A_stab in R^{s x s} have negative real part. Roughly speaking, we have separated out the part corresponding to each imaginary-axis eigenvalue, the part associated with eigenvalue zero, and the stable part of A.
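Before continuing with the reduction, here is a small numerical sketch (NumPy only) of the feasibility conditions for (2.23) just stated: strict feasibility corresponds to all eigenvalues of A having negative real part, and (nonstrict) feasibility additionally allows nondefective imaginary-axis eigenvalues. The tolerances and the example matrix are illustrative choices, not part of the text.

import numpy as np

def lyapunov_lmi_feasibility(A, tol=1e-9):
    # classify the Lyapunov LMI (2.23) for a given square matrix A
    eigvals = np.linalg.eig(A)[0]
    n = A.shape[0]
    if np.all(eigvals.real < -tol):
        return "strictly feasible"
    if np.any(eigvals.real > tol):
        return "infeasible"
    # eigenvalues numerically on the imaginary axis must be nondefective
    for lam in eigvals[np.abs(eigvals.real) <= tol]:
        alg_mult = int(np.sum(np.abs(eigvals - lam) <= 1e-6))
        geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=1e-6)
        if geo_mult < alg_mult:          # defective imaginary-axis eigenvalue
            return "infeasible"
    return "feasible but not strictly feasible"

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # harmonic oscillator: eigenvalues +/- i
print(lyapunov_lmi_feasibility(A))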

Using standard methods of Lyapunov theory it can be shown that

    { P | A^T P + P A <= 0, P > 0 }
      = { T^{-T} P~ T^{-1} |  P~ = diag( [ P_1   Q_1 ]        [ P_r   Q_r ]
                                         [ Q_1^T P_1 ], ...,  [ Q_r^T P_r ],  P_{r+1},  P_stab ),
          P_i in R^{k_i x k_i},  [ P_i   Q_i ; Q_i^T  P_i ] > 0,  i = 1, ..., r,
          P_{r+1} > 0,  P_stab > 0,  A_stab^T P_stab + P_stab A_stab <= 0 }.

From this characterization we can find a reduced LMI that is strictly feasible. We can take the symmetric matrices P_1, ..., P_{r+1}, P_stab and the skew-symmetric matrices Q_1, ..., Q_r (Q_i^T = -Q_i) as the "free" variable z; the affine mapping from z into x simply maps these matrices into

    P = T^{-T} diag( [ P_1   Q_1 ]        [ P_r   Q_r ]
                     [ Q_1^T P_1 ], ...,  [ Q_r^T P_r ],  P_{r+1},  P_stab ) T^{-1}.     (2.24)

Put another way, the equality constraints implicit in the LMI (2.23) are that P must have this special structure. Now we substitute P in the form (2.24) back into the original LMI (2.23). We find that

    A^T P + P A = T^{-T} diag( 0,  A_stab^T P_stab + P_stab A_stab ) T^{-1},

where the zero matrix has size 2k_1 + ... + 2k_r + k_{r+1}. We remove the constant nullspace to obtain the reduced version. Thus, the reduced LMI corresponding to (2.23) is

    [ P_i   Q_i ]
    [ Q_i^T P_i ] > 0,  i = 1, ..., r,   P_{r+1} > 0,   P_stab > 0,   A_stab^T P_stab + P_stab A_stab <= 0.     (2.25)

This reduced LMI is strictly feasible, since we can take P_1, ..., P_{r+1} as identity matrices, Q_1, ..., Q_r as zero matrices, and P_stab as the solution of the Lyapunov equation A_stab^T P_stab + P_stab A_stab + I = 0.

In summary, the original LMI (2.23) has one symmetric matrix of size k as variable (i.e., the dimension of the original variable x is m = k(k+1)/2) and involves a matrix inequality of size n = 2k (i.e., two inequalities of size k). The reduced LMI (2.25) has as variables the symmetric matrices P_1, ..., P_{r+1}, P_stab and the skew-symmetric matrices Q_1, ..., Q_r, so the dimension of the variable z in the reduced LMI is

    p = sum_{i=1}^r k_i^2 + k_{r+1}(k_{r+1}+1)/2 + s(s+1)/2 < m,

and the reduced LMI (2.25) involves a matrix inequality of size k + s < n.

2.6 Miscellaneous Results on Matrix Inequalities

2.6.1 Elimination of semidefinite terms

In the LMIP

    find x such that F(x) > 0,     (2.26)

we can eliminate any variables for which the corresponding coefficient in F is semidefinite. Suppose for example that F_m >= 0 and has rank r < n. Roughly speaking, by taking x_m extremely large, we can ensure that F is positive-definite on the range of F_m, so the LMIP reduces to making F positive-definite on the nullspace of F_m.

More precisely, let F_m = U U^T with U full-rank, and let U~ be an orthogonal complement of U, i.e., U^T U~ = 0 and [U U~] is of maximum rank (which in this case just means that [U U~] is nonsingular). Since

    [U~ U]^T F(x) [U~ U] = [ U~^T F~(x~) U~            U~^T F~(x~) U                ]
                           [ U^T F~(x~) U~             U^T F~(x~) U + x_m (U^T U)^2 ],

where x~ = [x_1 ... x_{m-1}]^T and F~(x~) = F_0 + x_1 F_1 + ... + x_{m-1} F_{m-1}, we have F(x) > 0 if and only if [U~ U]^T F(x) [U~ U] > 0. We see that F(x) > 0 if and only if U~^T F~(x~) U~ > 0 and x_m is large enough that

    U^T F~(x~) U + x_m (U^T U)^2 > U^T F~(x~) U~ ( U~^T F~(x~) U~ )^{-1} U~^T F~(x~) U.

Therefore, the LMIP (2.26) is equivalent to

    find x_1, ..., x_{m-1} such that U~^T F~(x_1, ..., x_{m-1}) U~ > 0.

2.6.2 Elimination of matrix variables

When a matrix inequality has some variables that appear in a certain form, we can derive an equivalent inequality without those variables. Consider

    G(z) + U(z) X V(z)^T + V(z) X^T U(z)^T > 0,     (2.27)

where the vector z and the matrix X are (independent) variables, and G(z) in R^{n x n}, U(z) and V(z) do not depend on X. Matrix inequalities of the form (2.27) arise in the controller synthesis problems described in Chapter 7. Suppose that for every z, U~(z) and V~(z) are orthogonal complements of U(z) and V(z) respectively. Then (2.27) holds for some X and z = z_0 if and only if the inequalities

    U~(z)^T G(z) U~(z) > 0,   V~(z)^T G(z) V~(z) > 0     (2.28)

hold with z = z_0. We prove this lemma in the Notes and References. Therefore, feasibility of the matrix inequality (2.27) with variables X and z is equivalent to feasibility of (2.28) with variable z. In other words, we have eliminated the matrix variable X from (2.27) to form (2.28). Note that if U(z) or V(z) has rank n for all z, then the first or second inequality in (2.28) disappears.

We can express (2.28) in another form using Finsler's lemma (see the Notes and References): (2.28) holds if and only if

    G(z) - sigma U(z) U(z)^T > 0,   G(z) - sigma V(z) V(z)^T > 0

for some sigma in R.

As an example, we will encounter in §7.1 the LMIP with LMI

    Q > 0,   A Q + Q A^T + B Y + Y^T B^T < 0,     (2.29)

where Q and Y are the variables. This LMIP is equivalent to the LMIP with LMI

    Q > 0,   A Q + Q A^T < sigma B B^T,

where the variables are Q and sigma in R. It is also equivalent to the LMIP

    Q > 0,   B~^T ( A Q + Q A^T ) B~ < 0,

with variable Q, where B~ is any matrix of maximum rank such that B~^T B = 0. Thus we have eliminated the variable Y from (2.29) and reduced the size of the matrices in the LMI.

2.6.3 The S-procedure

We will often encounter the constraint that some quadratic function (or quadratic form) be negative whenever some other quadratic functions (or quadratic forms) are all negative. In some cases, this constraint can be expressed as an LMI in the data defining the quadratic functions or forms; in other cases, we can form an LMI that is a conservative but often useful approximation of the constraint.

The S-procedure for quadratic functions and nonstrict inequalities

Let F_0, ..., F_p be quadratic functions of the variable zeta in R^n:

    F_i(zeta) = zeta^T T_i zeta + 2 u_i^T zeta + v_i,   i = 0, ..., p,

where T_i = T_i^T. We consider the following condition on F_0, ..., F_p:

    F_0(zeta) >= 0 for all zeta such that F_i(zeta) >= 0, i = 1, ..., p.     (2.30)

Obviously, if there exist tau_1 >= 0, ..., tau_p >= 0 such that

    F_0(zeta) - sum_{i=1}^p tau_i F_i(zeta) >= 0 for all zeta,     (2.31)

then (2.30) holds. It is a nontrivial fact that when p = 1, the converse holds, provided that there is some zeta_0 such that F_1(zeta_0) > 0. Note that (2.31) can be written as

    [ T_0   u_0 ]            [ T_i   u_i ]
    [ u_0^T v_0 ] - sum_{i=1}^p tau_i [ u_i^T v_i ] >= 0.

Remark: If the functions F_i are affine, then (2.31) and (2.30) are equivalent; this is the Farkas lemma.
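Referring back to the elimination example (2.29) above, the following is a small sketch (assuming CVXPY and SciPy are installed; the data A, B are random and illustrative) that sets up both the original LMIP in (Q, Y) and the reduced LMIP in Q alone, with B~ computed as a basis of the nullspace of B^T. Strict inequalities are approximated with a small margin.

import numpy as np
import cvxpy as cp
from scipy.linalg import null_space

np.random.seed(1)
n, m = 4, 2
A = np.random.randn(n, n)
B = np.random.randn(n, m)
eps = 1e-6

# original LMIP (2.29) in the variables Q and Y
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
lmi = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T
prob1 = cp.Problem(cp.Minimize(0),
                   [Q >> eps * np.eye(n), lmi << -eps * np.eye(n)])
prob1.solve()

# reduced LMIP: Btilde spans the nullspace of B^T, so Btilde^T B = 0
Btilde = null_space(B.T)
Q2 = cp.Variable((n, n), symmetric=True)
red = Btilde.T @ (A @ Q2 + Q2 @ A.T) @ Btilde
prob2 = cp.Problem(cp.Minimize(0),
                   [Q2 >> eps * np.eye(n),
                    red << -eps * np.eye(Btilde.shape[1])])
prob2.solve()

print(prob1.status, prob2.status)   # both should report the same feasibility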

The S-procedure for quadratic forms and strict inequalities

We will use another variation of the S-procedure, which involves quadratic forms and strict inequalities. Let T_0, ..., T_p in R^{n x n} be symmetric matrices. We consider the following condition on T_0, ..., T_p:

    zeta^T T_0 zeta > 0 for all zeta != 0 such that zeta^T T_i zeta >= 0, i = 1, ..., p.     (2.32)

It is obvious that if

    there exist tau_1 >= 0, ..., tau_p >= 0 such that T_0 - sum_{i=1}^p tau_i T_i > 0,     (2.33)

then (2.32) holds. It is a nontrivial fact that when p = 1, the converse holds, provided that there is some zeta_0 such that zeta_0^T T_1 zeta_0 > 0.

Remark: The first version of the S-procedure deals with nonstrict inequalities and quadratic functions that may include constant and linear terms. The second version deals with strict inequalities and quadratic forms only, i.e., quadratic functions without constant or linear terms.

Remark: Suppose that T_0, u_0 and v_0 depend affinely on some parameter nu. Then (2.31) is an LMI in nu and tau_1, ..., tau_p. Therefore, checking whether (2.31) holds for some nu is an LMIP in the variables nu and tau_1, ..., tau_p; this problem has low complexity. Note that (2.30) is convex in nu. This does not, however, mean that the problem of checking whether there exists nu such that (2.30) holds has low complexity. See the Notes and References for further discussion.

S-procedure example

In Chapter 5 we will encounter the following constraint on the variable P:

    [ xi ]^T [ A^T P + P A   P B ] [ xi ]
    [ pi ]   [ B^T P          0  ] [ pi ]  < 0     (2.34)

for all xi != 0 and pi satisfying pi^T pi <= xi^T C^T C xi. Applying the second version of the S-procedure, (2.34) is equivalent to the existence of tau >= 0 such that

    [ A^T P + P A + tau C^T C   P B    ]
    [ B^T P                     -tau I ] < 0.

Thus the problem of finding P > 0 such that (2.34) holds can be expressed as an LMIP (in P and the scalar variable tau).

2.7 Some LMI Problems with Analytic Solutions

There are analytic solutions to several LMI problems of special form, often with important system and control theoretic interpretations. We briefly describe some of these results in this section. At the same time we introduce and define several important terms from system and control theory.
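For the S-procedure example (2.34) above, the following is a minimal CVXPY sketch (random illustrative data, assumed only for demonstration) of the resulting LMIP in P and tau; it simply reports whether a feasible pair exists for the chosen data.

import numpy as np
import cvxpy as cp

np.random.seed(2)
n, m = 3, 2
A = -np.eye(n) + 0.3 * np.random.randn(n, n)   # a (likely) stable matrix
B = np.random.randn(n, m)
C = 0.1 * np.random.randn(1, n)

P = cp.Variable((n, n), symmetric=True)
tau = cp.Variable(nonneg=True)
block = cp.bmat([[A.T @ P + P @ A + tau * (C.T @ C), P @ B],
                 [B.T @ P,                           -tau * np.eye(m)]])
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), block << -eps * np.eye(n + m)])
prob.solve()
print(prob.status)   # feasibility status for this particular data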

2.7.1 Lyapunov's inequality

We have already mentioned the LMIP associated with Lyapunov's inequality,

    A^T P + P A < 0,   P > 0,

where P is the variable and A in R^{n x n} is given. Lyapunov showed that this LMI is feasible if and only if the matrix A is stable, i.e., all trajectories of xdot = Ax converge to zero as t goes to infinity, or equivalently, all eigenvalues of A have negative real part.

To solve this LMIP, we pick any Q > 0 and solve the Lyapunov equation A^T P + P A = -Q, which is nothing but a set of n(n+1)/2 linear equations for the n(n+1)/2 scalar variables in P. This set of linear equations will be solvable and result in P > 0 if and only if the LMI is feasible. In fact this procedure not only finds a solution when the LMI is feasible; it parametrizes all solutions as Q varies over the positive-definite cone.

2.7.2 The positive-real lemma

Another important example is given by the positive-real (PR) lemma, which yields a "frequency-domain" interpretation for a certain LMIP, and under some additional assumptions, a numerical solution procedure via Riccati equations as well. We give a simplified discussion here, and refer the reader to the References for more complete statements. The LMI considered is:

    [ A^T P + P A    P B - C^T   ]
    [ B^T P - C      -D^T - D    ] <= 0,   P > 0,     (2.35)

where A in R^{n x n}, B in R^{n x p}, C in R^{p x n}, and D in R^{p x p} are given, and the matrix P = P^T in R^{n x n} is the variable. (We will encounter a variation on this LMI in §6.3.) We assume for simplicity that A is stable and the system (A, B, C) is minimal. Note that if D + D^T > 0, the LMI (2.35) is equivalent to the quadratic matrix inequality

    A^T P + P A + (P B - C^T)(D + D^T)^{-1}(P B - C^T)^T <= 0.     (2.36)

The link with system and control theory is given by the following result: the LMI (2.35) is feasible if and only if the linear system

    xdot = Ax + Bu,   y = Cx + Du     (2.37)

is passive, i.e.,

    integral_0^T u(t)^T y(t) dt >= 0

for all solutions of (2.37) with x(0) = 0.

Passivity can also be expressed in terms of the transfer matrix of the linear system (2.37), defined as

    H(s) = C(sI - A)^{-1} B + D

for s in C. Passivity is equivalent to the transfer matrix H being positive-real, which means that

    H(s) + H(s)* >= 0 for all Re s > 0.
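As a quick numerical plausibility check of the positive-real condition just stated, one can sample H(i omega) + H(i omega)* on a frequency grid; this is only a sampled check on the imaginary axis, not a proof, and the state-space data below are illustrative.

import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])

def H(s):
    return C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D

omegas = np.logspace(-3, 3, 400)
min_eig = min(np.linalg.eigvalsh(H(1j * w) + H(1j * w).conj().T).min()
              for w in omegas)
print("min eigenvalue of H(iw) + H(iw)* over the grid:", min_eig)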

When p = 1 this condition can be checked by various graphical means, e.g., plotting the curve given by the real and imaginary parts of H(i omega) for omega in R (called the Nyquist plot of the linear system). Thus we have a graphical, frequency-domain condition for feasibility of the LMI (2.35). This fact can be used to form a Routh-Hurwitz (Sturm) type test for passivity as a set of polynomial inequalities in the data A, B, C, and D. This approach was used in much of the work in the 1960s and 1970s described in §1.2.

With a few further technical assumptions, including D + D^T > 0, the LMI (2.35) can be solved by a method based on Riccati equations and Hamiltonian matrices. With these assumptions the LMI (2.35) is feasible if and only if there exists a real matrix P = P^T satisfying the ARE

    A^T P + P A + (P B - C^T)(D + D^T)^{-1}(P B - C^T)^T = 0,     (2.38)

which is just the quadratic matrix inequality (2.36) with equality substituted for inequality. To solve the ARE (2.38) we first form the associated Hamiltonian matrix

    M = [ A - B(D + D^T)^{-1} C          B(D + D^T)^{-1} B^T          ]
        [ -C^T (D + D^T)^{-1} C          -A^T + C^T (D + D^T)^{-1} B^T ].

Then the system (2.37) is passive, or equivalently, the LMI (2.35) is feasible, if and only if M has no pure imaginary eigenvalues. When M has no pure imaginary eigenvalues we can construct a solution P_are as follows. Pick V in R^{2n x n} so that its range is a basis for the stable eigenspace of M, e.g., V = [v_1 ... v_n] where v_1, ..., v_n are a set of independent eigenvectors of M associated with its n eigenvalues with negative real part. Partition V as

    V = [ V_1 ]
        [ V_2 ],

where V_1 and V_2 are square, and then set P_are = V_2 V_1^{-1}. Note that P_are > 0. The solution P_are thus obtained is the minimal element among the set of solutions of (2.38): if P = P^T satisfies (2.38), then P >= P_are. Much more discussion of this method, including the precise technical conditions, can be found in the References.

2.7.3 The bounded-real lemma

The same results appear in another important form, the bounded-real lemma. Here we consider the LMI

    [ A^T P + P A + C^T C    P B + C^T D ]
    [ B^T P + D^T C          D^T D - I   ] <= 0,   P > 0,     (2.39)

where A in R^{n x n}, B in R^{n x p}, C in R^{p x n}, and D in R^{p x p} are given, and the matrix P = P^T in R^{n x n} is the variable. For simplicity we assume that A is stable and (A, B, C) is minimal. This LMI is feasible if and only if the linear system (2.37) is nonexpansive, i.e.,

    integral_0^T y(t)^T y(t) dt <= integral_0^T u(t)^T u(t) dt

for all solutions of (2.37) with x(0) = 0. Nonexpansivity is equivalent to the transfer matrix H satisfying the bounded-real condition

    H(s)* H(s) <= I for all Re s > 0.

This is sometimes expressed as ||H||_inf <= 1, where

    ||H||_inf = sup { ||H(s)|| | Re s > 0 }

is called the H_inf norm of the transfer matrix H. This condition is easily checked graphically, e.g., by plotting ||H(i omega)|| versus omega in R (called a singular value plot for p > 1 and a Bode magnitude plot for p = 1).

Once again we can relate the LMI (2.39) to an ARE. With some appropriate technical conditions (see the Notes and References) including D^T D < I, the LMI (2.39) is feasible if and only if the ARE

    A^T P + P A + C^T C + (P B + C^T D)(I - D^T D)^{-1}(P B + C^T D)^T = 0     (2.40)

has a real solution P = P^T. Once again this is the quadratic matrix inequality associated with the LMI (2.39), with equality instead of the inequality. We can solve this equation by forming the Hamiltonian matrix

    M = [ A + B(I - D^T D)^{-1} D^T C         B(I - D^T D)^{-1} B^T             ]
        [ -C^T (I - D D^T)^{-1} C             -A^T - C^T D(I - D^T D)^{-1} B^T  ].

Then the system (2.37) is nonexpansive, or equivalently, the LMI (2.39) is feasible, if and only if M has no imaginary eigenvalues. In this case we can find the minimal solution to (2.40) as described in §2.7.2.

2.7.4 Others

Several other LMI problems have analytic solutions, e.g., the ones that arise in the synthesis of state-feedback for linear systems, or estimator gains for observers. As a simple example consider the LMI

    P > 0,   A P + P A^T < B B^T,

where A in R^{n x n} and B in R^{n x p} are given and P is the variable. This LMI is feasible if and only if the pair (A, B) is stabilizable, i.e., there exists a matrix K in R^{p x n} such that A + BK is stable. There are simple methods for determining whether this is the case, and when it is, of constructing an appropriate P. We will describe this and other results in Chapter 7.

Several standard results from linear algebra can also be interpreted as analytic solutions to certain LMI problems of special form. For example, it is well-known that among all symmetric positive-definite matrices with given diagonal elements, the diagonal matrix has maximum determinant. This gives an analytic solution to a special (and fairly trivial) CP (see §3.5).

Notes and References

LMIs

The term "Linear Matrix Inequality" was coined by J. C. Willems and is widely used now.

As we mentioned in §1.2, the term was used in several papers from the 1970s to refer to the specific LMI arising in the positive-real lemma. We think of the term "Linear Matrix Inequality" as analogous to the term "Linear Inequalities" used to describe a_i^T x <= b_i, i = 1, ..., p. A more accurate term might be "Affine Matrix Inequality", since the matrix is an affine and not linear function of the variable. The term is also consistent with the title of an early paper by Bellman and Ky Fan: On Systems of Linear Inequalities in Hermitian Matrix Variables [BF63] (see below).

When m = 1, an LMI is nothing but a matrix pencil. The theory of matrix pencils is very old and quite complete, and there is an extensive literature (see, e.g., Golub and Van Loan [GL89] and the references therein). For the case m = 1 all of our standard problems are readily solved, e.g., by simultaneous diagonalization of F_0 and F_1 by an appropriate congruence. The case m = 1 does not arise on its own in any interesting problem from system or control theory, but it does arise in the "line-search (sub)problem", i.e., when we restrict one of our standard problems to a specific line. In this context the variable represents the "step length" to be taken in an iteration of an algorithm, and it is useful to know that such problems can be solved extremely efficiently.

Schur complements for nonstrict inequalities

The Schur complement result of §2.1 can be generalized to nonstrict inequalities as follows. Suppose Q and R are symmetric. The condition

    [ Q    S ]
    [ S^T  R ] >= 0     (2.41)

is equivalent to

    R >= 0,   Q - S R^+ S^T >= 0,   S(I - R R^+) = 0,

where R^+ denotes the Moore-Penrose inverse of R. To see this, let U be an orthogonal matrix that diagonalizes R, so that

    U^T R U = [ Sigma  0 ]
              [ 0      0 ],

where Sigma > 0 and diagonal. Inequality (2.41) holds if and only if

    [ I  0 ]^T [ Q    S ] [ I  0 ]   [ Q      S_1    S_2 ]
    [ 0  U ]   [ S^T  R ] [ 0  U ] = [ S_1^T  Sigma  0   ] >= 0,
                                     [ S_2^T  0      0   ]

where [S_1 S_2] = S U, with appropriate partitioning. We must then have S_2 = 0, which holds if and only if S(I - R R^+) = 0, and

    [ Q      S_1   ]
    [ S_1^T  Sigma ] >= 0,

which holds if and only if Q - S R^+ S^T >= 0.

Formulating convex problems in terms of LMIs

The idea that LMIs can be used to represent a wide variety of convex constraints can be found in Nesterov and Nemirovskii's report [NN90b] (which is really a preliminary version

of their book [NN94]) and software manual [NN90a]. They formalize the idea of a “positive-
definite representable” function; see §5.3, §5.4, and §6.4 of their book [NN94]. The idea
is also discussed in the article [Ali92b] and thesis [Ali91] of Alizadeh. In [BE93], Boyd
and El Ghaoui give a list of quasiconvex functions that can be represented as generalized
eigenvalues of matrices that depend affinely on the variable.

Infeasibility criterion for LMIs

That the LMI F (x) > 0 is infeasible means that the affine set {F (x) | x ∈ Rm } does not intersect the
positive-definite cone. From convex analysis, this is equivalent to the existence of a linear
functional ψ that is positive on the positive-definite cone and nonpositive on the affine set
of matrices. The linear functionals that are positive on the positive-definite cone are of the
form ψ(F ) = Tr GF , where G ≥ 0 and G ≠ 0. From the fact that ψ is nonpositive on the
affine set {F (x)|x ∈ Rm }, we can conclude that Tr GFi = 0, i = 1, . . . , m and Tr GF0 ≤ 0.
These are precisely the conditions for infeasibility of an LMI that we mentioned in §2.2.1.
For the special case of multiple Lyapunov inequalities, these conditions are given by Bellman
and Ky Fan [BF63] and Kamenetskii and Pyatnitskii [KP87a, KP87b].
It is straightforward to derive optimality criteria for the other problems, using convex anal-
ysis. Some general references for convex analysis are the books [Roc70, Roc82] and survey
article [Roc93] by Rockafellar. The recent text [HUL93] gives a good overview of convex
analysis; LMIs are used as examples in several places.

Complexity of convex optimization

The important role of convexity in optimization is fairly widely known, but perhaps not
well enough appreciated, at least outside the former Soviet Union. In a standard (Western)
treatment of optimization, our standard problems LMIP, EVP, GEVP, and CP would be
considered very difficult since they are nondifferentiable and nonlinear. Their convexity
properties, however, make them tractable, both in theory and in practice.
In [Roc93, p194], Rockafellar makes the important point:

One distinguishing idea which dominates many issues in optimization theory is
convexity . . . An important reason is the fact that when a convex function is
minimized over a convex set every locally optimal solution is global. Also, first-
order necessary conditions turn out to be sufficient. A variety of other properties
conducive to computation and interpretation of solutions ride on convexity as
well. In fact the great watershed in optimization isn’t between linearity and
nonlinearity, but convexity and nonconvexity.

Detailed discussion of the (low) complexity of convex optimization problems can be found
in the books by Grötschel, Lovász, Schrijver [GLS88], Nemirovskii and Yudin [NY83], and
Vavasis [Vav91].
Complete and detailed worst-case complexity analyses of several algorithms for our standard
problems can be found in Chapters 3, 4, and 6 of Nesterov and Nemirovskii [NN94].

Ellipsoid algorithm

The ellipsoid algorithm was developed by Shor, Nemirovskii, and Yudin in the 1970s [Sho85,
NY83]. It was used by Khachiyan in 1979 to prove that linear programs can be solved in
polynomial-time [Kha79, GL81, GLS88]. Discussion of the ellipsoid method, as well as
several extensions and variations, can be found in [BGT81] and [BB91, ch14]. A detailed
history of its development, including English and Russian references, appears in chapter 3 of
Akgül [Akg84].


Optimization problems involving LMIs

One of the earliest papers on LMIs is also, in our opinion, one of the best: On systems of
linear inequalities in Hermitian matrix variables, by Bellman and Ky Fan [BF63]. In this
paper we find many results, e.g., a duality theory for the multiple Lyapunov inequality EVP
and a theorem of the alternative for the multiple Lyapunov inequality LMIP. The paper
concentrates on the associated mathematics: there is no discussion of how to solve such
problems, or any potential applications, although they do comment that such inequality
systems arise in Lyapunov theory. Given Bellman’s legendary knowledge of system and
control theory, and especially Lyapunov theory, it is tempting to conjecture that Bellman
was aware of the potential applications. But so far we have found no evidence for this.
Relevant work includes Cullum et al. [CDW75], Craven and Mond [CM81], Polak and
Wardi [PW82], Fletcher [Fle85], Shapiro [Sha85], Friedland et al. [FNO87], Goh and
Teo [GT88], Panier [Pan89], Allwright [All89], Overton and Womersley [Ove88, Ove92,
OW93, OW92], Ringertz [Rin91], Fan and Nekooie [FN92, Fan93] and Hiriart–Urruty and
Ye [HUY92].
LMIPs are solved using various algorithms for convex optimization in Boyd and Yang [BY89],
Pyatnitskii and Skorodinskii [PS83], Kamenetskii and Pyatnitskii [KP87a, KP87b].
A survey of methods for solving problems involving LMIs used by researchers in control theory
can be found in the paper by Beck [Bec91]. The software packages [BDG+ 91] and [CS92a]
use convex programming to solve many robust control problems. Boyd [Boy94] outlines how
interior-point convex optimization algorithms can be used to build robust control software
tools. Convex optimization problems involving LMIs have been used in control theory since
about 1985: Gilbert’s method [Gil66] was used by Doyle [Doy82] to solve a diagonal scaling
problem; another example is the musol program of Fan and Tits [FT86].
Among other methods used in control to solve problems involving multiple LMIs is the
“method of alternating convex projections” described in the article by Grigoriadis and Skel-
ton [GS92]. This method is essentially a relaxation method, and requires an analytic ex-
pression for the projection onto the feasible set of each LMI. It is not a polynomial-time
algorithm; its complexity depends on the problem data, unlike the ellipsoid method or the
interior-point methods described in [NN94], whose complexities depend on the problem size
only. However, the alternating projections method is reported to work well in many cases.

Interior-point methods for LMI problems

Interior-point methods for various LMI problems have recently been developed by several re-
searchers. The first were Nesterov and Nemirovskii [NN88, NN90b, NN90a, NN91a, NN94,
NN93]; others include Alizadeh [Ali92b, Ali91, Ali92a], Jarre [Jar93c], Vandenberghe and
Boyd [VB93b], Rendl, Vanderbei, and Wolkowicz [RVW93], and Yoshise [Yos94].
Of course, general interior-point methods (and the method of centers in particular) have a
long history. Early work includes the book by Fiacco and McCormick [FM68], the method
of centers described by Huard et al. [LH66, Hua67], and Dikin’s interior-point method
for linear programming [Dik67]. Interest in interior-point methods surged in 1984 when
Karmarkar [Kar84] gave his interior-point method for solving linear programs, which appears
to have very good practical performance as well as a good worst-case complexity bound. Since
the publication of Karmarkar’s paper, many researchers have studied interior-point methods
for linear and quadratic programming. These methods are often described in such a way
that extensions to more general (convex) constraints and objectives are not clear. However,
Nesterov and Nemirovskii have developed a theory of interior-point methods that applies to
more general convex programming problems, and in particular, every problem that arises
in this book; see [NN94]. In particular, they derive complexity bounds for many different
interior-point algorithms, including the method of centers, Nemirovskii’s projective algorithm
and primal-dual methods. The most efficient algorithms seem to be Nemirovskii’s projective
algorithm and primal-dual methods (for the case of linear programs, see [Meh91]).
Other recent articles that consider interior-point methods for more general convex program-
ming include Sonnevend [Son88], Jarre [Jar91, Jar93b], Kortanek et al. [KPY91], Den
Hertog, Roos, and Terlaky [DRT92], and the survey by Wright [Wri92].


Interior-point methods for GEVPs are described in Boyd and El Ghaoui [BE93] (variation
on the method of centers), and Nesterov and Nemirovskii [NN91b] and [NN94, §4.4] (a
variation on Nemirovskii’s projective algorithm). Since GEVPs are not convex problems,
devising a reliable stopping criterion is more challenging than for the convex problems LMIP,
EVP, and CP. A detailed complexity analysis (in particular, a statement and proof of the
polynomial-time complexity of GEVPs) is given in [NN91b, NN94]. See also [Jar93a].
Several researchers have recently studied the possibility of switching from an interior-point
method to a quadratically convergent local method in order to improve on the final conver-
gence; see [Ove92, OW93, OW92, FN92, NF92].
Interior-point methods for CPs and other related extremal ellipsoid volume problems can be
found in [NN94, §6.5].

Interior-point methods and problem structure

Many researchers have developed algorithms that take advantage of the special structure
of the least-squares problems arising in interior-point methods for linear programming. As
far as we know, the first (and so far, only) interior-point algorithm that takes advantage
of the special (Lyapunov) structure of an LMI problem arising in control is described in
Vandenberghe and Boyd [VB93b, VB93a]. Nemirovskii’s projective method can also take
advantage of such structure; these two algorithms appear to be the most efficient algorithms
developed so far for solving the LMIs that arise in control theory.

Software for solving LMI problems

Gahinet and Nemirovskii have recently developed a software package called LMI-Lab [GN93]
based on an earlier FORTRAN code [NN90a], which allows the user to describe an LMI
problem in a high-level symbolic form (not unlike the formulas that appear throughout this
book!). LMI-Lab then solves the problem using Nemirovskii’s projective algorithm, taking
advantage of some of the problem structure (e.g., block structure, diagonal structure of some
of the matrix variables).
Recently, El Ghaoui has developed another software package for solving LMI problems. This
noncommercial package, called LMI-tool, can be used with matlab. It is available via anony-
mous ftp (for more information, send mail to elghaoui@ensta.fr). Another version of LMI-
tool, developed by Nikoukhah and Delebecque, is available for use with the matlab-like
freeware package scilab; in this version, LMI-tool has been interfaced with Nemirovskii’s
projective code. scilab can be obtained via anonymous ftp (for more information, send mail
to Scilab@inria.fr).
A commercial software package that solves a few specialized control system analysis and
design problems via LMI formulation, called optin, was recently developed by Olas and
Associates; see [OS93, Ola94].

Reduction to a strictly feasible LMI

In this section we prove the following statement. Let F0 , . . . , Fm ∈ Rn×n be symmetric
matrices. Then there is a matrix A ∈ Rm×p with p ≤ m, a vector b ∈ Rm , and symmetric
matrices F̃0 , . . . , F̃p ∈ Rq×q with q ≤ n such that:

F (x) ≥ 0 if and only if x = Az + b for some z ∈ Rp and F̃ (z) = F̃0 + Σ_{i=1}^p zi F̃i ≥ 0.

In addition, if the LMI F (x) ≥ 0 is feasible, then the LMI F̃ (z) ≥ 0 is strictly feasible.


Consider the LMI
F (x) = F0 + Σ_{i=1}^m xi Fi ≥ 0,     (2.42)

where Fi ∈ Rn×n , i = 0, . . . , m. Let X denote the feasible set {x | F (x) ≥ 0}.
If X is empty, there is nothing to prove; we can take p = m, A = I, b = 0, F̃0 = F0 , . . . , F̃p =
Fp .
Henceforth we assume that X is nonempty. If X is a singleton, say X = {x0 }, then with
p = 1, A = 0, b = x0 , F̃0 = 1, and F̃1 = 0, the statement follows.
Now consider the case when X is neither empty nor a singleton. Then, there is an affine
subspace A of minimal dimension p ≥ 1 that contains X. Let a1 , . . . ap a basis for the linear
part of A. Then every x ∈ A can be written as x = Az + b where A = [a1 · · · ap ] P is full-rank
p
and b ∈ Rm . Defining G0 = F (b), Gi = F (ai )−F0 , i = 1, . . . , p, and G(z) = G0 + i=1 zi Gi ,
p
we see that F (x) ≥ 0 if and only if there exists z ∈ R satisfying x = Az + b and G(z) ≥ 0.

Let Z = {z ∈ Rp | G(z) ≥ 0}. By construction, Z has nonempty interior. Let z0 be a point in
the interior of Z and let v lie in the nullspace of G(z0 ). Now, v T G(z)v is a nonnegative affine
function of z ∈ Z and is zero at an interior point of Z. Therefore, it is identically zero over
Z, and hence over Rp . Therefore, v belongs to the intersection B of the nullspaces of the
Gi , i = 0, . . . , p. Conversely, any v belonging to B will satisfy v T G(z)v = 0 for any z ∈ Rp .
B may therefore be interpreted as the “constant” nullspace of the LMI (2.42). Let q be the
dimension of B.
If q = n (i.e., G0 = · · · = Gp = 0), then obviously F (x) ≥ 0 if and only if x = Az + b. In this
case, the statement is satisfied with F̃0 = I ∈ Rp×p , F̃i = 0, i = 1, . . . , p.
If q < n, let v1 , . . . , vq be a basis for B. Complete the basis v1 , . . . , vq by vq+1 , . . . , vn to
obtain a basis of Rn . Define U = [vq+1 · · · vn ], F̃i = U T Gi U , i = 0, . . . , p, and
F̃ (z) = F̃0 + Σ_{i=1}^p zi F̃i . Then the LMI G(z) ≥ 0 is equivalent to the LMI F̃ (z) ≥ 0, and by
construction, there exists z0 such that F̃ (z0 ) > 0. This concludes the proof.

Elimination procedure for matrix variables

We now prove the matrix elimination result stated in §2.6.2: Given G, U , V , there exists X
such that

G + U XV T + V X T U T > 0 (2.43)

if and only if

Ũ T GŨ > 0, Ṽ T GṼ > 0 (2.44)

holds, where Ũ and Ṽ are orthogonal complements of U and V respectively.
It is obvious that if (2.43) holds for some X, so do inequalities (2.44). Let us now prove the
converse.
Suppose that inequalities (2.44) are feasible. Suppose now that inequality (2.43) is not feasible
for any X. In other words, suppose that

G + U XV T + V X T U T ≯ 0,

for every X. By duality, this is equivalent to the condition

there exists Z ≠ 0 with Z ≥ 0, V T ZU = 0, and Tr GZ ≤ 0.     (2.45)

Now let us show that Z ≥ 0 and V T ZU = 0 imply that

    Z = Ṽ H H^T Ṽ^T + Ũ K K^T Ũ^T     (2.46)

for some matrices H, K. This will finish our proof, since (2.44) implies that Tr GZ > 0 for Z of the form (2.46), which contradicts Tr GZ ≤ 0 in (2.45).

Let R be a Cholesky factor of Z, i.e., Z = R R^T. The condition V^T Z U = 0 is equivalent to V^T R R^T U = (V^T R)(U^T R)^T = 0. This means that there exists a unitary matrix T and matrices M and N such that V^T R T = [0 M] and U^T R T = [N 0], where the number of columns of M and N add up to that of R. In other words, RT can be written as RT = [A B] for some matrices A, B such that V^T A = 0, U^T B = 0. From the definition of Ṽ and Ũ, the matrices A, B can be written A = Ṽ H, B = Ũ K for some matrices H, K (at least one of which is nonzero, since otherwise Z = 0), and we have

    Z = R R^T = (RT)(RT)^T = A A^T + B B^T = Ṽ H H^T Ṽ^T + Ũ K K^T Ũ^T,

which is the desired result.

The equivalence between the conditions Ũ(z)^T G(z) Ũ(z) > 0 and G(z) − σ U(z) U(z)^T > 0 for some real scalar σ is through Finsler's lemma [Fin37] (see also §3.6 of [Sch73]), which states that if x^T Q x > 0 for all nonzero x such that x^T A x = 0, where Q and A are symmetric, real matrices, then there exists a real scalar σ such that Q − σA > 0. It is closely related to the S-procedure. Finsler's lemma has also been directly used to eliminate variables in certain matrix inequalities (see for example [PH86, KR88, BPG89b]). The elimination lemma is related to a matrix dilation problem considered in [DKW82], which was used in control problems in [PZPB91, PZP+ 92, Pac94, Gah92, IS93a, Iwa93] (see also the Notes and References of Chapter 7). We also note another result on elimination of matrix variables, due to Parrott [Par78].

The S-procedure

The problem of determining if a quadratic form is nonnegative when other quadratic forms are nonnegative has been studied by mathematicians for at least seventy years. For a complete discussion and references, we refer the reader to the survey article by Uhlig [Uhl79]. See also the book by Hestenes [Hes81, p354-360] and Horn and Johnson [HJ91, p78-86] for proofs of various S-procedure results. The application of the S-procedure to control problems dates back to 1944, when Lur'e and Postnikov used it to prove the stability of some particular nonlinear systems. The name "S-procedure" was introduced much later by Aizerman and Gantmacher in their 1964

book [AG64]; since then Yakubovich has given more general formulations and corrected errors in some earlier proofs [Yak77, FY79]. The proof of the two S-procedure results described in this chapter can be found in the articles by Yakubovich [FY79, Yak73]. Recent and important developments about the S-procedure can be found in [Yak92] and references therein.

Complexity of S-procedure condition

Suppose that T0 , u0 , and v0 depend affinely on some variable ν. Then, for fixed ζ, the constraint F0 (ζ) ≥ 0 is a linear constraint on ν, and the condition (2.30) is simply an infinite number of these linear constraints, i.e., the condition (2.30) is convex in ν. We might therefore imagine that the problem of checking whether there exists ν such that (2.30) holds has low complexity, since it can be cast as a convex feasibility problem. This is wrong: merely verifying that the condition (2.30) holds for fixed data Ti , ui , and vi is as hard as solving a general indefinite quadratic program, which is NP-complete. We could determine whether there exists ν such that (2.30) holds with polynomially many steps of the ellipsoid algorithm; the problem is that finding a cutting plane (which is required for each step of the ellipsoid algorithm) is itself an NP-complete problem.

Positive-real and bounded-real lemma

In 1961, Popov gave the famous Popov frequency-domain stability criterion for the absolute stability problem [Pop62]. Popov's criterion could be checked via graphical means, by verifying that the Nyquist plot of the "linear part" of the nonlinear system was confined to a specific region in the complex plane. Yakubovich [Yak62, Yak64] and Kalman [Kal63a, Kal63b] established the connection between the Popov criterion and the existence of a positive-definite matrix satisfying certain matrix inequalities. In its original form, the PR lemma states that the transfer function c(sI − A)−1 b of the single-input single-output minimal system (A, b, c) is positive-real, i.e.,

    Re c(sI − A)−1 b ≥ 0 for all Re s > 0,     (2.47)

if and only if there exists P > 0 such that AT P + P A ≤ 0 and P b = cT . The condition (2.47) can be checked graphically. Anderson [And67, And66b, And73] extended the PR lemma to multi-input multi-output systems, and derived similar results for nonexpansive systems [And66a]. Willems [Wil71b, Wil74a] described connections between the PR lemma, certain quadratic optimal control problems and the existence of symmetric solutions to the ARE; it is in [Wil71b] that we find Willems' quote on the role of LMIs in quadratic optimal control (see page 3). The PR lemma is also known by various names such as the Yakubovich-Kalman-Popov-Anderson Lemma or the Kalman-Yakubovich Lemma. It is now standard material, described in several books on control and systems theory, e.g., Anderson and Vongpanitlerd [AV73, ch5-7], Narendra and Taylor [NT73], Vidyasagar [Vid92, pp474-478], Faurre and Depeyrot [FD77], Brockett [Bro70], and Willems [Wil70].

The connections between passivity, LMIs and AREs can be summarized as follows. We consider A, B, C, and D such that all eigenvalues of A have negative real part, (A, B) is controllable and D + D T > 0. The following statements are equivalent:

1. The system ẋ = Ax + Bu, y = Cx + Du, x(0) = 0 is passive, i.e., satisfies

       integral_0^T u(t)^T y(t) dt ≥ 0 for all u and T ≥ 0.

2. The transfer matrix H(s) = C(sI − A)−1 B + D is positive-real, i.e., H(s) + H(s)∗ ≥ 0 for all s with Re s ≥ 0.

3. The LMI

       [ AT P + P A    P B − C T    ]
       [ B T P − C     −(D + D T )  ] ≤ 0     (2.48)

   in the variable P = P T is feasible.

4. There exists P = P T satisfying the ARE

       AT P + P A + (P B − C T )(D + D T )−1 (P B − C T )T = 0.     (2.49)

5. The sizes of the Jordan blocks corresponding to the pure imaginary eigenvalues of the Hamiltonian matrix

       M = [ A − B(D + D T )−1 C          B(D + D T )−1 B T            ]
           [ −C T (D + D T )−1 C          −AT + C T (D + D T )−1 B T   ]

   are all even.

The equivalence of 1, 2, 3, and 4 can be found in Theorem 4 of [Wil71b] and also [Wil74a]. The equivalence of 5 is found in, for example, Theorem 1 of [LR80]. An excellent discussion of this result and its connections with spectral factorization theory can be found in [AV73]. Connections between the extremal points of the set of solutions of the LMI (2.48) and the solutions of the ARE (2.49) are explored in [Bad82].

The method for constructing Pare from the stable eigenspace of M, described in §2.7.2, is in [BBK89]; this method is an obvious variation of Laub's algorithm [Lau79] or Van Dooren's [Doo81]. Origins of this result can be traced back to Reid [Rei46] and Levin [Lev59]; it is explicitly stated in Potter [Pot66]. A less stable numerical method can be found in [Rei46, Lev59, Pot66, AM74]. It is also possible to check passivity (or nonexpansivity) of a system symbolically using Sturm methods, which yields Routh-Hurwitz like algorithms; see Siljak [Sil71, Sil73] and Boyd, Balakrishnan, and Kabamba [BBK89].

The PR lemma is used in many areas, e.g., interconnected systems [MH78] and stochastic processes and stochastic realization theory [Fau67, Fau73, AL84]. For other discussions of strict passivity, see the articles by Wen [Wen88] and Lozano-Leal and Joshi [LLJ90]; see also [Fra87, BBK89, Rob89].
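As a complement to the method for constructing Pare discussed above (and described in §2.7.2), the following is a NumPy-only numerical sketch under the assumptions D + D^T > 0 and M having no imaginary eigenvalues; the state-space data are illustrative, and an ordered Schur decomposition would be preferred in a robust implementation.

import numpy as np

np.random.seed(3)
n, p = 3, 1
A = -np.eye(n) + 0.2 * np.random.randn(n, n)
B = np.random.randn(n, p)
C = np.random.randn(p, n)
D = np.array([[2.0]])

R = D + D.T
Ri = np.linalg.inv(R)
M = np.block([[A - B @ Ri @ C,           B @ Ri @ B.T],
              [-C.T @ Ri @ C,  -A.T + C.T @ Ri @ B.T]])

w, V = np.linalg.eig(M)
if np.any(np.abs(w.real) < 1e-8):
    print("M has (nearly) imaginary eigenvalues: the LMI (2.35) is infeasible")
else:
    Vs = V[:, w.real < 0]                 # basis of the stable eigenspace
    V1, V2 = Vs[:n, :], Vs[n:, :]
    Pare = np.real(V2 @ np.linalg.inv(V1))
    res = A.T @ Pare + Pare @ A + (Pare @ B - C.T) @ Ri @ (Pare @ B - C.T).T
    print("ARE residual norm:", np.linalg.norm(res))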


Chapter 3

Some Matrix Problems

3.1 Minimizing Condition Number by Scaling

The condition number of a matrix M ∈ Rp×q , with p ≥ q, is the ratio of its largest and smallest singular values, i.e.,

    κ(M ) = ( λmax (M T M ) / λmin (M T M ) )^{1/2}

for M full-rank, and κ(M ) = ∞ otherwise. For square invertible matrices this reduces to κ(M ) = ||M || ||M −1 ||. We consider the problem:

    minimize    κ(LM R)
    subject to  L ∈ Rp×p , diagonal and nonsingular,
                R ∈ Rq×q , diagonal and nonsingular,     (3.1)

where L and R are the optimization variables, and the matrix M ∈ Rp×q is given. We assume without loss of generality that p ≥ q and M is full-rank. We will show that this problem can be transformed into a GEVP.

Let us fix γ > 1. There exist nonsingular, diagonal L ∈ Rp×p and R ∈ Rq×q such that κ(LM R) ≤ γ if and only if there are nonsingular, diagonal L ∈ Rp×p and R ∈ Rq×q and µ > 0 such that

    µI ≤ (LM R)T (LM R) ≤ µγ 2 I.     (3.2)

Since we can absorb the factor 1/√µ into L, this is equivalent to the existence of nonsingular, diagonal L ∈ Rp×p and R ∈ Rq×q such that

    I ≤ (LM R)T (LM R) ≤ γ 2 I,

which is the same as

    (RRT )−1 ≤ M T (LT L)M ≤ γ 2 (RRT )−1 .

This is equivalent to the existence of diagonal P ∈ Rp×p , Q ∈ Rq×q with P > 0, Q > 0 and

    Q ≤ M T P M ≤ γ 2 Q.     (3.3)

To see this, first suppose that L ∈ Rp×p and R ∈ Rq×q are nonsingular and diagonal, and (3.2) holds. Then (3.3) holds with P = LT L and Q = (RRT )−1 . Conversely, suppose that (3.3) holds for diagonal P ∈ Rp×p and Q ∈ Rq×q with P > 0, Q > 0. Then (3.2) holds for L = P 1/2 and R = Q−1/2 .

Hence we can solve (3.1) by solving the GEVP:

    minimize    γ
    subject to  P ∈ Rp×p and diagonal, P > 0,
                Q ∈ Rq×q and diagonal, Q > 0,
                Q ≤ M T P M ≤ γ 2 Q.

Remark: This result is readily extended to handle scaling matrices that have a given block-diagonal structure, or more generally, the constraint that one or more blocks are equal. Complex matrices are readily handled by substituting M ∗ (complex-conjugate transpose) for M T .

3.2 Minimizing Condition Number of a Positive-Definite Matrix

A related problem is minimizing the condition number of a positive-definite matrix M that depends affinely on the variable x, subject to the LMI constraint F (x) > 0. This problem can be reformulated as the GEVP

    minimize    γ
    subject to  F (x) > 0,  µ > 0,  µI < M (x) < γµI     (3.4)

(the variables here are x, µ, and γ). Suppose

    M (x) = M0 + Σ_{i=1}^m xi Mi ,    F (x) = F0 + Σ_{i=1}^m xi Fi .

We can reformulate this GEVP as an EVP as follows. Defining the new variables ν = 1/µ and x̃ = x/µ, we can express (3.4) as the EVP with variables ν, x̃, and γ:

    minimize    γ
    subject to  νF0 + Σ_{i=1}^m x̃i Fi > 0,   I < νM0 + Σ_{i=1}^m x̃i Mi < γI.

3.3 Minimizing Norm by Scaling

The optimal diagonally scaled norm of a matrix M ∈ Cn×n is defined as

    ν(M ) = inf { ||DM D−1 || | D ∈ Cn×n is diagonal and nonsingular },

where ||M || is the norm (i.e., largest singular value) of the matrix M . We will show that ν(M ) can be computed by solving a GEVP.
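Before turning to the scaled-norm GEVP, here is a minimal CVXPY sketch of the EVP reformulation of §3.2 above (assuming CVXPY is installed; the affine data M_i, F_i are random and illustrative, and strict inequalities are approximated with a small margin).

import numpy as np
import cvxpy as cp

np.random.seed(4)
n, m = 4, 2
M0 = np.eye(n)
Mi = [np.diag(np.random.rand(n)) for _ in range(m)]
F0 = np.eye(n)
Fi = [(W + W.T) / 2 for W in (0.1 * np.random.randn(m, n, n))]

nu = cp.Variable(nonneg=True)
xt = cp.Variable(m)
gamma = cp.Variable()
Mx = nu * M0 + sum(xt[i] * Mi[i] for i in range(m))
Fx = nu * F0 + sum(xt[i] * Fi[i] for i in range(m))
eps = 1e-6
prob = cp.Problem(cp.Minimize(gamma),
                  [Fx >> eps * np.eye(n),
                   Mx >> np.eye(n),
                   gamma * np.eye(n) - Mx >> 0])
prob.solve()
print("achievable condition-number bound gamma:", prob.value)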

Note that

    ν(M ) = inf { γ | (DM D−1 )∗ (DM D−1 ) < γ 2 I for some diagonal, nonsingular D }
          = inf { γ | M ∗ D∗ D M ≤ γ 2 D∗ D for some diagonal, nonsingular D }
          = inf { γ | M ∗ P M ≤ γ 2 P for some diagonal P = P T > 0 }.

Therefore ν(M ) is the optimal value of the GEVP

    minimize    γ
    subject to  P > 0 and diagonal,   M ∗ P M < γ 2 P.

Remark: This result can be extended in many ways. For example, the more general case of block-diagonal similarity-scaling, with equality constraints among the blocks, is readily handled. As another example, we can solve the problem of simultaneous optimal diagonal scaling of several matrices M1 , . . . , Mp : To compute

    inf { max_{i=1,...,p} ||DMi D−1 || | D ∈ Cn×n is diagonal and nonsingular },

we simply compute the optimal value of the GEVP

    minimize    γ
    subject to  P > 0 and diagonal,   Mi∗ P Mi < γ 2 P,  i = 1, . . . , p.

Another closely related quantity, which plays an important role in the stability analysis of linear systems with uncertain parameters, is

    inf { γ | P > 0,  M ∗ P M + i(M ∗ G − GM ) < γ 2 P,  P and G diagonal and real },     (3.5)

which can also be computed by solving a GEVP.

3.4 Rescaling a Matrix Positive-Definite

We are given a matrix M ∈ Cn×n , and ask whether there is a diagonal matrix D > 0 such that the Hermitian part of DM is positive-definite. This is true if and only if the LMI

    M ∗ D + DM > 0,   D > 0 and diagonal     (3.6)

is feasible. So determining whether a matrix can be rescaled positive-definite is an LMIP. When this LMI is feasible, we can find a feasible scaling D with the smallest condition number by solving the EVP

    minimize    µ
    subject to  D diagonal,   M ∗ D + DM > 0,   I < D < µ2 I.
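The following is a small CVXPY sketch of the rescaling problem of §3.4 for a real matrix M (an illustrative example matrix; complex data would use the conjugate transpose as in the text). For simplicity the sketch uses the normalization I <= D <= mu*I, so that mu directly bounds the condition number of D; this is a slight variant of the EVP stated above, adopted only to keep the example short.

import numpy as np
import cvxpy as cp

M = np.array([[1.0, 0.4, 0.0],
              [-0.3, 1.0, 0.5],
              [0.2, -0.1, 1.0]])
n = M.shape[0]

d = cp.Variable(n)
mu = cp.Variable()
D = cp.diag(d)
eps = 1e-6
prob = cp.Problem(cp.Minimize(mu),
                  [M.T @ D + D @ M >> eps * np.eye(n),
                   d >= 1,
                   d <= mu])
prob.solve()
print(prob.status, "diagonal scaling:", d.value, "cond(D) <=", prob.value)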

Remark: The LMI (3.6) can also be interpreted as the condition for the existence of a diagonal quadratic Lyapunov function that proves stability of −M ; see the Notes and References.

3.5 Matrix Completion Problems

In a matrix completion problem some entries of a matrix are fixed and the others are to be chosen so the resulting ("completed") matrix has some property. Many of these problems can be cast as LMIPs. The simplest example is positive-definite completion: some entries of a symmetric matrix are given, and we are to complete the matrix, i.e., determine the remaining entries, to make it positive-definite. Evidently this is an LMIP. We can assume that the diagonal entries are specified (otherwise we simply make them very large, i.e., eliminate them).

We can easily recover some known results about this problem. With the diagonal entries specified, the set of all positive-definite completions, if nonempty, is bounded. Hence the associated LMI has an analytic center A⋆ (see §2.4). The matrix A⋆ is characterized by Tr A⋆−1 B = 0 for every B = B T that has zero entries where A has specified entries (see §2.4). Thus the matrix A⋆−1 has a zero entry in every location corresponding to an unspecified entry in the original matrix; in other words, it has the same sparsity structure as the original matrix. Depending on the pattern of specified entries, this condition can lead to an "analytic solution" of the problem. But of course an arbitrary positive-definite completion problem is readily solved as an LMIP.

Let us mention a few simple generalizations of this matrix completion problem that have not been considered in the literature but are readily solved as LMIPs, and might be useful in applications.

Suppose that for each entry of the matrix we specify an upper and lower bound: for entries that are completely specified, the lower and upper bound coincide, and for entries that are completely free the lower bound is −∞ and the upper bound is +∞. The problem of completing the matrix, subject to the bounds on each entry, to make it positive-definite (or a contraction, i.e., have norm less than one) is readily formulated as an LMIP. As a more specific example, we consider an "interval matrix": suppose we are given upper and lower bounds for some of the correlation coefficients of a stationary time-series, and must determine some complementary correlation coefficients that are consistent with them; in other words, we have a Toeplitz matrix with some entries within given intervals, and we must determine the free entries so that the resulting matrix is Toeplitz and positive-definite, for any choice of the specified elements in the intervals. This is an LMIP.

As another example, suppose we are given a set of matrices that have some given sparsity pattern. We want to determine a matrix with the complementary sparsity pattern that, when added to each of the given matrices, results in a positive-definite (or contractive, etc.) matrix. We might call this the "simultaneous matrix completion problem" since we seek a completion that serves for multiple original matrices. This is an LMIP.
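As a concrete illustration of positive-definite completion, the following CVXPY sketch (with hypothetical specified entries) solves the LMIP with a maximum-determinant objective, which selects the analytic-center completion; as noted above, the inverse of that completion has zeros in the unspecified positions.

import numpy as np
import cvxpy as cp

n = 4
# specified entries: diagonal plus a few off-diagonal values (hypothetical data)
specified = {(0, 0): 2.0, (1, 1): 2.0, (2, 2): 2.0, (3, 3): 2.0,
             (0, 1): 1.0, (1, 2): -0.8, (2, 3): 0.5}

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 1e-8 * np.eye(n)]
for (i, j), v in specified.items():
    constraints.append(X[i, j] == v)

prob = cp.Problem(cp.Maximize(cp.log_det(X)), constraints)
prob.solve()
# inspect the inverse: entries at unspecified positions should be (near) zero
print(np.round(np.linalg.inv(X.value), 3))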

3.6 Quadratic Approximation of a Polytopic Norm

Consider a piecewise linear norm || · ||pl on Rn defined by

    ||z||pl = max_{i=1,...,p} |aTi z|,

where ai ∈ Rn for i = 1, . . . , p. For any P > 0, the quadratic norm defined by ||z||P = (z T P z)^{1/2} = ||P 1/2 z|| satisfies

    (1/√α) ||z||P ≤ ||z||pl ≤ √α ||z||P     (3.7)

for all z, for some constant α ≥ 1. Thus, the quadratic norm approximates ||z||pl within a factor of √α. We will show that the problem of finding P that minimizes α, i.e., the problem of determining the optimal quadratic norm approximation of || · ||pl , is an EVP.

Let v1 , . . . , vL be the vertices of the unit ball Bpl of || · ||pl , so that

    Bpl = {z | ||z||pl ≤ 1} = Co {v1 , . . . , vL },

and let BP denote the unit ball of || · ||P . Then (3.7) is equivalent to

    (1/√α) BP ⊆ Bpl ⊆ √α BP .

The first inclusion, (1/√α) BP ⊆ Bpl , is equivalent to aTi P −1 ai ≤ α, i = 1, . . . , p, and the second, Bpl ⊆ √α BP , is equivalent to viT P vi ≤ α, i = 1, . . . , L. Therefore we can minimize α such that (3.7) holds for some P > 0 by solving the EVP

    minimize    α
    subject to  viT P vi ≤ α,  i = 1, . . . , L,
                [ P    ai ]
                [ aTi  α  ] ≥ 0,  i = 1, . . . , p.

Remark: The number of vertices L of the unit ball of || · ||pl can grow exponentially in p and n. Therefore the result is uninteresting from the point of view of computational complexity and of limited practical use except for problems with low dimensions. Indeed, we can find (in polynomial-time) a norm for which √α is less than √n, which does not require us to find the vertices vi , e.g., by computing the maximum volume ellipsoid that lies inside the unit ball of || · ||pl . The optimal factor √α in this problem never exceeds √n. See the Notes and References for more details.
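A short CVXPY sketch of the EVP above, using the l_infinity norm on R^2 as an illustrative piecewise-linear norm (so the rows a_i and vertices v_i are known explicitly); for this example the optimal alpha equals n = 2.

import numpy as np
import cvxpy as cp

a_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]              # |z1|, |z2|
verts = [np.array([s1, s2]) for s1 in (-1, 1) for s2 in (-1, 1)]    # unit-ball vertices
n = 2

P = cp.Variable((n, n), symmetric=True)
alpha = cp.Variable()
cons = [P >> 1e-8 * np.eye(n)]
for v in verts:
    cons.append(cp.quad_form(v, P) <= alpha)
for a in a_list:
    ai = a.reshape(n, 1)
    cons.append(cp.bmat([[P, ai], [ai.T, cp.reshape(alpha, (1, 1))]]) >> 0)
prob = cp.Problem(cp.Minimize(alpha), cons)
prob.solve()
print("alpha =", prob.value)     # sqrt(alpha) is the approximation factor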

3.7 Ellipsoidal Approximation

The problem of approximating some subset of Rn with an ellipsoid arises in many fields and has a long history. In this section we consider subsets formed from ellipsoids E1 , . . . , Ep in various ways: union, intersection, and addition. We approximate these sets by an ellipsoid E0 . To pose such a problem precisely we need to know how the subset is described, whether we seek an inner or outer approximation, and how the approximation will be judged (volume, diameter, major or minor semi-axis, etc.). In some cases the problem can be cast as a CP or an EVP, and hence solved exactly. In other cases, the problem is known to be NP-hard, so it is unlikely that the problem can be reduced (polynomially) to an LMI problem. In some of these cases, we can compute an approximation of the optimal ellipsoid by solving a CP or an EVP.

As an example consider the problem of finding the ellipsoid centered around the origin of smallest volume that contains a polytope described by its vertices. We saw in §2.4 that this problem can be cast as a CP. As another example consider the problem of finding the ellipsoid of smallest volume that contains a polytope described by a set of linear inequalities, i.e., { x | aTi x ≤ bi , i = 1, . . . , p }. (This is the same problem as described above, but with a different description of the polytope.) This problem is NP-hard. Indeed, consider the problem of simply verifying that a given, fixed ellipsoid contains a polytope described by a set of linear inequalities: this problem is equivalent to the general concave quadratic programming problem, which is NP-complete. This also suggests that the approximations obtained by solving a CP or EVP will not be good for all instances of problem data, although the approximations may be good on "typical" problems, and hence of some use in practice.

We describe an ellipsoid E in two different ways. The first description uses convex quadratic functions:

    E = {x | T (x) ≤ 0},   T (x) = xT Ax + 2xT b + c,     (3.8)

where A = AT > 0 and bT A−1 b − c > 0 (which ensures that E is nonempty and does not reduce to a single point). Note that this description is homogeneous, i.e., we can scale A, b and c by any positive factor without affecting E. The volume of E is given by

    vol(E)2 = β det ( (bT A−1 b − c) A−1 ),

where β is a constant that depends only on the dimension of x. The diameter of E (i.e., twice the maximum semi-axis length) is

    2 ( (bT A−1 b − c) λmax (A−1 ) )^{1/2}.

We will also describe an ellipsoid as the image of the unit ball under an affine mapping with symmetric positive-definite matrix:

    E = { x | (x − xc )T P −2 (x − xc ) ≤ 1 } = { P z + xc | ||z|| ≤ 1 },     (3.9)

where P = P T > 0. (This representation is unique, i.e., P and xc are uniquely determined by the ellipsoid E.) The volume of E is then proportional to det P , and its diameter is 2λmax (P ).

Each of these descriptions of E is readily converted into the other. Given a representation (3.8) (i.e., A, b, and c) we form the representation (3.9) with

    P = (bT A−1 b − c)^{1/2} A−1/2 ,    xc = −A−1 b.

Given the representation (3.9), we can form a representation (3.8) of E with

    A = P −2 ,    b = −P −2 xc ,    c = xTc P −2 xc − 1.

(This forms one representation of E; we can form every representation of E by scaling these A, b and c by positive factors.)
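The following NumPy sketch carries out the conversion between the two descriptions just given, on illustrative data; the round trip recovers (A, b, c) up to the positive scale factor allowed by the homogeneous description (3.8).

import numpy as np
from scipy.linalg import sqrtm

def quad_to_affine(A, b, c):
    Ainv = np.linalg.inv(A)
    scale = float(b @ Ainv @ b - c)            # must be > 0 for a nondegenerate ellipsoid
    P = np.sqrt(scale) * np.linalg.inv(sqrtm(A)).real
    xc = -Ainv @ b
    return P, xc

def affine_to_quad(P, xc):
    Pinv2 = np.linalg.inv(P @ P)
    return Pinv2, -Pinv2 @ xc, float(xc @ Pinv2 @ xc - 1.0)

A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([0.2, -0.1])
c = -1.0
P, xc = quad_to_affine(A, b, c)
A2, b2, c2 = affine_to_quad(P, xc)
print(np.allclose(A2 / A2[0, 0], A / A[0, 0]))   # same ellipsoid, rescaled data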

To summarize. . p. bT0 c0 bTi ci Since our representation of E0 is homogeneous. . be positive. . τp .11). . or.e. . . . τp such that for every x. the volume of E0 is proportional to det A−10 . . (3. the condition (3.12)    0 b0 −A0 0 0 0 with variables A0 .7 Ellipsoidal Approximation 43 Given the representation (3. T0 (x) − τi Ti (x) ≤ 0. . every x such that Ti (x) ≤ 0 satisfies T0 (x) ≤ 0. . . Ep by solving the CP (with variables A0 . . .9). we can form a representation (3. Co i=1 Ei ). τp ) minimize log det A−1 0 subject to A0 > 0. . i = 1. . . Ep (or equiva- p lently. . we will now normalize A0 . b. i = 1.1 Outer approximation of union of ellipsoids We seek a small ellipsoid E0 that covers theSunion of ellipsoids E1 .3. . τ1 ≥ 0. By the S-procedure. In other words we set c0 = bT0 A−1 0 b0 − 1 (3. p. We have p [ E0 ⊇ Ei (3. . τp such that (3. b0 and c0 in a convenient way: such that bT0 A−1 0 b0 − c0 = 1. . . p. . in fact. . We will describe these ellipsoids via the associated quadratic functions Ti (x) = xT Ai x + 2xT bi + ci .12) This electronic version is for personal use and may not be duplicated or distributed. p.10) i=1 if and only if Ei ⊆ E0 for i = 1. Thus our condition becomes: " # " # A0 b0 A i bi − τi ≤ 0. p. . . . c by positive factors.7. . . c = xTc P −2 xc − 1. we obtain the equivalent LMI     A0 b0 0 Ai bi 0  T  b0 −1 bT0  − τi  bTi ci 0  ≤ 0.. This is true if and only if. τp ≥ 0. and τ1 . . bT0 bT0 A−1 0 b0 − 1 bTi ci Using Schur complements. Thus we can find the smallest volume ellipsoid containing the union of ellipsoids E 1 .12) holds. this is true if and only if there exist nonnegative scalars τ1 .11) and parametrize E0 by A0 and b0 alone. and τ1 . . . such that " # " # A 0 b0 Ai bi − τi ≤ 0. . it follows that the τi must. b0 .10) is equivalent to the existence of nonnegative τ1 . . for each i. .8) of E with A = P −2 .) q With the normalization (3. . (This forms one representation of E. . i = 1.) 3. the convex hull of the union. . . . b = −P −2 xc . . . . i. (3. . . (Since A0 > 0. . equivalently. i = 1. . b0 . we can form every representation of E by scaling these A. . . .

. . τp ):     A 0 b0 0 Xp A i bi 0  T  b0 −1 bT0  − τi  bTi ci 0  ≤ 0. . such that p " # X Ai bi τi ≥ 0. and τ1 . . 3. . . b0 .13) to hold.13) to hold.13) i=1 holds. sphere) of smallest diameter that con- tains Ei by minimizing λmax (A−1 0 ) subject to the constraints of the previous CP. . . Once again we normalize our representation of E0 so that (3. Ep . τp ≥ 0.. τp such that p " # " # A0 b0 X A i bi − τi ≤ 0. .e. .11) holds. where λopt is an optimal value of the EVP above. see the Notes and References. .. bT0 bT0 A−1 0 b0 − 1 i=1 bTi ci which can be written as the LMI (in variables A0 . and τ1 . τ1 ≥ 0. i.7. we do not expect to cast the problem of finding the smallest volume ellipsoid covering the intersection of some ellipsoids as a CP. certain bounds on how well it approximates Co Ei ). τ1 ≥ 0. From the S-procedure we obtain the condition: there exist positive scalars τ1 . and satisfies several niceSproperties (e.e. A fortiori. τp . .15) is unbounded below. . it characterizes some but not all of the ellipsoids that cover the intersection of E1 . E1 . which is equivalent to solving the EVP maximize λ subject toA0 > λI. given E0 .14)    i=1 0 b0 −A0 0 0 0 This LMI is sufficient but not necessary for (3. so we do not expect to recast it as one of our LMI problems. We will describe three ways that we can compute suboptimal ellipsoids for this problem by solving CPs. . is NP-complete. . τp ≥ 0. . (3. . . WeS can find the ellipsoid (or equivalently. (3. . . We can find the best such outer approximation (i. . .15) subject to A0 > 0. . The first method uses the S-procedure to derive an LMI that is a sufficient condition for (3.44 Chapter 3 Some Matrix Problems Sp This ellipsoid is called the Löwner–John ellipsoid for i=1 Ei . . .2 Outer approximation of the intersection of ellipsoids Simply verifying that p \ E0 ⊇ Ei (3. not all zero.. . . . Copyright ° . affine invariance. i=1 bTi ci In this case (3. . (3.12) √ The optimal diameter is 2/ λopt . Ep . .14) Tp Remark: The intersection i=1 Ei is empty or a single point if and only if there exist nonnegative τ1 .g. b0 . c 1994 by the Society for Industrial and Applied Mathematics. τp ) minimize log det A−1 0 (3. . . the one of smallest volume) by solving the CP (with variables A0 .

. .9). i=1 In fact we can give a bound on how poorly E0 approximates the intersection.. See the Notes and References for more discussion. . Ep are given in the form (3.7 Ellipsoidal Approximation 45 Another method that can be used to find a suboptimal solution for the problem of determining the minimum volume ellipsoid that contains the intersection of E 1 . p. . Ep if and only if for every x satisfying (x − x0 )T P0−2 (x − x0 ) ≤ 1. Using the S-procedure.3 Inner approximation of the intersection of ellipsoids The ellipsoid E0 is contained in the intersection of the ellipsoids E1 . which can be cast as a CP (see the next section). . . >0 (x − xp )T Pp−1 1 Tp in the variable Tp x has as feasible set the interior of i=1 Ei . .4. . when E0 is enlarged by a factor of 3p + 1. . i. . The LMI Ã" # I P1−1 (x − x1 ) F (x) = diag . . We assume now that the intersection i=1 Ei has nonempty interior (i. i = 1. Then it can be shown that p ∆ © \ E0 = x ¯ (x − x? )T H(x − x? ) ≤ 1 ⊆ ¯ ª Ei . . . It can be shown that p \ x ¯ (x − x? )T H(x − x? ) ≤ (3p + 1)2 ⊇ © ¯ ª Ei . . . Ep is to first compute the maximum volume ellipsoid that is contained in the intersection. is nonempty and not a single point). . .. .1). . (x − x1 )T P1−1 1 " #! I Pp−1 (x − xp ) .. .. this is equivalent to the existence of nonnegative λ 1 . Let x? denote its analytic center. λp satisfying for every x.16) −λi (x − x0 )T P0−2 (x − x0 ) ≤ 1 − λi . . . with x1 . i=1 Tp Thus. If this ellipsoid is scaled by a factor of n about it center then it is guaranteed to contain the intersection. This electronic version is for personal use and may not be duplicated or distributed. . .. i = 1.e.7. (x − xi )T Pi−2 (x − xi ) (3. . . x? = arg min log det F (x)−1 F (x) > 0 and let H denote the Hessian of log det F (x)−1 at x? . Yet another method for producing a suboptimal solution is based on the idea of the analytic center (see §2. .e.. . See the Notes and References for more discussion of this. it covers i=1 Ei . .3. . p. . . 3. . xp and P1 . Pp . . Suppose that E1 . we have (x − xi )T Pi−2 (x − xi ) ≤ 1..

T0 xi − τi Ti (xi ) ≤ 0. xp Pp Pp Then both T0 ( i=1 xi ) and i=1 τi Ti (xi ) are quadratic functions of x.16) is equivalent to   −Pi2 xi − x 0 P0  (xi − x0 )T λi − 1 0  ≤ 0. . . (3.17) Equivalently.11) holds. . . . .. . . . xp . . . i=1 c 1994 by the Society for Industrial and Applied Mathematics. The ellipsoid E0 contains A if and only if for every x1 . i. i = 1. p. let Ei denote the matrix such that xi = Ei x. p. . . . . b̃i = EiT bi . . . . . 3. i = 1. p suchÃthat ! p p X X (3. (3. . this yields the sphere of largest diameter contained in the intersection. λp ≥ 0.} We seek a small ellipsoid E0 containing A. .   x1  . i=1 i=1 Once again we normalize our representation of E0 so that (3. . Copyright ° . . . . we can find one that maximizes the smallest semi-axis by solving minimize λ subject to λI > P0 > 0. λ1 ≥ 0. . λp ≥ 0. i=1 By application of the S-procedure. . . p.   . . . . . we obtain the largest volume ellipsoid contained in the intersection of the ellipsoids E1 . λp ) minimize log det P0−1 subject to P0 > 0. . . p. . λ1 . Ep is defined as ∆ A = {x1 + · · · + xp | x1 ∈ E1 .7. . . and define p X E0 = Ei . .17) Among the ellipsoids contained in the intersection of E1 . (3. x0 . . this condition is true if there exist τi ≥ 0. λ1 ≥ 0.e. . Ep by solving the CP (with variables P0 . xp such that Ti (xi ) ≤ 0. . .46 Chapter 3 Some Matrix Problems We prove in the Notes and References section that (3. Indeed. i = 1. . . . . . . we have à p ! X T0 xi ≤ 0. Let x denote the vector made by stacking x1 .17)   P0 0 −λi I Therefore. . Ãi = EiT Ai Ei . .18) for every xi . . . . i = 1. Ep . xp ∈ Ep . i = 0. .4 Outer approximation of the sum of ellipsoids The sum of the ellipsoids E1 . . .  x= . .

This is a CP. see also Chapter 8 in this book. used in the analysis of linear systems with parameter uncertainties. The quantity (3.19) Ã0 b̃0 X Ãi b̃i − τi ≤ 0. . . which can be computed via GEVPs. Saf82]). . BY89. . since the cost of computing the optimal scalings (via the GEVP) exceeds the cost of solving the problem for which the scalings are required (e. CS92b]. was introduced by Fan. Kra91].Notes and References 47 With this notation. power systems. just as we do). described in [TSC92].g. A recent survey article on this topic is by Packard and Doyle [PD93]. . and Rotea and Prasanth [RP93. This electronic version is for personal use and may not be duplicated or distributed. . b0 . . RP94]. GV74. Kaszkurewicz and Bhaya give an excellent survey on the topic of diagonal Lyapunov func- tions. Some closely related quantities were considered in [SC93. In [KB93].20)    i=1 0 b0 −A0 0 0 0 Therefore. SL93a. τp using Schur complements:     Ã0 b̃0 0 p Ãi b̃i 0  T X  b̃0 −1 bT0  − τi  b̃Ti ci 0  ≤ 0. we compute the minimum volume ellipsoid. we see that condition (3. Tits and Doyle in [FTD91] (see also [BFBE92]). the problem of computing this quantity can be easily transformed into a GEVP. τ1 . i = 1. by minimizing log det A−1 0 subject to A0 > 0 and (3. p such that " # p " # (3.1 are not practically useful in numerical analysis. Wat91]. including many references and examples from mathematical ecology. Finally. Khachiyan and Kalantari ([KK92]) consider the problem of diagonally scaling a positive semidefinite matrix via interior-point methods and give a complete complexity analysis for the problem. . We also mention some other related quantities found in the articles by Kouvaritakis and Latchman [KL85a. . The problem of similarity-scaling a matrix to minimize its norm appears often in control applications (see for example [Doy82.20). . where a different approach to the same problem and its extensions is considered. . b̃T0 bT0 A−1 0 b0 − 1 i=1 b̃Ti ci This condition is readily written as an LMI in variables A0 . A special case of the problem of minimizing the condition number of a positive-definite matrix is considered in [EHM92. . solving a set of linear equations). proven to contain the sum of E1 . see [Ger85.. Sha85. (3. The problem is the same as determining the existence of diagonal quadratic Lyapunov function for a given linear system.18) is equivalent to there exist τi ≥ 0. LN92].5). Other references on the problem of improving the condition number of a matrix via diagonal or block-diagonal scaling are [Bau63. Ep via the S-procedure. Notes and References Matrix scaling problems The problem of scaling a matrix to reduce its condition number is considered in Forsythe and Straus [FS55] (who describe conditions for the scaling T to be optimal in terms of a new variable R = T −∗ T −1 . Diagonal quadratic Lyapunov functions The problem of rescaling a matrix M with a diagonal matrix D > 0 such that its Her- mitian part is positive-definite is considered in the articles by Barker et al. KL85b. . We should point out that the results of §3. EHM93]. [BBP78] and Khalil [Kha82]. A related quantity is the one-sided multivariable stability margin. Hu87.

2. Diagonal Lyapunov functions arise in the analysis of positive orthant stability..e. in which an analytic solution is given for contractive completion problems with a special block form.5 appears.5). aTi Qai i=1 c 1994 by the Society for Industrial and Applied Mathematics. . see §10. . i = 1. p . . Hence we obtain the maximum volume ellipsoid contained in P by solving the CP minimize log det Q−1 subject to Q > 0. Kahan. their results are extended to a more general class of partially specified matrices.6. Helton and Woerdeman consider the problem of minimum norm extension of Hankel matrices. Grigoriadis. aTi Qai ≤ 1.e. In [GJSW84]. i=1 i For any x ∈ Rn . The result in this paper is closely related to (and also more general than) the matrix elimination procedure given in §2. From the standard optimality conditions. λp such that p X λi Q−1 = ai aTi . As far as we know. Frazho and Skelton consider the problem of approximating a given symmetric matrix by a Toeplitz matrix and propose a solution to this problem via convex optimization.. In both papers the “sparsity pattern” result mentioned in §3. . Barrett et al. we establish some properties of the maximum volume ellipsoid contained in a symmetric polytope described as: P = z ¯ ¯aTi z ¯ ≤ 1. p. there are nonneg- ative λ1 . . provide an algorithm for finding the completion maximizing the determinant of the completed matrix (i. . © ¯¯ ¯ ª i = 1. completing a matrix so its norm is less than one) is Davis. The condition E ⊆ P can be expressed aTi Qai ≤ 1 for i = 1. Copyright ° . such problems cannot be cast as LMIPs. Multiplying by Q1/2 on the left and right and taking the trace p yields λ = n. In [BJL89]. In [HW93]. the analytic center A ? mentioned in §3. in which the authors examine conditions for the existence of completions of band matrices such that the inverse is also a band matrix. In [GFS93]. . See [Woe90. Maximum volume ellipsoid contained in a symmetric polytope In this section.3. . we have p X (xT ai )2 xT Q−1 x = λi . . Dancis ([Dan92. Matrix completion problems Matrix completions problems are addressed in the article of Dym and Gohberg [DG81]. BW92] for more references on contractive completion problems. Its volume is √ proportional to det Q. . . . A classic paper on the contractive completion problem (i. See also [OSA93]. . NW92. aTi Qai i=1 and λiP= 0 if aTi Qai = 1.48 Chapter 3 Some Matrix Problems and digital filter stability. . and Weinberger in [DKW82]. p Suppose now that Q is optimal. Dan93] and Johnson and Lunquist [JL92]) study completion problems in which the rank or inertia of the completed matrix is specified. ¯ Consider an ellipsoid described by E = {x ¯ xT Q−1 x ≤ 1 } where Q = QT > 0. . .

Bona and Cerone [BBC90]. and can be traced back to Dikin [Dik67] and Karmarkar [Kar84] for polytopes described by a set of linear inequalities. i=1 Suppose now that x belongs to P. Lau. KLB92]. Ellipsoidal approximation The problem of computing the minimum volume ellipsoid in Rn that contains a given convex set was considered by John and Löwner.. and come with provable bounds on how suboptimal they are. These bounds have the following form: if This electronic version is for personal use and may not be duplicated or distributed. p. Cheung. Che80c] and Kahan [Kah68]. SS72] use ellipsoids containing the sum and the intersection of two given ellipsoids in the problem of state estimation with norm-bounded disturbances. ellipsoidal approximations of robot linkages and workspace are used to rapidly detect or avoid collisions. KLB90. so we have: √ E ⊆P⊆ nE. the references on the ellipsoid algorithm describe various extensions. the maximum volume ellipsoid contained in P approximates P within the scale factor √ n. Then p X xT Q−1 x ≤ λi = n. In [RB92].e. see e. Ellipsoidal approximation via analytic centers Ellipsoidal approximations via barrier functions arise in the theory of interior-point algo- rithms. In the case of symmetric sets. Yurkovich and Passino [CYP93] and the book by Norton [Nor86]. Deller [Del89]. Belforte. Similar results hold for these ellipsoids. For the case of more general LMIs. when shrunk about its center by a factor of n. i=1 √ Equivalently. Kosut and Boyd [LKB90. See also the articles by Bertsekas and Rhodes [BR71]. either λi = 0 or aTi Qai = 1.. A closely related problem is finding the maximum volume ellipsoid contained in a convex set.g. In the ellipsoid algorithm we encountered the minimum volume ellipsoid that contains a given half-ellipsoid. These approximations have the property that they are easy to compute. Che80b. so we conclude p X xT Q−1 x = λi (xT ai )2 . P ⊆ nE.255] or Post [Pos84]. Fogel and Huang [FH82]. see Nesterov and Nemirovskii [NN94]. Efficient interior-point methods for computing minimum volume outer and maximum volume inner ellipsoids for various sets are described in Nesterov and Nemirovskii [NN94]. GLS88]). Preparata and Shamos [PS85. . and Jarre [Jar93c]. is contained in the given set (see [Joh85.Notes and References 49 For each i. Minimum volume ellipsoids containing a given set arise often in control theory. the factor n can be improved to n (using an argument similar to the one in the Notes and References section above). See also [KT93]. i. Boyd and El Ghaoui [BE93]. who showed that the optimal ellipsoid. Such ellipsoids are called √ Löwner–John ellipsoids. see the ar- ticles by Fogel [Fog79]. Schweppe and Schlaepfer [Sch68. Chernousko [Che80a. Special techniques have been developed to compute smallest spheres and ellipsoids containing a given set of points in spaces of low dimension. Minimum volume ellipsoids have been widely used in system identification.

50 Chapter 3 Some Matrix Problems the inner ellipsoid is stretched about its center by some factor α (which depends only on the dimension of the space. Proof of lemma in §3. is equivalent to the matrix inequality   −P12 x1 − x0 P0  (x1 − x0 )T λ−1 0  ≤ 0.7. x1 − x0 P02 /λ − P12 which is equivalent to (3. In contrast. condition (3. (P02 /λ − P12 )P1−2 (x∗ − x1 ) P02 /λ − P12 Using (3. the type and size of the constraints) then the resulting ellipsoid covers the feasible set.22).21) where P0 .24) or. equivalently. i.e. one may be better than another.22)   P0 0 −λI The maximum over x of (x − x1 )T P1−2 (x − x1 ) − λ(x − x0 )T P0−2 (x − x0 ) (3. λ ≥ 0.25).21). however) but the factor α depends only on the dimension of the space and not on the type or number of constraints. is more dif- ficult to compute (still polynomial. (3.24).23). the Löwner–John ellipsoid. this implies " # λ−1 (x1 − x0 )T ≤ 0. So an ellipsoidal approximation from a barrier function can serve as a “cheap substitute” for the Löwner–John ellipsoid. Copyright ° .25) In this case. (3. Using Schur complements. and is always smaller than the ones obtained via barrier functions. Conversely. it is easy to show that (3. this last condition is equivalent to " # λ−1 (x∗ − x1 )T P1−2 (P02 /λ − P12 ) ≤ 0.. (x − x1 )T P1−2 (x − x1 ) − λ(x − x0 )T P0−2 (x − x0 ) ≤ 1 − λ (3.22) implies (3.3 We prove that the statement for every x.23) is finite if and only if P02 − λP12 ≤ 0 (which implies that λ > 0) and there exists x∗ satisfying P1−2 (x∗ − x1 ) = λP0−2 (x∗ − x0 ).21) implies (x∗ − x1 )T P1−2 (P12 − P02 /λ)P1−2 (x∗ − x1 ) ≤ 1 − λ. Using (3. c 1994 by the Society for Industrial and Applied Mathematics. x∗ is a maximizer of (3. depending on the number and type of constraints. P1 are positive matrices. (3. The papers cited above give different values of α. satisfying (P02 /λ − P12 )P1−2 (x∗ − x1 ) = x1 − x0 . the maximum volume inner ellipsoid.

there can be many solutions of the DI (4.1) is also a trajectory of relaxed DI (4.1). we will always get it “for free”—every result we establish in the next two chapters extends immediately to the relaxed version of the problem. Very roughly speak- ing. Our goal is to establish that various properties are satisfied by all solutions of a given DI. it can be shown that for every DI we encounter in this book.e. t). the reachable or attainable sets of the DI and its relaxed version coincide. In fact we will not need the Relaxation Theorem.1 Linear differential inclusions A linear differential inclusion (LDI) is given by ẋ ∈ Ωx.1).3) as describing a family of linear time-varying systems.1). The reason is that when a quadratic Lyapunov function is used to establish some property for the DI (4.1.) As a specific and simple example. Any x : R+ → R that satisfies (4.2)} . t) is a convex set for every x and t.Chapter 4 Linear Differential Inclusions 4. t).2). (4. x(0) = x0 . 4.1 Differential Inclusions A differential inclusion (DI) is described by: ẋ ∈ F (x(t). Of course. x(0) = x0 . for every T ≥ 0 {x(T ) | x satisfies (4. we may as well assume F (x.1) n n where F is a set-valued function on R × R+ .4) 51 . we might show that every trajectory of a given DI converges to zero as t → ∞. x(0) = x0 (4.2). i. x(0) = x0 . t). every trajectory of the DI (4. or rather. Every trajectory of the LDI satisfies ẋ = A(t)x.2) is called the relaxed version of the DI (4. We can interpret the LDI (4..1) is called a solution or trajectory of the DI (4. The DI given by ẋ ∈ Co F (x(t). By a standard result called the Relaxation Theorem. For example. t) ⊇ F (x(t).1). (4. (4. (See the References for precise statements. then the same Lyapunov function establishes the property for the relaxed DI (4.3) n×n where Ω is a subset of R . the Relaxation Theorem states that for many purposes the converse is true.1)} = {x(T ) | x satisfies (4. Since Co F (x(t).

the LDI (4.5) and (4.2.2 Some Specific LDIs We now describe some specific families of LDIs that we will encounter in the next two chapters.5) satisfy " # A(t) Bu (t) Bw (t) ∈ Ω. In order not to introduce another term to describe the set of all solutions of (4.1. Copyright ° . u is the control input. z : R+ → Rnz . x(0) = x0 . 4. the LDI reduces to the linear time-invariant (LTI) system ẋ = Ax + Bu u + Bw w. w : R+ → Rnw .3). where (" #) A Bu Bw Ω= .7) Cz Dzu Dzw Although most of the results of Chapters 5–7 are well-known for LTI systems. where Ω ⊆ R(n+nz )×(n+nu +nw ) . we will call (4.5) z = Cz (t)x + Dzu (t)u + Dzw (t)w where x : R+ → Rn . 4. In some applications we can have one or more of the integers nu .2 A generalization to systems We will encounter a generalization of the LDI described above to linear systems with inputs and outputs. We will be more specific about the form of Ω shortly. (4. we will discuss these in detail when we encounter them.5) and (4. We will consider a system described by ẋ = A(t)x + Bu (t)u + Bw (t)w.4) is a trajectory of the LDI (4. some are new. Conversely.” with the set Ω describing the “uncertainty” in the matrix A(t). 4. nw . x(0) = x0 . u : R+ → Rnu . which means that the corresponding variable is not used. c 1994 by the Society for Industrial and Applied Mathematics. (4. z = Cz x + Dzu u + Dzw w.1 Linear time-invariant systems When Ω is a singleton.3) might be described as an “uncertain time-varying linear system.6) Cz (t) Dzu (t) Dzw (t) for all t ≥ 0. x is referred to as the state. for any A : R+ → Ω. For example. (4. In the language of control theory. and nz equal to zero.6). the LDI ẋ ∈ Ωx results when nu = nw = nz = 0. w is the exogenous input and z is the output. an LDI.52 Chapter 4 Linear Differential Inclusions for some A : R+ → Ω.6) a system described by LDIs or simply. the solution of (4. The matrices in (4.

8) are given.9): Dqw and Dzp will always be zero.2 Polytopic LDIs When Ω is a polytope.2 Some Specific LDIs 53 4.L where the matrices (4. The set Ω is well defined if and only if Dqp Dqp < I. ¯ where " # " # A Bu Bw Bp h i à = .10) Cz Dzu Dzw Dzp Thus.L Co . C̃ = Cq Dqu Dqw . This electronic version is for personal use and may not be duplicated or distributed.3 Norm-bound LDIs Another special class of LDIs is the class of norm-bound LDIs (NLDIs). B̃ = . In the sequel we will consider only well-posed NLDIs. Note that Ω is convex.. Ω is the image of the (matrix norm) unit ball under a (matrix) linear-fractional T mapping. p = ∆(t)q.. then the number of vertices. L. k∆(t)k ≤ 1.L Dzw. will generally increase very rapidly (exponentially) with l. We will often rewrite the condition p = ∆(t)q.2. q = Cq x + Dqp p + Dqu u + Dqw w. (4.8) Cz. 4. w. Dzw . we often drop some terms in (4. If instead Ω is described by a set of l linear inequalities.1 AL Bu. x(0) = x0 .4. For the NLDI (4.1 Bw. Dzu and Dqu will be assumed zero. (4. These assumptions greatly simplify the formulas and treatment. output z.1 Cz.9) the set Ω has the form n ¯ o −1 Ω= à + B̃∆ (I − Dqp ∆) C̃ ¯ k∆k ≤ 1 . and a time-varying feedback matrix ∆(t) connected between q and p. with k∆(t)k ≤ 1 for all t.9) z = Cz x + Dzp p + Dzu u + Dzw w. we will call the LDI a polytopic LDI or PLDI.e. Most of our results require that Ω be described by a list of its vertices.e. The equations (4. i. Therefore results for PLDIs that require the description (4..1 Dzu. . k∆k ≤ 1 in the equivalent form pT p ≤ q T q.(4. described by ẋ = Ax + Bp p + Bu u + Bw w.. . where ∆ : R+ → Rnp ×nq .9) can be interpreted as an LTI system with inputs u. Remark: In the sequel.L Bw. in which case we say that the NLDI is well-posed . It is not hard to work out the corresponding formulas and results for the more general case. i...L Dzu.8) are of limited interest for problems in which Ω is described in terms of linear inequalities.2. in the form (" # " #) A1 Bu.1 Dzw. and very often Dqp .

11) z = Cz x + Dzp p + Dzu u + Dzw w. |δi (t)| ≤ 1 in the simpler form |pi | ≤ |qi |. we have a DNLDI. uncertain. When we have a single block.. q = Cq x + Dqp p + Dqu u + Dqw w. . determining whether a DNLDI is well-posed is NP-hard. In fact. 4. . in which we further restrict the matrix ∆ to be diagonal. Co Ω is a polytope. u. time-varying feedback gains. The condition of well-posedness for a DNLDI is that I −Dqp ∆ should be invertible for all diagonal ∆ with k∆k ≤ 1.e. u. |∆ii | = 1. Consider the system ẋ = f (x. The DNLDI is described by the set n ¯ o −1 Ω = à + B̃∆ (I − Dqp ∆) C̃ ¯ k∆k ≤ 1. Most of the results we present for NLDIs and DNLDIs can be extended to these more general types of LDIs. It turns out that for a DNLDI. x(0) = x0 . . ¯ i.4 Diagonal norm-bound LDIs A useful variation on the NLDI is the diagonal norm-bound LDI (DNLDI). Doyle introduced the idea of “structured feedback. In fact. We will often express the condition pi = δi (t)qi . B̃ and C̃ are given by (4.10). Therefore a DNLDI can be described as a PLDI.13) c 1994 by the Society for Industrial and Applied Mathematics.3 Nonlinear System Analysis via LDIs Much of our motivation for studying LDIs comes from the fact that we can use them to establish various properties of nonlinear.12) ¯ where Ã. |δi (t)| ≤ 1. the number of inequalities grows exponentially with nq . and in addition. (4. In the language of control theory. |∆ii | = 1 . Remark: More elaborate variations on the NLDI are sometimes useful.2. This is equivalent to det(I − Dqp ∆) > 0 when ∆ takes on its 2nq extreme values.e. the vertices of Co Ω are among 2nq images of the extreme points of ∆ under the matrix linear-fractional mapping. w. each of which is bounded by one.54 Chapter 4 Linear Differential Inclusions 4. t). w. It is given by ẋ = Ax + Bp p + Bu u + Bw w. we describe a DNLDI as a linear system with nq scalar. see the Notes and References. time-varying systems using a technique known as “global linearization”. pi = δi (t)qi . Copyright ° .. when it is diagonal (with no equality constraints). i. at least in principle. But there is a very important difference between the two descriptions of the same LDI: the number of vertices required to give the PLDI representation of an LDI increases exponentially with nq (see the Notes and References of Chapter 5 for more discussion). nq . (4. Although this condition is explicit. n ¯ o −1 Co Ω = Co à + B̃∆ (I − Dqp ∆) C̃ ¯ ∆ diagonal.” in which the matrix ∆ is restricted to have a given block-diagonal form. z = g(x. . The idea is implicit in the work on absolute stability originating in the Soviet Union in the 1940s. ∆ diagonal . we have an NLDI. (4. t). there can be equality con- straints among some of the blocks. Here we have the componentwise inequalities |pi | ≤ |qi | instead of the single inequality kpk ≤ kqk appearing in the NLDI. i = 1.

u. Then for any pair of trajectories (x. 0) = 0.13) has this property. and t there is a matrix G(x. w. z̃) we have   " # x − x̃ ˙ ẋ − x̃ ∈ Co Ω  u − ũ  .   z − z̃ w − w̃ i. w. 0. u. 0) = 0. t) w where Ω ⊆ R(n+nz )×(n+nu +nw ) . z − z̃) is a trajectory of the LDI given by Co Ω. then a fortiori we have proved that every trajectory of the nonlinear system (4. 0) = 0. u. w − w̃. g(0. By the mean-value theorem we have ∂φ cT (φ(x) − φ(x̃)) = cT (ζ)(x − x̃) ∂x for some ζ that lies on the line segment between x and x̃.13) is also a trajectory of the LDI defined by Ω.4. These results follow from a simple extension of the mean-value theorem: if φ : Rn → Rn satisfies ∂φ ∈Ω ∂x throughout Rn . 0) = 0. A ∈ Co Ω Since c was arbitrary.16). t) ∈ Ω such that   " # x f (x. u. .16) n To see this. If we can prove that every trajectory of the LDI defined by Ω has some property (e. (4. u. ∂x we conclude that cT (φ(x) − φ(x̃)) ≤ sup cT A(x − x̃). w. 0. let c ∈ R . which proves (4.15)   ∂x ∂u ∂w In fact we can make a stronger statement that links the difference between a pair of trajectories of the nonlinear system and the trajectories of the LDI given by Ω. Then of course every trajectory of the nonlinear system (4.3 Nonlinear System Analysis via LDIs 55 Suppose that for each x..14)   g(x. Since by assumption ∂φ (ζ) ∈ Ω. z) and (x̃..g. w. Suppose we have (4. 0. (4. g(0. 0. (x − x̃.1 A derivative condition Conditions that guarantee the existence of such a G are f (0. t. 0. 0.3. w̃. t) = G(x.e. This electronic version is for personal use and may not be duplicated or distributed. 0. w. converges to zero). 4. u. 0. w. we see that φ(x) − φ(x̃) is in every half-space that contains the convex set Co Ω(x − x̃).15) but not necessarily f (0. and   ∂f ∂f ∂f  ∂x ∂u ∂w   ∂g ∂g ∂g  ∈ Ω for all x. t)  u  (4. then for any x and x̃ we have φ(x) − φ(x̃) ∈ Co Ω(x − x̃). w.

The term linear differential inclusion is a bit misleading. βi ] ..10). LSL69]. suggested to us and used by A. . Pyatnitskii. there are many trajectories of the LDI that are not trajectories of the nonlinear system. We will see in Chapter 8 how this conservatism can be reduced in some special cases. (4. . Many authors refer to LDIs using some subset of the phrase “uncertain linear time-varying system”. We use the term only to point out the interpretation as an uncertain time-varying linear system. . Kisielewicz [Kis91].. we say the system is well-posed. see §8.. Background material. Lur57] and Popov [Pop73]. Lur’e and Postnikov [LP44. PLDIs come up in several articles on control theory. [Liu68. . Then.e. [MP89. The idea of global linearization is implicit in the early Soviet literature on the absolute stability problem. the condition (4. KP87a. c 1994 by the Society for Industrial and Applied Mathematics. e. e. BY89].3. Assume now that the system (4. can be found in the book by Aubin and Frankowska [AF90]. Define n ¯ o −1 Ω = Ã + B̃∆ (I − Dqp ∆) C̃ ¯ ∆ diagonal.. approximating the set of trajectories of a nonlinear system via LDIs can be very conservative. Notes and References Differential inclusions The books by Aubin and Cellina [AC84]. Global linearization The idea of replacing a nonlinear system by a time-varying linear system can be found in Liu et al.13) be the equations obtained by eliminating the variables p and q. where φi : R × R+ → R satisfy the sector conditions αi q 2 ≤ qφi (q. q = Cq x + Dqp p + Dqu u + Dqw w. B̃ and C̃ are given by (4.g. and let (4.17) is well-posed.14) holds. KP87b. i = 1. In words. Integral quadratic constraints The pointwise quadratic constraint p(t)T p(t) T R t ≤T q(t) q(t) R tthat occur in NLDIs can be gener- alized to the integral quadratic constraint 0 p p dτ ≤ 0 q T q dτ .17) z = Cz x + Dzp p + Dzu u + Dzw w. They call this approach global linearization. The variables pi and qi can be eliminated from (4. set-valued analysis. In this case. since LDIs do not enjoy any particu- lar linearity or superposition properties. the system consists of a linear part together with nq time-varying sector-bounded scalar nonlinearities.3. ∆ii ∈ [αi . A more accurate term. pi = φi (qi . t) ≤ βi q 2 for all q and t ≥ 0. Of course.g. where αi and βi are given. t). See also the article by Roxin [Rox65] and references therein.g. Copyright ° . is selector-linear differential inclusion. e.2 Sector conditions Sometimes the nonlinear system (4. ¯ where Ã.17) provided the matrix I−Dqp ∆ is nonsingular for all diagonal ∆ with αi ≤ ∆ii ≤ βi . Filippov and E.13) can be expressed in the form ẋ = Ax + Bp p + Bu u + Bw w.56 Chapter 4 Linear Differential Inclusions 4. i. nq . and the classic text by Filip- pov [Fil88] cover the theory of differential inclusions.2 and §8.

In other words. P > 0. P diagonal. At the least. so the different models refer to the same state vector. We collect many sets of input/output measurements. but at high frequencies they differ considerably more. see also [Sae86]. p149].e. Suppose we have a real system that is fairly well modeled as a linear system. which means that the DNLDI (4. then the DNLDI is well-posed. the method is extremely simple and natural. when equality constraints are imposed on the matrix ∆ in (4. and (I + Dqp )−1 (I − Dqp ) is a P0 matrix [Gha90]. It seems reasonable to conjecture that a controller that works well with this PLDI is likely to work well when connected to the real system. especially when used with computation- ally cheap sufficient conditions (see below). we model the plant as a time-varying linear system. of high complexity [CPS92. It is important that we have data sets from enough plants or plant conditions to characterize or at least give some idea of the plant variation that can be expected. SP89. In [FTD91]. Another equivalent condition is that (I + Dqp . see for example [Sd86. The problem of determining whether a general matrix is P0 is thought to be very difficult. the idea behind this proof can be traced back to Zadeh and Desoer [ZD63. We propose a simple method that should work well in some cases.17]. I − Dqp ) is a W0 pair (see [Wil70]).Notes and References 57 Modeling systems as PLDIs One of the key issues in robust control theory is how to model or measure plant “uncertainty” or “variation”. see [FP62. (4.g. the authors also give a sufficient condition for well-posedness of more general “structured” LDIs. different units from a manufacturing run). These conditions can also be obtained using the S-procedure. but of course not exactly the same. and Doyle [FTD91] show that if the LMI T Dqp P Dqp < P. Such controllers can be designed by the methods described in Chapter 7. BBB91. More specifically. Representing DNLDIs as PLDIs For a well-posed DNLDI..12) can also be represented as a PLDI.18) is feasible. obtained at different times. These models should be fairly close. Standard branch-and-bound techniques can be applied to the problem of determining well- posedness of DNLDIs. e. There are several sufficient conditions for well-posedness of a DNLDI that can be checked in polynomial-time. FP67]): A DNLDI is well-posed if and only if I + Dqp is invertible. we have ¯ Co Ω = Co à + B̃∆ (I − Dqp ∆)−1 C̃ ¯ ∆ diagonal.11). FP66. under different op- erating conditions.g. Of course these methods have no theoretical advantage over simply checking that the 2nq determinants are positive. We might find for example that the transfer matrices of these models at s = 0 differ by about 10%. this collection of models contains information about the “structure” of the plant variation. . © ª This electronic version is for personal use and may not be duplicated or distributed. see [Fer93]. Co Ω is a polytope. More importantly. An equivalent condition is in terms of P0 matrices (a matrix is P0 if every principal minor is nonnegative. Well-posedness of DNLDIs See [BY89] for a proof that a DNLDI is well-posed if and only if det(I − Dqp ∆) > 0 for every ∆ with |∆ii | = 1. §9.18) is equivalent to the existence of a diagonal (scaling) matrix R such that kRDqp R−1 k < 1. GS88. with system matrices that can jump around among any of the models we estimated.. |∆ii | = 1 . i. Fan. Tits. But in practice they may be more efficient.. For each data set we develop a linear system model of the plant. BB92]. Condition (4. 
To simplify the problem we will assume that the state in this model is accessible. or perhaps from different instances of the system (e. We propose to model the system as a PLDI with the vertices given by the measured or estimated linear system models.

where ² > 0 is sufficiently small. For L very large. . (We can also replace Bp and Cq with Bp U and V Cq where U and V are any orthogonal matrices. since otherwise. © ª K= which is a set containing at most 2nq points. . Bp . c 1994 by the Society for Industrial and Applied Mathematics. for W ∈ Ω. ∆nq nq leads us to the following conclusion: Wext = à + B̃S (I − Dqp S)−1 C̃. L.. . Therefore. we will only consider the case with Bp ∈ Rn×n and invertible. . for fixed ∆22 . One reason. Note that the set on the right-hand side is a polytope with at most 2nq vertices. ∆nq nq . Copyright ° . 1]. Moreover. Wext ∈ Co K.20) ∆ with associated set ΩNLDI = {A + Bp ∆Cq | k∆k ≤ 1}. Define ¯ à + B̃∆ (I − Dqp ∆)−1 C̃ ¯ ∆ diagonal. We will now show that every extreme point of Co Ω is contained in Co K. We have already noted that the description of a DNLDI as a PLDI will be much larger than its description as a DNLDI. Wext ∈ Ω. We now prove this result. which will imply that Co K ⊇ Co Ω. there exists π such that Bp π = (Ak − A)ξ. among many.10). ∆11 = ±1. . By definition ¯ Co Ω = Co à + B̃∆ (I − Dqp ∆)−1 C̃ ¯ ∆ diagonal. ψ(W ) is a bounded linear-fractional function of the first diagonal entry ∆11 of ∆. A(t) ∈ ΩPLDI = Co {A1 . . which concludes our proof. see the Notes and References of Chapter 5. . (4. This will give an efficient outer approximation of the PLDI (4. Evidently there is substantial redundancy in our representation of the NLDI (4. but we will not use this fact. Now. since the DNLDI is well-posed. (4. . In other words. AL } . which contradicts Wext ∈ Co Ω. |∆ii | = 1 . k∆(t)k ≤ 1. . Kam83]. ∆nq nq . i. © ª Clearly we have Co Ω ⊇ Co K. contains Ω but not Wext . . and Cq such that ΩPLDI ⊆ ΩNLDI with the set ΩNLDI as small as possible in some appropriate sense.. . . completing the proof. .e.20). i. In fact. Let Wext be an extreme point of Co Ω. it is much easier to work with the NLDI than the PLDI. π T π ≤ ξ T CqT Cq ξ.58 Chapter 4 Linear Differential Inclusions where Ã. . b. . KP87a. B̃ and C̃ are given by (4. |∆ii | ≤ 1 .19) with x(t) ∈ Rn . it equals (a + b∆11 )/(c + d∆11 ) for some a. and the NLDI ẋ = (A + Bp ∆(t)Cq )x. c and d.e. every result we establish for all trajectories of the NLDI holds for all trajectories of the PLDI. Since ΩPLDI ⊆ ΩNLDI implies that every trajectory of the PLDI is a trajectory of the NLDI. the function ψ achieves a maximum at an extreme value of ∆11 . We consider the PLDI ∆ ẋ = A(t)x.20): We can replace B p by αBp and Cq by Cq /α where α 6= 0. where S satisfies Sii = ±1. Extending this argument to ∆22 . is the potentially much smaller size of the description as an NLDI compared to the description of the PLDI. for every fixed ∆22 . . and also [BY89] and [PS82. the denominator c + d∆11 is nonzero for ∆11 ∈ [−1. Approximating PLDIs by NLDIs It is often useful to conservatively approximate a PLDI as an NLDI. the half-space {W | ψ(W ) < ψ(Wext ) − ²}. For simplicity. . Then there exists a linear function ψ such that Wext is the unique maximizer of ψ over Co Ω. . . Our goal is to find A. without affecting the set ΩNLDI .) We have ΩPLDI ⊆ ΩNLDI if for every ξ and k = 1. .19) by the NLDI (4.


This yields the equivalent condition

(Ak − A)T Bp−T Bp−1 (Ak − A) ≤ CqT Cq , k = 1, . . . , L,

which, in turn, is equivalent to
" #
CqT Cq (Ak − A)T
Bp BpT > 0, ≥ 0, k = 1, . . . , L.
Ak − A Bp BpT

Introducing new variables V = CqT Cq and W = Bp BpT , we get the equivalent LMI in A, V
and W
" #
V (Ak − A)T
W > 0, ≥ 0, k = 1, . . . , L. (4.21)
Ak − A W

Thus we have ΩPLDI ⊆ ΩNLDI if condition (4.21) holds.
There are several ways to minimize the size of ΩNLDI ⊇ ΩPLDI . The most obvious is to
minimize Tr V + Tr W subject to (4.21), which is an EVP in A, V and W . This objective
is clearly related to several measures of the size of ΩNLDI , but we do not know of any simple
interpretation.
We can formulate the problem of minimizing the diameter of ΩNLDI as an EVP. We define the
n×n
diameter
p of a set Ω ⊆ R as max {kF − Gk | F, G ∈ Ω}. The diameter of ΩNLDI is equal
to 2 λmax (V )λmax (W ). We can exploit the scaling redundancy in Cq to assume without loss
of generality that λmax (V ) ≤ 1, and then minimize λmax (W ) subject to (4.21) and V ≤ I,
which is an EVP. It can be shown that the resulting optimal V opt satisfies λmax (V opt ) = 1,
so that we have in fact minimized the diameter of ΩNLDI subject to ΩPLDI ⊆ ΩNLDI .

This electronic version is for personal use and may not be duplicated or distributed.

Chapter 5

Analysis of LDIs:
State Properties

In this chapter we consider properties of the state x of the LDI
ẋ = A(t)x, A(t) ∈ Ω, (5.1)
n×n
where Ω ⊆ R has one of four forms:

• LTI systems: LTI systems are described by ẋ = Ax.

• Polytopic LDIs: PLDIs are described by ẋ = A(t)x, A(t) ∈ Co {A1 , . . . , AL }.

• Norm-bound LDIs: NLDIs are described by
ẋ = Ax + Bp p, q = Cq x + Dqp p, p = ∆(t)q, k∆(t)k ≤ 1,
which we will rewrite as
ẋ = Ax + Bp p, pT p ≤ (Cq x + Dqp p)T (Cq x + Dqp p). (5.2)
We assume well-posedness, i.e., kDqp k < 1.

• Diagonal Norm-bound LDIs: DNLDIs are described by

ẋ = Ax + Bp p, q = Cq x + Dqp p,
(5.3)
pi = δi (t)qi , |δi (t)| ≤ 1, i = 1, . . . , nq .
which can be rewritten as
ẋ = Ax + Bp p, q = Cq x + Dqp p, |pi | ≤ |qi |, i = 1, . . . , nq .
Again, we assume well-posedness.

5.1 Quadratic Stability
We first study stability of the LDI (5.1), that is, we ask whether all trajectories of
system (5.1) converge to zero as t → ∞. A sufficient condition for this is the existence
of a quadratic function V (ξ) = ξ T P ξ, P > 0 that decreases along every nonzero
trajectory of (5.1). If there exists such a P , we say the LDI (5.1) is quadratically
stable and we call V a quadratic Lyapunov function.
61

62 Chapter 5 Analysis of LDIs: State Properties

Since
d
V (x) = xT A(t)T P + P A(t) x,
¡ ¢
dt
a necessary and sufficient condition for quadratic stability of system (5.1) is

P > 0, AT P + P A < 0 for all A ∈ Ω. (5.4)

Multiplying the second inequality in (5.4) on the left and right by P −1 , and defining
a new variable Q = P −1 , we may rewrite (5.4) as

Q > 0, QAT + AQ < 0 for all A ∈ Ω. (5.5)

This dual inequality is an equivalent condition for quadratic stability. We now show
that conditions for quadratic stability for LTI systems, PLDIs, and NLDIs can be
expressed in terms of LMIs.

• LTI systems: Condition (5.4) becomes

P > 0, AT P + P A < 0. (5.6)

Therefore, checking quadratic stability for an LTI system is an LMIP in the variable
P . This is precisely the (necessary and sufficient) Lyapunov stability criterion for LTI
systems. (In other words, a linear system is stable if and only if it is quadratically sta-
ble.) Alternatively, using (5.5), stability of LTI systems is equivalent to the existence
of Q satisfying the LMI

Q > 0, AQ + QAT < 0. (5.7)

Of course, each of these (very special) LMIPs can be solved analytically by solving a
Lyapunov equation (see §1.2 and §2.7).

• Polytopic LDIs: Condition (5.4) is equivalent to

P > 0, ATi P + P Ai < 0, i = 1, . . . , L. (5.8)

Thus, determining quadratic stability for PLDIs is an LMIP in the variable P . The
dual condition (5.5) is equivalent to the LMI in the variable Q

Q > 0, QATi + Ai Q < 0, i = 1, . . . , L. (5.9)

• Norm-bound LDIs: Condition (5.4) is equivalent to P > 0 and
" #T " #" #
ξ AT P + P A P B p ξ
<0 (5.10)
π BpT P 0 π

for all nonzero ξ satisfying
" #T  T T
" #
ξ −C q C q −C q D qp ξ
  ≤ 0. (5.11)
π −DqpT
Cq I − DqpT
Dqp π

In order to apply the S-procedure we show that the set

A = {(ξ, π) | ξ 6= 0, (5.11) }
c 1994 by the Society for Industrial and Applied Mathematics.
Copyright °

11) } = ∅. (P Bp + λCqT Dqp )T T −λ(I − Dqp Dqp ) Thus. • Diagonal Norm-bound LDIs: For the DNLDI (5. (5.11). π) | (ξ. . π) 6= 0.1 Quadratic Stability 63 equals the set B = {(ξ. µ = 1/λ. With the new variables Q = P −1 . we may.i π) . The condition dV (x)/dt < 0 for all nonzero trajectories is equivalent to ξ T (AT P + P A)ξ + 2ξ T P Bp π < 0 for all nonzero (ξ. Q > 0.3). we can obtain only a sufficient condition for quadratic stability.   AT P + P A + λCqT Cq P Bp + λCqT Dqp (5. .2) is zero.2) is equivalent to the existence of P and λ satisfying P > 0. then condition (5. π) satisfying (5.13) (P̃ Bp + CqT Dqp )T T −(I − Dqp Dqp ) an LMI in the variable P̃ . Since LMI (5.10) being satisfied for any nonzero (ξ. Thus.i ξ + Dqp. i = 1. by defining P̃ = P/λ. In the remainder of this chapter we will assume that Dqp in (5. π) | ξ = 0.i π) (Cq.5. . (This argument recurs throughout this chapter and will not be repeated. π 6= 0. since I −D qp Dqp > 0. (5. we find that quadratic stability of (5. Therefore.11) cannot hold without having ξ 6= 0. the condition that dV (x)/dt < 0 for all nonzero trajectories is equivalent to (5. . λ ≥ 0.   < 0. obtain an equivalent condition   AT P̃ + P̃ A + CqT Cq P̃ Bp + CqT Dqp P̃ > 0.) Using the S-procedure. .14) " # AQ + QAT + µBp BpT T µBp Dqp + QCqT T < 0. (5.2) is well-posed and quadratically stable if and only if the LMIP (5. This electronic version is for personal use and may not be duplicated or distributed. quadratic stability of NLDIs is also equivalent to the existence of µ and Q satisfying the LMI µ ≥ 0. q = Cq x + Dqp p is less than one. π) satisfying T πi2 ≤ (Cq. (µBp Dqp + QCqT )T T −µ(I − Dqp Dqp ) Remark: Note that our assumption of well-posedness is in fact incorporated in T the LMI (5. Therefore the NLDI (5. quadratic stability of the NLDI has the frequency- domain interpretation that the H∞ norm of the LTI system ẋ = Ax + Bp p.12) since it implies I − Dqp Dqp > 0. But this is immediate: T If π 6= 0. (5. determining quadratic stability of an NLDI is an LMIP. It suffices to show that {(ξ.12) implies that λ > 0. the reader should bear in mind that all the following results hold for nonzero D qp as well.11) } .i ξ + Dqp. L.12) has a solution.12)   < 0.

. Checking this sufficient condition for quadratic stability is an LMIP. (See the Notes and References for more details on the connection between the S-procedure and scaling.16) is feasible. . Bp .2) is well-posed and quadratically stable if the LMI (5. Cq ) is minimal.i π) − πi2 < 0 i=1 for all nonzero (ξ. Equivalently. Remark: An LTI system is stable if and only if it is quadratically stable. . we will often restrict our attention to NLDIs. Note the similarity between the corresponding LMIs for the NLDI and the DNLDI. Denoting by H the transfer matrix H(s) = Dqp + Cq (sI − A)−1 Bp .3) is quadratically stable. Note that from (5. for some diagonal. With Λ = diag(λ. we see that this condition is implied by the existence of nonnegative λ1 . . this is equivalent to " # AT P + P A + CqT ΛCq P Bp + CqT ΛDqp < 0. and pay only occasional attention to DNLDIs. Remark: Quadratic stability of a DNLDI via the S-procedure has a simple frequency-domain interpretation. λnq such that ξ T (AT P + P A)ξ + 2ξ T P Bp π L ³ ´ T X + λi (Cq. In the remainder of the book. µnq ) > 0 satisfying the LMI " # AQ + QAT + Bp M BpT QCqT + Bp M Dqp T < 0. however. Using the S-procedure. π).i ξ + Dqp. . and assuming (A. if there exist P > 0 and diagonal Λ ≥ 0 satisfying (5. see the Notes and References.i and Dqp. . c 1994 by the Society for Industrial and Applied Mathematics.15) BpT P + DqpT ΛCq T Dqp ΛDqp − Λ Therefore.16) Cq Q + Dqp M BpT T −M + Dqp M Dqp another LMIP.64 Chapter 5 Analysis of LDIs: State Properties where we have used Cq.15) or (5.15). The bottom right block is precisely the sufficient condition for well-posedness mentioned on page 57. .) Remark: Here too our assumption of well-posedness is in fact incorporated in the LMI (5. (5. . M = diag(µ1 . λnq ). (5. Copyright ° . stability does not imply quadratic stability (see the Notes and References at the end of this chapter). kΛ1/2 HΛ−1/2 k∞ < 1.i to denote the ith rows of Cq and Dqp respec- tively. positive-definite matrix Λ. then the DNLDI (5.i π) (Cq. Therefore the DNLDI (5. quadratic stability is implied by the existence of Q > 0. . the only difference is that the diagonal matrix appearing in the LMI associated with the DNLDI is fixed as the scaled identity matrix in the LMI associated with the NLDI. It is true that an LDI is stable if and only if there exists a convex Lyapunov function that proves it. For more general LDIs. .16).15).15) or (5. quadratic stabil- ity is equivalent to the fact that. Λ−1/2 can then be interpreted as a scaling. . we must have Λ > 0. . we leave to the reader the task of generalizing all the results for NLDIs to DNLDIs. In general it is straightforward to derive the corresponding (sufficient) condition for DNLDIs from the result for NLDIs.i ξ + Dqp.

Consider the change of coordinates x = T x̄. dkx̄k/dt < 0? It is easy to show that this is true if and only if there exists a quadratic Lyapunov function V (ξ) = ξ T P ξ for the LDI.1. PLDIs and NLDIs). Minimizing the condition number of T subject to the requirement that dkx̄k/dt < 0 for all nonzero trajectories is then equivalent to minimizing the condition number of P subject to d(xT P x)/dt < 0. where det T 6= 0.1. −1 (Any T with T T T = Popt where Popt is an optimal solution. . and has the small- est possible condition number κ(T ). AL } . we have taken advantage of the homogeneity of the LMI (5. • Polytopic LDIs: For the PLDI ẋ = (A0 + A(t))x. With this interpretation. A(t) ∈ α Co {A1 .12). We ask the question: Does there exist T such that in the new coordinate system. I ≤ P ≤ ηI • Norm-bound LDIs: The EVP in the variables P . . η and λ is minimize η subject to (5. i. .6).1) is described by the matrix Ā(t) = T −1 A(t)T .. . the change of coordinates with smallest condition number is obtained by solving the following EVPs: • LTI systems: The EVP in the variables P and η is minimize η subject to (5.8). we define the quadratic stability margin as the largest nonnegative α for which it is quadratically stable.5. (5.) • Polytopic LDIs: The EVP in the variables P and η is minimize η subject to (5. all (nonzero) trajectories of the LDI are always decreasing in norm. This turns out to be an EVP for LTI systems.1 Quadratic Stability 65 5.2 Quadratic stability margins Quadratic stability margins give a measure of how much the set Ω can be expanded about some center with the LDI remaining quadratically stable. is optimal for the original problem. . let P = T −T T −1 .1 Coordinate transformations We can give another interpretation of quadratic stability in terms of state-space coor- dinate transformations. This quantity is computed by solving the following GEVP in P This electronic version is for personal use and may not be duplicated or distributed.6) in the variable P . In the new coordinate system.e. for example. it is natural to seek a coordinate transformation matrix T that makes all nonzero trajectories decreasing in norm. I ≤ P ≤ ηI 5. in which case we can take any T with T T T = P −1 . I ≤ P ≤ ηI (In this formulation. To see this. T = P −1/2 (these T ’s are all related by right orthogonal matrix multi- plication). polytopic and norm-bound LDIs.) For each of our systems (LTI systems. so that κ(T )2 = κ(P ).

5. If dV (x)/dt ≤ −2αV (x) for all trajectories.17) c 1994 by the Society for Industrial and Applied Mathematics. and therefore the decay rate of the LDI (5. so that kx(t)k ≤ e−αt κ(P )1/2 kx(0)k for all trajectories. β ≥ 0. the largest lower bound on the decay rate that we can find using a quadratic Lyapunov function can be found by solving the following GEVP in P and α: maximize α (5. L • Norm-bound LDIs: The quadratic stability margin of the system ẋ = Ax + Bp p. λ and β = α2 : maximize β " # AT P + P A + βλCqT Cq P Bp subject to P > 0. then V (x(t)) ≤ V (x(0))e−2αt .1) is at least α. • LTI systems: The condition that dV (x)/dt ≤ −2αV (x) for all trajectories is equivalent to AT P + P A + 2αP ≤ 0. <0 BpT P −λI Defining P̃ = P/λ. we get an equivalent EVP in P̃ and β: maximize β " # AT P̃ + P̃ A + βCqT Cq P̃ Bp subject to P̃ > 0. β ≥ 0. pT p ≤ α2 xT CqT Cq x. α ≥ 0.1). (Stability corresponds to positive decay rate. AT0 P + P A0 + α ATi P + P Ai < 0. . <0 BpT P̃ −I Remark: The quadratic stability margin obtained by solving this EVP is just 1/kCq (sI − A)−1 Bp k∞ .66 Chapter 5 Analysis of LDIs: State Properties and α: maximize α subject to P > 0.17) Therefore. is defined as the largest α ≥ 0 for which the system is quadratically stable. (5. ¡ ¢ i = 1. Equivalently. . (5. Copyright ° . .1.18) subject to P > 0. the decay rate is the supremum of − log kx(t)k lim inf t→∞ t over all nonzero trajectories. and is computed by solving the GEVP in P . .) We can use the quadratic Lyapunov function V (ξ) = ξ T P ξ to establish a lower bound on the decay rate of the LDI (5.1) is defined to be the largest α such that lim eαt kx(t)k = 0 t→∞ holds for all trajectories x.3 Decay rate The decay rate (or largest Lyapunov exponent) of the LDI (5.

18) is the decay rate of the LTI system (which is the stability degree of A.1 Quadratic Stability 67 This lower bound is sharp.e.. Cq Q −µI This electronic version is for personal use and may not be duplicated or distributed.23) " # AQ + QAT + µBp BpT + 2αQ QCqT ≤ 0.22) as " # T (A + αI) P + P (A + αI) + λCqT Cq P Bp ≤ 0.19) • Polytopic LDIs: The condition that dV (x)/dt ≤ −2αV (x) for all trajectories is equivalent to the LMI ATi P + P Ai + 2αP ≤ 0.e.2) is at least α is that there exists Q and µ satisfying the LMI Q > 0. An alternate necessary and sufficient condition for the existence of a quadratic Lyapunov function proving that the decay rate of (5. BpT P −λI and noting that the H∞ norm of the system (A + αI. L. As an alternate condition. the largest lower bound on the decay rate provable via quadratic Lyapunov functions is obtained by maximizing α subject to (5. . i. . i = 1. the condition that dV (x)/dt ≤ −2αV (x) for all trajectories is equivalent to the existence of λ ≥ 0 such that " # AT P + P A + λCqT Cq + 2αP P Bp ≤ 0. Ai Q + QATi + 2αQ ≤ 0. we obtain the largest lower bound on the decay rate of (5. (5. the optimal value of the GEVP (5.20) and P > 0.22) BpT P −λI Therefore.α < 1. . i. negative of the maximum real part of the eigenvalues of A). i = 1.20) Therefore. L. Then the optimal value of the GEVP is equal to the largest α such that kHk∞.α = sup { kH(s)k | Re s > −α} . This can be seen by rewriting (5. .21) • Norm-bound LDIs: Applying the S-procedure. AQ + QAT + 2αQ ≤ 0.. (5. This is a GEVP in P and α. (5. (5. λ ≥ 0 and (5. As an alternate condition.2) by maximizing α over the variables α.5. The lower bound has a simple frequency-domain interpretation: Define the α- shifted H∞ norm of the system by ∆ kHk∞. . subject to P > 0. (5. there exists a quadratic Lyapunov function proving that the decay rate is at least α if and only if there exists Q satisfying the LMI Q > 0. µ ≥ 0. .22).α . . . a GEVP. there exists a quadratic Lyapunov function proving that the decay rate is at least α if and only if there exists Q satisfying the LMI Q > 0. . P and λ. Bp . Cq ) equals kHk∞.

2 Invariant Ellipsoids Quadratic stability can also be interpreted in terms of invariant ellipsoids. AT P + P A ≤ 0. . λ ≥ 0. invariance of the ellipsoid E is equivalent to the existence of λ such that P > 0. For Q > 0. ATi P + P Ai ≤ 0. AT P + P A ≤ 0. . (5. PLDIs and NLDIs. c 1994 by the Society for Industrial and Applied Mathematics. its inverse. (P Bp + λCqT Dqp )T T −λ(I − Dqp Dqp ) Invariance of the ellipsoid E is equivalent to the existence of µ such that µ ≥ 0. invariance of E is charac- terized by LMIs in Q or P . AQ + P AT ≤ 0. for LTI systems. .28) • Norm-bound LDIs: Applying the S-procedure.29)   ≤ 0. L. • LTI systems: The corresponding LMI in P is P > 0. © ¯ ª The ellipsoid E is said to be invariant for the LDI (5. Remark: Condition (5. L. or equivalently. Q > 0.25) and the LMI in Q is Q > 0. for all A ∈ Ω. .1) if for every trajectory x of the LDI. (5. the condition that E be an invariant ellipsoid can be expressed in each case as an LMI in Q or its inverse P .26) • Polytopic LDIs: The corresponding LMI in P is P > 0. i = 1. let E denote the ellipsoid centered at the origin E = ξ ∈ Rn ¯ ξ T Q−1 ξ ≤ 1 . Copyright ° . . . . It is easily shown that this is the case if and only if Q satisfies QAT + AQ ≤ 0. (5. Thus. and (5. (5. (5.27) and the LMI in Q is Q > 0.24) is just the nonstrict version of condition (5. (µBp Dqp + QCqT )T T −µ(I − Dqp Dqp ) In summary. x(t0 ) ∈ E implies x(t) ∈ E for all t ≥ t0 . QATi + Ai Q ≤ 0.4).   AT P + P A + λCqT Cq P Bp + λCqT Dqp (5. . i = 1.30) " # AQ + QAT + µBp BpT µBp Dqp T + QCqT T ≤ 0. for all A ∈ Ω.24) where P = Q−1 .68 Chapter 5 Analysis of LDIs: State Properties 5.

5.2.1 Smallest invariant ellipsoid containing a polytope

Consider a polytope described by its vertices, P = Co{v_1, ..., v_p}. The ellipsoid E contains the polytope P if and only if

    v_j^T Q^{-1} v_j ≤ 1,   j = 1, ..., p.

This condition may be expressed as an LMI in Q,

    [ 1     v_j^T ]
    [ v_j   Q     ]  ≥ 0,   j = 1, ..., p,                               (5.31)

or as an LMI in P = Q^{-1} as

    v_j^T P v_j ≤ 1,   j = 1, ..., p.                                    (5.32)

Minimum volume

The volume of E is, up to a constant that depends on n, √(det Q). We can minimize this volume by solving appropriate CPs.

• LTI systems: The CP in P is

    minimize    log det P^{-1}
    subject to  (5.25), (5.32).

• Polytopic LDIs: The CP in P is

    minimize    log det P^{-1}                                           (5.33)
    subject to  (5.27), (5.32).

• Norm-bound LDIs: The CP in P and λ is

    minimize    log det P^{-1}
    subject to  (5.29), (5.32).

Minimum diameter

The diameter of the ellipsoid E is 2√(λ_max(Q)). We can minimize this quantity by solving appropriate EVPs.

• LTI systems: The EVP in the variables Q and λ is

    minimize    λ
    subject to  (5.26), (5.31), Q ≤ λI.

• Polytopic LDIs: The EVP in Q and λ is

    minimize    λ                                                        (5.34)
    subject to  (5.28), (5.31), Q ≤ λI.

• Norm-bound LDIs: The EVP in Q, λ and µ is

    minimize    λ
    subject to  (5.30), (5.31), Q ≤ λI.
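A minimal sketch of the minimum-volume CP for a polytopic LDI follows (not part of the original text); CVXPY, the SCS solver, and the numerical data are assumptions made for the example, and minimizing log det P^{-1} is written as maximizing log det P.

import numpy as np
import cvxpy as cp

# example PLDI vertices and polytope vertices (assumed data)
A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])
A2 = np.array([[-2.0, 0.5], [-1.0, -1.0]])
verts = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, -1.0])]
n = 2

P = cp.Variable((n, n), symmetric=True)
cons = [P >> 1e-6 * np.eye(n)]
for Ai in (A1, A2):
    cons.append(Ai.T @ P + P @ Ai << 0)          # LMI (5.27): invariance
for v in verts:
    cons.append(cp.quad_form(v, P) <= 1)         # (5.32): E contains each vertex
prob = cp.Problem(cp.Maximize(cp.log_det(P)), cons)   # = minimize log det P^{-1}
prob.solve(solver=cp.SCS)
print(P.value)                                   # E = {x : x' P x <= 1}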

Remark: These results have the following use. The polytope P represents our knowledge of the state at time t_0, i.e., x(t_0) ∈ P (this knowledge may reflect measurements or prior assumptions). An invariant ellipsoid E containing P then gives a bound on the state for t ≥ t_0, i.e., we can guarantee that x(t) ∈ E for all t ≥ t_0.

5.2.2 Largest invariant ellipsoid contained in a polytope

We now consider a polytope described by linear inequalities:

    P = { ξ ∈ R^n | a_k^T ξ ≤ 1, k = 1, ..., q }.

The ellipsoid E is contained in the polytope P if and only if

    max { a_k^T ξ | ξ ∈ E } ≤ 1,   k = 1, ..., q.

This is equivalent to

    a_k^T Q a_k ≤ 1,   k = 1, ..., q,                                    (5.35)

which is a set of linear inequalities in Q.

The maximum volume of invariant ellipsoids contained in P can be found by solving CPs:

• LTI systems: For LTI systems, the CP in the variable Q is

    minimize    log det Q^{-1}
    subject to  (5.26), (5.35).

(Since the volume of E is, up to a constant, √(det Q), minimizing log det Q^{-1} will maximize the volume of E.)

• Polytopic LDIs: For PLDIs, the CP in the variable Q is

    minimize    log det Q^{-1}
    subject to  (5.28), (5.35).

• Norm-bound LDIs: For NLDIs, the CP in the variables Q and µ is

    minimize    log det Q^{-1}
    subject to  (5.30), (5.35).

We also can find the maximum minor diameter (that is, the length of the minor axis) of invariant ellipsoids contained in P by solving EVPs:

• LTI systems: For LTI systems, the EVP in the variables Q and λ is

    maximize    λ
    subject to  (5.26), (5.35), λI ≤ Q.

• Polytopic LDIs: For PLDIs, the EVP in the variables Q and λ is

    maximize    λ
    subject to  (5.28), (5.35), λI ≤ Q.
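A companion sketch (again not part of the original text) computes the maximum-volume invariant ellipsoid contained in the unit box for the same example PLDI; CVXPY, the SCS solver and the data are assumptions.

import numpy as np
import cvxpy as cp

# example PLDI vertices and polytope faces a_k' x <= 1 (assumed data: the unit box)
A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])
A2 = np.array([[-2.0, 0.5], [-1.0, -1.0]])
faces = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
         np.array([0.0, 1.0]), np.array([0.0, -1.0])]
n = 2

Q = cp.Variable((n, n), symmetric=True)
cons = [Q >> 1e-6 * np.eye(n)]
for Ai in (A1, A2):
    cons.append(Ai @ Q + Q @ Ai.T << 0)          # LMI (5.28): invariance
for a in faces:
    cons.append(cp.quad_form(a, Q) <= 1)         # (5.35): E contained in P
prob = cp.Problem(cp.Maximize(cp.log_det(Q)), cons)   # maximize the volume of E
prob.solve(solver=cp.SCS)
print(Q.value)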

• Norm-bound LDIs: For NLDIs, the EVP in the variables Q, µ and λ is

    maximize    λ
    subject to  (5.30), (5.35), λI ≤ Q.

Remark: These results can be used as follows. The polytope P represents the allowable (or safe) operating region for the system. The ellipsoids found above can be interpreted as regions of safe initial conditions, i.e., initial conditions for which we can guarantee that the state always remains in the safe operating region.

5.2.3 Bound on return time

The return time of a stable LDI for the polytope P is defined as the smallest T such that if x(0) ∈ P, then x(t) ∈ P for all t ≥ T. Upper bounds on the return time can be found by solving EVPs.

• LTI systems: Let us consider a positive decay rate α. If Q > 0 satisfies QA^T + AQ + 2αQ ≤ 0, then E is an invariant ellipsoid, and moreover x(0) ∈ E implies that x(t) ∈ e^{−αt}E for all t ≥ 0. Therefore if T is such that e^{−αT}E ⊆ P ⊆ E, we can conclude that if x(0) ∈ P, then x(t) ∈ P for all t ≥ T, so that T is an upper bound on the return time. If we use both representations of the polytope,

    P = { x ∈ R^n | a_k^T x ≤ 1, k = 1, ..., q } = Co{v_1, ..., v_p},

the constraint e^{−αT}E ⊆ P is equivalent to the LMI in Q and γ

    a_k^T Q a_k ≤ γ,   k = 1, ..., q,                                    (5.36)

where γ = e^{2αT}, and the problem of finding the smallest such γ, and therefore the smallest T (for a fixed α), is an EVP in the variables γ and Q:

    minimize    γ
    subject to  (5.19), (5.31), (5.36).

• Polytopic LDIs: For PLDIs, the smallest bound on the return time provable via invariant ellipsoids, with a given decay rate α, is obtained by solving the EVP in the variables Q and γ:

    minimize    γ
    subject to  (5.21), (5.31), (5.36).

• Norm-bound LDIs: For NLDIs, the smallest bound on the return time provable via invariant ellipsoids, with a given decay rate α, is obtained by solving the EVP in the variables Q, γ and µ:

    minimize    γ
    subject to  (5.23), (5.31), (5.36).
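For a fixed decay rate α the return-time bound is an EVP; the sketch below, which is not part of the original text, assembles it for the example PLDI with the polytope described both by faces and by vertices. CVXPY, the SCS solver, the value of α and the data are assumptions, and the resulting bound is T = (ln γ)/(2α).

import numpy as np
import cvxpy as cp

A1 = np.array([[-1.0, 2.0], [0.0, -3.0]])        # example PLDI vertices (assumed)
A2 = np.array([[-2.0, 0.5], [-1.0, -1.0]])
faces = [np.array([1.0, 0.0]), np.array([-1.0, 0.0]),
         np.array([0.0, 1.0]), np.array([0.0, -1.0])]         # a_k' x <= 1
verts = [np.array([[1.0], [1.0]]), np.array([[1.0], [-1.0]]),
         np.array([[-1.0], [1.0]]), np.array([[-1.0], [-1.0]])]
alpha = 0.2                                      # fixed decay rate (assumed)
n = 2

Q = cp.Variable((n, n), symmetric=True)
gam = cp.Variable()
cons = [Q >> 1e-6 * np.eye(n), gam >= 1.0]
for Ai in (A1, A2):
    cons.append(Ai @ Q + Q @ Ai.T + 2 * alpha * Q << 0)       # LMI (5.21)
for v in verts:                                               # (5.31): P inside E
    cons.append(cp.bmat([[np.ones((1, 1)), v.T], [v, Q]]) >> 0)
for a in faces:                                               # (5.36): e^{-aT} E inside P
    cons.append(cp.quad_form(a, Q) <= gam)
prob = cp.Problem(cp.Minimize(gam), cons)
prob.solve(solver=cp.SCS)
print("return time bound T ~", np.log(prob.value) / (2 * alpha))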

in [Pya68].37) where p and q are scalar. The advantage of the input-output approach is that the systems are not required to have finite-dimensional state [Des65. the newer results can be regarded as extensions of the works of Yakubovich and Popov to feedback synthesis and to the case of “structured perturba- tions” [RCDP93. See also [Bar70b]. BY89. BH89]. KR88. over 200 papers had been written about the system (5. In the original setting of Lur’e and Postnikov. where the discrete-time counterpart of this c 1994 by the Society for Industrial and Applied Mathematics. there is no fundamental difference between these and the older results of Yakubovich and Popov.13). which is further refined in [Mei91]. ZF67]. Other major contributors include Zames [Zam66a. uncertain systems. This approach was sparked by Popov [Pop62]. Wil71a]. In [KPZ90. Wil69b. t) can be any sector [−1. For other results and computational procedures for PLDIs see [PS82. ZK87. EZKN90. Yak77. PD90]. the ones most relevant to this book are undoubtedly from Yakubovich. as stated in [PD90].) Pyatnitskii. and result in infinite-dimensional convex optimization problems. PS86. implemented in the robustness analysis soft- ware packages µ-tools and Km -tools [BDG+ 91. is an NLDI.37). PZP+ 92]. where Horisberger and Belanger note that the problem of quadratic stability for a PLDI is convex (the authors write down the corresponding LMIs). DV75. p = φ(q. [ZK88. see for instance. San65a. BW65]. the article [KP87a]. q = cq x. Modern approaches from this viewpoint include Doyle’s µ-analysis [Doy82] and Safonov’s Km -analysis [Saf82. points out that by 1968. they have very strong connections with the input-output approach to study the stability of nonlinear. Yak66. t)q. (The family of systems of the form (5. Zam66b. SS81]. the authors remark that quadratic stability of NLDIs is equivalent to the H∞ condition (5. KB93.) These criteria are most useful when dealing with experimental data arising from frequency response measurements. they are described in detail in the book by Narendra and Taylor [NT73].8)) cannot be converted to a Riccati inequality or a frequency-domain criterion. various additional assumptions were made about φ. San65b] and Brockett [Bro65. the corresponding stability criteria are usually most easily expressed in the frequency domain. Popov [Pop73] and Willems [Wil71b. Kam83. See also [AS79. where φ(q. Copyright ° . Yak82]). One of the first occurrences of PLDIs is in [HB76]. See also the papers and the book by Willems [Wil69a. Wil74a] outline the relationship between the problem of absolute stability of automatic control and quadratic optimal control. While all the results presented in this chapter are based on Lyapunov stability techniques. the likely reason being that the LMI expressing quadratic stability of a general PLDI (which is just a number of simultaneous Lyapunov inequalities (5. Yak64. Log90]. who used it to study the Lur’e system. (5. which approximately solve these infinite-dimensional problems. CS92a]. KP87a. (Yakubovich calls this method the “method of matrix inequalities”. This problem came to be popularly known as the problem of “absolute stability in automatic control”. generating different special cases and solutions. SD83. The main idea of Yakubovich was to express the re- sulting LMIs as frequency-domain criteria for the transfer function G(s) = cq (sI − A)−1 bp .37). AGG94]. As far as we know. 1] nonlinearity. Quadratic stability for PLDIs This topic is much more recent than NLDIs. Among these. 
Yakubovich was the first to make systematic use of LMIs along with the S-procedure to prove stability of nonlinear control systems (see references [Yak62.72 Chapter 5 Analysis of LDIs: State Properties Notes and References Quadratic stability for NLDIs Lur’e and Postnikov [LP44] gave one of the earliest stability analyses for NLDIs: They considered the stability of the system ẋ = Ax + bp p. SD84]. Sandberg [San64. where the authors develop a subgradient method for proving quadratic stability of a PLDI. The case when p and q are vector-valued signals has been considered much more recently. Bro66. Yak67. Subsequently. However. φ was assumed to be a time-invariant nonlinearity. the paper [PS86].

however. Vector Lyapunov functions The technique of vector Lyapunov functions also yields search problems that can be expressed as LMIPs. for an LDI to be stable without being quadratically stable. Gu94]. −1 1 0 1 proves that the PLDI is stable.g. Q1 ≥ 0 and Q2 ≥ 0. See also the articles by Gu et al. It is possible. To show this. stability of an interconnected system. such that Q0 = A1 Q1 + Q1 AT1 + A2 Q2 + Q2 AT2 . p277]). where the discrete-time counterpart of this problem is considered. we use the S-procedure. " # " # 14 −1 0 0 (5.1 3 2. the problem is to find an appropriate positive linear combination that proves. [GCG93]. Q2 = 2 24 3 90 1 1 satisfy these duality conditions. A2 = . Stable LDIs that are not quadratically stable An LTI system is stable if and only if it is quadratically stable. this is just Lyapunov’s stability theorem for linear systems (see e. " # " # −100 0 8 −9 A1 = . e. not all zero. GL93b. Here we already have quadratic Lyapunov functions for a number of subsystems. See for example [Gha94. see for example [PR91b] or [MP86].38) to prove the stability of This electronic version is for personal use and may not be duplicated or distributed. let us point out that quadratic Lyapunov functions have been used to determine estimates of regions of stability for general nonlinear systems.38) P1 = . [Lya47. [GZL90. Finally. One example is given in [CFA90]. However. Here is a PLDI example: ẋ = A(t)x. A(t) ∈ Co {A1 . . this PLDI is not quadratically stable if there exist Q0 ≥ 0. P2 = .g. and the survey article by Barmish et al. Necessary and sufficient conditions for LDI stability Much attention has also focused on necessary and sufficient conditions for stability of LDIs (as opposed to quadratic stability).7 1 Q0 = .. A2 } . xT P2 x . the piecewise quadratic Lyapunov function © ª V (x) = max xT P1 x.2 2 0. Earlier references on this topic include [Pya70b] which connects the problem of stability of an LDI to an optimal control problem. BT79.. [BK93].8). Garofalo et al.Notes and References 73 problem is considered. 0 −1 120 −18 From (2. and [Pya70a. It can be verified that the matrices " # " # " # 5. BT80]. A necessary and sufficient condition for the Lyapunov function V defined in (5. Q1 = . BD85].

Brockett uses the S-procedure to prove that a certain piecewise quadratic form is a Lyapunov function and finds LMIs similar to (5. the difference between any two trajectories of (5. Therefore. (See [BC85]. w. . t ≥ 0 } . (5. t). λ3 = 1. (5. computing such Lyapunov functions is computationally intensive. For other examples of stable LDIs that are not quadratically stable. . Rap93. nq for any diagonal nonsingular T . w. i. DK71. In general. x(t) − x̃(t) converges to zero as t → ∞.3) is also a trajectory (and vice versa) of the DNLDI ẋ = Ax + Bp T −1 p. if not intractable. see also [Vid78]. PR91b. if DNLDI (5. for fixed input w.74 Chapter 5 Analysis of LDIs: State Properties the PLDI is the existence of four nonnegative numbers λ1 . Rap90. and also [Zub55. λ4 = 100 are such numbers. It can be verified that λ1 = 50. see Brockett [Bro65. x(0) = ξ.15) as a scaling matrix . λ2 = 0. Relation between S-procedure for DNLDIs and scaling We discuss the interpretation of the diagonal matrix Λ in (5. an LDI is stable if and only if there is a convex Lyapunov function that proves it. Nonlinear systems and fading memory Consider the nonlinear system ẋ = f (x. (5. w. then so is DNLDI (5. PR91a.41) pi = δi (t)qi ..39). T is referred to as a scaling matrix. that is. This implies the system has fading memory. VV85. Suppose the LDI ẋ ∈ Ωx is stable.) Fix an input w and let x and x̃ denote any two solutions of (5. Since the LDI is stable. . all trajectories converge to zero as t → ∞. MV63. One such Lyapunov function is V (ξ) = sup {kx(t)k | x satisfies (5.e.40) with ∂f ∈ Ω for all x. Bro77. AT2 P1 + P1 A2 − λ2 (P2 − P1 ) < 0.3.41) is stable for some diagonal nonsingular T . Using the mean-value theorem from §4.39) AT1 P2 + P2 A1 + λ3 (P2 − P1 ) < 0. we have d (x − x̃) = f (x.1. In [Bro77]. t ∂x where Ω is convex. Bro70] and Pyatnitskii [Pya71]. Copyright ° . λ2 . c 1994 by the Society for Industrial and Applied Mathematics.40). i = 1. λ3 . |δi (t)| ≤ 1. BT80]. In other words. AT2 P2 + P2 A2 + λ4 (P2 − P1 ) < 0. . MP89. Finally. q = T Cq x + T Dqp T −1 p. See Brayton and Tong [BT79. Every trajectory of the DNLDI (5. t) − f (x̃. the system “forgets” its initial condition.3). w.1). Mol87]. λ4 such that AT1 P1 + P1 A1 − λ1 (P2 − P1 ) < 0. t) = A(t)(x − x̃) dt for some A(t) ∈ Ω.40) converges to zero.

3).e. From the results of §5. a sufficient condition for quadratic stability is given by the LMI (5. checking quadratic stability for this PLDI requires the solution of L simultaneous Lyapunov inequalities in P > 0 (LMI (5. . PS83]. This electronic version is for personal use and may not be duplicated or distributed. we observe that while the quadratic stability condition (5. the size of the LMI grows exponentially with n q . we require. Saeki and Araki [SA88] also conclude convexity of the scaling problem and use the properties of M-matrices to obtain a solution. (P Bp T −1 + CqT T 2 Dqp T −1 )T T −(I − T −1 Dqp T 2 Dqp T −1 ) An obvious congruence.8) for LDI (5. Representing a DNLDI as a PLDI: Implications for quadratic stability We saw in Chapter 4 that a DNLDI can be represented as a PLDI.41) as an NLDI. followed by the substitution T 2 = Λ. If the system is represented as a DNLDI.3) obtained using the S-procedure. obtained by representing it as a PLDI.1.13) for quadratic stability. in the variables P > 0 and Λ > 0. but this LMI yields only a sufficient condition for quadratic stability of LDI (5. This relation between scaling and the diagonal matrices arising from the S-procedure is described in Boyd and Yang [BY89].15). Comparing the two conditions for quadratic stability. for some diagonal nonsingular T .15). the size of ∆. On the other hand. BPT P + Dqp T ΛCq T Dqp ΛDq − Λ This is precisely LMI (5. and Kamenetskii [Kam83] reduce the problem of numerical search for appropriate scalings and the associated Lyapunov functions to a convex optimization problem. yields the LMI condition in P > 0 and diagonal Λ > 0: " # AT P + P A + CqT ΛCq P Bp + Cq ΛDqp < 0. " # AT P + P A + CqT T 2 Cq P Bp T −1 + CqT T 2 Dqp T −1 P > 0. the condition for stability of the DNLDI (5. not conservative).Notes and References 75 Treating DNLDI (5.3). Pyatnitskii and Skorodinskii [PS82.8))..15) grows polynomially with n q . and applying the condition (5. Discussion of this issue can be found in Kamenetskii [Kam83] and Pyatnitskii and Skorodinskii [PS82]. where the number L of vertices can be as large as 2nq . is both necessary and sufficient (i. < 0. the size of the LMI (5.


Chapter 6

Analysis of LDIs: Input/Output Properties

6.1 Input-to-State Properties

We first consider input-to-state properties of the LDI

    ẋ = A(t)x + B_w(t)w,   [A(t)  B_w(t)] ∈ Ω,                          (6.1)

where w is an exogenous input signal.

• LTI systems: The system has the form ẋ = Ax + B_w w.

• Polytopic LDIs: Here the description is

    ẋ = A(t)x + B_w(t)w,
    [A(t)  B_w(t)] ∈ Co{ [A_1  B_{w,1}], ..., [A_L  B_{w,L}] }.

• Norm-bound LDIs: NLDIs have the form

    ẋ = Ax + B_p p + B_w w,   q = C_q x,   p = ∆(t)q,   ‖∆(t)‖ ≤ 1,

or equivalently,

    ẋ = Ax + B_p p + B_w w,   q = C_q x,   p^T p ≤ q^T q.

• Diagonal Norm-bound LDIs: DNLDIs have the form

    ẋ = Ax + B_p p + B_w w,   q = C_q x,
    p_i = δ_i(t) q_i,   |δ_i(t)| ≤ 1,   i = 1, ..., n_q,

or equivalently,

    ẋ = Ax + B_p p + B_w w,   q = C_q x,   |p_i| ≤ |q_i|,   i = 1, ..., n_q.

6.1.1 Reachable sets with unit-energy inputs

Let R_ue denote the set of reachable states with unit-energy inputs for the LDI (6.1),

    R_ue = { x(T) |  x, w satisfy (6.1),  x(0) = 0,  ∫_0^T w^T w dt ≤ 1,  T ≥ 0 }.

c 1994 by the Society for Industrial and Applied Mathematics. we get Z T V (x(T )) − V (x(0)) ≤ wT w dt.3) is equivalent to the LMI in P " # AT P + P A P B w P > 0. . 0 Noting that V (x(0)) = 0 since x(0) = 0.i P > 0.5). Copyright ° . L. condition (6.3) holds if and only if " # A(t)T P + P A(t) P Bw (t) P > 0 and ≤ 0 for all t ≥ 0. the reachable set is the ellipsoid {ξ | ξ T Wc−1 ξ ≤ 1}.2) where P > 0. we get Z T V (x(T )) ≤ wT w dt ≤ 1. . 0 Since Wc satisfies the Lyapunov equation AWc + Wc AT + Bw Bw T = 0. . Suppose that the function V (ξ) = ξ T P ξ. Thus the ellipsoidal bound is sharp for LTI systems.7) Bw (t)T P −I Inequality (6. where Wc is the controllability Gramian.5) Remark: For a controllable LTI system.4) Bw P −I which can also be expressed as an LMI in Q = P −1 Q > 0. .3) Integrating both sides from 0 to T . the ellipsoid E contains the reachable set Rue . (6.i P −I Thus the ellipsoid E contains Rue if the LMI (6. satisfies dV (x)/dt ≤ w T w for all x. (6. (6.1).6) we see that Q = Wc satisfies (6. • LTI systems: Condition (6. w satisfying (6. (6. (6. AQ + QAT + Bw Bw T ≤ 0. T ≤ 0. © ª (6.7) holds if and only if " # ATi P + P Ai P Bw.78 Chapter 6 Analysis of LDIs: Input/Output Properties We will bound Rue by ellipsoids of the form E = ξ | ξT P ξ ≤ 1 . 0 RT for every T ≥ 0. and every input w such that 0 wT w dt ≤ 1. defined by Z ∞ ∆ T Wc = eAt Bw Bw T A e t dt.8) Bw.8) holds. i = 1. In other words. T ≤ 0. (6. • Polytopic LDIs: For PLDIs. with P > 0.

i Bw. Note that this condition implies that E is invariant. ≤ 0. This electronic version is for personal use and may not be duplicated or distributed. i = 1. Using the S-procedure. BpT P −λI 0  ≤ 0.1 Input-to-State Properties 79 Alternatively.11) Cq Q −µI • Diagonal Norm-bound LDIs: Condition (6. condition (6. The results described above give us a set of ellipsoidal outer approximations of Rue . µ ≥ 0.9) where Q = P −1 . We can optimize over this set in several ways. . λnq ) satisfying   AT P + P A + CqT ΛCq P Bp P Bw P > 0. . QATi + Ai Q + Bw. . It follows from the S-procedure that this condition holds if there exist P and Λ = diag(λ. Λ ≥ 0.8) as Q > 0. • Norm-bound LDIs: Condition (6.i ξ) . ξ T (AT P + P A)ξ + 2ξ T P (Bp π + Bw ω) − ω T ω ≤ 0. ≤ 0.12) Cq Q −M For the remainder of this section we leave the extension to DNLDIs to the reader. M ≥ 0. . BpT P −Λ 0  ≤ 0.3) holds if and only if P > 0 and for all ξ and π that satisfy T πi2 ≤ (Cq. . . this is equivalent to the existence of P and λ satisfying  T  A P + P A + λCqT Cq P Bp P Bw P > 0. .i T ≤ 0.i ξ) (Cq. (6. defining Q = P −1 . . we can write (6. . defining Q = P −1 . .3) holds if there exist Q and a diagonal matrix M such that " # AQ + QAT + Bw Bw T + Bp M BpT QCqT Q > 0. we have for all ω.3) holds if and only if P > 0 and ξ T (AT P + P A)ξ + 2ξ T P (Bp π + Bw ω) − ω T ω ≤ 0 hold for every ω and for every ξ and π satisfying π T π − ξ T CqT Cq ξ ≤ 0. L. nq . condition (6. . (6.10)    T Bw P 0 −I Equivalently. (6. λ ≥ 0.    T Bw P 0 −I Equivalently. (6. i = 1. .3) holds if and only if there exist Q and µ satisfying " # AQ + QAT + Bw Bw T + µBp BpT QCqT Q > 0.6. .

6). This is an LMIP. if (A. and this CP will be unbounded below.8). by replacing the objective function log det P −1 in the CPs by the objective function λmax (P −1 ). For LTI systems.3) by minimizing log det P −1 over the variable P subject to (6. which corresponds to P = Wc−1 . This is again a CP. that is.8). x0 does not belong to the reachable set if there exists P satisfying xT0 P x0 > 1 and (6. satisfying (6. the minimum volume ellipsoid of the form (6. Bw ) is controllable.2). the minimum volume ellipsoid of the form (6. We easily see that min ξ T Q−1 ξ ¯ aT ξ > 1 = 1/(aT Qa). For PLDIs.4). Bw ) is uncontrollable. satisfying (6. we can immediately use the results of the previous subsection in order to c 1994 by the Society for Industrial and Applied Mathematics.4). where Wc is defined by the equation (6. Testing if the reachable set is contained in a polytope Since a polytope can be expressed as an intersection of half-spaces (these determine its “faces”).3). Bw ) is controllable.2) with P satisfying (6. we can minimize the volume over ellipsoids E with P satisfying (6.3). Testing if a point is outside the reachable set The point x0 lies outside the reachable set if there exists an ellipsoidal outer approx- imation of Rue that does not contain it. is obtained by minimizing log det P −1 over the variables P and λ subject to (6.6). This is a CP. then x0 does not belong to the reachable set if and only if xT0 Wc−1 x0 > 1.10). x0 lies outside the reachable set Rue if there exist P and λ satisfying xT0 P x0 > 1 and (6.9).80 Chapter 6 Analysis of LDIs: Input/Output Properties Smallest outer approximations • LTI systems: For LTI systems. This yields EVPs. the minimum value of ξ T Q−1 ξ over ξ satisfying aT ξ > 1 exceeds one. is found by minimizing log det P −1 subject to (6. • Norm-bound LDIs: For NLDIs. For PLDIs. another LMIP. Copyright ° . If (A. © ¯ ª Therefore. For NLDIs. then the CP finds the exact reachable set. the volume of the reachable set is zero. where Wc is defined by the equation (6. • Polytopic LDIs: For PLDIs. then H does not intersect the reachable set if and only if aT Wc a < 1. H does not intersect the reachable set if there exists Q satisfying aT Qa < 1 and (6. Of course. a sufficient condition for a point x0 to lie outside the reachable set Rue can be checked via an LMIP in the variable P : xT0 P x0 > 1 and (6.2).10). Testing if the reachable set lies outside a half-space The reachable set Rue lies outside the half-space H = ξ ∈ Rn ¯ aT ξ > 1 if and only © ¯ ª if an ellipsoid E containing Rue lies outside H. This is a CP.5). If (A. This is an LMIP.3). a sufficient condition for H not to intersect the reachable set Rue can be checked via an LMIP in the variable Q: aT Qa < 1 and (6. This is an LMIP. H does not intersect the reachable set Rue if there exists Q and µ satisfying aT Qa < 1 and (6.11). If (A. Bw ) is controllable. We can also minimize the diameter of ellipsoids of the form (6. for LTI systems. For NLDIs.
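A minimal sketch of the minimum-volume outer approximation for a PLDI, i.e., maximizing log det P subject to (6.8), follows; it is not part of the original text, and CVXPY, the SCS solver and the vertex data are assumptions made only for illustration.

import numpy as np
import cvxpy as cp

# example PLDI vertex data [A_i  Bw_i] (assumed)
A1, B1 = np.array([[-1.0, 2.0], [0.0, -3.0]]), np.array([[0.0], [1.0]])
A2, B2 = np.array([[-2.0, 0.5], [-1.0, -1.0]]), np.array([[0.5], [1.0]])
n, nw = 2, 1

P = cp.Variable((n, n), symmetric=True)
cons = [P >> 1e-6 * np.eye(n)]
for Ai, Bi in ((A1, B1), (A2, B2)):
    cons.append(cp.bmat([[Ai.T @ P + P @ Ai, P @ Bi],
                         [Bi.T @ P, -np.eye(nw)]]) << 0)      # LMI (6.8)
prob = cp.Problem(cp.Maximize(cp.log_det(P)), cons)           # smallest volume
prob.solve(solver=cp.SCS)
print(P.value)                           # Rue is contained in {x : x' P x <= 1}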

) We remark that we can use different ellipsoidal outer approximations to check different faces. . (Note we only obtain sufficient conditions.1). rnw ) such that P > 0. (For more details on this. we get Z T V (x(T )) ≤ wT Rw dt ≤ 1. To prove this. to get Z T V (x(T )) − V (x(0)) ≤ wT Rw dt. R ≥ 0. the reachable set is equal to the intersection of all such ellipsoids.1. we can write an equivalent LMI in Q and R: Q > 0. x(0) = 0. T ≥0  0 Suppose there is a positive-definite quadratic function V (ξ) = ξ T P ξ. (6. we consider the set of reachable states with inputs whose components have unit-energy.15) " # QAT + AQ Bw T ≤ 0. Tr R = 1. w satisfy (6. 0 Let us now consider condition (6. we consider the set  ¯  ¯ x. Tr R = 1.13) is equivalent to the LMI in P and R: P > 0. Bw −R Remark: It turns out that with LTI systems. condition (6.13) for the various LDIs: • LTI systems: For LTI systems. Bw P −R Therefore.1). we integrate both sides of the last inequality from 0 to T . .  ¯    ∆ ¯ Z T Ruce = x(T ) ¯ .   ¯ ¯ wi2 dt ≤ 1. 6.6. 0 Noting that V (x(0)) = 0 since x(0) = 0. Alternatively. . nw . R ≥ 0 and diagonal. and R = diag(r1 . then the ellipsoid E contains the reachable set Ruce . using the variable Q = P −1 . i = 1.2) contains the reachable set. . . see the Notes and References.13) V (x) ≤ w T Rw for all x and w satisfying (6. . . dt Then the ellipsoid E given by (6.1 Input-to-State Properties 81 check if the reachable set is contained in a polytope.) This electronic version is for personal use and may not be duplicated or distributed. .2 Reachable sets with componentwise unit-energy inputs We now turn to a variation of the previous problem. that is. if there exist P and R such that (6.14) " # AT P + P A P B w T ≤ 0.14) is satisfied. d (6. R ≥ 0 and diagonal. Tr R = 1. . (6.
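The sketch below, not part of the original text, computes a minimum-volume ellipsoid containing Ruce for an LTI system by maximizing log det P over P and the diagonal matrix R subject to (6.14); CVXPY, the SCS solver and the example data are assumptions.

import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Bw = np.eye(2)
n, nw = 2, 2

P = cp.Variable((n, n), symmetric=True)
r = cp.Variable(nw, nonneg=True)                 # diagonal entries of R
cons = [P >> 1e-6 * np.eye(n),
        cp.sum(r) == 1,                          # Tr R = 1
        cp.bmat([[A.T @ P + P @ A, P @ Bw],
                 [Bw.T @ P, -cp.diag(r)]]) << 0]  # LMI (6.14)
prob = cp.Problem(cp.Maximize(cp.log_det(P)), cons)
prob.solve(solver=cp.SCS)
print(P.value)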

i T ≤ 0. ¯ ¯ wT w ≤ 1.3 Reachable sets with unit-peak inputs We consider reachable sets with inputs w that satisfy w T w ≤ 1. by solving CPs and EVPs. condition (6. L. condition (6. λ ≥ 0.16) " # ATi P + P Ai P Bw.13) is equivalent to Q > 0.16) holds. Bw. . Tr R = 1. condition (6. R ≥ 0 and diagonal.   AQ + QAT + µBp BpT QCqT Bw (6.i −R • Norm-bound LDIs: For NLDIs.13) holds if and only if P > 0. .i T ≤ 0.2).i P −R Therefore.13) is also equivalent to the existence of Q.1). then the ellipsoid E contains the reachable set Ruce . Copyright ° .  T  A P + P A + λCqT Cq P Bp P Bw (6. satisfying condition (6. w (6.18) BpT P −λI 0  ≤ 0. i = 1.1.1). (6. . w satisfy (6. 6. we are inter- ested in the set ( ¯ ) ¯ x. R and µ such that Q > 0.    T Bw P 0 −R Therefore.17) " # QATi + Ai Q Bw. w T w ≤ 1 and V (x) ≥ 1. . if there exist P and R such that (6. R ≥ 0 and diagonal. R ≥ 0 and diagonal. Tr R = 1. . if there exists P . R ≥ 0 and diagonal.82 Chapter 6 Analysis of LDIs: Input/Output Properties • Polytopic LDIs: For PLDIs. Tr R = 1. applying the S-procedure. Bw. by solving appropriate LMIPs. Thus. L. µ ≥ 0. ∆ Rup = x(T ) ¯ . λ and R such that (6. . . we can find the minimum volume and minimum di- ameter ellipsoids among all ellipsoids of the form (6. condition (6. i = 1. T ≥0 Suppose that there exists a quadratic function V (ξ) = ξ T P ξ with P > 0.18) is satisfied. c 1994 by the Society for Industrial and Applied Mathematics. Tr R = 1. © ª With Q = P −1 .    T Bw 0 −R As in the previous section.13) holds if and only if there exist P . (6.19) Cq Q −µI 0  ≤ 0.13). With Q = P −1 . respectively.20) satisfying (6. λ and R such that P > 0. then the ellipsoid E = ξ | ξ T P ξ ≤ 1 contains Ruce . x(0) = 0. We can also check that a point does not belong to the reachable set or that a half-space does not intersect the reachable set. . and dV (x)/dt ≤ 0 for all x.

23). Let us consider now condition (6. (6.21) as " # AT P + P A + αP P Bw T ≤ 0. Using the S-procedure. . i = 1. Note that inequality (6.23) is not an LMI in α and P . Alternatively. We can also rewrite condition (6. (6. and (6. Therefore.26) is not an LMI in α and P . (6. . condition (6. we note that condi- tion (6.23) is equivalent to the following inequality in Q = P −1 and α: " # QAT + AQ + αQ Bw T ≤ 0. we can assume without loss of generality that β = α. (6. then it holds for all α0 ≥ β ≥ β0 . . the ellipsoid E given by (6. (6.2) contains the reachable set Rup . " #T " #" # x AT P + P A + αP P Bw x T + β − α ≤ 0.20) is equivalent to P > 0 and ξ T (AT P + P A)ξ + ξ T P Bw w + wT Bw T Pξ ≤ 0 for any ξ and w satisfying wT w ≤ 1 and ξ T P ξ ≥ 1.24) then the ellipsoid E contains the reachable set Rup . (6. L. and (6.21) w Bw P −βI w or equivalently  T  A P + P A + αP P Bw 0 T Bw P −βI 0  ≤ 0.i P −αI Therefore.6. if there exists P and α satisfying P > 0.26) in terms of Q = P −1 as the equivalent inequality " # QATi + Ai Q + αQ Bw. if (6. but is an LMI in P for fixed α.22) holds for some (α0 .28) Bw. however. .20) for the various LDIs: • LTI systems: For LTI systems. .26) Bw.27) then the ellipsoid E contains the reachable set Rup . . . we conclude that condition (6.22)    0 0 β−α Clearly we must have α ≥ β. β0 ). α ≥ 0. and rewrite (6. L. for fixed α it is an LMI in P . Next.25) Bw −αI • Polytopic LDIs: Condition (6. . (6. (6.i T ≤ 0.23) Bw P −αI Therefore. i = 1. condition (6.20) holds if P > 0 and there exists α ≥ 0 satisfying " # ATi P + P Ai + αP P Bw.26). Once again. .20) holds if there exist α ≥ 0 and β ≥ 0 such that for all x and w.1 Input-to-State Properties 83 Then.i T ≤ 0. if there exists P and α satisfying P > 0.i −αI This electronic version is for personal use and may not be duplicated or distributed. α ≥ 0.
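Since the condition is an LMI in P for each fixed α, one practical recipe is to grid (or bisect) over α and, for each value, compute the minimum-volume ellipsoid by maximizing log det P. The sketch below, which is not part of the original text, does this for an LTI system with condition (6.23); CVXPY, the SCS solver, the grid of α values and the data are all assumptions.

import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Bw = np.array([[0.0], [1.0]])
n, nw = 2, 1

def min_volume_for_alpha(alpha):
    # for fixed alpha, (6.23) is an LMI in P; maximize log det P
    P = cp.Variable((n, n), symmetric=True)
    cons = [P >> 1e-6 * np.eye(n),
            cp.bmat([[A.T @ P + P @ A + alpha * P, P @ Bw],
                     [Bw.T @ P, -alpha * np.eye(nw)]]) << 0]
    prob = cp.Problem(cp.Maximize(cp.log_det(P)), cons)
    prob.solve(solver=cp.SCS)
    return prob.value, P.value

best_val, best_P = -np.inf, None
for alpha in np.linspace(0.1, 1.9, 19):
    val, Pval = min_volume_for_alpha(alpha)
    if val is not None and val > best_val:
        best_val, best_P = val, Pval
print("best log det P =", best_val)              # larger means a smaller ellipsoid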

PLDIs and NLDIs.29). π T π − ξ T CqT Cq ξ ≤ 0. Copyright ° .1 with an exponential time-weighting of the input w. α and λ satisfying P > 0. ξ T P ξ ≥ 1.20) is also implied by the existence of µ ≥ 0 and α ≥ 0 such that   QAT + AQ + αQ + µBpT Bp QCqT Bw Cq Q −µI 0  ≤ 0. Therefore.84 Chapter 6 Analysis of LDIs: Input/Output Properties • Norm-bound LDIs: For NLDIs. respectively. a sufficient condition for (6. subject to componentwise unit- peak inputs. (6.31)    T Bw 0 −αI (In fact. c 1994 by the Society for Industrial and Applied Mathematics.29)    T Bw P 0 −αI If there exists P .1.) For fixed α.20) to hold is that there exist α ≥ 0 and λ ≥ 0 such that  T  A P + P A + αP + λCqT Cq P Bp P Bw BpT P −λI 0  ≤ 0. We note again that inequality (6. (6. condition (6. See the Notes and References for details. (6. (6.32) where " # A(t) ∈ Ω. but is an LMI in P for fixed α.29) is not an LMI in P and α. α ≥ 0. the existence of λ ≥ 0 and α ≥ 0 satisfying (6. condition (6.1. z = Cz x.29) is equivalent to the existence of µ ≥ 0 and α ≥ 0 satisfying (6. condition (6. As in §6.2 State-to-Output Properties We now consider state-to-output properties for the LDI ẋ = A(t)x. Remark: Inequality (6.33) Cz (t) • LTI systems: LTI systems have the form ẋ = Ax. or EVPs.30) the ellipsoid EP −1 contains the reachable set Rup . ξ and π satisfying ω T ω ≤ 1.23) can also be derived by combining the results of §6.30) for NLDIs by forming the appropriate CPs. λ ≥ 0. using the S-procedure. it is possible to determine outer ellipsoidal approximations of the reachable set of LTI systems.27) for PLDIs and condition (6. z = Cz (t)x (6. and (6.20) holds if and only if P > 0 and ξ T (AT P + P A)ξ + 2ξ T P (Bw ω + Bp π) ≤ 0 for every ω.2. we can minimize the volume or diameter among all ellipsoids E satis- fying condition (6.24) for LTI systems. 6.31). Defining Q = P −1 .

6. . q = Cq x. we get Z T V (x(T )) − V (x(0)) ≤ − z T z dt. .32). . . i = 1. pT p ≤ q T q. we conclude that V (x(0)) = x(0)T P x(0) is an upper bound on the maximum energy of the output z given the initial condition x(0). nq . Cz (t) such that (6. i = 1.1 Bounds on output energy We seek the maximum output energy given a certain initial state.. . We now derive LMIs that provide upper bounds on the output energy for the various LDIs. (6. .37) subject to P > 0. .35) dt Then. q = Cq x. where " # (" # " #) A(t) A1 AL ∈ Co .. z = Cz x. condition (6. and the maximum is taken over A(t). we obtain the best upper bound on the output energy provable via quadratic functions by solving the EVP minimize x(0)T P x(0) (6. integrating both sides of the second inequality in (6. ¯ max (6. z = Cz (t)x.36) Therefore. z = Cz x.. or equivalently ẋ = Ax + Bp p. AT P + P A + CzT Cz ≤ 0. q = Cq x. |pi | ≤ |qi |.2. .34) 0 where x(0) is given. . for every x and z satisfying (6.35) is equivalent to P > 0. Cz (t) C1 CL • Norm-bound LDIs: NLDIs have the form ẋ = Ax + Bp p.36) This electronic version is for personal use and may not be duplicated or distributed. Since V (x(T )) ≥ 0. . z = Cz x p = ∆(t)q. Suppose there exists a quadratic function V (ξ) = ξ T P ξ such that d P > 0 and V (x) ≤ −z T z. pi = δi (t)qi .35) from 0 to T . z = Cz (t)x . q = Cq x. nq . (6.33) holds..2 State-to-Output Properties 85 • Polytopic LDIs: PLDIs have the form ẋ = A(t)x. k∆(t)k ≤ 1. ½Z ∞ ¯ ¾ z T z dt ¯¯ ẋ = A(t)x. 6. • LTI systems: In the case of LTI systems. |δi (t)| ≤ 1. 0 for every T ≥ 0. (6. z = Cz x. • Diagonal Norm-bound LDIs: DNLDIs have the form ẋ = Ax + Bp p. or equivalently ẋ = Ax + Bp p.
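For the LTI case the output-energy EVP can be checked directly against the observability Gramian. The sketch below is not part of the original text; CVXPY, SciPy, the SCS solver and the numerical data are assumptions made only for the example.

import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Cz = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 1.0])
n = 2

P = cp.Variable((n, n), symmetric=True)
cons = [P >> 0, A.T @ P + P @ A + Cz.T @ Cz << 0]            # condition (6.36)
prob = cp.Problem(cp.Minimize(cp.quad_form(x0, P)), cons)    # EVP (6.37)
prob.solve(solver=cp.SCS)

Wo = solve_continuous_lyapunov(A.T, -Cz.T @ Cz)  # A'Wo + Wo A + Cz'Cz = 0
print("EVP bound :", prob.value)
print("x0'Wo x0  :", float(x0 @ Wo @ x0))        # the two should agree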

λ ≥ 0. BpT P −λI Therefore we obtain the smallest upper bound on the output energy (6.86 Chapter 6 Analysis of LDIs: Input/Output Properties In this case the solution can be found analytically.41) Defining Q = P −1 . defined by Z ∞ ∆ T Wo = eA t CzT Cz eAt dt.35). (6.34) provable via quadratic functions satisfying condition (6.i ≤ 0. i = 1. (6. Copyright ° . λ ≥ 0. T ATi P + P Ai + Cz.39). which is x(0)T Wo x(0).43) c 1994 by the Society for Industrial and Applied Mathematics. With Q = P −1 . .35) by solving the following EVP in the variables P and λ minimize x(0)T P x(0) (6.34) is x(0) T Q−1 x(0). .i T Q > 0.35) is also equivalent to the LMI in Q " # Ai Q + QATi QCz.44) subject to P > 0.35) is equivalent to P > 0.i Cz. (6.42) subject to P > 0. (6.35) is equivalent to P > 0 and ξ T (AT P + P A + CzT Cz )ξ + 2ξ T P Bp π ≤ 0 for every ξ and π that satisfy π T π ≤ ξ T CqT Cq ξ. where Wo is the observability Gramian of the system. (6. (6. condition (6.43) " # AT P + P A + CzT Cz + λCqT Cq P Bp ≤ 0. (6. condition (6. . Cz. condition (6.40) hold if and only if the following LMI in the variable P holds: P > 0. then an upper bound on the output energy (6. an equivalent condition is the existence of P and λ such that P > 0. • Polytopic LDIs: For PLDIs. ≤ 0.i Q −I • Norm-bound LDIs: For NLDIs.41) Therefore the EVP corresponding to finding the best upper bound on the output energy provable via quadratic functions is minimize x(0)T P x(0) (6. . 0 Since Wo satisfies the Lyapunov equation AT Wo + Wo A + CzT Cz = 0.35) is equivalent to " # AQ + QAT QCzT Q > 0. Using the S-procedure. condition (6.40) Inequalities (6. L. ≤ 0. and it is exactly equal to the output energy.38) it satisfies (6.39) Cz Q −I If Q satisfies (6. A(t)T P + P A(t) + Cz (t)T Cz (t) ≤ 0 for all t ≥ 0.


32). . (6. ATi P + P Ai ≤ 0. Suppose E = {ξ | ξ T P ξ ≤ 1} is an invariant ellipsoid containing x(0) for the LDI (6.44) by the objective function Tr X0 P and solving the corresponding EVPs. λ and δ minimize δ subject to P > 0. (6. and (6.88 Chapter 6 Analysis of LDIs: Input/Output Properties Finally.46). x(0)T P x(0) ≤ 1. . x(0)T P x(0) ≤ 1.47) (6. we obtain the smallest upper bound on kzk provable via invariant ellipsoids by taking the square root of the optimal value of the following EVP in the variables P . (6.46). then we obtain the small- est upper bound on the expected output energy provable via quadratic functions by replacing the objective function x(0)T P x(0) in the EVPs (6. (6. we obtain the smallest upper bound on kzk provable via invariant ellipsoids by taking the square root of the optimal value of the following EVP in the variables P and δ minimize δ subject to P > 0. L • Norm-bound LDIs: For NLDIs.46) Cz δI • LTI systems: The smallest bound on the output peak that can be obtained via invariant ellipsoids is the square root of the optimal value of the EVP in the variables P and δ minimize δ subject to P > 0. if x0 is a random variable with E x0 xT0 = X0 .46). .2 Bounds on output peak It is possible to derive bounds on kz(t)k using invariant ellipsoids. . (6.42).37).2. i = 1.   AT P + P A + λCqT Cq P Bp  ≤0 BpT P −λI c 1994 by the Society for Industrial and Applied Mathematics. λ ≥ 0. Assume first that the initial condition x(0) is known. AT P + P A ≤ 0 • Polytopic LDIs: For PLDIs. Then z(t)T z(t) ≤ max ξ T Cz (t)T Cz (t)ξ ξ∈E for all t ≥ 0. Copyright ° . x(0)T P x(0) ≤ 1. 6. We can express maxξ∈E ξ T Cz (t)T Cz (t)ξ as the square root of the minimum of δ subject to " # P CzT ≥ 0.
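A minimal sketch of the EVP bounding the output peak for an LTI system with known x(0) follows; it is not part of the original text, and CVXPY, the SCS solver and the example data are assumptions.

import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Cz = np.array([[1.0, 0.0]])
x0 = np.array([1.0, 1.0])
n, nz = 2, 1

P = cp.Variable((n, n), symmetric=True)
delta = cp.Variable()
cons = [P >> 1e-6 * np.eye(n),
        cp.quad_form(x0, P) <= 1,                # x(0) lies in E
        A.T @ P + P @ A << 0,                    # E is invariant
        cp.bmat([[P, Cz.T],
                 [Cz, delta * np.eye(nz)]]) >> 0]    # condition (6.46)
prob = cp.Problem(cp.Minimize(delta), cons)
prob.solve(solver=cp.SCS)
print("bound on peak of ||z|| ~", np.sqrt(prob.value))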

B(t) and C(t) satisfying (6. x(0) = x0 . . from the arguments in §6. (6.6.50) by computing the square root of the minimum γ such that there exist P > 0 and Q > 0 satisfying AT P + P A + P B w Bw T P ≤ 0. γP − Q ≥ 0. For an LTI system.  ∆ z T z dt ¯¯ ¯ φ = max . t > T ≥ 0  and the √ maximum is taken over A(t). Suppose that P > 0 and Q > 0 satisfy d ¡ T d ¡ T x P x ≤ wT w.1. This electronic version is for personal use and may not be duplicated or distributed.48). for all t ≥ 0. x(0) = 0. An upper bound for φ can be computed by combining the ellipsoidal bounds on reachable sets from §6. that is.1. we assume that Dzw (t) = 0 for all t.3 Input-to-Output Properties We finally consider the input-output behavior of the LDI ẋ = A(t)x + Bw (t)w.1. AT Q + QA + CzT Cz ≤ 0. φ equals the Hankel norm.48) z = Cz (t)x + Dzw (t)w.1 and the bounds on the output energy from §6.3 Input-to-Output Properties 89 Remark: As variation of this problem. and consider the quantity  ¯ RT  Z ∞ ¯ 0 wT w dt ≤ 1. This also reduces to an EVP. Then.1. 6. given α > 0.  T ¯ w(t) = 0.2.3.50) dt dt for all x. we conclude that ©R ∞ ¯ φ ≤ sup T z T z dt ¯ x(T )T P x(T ) ≤ 1 ª ¯ ≤ sup x(T )T Qx(T ) ¯ x(T )T P x(T ) ≤ 1 © ª = λmax (P −1/2 QP −1/2 ). (6.2. we can impose a decay rate constraint on the output.1 and §6.49). we can compute an upper bound on the smallest γ such that the output z satisfies kz(t)k ≤ γe−αt .1 Hankel norm bounds In this section. w and z satisfying (6. • LTI systems: We compute the smallest upper bound on the Hankel norm provable via quadratic functions satisfying (6. a name that we extend for convenience to LDIs.49) Cz (t) Dzw (t) 6. x Qx ≤ −z T z ¢ ¢ (6. where " # A(t) Bw (t) ∈ Ω.

.i Bw. ATi Q̃ + Q̃Ai + Cz. and Q.e. λ.i /γ ≤ 0. . Q > 0. an equivalent EVP in the variables γ. P .. i = 1. This is a GEVP in the variables γ.i T Cz.    T " Bw P 0 −I # T A Q + QA + CzT Cz + µCqT Cq QBp ≤ 0. i = 1. Q.i ≤ 0. Defining Q̃ = Q/γ. . AT Q̃ + Q̃A + CzT Cz /γ ≤ 0. ATi Q + QAi + Cz. i. P . This is a GEVP in the variables γ. L P − Q̃ ≥ 0 • Norm-bound LDIs: Similarly. .50) by computing the square root of the minimum γ such that there exist P > 0 and Q > 0 satisfying ATi P + P Ai + P Bw. . • Polytopic LDIs: We compute the smallest upper bound on the Hankel norm provable via quadratic functions satisfying (6. .50) is computed by taking the square root of the minimum γ such that there exist P > 0. Copyright ° . P . P . and µ ≥ 0 satisfying  T  A P + P A + λCqT Cq P Bp P Bw BpT P −λI 0  ≤ 0. the optimal value is exactly the Hankel norm. ν = µ/γ and equivalent EVP in the variables γ. BpT Q̃ −νI P − Q̃ ≥ 0. and Q̃ is to minimize γ subject to ATi P + P Ai + P Bw. If we introduce the new variables Q̃ = Q/γ.38).i T P ≤ 0. and Q. . The corresponding EVP in the variables γ. λ ≥ 0. and ν is to minimize γ such that  T  A P + P A + λCqT Cq P Bp P Bw BpT P −λI 0  ≤ 0. P . and Q̃ is then to minimize γ subject to AT P + P A + P B w Bw T P ≤ 0. L γP − Q ≥ 0. Q̃. .i T Cz.i Bw. It is possible to transform it into an EVP by introducing the new variable Q̃ = Q/γ. respectively.90 Chapter 6 Analysis of LDIs: Input/Output Properties This is a GEVP in the variables γ. the smallest upper bound on the Hankel norm provable via quadratic functions satisfying (6. c 1994 by the Society for Industrial and Applied Mathematics. and µ. BpT Q −µI γP − Q ≥ 0. the solutions of the Lyapunov equations (6. P .iT P ≤ 0.6) and (6. λ. P − Q̃ ≥ 0 In this case. It can be found analytically as λmax (Wc Wo ) where Wc and Wo are the controllability and observability Gramians.    T " Bw P 0 −I # T T T A Q̃ + Q̃A + Cz Cz /γ + νCq Cq Q̃Bp ≤ 0.
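For the LTI case no optimization is needed: the optimal value equals λmax(Wc Wo), so the Hankel norm follows from two Lyapunov equations. The short sketch below is not part of the original text; SciPy and the numerical data are assumptions.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Bw = np.array([[0.0], [1.0]])
Cz = np.array([[1.0, 0.0]])

Wc = solve_continuous_lyapunov(A, -Bw @ Bw.T)    # A Wc + Wc A' + Bw Bw' = 0, (6.6)
Wo = solve_continuous_lyapunov(A.T, -Cz.T @ Cz)  # A' Wo + Wo A + Cz' Cz = 0, (6.38)
print("Hankel norm =", np.sqrt(np.max(np.linalg.eigvals(Wc @ Wo).real)))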

T →∞ T 0 and the RMS gain is defined as RMS(z) sup . this implies kzk2 ≤ γ. (6.48). We define the L2 gain of the LDI (6.51) dt Then the L2 gain of the LDI is less than γ. this EVP gives the exact value of the L2 gain of the LTI system. and the supremum is taken over all nonzero trajectories of the LDI. Remark: Assuming that (A.51) from 0 to T . Now.6. . Bw . starting from x(0) = 0. kwk2 Remark: It can be easily checked that if (6. where the root-mean- square (RMS) value of ξ is defined as µ Z T ¶1/2 ∆ 1 RMS(ξ) = lim sup ξ T ξ dt .3. RMS(w)6=0 RMS(w) Now reconsider condition (6. This electronic version is for personal use and may not be duplicated or distributed. suppose there exists a quadratic function V (ξ) = ξ T P ξ. Cz ) is minimal. kCz (sI − A)−1 Bw k∞ .52).51) is equivalent to " # AT P + P A + CzT Cz P Bw T ≤ 0. ¢ V (x(T )) + 0 since V (x(T )) ≥ 0. and γ ≥ 0 such that for all t. d V (x) + z T z − γ 2 wT w ≤ 0 for all x and w satisfying (6. we compute the smallest upper bound on the L2 gain of the LTI system provable via quadratic functions by minimizing γ over the variables P and γ satisfying conditions P > 0 and (6. The LDI is said to be nonexpansive if its L2 gain is less than one. This is an EVP. (6. To show this. we integrate (6.51) holds for V (ξ) = ξ T P ξ.51) for LDIs. • LTI systems: Condition (6. then γ is also an upper bound on the RMS gain of the LDI. which also equals the H∞ norm of its transfer matrix.48) as the quantity kzk2 sup kwk2 6=0 kwk 2 R∞ where the L2 norm of u is kuk22 = 0 uT u dt. to get Z T ¡ T z z − γ 2 wT w dt ≤ 0. with the initial condition x(0) = 0.52) Bw P −γ 2 I Therefore.2 L2 and RMS gains We assume Dzw (t) = 0 for simplicity of exposition.3 Input-to-Output Properties 91 6. P > 0. P > 0.
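A minimal sketch of the L2-gain EVP for an LTI system, with γ² as the optimization variable, follows; it is not part of the original text, and CVXPY, the SCS solver and the data are assumptions. For this case the optimal value is the H∞ norm of Cz(sI − A)^{-1}Bw.

import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Bw = np.array([[0.0], [1.0]])
Cz = np.array([[1.0, 0.0]])
n, nw = 2, 1

P = cp.Variable((n, n), symmetric=True)
gam2 = cp.Variable()                             # gamma^2
cons = [P >> 1e-6 * np.eye(n),
        cp.bmat([[A.T @ P + P @ A + Cz.T @ Cz, P @ Bw],
                 [Bw.T @ P, -gam2 * np.eye(nw)]]) << 0]      # LMI (6.52)
prob = cp.Problem(cp.Minimize(gam2), cons)
prob.solve(solver=cp.SCS)
print("L2 gain (H-infinity norm) ~", np.sqrt(prob.value))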

i T T QCz. .51) is equivalent to ξ T (AT P + P A + CzT Cz )ξ + 2ξ T P (Bp π + Bw w) − γ 2 wT w ≤ 0 for all ξ and π satisfying πiT πi ≤ (Cq. i = 1.i P Bw. P and λ. condition (6. .92 Chapter 6 Analysis of LDIs: Input/Output Properties Assume Bw 6= 0 and Cz 6= 0.53) and P > 0.i Bw. L.i ξi )T (Cq. . L. nq . .i Q −γ 2 I We get the smallest upper bound on the L2 gain provable via quadratic functions by minimizing γ (over γ and P ) subject to (6.54). . . then the existence of P > 0 satisfying (6.i P −γ 2 I Assume there exists i0 for which Bw.53) Bw.52) is equivalent to the existence of Q > 0 satisfying " # AQ + QAT + Bw Bw T /γ 2 QCzT ≤ 0.i ξi ). then the existence of P > 0 and λ ≥ 0 satisfying (6. This is true if and only if there exists λ ≥ 0 such that  T  A P + P A + CzT Cz + λCqT Cq P Bp P Bw BpT P −λI 0  ≤ 0.51) is equivalent to " # ATi P + P Ai + Cz. condition (6.i T ≤ 0.i ≤ 0.i0 6= 0. If Bw 6= 0 and Cz 6= 0. subject to P > 0. λ ≥ 0 and (6.54)    T 2 Bw P 0 −γ I Therefore.51) is equivalent to ξ T (AT P + P A + CzT Cz )ξ + 2ξ T P (Bp π + Bw w) − γ 2 wT w ≤ 0 for all ξ and π satisfying π T π ≤ ξ T CqT Cq ξ. Cz. .i T Cz.j0 6= 0.54) is equivalent to the existence of Q > 0 and µ ≥ 0 satisfying   QAT + AQ + Bw Bw T + µBp BpT QCqT QCzT Cq Q −µI 0  ≤ 0.    Cz Q 0 −γ 2 I • Diagonal Norm-bound LDIs: For DNLDIs. i = 1. c 1994 by the Society for Industrial and Applied Mathematics. Then there exists P > 0 satisfying (6. we obtain the best upper bound on the L2 gain provable via quadratic functions by minimizing γ over the variables γ. Copyright ° . . . . which is an EVP. • Norm-bound LDIs: For NLDIs. (6. Cz Q −I • Polytopic LDIs: Condition (6. i = 1. and j0 for which Cz.53) if and only if there exists Q > 0 satisfying " # QATi + Ai Q + Bw. . . (6.


which can be expressed in terms of the transfer matrix H(s) = Cz (sI − A)−1 Bw + Dzw as λmin (H(s) + H(s)∗ ) inf .63).58) is equivalent to " # QAT + AQ Bw − QCzT T T ≤ 0.i Q 2ηI − (Dzw.i − Cz. . .61) Bw.3. i = 1. This is an EVP.i − Cz.63)    BpT P 0 −λI Therefore we can find the largest dissipation provable with a quadratic Lyapunov function by maximizing η (over η and P ). the optimal value of this EVP is exactly equal to the dissipativity of the system.59) Res>0 2 This follows from the PR lemma. condition (6. where T is a positive-definite diagonal matrix.48) has as many inputs as outputs. .i 2ηI − (Dzw. i = 1. . We will see that scaling enables us to conclude “componentwise” results for the system (6.4 Diagonal scaling for componentwise results We assume for simplicity that Dzw (t) = 0. which has the interpretation of a scal- ing. (6. L.i − QCz. condition (6.94 Chapter 6 Analysis of LDIs: Input/Output Properties Assuming that the system (A.65) z̃ = T Cz (t)x. . (6. . .i T T ≤ 0. x(0) = 0. subject to P > 0 and (6.i T T ≤ 0. Defining Q = P −1 .60) Bw − Cz Q 2ηI − (Dzw + Dzw ) • Polytopic LDIs: For PLDIs. . the condition (6.57) is equivalent to the existence of µ ≥ 0 satisfying   QAT + AQ + µBp BpT Bw − QCzT QCqT T T Bw − Cz Q 2ηI − (Dzw + Dzw ) 0  ≤ 0. using the S-procedure. (6. c 1994 by the Society for Industrial and Applied Mathematics.57) is equivalent to " # ATi P + P Ai T P Bw.57) is true if and only if there exists a nonnegative scalar λ such that  T  A P + P A + λCqT Cq P Bw − CzT P Bp T T Bw P − Cz 2ηI − (Dzw + Dzw ) 0  ≤ 0.61).i ) • Norm-bound LDIs: For NLDIs. (6. condition (6.i P − Cz.i + Dzw.i + Dzw.64)    Cq Q 0 −µI 6. its optimal value is a lower bound on the dissipativity of the PLDI. Assuming that the system (6.65). If we define Q = P −1 . L.i ) We find the largest dissipation that can be guaranteed via quadratic functions by maximizing η over the variables P and η satisfying P > 0 and (6. Defining Q = P −1 .62) Bw. condition (6. Bw . Cz ) is minimal. Copyright ° . (6. we consider the system ẋ = A(t)x + Bw (t)T −1 w̃. (6. This is an EVP.57) is equivalent to " # QATi + Ai Q T Bw. (6.
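A minimal sketch of the dissipativity EVP (6.58) for an LTI system follows; it is not part of the original text, and CVXPY, the SCS solver and the example data (including a nonzero Dzw) are assumptions.

import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])         # example LTI data (assumed)
Bw = np.array([[0.0], [1.0]])
Cz = np.array([[1.0, 0.0]])
Dzw = np.array([[1.0]])
n, m = 2, 1

P = cp.Variable((n, n), symmetric=True)
eta = cp.Variable()
cons = [P >> 1e-6 * np.eye(n),
        cp.bmat([[A.T @ P + P @ A, P @ Bw - Cz.T],
                 [Bw.T @ P - Cz,
                  2 * eta * np.eye(m) - (Dzw + Dzw.T)]]) << 0]   # (6.58)
prob = cp.Problem(cp.Maximize(eta), cons)
prob.solve(solver=cp.SCS)
print("largest provable dissipation ~", prob.value)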

. . . and S > 0.66) Bw. . • Polytopic LDIs: For PLDIs. . P > 0. kwi k22 Since the inequality is true for all diagonal nonsingular T .nz kw k 6=0 i 2 kwi k We show this as follows: For every fixed scaling T = diag(t1 . S diagonal.. • Norm-bound LDIs: For NLDIs. P and S) subject to (6. such that the following LMI holds:   AT P + P A + CzT SCz + λCqT Cq P Bp P Bw BpT P −λI 0  ≤ 0.. • LTI systems: The L2 gain for the LTI system scaled by T is guaranteed to be less than γ if there exists P > 0 such that the LMI " # AT P + P A + CzT SCz P Bw T <0 Bw P −γ 2 S holds with S = T T T . The smallest scaled L2 gain of the system can therefore be computed as a GEVP. minimizing γ 2 subject to the constraint (6.i P Bw. the “scaled L2 gain” kz̃k2 α= inf sup .6. the desired inequality follows. (6. We now show how we can obtain bounds on the scaled L2 gain using LMIs for LTI systems and LDIs.. . kw̃k2 6=0 kw̃k2 T >0 The scaled L2 gain has the following interpretation: kzi k max sup ≤ α. This electronic version is for personal use and may not be duplicated or distributed.67)    T Bw P 0 −γ 2 S Therefore.67) is an EVP. .66).. i=1. L. S > 0 and S diagonal. (6. the scaled L2 gain is guaranteed to be lower than γ if there exists S > 0. .i T < 0. Pn z 2 kz̃k22 t kzi k22 supkw̃k2 6=0 = supkwk2 6=0 Pni z 2i kw̃k22 i ti kwi k2 2 2 kzi k2 ≥ supkwi k2 6=0 . T diagonal.3 Input-to-Output Properties 95 Consider for example.i T SCz. with S diagonal. the scaled L2 gain (with scaling T ) is less than γ if there exist λ ≥ 0. and P > 0 which satisfy " # ATi P + P Ai + Cz. This is a GEVP. P > 0. tnz ). . i = 1.i P −γ 2 S The optimal scaled L2 gain γ is therefore obtained by minimizing γ over (γ.

8. considered in §10. This is equivalent to infeasibility of the LMI (in the variables P and W ): " # AT P + P A PB xT0 P x0 − Tr W Q ≥ 0. In this case. the problem reduces to checking whether the LMI " # AT P + P A PB xT0 P x0 < 1. we refer the reader to the articles of Yakubovich (see [Yak92] and references therein). For a general study of these. or equivalently Z ∞ wT W w dt ≤ Tr W Q for every W ≥ 0. As before.1. ≤ 0. P > 0. lim x(t) = 0 t→∞ is less than Tr W Q for every W ≥ 0. This is a multi-criterion convex quadratic problem.2).2 (and §10. We now derive LMI conditions that are both necessary and sufficient. BT P −R P > 0. We begin by observing that a point x0 belongs to the reachable set if and only if the optimal value of the problem Z ∞ minimize max wi2 dt i 0 subject to ẋ = −Ax + Bw w. 0 0 Many of the results derived for NLDIs and DNLDIs then become necessary and sufficient. and also Megretsky [Meg93. Meg92a. x(0) = x0 .14). Meg92b]. where the authors consider inputs w satisfying 0 wwT dt ≤ Q. Integral quadratic constraints were introduced by Yakubovich. lim x(t) = 0 t→∞ is less than one. x0 belongs to the reachable set if and only the optimal value of the problem Z ∞ minimize wT W w dt 0 subject to ẋ = −Ax + Bw w. A variation R ∞of this problem is considered in [SZ92]. 0 They get necessary conditions characterizing the corresponding reachable set. Tr R = 1 © ª is feasible. x(0) = x0 . Copyright ° . diagonal. Reachable sets for componentwise unit-energy inputs We study in greater detail the set of states reachable with componentwise unit-energy inputs for LTI systems (see §6. This shows that the reachable set is the intersection of ellipsoids ξ | ξ T P ξ ≤ 1 satisfying equation (6.9) we consider a generalization of the NLDI in which the pointwise condition pT p ≤ q T q is replaced by the integral constraint Z ∞ Z ∞ pT p dt ≤ q T q dt.96 Chapter 6 Analysis of LDIs: Input/Output Properties Notes and References Integral quadratic constraints In §8. R > 0. BT P −W c 1994 by the Society for Industrial and Applied Mathematics. ≤ 0.

This electronic version is for personal use and may not be duplicated or distributed. which we rewrite for convenience as " # (A + αI/2)T P + P (A + αI/2) P Bw T ≤ 0. subject to a unit-step input: " # ẋ = A(t)x + bw (t).3] considers the more general problem of time-varying ellipsoidal ap- proximations of reachable sets with unit-peak inputs for time-varying systems.3 can be used to find a bound on the step response peak for LDIs.68) is independent of T .2).1. PLDIs and NLDIs.1. we have 0 v T v dτ ≤ 1/α. Reachable sets with unit-peak inputs Schweppe [Sch73.3) and bounds on output peak (§6. xT (0) = 0.1).3. we conclude that maxt≥0 |s(t)| does not exceed M . where M 2 = maxt≥0 cz (t)T Qcz (t). These LMIs can be used to determine state-feedback matrices as well (see the Notes and References of Chapter 7). s = cz (t)x. the ellipsoid x | x P x ≤ 1 contains the reachable set with unit-peak inputs. with new exponentially time-weighted variables xT (t) = eα(t−T )/2 x(t) and v(t) = eα(t−T )/2 w(t) as α ³ ´ ẋT = A(t) + I xT + Bw (t)v. Bounds on overshoot The results of §6.2. therefore..1 imply that xT (T ) satisfies xT (T )T©P xT (T ) ≤ 1. Sabin and Summers [SS90] study the approximation of the reachable set via quadratic func- tions. terms of which closely resemble those in the matrix inequalities described in this book.23) can be interpreted as an exponential time-weighting procedure. Now suppose that P > 0 satisfies condition (6.25). α Then. Then for every T > 0. an upper bound on maxt≥0 |s(t)| (i.23). Consider a single-input single-output LDI. §4. Suppose there exist Q > 0 and α > 0 satisfying (6. 1 A(t)Q + QA(t)T + αQ + bw (t)bw (t)T < 0. (6. We note that the resulting bounds can be quite conservative. rewrite LDI (6. x(0) = 0. cz (t) 0 Since a unit-step is also a unit-peak input. A survey of the available techniques can be found in the article by Gayek [Gay91].68) Bw P −αI The results of §6. Since LMI (6. the positive- definite matrix describing the ellipsoidal approximation satisfies a matrix differential equa- tion. This can be used to find LMI-based bounds on overshoot for LTI systems.2.e.2. A(t) bw (t) ∈ Ω. the over- shoot) can be found by combining the results on reachable sets with unit-peak inputs (§6. The technique used to derive LMI (6.Notes and References 97 These are precisely the conditions obtained in [SZ92]. . 2 RT Since w(t)T w(t) ≤ 1. See also [FG88]. from §6.1. ª x(T )T P x(T ) ≤ T 1. Fix α > 0.

2. we can com- pute a bound on the impulse response peak by combining the results on invariant ellipsoids from §5.. This is an illustration of the fact that passive systems have nice “peaking” properties.2 with those on the peak value of the output from §6. x(0) = 0. consider the LTI system with transfer function n 1 Y s − si H(s) = . P b = cT . s + s1 s + si i=2 where si+1 À si > 0.47) can be no more than 2n − 1 times the actual maximum value of the impulse response. In this case. the dynamics of this system are widely spaced.47) turns out to be 2n − 1 times the actual maximum value of the impulse response. Copyright ° . c 1994 by the Society for Industrial and Applied Mathematics. i. For instance. We conjecture that the bound can be no more conservative. we know there exists a positive-definite P such that AT P +P A ≤ 0. Therefore.e. The bound obtained this way can be quite conservative. The bound on the peak value of the impulse response of the system computed via the EVP (6. so that the maximum value of the impulse response is cb and is attained for t = 0.2. for every LTI system. Examples for which the bound obtained via the EVP (6. which is used in nonlinear control [Kok92].47) is sharp include the case of passive LTI systems. where n is the dimension of a minimal realization of the system [Fer93]. the bound on the peak of the impulse response computed via the EVP (6. that is. y = cx is just the output y with initial condition x(0) = b and zero input.98 Chapter 6 Analysis of LDIs: Input/Output Properties Bounds on impulse response peak for LTI systems The impulse response h(t) = ceAt b of the single-input single-output LTI system ẋ = Ax + bu.

time-varying systems.. singleton.2) satisfies certain properties or specifications. ∂g ∂g ∂g  ∂x ∂w ∂u and f (0. this is called (linear.2) z = (Cz (t) + Dzu (t)K) x + Dzw (t)w. provided we have ∂f ∂f ∂f    ∂x ∂w ∂u   (x. w. t). stability. In this chapter we consider the state-feedback synthesis problem. t) = 0. u. " # (7. Remark: Using the idea of global linearization described in §4. As an example.. This yields the closed-loop LDI ẋ = (A(t) + Bu (t)K) x + Bw (t)w. Here u : R+ → Rnu is the control input and w : R+ → Rnw is the exogenous input signal .1) A(t) Bw (t) Bu (t) ∈ Ω. w and u. t. the problem of finding a matrix K so that the closed-loop LDI (7. polytope. z = Cz (t)x + Dzw (t)w + Dzu (t)u. z = g(x.g.e. (7. image of a unit ball under a matrix linear-fractional mapping). 0. and suppose that u = Kx. 0. Since the control input is a linear function of the state. constant) state-feedback . i. t). w. g(0. Cz (t) Dzw (t) Dzu (t) where Ω has one of our special forms (i. 0. u. 99 .Chapter 7 State-Feedback Synthesis for LDIs 7. u.. w. for all x. t) = 0.e.3. t) ∈ Ω. e. 0. and the matrix K is called the state-feedback gain. the methods of this chapter can be used to synthesize a linear state-feedback for some nonlinear. we can synthesize a state-feedback for the system ẋ = f (x.1 Static State-Feedback Controllers We consider the LDI ẋ = A(t)x + Bw (t)w + Bu (t)u. Let K ∈ Rnu ×n .

. . Equivalently.3) becomes ẋ = Ax + Bu u + Bp p. . k∆(t)k ≤ 1. q = Cq x + Dqu u + Dqp p (7.3) becomes ẋ = Ax + Bu u.4) is (quadratically) stable if and only if there exists P > 0 such that (A + Bu K)T P + P (A + Bu K) < 0. Substituting into (7.1) is said to be quadratically stabilizable (via linear state-feedback) if there exists a state-feedback gain K such that the closed-loop system (7. [A(t) Bu (t)] ∈ Co {[A1 Bu. Copyright ° . .2) is quadrat- ically stable (hence.1 ] . i = 1. q = Cq x + Dqp p + Dqu u. (7. . • Diagonal Norm-bound LDIs: For DNLDIs. Quadratic stabilizability can be expressed as an LMIP. .3) becomes ẋ = Ax + Bu u + Bp p. [AL Bu.5) • Norm-bound LDIs: For NLDIs. we have ẋ = Ax + Bu u + Bp p. • LTI systems: Let us fix the matrix K. . (7. . or equivalently. pT p ≤ q T q. but by a simple change of variables we can obtain an equivalent condition that is an LMI. i = 1. so that for Q > 0 we have K = Y Q−1 .4) • Polytopic LDIs: PLDIs are given by ẋ = A(t)x + Bu (t)u. Equivalently. nq . L. [A(t) Bu (t)] ∈ Ω. q = Cq x + Dqp p + Dqu u. stable).7) pi = δi (t)qi . (7.1 Quadratic stabilizability The system (7. equation (7. q = Cq x + Dqp p + Dqu u. (7. (7.L ]} . (7. |pi | ≤ |qi |. |δi (t)| ≤ 1. .9) c 1994 by the Society for Industrial and Applied Mathematics. we have ẋ = Ax + Bu u + Bp p. (7.100 Chapter 7 State-Feedback Synthesis for LDIs 7.8) yields AQ + QAT + Bu Y + Y T BuT < 0.2. Define Y = KQ. . 7. .3) • LTI systems: For LTI systems.6) p = ∆(t)q. . there exists Q > 0 such that Q(A + Bu K)T + (A + Bu K)Q < 0.8) Neither of these conditions is jointly convex in K and P or Q. The LTI system (7.2 State Properties We consider the LDI ẋ = A(t)x + Bu (t)u. (7.
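Remark: The change of variables Y = KQ turns quadratic stabilizability into an LMI feasibility problem that any SDP solver can handle. A minimal CVXPY sketch for the LTI case follows; the matrices, tolerance and solver choice are illustrative and not from the book.

# Sketch: find Q > 0, Y with  A Q + Q A' + Bu Y + Y' Bu' < 0,
# then recover the state-feedback gain K = Y Q^{-1}.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop unstable, illustrative
Bu = np.array([[0.0], [1.0]])
n, m = 2, 1

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
cons = [Q >> eps * np.eye(n),
        A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(Q.value)
print("K =", K, "closed-loop eigenvalues:", np.linalg.eigvals(A + Bu @ K))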

With Y = KQ.i T < 0. it is not hard to show that this is equivalent to feasibility of the LMIs (7. the system (7. µ > 0 and Y such that  Ã !  AQ + QAT + µBp BpT T T T T µB D p qp + QC q + Y D qu  +Bu Y + Y T BuT  < 0.12) • Norm-bound LDIs: With u = Kx. For any Q > 0 satisfying (7. . . where σ is any scalar such that (7.   µDqp BpT + (Cq + Dqu K)Q T −µ(I − Dqp Dqp ) This condition has a simple frequency-domain interpretation: The H∞ norm of the transfer matrix from p to q for the LTI system ẋ = (A + Bu K)x + Bp p. these LMI conditions are necessary and sufficient for stabiliz- ability. (7.6.4) with state-feedback u = Y Q−1 x. we can without loss of generality take σ = 1.5). thus reducing the number of variables by one. then the quadratic function V (ξ) = ξ T Q−1 ξ proves (quadratic) stability of system (7.10).9).6) is quadratically stable (see LMI (5. the NLDI (7. . a stabilizing state-feedback gain is given by K = −(σ/2)BuT Q−1 . . Quadratic stabilizability is equivalent to the existence of Q > 0 and Y with QATi + Ai Q + Bu.i Y + Y T Bu. • Polytopic LDIs: For the PLDI (7.11). .11) where B̃u is an orthogonal complement of Bu .2: There exist Q > 0 and a scalar σ such that AQ + QAT − σBu BuT < 0. involving fewer variables. Thus.2.4) is quadratically stabilizable if and only if there exist Q > 0 and Y such that the LMI (7. and since the LMI is homogeneous in Q and σ.10). another equivalent condition is B̃uT AQ + QAT B̃u < 0. (7. a stabilizing state-feedback gain is K = −(σ/2)BuT Q−1 . can be derived using the elimination of matrix variables described in §2. i = 1. In terms of linear system theory.7.10) holds (condition (7. or (7.2 State Properties 101 which is an LMI in Q and Y . If Q > 0 satisfies the LMI (7.10) Since we can always assume σ > 0 in LMI (7. From the elimination procedure of §2. q = (Cq + Dqu K)x + Dqp p is less than one.10).11) implies that such a scalar exists). L. Remark: We will use the simple change of variables Y = KQ many times in the sequel.6. (7. the state-feedback gain K is recovered as K = Y Q−1 . the same argument applies. ¡ ¢ (7. It allows us to recast the problem of finding a state-feedback gain as an LMIP with variables that include Q and Y . stabilizability is equivalent to the condition that every unstable mode be controllable. An alternate equivalent condition for (quadratic) stabilizability.11). For LTI systems. If this LMI is feasible. we conclude that the NLDI is quadratically stabilizable if there exist Q > 0.14)) if there exist Q > 0 and µ > 0 such that  Ã !  AQ + QAT + Bu KQ T T µBp Dqp + Q(Cq + Dqu K)  + QK T BuT + µBp BpT  < 0.13)   µDqp BpT + Cq Q + Dqu Y T − µ(I − Dqp Dqp ) This electronic version is for personal use and may not be duplicated or distributed.9) holds. (7.
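Remark: For the polytopic case a single pair (Q, Y) must satisfy the vertex LMIs simultaneously, which yields one gain K = Y Q^{-1} valid over the whole polytope. A sketch with two illustrative vertices, assuming CVXPY and an SDP-capable solver such as SCS:

# Sketch (illustrative vertex data): quadratic stabilization of a PLDI via
#   Q A_i' + A_i Q + B_{u,i} Y + Y' B_{u,i}' < 0,  i = 1, ..., L.
import numpy as np
import cvxpy as cp

A_list = [np.array([[0.0, 1.0], [1.0, -1.0]]),
          np.array([[0.0, 1.0], [3.0, -0.5]])]
Bu_list = [np.array([[0.0], [1.0]]),
           np.array([[0.0], [0.8]])]
n, m = 2, 1

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-6
cons = [Q >> eps * np.eye(n)]
for Ai, Bi in zip(A_list, Bu_list):
    cons.append(Ai @ Q + Q @ Ai.T + Bi @ Y + Y.T @ Bi.T << -eps * np.eye(n))
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(Q.value)   # one gain for the whole polytope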

  Dqp M BpT + Cq Q + Dqu Y T − (M − Dqp M Dqp ) As before. An alternate.2. M = diag(µ1 . We say that the ellipsoid E = ξ ∈ Rn ¯ ξ T Q−1 ξ ≤ 1 © ¯ ª c 1994 by the Society for Industrial and Applied Mathematics. a stabilizing state-feedback gain is K = (−σ/2)BuT Q−1 .2.16)) if there exist Q > 0.   (7. 7.15) Cq Q + µDqp BpT −µ(I − Dqp DqpT ) (Again. we can freely set µ = 1 in this LMI.102 Chapter 7 State-Feedback Synthesis for LDIs Using the elimination procedure of §2. q = (Cq + Dqu K)x + Dqp p. and Y such that  Ã !  AQ + QAT + Bp M BpT T T T T Bp M Dqp + QCq + Y Dqu  +Bu Y + Y T BuT  < 0. .) In the remainder of this chapter we will assume that Dqp in (7. and obtain equivalent LMIs in fewer variables. . • Diagonal Norm-bound LDIs: With u = Kx.14) with µ = 1.6) is zero in order to simplify the discussion. we can eliminate the variable Y using the elimination procedure of §2.7) is quadratically stable (see LMI (5. . Then the DNLDI is quadratically stabilizable if there exists K such that for some diagonal positive-definite matrix M .   Dqp M BpT + (Cq + Dqu K)Q T −(M − Dqp M Dqp ) This condition leads to a simple frequency-domain interpretation for quadratic stabi- lizability of DNLDIs: Let H denote the transfer matrix from p to q for the LTI system ẋ = (A + Bu K)x + Bp p. If Q > 0 and σ > 0 satisfy (7. There is no analytical method for checking this condition. the DNLDI (7. µnq ) > 0 satisfying  Ã !  AQ + QAT + Bu KQ T T Bp M Dqp + Q(Cq + Dqu K)  + QK T BuT + Bp M BpT  < 0. all the following results can be extended to the case in which Dqp is nonzero. we can interpret quadratic stabilizability in terms of holdable ellipsoids.2. . we obtain the equivalent conditions:  Ã !  AQ + QAT QCqT + µBp DqpT T − σBu Dqu − σBu BuT + µBp BpT     < 0. we can set µ = 1.2 Holdable ellipsoids Just as quadratic stability can also be interpreted in terms of invariant ellipsoids.6. M > 0 and diagonal. With Y = KQ. we conclude that the NLDI is quadratically stabilizable if there exist Q > 0. By homogeneity.6. The latter condition is always satisfied since the NLDI is well-posed.14) Cq Q + µDqp BpT − σDqu BuT −µ(I − Dqp Dqp T T ) − σDqu Dqu T −µ(I − Dqp Dqp ) < 0. (7. equivalent condition for quadratic stabilizability is expressed in terms of the orthogonal complement of [BuT Dqu T T ] . which we denote by G̃: " # T AQ + QAT + µBp BpT QCqT + µBp Dqp T G̃ G̃ < 0. Copyright ° . kM −1/2 HM 1/2 k∞ ≤ 1.
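Remark: With Dqp = 0, as assumed above, the NLDI synthesis condition is an LMI in Q, Y and the scalar µ. The sketch below is illustrative only (the data matrices are made up); it checks feasibility and recovers K = Y Q^{-1}.

# Sketch (Dqp = 0): find Q > 0, Y, mu > 0 with
#   [ A Q + Q A' + Bu Y + Y' Bu' + mu Bp Bp'   (Cq Q + Dqu Y)' ]
#   [ Cq Q + Dqu Y                             -mu I           ]  < 0.
import numpy as np
import cvxpy as cp

A   = np.array([[0.0, 1.0], [1.0, -2.0]])
Bu  = np.array([[0.0], [1.0]])
Bp  = np.array([[0.1], [0.1]])
Cq  = np.array([[1.0, 0.0]])
Dqu = np.array([[0.0]])
n, m, nq = 2, 1, 1

Q  = cp.Variable((n, n), symmetric=True)
Y  = cp.Variable((m, n))
mu = cp.Variable(nonneg=True)
blk11 = A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T + mu * (Bp @ Bp.T)
blk21 = Cq @ Q + Dqu @ Y
M = cp.bmat([[blk11, blk21.T], [blk21, -mu * np.eye(nq)]])
eps = 1e-6
cp.Problem(cp.Minimize(0),
           [Q >> eps * np.eye(n), M << -eps * np.eye(n + nq)]).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)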

3) with u = Kx.16) to hold for all kx(0k ≤ 1.12) or (7. etc. we will not be able to compute the minimum volume holdable ellipsoid that contains a given polytope P (see problem (5. In contrast. As a rule of thumb. imposing norm constraints on the input u = Kx.7. For example suppose we require (7.2. (7. (7.13)). max ku(t)k = maxt≥0 kY Q−1 x(t)k t≥0 ≤ maxx∈E kY Q−1 xk = λmax (Q−1/2 Y T Y Q−1/2 ).13)). where Q > 0 and Y satisfy the stabilizability conditions (either (7. Y µ2 I As another extension we can handle constraints on ∆ ku(t)kmax = max |ui (t)| i This electronic version is for personal use and may not be duplicated or distributed. with a few exceptions. and conse- quently.3). With this parametrization of quadratic stabilizability and holdable ellipsoids as LMIs in Q > 0 and Y . This implies that x(t) belongs to E for all t ≥ 0. ≥0 (7. Therefore the LMIs described in the previous section also characterize holdable ellipsoids for the system (7. and in addition x(0)T Q−1 x(0) ≤ 1. such as finding the coordinate transformation that minimizes the condition number of Q. results expressed as LMIs in P are not. Therefore. We can extend this to the case where x(0) lies in an ellipsoid or polytope. This restricts the extension of some of the results from Chapter 5 (and Chapter 6) to state-feedback synthesis.9). we can find an upper bound on the norm of the control input u(t) = Kx(t) as follows.16) x(0) Q Y µ2 I hold. Remark: In Chapter 5. however. the constraint ku(t)k ≤ µ is enforced at all times t ≥ 0 if the LMIs " # " # 1 x(0)T Q YT ≥ 0.9). results in the analysis of LDIs that are expressed as LMIs in the variable Q can be extended to state-feedback synthesis.3) if there exists a state-feedback gain K such that E is invariant for the system (7.12) or (7. 7. . ≥ 0. Pick Q > 0 and Y which satisfy the quadratic stabilizability condition (either (7.2 State Properties 103 is holdable for the system (7. it is not possible to rewrite these LMIs as LMIs with Q−1 as a variable. we will be able to compute the minimum diameter holdable ellipsoid (see problem (5.3 Constraints on the control input When the initial condition is known. This is easily shown to be equivalent to " # Q YT Q ≥ I. we saw that quadratic stability and invariant ellipsoids were characterized by LMIs in a variable P > 0 and also its inverse Q = P −1 . For example.34)) containing P.33)) as an optimization problem over LMIs. quadratic stabilizability and ellipsoid holdability can be expressed as LMIs only in variables Q and Y . we can solve various optimization problems.
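Remark: The two LMIs bounding the control norm combine directly with the stabilizability LMI, and since µ² enters linearly one can minimize it. A sketch with illustrative data, writing mu2 for µ²:

# Sketch: enforce ||u(t)|| <= mu along closed-loop trajectories from a known x(0)
# by adding  [1 x0'; x0 Q] >= 0  and  [Q Y'; Y mu2*I] >= 0  and minimizing mu2.
import numpy as np
import cvxpy as cp

A  = np.array([[0.0, 1.0], [2.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
n, m = 2, 1

Q   = cp.Variable((n, n), symmetric=True)
Y   = cp.Variable((m, n))
mu2 = cp.Variable(nonneg=True)
eps = 1e-6
cons = [Q >> eps * np.eye(n),
        A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T << -eps * np.eye(n),
        cp.bmat([[np.eye(1), x0.T], [x0, Q]]) >> 0,
        cp.bmat([[Q, Y.T], [Y, mu2 * np.eye(m)]]) >> 0]
cp.Problem(cp.Minimize(mu2), cons).solve(solver=cp.SCS)
print("bound on ||u(t)||:", np.sqrt(mu2.value))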

[AL Bw. nq .   ¯   ∆ Z T Rue = x(T ) ¯ . T ≥ 0   0 c 1994 by the Society for Industrial and Applied Mathematics. Q > 0 and Y satisfy the stabilizability conditions (either (7. . .1 Reachable sets with unit-energy inputs For the system (7. 7. w. Equivalently. the constraint ku(t)kmax ≤ µ for t ≥ 0 is implied by the LMI " # " # 1 x(0)T X Y ≥ 0. .3 Input-to-State Properties We next consider the LDI ẋ = A(t)x + Bw (t)w + Bu (t)u. Copyright ° . • Norm-bound LDIs: For NLDIs.17) becomes ẋ = Ax + Bu u + Bp p + Bw w.17) • LTI systems: For LTI systems. • Polytopic LDIs: PLDIs are given by ẋ = A(t)x + Bw (t)w + Bu (t)u. where [A(t) Bw (t) Bu (t)] ∈ Co {[A1 Bw.13)).17). the set of states reachable with unit energy is defined as  ¯  ¯ x. L. i = 1. q = Cq x + Dqu u. .17) becomes ẋ = Ax + Bu u + Bp p + Bw w.104 Chapter 7 State-Feedback Synthesis for LDIs in a similar manner: max ku(t)kmax = maxt≥0 kY Q−1 x(t)kmax t≥0 ≤ maxx∈E kY Q−1 xkmax maxi Y Q−1 Y T ii . equation (7. q = Cq x + Dqu u pi = δi (t)qi . (7. ¡ ¢ = Therefore. . pT p ≤ q T q.17) with u = Kx. .1 Bu. u = Kx. x(0) Q YT Q where once again. . 7. equation (7. Xii ≤ µ2 .12) or (7. equation (7.1 ] . |δi (t)| ≤ 1.3. we have ẋ = Ax + Bu u + Bp p + Bw w.17) becomes ẋ = Ax + Bw w + Bu u. • Diagonal Norm-bound LDIs: For DNLDIs. . q = Cq x + Dqu u p = ∆(t)q. k∆(t)k ≤ 1 which we can also express as ẋ = Ax + Bu u + Bp p + Bw w. . .9). (7. . ¯   ¯ ¯ wT w dt ≤ 1. x(0) = 0. |pi | ≤ |qi |. u satisfy (7.L Bu. q = Cq x + Dqu u. i = 1. ≥ 0. .L ]}.

¯ • LTI systems: From §6. An equivalent condition is expressed in terms of G̃. . (7. Setting KQ = Y . For any Q > 0 and Y satisfying this LMI. we eliminate the variable Y to obtain the LMI in Q > 0 and the variable σ: T QAT + AQ − σBu BuT + Bw Bw ≤ 0.3 Input-to-State Properties 105 We will now derive conditions under which there exists a state-feedback gain K guar- anteeing that a given ellipsoid E = {ξ ∈ Rn ¯ ξ T Q−1 ξ ≤ 1 } contains Rue . .i Bw. (7. Another equivalent condition for E to contain Rue is B̃uT AQ + QAT + Bw Bw T ¡ ¢ B̃u ≤ 0. .7.11)): " # QAT + AQ + Bu Y + Y T BuT + Bw Bw T + µBp BpT (Cq Q + Dqu Y )T ≤ 0.1.i < 0. a corresponding state-feedback gain is given by K = −(σ/2)BuT Q−1 .19) where B̃u is an orthogonal complement of Bu . (7.2.1. the ellipsoid E contains Rue for the system (7. • Polytopic LDIs: For PLDIs.4) for some state-feedback gain K if K satisfies T AQ + QAT + Bu KQ + QK T BuT + Bw Bw ≤ 0. • Norm-bound LDIs: For NLDIs. the state-feedback gain K = Y Q−1 is such that the ellipsoid E contains the reachable set for the closed-loop system. E contains Rue for some state-feedback gain K if there exist µ > 0 and Y such that the following LMI holds (see LMI (6. E contains Rue for some state-feedback gain K if there exist Q > 0 and Y such that the following LMI holds (see LMI (6.6.20)   Cq Q + µDqp BpT − σDqu BuT T −µ(I − Dqp Dqp T ) − σDqu Dqu The corresponding state-feedback gain is K = (−σ/2)BuT Q−1 . Cq Q + Dqu Y −µI We can eliminate Y to obtain the LMI in Q > 0 and σ that guarantees that E contains the reachable set Rue for some state-feedback gain K:  Ã !  AQ + QAT − σBu BuT T T T QCq + µBp Dqp − σBu Dqu  +µBp BpT + Bw BwT  < 0. the orthogonal complement of [BuT DquT T ] :  Ã !  AQ + QAT T T QC q + µB D p qp  G̃T  +µBp BpT + Bw Bw T  G̃ < 0. .i T T + Bw.  Cq Q + µDqp BpT T −µ(I − Dqp Dqp ) This electronic version is for personal use and may not be duplicated or distributed. . we conclude that E ⊇ Rue for some K if there exist Y such that QAT + AQ + Bu Y + Y T BuT + Bw Bw T ≤ 0. i = 1.18) For any Q > 0 and σ satisfying this LMI. Using the elimination procedure of §2.i Y + Y T Bu.9)): QATi + Ai Q + Bu. L.
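Remark: In the LTI case the condition that E contain Rue is a single LMI in Q and Y; minimizing, say, trace Q shrinks the outer bound. An illustrative CVXPY sketch (data not from the book):

# Sketch: choose K = Y Q^{-1} so that E = {x : x' Q^{-1} x <= 1} contains the
# unit-energy reachable set, via  Q A' + A Q + Bu Y + Y' Bu' + Bw Bw' <= 0.
import numpy as np
import cvxpy as cp

A  = np.array([[0.0, 1.0], [1.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Bw = np.array([[1.0], [0.0]])
n, m = 2, 1

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
cons = [Q >> 1e-6 * np.eye(n),
        A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T + Bw @ Bw.T << 0]
cp.Problem(cp.Minimize(cp.trace(Q)), cons).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)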

12)): " # T QAT + AQ + Bu Y + Y T BuT + Bw Bw + Bp M BpT (Cq Q + Dqu Y )T ≤ 0.17)). L. M > 0 and diagonal. i = 1.3) lies in a given half-space.i Y + Y T Bu. Bw −R • Polytopic LDIs: For PLDIs. u = Kx.i T Bw.3). the set of reachable states for inputs with componentwise unit-energy for the system (7. ¯   ¯ ¯ wiT wi dt ≤ 1.17) is defined as  ¯  ¯ x. µ > 0. u satisfy (7. E ⊇ Ruce for the LTI system (7.15). Cq Q + Dqu Y −M Using the elimination procedure of §2.4) if there exists diagonal R > 0 with unit trace such that the LMI " # QAT + AQ + Bu Y + Y T BuT Bw T ≤ 0. E contains Rue for some state- feedback gain K if there exist Q > 0.i −R • Norm-bound LDIs: For NLDIs. ¯ • LTI systems: From LMI (6. w. . in contrast with the results in Chapter 6. .   ¯   ∆ Z T Ruce = x(T ) ¯ . and Y such that the following LMI holds (see LMI (6.1 for details): • We can find a state-feedback gain K such that a given point x0 lies outside the set of reachable states for the system (7. Using these LMIs (see §6.3.2.i T < 0.6. T ≥0   0 We now consider the existence of the state-feedback gain K such that the ellipsoid E = {ξ ∈ Rn ¯ ξ T Q−1 ξ ≤ 1 } contains Ruce for LTI systems and LDIs. we must use the same outer ellipsoidal approximation to check different faces. E ⊇ Ruce for some state-feedback gain K if there exist Q > 0. . In this case.    T Bw 0 −R c 1994 by the Society for Industrial and Applied Mathematics. Copyright ° . (This is due to the coupling induced by the new variable Y = KQ. Bw. x(0) = 0. and a diagonal R > 0 with unit trace (see LMI (6. we can eliminate the variable Y to obtain an equivalent LMI in fewer variables. such that " # QATi + Ai Q + Bu. Y and a diagonal R > 0 with unit trace (see LMI (6. • We can find a state-feedback gain K such that the set of reachable states for the system (7.17).1.106 Chapter 7 State-Feedback Synthesis for LDIs • Diagonal Norm-bound LDIs: For DNLDIs. . E ⊇ Ruce for some state-feedback gain K if there exist Q > 0. Y (= KQ).2 Reachable sets with componentwise unit-energy inputs With u = Kx. such that   QAT + Y T BuT + AQ + Bu Y + µBp BpT QCqT + Y T Dqu T Bw Cq Q + Dqu Y −µI 0  ≤ 0.) 7. This result can be extended to check if the reachable set is contained in a polytope.19)).
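Remark: For componentwise unit-energy inputs the extra variable is a diagonal R > 0 with unit trace. In the sketch below (illustrative data) R is modeled as diag(r) with r ≥ 0 and sum r = 1, which relaxes strict positivity but is adequate numerically.

# Sketch: E contains Ruce for the LTI system if there are Q > 0, Y and
# diagonal R > 0, trace R = 1, with  [Q A' + A Q + Bu Y + Y' Bu'  Bw; Bw'  -R] <= 0.
import numpy as np
import cvxpy as cp

A  = np.array([[0.0, 1.0], [1.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Bw = np.eye(2)                          # two exogenous channels, illustrative
n, m, nw = 2, 1, 2

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
r = cp.Variable(nw, nonneg=True)
R = cp.diag(r)
M = cp.bmat([[Q @ A.T + A @ Q + Bu @ Y + Y.T @ Bu.T, Bw],
             [Bw.T,                                  -R]])
cons = [Q >> 1e-6 * np.eye(n), cp.sum(r) == 1, M << 0]
cp.Problem(cp.Minimize(cp.trace(Q)), cons).solve(solver=cp.SCS)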

(7. α > 0 and µ > 0 such that  Ã !  AQ + QAT + αQ T (Cq Q + Dqu Y )  T +Bu Y + Y T Bu + µBp BpT + Bw Bw /α  ≤ 0. for LTI systems and NLDIs.L This electronic version is for personal use and may not be duplicated or distributed.L ∈ Co .4 State-to-Output Properties 107 Remark: For LTI systems and NLDIs. u = Kx.21) • Polytopic LDIs: For PLDIs we have " # (" # " #) A(t) Bu (t) A1 Bu.5).i Bw. u satisfy (7. . the set of states reachable with inputs with unit- peak is defined as ( ¯ ) ¯ x.1 Dzu.28)) such that Ai Q + QATi + Bu. x(0) = 0.4) if there exists α > 0 such that T AQ + QAT + Bu Y + Y T BuT + Bw Bw /α + αQ ≤ 0. Cz (t) Dzu (t) Cz.. Y . • Polytopic LDIs: For PLDIs. • Norm-bound LDIs: From Chapter 6. 7. • LTI systems: For LTI systems. . and consider ẋ = A(t)x + Bu (t)u. the state equations are ẋ = Ax + Bu u. ∆ Rup = x(T ) ¯ .3 Reachable sets with unit-peak inputs For a fixed state-feedback gain K.4 State-to-Output Properties We consider state-feedback design to achieve certain desirable state-to-output prop- erties for the LDI (4.5).i /α ≤ 0 for i = 1. ¯ ¯ w(t)T w(t) ≤ 1.   Cq Q + Dqu Y −µI This condition is an LMI for fixed α.3.L Dzu. where Y = KQ.1 Cz. L. z = Cz (t)x. We set the exogenous input w to zero in (4.7. w. .. it is possible to eliminate the variable Y . z = Cz x + Dzu u.25). . . 7.17). E contains Ruce for the LTI system (7. T ≥0 • LTI systems: From condition (6.. Remark: Again. it is possible to eliminate the variable Y .1 AL Bu.i T T + αQ + Bw. .. E ⊇ Ruce if there exist Q > 0 and Y (see condi- tion (6. E contains Ruce if there exist Q > 0.i Y + Y T Bu.
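Remark: Since the unit-peak condition is an LMI for fixed α > 0, a simple outer loop over α (a grid or a bisection) suffices in practice. An illustrative sketch, as below:

# Sketch:  A Q + Q A' + Bu Y + Y' Bu' + Bw Bw'/alpha + alpha Q <= 0
# is an LMI for each fixed alpha; sweep alpha and keep the best certificate.
import numpy as np
import cvxpy as cp

A  = np.array([[0.0, 1.0], [1.0, -1.0]])
Bu = np.array([[0.0], [1.0]])
Bw = np.array([[1.0], [0.0]])
n, m = 2, 1

best = None
for alpha in np.logspace(-2, 1, 20):
    Q = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    cons = [Q >> 1e-6 * np.eye(n),
            A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T
            + (Bw @ Bw.T) / alpha + alpha * Q << 0]
    prob = cp.Problem(cp.Minimize(cp.trace(Q)), cons)
    prob.solve(solver=cp.SCS)
    if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
        if best is None or prob.value < best[0]:
            best = (prob.value, alpha, Q.value, Y.value)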

z = Cz x + Dzu u q = Cq x + Dqu u.1 Bounds on output energy We first show how to find a state-feedback gain K such that the output energy (as defined in §6. L. . nq .i Q + Dzu. . L Cz. pi = δi (t)qi . T • Diagonal Norm-bound LDIs: Finally. z = Cz x + Dzu u. .22).39) that for a given state-feedback gain K. q = Cq x + Dqu u. z = Cz x + Dzu u q = Cq x + Dqu u p = ∆(t)q. • LTI systems: We conclude from LMI (6. i = 1. .1) of the closed-loop system is less than some specified value. the output energy is bounded above by x T0 Q−1 x0 . p p ≤ q T q. q = Cq x + Dqu u. (7. for any Q > 0. i = 1.  (7. inequality (7.23)   Cz Q + Dzu Y −I 0   Cq Q + Dqu Y 0 −µI c 1994 by the Society for Industrial and Applied Mathematics.i Q + Dzu. . We assume first that the initial condition x0 is given.i Y −I for some Y . where Q > 0 and Y = KQ satisfy " # AQ + QAT + Bu Y + Y T BuT (Cz Q + Dzu Y )T ≤ 0.2. Copyright ° . the output energy is bounded above by xT0 Q−1 x0 .i Y + Y B u. |pi | ≤ |qi |. which can be rewritten as ẋ = Ax + Bu u + Bp p. • Norm-bound LDIs: In the case of NLDIs. Y and µ ≥ 0 such that  Ã !  AQ + QAT + Bu Y  T T T (Cz Q + Dzu Y )T (Cq Q + Dqu Y )T   +Y B u + µB p B p   ≤ 0.108 Chapter 7 State-Feedback Synthesis for LDIs • Norm-bound LDIs: NLDIs are given by ẋ = Ax + Bu u + Bp p. Of course. z = Cz x + Dzu u. k∆(t)k ≤ 1. we can then find a state-feedback gain that guarantees an output energy less than γ by solving the LMIP xT0 Q−1 x0 ≤ γ and (7. which can be rewritten as ẋ = Ax + Bu u + Bp p.i Y )T    +B u. . |δi (t)| ≤ 1. where Q satisfies the LMI  Ã !  Ai Q + QATi  T T (Cz.22) Cz Q + Dzu Y −I Regarding Y as a variable. .i  ≤ 0. .22) is closely related to the classical Linear-Quadratic Regulator (LQR) problem. 7.21) does not exceed xT0 Q−1 x0 . . .  i = 1. see the Notes and References. • Polytopic LDIs: In this case. . the output energy of system (7. DNLDIs are given by ẋ = Ax + Bu u + Bp p.4. .
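Remark: Minimizing the bound x0ᵀQ⁻¹x0 subject to (7.22) becomes an EVP after a Schur complement: minimize γ subject to [γ, x0ᵀ; x0, Q] ⪰ 0. A sketch with illustrative data (not from the book):

# Sketch: minimize the output-energy bound for a given initial condition x0.
import numpy as np
import cvxpy as cp

A   = np.array([[0.0, 1.0], [2.0, -1.0]])
Bu  = np.array([[0.0], [1.0]])
Cz  = np.array([[1.0, 0.0], [0.0, 0.0]])
Dzu = np.array([[0.0], [1.0]])
x0  = np.array([[1.0], [0.0]])
n, m, nz = 2, 1, 2

Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
gamma = cp.Variable((1, 1))
lmi = cp.bmat([[A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T, (Cz @ Q + Dzu @ Y).T],
               [Cz @ Q + Dzu @ Y,                      -np.eye(nz)]])
cons = [Q >> 1e-6 * np.eye(n), lmi << 0,
        cp.bmat([[gamma, x0.T], [x0, Q]]) >> 0]     # gamma >= x0' Q^{-1} x0
cp.Problem(cp.Minimize(gamma[0, 0]), cons).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)                # LQR-like gain; bound = gamma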

the output energy is bounded above by xT0 Q−1 x0 .  Ã !  (A + Bu K)Q + Q(A + Bu K)T T T Q(Cz + Dzu K)  +Bw Bw  ≤ 0.   C z Q + D zu Y −I 0   Cq Q + Dqu Y 0 −M Given an initial condition.24)   (Cz + Dzu K)Q −γ 2 I Introducing Y = KQ.2.1 L2 and RMS gains We seek a state-feedback gain K such that the L2 gain kzk2 sup kzk2 = sup kwk2 =1 kwk2 6=0 kwk2 of the closed-loop system is less than a specified number γ.25) Cz Q + Dzu Y −γ 2 I In the case of LTI systems. As mentioned in Chapter 6. (7.5 Input-to-Output Properties We finally consider the problem of finding a state-feedback gain K so as to achieve de- sired properties between the exogenous input w and the output z for the system (7.7. this condition is necessary and sufficient. in the case of DNLDIs.1). there exists a state-feedback gain K such that the L2 gain of an LTI system is less than γ. If x0 is a random variable with E x0 xT0 = X0 .3.5. EVPs that yield state-feedback gains that minimize the expected value of the output energy can be derived. We can extend these results to the case when x0 is specified to lie in a polytope or an ellipsoid (see §6. Y and M ≥ 0 and diagonal such that  Ã !  AQ + QAT + Bu Y T T (Cz Q + Dzu Y ) (Cq Q + Dqu Y )  +Y T BuT + Bp M BpT      ≤ 0. it is possible to simplify the inequality (7. the list of problems considered here is far from exhaustive.26) This electronic version is for personal use and may not be duplicated or distributed. this can be rewritten as " # AQ + QAT + Bu Y + Y T BuT + Bw Bw T (Cz Q + Dzu Y )T ≤ 0. for any Q > 0. 7. • Diagonal Norm-bound LDIs: Finally. we can eliminate the variable Y from this LMI.1). if there exist K and Q > 0 such that. (7.5 Input-to-Output Properties 109 Remark: As before.2.2.3. . From §6. Assuming CzT Dzu = 0 and Dzu T Dzu invertible. 7. • LTI systems: As seen in 6. (7. finding a state-feedback gain K so as to minimize the upper bound on the extractable energy for the various LDIs is therefore an EVP. the L2 gain for LTI systems is equal to the H∞ norm of the corresponding transfer matrix.25) and get the equivalent Riccati inequality AQ + QAT − Bu (Dzu T T Dzu )−1 BuT + Bw Bw + QCzT Cz Q/γ 2 ≤ 0.
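Remark: In (7.25) the quantity γ² enters linearly, so the smallest guaranteed L2 gain is found by minimizing γ² directly. An illustrative sketch (data and solver choice are not from the book):

# Sketch: minimize g2 = gamma^2 subject to the state-feedback L2-gain LMI (7.25).
import numpy as np
import cvxpy as cp

A   = np.array([[0.0, 1.0], [1.0, -1.0]])
Bu  = np.array([[0.0], [1.0]])
Bw  = np.array([[1.0], [0.0]])
Cz  = np.array([[1.0, 0.0], [0.0, 0.0]])
Dzu = np.array([[0.0], [0.1]])
n, m, nz = 2, 1, 2

Q  = cp.Variable((n, n), symmetric=True)
Y  = cp.Variable((m, n))
g2 = cp.Variable(nonneg=True)
lmi = cp.bmat([[A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T + Bw @ Bw.T,
                (Cz @ Q + Dzu @ Y).T],
               [Cz @ Q + Dzu @ Y, -g2 * np.eye(nz)]])
cp.Problem(cp.Minimize(g2), [Q >> 1e-6 * np.eye(n), lmi << 0]).solve(solver=cp.SCS)
print("guaranteed L2 gain:", np.sqrt(g2.value),
      "K =", Y.value @ np.linalg.inv(Q.value))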

28) 2 −γ I  Cz Q + Dzu Y 0        Cq Q + Dqu Y 0 −µI We can therefore find state-feedback gains that minimize the upper bound on the L2 gain. η satisfying Z T wT z − ηwT w dt ≥ 0 ¡ ¢ 0 for all T ≥ 0.i Q + Dzu. there exists a state-feedback gain such that the L2 gain of a PLDI is less than γ if there exist Y and Q > 0 such that  Ã !  Ai Q + QATi + Bu.  T T Bw.110 Chapter 7 State-Feedback Synthesis for LDIs The corresponding Riccati equation is readily solved via Hamiltonian matrices.2) is passive.i Y 2ηI − (Dzw..2 Dissipativity We next seek a state-feedback gain K such that the closed-loop system (7.i − Cz.i  ≤ 0. (7.5. Q > 0 and Y holds: " # AQ + QAT + Bu Y + Y T BuT Bw − QCzT − Y T Dzu T T T ≤ 0.i Y −γ 2 I • Norm-bound LDIs: For NLDIs.i Y )  +Y T Bu. provable with quadratic Lyapunov functions.i Y + Y B u. • Polytopic LDIs: From §6. the LMI that guarantees that the L2 gain is less than γ for some state-feedback gain K is     AQ + QAT   +Bu Y + Y T BuT  (Cz Q + Dzu Y )T (Cq Q + Dqu Y )T        T   +Bw Bw + µBp BpT   ≤ 0. i. 7. yields a closed- loop system with H∞ -norm less than γ.i Y T (Cz.62). Copyright ° .i − QCz. we conclude that the dissipation of the LTI system is at least η if the LMI in η.60). assuming Dzw (t) is nonzero and square. The T resulting controller. (7.i Q − Dzu.i + Dzw.i ) c 1994 by the Society for Industrial and Applied Mathematics. with state-feedback gain K = −(Dzu Dzu )−1 BuT .e.2. and using LMI (6. we wish to maximize the dissipativity. more generally.i Bw.i  ≤ 0. • LTI systems: Substituting Y = KQ.i − Y T Dzu. Bw − Cz Q − Dzu Y 2ηI − (Dzw + Dzw ) • Polytopic LDIs: From (6. for the various LDIs by solving EVPs.3.i Q + Dzu.i    +B u. there exists a state-feedback gain such that the dissipation exceeds η if there exist Q > 0 and Y such that  Ã !  Ai Q + QATi T  T T Bw.27)   Cz.i T T + Bw.
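Remark: The dissipativity condition is likewise an EVP, since η enters linearly; one simply maximizes η. A sketch with illustrative data (Dzw is taken square, as required):

# Sketch: maximize a guaranteed dissipation level eta under state feedback, with
#   [ A Q + Q A' + Bu Y + Y' Bu'    Bw - Q Cz' - Y' Dzu'      ]
#   [ Bw' - Cz Q - Dzu Y            2*eta*I - (Dzw + Dzw')    ]  <= 0.
import numpy as np
import cvxpy as cp

A   = np.array([[0.0, 1.0], [-1.0, -1.0]])
Bu  = np.array([[0.0], [1.0]])
Bw  = np.array([[1.0], [0.0]])
Cz  = np.array([[1.0, 0.0]])
Dzu = np.array([[0.0]])
Dzw = np.array([[1.0]])
n, m, nz = 2, 1, 1

Q   = cp.Variable((n, n), symmetric=True)
Y   = cp.Variable((m, n))
eta = cp.Variable()
M = cp.bmat([[A @ Q + Q @ A.T + Bu @ Y + Y.T @ Bu.T,
              Bw - Q @ Cz.T - Y.T @ Dzu.T],
             [Bw.T - Cz @ Q - Dzu @ Y,
              2 * eta * np.eye(nz) - (Dzw + Dzw.T)]])
cp.Problem(cp.Maximize(eta), [Q >> 1e-6 * np.eye(n), M << 0]).solve(solver=cp.SCS)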

6 Observer-Based Controllers for Nonlinear Systems 111 • Norm-bound LDIs: From (6. . (7. 7. For exam- ple. D̄uy ∈ Rnu ×n . C̄u ∈ Rnu ×r .3.29) where Ā ∈ Rr×r . we can incorporate scaling techniques into many of the results above to derive componentwise results. Since the new LMIs thus obtained are straightforward to derive. that these more general LMIs are also infeasible.15). B̄y ∈ Rr×n .29) allows us to meet more specifications than can be met using a static state-feedback. The number r is called the order of the controller. however. . u = K x̄. (7. there exists a state-feedback gain such that the dissipation exceeds η if there exist Q > 0 and Y such that  Ã ! Ã !  AQ + QAT + Bu Y Bw − QCzT T (Cq Q + Dqu Y )  +Y T BuT + µBq BqT −Y T Dzu T     Ã !  T Bw − C z Q ≤0   T −(D −   zw + D zw 2ηI) 0    −Dzu Y   Cq Q + Dqu Y 0 −µI We can therefore find state-feedback gains that maximize the lower bound on the dissipativity.5.6 Observer-Based Controllers for Nonlinear Systems We consider the nonlinear system ẋ = f (x) + Bu u. We assume the function f : Rn → Rn satisfies f (0) = 0 and ∂f ∈ Co {A1 . 7. which is the same as (4. provable with quadratic Lyapunov functions. (7. . . this dynamic state-feedback controller reduces to the state-feedback we have considered so far (which is called static state-feedback in this context). Note that by taking r = 0. We look for a stabilizing observer-based controller of the form x̄˙ = f (x̄) + Bu u + L(Cy x̄ − y). AL are given. In this case we might turn to the more general dynamic state- feedback.3 Dynamic versus static state-feedback controllers A dynamic state-feedback controller has the form x̄˙ = Āx̄ + B̄y x. and y : R+ → Rq is the measured or sensed variable.31) This electronic version is for personal use and may not be duplicated or distributed.4. u : R+ → R is the control variable. . for the various LDIs by solving EVPs.7.64). suppose there does not exist a static state-feedback gain that yields closed-loop quadratic stability. It might seem that the dynamic state-feedback controller (7. We would find. Remark: As with §6. For specifications based on quadratic Lyapunov functions. AM } ∂x where A1 . this is not the case. u = C̄u x̄ + D̄uy x. See the Notes and References. .30) n p where x : R+ → R is the state variable. . we will omit them here. . y = Cy x. We could work out the more complicated LMIs that characterize quadratic stabilizability with dynamic state-feedback. . however.

by setting K = −(1/2)BuT Q−1 and L = −(1/2)P −1 Cy . . .112 Chapter 7 State-Feedback Synthesis for LDIs i. Lef65]. dt x̄ x̄ In the Notes and References. . we search for the matrices K (the estimated-state feedback gain) and L (the observer gain) such that the closed-loop system " # " # ẋ f (x) + Bu K x̄ = (7. Using the elimination procedure of §2. . . . Q.32) x̄˙ −LCy x + f (x̄) + (Bu K + LCy )x̄ is stable.35) for some σ and µ. These conditions are that some P > 0 and Q > 0 satisfy Ai Q + QATi < σBu BuT .35) and ATi P + P Ai < µCyT Cy . . . x̄. we prove that this is true if there exist P . where B̃u and C̃yT are orthogonal complements of Bu and CyT . Copyright ° . . ATi P + P Ai + W Cy + CyT W T < 0. [Bar70a. M (7.32) is stable if it is quadratically stable. .. Y . Ai Q + QATi + Bu Y + Y T BuT < 0. which is true if there exists a positive-definite matrix P̃ ∈ R2n×2n such that for any nonzero trajectory x.e. in c 1994 by the Society for Industrial and Applied Mathematics.2. M (7.. . For any P > 0. . By homogeneity we can freely set σ = µ = 1. M (7. see e. . .36) for some σ and µ. i = 1.30). and W satisfying these LMIs. we have: " #T " # d x x P̃ < 0. . . ¡ ¢ i = 1. i = 1. . there corresponds a stabilizing observer-based controller of the form (7. obtained by setting K = Y Q−1 and L = P −1 W . The observer-based controller with K = −(σ/2)B uT Q−1 and L = −(µ/2)P −1 Cy stabilizes the nonlinear system (7. If P and Q satisfy these LMIs.34) hold. . i = 1. Q > 0.6. . we can obtain equivalent conditions in which the variables Y and W do not appear. .31). To every P . respectively. Q > 0 satisfying these LMIs with σ = µ = 1. The closed-loop system (7.33) and P > 0. i = 1. . Y . It is clearly related to the theory of optimal control.g. we can obtain a stabilizing observer-based controller of the form (7. Notes and References Lyapunov functions and state-feedback The extension of Lyapunov’s methods to the state-feedback synthesis problem has a long history. M (7. and W such that the LMIs Q > 0. then they satisfy the LMIs (7. Q. M hold for some P > 0. Another equivalent condition is that B̃uT Ai Q + QATi B̃u < 0. i = 1.31). .36) and (7. M ¡ ¢ and C̃y ATi P + P Ai C̃yT < 0. . .
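Remark: The two LMI families (7.33) and (7.34) decouple, so K and L can be computed by two independent SDPs. A sketch for a two-vertex example with illustrative data:

# Sketch: observer-based controller when the Jacobian lies in Co{A1, A2};
# solve (7.33)-(7.34) and recover K = Y Q^{-1}, L = P^{-1} W.
import numpy as np
import cvxpy as cp

A_list = [np.array([[0.0, 1.0], [1.0, -1.0]]),
          np.array([[0.0, 1.0], [2.0, -1.0]])]
Bu = np.array([[0.0], [1.0]])
Cy = np.array([[1.0, 0.0]])
n, m, ny = 2, 1, 1
eps = 1e-6

# estimated-state feedback part: Q > 0, Ai Q + Q Ai' + Bu Y + Y' Bu' < 0
Q, Y = cp.Variable((n, n), symmetric=True), cp.Variable((m, n))
cons = [Q >> eps * np.eye(n)]
cons += [Ai @ Q + Q @ Ai.T + Bu @ Y + Y.T @ Bu.T << -eps * np.eye(n) for Ai in A_list]
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
K = Y.value @ np.linalg.inv(Q.value)

# observer part: P > 0, Ai' P + P Ai + W Cy + Cy' W' < 0
P, W = cp.Variable((n, n), symmetric=True), cp.Variable((n, ny))
cons = [P >> eps * np.eye(n)]
cons += [Ai.T @ P + P @ Ai + W @ Cy + Cy.T @ W.T << -eps * np.eye(n) for Ai in A_list]
cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
L = np.linalg.inv(P.value) @ W.value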

nonlinear and possibly time-varying control systems and calls it “the problem of control quality”. . EBFB92. McFarlane and Rotea [PM92. Wei94]. The L2 gain of PLDIs has been studied by Obradovic and This electronic version is for personal use and may not be duplicated or distributed. as well as in Stoorvogel [Sto91]. p. Gu92b. see [Cor94]. PMR93]. The problem of maximizing decay rate is discussed in great detail in [Yan92]. BPG89b. i. Wei [Wei90] derives necessary and sufficient “sign- pattern” conditions on the perturbations for the existence of a linear feedback for quadratic stability for a class of single-input uncertain linear dynamical systems.. At that time. GPB91. a convex problem involving LMIs. Sontag formulates a general theorem giving necessary and sufficient Lyapunov type conditions for stabilizability. the (exact) value function satisfies a partial differential equation and is hard to compute. see also the Notes and References of the previous chapter. FBBE92. KR91. One example is the “Lyapunov Min-Max Controller” described in [Gut79. which enables the extension of many of the results of Chapters 5 and 6 to state-feedback synthesis. BPG89a. The change of variables Q = P −1 and Y = KP −1 . by restricting our search to quadratic approximate value functions we end up with a low complexity task. GR88]. Arzelier et al. Hol- lot [Hol87] describes conditions under which a quadratically stabilizable LDI has infinite stabilizability margin. CL90. see also [SRC93]. [ABG93] consider the problem of robust stabilization with pole assignment for PLDIs via cutting-plane techniques. shows that there exist LDIs that are quadratically stabilizable. Gut79. Cor85]. SC91. adding performance indices allows one to recover many classical results of automatic control. KKR93.e. see also [HB80. other references on decay rates and quadratic stability margins are [Gu92b. see also [Lei79. when Letov [Let61] studied the performance of unknown. the performance criteria were decay rate and output peak deviations for systems subject to bounded peak inputs (see [Let61. We suspect that the methods described in this chapter can be interpreted as a search for suboptimal controls in which the candidate value functions are restricted to a specific class. PB84. See also [OC87]. though not via linear state-feedback.e. For a recent and broad review about quadratic stabilization of uncertain systems. Quadratic stabilizability The term “quadratic stabilizability” seems to have been coined by Hollot and Barmish in [HB80]. The problem of approximating reachable sets for LTI systems under various assumptions on the energy of the exogenous input has been considered for example by Skelton [SZ92] and references therein. Rya88. See also [GPB91. quadratic. Cor90].. RK91. The idea that the output variance for white noise inputs (and related LQR-like performance measures) can be minimized using LMIs can be found in [BH88a. See also [Zin90. see also [SP94e]. Pon62]). The problem of finding a state-feedback gain to minimize the scaled L2 gain for LTI sys- tems is discussed in [EBFB92]. For LTI systems his criteria reduce to the LMIs in this chapter. Bar85]. RK93. PGB93]. Quadratic Lyapunov functions for proving performance The addition of performance considerations in the analysis of LDIs can be traced back to 1955 and even earlier. Sko91b. In the general optimal control problem. Bar83. BH89. GP82.28) (this is shown in [Pet87a]). PZP+ 92. PBG89. Petersen [Pet87b] derives a Riccati-based approach for stabilizing NLDIs. In [Son83]. 
Petersen [Pet85].234] for details). In the case of LTI systems. which is in fact the state-feedback H∞ equation (7. is due to Bernussou. i. is used in the paper by Thorp and Barmish [TB81]. PG93] con- sider the extension to PLDIs. though not explicitly described. Peres. The counterpart for NLDIs is in Petersen. Gu92a. Several approaches use Lyapunov-like functions to synthesize nonlinear state-feedback control laws. BB91. Peres and Geromel [BPG89a]. where the authors give necessary and sufficient conditions for it. BCL83. Cor85. This change of variables.Notes and References 113 which the natural Lyapunov function is the Bellman–Pontryagin “min-cost-to-go” or value function (see [Pon61. Souza and Geromel [PSG92. SI93].


z = Cz x. We will demonstrate this fact on one simple problem considered in this chapter. it is well- known (see for example [Wil71b]) that this EVP is another formulation of the LQR problem. Cz Q −γ 2 I This electronic version is for personal use and may not be duplicated or distributed. is an EVP. First.1. (7. ¡ Dzu (7. [Kai80]). the problem of maximizing x(0)T P x(0) subject P > 0 and the constraint AT P + P A − P Bu (Dzu T Dzu )−1 BuT P + Cz CzT ≥ 0. x(0) = 0. then Q ≤ Q are .42) which is not a convex constraint in P . similar statements can be made for all the properties and for all the LDIs considered in this chapter. and we showed that in this case. and the optimal state-feedback T −1 T gain is Kopt = Yopt Q−1 are = −(Dzu Dzu ) Bu Pare .Notes and References 115 and the optimal output energy as x(0)T Q−1 are x(0). the EVP is equivalent to minimizing x(0)T P x(0) subject to ¢−1 T AT P + P A + CzT Cz − P Bu Dzu BuT P ≤ 0. provided the properties are specified using quadratic Lyapunov functions. the optimal value of this EVP must equal the optimal output energy given via the solution to the Riccati solution (7.40) holds for some Q > 0 and Y if and only if it holds for Y = −(Dzu Dzu )−1 BuT . using a simple completion-of-squares argument T that LMI (7.5. We can derive this analytic solution for the EVP via the following steps. Q > 0 must satisfy ¢−1 QAT + AQ + QCzT Cz Q − Bu Dzu T BuT ≤ 0. For LTI systems.40) Cz Q + Dzu Y −I Of course. In fact.4.41) Next. we considered the closely related problem of finding a state-feedback gain K that minimizes an upper bound on the energy of the output.41). ¡ Dzu (7. Consider the LTI system ẋ = Ax + Bu u + Bw w. Static versus dynamic state-feedback An LTI system can be stabilized using dynamic state-feedback if and only if it can be stabi- lized using static state-feedback (see for example.39). and T −1 therefore the optimal value of the EVP is just x(0) Qare x(0).1. which is nothing other than (7. . given an initial condition.43) From §7. the upper bound equals the output energy. (7. It is interesting to note that with P = Q−1 . the minimum output energy was given by minimizing x(0)T Q−1 x(0) subject to Q > 0 and " # AQ + QAT + Bu Y + Y T BuT (Cz Q + Dzu Y )T ≤ 0. it can be shown by standard manipulations that if Q > 0 satisfies (7. In section §7. we have x(0)T Q−1 T −1 are x(0) ≤ x(0) Q x(0).42). it can be shown. However. but with the inequality reversed. the Riccati equation can be thus be interpreted as yielding an analytic solution to the EVP. Therefore for every initial condition x(0). the exists a static state-feedback u = Kx such that the L2 gain from w to z does not exceed γ if the LMI in variables Q > 0 and Y is feasible: " # AQ + QAT + Bu Y + Y T BuT + Bw Bw T QCzT ≤ 0. in this case.

we get the equivalent LMI in Q > 0 and scalar σ: QCzT Cz Q AQ + QAT + Bw Bw T + ≤ σBu BuT . where Acl = Abig + Bbig Kbig with " # " # " # A 0 0 Bu Bc Ac Abig = . the closed-loop system is ẋcl = Acl xcl + Bcl u. Kbig = .45) does not exceed γ if and only if the LMI (7.116 Chapter 7 State-Feedback Synthesis for LDIs (In fact this condition is also necessary. KR88. This will establish that the smallest upper bound on the L2 gain. c 1994 by the Society for Industrial and Applied Mathematics. other references that discuss the question of when dynamic or nonlinear feedback does better than static feedback include Petersen.44) with the change 2 2 of variables Q = γ Q11 and σ = γ τ . Bcl P −γ 2 I " # ATbig P + Pbig A − τ P Bbig Bbig T T P + Ccl Ccl P Bcl T ≤ 0. Cc and Dc such that the L2 gain of the system (7. γ2 −1 where Q11 is the leading n × n block of Pbig . (7. the L2 gain of the closed-loop system does not exceed γ if the LMI in Pbig > 0 is feasible: " # T ATbig P + Pbig A + P Bbig Kbig + Kbig T Bbig T P + Ccl Ccl P Bcl T ≤ 0. KPR88.45) We will show that there exist matrices Ac . RK89]. The last LMI is precisely (7. RK88. Bcl P −γ 2 I Eliminating Kbig from this LMI yields two equivalent LMIs in Pbig > 0 and a scalar τ . is the same irrespective of whether a static state-feedback or a dynamic state-feedback is employed. Bc . Ccl = [Cz 0] . but this is irrelevant to the current discussion.44) γ2 We next consider a dynamic state-feedback for the system: ẋc = Ac xc + Bc x. " # ATbig P + Pbig A − τ I + Ccl T Ccl P Bcl T ≤ 0. u = Cc xc + Dc x.43. 0 From §6.44) holds.7. Khar- gonekar and Rotea [Pet85. while the second reduces to τ > 0 and T Bw Bw AQ11 + Q11 AT + + Q11 Cz CzT Q11 ≤ τ Bu BuT . xcl (0) = 0. provable via quadratic Lyapunov functions.) Eliminating Y .3. With state vector xcl = [xT xTc ]T . z = Ccl xcl . A similar result is established [PZP+ 92]. xc (0) = 0. Bbig = . Copyright ° . 0 0 I 0 Dc Cc " # Bw Bcl = . We will show only the part that is not obvious. Pet88. Bcl P −γ 2 I It is easy to verify that the first LMI holds for large enough τ . (7.2.

finding dy- namic output-feedback controllers with the smallest possible number of states. i. Skelton. M. .32) is quadratically stable if and only if the system (7.47) has a solution if and only if the LMIs (7. . .34) (in the variables P > 0. One variation of this problem is the “reduced-order controller problem”. . EG93. x̄ − x: " # " # x̄˙ LCy (x̄ − x) + f (x̄) + Bu K x̄ = . i = 1.33) and (7. ISG93. M. GdSS93.47) has a solution P̃ . With P̃ partitioned as n × n blocks.46) x̄˙ − ẋ f (x̄) − f (x) + LCy (x̄ − x) Using the results of §4. (7. Observer-based controllers for nonlinear systems We prove that the LMIs (7. . . i. however. (7. M. . Therefore. that output-feedback synthesis problems have high complexity and are therefore unlikely to be recast as LMI problems. . . It is shown in [PZPB91. . We suspect. local algorithms for such problems and report practical success. . " # P11 P12 P̃ = T . see also [HCM93].47) implies (Ai + Bu K)T P11 + P11 (Ai + Bu K) < 0. 0 Aj + LCy Simple state-space manipulations show the system (7. and Geromel. . P12 P22 we easily check that the inequality (7. Peres.46) is quadratically stable. Geromel and de Souza [IS93b. Several researchers have considered such problems: Steinberg and Corless [SC85].47) We will now show that the LMI (7.32) in the coordinates x̄. The general problem of minimizing the rank of a matrix subject to an LMI constraint has high complexity. in fact. W and Y ) ensure the existence of an observer-based controller of the form (7. Q > 0.34) do. 1≤j≤M x̄˙ − ẋ x̄ − x with " # Ai + B u K LCy Ãij = . j = 1. Galimidi and Barmish [GB86]. . (7.32) is quadratically stable if there exists P̃ > 0 such that ÃTij P̃ + P̃ Ãij < 0.33) and (7.3. de Souza and Skelton [PGS93].48) This electronic version is for personal use and may not be duplicated or distributed.g. that is. j = 1.30) quadratically stable. SIG93]. Nevertheless several researchers have tried heuristic. First suppose that (7. Others studying this topic include Iwasaki. We start by rewriting the system (7.31) which makes the system (7. [Dav94]). Pac94] that this problem can be reduced to the problem of minimizing the rank of a matrix P ≥ 0 subject to certain LMIs. a special rank minimization problem can be shown to be equivalent to the NP-hard zero-one linear pro- gramming problem (see e.Notes and References 117 Dynamic output-feedback for LDIs A natural extension to the results of this chapter is the synthesis of output-feedback to meet various performance specifications. the system (7. we can write " # " # x̄˙ ¯ © ª x̄ ∈ Co Ãij ¯ 1 ≤ i ≤ M.

. GA94. Conversely. .34) and (7. M . Other relevant references include [PB92. . M . .33). BP91] consider the problem in the context of NLDIs. c 1994 by the Society for Industrial and Applied Mathematics. . j = 1.33) and define L = P −1 W . . AGB94]. Denoting by Q22 the lower-right n × n block of Q̃.47) becomes Ãij Q̃ + Q̃ÃTij < 0. We refer the reader to the articles cited for precise explanations of what these control laws are. . the closed-loop system (7. . i = 1. .49) is equivalent to (7. Zhou and Doyle in [LZD91] and Becker and Packard [Bec93. Problems of observer-based controller design along with quadratic Lyapunov functions that prove stability have been considered by Bernstein and Haddad [BH93] and Yaz [Yaz93]. We now prove there exists a positive λ such that " # λQ−1 0 P̃ = 0 P satisfies (7. Define Q̃ = P̃ −1 . the inequality (7. IS93a. .118 Chapter 7 State-Feedback Synthesis for LDIs −1 With the change of variables Q = P11 and Y = Bu Q. the inequality (7. . Copyright ° . Since (7. Pac94.47) and therefore proves the closed-loop system (7. We compute  Ã !  (Ai + Bu K)T Q−1 −1  λ + Q−1 (Ai + Bu K) λQ LCy  ÃTij P̃ + P̃ Ãij =  . . . M and νi = λmax ((Ai + Bu K)T Q−1 + Q−1 (Ai + Bu K)).32) is quadratically stable if λ > 0 satisfies λ(Q−1 LCy ((Aj + LCy )T P + P (Aj + LCy ))−1 (LCy )T Q−1 ) − (A + Bu K)T Q−1 − Q−1 (A + Bu K) > 0 for i. M.32) is quadratically stable. IS93b. . 1≤j≤M 1≤i≤M where µj = λmin (Q−1 LCy ((Aj + LCy )T P + P (Aj + LCy ))−1 (LCy )T Q−1 ). . Lu. using Schur complements. . . .    (Aj + LCy )T P  λ(LCy )T Q−1 + P (Aj + LCy ) for i. and which design problems can be recast as LMI problems. i = 1. K = Y Q−1 . j = 1. Such controllers are called gain-scheduled or parameter-dependent. (7. such a λ always exists. The inequality (7. BPPB93.34) and (7. This condition is satisfied for any λ > 0 such that λ min µj > max νi . j = 1. .49) With P = Q−1 22 and W = P L. M. Q. PBPB93. . . Therefore. Gain-scheduled or parameter-dependent controllers Several researchers have extended the results on state-feedback synthesis to handle the case in which the controller parameters can depend on system parameters. Y and W satisfy the LMIs (7.33) are satisfied. .34). we see that this inequality implies (Ai + LCy )Q22 + Q22 (Ai + LCy )T < 0. j = 1. suppose that P . . i.48) is equivalent to (7. M. .

however. .2) occurs when the functions φ i are linear. φi (σ) = δi σ. . . equivalently.q denotes the ith row of Cq . (8. which implies that V (ξ) ≥ ξ T P ξ > 0 for nonzero ξ. where p(t) ∈ Rnp .1 Analysis of Lur’e Systems We consider the Lur’e system ẋ = Ax + Bp p + Bw w.1). Our analysis will be based on Lyapunov functions of the form np Z Ci. Such systems are readily transformed to the form given in (8.3) i=1 0 where Ci. . . and the functions φi satisfy the [0.2) or.Chapter 8 Lur’e and Multiplier Methods 8. i. An important special case of (8.e. pi (t) = φi (qi (t)). We require P > 0 and λi ≥ 0. (8. q = Cq x. Bw . It is possible to handle the more general sector conditions αi σ 2 ≤ σφi (σ) ≤ βi σ 2 for all σ ∈ R.2) by a loop transformation (see the Notes and References). φi (σ) = δi (t)σ and δi (t) ∈ [0.q ξ X T V (ξ) = ξ P ξ + 2 λi φi (σ) dσ. It is important to distinguish this case from the PLDI obtained with (8. . np . Note that the Lyapunov function (8. i = 1. Thus the data describing the Lyapunov function are the matrix P and the scalars λi . φi (σ)(φi (σ) − σ) ≤ 0 for all σ ∈ R.1) and (8. In the terminology of control theory.. where δi ∈ [0. 1] sector conditions 0 ≤ σφi (σ) ≤ σ 2 for all σ ∈ R. The results that follow will hold for any nonlinearities φi satisfying the sector conditions. (8. i = 1. (8. time-varying parameters. In cases where the φi are known. this is referred to as a system with unknown-but-constant parameters.3) depends on the (not fully specified) non- linearities φi . 1].1) z = Cz x. where αi and βi are given. np . Bp . . Thus the data P and λi should be thought of as providing a recipe for 119 . which is referred to as a system with unknown. The data in this problem are the matrices A. Cq and Cz . the results can be sharpened.1). 1]. .

1) and (8. .q (Ax + Bp p) . (8. The S-procedure then yields the following sufficient condition for (8. ..5) It is easily shown that {(ξ. . . . π) | ξ 6= 0 or π 6= 0. λnp ) ≥ 0 and T = diag(τ1 . . (8.1). λnp ). π) | ξ 6= 0.5) } . . i = 1.120 Chapter 8 Lur’e and Multiplier Methods constructing a specific Lyapunov function given the nonlinearities φi . (8. . dt i=1 condition (8. As an exam- ple. np . .1. we are really synthesizing a parameter-dependent quadratic Lyapunov function for our parameter- dependent system. . .4) dt Since np à ! dV (x) T X =2 x P + λi pi Ci. and seek P and λi such that dV (x) < 0 for all nonzero x satisfying (8. . Perform a loop transformation on the Lur’e system to bring the nonlinearities to sector [−1.1 Stability We set the exogenous input w to zero. . as follows. τnp ) ≥ 0 " # AT P + P A P Bp + AT CqT Λ + CqT T <0 (8.e.1.. . . δnp ) and Λ = diag(λ1 . 1]. Remark: Condition (8. (8.q (Aξ + Bp π) < 0 i=1 for all nonzero ξ satisfying πi (πi − ci. see §5. c 1994 by the Society for Industrial and Applied Mathematics. It is also necessary when there is only one nonlinearity. Λ = diag(λ1 . . Then the LMI above is the same (to within a scaling of the variable T ) as one that yields quadratic stability of the associated DNLDI. 8.6) is only a sufficient condition for the existence of a Lur’e Lyapunov function that proves stability of system (8. consider the special case of a system with unknown-but-constant parameters.q ξ) ≤ 0.5) } = {(ξ. when np = 1. Copyright ° . .6) BpT P + ΛCq A + T Cq ΛCq Bp + BpT CqT Λ − 2T holds. The Lyapunov function will then have the form V (ξ) = ξ T P + CqT ∆ΛCq ξ ¡ ¢ where ∆ = diag(δ1 . BpT P + T Cq −2T which can be interpreted as a condition for the existence of a quadratic Lyapunov function for the Lur’e system.e.2). . Remark: When we set Λ = 0 we obtain the LMI " # AT P + P A P Bp + CqT T < 0. . i.4) holds if and only if np à ! X ξT P + λi πi Ci. i. φi (σ) = δi σ.4): The LMI in P > 0. In other words. .

.  ¯   ¯ x. τnp ) ≥ 0 such that   AT P + P A P Bp + AT CqT Λ + CqT T P Bw  T  Bp P + ΛCq A + T Cq ΛCq Bp + BpT CqT Λ − 2T ΛCq Bw  ≤ 0 (8.3 Output energy bounds We consider the system (8. . . . T ≥0   ¯    ¯  0 The set np ( ¯ Z Ci. so that E ⊇ ¯ F ⊇ Rue . wT w dt ≤ 1. x(0) = 0    ¯   ∆ ¯ Rue = x(T ) ¯ ¯ Z T . . F gives a possibly better outer approximation. the ellipsoid E = {ξ ∈ Rn ¯ ξ T P ξ ≤ 1 } contains F. The condition (8. i=1 for every ξ satisfying πi (πi − ci. Of course. (8.1) with initial condition x(0).9) dt then J ≤ V (x(0)). λnp ) ≥ 0. np . . .8)  T T T Bw P B w Cq Λ −I holds. . .1. 8. (8. . w satisfy (8. the condition (8. We can use these results to prove that a point x0 does not belong to the reachable set by solving appropriate LMIPs.1) and (8.1 Analysis of Lur’e Systems 121 8. where Λ = diag(λ1 . E gives an outer approximation of Rue if the nonlinearities are not known. Therefore. and compute upper bounds on the output energy Z ∞ J= z T z dt 0 using Lyapunov functions V of the form (8. i = 1.7) holds if there exists T = diag(τ1 . If d V (x) + z T z ≤ 0 for all x satisfying (8.1. . .q ξ ) ¯ X ¯ T F = ξ ¯ ξ Pξ + 2 λi φi (σ) dσ ≤ 1 ¯ 0 i=1 contains Rue if d V (x) ≤ w T w for all x and w satisfying (8. If the nonlinearities φi are known.1). This electronic version is for personal use and may not be duplicated or distributed.9) is equivalent to np à ! X T 2 ξ P+ λi πi Ci.3).2 Reachable sets with unit-energy inputs We consider the set reachable with unit-energy inputs.q ξ) ≤ 0.2).8.q (Aξ + Bp π) + ξ T CzT Cz ξ ≤ 0. .3).7) dt From the S-procedure.

.1. Using the S-procedure. . λnp ) and T = diag(τ1 . . . Copyright ° . τnp ) ≥ 0 and (8. Λ = diag(λ1 .11) dt then the L2 gain of the system (8. i = 1. . . in which the point- wise constraint p(t)T p(t) ≤ q(t)T q(t) is replaced with a constraint on the integrals of p(t)T p(t) and q(t)T q(t): ẋ = Ax + Bp p + Bu u + Bw w.1) does not exceed γ. this is satisfied if there exists T = diag(τ1 . .4 L2 gain We assume that Dzw = 0 for simplicity. λnp ) ≥ 0.2 Integral Quadratic Constraints In this section we consider an important variation on the NLDI.10) BpT P + ΛCq A + T Cq ΛCq Bp + BpT CqT Λ − 2T Since V (x(0)) ≤ x(0)T (P + CqT ΛCq )x(0). an upper bound on J is obtained by solving the following EVP in the variables P . If there exists a Lyapunov function of the form (8. . . 8. Λ = diag(λ1 .10).12).122 Chapter 8 Lur’e and Multiplier Methods Using the S-procedure. np . . P > 0 If the nonlinearities φi are known. . . . Λ ≥ 0. provable using Lur’e Lyapunov functions. T = diag(τ1 . P . .1). (8.11) is equivalent to np à ! X T 2 ξ P+ λi πi Ci.q ξ) ≤ 0. Λ and T subject to P > 0. we conclude that the condition (8. q = Cq x + Dqp p + Dqu u + Dqw w. . . (8. τnp ) ≥ 0 such that  T  A P + P A + CzT Cz P Bp + AT CqT Λ + CqT T P Bw  T T T  Bp P + ΛCq A + T Cq ΛCq Bp + Bp Cq Λ − 2T ΛCq Bw  ≤ 0. .q (Aξ + Bp π) ≤ γ 2 wT w − ξ T CzT Cz ξ i=1 for any ξ satisfying πi (πi − ci. . . . T ≥ 0. . 0 0 c 1994 by the Society for Industrial and Applied Mathematics. . . we can obtain possibly better bounds on J by modifying the objective in the EVP appropriately.13) Z t Z t p(τ )T p(τ ) dτ ≤ q(τ )T q(τ ) dτ. 8. . τnp ): minimize x(0)T P + CqT ΛCq x(0) ¡ ¢ subject to (8.3) and γ ≥ 0 such that d V (x) ≤ γ 2 wT w − z T z for all x and w satisfying (8. . is therefore obtained by minimizing γ over γ. τnp ) ≥ 0 such that " # AT P + P A + CzT Cz P Bp + AT CqT Λ + CqT T ≤ 0. This is an EVP. . .9) holds if there exists T = diag(τ1 . (8. The condition (8. z = Cz x + Dzp p + Dzu u + Dzw w (8.12)  T T T 2 Bw P Bw Cq Λ −γ I The smallest upper bound on the L2 gain. .

3) P > 0.14) dt or µ t ¶ d Z xT P x + λ q(τ )T q(τ ) − p(τ )T p(τ ) dτ ¡ ¢ < 0. 8.   AT P + P A + λCqT Cq P Bp + λCqT Dqp   < 0. Moreover.2. An important example is when p and q are related by a linear system.2 Integral Quadratic Constraints 123 Such a system is closely related to the NLDI with the same data matrices. We can also consider a generalization of the DNLDI. in stability analysis. Here. Suppose P > 0 and λ ≥ 0 are such that d T x P x < λ pT p − q T q . the V of Chapters 5–7 decreases monotonically to zero. let us examine the condition (8. it can be shown that limt→∞ x(t) = 0. dt which in turn. and roughly speaking. V is not a Lyapunov function in the conventional sense. the very same V decreases to zero. In control theory terms. but not necessarily monotonically. dt 0 Note that the second term is always nonnegative. our analysis will be based on quadratic functions of the state V (ξ) = ξ T P ξ. i.1 Stability Consider system (8. ẋf = Af xf + Bf q.. whenever pT p ≤ q T q. ¡ ¢ (8.13). (P Bp + λCqT Dqp )T T −λ(I − Dqp Dqp ) 8. It is exactly the same as the condition obtained by applying the S-procedure to the condition d T x P x < 0.2. as an example we will encounter constraints of the form 0 p(τ )T q(τ ) dτ ≥ 0 in §8. the interpretation of V is different here. xf (0) = 0. p = Cf xf + Df q. in which we have compo- nentwise integral quadratic constraints. indeed. leaving others to the reader.2 L2 gain We assume that x(0) = 0 and. For example. We will demon- strate this generalization for stability analysis and L2 gain bounds. However. for simplicity. where kDf + Cf (sI − Af )−1 Bf k∞ ≤ 1.13) is described as a linear system with (dynamic) nonexpansive feedback. or the system is stable. λ ≥ 0. We will now show that many of the results on NLDIs (and DNLDIs) from Chap- ters 5–7 generalize to systems with integral quadratic constraints.8.e. that Dzw = 0 and Dqp = 0. . the very same LMIs will arise.3. Now. This electronic version is for personal use and may not be duplicated or distributed.14). every trajectory of the associated NLDI is a trajectory of (8. Thus. however. the system (8. leads to the LMI condition for quadratic stability of NLDI (5. a differential inclusion. Using standard arguments from Lyapunov theory. This system is not.13) without w and z. we can consider integral quadratic constraints with R t different sector bounds. As in Chapters 5–7.

(8. We can consider the more general case when αi ≤ δi ≤ βi using a loop transformation (see the Notes and References).124 Chapter 8 Lur’e and Multiplier Methods Suppose P > 0 and λ ≥ 0 are such that d T x P x < λ pT p − q T q + γ 2 wT w − z T z. . .9) from w to z does not exceed γ.3). .i − Dm.i T ¡ T ¢ ≤ 0.3.13) does not exceed γ.i xm.i ). and δi .i ). i = 1. np are any nonnegative numbers. Cm = diag(Cm.i − Cm. np . The only exceptions are the results on coordinate transformations (§5. also §7. q = Cq x + Dqp p. . 6 and 7 extend imme- diately to systems with integral quadratic constraints.18) Bm. Condition (8. . which in turn is equivalent to the existence of Pi ≥ 0. Bm = diag(Bm. .i xm. . 0 This last condition is equivalent to passivity of the LTI systems (8. . and on reachable sets with unit-peak inputs (§6.i (See §6.i + Dm. Z T Z T T T T γ 2 wT w − z T z dt.i ) is controllable.i is stable and (Am. we begin by defining np LTI systems with input qi and output qm.1 . Bm. i = 1. .i + Dm.i .1. where p(t) ∈ Rnp .i ). .15) dt Integrating both sides from 0 to T .2.3 Multipliers for Systems with Unknown Parameters We consider the system ẋ = Ax + Bp p + Bw w.17) qm.i T Pi Bm. . (8.1. np .55) which guarantees that the L 2 gain of the NLDI (4. .i qi .i (t) ∈ R: ẋm. Remark: Almost all the results described in Chapters 5. We further assume that Z t qi (τ )qm. c 1994 by the Society for Industrial and Applied Mathematics. .3.1.3.i Pi + Pi Am.np ]T are the input and output respectively of the diagonal system with realization Am = diag(Am. .i = Am.2). .17). ¡ ¢ ¡ ¢ x(T ) P x(T ) + λ q(τ ) q(τ ) − p(τ ) p(τ ) dτ < 0 0 This implies that the L2 gain from w to z for system (8. Copyright ° .15) leads to the same LMI (6.i qi . i = 1.) ∆ Thus. . where we assume that Am.i (τ ) dτ ≥ 0 for all t ≥ 0. .i ).i + Bm. xm.i .i Pi − Cm. qm. . . i = 1. 8. Dm = diag(Dm.i (0) = 0. satisfying " # ATm. where qm. For reasons that will become clear shortly.3 and §7. . np . (8. z = Cz x + Dzw w.16) pi = δ i q i . q and qm = [qm. ¡ ¢ (8.i = Cm.

conclusions about (8. i = 1. δ ≥ 0. if system (8. For instance. dt qm = C̃q x̃ + D̃qp p. B̃p = . we conclude that system (8. .20) dt i=1 or np Z à ! t d X x̃T P x̃ + 2 pi (τ )qm. it can be shown using standard arguments that limt→∞ x̃(t) = 0.19) is stable.21) can also be interpreted as a positive dissipation con- dition for the LTI system with transfer matrix G(s) = −W (s)H(s).19) for some appropriate xm . the system (8.2 Reachable sets with unit-energy inputs The set of reachable states with unit-energy inputs for the system ẋ = Ax + Bp p + Bw w. (8.8.16). Hence W is often called a “multiplier”. .i (τ ) dτ < 0. . where W (s) = Cm (sI − Am )−1 Bm + Dm and H(s) = Cq (sI − A)−1 Bp + Dqp .19) without w and z. q = Cq x + Dqp p.16) is stable if there exists a passive W such that −W (s)H(s) has positive dissipation.16) have implica- tions for (8. This electronic version is for personal use and may not be duplicated or distributed. 8. . Cm and Dm : " # ÃT P + P à P B̃p + C̃qT < 0. B m Cq Am Bm Dqp 0 C̃q = [Dm Cq Cm ]. B̃w = .22) p = δq.16).1 Stability Consider system (8. then [xT xTm ]T is a trajectory of (8. Z t pi (τ )qm. np .3. so is system (8.19) z = C̃z x̃ + D̃qp p + Dzw w.21) B̃pT P + C̃q D̃qp + D̃qp T Remark: LMI (8.i (τ ) dτ ≥ 0. C̃z = [Cz 0]. Therefore. Suppose P > 0 is such that np d T X x̃ P x̃ + 2 pi qm. Condition (8. D̃qp = Dm Dqp . See the Notes and References for details.. 0 It is easy to show that if x is a trajectory of (8.3.20) is just the LMI in P > 0.e. (8.19). dt i=1 0 Then. 8. Therefore. (8.3 Multipliers for Systems with Unknown Parameters 125 Defining " # " # " # A 0 Bp Bw à = . . consider the following system with integral quadratic constraints: dx̃ = Ãx̃ + B̃p p + B̃w w. i.19) is stable. (8.i < 0.

(8.24) and " # xT0 P11 x0 − 1 xT0 P12 T > 0.i (τ ) dτ < w(τ )T w(τ ) dτ.19). Cm . i=1 0 0 © ¯ then the set E = x̃ ¯ x̃T P x̃ ≤ 1 contains the reachable set for the system (8.24)    B̃pT P + C̃q 0 T D̃qp + D̃qp Now. wT w dt ≤ 1. Cm and Dm :  T  à P + P à P B̃w P B̃p + C̃qT T B̃w P −I 0  < 0. Copyright ° . ª Condition (8. x0 does not belong to the reachable set for the system (8.18).23) and there exists no z such that " #T " # x0 x0 P ≤ 1.22). (8. and DM satisfying (8.23) dt i=1 or np Z t Z t X x̃T P x̃ + 2 pi (τ )qm.22) if there exist P > 0. z P12 P22 z Therefore.126 Chapter 8 Lur’e and Multiplier Methods is  ¯   ¯ x. Pi ≥ 0. z z Partitioning P conformally with the sizes of x0 and z as " # P11 P12 P = T . x(0) = 0    ¯   ∆ ¯ Rue = x(T ) ¯¯ Z T . (8. P12 x0 P22 This is an LMIP. λ ≥ 0. Lur’e and Postnikov were the first to propose Lyapunov functions consisting of a quadratic form c 1994 by the Society for Industrial and Applied Mathematics. using quadratic functions for the augmented system. w satisfy (8. Notes and References Lur’e systems The Notes and References of Chapters 5 and 7 are relevant to this chapter as well. P12 P22 we note that " #T " #" # x0 P11 P12 x0 −1 T minz T = xT0 (P11 − P12 P22 P12 )x0 .i < wT w. the point x0 lies outside the set of reachable states for the system (8. T ≥0   ¯    ¯  0 We can obtain a sufficient condition for a point x0 to lie outside the reachable set Rue .23) is equivalent to the LMI in P > 0. Suppose P > 0 is such that np d T X x̃ P x̃ + 2 pi qm.22) if there exists P satisfying (8.

PS83]. In some special cases they even point out geometrical properties of the regions defined by the inequalities. Tsy64c. Construction of Lyapunov functions for systems with integral quadratic constraints We saw in §8. A precursor is the paper by Karmarkar and Siljak [KS75]. p = g(xf . especially in the former Soviet Union.25) where xf (t) ∈ Rnf .25). Moreover. sometimes under the name “real-µ problems”. ẋf = f (xf . Postnikov.13) with integral quadratic constraints. q. Tsy64a. convexity of the regions is not noted in the early work. xf satisfies (8.1 that the quadratic positive function V (ξ) = ξ T P ξ is not necessarily a Lyapunov function in the conventional sense for the system (8. and Letov the problem of constructing Lya- punov functions reduces to the satisfaction of some inequalities. t). Tsypkin [Tsy64d. q. The book [Lur57] followed this article. Tsy64b]. q. t). See also [Sko91a. Rt Rt Since 0 pT p dτ ≤ 0 q T q dτ . it was not known in the 1940’s and 1950’s that solution of convex inequalities is practically and theoretically tractable. Work on this topic was continued by Popov [Pop62]. p = g(xf . in cases when p and q are related by ẋf = f (xf . Yakubovich [Yak64]. Sko91b] for general numerical methods to prove stability of nonlinear systems via the Popov or circle criterion. the Lyapunov function V : Rn × Rnf → R+ given by ∆ V (ξ.Notes and References 127 plus the integral of the nonlinearity [LP44]. it can be shown (see for example [Wil72]) that Vf : Rnf → R+ given by T ½ Z ¯ ¾ ∆ q T q − pT p dt ¯¯ xf (0) = ξf .2). t). SP94h. in which the author showed how to solve the problem of finding such Lyapunov functions analyti- cally for systems of order 2 or 3.2. See also [IR64]. In any case. Meg92b. although V (x(t)) → 0 as t → ∞. . Integral quadratic constraints have recently received much attention.3 were This electronic version is for personal use and may not be duplicated or distributed. unknown parameters Problems involving system (8. see Siljak [Sil89] for a survey. that they form a parallelogram. See for example [Yak88. The results of §8.16) are widely studied in the field of robust control. ξf ) = ξ T P ξ + Vf (ξf ) proves stability of the interconnection ẋ = Ax + Bp p. Sav91. Szego [Sze63] and Meyer [Mey66]. t). T ≥ 0 ¡ ¢ ¯ Vf (ξf ) = sup − 0 is a Lyapunov function that proves stability of system (8. But as far as we know. who used a locally con- vergent algorithm to determine margins for a Lur’e-Postnikov system with one nonlinearity. in the early books by Lur’e. SP94d]. We now show how we can explicitly construct a Lyapunov function given P . SP94b. q. i. Meg92a.25)..e. Another early book that uses such Lyapunov functions is Letov [Let61]. Note that this construction also shows that ξ T P ξ proves the stability of the NLDI (5. Yak92. (8. The ellipsoid method is used to construct Lyapunov functions for the Lur’e stability problem in the papers by Pyatnitskii and Skorodinskii [PS82. Indeed. q = Cq x + Dqp p. it does not do so monotonically. Systems with constant. Jury and Lee [JL65] consider the case of Lur’e systems with multiple nonlinearities and derive a corresponding stability criterion.

and apply them to Aerospace problems. and how these tests can be reduced to LMIPs. the multipliers W are of the form (1 + qs) for some q ≥ 0. HHHB92.3). This observation was made by Brockett and Willems in [BW65]. We also mention [TV91]. t). t) = δq(t) for some unknown real number δ. and Nemirovskii [Nem94]). Braatz and Young [BYDM93]. Kharitonov’s theorem [Kha78] and its extensions [BHL89]. When ∆ is a nonlinear time-invariant memoryless operator satisfying a sector condition (i. using the S-procedure [Yak77]. checking whether there exists a static. e. There are elegant analytic solutions for some very special problems involving parameter- dependent linear systems. HH93a. In contrast. HCB93. In the case when ∆(q. HH93b. see also [GG94]). A related result. that feasibility of LMI (8. Complexity of stabilization problems The stability (and performance) analysis problems considered in §8.g. ∞) if and only if there exists a passive multiplier W such that W Hqp is also passive. Poljak and Rohn [PR94]. A nice survey of these results can be found in Barmish [Bar93]. c 1994 by the Society for Industrial and Applied Mathematics. HB93b. given a transfer function Hqp they show that the transfer function Hqp /(1+kHqp ) is stable for all values of k in (0. For example. where Tesi and Vicino study a hybrid system.128 Chapter 8 Lur’e and Multiplier Methods derived by Fan. NT73. Tits and Doyle in [FTD91] using an alternate approach. SL93a. Sil69. states that checking whether there exists a common stabilizing LTI controller for three LTI systems is undecidable. SLC94] where the authors devise appropriate multipliers for constant real uncertainties. HHH93a. We conjecture that checking whether a general DNLDI is stable is also NP-hard. In general multiplier theory we consider the system ẋ = Ax + Bp p + Bw w. Multiplier methods See [Wil69a. generalize the use of quadratic Lyapunov functions along with multipliers for various classes of nonlinearities. we must also cite Yakubovich. HHH93b]. Likewise. who showed.16) is stable is NP-hard (see Coxson and Demarco [CD91]. along with Bern- stein and Haddad. consisting of a Lur’e system with unknown parameters. the multiplier W can be any passive transfer matrix. DV75. The paper [BHPD94] by Balakrishnan et al.16) is stabilizable by a constant state-feedback can be shown to be NP-hard using the method of [Nem94]. How93. Pop64]. shows how various stability tests for uncertain systems can be red- erived in the context of multiplier theory. Wil76] for discussion of and bibliography on multiplier theory and its connections to Lyapunov stability theory. many stabilization problems for uncertain systems are NP-hard. the problem of checking whether system (8. This is the famous Popov criterion [Pop62. p(t) = ∆(q.. q = Cq x + Dqp p. output-feedback control law for a single LTI system is rationally decidable [ABJ75]. Hall and How. See also [LSC94]. See also [CHD93]. Safonov and Wyetzner use a convex parametrization of multipliers found by Zames and Falb [ZF68. Copyright ° .. Wil70.. checking if (8. ¡ ¢ The method involves checking that Hm = W Cq (sI − A)−1 Bp + Dqp satisfies Hm (jω) + Hm (jω)∗ ≥ 0 for every ω ∈ R.e. for some choice of W from a set that is determined by the properties of ∆. SC93. see for ex- ample [HB91a. involving the ap- plication of an off-axis circle criterion (see also §3.3 have high complexity. HB93a. ZF67] to prove stability of systems subject to “monotonic” or “odd-monotonic” nonlinearities (see [SW87] for details. 
This case has also been considered in [CS92b. due to Blondel and Gevers [BG94]. For instance. when we have a Lur’e system).6) is equivalent to the existence of q nonnegative such that (1 + qs)(cq (sI − A)−1 bp ) + 1/k has positive dissipation.

including so-called infinite sectors. in which.e.26) and. . using our well- posedness assumption. . We assume this system is well-posed. i. . Kamenet- skii calls his results “convolution method for solving matrix inequalities”. that the number of these LMIs grows exponentially with the number of nonlinearities. 1]. see the book by Desoer and Vidyasagar [DV75. This electronic version is for personal use and may not be duplicated or distributed..27). Note that (8. We can therefore express (8.Notes and References 129 Multiple nonlinearities for Lur’e systems Our analysis method for Lur’e systems with multiple nonlinearities in §8. since the S-procedure can be conservative in this case. in a numerical implementation. Rap88] devises a way of determining if there exists Lyapunov functions of the form (8.. [0. αnq ). −1 C̄ = (I − DΛ) C. αi σ 2 ≤ σφi (σ) ≤ βi σ 2 . (8.26) pi (t) = φi (qi (t)). Here we have dropped the variables w and z and the subscripts on B. This results in A + BΛ(I − DΛ)−1 C x + B(I − ΛD)−1 Γp̄.e. . say. We now substitute p = Γp̄ + Λq into (8. Define ∆ 1 p̄i = (φi (qi ) − αi qi ) = φ̄i (qi ). . For detailed descriptions of loop transformations. Kamenetskii [Kam89] and Rapoport [Rap89] derive corresponding frequency-domain stability criteria.1 can be conser- vative. The term loop transformation comes from a simple block-diagram interpretation of the equa- tions given above.e. however.1 to the loop-transformed system (8.. βi ] into the standard sector. it is probably better to derive the LMIs associated with the more general Lur’e system than to loop transform to the standard Lur’e system. we simply apply the methods of §8. βi − α i It is readily shown that 0 ≤ σ φ̄i (σ) ≤ σ 2 for all σ. Rapoport [Rap86. . Let Λ and Γ denote the diagonal matrices Λ = diag(α1 . . q = C̄x + D̄p̄. p50]. q = Cx + Dp. solve for ẋ and q in terms of x and p̄. Loop transformations Consider the Lur’e system with general sector conditions. i. The construction above is a loop transformation that maps nonlinearities in sector [α i . where Ā = A + BΛ(I − DΛ)−1 C. So to analyze the more general Lur’e system (8.26). (8. i. C and D to simplify the presentation. βnq − αnq ).26) as ẋ = Āx + B̄ p̄. We note.27) is in the standard Lur’e system form.27) p̄i (t) = φ̄i (qi (t)). det(I − D∆) 6= 0 for all diagonal ∆ with αi ≤ ∆ii ≤ βi . ¡ ¢ ẋ = q = (I − DΛ)−1 Cx + (I − DΛ)−1 DΓp̄.e. B̄ = B(I − ΛD)−1 Γ. . 0 ≤ σ φ̄i (σ) ≤ σ 2 . Rap87. Γ = diag(β1 − α1 .3) nonconservatively by generating appropriate LMIs. . D̄ = (I − DΛ)−1 DΓ. βi = +∞.. so that p̄ = Γ−1 (p − Λq). ẋ = Ax + Bp. Similar transformations can be used to map any set of sectors into any other. i. In practice.


M (0) = E x(0)x(0)T . Mean-square stability implies.1 Analysis of Systems with Multiplicative Noise 9. identically distributed random variables with E p(k) = 0. as ∆ M (k) = E x(k)x(k)T . are independent. Define M (k). . that x(k) → 0 almost surely. It can be shown (see the Notes and References) that mean-square stability is equivalent to the existence of a matrix P > 0 satisfying the LMI L X AT P A − P + σi2 ATi P Ai < 0. . p(1). regardless of x(0). E p(k)p(k)T = Σ = diag(σ1 . we say the system is mean-square stable. (9.2) We assume that x(0) is independent of the process p. .. for example. (9. 131 .Chapter 9 Systems with Multiplicative Noise 9.3) i=1 If this linear recursion is stable. Of course. the state correlation matrix. (9.1 Mean-square stability We first consider the discrete-time stochastic system L Ã ! X x(k + 1) = A0 + Ai pi (k) x(k). limk→∞ M (k) = 0.5) i=1 holds for some Q > 0.1) i=1 where p(0). equivalent condition is that the dual inequality L X AQAT − Q + σi2 Ai QATi < 0 (9. (9. i.. .e. M satisfies the linear recursion L X M (k + 1) = AM (k)AT + σi2 Ai M (k)ATi . σL ).1. .4) i=1 An alternative. . .

5) offer no computa- tional (or theoretical) advantages for checking mean-square stability. which satisfies x̄(k + 1) = Ax̄(k) + Bw w(k). We can consider variations on the mean-square stability problem. we can interpret the function V (ξ) = ξ T P ξ as a stochastic Lyapunov function: E V (x) decreases along trajectories of (9. x(0) = 0.1). for example. k=0 Let x̄(k) denote the mean of x(k).1. 9. Then.7) for all trajectories. . we are given A0 . we can derive GEVP characterizations that yield the exact mean-square stability margin. since the mean-square stability margin can be obtained by computing the eigenvalue of a (large) matrix. . i. determining the mean-square stability margin.2 State mean and covariance bounds with unit-energy inputs We now add an exogenous input w to our stochastic system: L X x(k + 1) = Ax(k) + Bw w(k) + (Ai x(k) + Bw. AL . Again. X(k) = E (x(k) − x̄(k)) (x(k) − x̄(k)) .4). . Suppose V (ξ) = ξ T P ξ where P > 0 satisfies E V (x(k + 1)) ≤ E V (x(k)) + w(k)T w(k) (9. ∞ X w(k)T w(k) ≤ 1. the system (9. c 1994 by the Society for Industrial and Applied Mathematics.e. j=0 which implies that. From the LMIP characterizations of mean-square stability.. and let T X(k) denote the covariance of x(k). k ≥ 0. for all k ≥ 0. Alternatively. Copyright ° . k X E V (x(k)) ≤ w(j)T w(j).6) i=1 We assume that the exogenous input w is deterministic. T E x(k)T P x(k) = x̄(k)T P x̄(k) + E (x(k) − x̄(k)) P (x(k) − x̄(k)) = x̄(k)T P x̄(k) + Tr X(k)P ≤ 1.1) is mean-square stable. i. and asked to find the largest γ such that with Σ < γ 2 I. the function V (M ) = Tr M P is a (linear) Lyapunov function for the deterministic system (9. Remark: Mean-square stability can be verified directly by solving the Lyapunov equation in P L X AT P A − P + σi2 ATi P Ai + I = 0 i=1 and checking whether P > 0. We will develop joint bounds on x̄(k) and X(k). Here.i w(k)) pi (k).4) and (9. . the GEVP offers no computational or theoretical advantages. with energy not exceeding one..e. (9.3) (see the Notes and References). Thus the LMIPs (9.132 Chapter 9 Systems with Multiplicative Noise Remark: For P > 0 satisfying (9.

i=1 where L " # " # AT h i X ATi h i M= T P A Bw + σi2 T P Ai Bw.1.i . Bw i=1 Bw.8) implies LMI (9. k ≥ 0.i 0 I and L X AT P A + σi2 ATi P Ai < P (9.8) and P > 0. which is another EVP. since we have Tr M (k)P ≤ λmin (P ) Tr M (k) and therefore Tr M (k) ≤ 1/λmin (P ). We note that " #T " # x̄(k) x̄(k) E V (x(k + 1)) = M w(k) w(k) L Ã ! X + Tr AT P A + σi2 ATi P Ai X(k). We can derive further bounds on M (k). Of course.9). we can derive an upper bound on λmax (M ) by maximizing Tr P subject to (9. we conclude that x̄(k) and X(k) must belong to the set © ¯ (x(k). the LMI conditions L " # " # " # AT h i X 2 A T i h i P 0 T P A B w + σ i T P Ai Bw. we have the bound Tr M (k)P ≤ 1 on the state covariance matrix.i w(k)) pi (k).10) L X z(k) = Cz x(k) + Dzw w(k) + (Cz.8). x(0) = 0.8) im- plies (9. thus.8) Bw i=1 Bw. For example. Therefore. i=1 This electronic version is for personal use and may not be duplicated or distributed. 9. LMI (9. For example. we can derive bounds on Tr M by maximizing λmin (P ) subject to (9.1 Analysis of Systems with Multiplicative Noise 133 Let us now examine condition (9.7). As another example.7). i=1 (9.9. we conclude that LMI (9.i Next.i ≤ (9.i w(k)) pi (k).8) and P > 0. X(k)) ¯ x̄(k)T P x̄(k) + Tr X(k)P ≤ 1 ¯ ª for any P > 0 satisfying the LMI (9.3 Bound on L2 gain We now consider the system L X x(k + 1) = Ax(k) + Bw w(k) + (Ai x(k) + Bw.9) i=1 imply E V (x(k + 1)) ≤ x̄T (k)P x̄(k) + w(k)T w(k) L Ã ! X T 2 T + Tr A P A + σi Ai P Ai X(k) i=1 T T ≤ x̄ (k)P x̄(k) + w(k) w(k) T + E (x(k) − x̄(k)) P (x(k) − x̄(k)) = E V (x(k)) + w(k)T w(k). This is an EVP.i x(k) + Dzw. .

i u(k) + Dzw.12) can be shown to be equivalent to the LMI " #T " #" # " # A Bw P 0 A Bw P 0 − Cz Dzw 0 I Cz Dzw 0 γ2I T (9. (9. i=1 satisfies various properties. We assume that w is deterministic. which is an EVP. i=1 (9.1 Stabilizability We seek K and Q > 0 such that (A + Bu K)Q(A + Bu K)T − Q L X + σi2 (Ai + Bu.i w(k)) pi (k).2. The condition (9. (9. i=1 We seek a state-feedback u(k) = Kx(k) such that the closed-loop system x(k + 1) = (A + Bu K)x(k) + Bw w(k)+ XL ((Ai + Bu. We define the L2 gain η of this system as ( ∞ ¯ ∞ ) ¯X ∆ X η 2 = sup E z(k)T z(k) ¯ w(k)T w(k) ≤ 1 . i=1 c 1994 by the Society for Industrial and Applied Mathematics.13).3.i Dzw.14) z(k) = Cz x(k) + Dzu u(k) + Dzw w(k)+ XL (Cz.11) ¯ ¯ k=0 k=0 Suppose that V (ξ) = ξ T P ξ.4 to obtain componentwise results.134 Chapter 9 Systems with Multiplicative Noise with the same assumptions on p. i=1 Cz. Remark: It is straightforward to apply the scaling method developed in §6. with P > 0.i + σi2 ≤ 0. i=1 (9.12) Then γ ≥ η.i u(k) + Bw.i w(k)) pi (k).i 0 I Cz.2 State-Feedback Synthesis We now add a control input u to our system: x(k + 1) = Ax(k) + Bu u(k) + Bw w(k)+ XL (Ai x(k) + Bu.13) L " # " # " # X A i B w. Copyright ° .i K)T < 0.15) z(k) = (Cz + Dzu K)x(k) + Dzw w(k)+ XL ((Cz.i x(k) + Dzu.i Minimizing γ subject to LMI (9. yields an upper bound on η. 9. satisfies E V (x(k + 1)) − E V (x(k)) ≤ γ 2 w(k)T w(k) − E z(k)T z(k).i K)x(k) + Dzw.i P 0 A i B w.i + Dzu.i Dzw.i w(k)) pi (k).i w(k)) pi (k).i K)x(k) + Bw.i K)Q(Ai + Bu. 9.

i " #T " # " # AQ + Bu Y Bw AQ + Bu Y Bw Q 0 + Q̃−1 ≤ .i + Dzu.i K Dzw. Applying a congruence with Q̃ = diag(Q. we get L " #T " # X Ai Q + Bu.16) + σi2 (Ai Q + Bu.i K Dzw.i Ai + Bu.i Y )T < 0.15).i Cz. we obtain the equivalent condition (AQ + Bu Y )Q−1 (AQ + Bu Y )T − Q L X (9.i σi2 Q̃ −1 i=1 Cz. Therefore.i Cz.i Ai Q + Bu. we can maximize the closed-loop mean-square stability margin.i Q + Dzu.i K Bw.i Y Bw. System (9. (9. Y = KQ.17) 9. As an extension.2 State-Feedback Synthesis 135 With the change of variable Y = KQ.i Y )Q−1 (Ai Q + Bu.i " #T " # " # A + Bu K Bw A + Bu K Bw P 0 + P̃ ≤ Cz + Dzu K Dzw Cz + Dzu K Dzw 0 γ2I where " # P 0 P̃ = .14) is mean-square stabilizable (with constant state-feedback) if and only if the LMI Q > 0. (9.i Y )Q−1 (Ai Q + Bu. finding K that maximizes γ reduces to the following GEVP: maximize γ subject to Q > 0. In this case.2 Minimizing the bound on L2 gain We now seek a state-feedback gain K which minimizes the bound on L2 gain (as defined by (9. i=1 which is easily written as an LMI in Q and Y .9.17) +γ 2 (Ai Q + Bu.i + Dzu. . I). system (9.16) is feasible.15) is mean-square stable for Σ ≤ γ 2 I if and only if (AQ + Bu Y )Q−1 (AQ + Bu Y )T − Q L X (9. a stabilizing state-feedback gain is K = Y Q−1 .11)) for system (9.i σi P̃ i=1 Cz. This electronic version is for personal use and may not be duplicated or distributed.i Q + Dzu.i Y Bw.i Y Dzw. Cz Q + Dzu Y Dzw Cz Q + Dzu Y Dzw 0 γ2I This inequality is readily transformed to an LMI.2.i Y Dzw. η ≤ γ for some K if there exist P > 0 and K such that L " #T " # X 2 Ai + Bu.i Y )T < 0 i=1 holds for some Y and Q > 0.i K Bw. 0 I We make the change of variables Q = P −1 . Thus.

for an arbitrary choice of M (0). Stability of system with multiplicative noise For a survey of different concepts of stability for stochastic systems see [Koz69] and the references therein. the solution to the linear recursion (9. Kushner used this idea to derive bounds for return time. For the systems considered here.3) goes to zero as k → 0.. A thorough stability analysis of (nonlinear) stochastic systems was made by Kushner in [Kus67] using this idea of a stochastic Lyapunov func- tion. The criterion then reduces to an H2 norm condition on a certain transfer matrix. i. which is positive on the cone of nonnegative matrices. In particular. The definition of mean-square stability can be found in [Sam59].e.e. this approach does not provide a way to construct the Lyapunov functions.3) can be found in [McL71.136 Chapter 9 Systems with Multiplicative Noise Notes and References Systems with multiplicative noise Systems with multiplicative noise are a special case of stochastic systems.3).. The related Lyapunov equation was given by Klein- mann [Kle69] for the case L = 1 (i. To prove that condition (9. (This is in parallel with the results of [EP92]. a single uncertainty).3) is stable. Wil73]. Mean- square stability is a strong form of stability. the corresponding solution M (k) converges to 0.4). A further standard argument from Lyapunov theory proves that M (k) converges to zero as k → ∞. Proof of stability criterion We now prove that condition (9. Copyright ° .10). For an introduction to this topic see Arnold [Arn74] or Åström [Åst70]. we assume that the system is mean-square stable. We now write (9. [EP92]) E t=0 kx(t)k2 has a (finite) limit as T → ∞ for all initial conditions. (9.4) is a necessary and sufficient condition for mean-square stability. Let P > 0 satisfy (9. (This matrix can be expressed via Kronecker products c 1994 by the Society for Industrial and Applied Mathematics. where m(k) is a vector containing the n 2 elements of M (k). Linearity of system (9. see below. For nonzero M (k) satisfying (9.) Kats and Krasovskii introduced in [KK60] the idea of a stochastic Lyapunov function and proved a related Lyapunov theorem. A rigorous treatment of continuous-time stochastic systems requires much technical machin- ery (see [EG94]). it is also equivalent to L2 -stability. The equation for the state correlation matrix (9. expected output energy. it implies stability of the mean E x(k) and that all trajectories converge to zero with probability one [Kus67. Introduce the (linear Lyapunov) function V (M ) = Tr M P . which is the property that (see PT for example. Wil73].4) is also necessary. We first prove that condition (9.3) as m(k + 1) = Am(k). Frequency- domain criteria are given in [WB71]. This means that for every positive initial condition M (0) ≥ 0.4) is sufficient.3) implies that. In fact there are many ways to formulate the continuous-time version of a system such as (9. etc. and A is a real matrix. see Itô [Ito51] and Stratonovitch [Str66]. Ã L ! X V (M (k + 1)) = Tr AM (k)A + T σi2 Ai M (k)ATi P Ã i=1 ! L X = Tr M (k) T A PA + σi2 ATi P Ai i=1 < Tr M (k)P = V (M (k)). An alge- braic criterion for mean-square stability of a system with multiplicative white noise is given in [NS72]. However. It is reminiscent of the classical (deterministic) Hurwitz criterion.

the function k X P (k) = N (j) j=0 has a (finite) limit for k → ∞. . An exact stability margin can also be computed for continuous-time systems with multiplica- tive noise provided the Itô framework is considered. the computation of an exact stability margin seems difficult for Stratonovitch systems. i = 1.4). ¡P L ¢1/2 Measuring the “size” of the perturbation term by i=1 k∆i k2 . In [EP92]. . and the corresponding matrix difference equation. are integrals of white noise). i = 1. AL . Now (9. El Bouhtouri and Pritchard provide a complete robustness analysis in a slightly different framework than ours. The fact that a stochastic analog of the deterministic stability margin is easier to compute should not be surprising. . leads the authors of [EP92] to the robustness margin minimize γ subject to P > 0. . . are standard Brownian motions (which.18) implies L X P (k + 1) = N (0) + AT P (k)A + σi2 ATi P (k)Ai i=1 Taking the limit k → ∞ shows that P satisfies (9. . where k · k denotes the Lipschitz norm.18) i=1 is also stable. they consider a continuous-time Itô system of the form L X dx(t) = Ax(t)dt + Bi ∆i (Cx(t))dpi (t). L This electronic version is for personal use and may not be duplicated or distributed. For systems with deterministic parameters only known to lie in ranges. L X N (k + 1) = AT N (k)A + σi2 ATi N (k)Ai .) Stability of A is equivalent to that of AT . . . namely for so-called “block-diagonal” perturbations. . Since N (k) satisfies a stable first-order linear difference equation with constant coefficients. i=1 where the operators ∆i are Lipschitz continuous. .10) can be viewed as a linear system subject to random parameter variations. Now let N (0) > 0. which only considers the average behavior of the system. BiT P Bi ≤ γ. (9. . We note that our framework can be recovered by assuming that the operators ∆ i are restricted to a diagonal structure of the form ∆i (z) = σi z. we are dealing with a very special form of stability. roughly speaking. the processes pi . Mean-square stability margin System (9. AT P + P A + C T C < 0. The fact that N (0) > 0 implies P > 0.18). L. .Notes and References 137 involving A0 . In contrast. . and let N (k) be the corresponding solution of (9. the computation of an exact stability margin is an NP-hard problem [CD92]. Roughly speaking. with ∆i (0) = 0. which we denote by P . see [EG94].

see Wonham [Won66]. In our framework. the bulk of earlier research on this topic concentrated c 1994 by the Society for Industrial and Applied Mathematics. El Bouhtouri and Pritchard provide a solution of this problem in [EP94]. i=1 In fact. i=1 with M (0) = 0. Copyright ° . In this case. See the Notes and References for Chapter 6 for details on how these more complicated situations can be handled. Stability conditions for arbitrary noise intensities are given in geometric terms in [WW83]. i. and w is independent of p. However. It is also possible to extend the results in this chapter to “uncertain stochastic systems”. this happens if and only if the optimal value of the corresponding GEVP is zero. that is. We derive an upper bound on the state correlation (or covariance) of the system.i T ¡ ¢ + .i W Bw. the state mean x̄ is zero. the above LMI formulation extends immediately to more complicated situations (for instance.6) in which w is a unit white noise process. Since the system is mean-square stable. when a single scalar uncertain parameter perturbs an LTI system. of course.1 can be extended to the stochastic framework. C).138 Chapter 9 Systems with Multiplicative Noise For L = 1. and B1 and C are vectors. E w(k)w(k)T = W . Bounds on state covariance with noise input We consider system (9. For L > 1 however. ∆i = σi I.i T ¡ ¢ + . the matrix M∞ can be computed directly as the solution of a linear matrix equation. applying their results to our framework would be conservative. i=1 where the time-varying matrix A(k) is only known to belong to a given polytope. both this result and ours coincide with the H2 norm of (A.e. Note that the LMI approach can also solve the state-feedback synthesis problem in the framework of [EP92]. This is consistent with the earlier results of Kleinmann [Kle69] and Willems [WB71] (see also [BB91. Extension to other stochastic systems The Lur’e stability problem considered in §8. w(k) are independent. The state correlation M (k) satisfies the difference equation L X M (k + 1) = AM (k)AT + Bw W Bw T σi2 Ai M (k)ATi + Bw. State-feedback synthesis Conditions for stabilizability for arbitrary noise intensities are given in geometric terms in [WW83]. of the form à L ! X x(k + 1) = A(k) + σi Ai pi (k) x(k). identically distributed with E w(k) = 0.i W Bw. This extension can be cast in terms of LMIs. while the “direct” method does not. the state correlation M (k) has a (finite) limit M ∞ which satisfies L X M∞ = AM∞ AT + Bw W Bw T σi2 Ai M∞ ATi + Bw. since we only consider “diagonal” perturbations with “re- peated elements”. p114] for a related result). when the white noise input w has a covariance matrix with unknown off-diagonal elements). B1 .. Apart from this work.

with guaranteed convergence). Won68. FR75.g.Notes and References 139 on Linear-Quadratic Regulator theory for continuous-time Itô systems. This electronic version is for personal use and may not be duplicated or distributed. The solution of this problem. [Phi89. . BH88b] for additional details. RSH90]) use homotopy algorithms to solve these equations.) The existing methods (see e. which addresses the problem of finding an input u that minimize a performance index of the form Z ∞ x(t)T Qx(t) + u(t)T Ru(t) dt ¡ ¢ J(K) = E 0 where Q ≥ 0 and R > 0 are given.e.. can be expressed as a (linear. (See [Kle69. constant) state-feedback u(t) = Kx(t) where the gain K is given in terms of the solution of a non-standard Riccati equation. by recognizing that the solution is an extremal point of a certain LMI feasibility set. as found in [McL71] or [Ben92]. with no guarantee of global con- vergence. We can solve these non-standard Riccati equations reliably (i.


Chapter 10

Miscellaneous Problems

10.1 Optimization over an Affine Family of Linear Systems
We consider a family of linear systems,
ẋ = Ax + Bw w,     z = Cz (θ)x + Dzw (θ)w,                                (10.1)

where Cz and Dzw depend affinely on a parameter θ ∈ Rp . We assume that A is
stable and (A, Bw ) is controllable. The transfer function,

Hθ (s) = Cz (θ)(sI − A)−1 Bw + Dzw (θ),
depends affinely on θ.
Several problems arising in system and control theory have the form
minimize     ϕ0 (Hθ )
subject to   ϕi (Hθ ) < αi ,    i = 1, . . . , p                          (10.2)
where ϕi are various convex functionals. These problems can often be recast as LMI
problems. To do this, we will represent ϕi (Hθ ) < αi as an LMI in θ, αi , and possibly
some auxiliary variables ζi (for each i):
Fi (θ, αi , ζi ) > 0.
The general optimization problem (10.2) can then be expressed as the EVP
minimize α0
subject to Fi (θ, αi , ζi ) > 0, i = 0, 1, . . . , p
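
As a purely illustrative sketch (the book itself gives no code), an EVP of this form can be written down directly with a modern semidefinite-programming front end. The fragment below uses the Python package cvxpy with randomly generated placeholder data; the particular functional chosen for ϕ0 — the largest singular value of the feedthrough term Dzw (θ), i.e. the limit of kHθ (iω)k as ω → ∞ — together with all dimensions and variable names are our own assumptions, not part of the text.

import numpy as np
import cvxpy as cp

# Placeholder affine family: Dzw(theta) = Dzw0 + theta_1*Dzw_1 + ... + theta_p*Dzw_p.
rng = np.random.default_rng(0)
nz, nw, p = 2, 2, 3
Dzw0 = rng.standard_normal((nz, nw))
Dzwi = [rng.standard_normal((nz, nw)) for _ in range(p)]

theta = cp.Variable(p)
alpha0 = cp.Variable()
Dzw = Dzw0 + sum(theta[i] * Dzwi[i] for i in range(p))      # affine in theta

# phi_0(H_theta) = sigma_max(Dzw(theta)) <= alpha0, written as the LMI
# [[alpha0*I, Dzw], [Dzw^T, alpha0*I]] >= 0 (a Schur-complement argument).
F0 = cp.bmat([[alpha0 * np.eye(nz), Dzw],
              [Dzw.T,               alpha0 * np.eye(nw)]])

prob = cp.Problem(cp.Minimize(alpha0), [F0 >> 0])
prob.solve(solver=cp.SCS)
print(prob.value, theta.value)

The LMI representations derived in §10.1.1–10.1.5 below would simply be appended to the constraint list in the same way.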

10.1.1 H2 norm
The H2 norm of the system (10.1), i.e.,
kHθ k2^2 = (1/2π) ∫_{−∞}^{∞} Tr Hθ (jω)∗ Hθ (jω) dω,
is finite if and only if Dzw (θ) = 0. In this case, it equals Tr Cz Wc CzT , where Wc > 0 is
the controllability Gramian of the system (10.1) which satisfies (6.6). Therefore, the
H2 norm of the system (10.1) is less than or equal to γ if and only if the following
conditions on θ and γ 2 are satisfied:
Dzw (θ) = 0, Tr Cz (θ)Wc Cz (θ)T ≤ γ 2 .
The quadratic constraint on θ is readily cast as an LMI.
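
A minimal numerical sketch of this constraint, assuming the scipy and cvxpy packages and placeholder data of our own choosing (none of this is from the book): Wc is computed from the Lyapunov equation A Wc + Wc AT + Bw BwT = 0, and the quadratic constraint is entered in the equivalent convex form Tr Cz (θ)Wc Cz (θ)T = kCz (θ)Wc1/2 k_F^2 rather than as a Schur-complement LMI.

import numpy as np
import scipy.linalg as la
import cvxpy as cp

rng = np.random.default_rng(0)
n, nw, nz, p = 4, 2, 2, 3
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))    # assumed stable
Bw = rng.standard_normal((n, nw))
Cz0 = rng.standard_normal((nz, n))
Czi = [rng.standard_normal((nz, n)) for _ in range(p)]      # Cz(theta) = Cz0 + sum_i theta_i*Czi

# Controllability Gramian: A Wc + Wc A^T + Bw Bw^T = 0.
Wc = la.solve_continuous_lyapunov(A, -Bw @ Bw.T)
Wc_half = np.real(la.sqrtm(Wc))

theta = cp.Variable(p)
Cz = Cz0 + sum(theta[i] * Czi[i] for i in range(p))

# Tr Cz(theta) Wc Cz(theta)^T = ||Cz(theta) Wc^{1/2}||_F^2; here we simply
# minimize it over theta (a bound gamma^2 could be imposed as a constraint instead).
h2_squared = cp.sum_squares(Cz @ Wc_half)
prob = cp.Problem(cp.Minimize(h2_squared))
prob.solve(solver=cp.SCS)
print("smallest H2 norm over the family:", np.sqrt(prob.value))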


10.1.2 H∞ norm
From the bounded-real lemma, we have kHθ k∞ < γ if and only if there exists P ≥ 0
such that

     [ AT P + P A    P Bw   ]     [ Cz (θ)T  ]
     [                      ]  +  [          ] [ Cz (θ)   Dzw (θ) ]   ≤  0.
     [ BwT P         −γ2 I  ]     [ Dzw (θ)T ]

Remark: Here we have converted the so-called “semi-infinite” convex constraint
kHθ (iω)k < γ for all ω ∈ R into a finite-dimensional convex (linear matrix)
inequality.
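
For fixed γ, checking this condition jointly in P and θ is a feasibility LMIP. The sketch below (our own construction, with placeholder data and the cvxpy package, not from the book) uses the equivalent Schur-complement form of the inequality, which is affine in (P, θ).

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, nw, nz, p = 4, 2, 2, 3
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))    # assumed stable
Bw = rng.standard_normal((n, nw))
Cz0 = rng.standard_normal((nz, n))
Czi = [rng.standard_normal((nz, n)) for _ in range(p)]
Dzw0 = np.zeros((nz, nw))
Dzwi = [rng.standard_normal((nz, nw)) for _ in range(p)]
gamma = 5.0

P = cp.Variable((n, n), symmetric=True)
theta = cp.Variable(p)
Cz = Cz0 + sum(theta[i] * Czi[i] for i in range(p))
Dzw = Dzw0 + sum(theta[i] * Dzwi[i] for i in range(p))

# Taking the Schur complement of the -I block recovers the quadratic form in the text.
lmi = cp.bmat([[A.T @ P + P @ A, P @ Bw,                   Cz.T],
               [Bw.T @ P,        -gamma**2 * np.eye(nw),   Dzw.T],
               [Cz,              Dzw,                      -np.eye(nz)]])

prob = cp.Problem(cp.Minimize(0), [P >> 0, lmi << 0])
prob.solve(solver=cp.SCS)
print("kH_theta k_inf < gamma for some theta:", prob.status == cp.OPTIMAL)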

10.1.3 Entropy
The γ-entropy of the system (10.1) is defined as
Iγ (Hθ ) = −(γ2 /2π) ∫_{−∞}^{∞} log det(I − γ−2 Hθ (iω)Hθ (iω)∗ ) dω,   if kHθ k∞ < γ,
Iγ (Hθ ) = ∞,   otherwise.

When it is finite, Iγ (Hθ ) is given by Tr BwT P Bw , where P is a symmetric matrix
with the smallest possible maximum singular value among all solutions of the Riccati
equation

     AT P + P A + Cz (θ)T Cz (θ) + (1/γ2 ) P Bw BwT P = 0.
For the system (10.1), the γ-entropy constraint Iγ ≤ λ is therefore equivalent to an
LMI in θ, P = P T , γ2 , and λ:

                   [ AT P + P A    P Bw     Cz (θ)T ]
   Dzw (θ) = 0,    [ BwT P         −γ2 I    0       ]  ≤ 0,     Tr BwT P Bw ≤ λ.
                   [ Cz (θ)        0        −I      ]

10.1.4 Dissipativity
Suppose that w and z are the same size. The dissipativity of Hθ (see (6.59)) exceeds
γ if and only if the LMI in the variables P = P T and θ holds:

     [ AT P + P A        P Bw − Cz (θ)T             ]
     [                                              ]  ≤  0.
     [ BwT P − Cz (θ)    2γI − Dzw (θ) − Dzw (θ)T   ]
We remind the reader that passivity corresponds to zero dissipativity.
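
A possible transcription of this LMI as a feasibility problem, again with the cvxpy package and placeholder data chosen by us (the level γ and the scaling of the data are arbitrary):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, p = 4, 2, 3                                   # w and z both of size m
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
Bw = rng.standard_normal((n, m))
Cz0 = 0.2 * rng.standard_normal((m, n))
Czi = [0.2 * rng.standard_normal((m, n)) for _ in range(p)]
Dzw0 = np.eye(m)
Dzwi = [0.1 * rng.standard_normal((m, m)) for _ in range(p)]
gamma = 0.1                                         # required dissipativity level

P = cp.Variable((n, n), symmetric=True)
theta = cp.Variable(p)
Cz = Cz0 + sum(theta[i] * Czi[i] for i in range(p))
Dzw = Dzw0 + sum(theta[i] * Dzwi[i] for i in range(p))

lmi = cp.bmat([[A.T @ P + P @ A,  P @ Bw - Cz.T],
               [Bw.T @ P - Cz,    2 * gamma * np.eye(m) - Dzw - Dzw.T]])

prob = cp.Problem(cp.Minimize(0), [lmi << 0])
prob.solve(solver=cp.SCS)
print("dissipativity exceeds gamma for some theta:", prob.status == cp.OPTIMAL)

Setting gamma = 0 in this sketch gives a passivity test.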

10.1.5 Hankel norm
The Hankel norm of Hθ is less than γ if and only if the following LMI in Q, θ and γ 2
holds:
Dzw (θ) = 0, AT Q + QA + Cz (θ)T Cz (θ) ≤ 0,
γ2 I − Wc1/2 Q Wc1/2 ≥ 0,     Q ≥ 0.

(Wc is the controllability Gramian defined by (6.6).)
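
An illustrative feasibility check for fixed γ (all data, names, and packages below are our own assumptions, not from the book): we take Dzw independent of θ and equal to zero, compute Wc from the Lyapunov equation, and handle the quadratic term Cz (θ)T Cz (θ) through the usual Schur-complement rewriting so that the constraints are affine in (Q, θ).

import numpy as np
import scipy.linalg as la
import cvxpy as cp

rng = np.random.default_rng(0)
n, nw, nz, p = 4, 2, 2, 3
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
Bw = rng.standard_normal((n, nw))
Cz0 = rng.standard_normal((nz, n))
Czi = [rng.standard_normal((nz, n)) for _ in range(p)]
gamma = 3.0                                   # Dzw = 0 here, so Dzw(theta) = 0 holds trivially

Wc = la.solve_continuous_lyapunov(A, -Bw @ Bw.T)     # controllability Gramian (6.6)
Wc_half = np.real(la.sqrtm(Wc))

Q = cp.Variable((n, n), symmetric=True)
theta = cp.Variable(p)
Cz = Cz0 + sum(theta[i] * Czi[i] for i in range(p))

# A^T Q + Q A + Cz(theta)^T Cz(theta) <= 0, rewritten with a Schur complement:
lmi1 = cp.bmat([[A.T @ Q + Q @ A, Cz.T],
                [Cz,              -np.eye(nz)]])
constraints = [lmi1 << 0,
               gamma**2 * np.eye(n) - Wc_half @ Q @ Wc_half >> 0,
               Q >> 0]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("Hankel norm < gamma achievable over the family:", prob.status == cp.OPTIMAL)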


10.2 Analysis of Systems with LTI Perturbations
In this section, we address the problem of determining the stability of the system
ẋ = Ax + Bp p, q = Cq x + Dqp p, pi = δi ∗ qi , i = 1, . . . , np , (10.3)
where p = [p1 · · · pnp ]T , q = [q1 · · · qnp ]T , and δi , i = 1, . . . , np , are impulse
responses of single-input single-output LTI systems whose H∞ norm is strictly less
than one, and ∗ denotes convolution. This problem has many variations: Some of the
δi may be impulse response matrices, or they may satisfy equality constraints such as
δ1 = δ2 . Our analysis can be readily extended to these cases.
Let Hqp (s) = Cq (sI −A)−1 Bp +Dqp . Suppose that there exists a diagonal transfer
matrix W (s) = diag(W1 (s), . . . , Wnp (s)), where W1 (s), . . . , Wnp (s) are single-input
single-output transfer functions with no poles or zeros on the imaginary axis such
that
sup_{ω∈R} kW (iω)Hqp (iω)W (iω)−1 k < 1.                                 (10.4)

Then from a small-gain argument, the system (10.3) is stable. Such a W is referred
to as a frequency-dependent scaling.
Condition (10.4) is equivalent to the existence of γ > 0 such that
Hqp (iω)∗ W (iω)∗ W (iω)Hqp (iω) − W (iω)∗ W (iω) + γI ≤ 0,
for all ω ∈ R. From spectral factorization theory, it can be shown that such a W
exists if and only if there exists a stable, diagonal transfer matrix V and positive µ
such that
Hqp (iω)∗ (V (iω) + V (iω)∗ )Hqp (iω) − (V (iω) + V (iω)∗ ) + γI ≤ 0,
(10.5)
V (iω) + V (iω)∗ ≥ µI,
for all ω ∈ R.
In order to reduce (10.5) to an LMI condition, we restrict V to belong to an affine,
finite-dimensional set Θ of diagonal transfer matrices, i.e., of the form
Vθ (s) = DV (θ) + CV (θ)(sI − A)−1 B,
where DV and CV are affine functions of θ.
Then, Hqp (−s)T Vθ (s)Hqp (s) − Vθ (s) has a state-space realization (Aaug , Baug ,
Caug (θ), Daug (θ)) where Caug and Daug are affine in θ, and (Aaug , Baug ) is controllable.
From §10.1.4, the condition that Vθ (iω) + Vθ (iω)∗ ≥ µI for all ω ∈ R is equivalent
to the existence of P1 ≥ 0, θ and γ > 0 such that the LMI

     [ AVT P1 + P1 AV            P1 BV − CV (θ)T              ]
     [                                                        ]  ≤  0
     [ (P1 BV − CV (θ)T )T       −(DV (θ) + DV (θ)T ) + µI    ]

holds.
The condition Hqp (iω)∗ (Vθ (iω) + Vθ (iω)∗ )Hqp (iω) − (Vθ (iω) + Vθ (iω)∗ ) + γI ≤ 0
is equivalent to the existence of P2 = P2T and θ satisfying the LMI

     [ AaugT P2 + P2 Aaug              P2 Baug + Caug (θ)T          ]
     [                                                              ]  ≤  0.         (10.6)
     [ (P2 Baug + Caug (θ)T )T         Daug (θ) + Daug (θ)T + γI    ]
(See the Notes and References for details.) Thus proving stability of the system (10.3)
using a finite-dimensional set of frequency-dependent scalings is an LMIP.
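
The sketch below is deliberately simpler than the test just described: it treats only the special case of constant (frequency-independent) diagonal scalings W, for which the condition reduces to the well-known scaled small-gain LMI in P and S = W T W (essentially the quadratic-stability condition for the corresponding DNLDI, cf. Chapter 5). The full frequency-dependent test would instead require forming the augmented realization (Aaug , Baug , Caug (θ), Daug (θ)) above. The cvxpy package, the data, and all names are our own assumptions.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, n_p = 4, 3                               # n_p scalar LTI perturbation channels
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))
Bp = rng.standard_normal((n, n_p))
Cq = 0.3 * rng.standard_normal((n_p, n))
Dqp = 0.1 * rng.standard_normal((n_p, n_p))

P = cp.Variable((n, n), symmetric=True)
s = cp.Variable(n_p)                        # S = diag(s), with S = W^T W for constant diagonal W
S = cp.diag(s)

# Scaled bounded-real condition ||W Hqp W^{-1}||_inf < 1 with constant diagonal W:
lmi = cp.bmat([[A.T @ P + P @ A + Cq.T @ S @ Cq,  P @ Bp + Cq.T @ S @ Dqp],
               [Bp.T @ P + Dqp.T @ S @ Cq,        Dqp.T @ S @ Dqp - S]])

eps = 1e-6
constraints = [P >> eps * np.eye(n), s >= eps, lmi << -eps * np.eye(n + n_p)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("constant diagonal scaling found (stability certified):", prob.status == cp.OPTIMAL)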


10.3 Positive Orthant Stabilizability
The LTI system ẋ = Ax is said to be positive orthant stable if x(0) ∈ Rn+ implies that
x(t) ∈ Rn+ for all t ≥ 0, and x(t) → 0 as t → ∞. It can be shown that the system
ẋ = Ax is positive orthant stable if and only if Aij ≥ 0 for i ≠ j, and there exists
a diagonal P > 0 such that P AT + AP < 0. Therefore checking positive orthant
stability is an LMIP.
We next consider the problem of finding K such that the system ẋ = (A + BK)x
is positive orthant stable. Equivalently, we seek a diagonal Q > 0 and K such that
Q(A + BK)T + (A + BK)Q < 0, with the off-diagonal entries of A + BK being
nonnegative. Since Q > 0 is diagonal, the last condition holds if and only if all
the off-diagonal entries of (A + BK)Q are nonnegative. Therefore, with the change of
variables Y = KQ, proving positive orthant stabilizability for the system ẋ = Ax+Bu
is equivalent to finding a solution to the LMIP with variables Q and Y :

Q > 0,    (AQ + BY )ij ≥ 0, i ≠ j,    QAT + Y T B T + AQ + BY < 0.        (10.7)

Remark: The method can be extended to invariance of more general (polyhedral)
sets. Positive orthant stability and stabilizability of LDIs can be handled in a
similar way. See the Notes and References.
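
A direct transcription of the LMIP (10.7), using the cvxpy package with a small example of our own (the matrices, the tolerance eps used in place of the strict inequalities, and the variable names are all assumptions, not from the book):

import numpy as np
import cvxpy as cp

# Small example; this A is already Metzler and Hurwitz, so the problem is feasible.
A = np.array([[-2.0, 1.0, 0.0],
              [0.5, -3.0, 1.0],
              [0.0, 1.0, -2.0]])
B = np.array([[1.0], [0.0], [0.0]])
n, m = A.shape[0], B.shape[1]

q = cp.Variable(n)                     # diagonal entries of Q
Y = cp.Variable((m, n))
Q = cp.diag(q)
M = Q @ A.T + Y.T @ B.T + A @ Q + B @ Y

eps = 1e-6
constraints = [q >= eps, M << -eps * np.eye(n)]
constraints += [(A @ Q + B @ Y)[i, j] >= 0
                for i in range(n) for j in range(n) if i != j]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
if prob.status == cp.OPTIMAL:
    K = Y.value @ np.linalg.inv(np.diag(q.value))      # state-feedback gain K = Y Q^{-1}
    print("K =", K)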

10.4 Linear Systems with Delays
Consider the system described by the delay-differential equation
dx(t)/dt = Ax(t) + Σ_{i=1}^{L} Ai x(t − τi ),                             (10.8)

where x(t) ∈ Rn , and τi > 0. If the functional
V (x, t) = x(t)T P x(t) + Σ_{i=1}^{L} ∫_0^{τi} x(t − s)T Pi x(t − s) ds,   (10.9)

where P > 0, P1 > 0, . . . , PL > 0, satisfies dV (x, t)/dt < 0 for every x satisfying (10.8),
then the system (10.8) is stable, i.e., x(t) → 0 as t → ∞.
It can be verified that dV (x, t)/dt = y(t)T W y(t), where
y(t) = [ x(t)T   x(t − τ1 )T   · · ·   x(t − τL )T ]T and

        [ AT P + P A + Σ_{i=1}^{L} Pi     P A1     · · ·    P AL ]
        [ A1T P                           −P1      · · ·    0    ]
   W =  [    ..                             ..       ..      ..  ]
        [     .                              .        .       .  ]
        [ ALT P                            0       · · ·    −PL  ]

Therefore, we can prove stability of system (10.8) using Lyapunov functionals of the
form (10.9) by solving the LMIP W < 0, P > 0, P1 > 0, . . . , PL > 0.
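
A sketch of this LMIP for a small example, assuming the cvxpy package (the data and names are placeholders, not from the book). Note that W does not involve the delays τi themselves, so a feasible solution certifies stability for arbitrary positive delays.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, L = 3, 2
A = -3.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
Ai = [0.2 * rng.standard_normal((n, n)) for _ in range(L)]

P = cp.Variable((n, n), symmetric=True)
Pi = [cp.Variable((n, n), symmetric=True) for _ in range(L)]

# Assemble W block by block, following the expression for dV/dt in the text.
first_row = [A.T @ P + P @ A + sum(Pi)] + [P @ Ai[i] for i in range(L)]
rows = [first_row]
for i in range(L):
    rows.append([Ai[i].T @ P]
                + [-Pi[i] if j == i else np.zeros((n, n)) for j in range(L)])
W = cp.bmat(rows)

eps = 1e-6
constraints = [P >> eps * np.eye(n)]
constraints += [Pi[i] >> eps * np.eye(n) for i in range(L)]
constraints += [W << -eps * np.eye(n * (L + 1))]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("Lyapunov-Krasovskii certificate found:", prob.status == cp.OPTIMAL)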


9) proves the stability of the system (10.5 Interpolation Problems 10. Multiplying every block entry of W on the left and on the right by P −1 and setting Q = P −1 .2 to check its stability.. . .10) is stable. . . with ui ∈ Cq and v1 .10. discussed in Chapter 5. λm with λi ∈ C+ = {s | Re s > 0} and distinct. . P1 . checking stabilizability of the system (10..     ATL P 0 · · · −PL for some P . We also note that we can regard the delay elements of system (10. .2 to eliminate the matrix variable Y and obtain an equivalent LMIP with fewer variables. PL > 0. QL .5 Interpolation Problems 145 Remark: The techniques for proving stability of norm-bound LDIs. Y . . . it can be verified that the LMI (5. . .. . .. there exists a state-feedback gain K such that a Lya- punov functional of the form (10. and use the techniques of §10.. i = 1. From the discussion above.15) that we get for quadratic stability of a DNLDI.   .8) and consider L d X x(t) = Ax(t) + Bu(t) + Ai x(t − τi ). ..    . . . can also be used on the system (10. if   XL  (A + BK)T P + P (A + BK) + Pi P A 1 · · · P A L     i=1  T A1 P −P1 · · · 0     W =  <0 . . in the case when the state x is a scalar.8) using Lyapunov functionals of the form (10. . .9) is an LMIP in the variables Q. .6. Remark: It is possible to use the elimination procedure of §2. . .10) dt i=1 We seek a state-feedback u(t) = Kx(t) such that the system (10. we obtain the condition   XL  AQ + QAT + BY + Y T B T + Qi A 1 Q · · · A L Q     i=1  T −Q · · ·    QA 1 1 0  X=  < 0. . .. . is the same as the LMI W < 0 above.         T QAL 0 · · · −QL Thus.10). . we add an input u to the system (10. Q1 .1 Tangential Nevanlinna-Pick problem ∆ Given λ1 . . . vm . .. .    . .8) as convolution operators with unit L2 gain. m. (10.  . 10. um . Qi = P −1 Pi P −1 and Y = KP −1 .8). . . . with vi ∈ Cp .5. u1 . . . . the tangential Nevanlinna-Pick This electronic version is for personal use and may not be duplicated or distributed. Next.


B) is stabilizable. and n = [n1 · · · nm ]. minimize ². and ² is the measurement precision. ni is the (unknown) measurement error. • Prior assumption. the LQR problem is to determine u that minimizes Z ∞ z T z dt. Here. Solving this EVP answers the question “Given α and a bound on α-shifted H∞ norm of the system. x(0) = x0 . there exist H and n with H(ωi ) = fi + ni satisfying these conditions if and only if there exist ni with |ni | ≤ ². For some ni with |ni | ≤ ². where A = diag(jω1 . α-shifted H∞ norm of H does not exceed M . fi is the measurement of the frequency response at frequency ωi . jωm ). 0 R1/2 u with (A. ∗ (A + αI) Gin + Gin (A + αI) − e∗ e = 0. 0 The solution of this problem can be expressed as a state-feedback u = Kx with K = −R−1 B T P . 10. what is the “smallest” possible noise such that the measurements are consistent with the given values of α and M . minimize M . The inverse problem of optimal control is the following. where P is the unique nonnegative solution of AT P + P A − P BR−1 B T P + Q = 0. with |ni | ≤ ². z= . With this observation. From Nevanlinna-Pick theory. . ∗ (A + αI) Gin + Gin (A + αI) − e∗ e = 0. Solving this GEVP answers the question “Given α and a bound on the noise values. f = [f1 · · · fm ].6 The Inverse Problem of Optimal Control 147 • Consistency with measurements. • For fixed α and ². such that (Q. (Q. Gin > 0 and Gout > 0 such that M 2 Gin − Gout ≥ 0. Given a matrix K. It can be shown that these conditions are equivalent to M 2 Gin − Gout ≥ 0. determine if there exist Q ≥ 0 and R > 0. . A) is detectable and R > 0. we can answer a number of interesting questions in fre- quency response identification by solving EVPs and GEVPs. ∗ (A + αI) Gout + Gout (A + αI) − (f + n)∗ (f + n) = 0. e = [1 · · · 1]. A) is detectable and u = Kx is the This electronic version is for personal use and may not be duplicated or distributed.6 The Inverse Problem of Optimal Control Given a system " #" # Q1/2 0 x ẋ = Ax + Bu. . . we have fi = H(jωi ) + ni .10. what is the smallest possible α-shifted H∞ norm of the system consistent with the measurements of the frequency response?” • For fixed α and M . ∗ (A + αI) Gout + Gout (A + αI) − (f + n)∗ (f + n) ≥ 0. .

15).13) where x : Z+ → Rn . suppose that the state quantization noise is modeled as a white noise sequence w(k) with E w(k)T w(k) = η. u : Z+ → Rnu y : Z+ → Rny . ° ° (10.15) is equivalent to the existence of P > 0 such that " # AT P A − P + T −T T −1 AT P B ≤ 0. This yields the bound ° −1 °T (zI − A)−1 B ° ≤ 1/α. (10. the input-to-state transfer matrix.14) satisfies two competing constraints: First. and second. T −1 (zI − A)−1 B should be “small”. we seek R > 0 and Q ≥ 0 such that there exists P nonnegative and P1 positive-definite satisfying (A + BK)T P + P (A + BK) + K T RK + Q = 0. A) being detectable. such that the new realization x̄(k + 1) = T −1 AT x̄(k) + T −1 Bu(k) y(k) = CT x̄(k). injected directly into the state. (10. Suppose we assume that the RMS value of the input is bounded by α and require the RMS value of the state to be less than one. where Wo is the observability Gramian of the original system (A. R and Q.) 10. The constraint (10.148 Chapter 10 Miscellaneous Problems optimal control for the corresponding LQR problem. C).16) Our problem is then to compute T to minimize the noise power (10. BT P A B T P B − I/α2 The output noise power can be expressed as p2noise = η 2 Tr T T Wo T. (The condition involving P1 is equivalent to (Q. P1 .15) where the H∞ norm of the transfer matrix H(z) is defined as kHk∞ = sup {kH(z)k | |z| > 1} . C(zI −A)−1 T should be “small”. B T P + RK = 0 and AT P1 + P1 A < Q. given by ∞ X Wo = (AT )k C T CAk .16) subject to the overflow avoidance constraint (10. This is an LMIP in P . B. Next. and its effect on the output is measured by its RMS value. which is just η times the H2 norm of the state-to-output transfer matrix: pnoise = η °C(zI − A)−1 T °2 . y(k) = Cx(k). The system realization problem is to find a change of state coordinates x = T x̄. k=0 c 1994 by the Society for Industrial and Applied Mathematics. in order to minimize effects of state quantization at the output. the state-to-output transfer matrix.7 System Realization Problems Consider the discrete-time minimal LTI system x(k + 1) = Ax(k) + Bu(k). ° ∞ (10. in order to avoid state overflow in numerical implementa- tion. Equivalently. Copyright ° .

17).15). Then. X>0 R > 0. X > 0. (10.10. where Xopt is an optimal value of X in the GEVP.18) BT P A B T P B − R/α2 where R > 0 is diagonal. Y . P > 0.7 System Realization Problems 149 Wo is the solution of the Lyapunov equation AT Wo A − Wo + C T C = 0. (10. that is. # R is diagonal.17). ≥0 I X Similar methods can be used to handle several variations and extensions. The constraint kC(zI − A)−1 T k∞ ≤ γ is equivalent to the LMI (over Y > 0 and X = T −T T −1 > 0): " # AT Y A − Y + C T C AT Y ≤ 0. using the methods of §10.19) YA Y − X/α2 Therefore choosing T to minimize kC(zI − A)−1 T k∞ subject to (10.18). Y > 0.17) BT P A B T P B − I/α2 This is a convex problem in X and P . (10. P and R) is minimize η 2 Tr Y Wc subject to (10. we can measure the effect of the quantization noise on the output with the H∞ norm. (10. P and R): " # AT P A − P + X AT P B ≤ 0. and can be transformed into the EVP minimize η 2 Tr Y Wc " # Y I subject to (10. Input u has RMS value bounded componentwise Suppose the RMS value of each component ui is less than α (instead of the RMS value of u being less than α) and that the RMS value of the state is still required to be less than one. P > 0. P > 0. This electronic version is for personal use and may not be duplicated or distributed. " Tr R = 1 Y I ≥0 I X Minimizing state-to-output H∞ norm Alternatively. With X = T −T T −1 .15) is the GEVP minimize γ subject to P > 0. . (10. so that the EVP (over X.9. an equivalent condition is the LMI (with variables X. we can choose T to minimize kC(zI − A)−1 T k∞ subject to the constraint (10.19) An optimal state coordinate transformation T is any matrix that satisfies X opt = (T T T )−1 . with unit trace. the realization problem is: Minimize η 2 Tr Wo X −1 subject to X > 0 and the LMI (in P > 0 and X) " # AT P A − P + X AT P B ≤ 0.

150 Chapter 10 Miscellaneous Problems 10. . This is a convex optimization problem. . p. p. The standard Linear-Quadratic Gaussian (LQG) problem is to minimize Jlqg = lim E z(t)T z(t) t→∞ over u. The optimal cost is given by ∗ Jlqg = Tr (Xlqg U + QYlqg ) . (10. (10. Ri > 0. i = 0. and that (C. i=1 i=1 Noting that Xlqg ≥ X for every X > 0 that satisfies AT X + XA − XBR−1 B T X + Q ≥ 0. 0 Ri u We assume that (Q0 .21) t→∞ 0 The multi-criterion LQG problem is to minimize Jlqg over u subject to the measurabil- i ity condition and the constraints Jlqg < γi .20) with p X p X Q = Q0 + τi Q i and R = R0 + τi R i . we have several exogenous outputs of interest given by " 1/2 #" # Qi 0 x zi = 1/2 . 0 R1/2 u where u is the control input. . In the multi-criterion LQG problem. we further assume that Q ≥ 0 and R > 0. . y is the measured output. i = 0. A) and (Q. positive-definite spectral density matrices W and V respectively. i = 1. and U = Ylqg C T V −1 CYlqg . we conclude that the optimal cost is the maximum of p p à ! X X Tr XU + (Q0 + τi Qi )Ylqg − γ i τi . where Xlqg and Ylqg are the solutions of (10. . Qi ≥ 0.20) AYlqg + Ylqg AT − Ylqg C T V −1 CYlqg + W = 0. . .8 Multi-Criterion LQG Consider the LTI system given by " #" # Q1/2 0 x ẋ = Ax + Bu + w. τp . whose solution is given by maximizing p X Tr (Xlqg U + QYlqg ) − γ i τi . z is the exogenous output. For each zi . p. that (A. . . . . . subject to the condition that u(t) is measurable on y(τ ) for τ ≤ t. A) is observable. we associate a cost function i Jlqg = lim E zi (t)T zi (t). where Xlqg and Ylqg are the unique positive-definite solutions of the Riccati equations AT Xlqg + Xlqg A − Xlqg BR−1 B T Xlqg + Q = 0. . . i=1 over nonnegative τ1 . A) are observable. i=1 i=1 c 1994 by the Society for Industrial and Applied Mathematics. . y = Cx + v. B) is controllable. z= . w and v are independent white noise signals with constant. Copyright ° . .

is computed by solving the EVP in P = P T Xp maximize x(0)T P x(0) − τi γ i i=1 " # (10. i=1 i=1 i=1 where τ1 ≥ 0. . i=1 i=1 Computing this is an EVP. . i = 1. we consider the LTI system ẋ = Ax + Bu. For any u. we define p p ∆ X X S(τ. the infimum over u of S(τ. τp ≥ 0. . u). 0 CiT Ri u Here the symmetric matrices " # Qi Ci .23) AT P + P A + Q P B + C T subject to ≥0 BT P + C R Therefore. τp ≥ 0 and à p !−1 p X X T T A X + XA − XB R0 + τi R i B X + Q0 + τi Qi ≥ 0. . p. . .9 Nonconvex Multi-Criterion Quadratic Problems 151 over X. . . C = C0 + τi C i . τ1 . . u) = J0 + τi J i − τi γ i . . . τp .10. i = 0. . . . . . . CiT Ri are not necessarily positive-definite. . x(t) → 0 as t → ∞ The solution to this problem proceeds as follows: We first define p X p X p X Q = Q0 + τi Q i . . with τ = [τ1 · · · τp ]. when it is not −∞. 10. τ1 ≥ 0. . p.22) subject to J i ≤ γi . . . B) is controllable. . when it is not −∞. i = 0. where (A. . τp ≥ 0. . Next. . is given by: sup inf {S(τ. . . τ1 ≥ 0. i=1 i=1 Then. the solution of problem (10. the optimal control problem is solved by the EVP in P = P T and τ : p X T maximize x(0) P x(0) − τi γ i " i=1 # AT P + P A + Q P B + C T subject to ≥ 0. R = R0 + τi R i . p. (10. x(0) = x0 . .9 Nonconvex Multi-Criterion Quadratic Problems In this section. . . u) | x(t) → 0 as t → ∞ } τ u (See the Notes and References.) For any fixed τ . BT P + C R This electronic version is for personal use and may not be duplicated or distributed.22). . we define a set of p + 1 cost indices J0 . The constrained optimal control problem is: minimize J0 . subject to X > 0. . Jp by Z ∞ " #" # T T Qi Ci x Ji (u) = [x u ] dt.

our results extend theirs to the multi-input case. It is interesting to note that the “structured stability problem” or “µ-analysis problem” can be cast in terms of linear systems with delays. delays) then the system is stable for all ∆ with k∆k∞ ≤ 1. Diagonal quadratic Lyapunov functions also arise in the study of large-scale systems [MH78]. A related paper is by Geromel [Ger85].1. [Bit88. HB91b] and references therein. In particular we see that Krasovskii’s method from 1956 can be interpreted as a Lyapunov-based µ-analysis method. The study of positive orthant holdability draws heavily from the theory of positive matrices. Stabilization of systems with time delays The introduction of Lyapunov functionals of the form (10.4. Neumann and Stern [BNS89. ch6]. positive delays. see e.g. we refer the reader to the Notes and References in Chap- ter 3. using the approach of [CH93]. It can be shown (see [Boy86]) that if the feedback system is stable for all ∆ which are diagonal. verifying supω∈R µ(H(jω)) < 1 can be cast as checking stability of a linear system with L arbitrary. with ∆ii (s) = e−sTi . see Boyd and Barratt [BB91] and the references therein.4]. Skorodinskii [Sko90] observed that the problem of proving stability of a system with delays via Lyapunov–Krasovskii functionals is a convex problem.. SL93b. Finally. Thus.152 Chapter 10 Miscellaneous Problems Notes and References Optimization over an affine family of linear systems Optimization over an affine family of transfer matrices arises in linear controller design via Youla’s parametrization of closed-loop transfer matrices.3 are easily generalized to the LDIs considered in Chapter 4. FBB92]. A survey of results and applications of diagonal Lyapunov stability is given in Kaszkurewicz and Bhaya [KB93]. See also [PF92. where µ denotes the structured singular value (see [Doy82]). WBL92. we note that the results of §10. Ti > 0 (i. He casts the problem as an EVP. Consider a stable LTI system with transfer matrix H and L inputs and outputs. references for which are [BNS89] and the book by Berman and Plemmons [BP79.6) is derived from Theorems 3 and 4 in Willems [Wil71b]. The problem of checking whether a given polyhedron is invariant as well as the associated state-feedback synthesis problem have been considered by many authors. Other examples include finite-impulse-response (FIR) filter design (see for example [OKUI88]) and antenna array weight design [KRB89]. c 1994 by the Society for Industrial and Applied Mathematics. The LMI method of §10.3 can be extended to the polyhedral case.e. BBT90. which is a problem of the form considered in §10. Diagonal solutions to the Lyapunov equation play a central role in the positive orthant stabilizability and holdability problems. They solve the positive orthant holdability problem when the control input is scalar. who gives a computational procedure to find diagonal solutions of the Lyapunov equation. Positive orthant stabilizability The problem of positive orthant stability and holdability was extensively studied by Berman. connected in feedback with a diagonal transfer matrix ∆ with H∞ norm less than one. In [Kav94] Kavranoğlu considers the problem of approximating in the H∞ norm a given transfer matrix by a constant matrix. Copyright ° . see also §3. This system is stable for all such ∆ if and only if supω∈R µ(H(jω)) < 1. A generalization of the concept of positive orthant stability is that of invariance of polyhedral sets. §7.9) is due to Krasovskii [Kra56].. LMI (10. 
Stability of systems with LTI perturbations For references about this problem.

. Zames and Francis. Gohberg and Rodman [BGR90]. Other system identification problems can be cast in terms of LMI problems. The inverse problem of optimal control This problem was first considered by Kalman [Kal64]. System identification The system identification problem considered in §10. Safonov [Saf86] reduces the design of controllers for systems with structured perturbations to the scaled matrix Nevanlinna-Pick problem.6]. also see [OF85]. see also the references cited there.. Kimura reduces a class of H∞ -optimal controller design problems to the tangential Nevanlinna-Pick problem.e. Skelton and Grigoriadis discuss the problem of optimal finite word-length controller implementa- tion [LSG92].Notes and References 153 Interpolation problems The classical Nevanlinna-Pick problem dates back at least to 1916 [Pic16. Liu. dt This electronic version is for personal use and may not be duplicated or distributed. Genin and Kamp discuss the role of the matrix Nevanlinna- Pick problem in circuit and systems theory [DGK81]. indirectly. its extensions and variations can be found in the book by Ball. A number of researchers have studied the application of Nevanlinna-Pick theory to problems in system and control: Delsarte.. that the (nonlinear) system not exhibit limit cycles due to quantization or overflow. RW94a].. Boyd and Doyle [BD87]. Fujii and Narazaki [FN84] and the references therein. in [ZF83]. a bound on the peak gain from input to state. u = K x̂. See also Rotea and Williamson [RW94b.5. The book by Gevers and Li [GL93a] describes a number of digital filter realization problems. [Wil74b].e. Rotea [Rot93] calls this problem “generalized H2 control”. Thi84]. see e. He shows that one can restrict attention to observer-based controllers such as d x̂ = (A + BK)x̂ + Y C T V −1 (y − C x̂).g. It is discussed in great detail by Anderson and Moore [AM90. which give an- alytical solutions via balanced realizations for special realization problems.3 can be found in [CNF92]. System realization problems For a discussion of the problem of system realizations see [MR76.g. see. see also [And66c]. In [Kim87]. by checking that the return difference inequality I − B T (−iωI − AT )−1 K R I − K T (iωI − A)−1 B ≥ R ¡ ¢ ¡ ¢ holds for all ω ∈ R. There are two matrix versions of this problem: the matrix Nevanlinna-Pick problem [DGK79] and the tangential Nevanlinna-Pick problem [Fed75]. Chang and Pearson [CP84] solve a class of H∞ -optimal controller design problems using matrix Nevanlinna-Pick theory. the most general form of the inverse optimal control problem. study the implications of interpolation conditions in controller design. i. . they solve the problem when the control weighting matrix R is known. i. We also note that an H∞ norm constraint on the input-to-state transfer matrix yields. see also [Bal94]. We conjecture that the LMI approach to system realization can also incorporate the constraint of stability. A comprehensive description of the Nevanlinna-Pick problem. Our result handles the case of unknown R. Nev19]. e. Multi-criterion LQG The multi-criterion LQG problem is discussed in [BB91]. §5. [PKT + 94]. This can be used to guarantee no overflow given a maximum peak input level. who solved it when the control u is scalar.

Multi-criterion LQG
The multi-criterion LQG problem is discussed in [BB91]. Rotea [Rot93] calls this problem "generalized H2 control". He shows that one can restrict attention to observer-based controllers such as

    dx̂/dt = (A + BK)x̂ + Y C^T V^{-1}(y − Cx̂),    u = Kx̂,

where Y is given by (10.20). The covariance of the residual state x̂ − x is Y, and the covariance P of the state x̂ is given by the solution of the Lyapunov equation

    (A + BK)P + P(A + BK)^T + Y C^T V^{-1} C Y = 0.

The value of each cost index J_lqg^i defined by (10.21) is then

    J_lqg^i = Tr R_i K P K^T + Tr Q_i (Y + P).

Now, taking Z = KP as a new variable, we replace (10.24) by the linear equality in Z and P

    AP + PA^T + BZ + Z^T B^T + Y C^T V^{-1} C Y = 0,

and each cost index becomes

    J_lqg^i = Tr R_i Z P^{-1} Z^T + Tr Q_i (Y + P).

The multi-criterion problem is then solved by the EVP in P, Z and X

    minimize    Tr R_0 X + Tr Q_0 (P + Y)
    subject to  [ X    Z ]
                [ Z^T  P ] ≥ 0,
                AP + PA^T + BZ + Z^T B^T + Y C^T V^{-1} C Y = 0,
                Tr R_i X + Tr Q_i (P + Y) ≤ γ_i,   i = 1, ..., L.

This formulation of the multi-criterion LQG problem is the dual of our formulation. The EVP (10.23) is derived from Theorem 3 in Willems [Wil71b]. Also, the two formulations can be used together in an efficient primal-dual method (see, e.g., [VB93b]). We also mention an algorithm devised by Zhu, Rotea and Skelton [ZRS93] to solve a similar class of problems. For a variation on the multi-criterion LQG problem discussed here, see [Toi84].
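Once Y is fixed, the dual EVP above is a semidefinite program in (P, Z, X) and can be prototyped directly with present-day convex-optimization tools. The sketch below is a minimal illustration under stated assumptions, not a definitive implementation: the data A, B, the constant matrix C^T V^{-1} C, the error covariance Y, the weights (Q_0, R_0) and (Q_i, R_i), and the bounds γ_i are all assumed given, and the function and argument names are introduced here for illustration only.

```python
# Minimal sketch of the dual multi-criterion LQG EVP, using CVXPY.
# CtVinvC stands for the constant matrix C' V^{-1} C; Y is the (given) error covariance.
import numpy as np
import cvxpy as cp

def multicriterion_lqg_dual(A, B, Y, CtVinvC, Q0, R0, Qs, Rs, gammas):
    n, m = A.shape[0], B.shape[1]
    P = cp.Variable((n, n), symmetric=True)
    Z = cp.Variable((m, n))                   # the new variable Z = K P
    X = cp.Variable((m, m), symmetric=True)   # bounds Z P^{-1} Z'
    N = Y @ CtVinvC @ Y                       # constant term Y C' V^{-1} C Y
    blk = cp.bmat([[X, Z], [Z.T, P]])
    cons = [0.5 * (blk + blk.T) >> 0,         # [X Z; Z' P] >= 0, symmetrized explicitly
            A @ P + P @ A.T + B @ Z + Z.T @ B.T + N == 0]
    for Qi, Ri, gi in zip(Qs, Rs, gammas):
        cons.append(cp.trace(Ri @ X) + cp.trace(Qi @ (P + Y)) <= gi)
    prob = cp.Problem(cp.Minimize(cp.trace(R0 @ X) + cp.trace(Q0 @ (P + Y))), cons)
    prob.solve()
    K = Z.value @ np.linalg.inv(P.value)      # recover the gain, assuming P is nonsingular
    return K, prob.value
```

The block constraint enforces X ≥ Z P^{-1} Z^T, which is what allows the quadratic term Tr R_i Z P^{-1} Z^T in each cost index to be replaced by the linear term Tr R_i X.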

Nonconvex multi-criterion quadratic problems
This section is based on Megretsky [Meg92a, Meg92b, Meg93] and Yakubovich [Yak92]; see also [TM89, Mak91]. In our discussion we omitted important technical details, such as constraint regularity, which is covered in these articles. Yakubovich gives conditions under which the infimum is actually a minimum (that is, there exists an optimal control law that achieves the best performance).

Many of the sufficient conditions for NLDIs in Chapters 5-7 can be shown to be necessary and sufficient conditions when we consider integral constraints on p and q, by analyzing them as nonconvex multi-criterion quadratic problems. This is illustrated in the following section.

Mixed H2-H∞ problem
We consider the analog of an NLDI with integral quadratic constraint:

    ẋ = Ax + B_p p + B_u u,    q = C_q x,
    z = [ Q^{1/2}   0       ] [ x ]
        [ 0         R^{1/2} ] [ u ],
    ∫_0^∞ p^T p dt ≤ ∫_0^∞ q^T q dt.

Define the cost function J = ∫_0^∞ z^T z dt. Our goal is to determine min_K max_p J over all possible linear state-feedback strategies u = Kx. (Note that this is the same as the problem considered in §7.1, with integral constraints on p and q instead of pointwise-in-time constraints.) For fixed K, finding the maximum of J over admissible p is done using the results described above; indeed, in this particular case, we know we can add the constraint P > 0. This maximum is obtained by solving the EVP

    minimize    x(0)^T P x(0)
    subject to  [ (A + B_u K)^T P + P(A + B_u K) + Q + K^T R K + τ C_q^T C_q    P B_p ]
                [ B_p^T P                                                       −τ I  ] ≤ 0,
                τ ≥ 0.

The minimization over K is now simply done by introducing the new variables W = P^{-1}, Y = KW and µ = 1/τ. The corresponding EVP is then

    minimize    x(0)^T W^{-1} x(0)
    subject to  [ W A^T + Y^T B_u^T + A W + B_u Y + W Q W + Y^T R Y + W C_q^T C_q W/µ    µ B_p ]
                [ µ B_p^T                                                                −µ I  ] ≤ 0,
                µ ≥ 0,

which is the same as the EVP obtained in Chapter 7.
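The quadratic terms W Q W, Y^T R Y and W C_q^T C_q W/µ in the last EVP can be removed with Schur complements, which turns the constraint into a genuine LMI in (W, Y, µ). The sketch below is a minimal illustration of that reformulation under stated assumptions, not a definitive implementation: the data A, B_p, B_u, C_q, Q, R and x(0) are assumed given, the CVXPY, NumPy and SciPy packages are assumed available, and the function name and the small regularization constant are choices made here.

```python
# Minimal sketch of the EVP in (W, Y, mu) above, with the quadratic terms
# W Q W, Y'R Y and W Cq'Cq W / mu expanded by Schur complements so that the
# matrix constraint is affine in the variables.
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

def mixed_h2_hinf_sfb(A, Bp, Bu, Cq, Q, R, x0, eps=1e-6):
    n, np_, m, nq = A.shape[0], Bp.shape[1], Bu.shape[1], Cq.shape[0]
    Qh, Rh = np.real(sqrtm(Q)), np.real(sqrtm(R))   # matrix square roots Q^{1/2}, R^{1/2}
    W = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))                         # Y = K W
    mu = cp.Variable(nonneg=True)                   # mu = 1/tau
    O = lambda r, c: np.zeros((r, c))
    M11 = A @ W + W @ A.T + Bu @ Y + Y.T @ Bu.T
    big = cp.bmat([
        [M11,        mu * Bp,            W @ Qh,      Y.T @ Rh,    W @ Cq.T],
        [mu * Bp.T,  -mu * np.eye(np_),  O(np_, n),   O(np_, m),   O(np_, nq)],
        [Qh @ W,     O(n, np_),          -np.eye(n),  O(n, m),     O(n, nq)],
        [Rh @ Y,     O(m, np_),          O(m, n),     -np.eye(m),  O(m, nq)],
        [Cq @ W,     O(nq, np_),         O(nq, n),    O(nq, m),    -mu * np.eye(nq)],
    ])
    cons = [0.5 * (big + big.T) << 0,               # the Schur-expanded LMI
            W >> eps * np.eye(n), mu >= eps]        # approximate strict feasibility
    prob = cp.Problem(cp.Minimize(cp.matrix_frac(x0, W)), cons)  # x0' W^{-1} x0
    prob.solve()
    K = Y.value @ np.linalg.inv(W.value)            # recover K = Y W^{-1}
    return K, prob.value, mu.value
```

Eliminating the last three block rows and columns by a Schur complement recovers exactly the constraint displayed above, so the two formulations are equivalent whenever µ > 0 and W > 0.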

Notation

R, R^k, R^{m×n}   The real numbers, real k-vectors, real m × n matrices.
R_+               The nonnegative real numbers.
C                 The complex numbers.
Re(a)             The real part of a ∈ C, i.e., (a + a*)/2, where a* denotes the complex-conjugate of a ∈ C.
I_k               The k × k identity matrix. The subscript is omitted when k is not relevant or can be determined from context.
M^T               Transpose of a matrix M: (M^T)_{ij} = M_{ji}.
M*                Complex-conjugate transpose of a matrix M: (M*)_{ij} = (M_{ji})*.
Tr M              Trace of M ∈ R^{n×n}, i.e., Σ_{i=1}^n M_{ii}.
M > 0             M is symmetric and positive-definite, i.e., M = M^T and z^T M z > 0 for all nonzero z ∈ R^n.
M ≥ 0             M is symmetric and positive semidefinite, i.e., M = M^T and z^T M z ≥ 0 for all z ∈ R^n.
M > N             M and N are symmetric and M − N > 0.
λ_max(M)          The maximum eigenvalue of the matrix M = M^T.
λ_min(M)          The minimum eigenvalue of M = M^T.
λ_max(P, Q)       For P = P^T, Q = Q^T > 0, λ_max(P, Q) denotes the maximum eigenvalue of the symmetric pencil (P, Q), i.e., λ_max(Q^{-1/2} P Q^{-1/2}).
M^{1/2}           For M > 0, M^{1/2} is the unique Z = Z^T such that Z > 0 and Z^2 = M.
||M||             The spectral norm of a matrix or vector M, i.e., √(λ_max(M^T M)). Reduces to the Euclidean norm, ||x|| = √(x^T x), for a vector x.
S̄                 The closure of a set S ⊆ R^n.
diag(···)         Block-diagonal matrix formed from the arguments: diag(M_1, ..., M_m) is the block-diagonal matrix with diagonal blocks M_1, ..., M_m.

Co S              Convex hull of the set S ⊆ R^n, given by

                      Co S = { Σ_{i=1}^p λ_i x_i : x_i ∈ S, λ_i ≥ 0, Σ_{i=1}^p λ_i = 1 }.

                  (Without loss of generality, we can take p = n + 1 here.)
E x               Expected value of (the random variable) x.

Acronyms

Acronym   Meaning                                             Page
ARE       Algebraic Riccati Equation                          3
CP        Convex Minimization Problem                         11
DI        Differential Inclusion                              51
DNLDI     Diagonal Norm-bound Linear Differential Inclusion   54
EVP       Eigenvalue Minimization Problem                     10
GEVP      Generalized Eigenvalue Minimization Problem         10
LDI       Linear Differential Inclusion                       51
LMI       Linear Matrix Inequality                            7
LMIP      Linear Matrix Inequality Problem                    9
LQG       Linear-Quadratic Gaussian                           150
LQR       Linear-Quadratic Regulator                          114
LTI       Linear Time-invariant                               52
NLDI      Norm-bound Linear Differential Inclusion            53
PLDI      Polytopic Linear Differential Inclusion             53
PR        Positive-Real                                       25
RMS       Root-Mean-Square                                    91


Aut. Lett. [AGB94] P. AC-38(7):1128–1131. Aut. Control. Garcia. [AF90] J. Algebraic description of bounded real matrices. [ABJ75] B. Laub. I. In Panos Pardalos. On maximizing the minimum eigenvalue of a linear combination of symmetric matrices. and E. Alizadeh. Alizadeh. 1994. [AC84] J. In Pro- ceedings of second annual Integer Programming and Combinatorial Optimization conference. AC-19(6):680–692. 1992. Zinober. Cellina. Boston. Frankowska. Variable structure and Lyapunov control. B. Akgül. AC-20:53–66. December 1966. Anderson and J. and G. North-Holland. Anderson and P. on Matrix Analysis and Applications. 1964. Garofalo.Bibliography [ABG93] D. 10:347–382. Becker. Combinatorial optimization with semidefinite matrices. volume 2 of Systems & Control. SIAM J. Alizadeh. Moore. R. 1974.. 1989. and L. 1994. Springer-Verlag. P. Jury. [All89] J. Springer-Verlag. F. [AM90] B. Polytopic coverings and robust stability analysis via Lyapunov quadratic forms. P. of Minnesota. [AG64] M. Arnold and A. Aizerman and F. [AL84] W. 1975. Set-valued analysis. 1984. [And66a] B. Aut. D. [Ali92a] F. July 1993. Self-scheduled H∞ control of linear parameter-varying systems. San Francisco. 1992. [AM74] B. 72(12):1746–1754. IEEE Trans. Univ. Bose. and G. Moylan. 2(12):464–465. volume 264 of Comprehensive Studies in Mathematics. Combinatorial optimization with interior point methods and semi- definite matrices. P. Allwright. Absolute stability of regulator systems. Advances in Optimization and Parallel Computing. Topics in Relaxation and Ellipsoidal Methods. Holden-Day. [AGG94] F. Optimization over the positive-definite cone: interior point meth- ods and combinatorial applications. Spectral factorization of a finite-dimensional nonstationary matrix covariance. editor.. PhD thesis. 161 . IEEE Trans. Pole assignment of linear uncertain systems in a sector via a Lyapunov-type approach. [Ali92b] F. Proc. Control. pages 269–288. Amato. Arzelier. Pitman. Information Systems. Output feedback stabilization and related problems—Solution via decision methods. Optimal Control: Linear Quadratic Methods. 1990. Anderson. Control. volume 193 of Lecture notes in control and information sciences. Bernussou. In A. J. Gantmacher. Aubin and A. chapter 13. Gahinet. 1984. Submitted. Carnegie-Mellon University. Differential Inclusions. F. December 1984. editor. N. Birkhauser. Glielmo. Electron. October 1991. Anderson. 1990. [Akg84] M. Generalized eigenproblem algorithms and software for algebraic Riccati equations. IEEE. A. Apkarian. [Ali91] F. J. K. IEEE Trans. J. volume 97 of Research Notes in Mathematics. Prentice-Hall. Aubin and H.

Algebraic properties of minimal degree spectral factors. [And73] B. Prentice-Hall. Int. volume 2. San Francisco. Badawi. Anderson. Anderson. A system theory criterion for positive real matrices. Joint American Control Conf. Prentice-Hall. [Bar70b] A. August 1985. editor.. In Proc. June 1993. Wiley. Network analysis and synthesis: a modern systems theory approach. [BBFE93] S. 1994. [BBC90] G. Automat- ica. B. Barker. The testing for optimality of linear systems. Barratt. J. S. volume 70 of Math. 1970. R. Aut. Int. [BBB91] V. Introduction to stochastic control theory. A. 1974. [AS79] M.162 Bibliography [And66b] B. Control. Anderson. [BBK89] S. Leondes. Boyd. Introduction to the theory of stability. [Bal94] V. Aut. Academic Press. Walters-Noordhoff. Positive diagonal solutions to the Lyapunov equations. 1973. Franklin Inst. [And67] B. J. [And66c] B. Feron. Bona. Control. Barmish. L. SIAM J. Boyd. J. [Åst70] K. 5:73–87. Åström. August 1982. El Ghaoui. Balakrishnan. Sufficient conditions for the absence of auto-oscillations in pulse systems. [Bar83] B. 2(3):207–219. Academic Press. Multivariable circle criteria and M-matrix condition. Vongpanitlerd. Numerische Mathematik. Journal of Optimization Theory and Applications. On the quadratic matrix inequality and the corresponding alge- braic Riccati equation. c 1994 by the Society for Industrial and Applied Mathematics. Berman. and V. Control system analysis and synthesis via linear matrix inequalities. Boyd. Barmish. [Bar85] B. pages 11–17. 1966. Kabamba. IEEE Trans. California. 1993. and P. and S. pages 2147–2154. Boyd. In Proc. 1970. In C. A. Optimally scaled matrices. Arnold. Boyd and C. Balemi. Balakrishnan and S. Plemmons. Bauer. [Bad82] F.. A. Aut. [BB91] S. R. Stochastic differential equations: theory and applications. T. Control. MacMillan. [BB92] V. New Tools for Robustness of Linear Systems. 1963. Correction in 11:321–322. Anderson. 1967. Anderson and S. 26:887–898. [Bar70a] E. volume 53. Mathematics of Control. [BBP78] G. Int. 9:491–500. Control. V. of Robust and Nonlinear Control. Control and Dynamic Systems: Advances in Theory and Applications. J. H∞ controller synthesis with time-domain constraints using interpolation theory and convex optimization. Control. Linear and Multilinear Algebra. 1989. J. August 1983. 1991. New York. in Science and Engr. Barkin. and R. Parameter estimation algorithms for a set-membership description of uncertainty. 1992. and L. 36(2):313–322. New York. Remote Control. and Systems. Necessary and sufficient conditions for quadratic stabilizabil- ity of an uncertain system. Automatica. Global optimization in control system analysis and design. Barbashin. 46(4):399–408. 1(4):295–317. [Bar93] B. R. Araki and M. 1978. Branch and bound algorithm for computing the minimum stability degree of parameter-dependent linear systems. American Control Conf. J. Saeki. 5:171–182. Signals. Belforte. Linear Controller Design: Limits of Performance. 1973. Stabilization of uncertain systems via linear control. Balakrishnan. 282:155–160. Balakrishnan. AC-28(8):848–850. I. Balakrishnan. 5:249–256. Copyright ° .. A bisection method for computing the H∞ norm of a transfer matrix and related problems. 1966. [Arn74] L. Barmish. New York. IEEE Trans. 31:942–946. 4(1):29–40. 1975. E. 1979. V. Cerone. October–December 1991. [Bau63] F. 1990. 1970. [AV73] B. Stability of control systems with multiple nonlinearities. P. Submitted.

Berkeley. . 1981. Feron. IEEE Conf. and Systems. Simultaneous stabilizability question of three linear systems is rationally undecidable. Cambridge University Press. Method of centers for minimizing generalized eigenval- ues. [BD87] S. and L. Robust stability and performance analysis for linear dynamic systems. C. Operations Research. Burgat. pages 2182–2187. E. [Bec91] C. Fan.. Boyd. 46:1373–1380. D. pages 1259–1260. Benzaouia. 1988. Bernstein and W. I. November 1985.. [BD85] V. California. 58:387–409. Birkhäuser. editor. L. and control of linear systems with multiplicative noise. Brighton. [BGR90] J. 9:1–6. AC-34(7):751–758. C. SIAM Journal on Control and Opti- mization. Linear matrix inequalities in system and control theory. Systems Science. Inc. 29(6):1039–1091. [BH88a] D. Computational issues in solving LMIs. American Mathematical Society. Blondel and M. In Proc. and G. A. Doyle. volume 7 of Proceedings of Symposia in Pure Mathematics. American Control Conf. estimation. and The Mathworks. pages 1–11. [BDG+ 91] G. 6(1):135–145. El Ghaoui. J. Chicago. In V. Optimal projection equations for reduced- order modeling. Optimiz. IEEE Conf. Robust stability and performance analysis for state space systems via quadratic Lyapunov bounds. G. [BGT81] R. Fading memory and the problem of approximating nonlinear operators with Volterra series. Bernstein and D. Aut. J. Barmish. J. S. Convexity. Haddad.. on Communication.Bibliography 163 [BBT90] C. Stochastic Control of Partially Observable Systems. Bellman and K. San Francisco. Bland. O. volume 2. [Ben92] A. June 1993. Allerton House. Haddad. [BH88b] D. July 1993. MUSYN. Beck. [BF63] R. American Control Conf. Rodman. L. M. A new class of stabilizing con- trollers for uncertain dynamical systems. Positively invariant sets of discrete-time systems with constrained inputs. July 1989. Syst. PhD thesis. Tarbouriech. Goldfarb. Balas. and L. Bensoussan. volume 4. On systems of linear inequalities in Hermitian matrix variables. [BEFB93] S. Becker. [BFBE92] V. Todd. µ-analysis and synthesis. Boyd and L. Borisov and S. 188:63–111. Comparison of peak and RMS gains for discrete-time systems. Great Britain. Balakrishnan. 1993. Linear Algebra and Applications. special issue on Numerical Linear Algebra Methods in Control. on Decision and Control. pages 2195–2196. Balakrishnan. E. [BCL83] B. Int. 1990. In Proc. El Ghaoui. Basel. The ellipsoid method: A survey. Quadratic Stability and Performance of Linear Parameter Dependent Systems. In Proc. Cambridge. 1994. Theory Appl. Illinois. Automation and Remote Control. Gevers. pages 832–836. In Proc. volume 45 of Operator Theory: Advances and Applications. 1988. 1990. A. Klee. In Proc. G. Haddad. N. and S. 1985. Doyle.. and M. Diligenskii. A. Boyd and J. Glover.. J. and Roy Smith. June 1992. Interpolation of Rational Matrix Func- tions. Control Letters. R. [BH93] D. Bernstein and W. IEEE Trans. Circuits Syst. June 1987. K. 1963. Nonlinear controllers for positive-real systems with arbitrary input nonlinearities. University of California. Feron.. Corless. Leitmann. Boyd and L. 1991. on Decision and Control. Bernstein and W. Numerical method of stability analysis of nonlinear systems. IEEE Trans. Annual Allerton Conf. [BE93] S. 21:1249– 1271. Hyland. Mathematics of Control. March 1983. [Bec93] G. Ball. December 1991. El Ghaoui. Monticello. Gohberg. S. 21(2):246–255. [BH89] D. October 1993. Control and Computing. Control. Boyd. Packard. Signals. and V. volume 1. [BG94] V. 
Computing bounds for the structured singular value via an interior point algorithm. Signals and Systems. 1992. inc. This electronic version is for personal use and may not be duplicated or distributed. [BC85] S. CAS- 32(11):1150–1171. Chua.

Automatica. Copyright ° . 1989. Control of parametrically dependent linear systems: a single quadratic Lyapunov approach. 1989. Lin. July 1966. M. [BP79] A. Aut. IEEE Trans. Control. Balakrishnan. IEEE Trans. Cal- ifornia. Control and Information. 1965. Robust control tools: Graphical user-interfaces and LMI algorithms. Peres.. American Control Conf. Systems. pages 255–257. [BR71] D. [BNS89] A. Balas. In 1993 Amer- ican Control Conference. Nonnegative Matrices in Dynamic Systems. P. Berman and R. Finite Dimensional Linear Systems. Control. Barrett. John Wiley & Sons. Peres. Bitsoris. Huang. Special issue on Numerical Approaches in Control Theory. Linear matrix inequalities in analysis with multipliers. and J. Johnson. Invited session on Recent Advances in the Control of Uncertain Systems. In Proc. Wiley-Interscience. Stability of dynamical systems: A constructive approach. Stern. and J. Aut. Brockett. W. [BK93] B. Recursive state estimation for a set-membership description of uncertainty. C. 1989. and J. June 1988. Boyd. In Proc. L. 1970. [BPG89b] J. Optimization theory and the converse of the circle criterion. AC-16(2):117–128. c 1994 by the Society for Industrial and Applied Mathematics. [BPG89a] J. A linear programming oriented procedure for quadratic stabilization of uncertain systems. Syst. Control Letters. Packard. 1989. H. Barmish and H. IFAC/IFORS/IMACS Symposium Large Scale Systems: Theory and Applications. Tong. A. Packard. The status of stability theory for deterministic systems. [BHPD94] V. [BP91] G. In Proc. Philbrick. volume 3. Plemmons. J. I. [Bro77] R. New York. and H. American Control Conf. 47(6):1713–1726. C. Root locations of an entire polytope of polynomials: it suffices to check the edges. and R. 401–413. Control. Control. A. Berman. Positively invariant polyhedral sets of discrete-time linear systems. 1989. 1:135–138. Neumann. 1(1):61–71. [Bro70] R. Brockett.. C. volume 21. Becker and A. June 1993. CAS-26(4):224–234. 1977. Bartlett. Geromel. CAS-27(11):1121–1130. K. [Boy86] S. 1979. [BT80] R. Willems. Bernussou. P. [Bro65] R. Nonnegative matrices in the mathematical sciences. A survey of extreme point results for robustness of control systems. C. Constructive stability and asymptotic stability of dynamical systems. on Decision and Control. AC-11:596–606. Kang. Circuits Syst. K. 38(3):111–117. Washington. [Bit88] G. and Systems.. Y. IEEE Conf. Brockett. A note on parametric and nonparametric uncertainties in control sys- tems. Mathematics of Control. W. Doyle. Lundquist. H. 1971. Circuits Syst. IEEE Trans. pages 697–701. L. and G. AC-10:255–261. [Boy94] S. pages 2702–2703. Linear Algebra and Appl. In National Electronics Conference. Bernussou. [BW65] R.164 Bibliography [BHL89] A.. San Francisco. Rhodes. [BPPB93] G.. Brockett and J. On improving the circle criterion. Brayton and C. [BT79] R. 29(1):13–35. [Bro66] R. Signals. Robust decentralized regulation: A linear programming approach. Brockett. Aut. D. Computer Science and Applied Mathematics. Becker. 1994. pages 1847–1849. R. C. Academic Press. Boyd.. W. Gain-scheduled state-feedback with quadratic stabil- ity for uncertain systems. Brayton and C. R. 1980. IEEE Trans. Int. 1979. V. American Control Conf. 1993. 1965. IEEE Trans. March 1994. Berlin-GDR. Bertsekas and I. 121:265–289. and M. June 1986. Seattle. Frequency domain stability criteria. pages 2795–2799. D. Determinantal formulation for matrix completions associated with chordal graphs. In Proc. Geromel. Packard. L. D. W. Hollot. 13:65–72. J. Tong. [BJL89] W. 
June 1991.

August 1972. Annual Allerton Conf. Davis. and L. Zinober. June 1992. Computational com- plexity of µ calculation. Bakonyi and H. Cullum. AC-17:474–483. In A. contractive and strong Parrott type completion problems.. 1975. W.. volume 193 of Lecture notes in control and information sciences. Control and Computing. Aut. Corless. Coxson and C. Deterministic control of uncertain systems. Proceedings of the IFAC workshop on model error concepts and compensation. Optimal disturbance rejection in linear multivariable systems. Control. 1980. [CNF92] J. Univ. 1991. Control. Chiang and L. P. 49(6):2215–2240. Braatz. pages 105–106. [CM81] B. Riccati equation approaches for robust stability and performance. Math. 1992. CA. DeMarco. December 1993. American Control Conf. 1994. J. Programming Study. Engineering Cybernetics. California.Bibliography 165 [BW92] M. AC-29(10):880–887.-D. June 1993. chapter 9. [Che80a] F. of Elec. 1980. E. Corless. and P.. 1990. Wolfe. N. Adaptive guaranteed cost control of systems with uncer- tain parameters. pages 125–128. Control. Collins. editor. Chicago. Peng. IEEE Trans. 1989. Allerton House.. In Proc. [Cor85] M. pages 181–203. A constructive methodology for estimating the stability regions of interconnected nonlinear systems. [CP72] S. essentially optimal algorithms. October 1984. Woerdeman. The minimization of certain nondiffer- entiable sums of eigenvalues of symmetric matrices. and M.. Journal of Optimization Theory and Applications. L. Chernousko. and M. Haddad. Optimal guaranteed estimates of indeterminancies with the aid of ellipsoids II. Morari. Castelan and J.. 3:35–55. Leitmann. Chernousko. Hennet. This electronic version is for personal use and may not be duplicated or distributed. of Wisconsin-Madison. San Francisco. 38:73–80. volume 2. On invariant polyhedra of continuous-time linear systems. American Control Conf. Peregrinus Ltd. Circuits Syst. Operator Theory: Advances and Applications. In Proc. Robust stability analysis and controller design with quadratic Lya- punov functions. Technical Report ECE-92-4. 1990. 64:481–494. C. Control. Optimal guaranteed estimates of indeterminancies with the aid of ellipsoids I. 18(4):1–9. 1980. Springer-Verlag. Engineering Cybernetics. Fekih-Ahmed. Structured and simultaneous Lyapunov functions for system stability problems. K. M. B. Variable structure and Lyapunov control. Aut. In Proc. [CL90] M. pages 251–257. D. editor. Chen. In Proc. 1981. [Che80c] F. Boyd and Q. [Cor90] M. C. [CHD93] E. Aut. Deterministic control of uncertain systems: a Lya- punov theory approach. Monticello. M. G. Chang and J. In A. [Che80b] F. D. IEEE Trans. 59:78–95. Optimal guaranteed estimates of indeterminancies with the aid of ellipsoids III. [BY89] S. Worst-case identification in H∞ : Vali- dation of a priori information. Dept. [CH93] E. San Francisco. J. Coxson and C. Linear Algebra and Appl. June 1992. J. 37(5):577–588.. USA. Chang and T. Donath. . Stabilization of uncertain discrete-time systems. Testing robust stability of general matrix polytopes is an NP-hard computation. Illinois. Jr. Engineering Cybernetics. E. [BYDM93] R. Yang. 38(11):1680–1685. and Comp. Chernousko. and error bounds. IEEE Trans. pages 1079–1083. Eng. DeMarco. Craven and B. C. Doyle. [CP84] B. C. Corless and G. Mond. 18(5):1–7. Young. June 1993. Pearson. W. L. Computing the real structured singular value is NP-hard. L. The central method for positive semi-definite. [CD92] G. Int. B. 1985. H. [CD91] G. IEEE Trans. 
Guaranteed rates of exponential convergence for uncertain dynamical systems. Nett. Linear programming with matrix variables. [CDW75] J. Corless. American Control Conf. L. on Com- munication. L. pages 1682–1683. Boston. [Cor94] M. Fan. 1990. Zinober. [CFA90] H. 18(3):1–9.

and K. Chicago. Optimization.. American Control Conf. February 1979.. Doyle. Terlaky. 8(3):674–675. In Proc. M. and R. [Dav94] J. Control. of the European Contr. Comp. Y. [Dan93] J. Leuven. Iterative solution of problems of linear and quadratic programming. Y. Feron. 36(1):47–61. July 1993. New York. Rank minimization under LMI constraints: a framework for output feedback problems. 1993. Van Dooren. Chiang and M. [DGK79] P. Appl. Dym and I. Kahan. volume 4. 7(5):627–636.. Yurkovich. Belgium. M. Matrix Anal. Extensions of band matrices with band inverses. SIAM J. Feedback Systems: Input-Output Properties. V. Appl. Dokl. Deller. [Doy82] J. Acoust. Safonov. IEE Proc. Gohberg. Delsarte. The Nevanlinna-Pick problem for matrix- valued functions. Numerical Anal. Univ. SIAM Jour. [Dik67] I. 1965. [DV75] C. Genin. A. Kurak. [EBFB92] L. Pang. IEEE Trans. Gahinet. June 1992. Den Hertog. and T. Delsarte. Xie. February 1992. pages 2417–2418. Speech. [CS92a] R. David. S. An optimal volume ellipsoid algorithm for parameter estimation. [DKW82] C. [EG93] L. Dancis. Weinberger. Control. and Y. Davison and E. A generalization of the Popov criterion. Fu. [CYP93] M. Signal Processing. AC-38(8):1292– 1296. The Mathworks. March 1993. Norm-preserving dilations and their applications to optimal error bounds. Copyright ° . Stone. C. and H. June 1981. S. Passino. On maximizing a robust- ness measure for structured nonlinear perturbations. Linear Algebra and Appl. [DGK81] P. September 1971. Linear Algebra and Appl. Robust Control Toolbox. Analysis of feedback systems with structured uncertainties. Kamp. Dancis. Math. A generalized eigenvalue approach for solving Riccati equations.. J. In Proc. 1992. 1989. F. A large-step analytic center method for a class of smooth convex programming problems. 1992. Safonov. The Linear Complementarity Problem. June 1992. Cottle.166 Bibliography [CPS92] R. [DRT92] D. Kamp. W. Desoer. 2(1):55–70. Positive semidefinite completions of partial hermitian matrices. American Control Conf. 129-D(6):242–250.. [Des65] C. Chicago. On the role of the Nevanlinna-Pick problem in circuit and systems theory.. Control. Genin. Cheung. Soviet Math. H∞ analysis and synthesis of discrete-time systems with time-varying uncertainty. IEEE Trans.. 36:1–24. Automatica. Aut. 1994.. SIAM J. PhD thesis. Set membership identification in digital signal processing.. Balakrishnan. [CS92b] R. J.. volume 3. Algorithms for analysis and design of robust controllers. Roos. ESAT. [dSFX93] C. IEEE Trans. AC-38(3). and Y. [Doo81] P. 1967. 2:121–135. Sci. 1981. [Dan92] J. [Del89] J. Academic Press. July 1993. IEEE Trans. Aut. 6(4):4–20. pages 1176–1179. and L. El Ghaoui. A. 1975. c 1994 by the Society for Industrial and Applied Mathematics. AC-10:182–185. Vidyasagar. Y. [DG81] H. Stat.. 14(3):813–829. June 1982. In Proc. Real Km -synthesis via generalized Popov mul- tipliers. E. Choosing the inertias for completions of certain partially specified matrices. November 1982. Conf. SIAM J. Dikin. M. SIAM J. Academic Press. 19(3):445– 469. G. G. E. 1992.. Circuit Theory and Appl. E. Inc. 175:97–114. de Souza. Desoer and M.. pages 2923–2924. Davis. M. [DK71] E. 9:177–187. and S. El Ghaoui and P. Kat.. Aut. 3001 Leuven. Chiang and M. Boyd. 1981. A computational method for determining quadratic Lyapunov functions for non-linear systems.

[FBBE92] E. POB 8640. PhD thesis. Fedčina. October 1993. Faurre. Control Letters. Syst. and V. Pritchard. Stanford. Technical Report 92-028. Elsner. New York. H. Control Letters. Submitted. In Proc. A quadratically convergent algorithm on minimizing the largest eigenvalue of a symmetric matrix. McCormick. Chicago. He. Faurre. INRIA. Differential Equations with Discontinuous Righthand Sides. 1937. Fiacco and G. [FH82] E. Design of stabilizing state feedback for delay systems via convex optimization. Balakrishnan. Syst. Control Letters. Balakrishnan. PhD thesis. Analysis and state-feedback synthesis for systems with multiplica- tive noise. Faurre and M. Representation of Stochastic Processes. El Bouthouri and A. 1977. Kluwer Academic Publishers. 1973. This electronic version is for personal use and may not be duplicated or distributed. C. [EP92] A. [FM68] A. Control and Opt. Boyd. January 1988. 19(1):29–33. Emelyanov. Fletcher. [Fau67] P. Estimating reachable sets for two-dimensional lin- ear discrete systems. [Fil88] A. pages 2921–2922. 1990. September 1993. 23:493–513. January 1994. Tucson. 1992. Korovin. V. Wiley. Mehrmann. Feron. Syst.. and L. Semidefinite matrix constraints in optimization. France. Fogel and Y. Stanford University. Minimization of the norm. Automatica. and V. Universität Bielefeld.Bibliography 167 [EG94] L. 1994. Gayek. V. CA 94305. Elements of System Theory. Finsler. [FBB92] E. Nikitin. Stanford Univer- sity. Filippov. 1968. Arm. K. [FD77] P. volume 4. Minimization of the condition number of a positive-definite matrix by completion. In Proc. 188-189:231–253. Zhivoglyadov. [EP94] A. F. On the value of information in system identification- bounded noise case. [Fau73] P. S. Chemnitz. Linear Algebra and Appl. Group of isometries and some stabilization and stabilizability aspects of uncertain systems. Dordrect. June 1992. Pritchard. Journal of Optimization Theory and Applications. 1982. Reprinted 1990 in the SIAM Classics in Applied Mathematics series. Mehrmann. 1993. Comentarii Mathematici Helvetici. Germany. Huang. Descriptions of solutions of the tangential Nevanlinna-Pick prob- lem. E. December 1992. Feron. Stability radii of linear systems with respect to stochastic perturbations. Elsner. the norm of the inverse and the condition number of a matrix by completion. [FG88] M. Linear Matrix Inequalities for the Problem of Absolute Stability of Control Systems. I. The Netherlands. Technical Report 13. V. American Control Conf. 60:37–42. July 1992. Feron. [Fin37] P. North-Holland. P. 1967. 19(5-6):451–477. Doklady Akad. 18(2):229–238. 1975. V. He. Nonlinear programming: sequential unconstrained minimization techniques. 1988. and S. El Ghaoui. [EHM93] L. Germany. [Fan93] M. [Fle85] R. IEEE Conf. and S. Technical report. SIAM J. 1985. Le Chesnay. Translated from 1960 Russian version. V.. Depeyrot. D-4800 Bielefeld 1. . Fan. on Decision and Control. El Bouhtouri and A. S. Über das Vorkommen definiter und semi-definiter Formen in Scharen quadratischer Formen. Fisher and J. E. [Fed75] P. Domaine de Voluceau. [Fer93] E. TU-Chemnitz-Zwikau. pages 2195–2196. 56(1):67– 88.. Réalisations Markoviennes de processus stationnaires. J. El Ghaoui. K. C. Rocquencourt. volume 4. 78150. Problems of Control and Information Theory. 9:199–192. Boyd.. [EZKN90] S. Numerical methods for H2 related problems. A Riccati equation approach to maximizing the stability radius of a linear system under structured stochastic Lipschitzian perturbations. Nauk. [EHM92] L.

Fiedler and V. L. Gahinet. A convex characterization of parameter dependent H∞ controllers. H. Galimidi and B. Aut. E. A LMI-based parametrization of all H∞ controllers with applications. Barmish. A course in H∞ Control Theory. Apkarian. Fiedler and V. Copyright ° . IEEE Trans. 1975. Soc. [FT86] M. 87:382–400. June 1987. Proc. San Antonio. Control. Aut. Fogel. Control. February 1993. AC-36(1):25–38. Narazaki. March 1984. A complete optimality condition in the inverse problem of optimal control. Glielmo. R. Gahinet and P. A. 9:163–172. E. IEEE Trans. A survey of techniques for approximating reachable and controllable sets. volume 88 of Lecture Notes in Control and Information Sciences. Tits. H. [FY79] A. H. Stability robustness of interval matrices via Lyapunov quadratic forms. Nocedal. Brighton. 1955.. Control. On minimizing the largest eigenvalue of a sym- metric matrix. Friedland. Control. Springer-Verlag. [Gah92] P. AC-31(8):734–743. Overton. Yakubovich. [Gay91] J. 1992. AC-31(5):410–419. Fan. Math. 1979. 1973. Fan and A. Straus. Control. K. Amer. [Fra87] B. Fradkov and V. March 1988. Deterministic and stochastic optimal control. On matrices with non-positive off-diagonal elements and positive principal minors. Garofalo. The constrained Lyapunov problem and its application to robust output feedback stabilization. IEEE Trans. R. Aut. [GA93] P. A. Ptak. Fiedler and V. 1967. The S-procedure and duality relations in nonconvex problems of quadratic programming. Aut. [FP62] M. G. and J. Vestnik Leningrad Univ. Forsythe and E. Ptak. AC-33(3):284–289. Fleming and R. Nekooie. [GA94] P. IEEE Conf. January 1991. and M. SIAM J. In Proc. W. Gayek. [FS55] G. 6(1):101–109. pages 656–661. Springer-Verlag. 6:340–345. IEEE Conf. SIAM J. on Decision and Control.. 1966. Characterization and efficient computation of the structured singular value. Fan and A. Francis. Gahinet and P. December 1992. August 1986. The formulation and analysis of numerical methods for inverse eigenvalue problems. Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics. IEEE Trans. IEEE Trans. 24(3):634–667. Numer. Apkarian. 1994. Aut. May 1986. Fujii and M. December 1993. c 1994 by the Society for Industrial and Applied Mathematics. In Russian. Control. pages 1724–1729. L. C. Fan and B. [FN92] M. J. Aut. [GB86] A. 92:420–433. K. K. To appear. Aut. Control. IEEE Conf. Doyle. 22(2):327–341. Texas. on Control and Optimization. 1979. L. pages 134–139. Numerische Mathematik. IEEE Trans. Tits. Diagonally dominant matrices. IEEE Conf. H. Some generalizations of positive definiteness and mono- tonicity. [FTD91] M. 1987. Czechoslovak Mathematical Journal.168 Bibliography [FN84] T. December 1991. In Proc. Rishel. 38(2):281– 284. on Decision and Control. on Decision and Control. [FNO87] S. [GCG93] F. Math. K. On best conditioned matrices. pages 937–942. [Fog79] E.. AC-24(5):752–757. and L. [FP67] M. Celentano. [FR75] W. m-form numerical range and the computation of the structured singular value. G. In Proc. A convex parametrization of H∞ suboptimal controllers. A. H. Tits. on Decision and Control. L. Czechoslovak Mathe- matical Journal. [FP66] M. 1962. In Proc. IEEE Trans. Anal. System identification via membership set constraints with energy con- strained noise. [FT88] M. Ptak.

California. London. Goodall and E. In Proc. January 1993. volume 2 of Algorithms and Combinatorics. Parametrizations in Control. Aut. 4(1):61–80. L. Skelton. 29(2):381–402. March 1990. Gapski and J. 26(6):1431–1441.. G. D. Grigoriadis. P. Optimal redesign of linear systems. and R. on Communication. Robustness of Linear Systems against Parameter Variations. [GS88] R. [GFS93] K. [Gha90] L. Wolkowicz. C.. Li. [GL93b] K. C. [GJSW84] R. Gahinet and A. Skelton. Static output feedback controllers: Robust stability and convexity. Safonov. Gács and L. Math. B. Studies. INRIA. 1981. Peres. E. Grigoriadis. M. Stanford CA 94305. [GS92] K. Monticello. Aut. Aut. on Robust Control. In Proc. IEEE Trans. Loh. Skelton. and R. El Ghaoui. and R. Grone. J. pages 2680–2684. El Ghaoui. Allerton House. San Francisco. 1988. [Ger85] J. American Control Conf. On a convex parameter space method for linear control design of uncertain systems. R. AC-30:404–406. Golub and C. 1994. C. September 1993. second edition. Matrix Computations. [GLS88] M. Lovász. Johnson. Press. A. C. Nemirovskii. Positive definite com- pletions of partial Hermitian matrices. De Gaston and M. 1988. Control and Computing. Palmor. [GL93a] M.. M Sá. [GR88] D. IEEE Trans. Alternating convex projection methods for covariance control design. Grigoriadis and R. 57(1):59–68. Bernussou. 1984. Carpenter. Gu and N. On minimax eigenvalue problems via constrained optimiza- tion. and H. [GL81] P. Quadratic Lyapunov functions for rational systems: An LMI approach. 1993. Schrijver. [Gil66] E. IEEE Trans. L. E. On the determination of a diagonal solution of the Lyapunov equation. Linear Algebra and Appl. Geromel. SIAM J. Springer-Verlag. Geometric Algorithms and Combina- torial Optimization. LMI Lab: A Package for Manipulating and Solving LMIs. [GdSS93] J. 1993. 1989. Properties of min-max controllers in uncertain dy- namical systems. Teo. [GL89] G. 58:109–124. Illinois. R. Geromel. Control and Optimization. Zhu. G. Gevers and G. Control. Aut. Van Loan. Journal of Optimization Theory and Applications. Annual Allerton Conf. Johns Hopkins Univ. IEEE Trans.. C. Khachiyan’s algorithm for linear programming. 20(6):850–861. Geromel. An iterative procedure for computing the minimum of a quadratic form on a convex set. Baltimore. SIAM J. E. AC-33(2):156–171. Signal Proc. E. E. PhD thesis. C. P. [GP82] S. de Souza. 1966. April 1985. March 1991. M. P. Aut. 1988. Ryan. November 1988. Goh and D. June 1993. and J. 1992.. This electronic version is for personal use and may not be duplicated or distributed. Program. Cont. 14:61–68. [GPB91] J. Lovász. E. November 1982. [GN93] P. . Grötschel. November 1994. Submitted to IEEE Trans. K. [GG94] P. SIAM Journal on Control and Optimization. Stanford University. Direct computation of stability bounds for systems with polytopic uncertainties. [GT88] C. Geromel. and A. Gutman and Z. M. [Gha94] L. A convex approach to the absolute stability problem. Springer-Verlag. Feedback controlled differential inclusions and stabilization of uncertain dynamical systems. M. Control. on Control. Exact calculation of the multiloop stability margin. C. 38(2):363–366. Frazho. Control.Bibliography 169 [GCZS93] K. Skelton. IEEE Trans. Estimation and Filter- ing Problems: Accuracy Aspects. SIAM Journal on Control and Optimization. February 1993. volume 3. Control. Application of alternating convex projections methods for computation of positive Toeplitz matrices. Gilbert. Submitted to IFAC Conf. E. To appear. Communications and Control Engineering.

Chicago. P. L. June 1974. [Gu92b] K. Texas. January 1994. Gu. volume 3. December 1993. R. Barmish. AC-38(9). on Communication. Connections between the Popov stability criterion and bounds for real parametric uncertainty. Designing stabilizing control of uncertain systems by quasiconvex opti- mization. Control and Computing. pages 2274–2279. American Control Conf. Loh. P. [HB91b] J. Bélanger. pages 1084–1089. Safonov. 39(1):127–131. time invariant plants with uncertain parameters. K. Gu.. Robust and Nonlinear Control.. part I: Continuous-time theory. 27:549–554. [GZL90] K. U. volume 2. University of Illinois. Aut. and Popov theorems: A comparative study.. J. [Gu94] K. Control. Gu. Int. Collins. V. [HB80] C. 3:313–339. P. [HH93b] J. R. [GV74] G. AC-24(3):437–443. and R. Uncertain dynamical systems — a Lyapunov min-max approach. 1991. AC- 35(5):601–604. 1981. Hall and J. and J. In Proc. 1976. Automatica. 11(3):472–479.. 1(4):290–293. Horisberger and P.. E. Haddad and D. G. Haddad. Hestenes. American Control Conf. [Gu92a] K. Goh. AC-21:705–708. [HB93a] W. IEEE Trans. Designing stabilizing control of uncertain systems by quasiconvex opti- mization. pages 1666–1670. How. Copyright ° . Ly. Parameter-dependent Lyapunov functions. C. In Proc. IEEE Trans. December 1991. San Francisco. and N. G. Amer- ican Control Conf. con- stant real parameter uncertainty. Robust stability analysis using the small gain. Bi- affine matrix inequality properties and computational methods. [HB76] H. positivity. Brighton. the Finite Dimensional Case. Control. Zohdy. Bernstein. R. Mixed H2 /µ performance bounds using dissipation theory. IEEE Conf. on Decision and Control. How and S. Bernstein. Linear feedback stabilization of arbitrary time-varying uncertain systems. June 1993. Krieger. Aut. American Control Conf. 1980. In Proc. IEEE Trans. American Control Conf. 1993. San Fransico. On a characterization of the best `2 scaling of a matrix. circle. A class of invariant regulators for the discrete-time linear constrained regulator problem. Turan.. IEEE Trans. [Gu93] K. Gu. Haddad. Control. on Numerical Analysis. San Fransico. and Popov theorems and their application to robust stability. Control Sys. Aut. on Decision and Control. Monticello. June 1993. In Proc. June 1993. June 1993. In Proc. December 1993. In Proc. Varah. Regulators for linear. IEEE Trans.. M. Hollot and B. and the Popov criterion in robust analysis and synthesis. Tech. and D. Gu. H. Hennet and J. Bernstein. June 1992. Golub and J. Collins. Control. San Antonio. [Hes81] M. Huntington. pages 2790–2794. May 1990. Gutman. In Proc. pages 697–706. 1994. Optimization Theory. 18th Allerton Conf. Optimal quadratic stabilizability of uncer- tain linear systems. Moser. Aut. Haddad and D. M.K. Control. [HB91a] W. Beziat. IEEE Trans. Haddad and D.. P. A. New York. 2632–2633. Fixed structure computation of the structured singular value. Papavassilopoulos. circle. R. [HH93a] S. SIAM J. Proc. G. In Proc. 1992. Necessary and sufficient conditions of quadratic stability of uncertain linear systems. M. [HCM93] W. [HCB93] W. Amer- ican Control Conf. E. pages 1673–1674. [HB93b] W. Explicit construction of quadratic Lyapunov functions for the small gain. Aut.170 Bibliography [GTS+ 94] K. The trade-off between H∞ attenuation and uncertainty. Bernstein. positivity. c 1994 by the Society for Industrial and Applied Mathematics. Hall. R. June 1979. 
Off-axis absolute stability criteria and µ-bounds involving non-positive-real plant-dependent multipliers for robust stability and performance with locally slope-restricted monotonic nonlinearities. Califor- nia. IEEE Conf. [Gut79] S.

Hall. In AIAA Guid. Joint American Control Conf. Purdue University. V. IEEE Conf. March 1993.. E. Cambridge University Press. W. Haddad.. Hiriart-Urruty and C. J. Febru- ary 1993. Control and Opt. June 1993. Bernstein. M. 3(55). How. 1993. December 1992. Control. IN 47907. Progr. American Control Conf. Convex Analysis and Minimization Algorithms I. Intitüt für Angewandte Mathematik. Hiriart-Urruty and D. Opt. Linear Algebra and Appl. To appear.. In Proc. Optimal ellipsoidal approximations around the analytic center. Interior-point methods for convex programming. Toulouse.. and Control Conf. W. Theory Appl. Parametrization of all stabilizing controllers via quadratic Lyapunov functions. How. J. Robust Control Design with Real Parameter Uncertainty using Abso- lute Stability Theory.Bibliography 171 [HHH93a] J. Nagoya Math. [Ito51] K. Iwasaki and R. 1993. [ISG93] T. Huard. September 1993. Helton and H. 185:1–19. Math. In Proc. S. 1992. R. E. West Lafayette. R. P. Submitted to IEEE Transactions on Control Systems Technology. Ibrahim and Z. 1993. Iwasaki. Iwasaki and R. Submitted to Math. . American Control Conf. Topics in Matrix Analysis. Lemaréchal. Jarre. [Hol87] C. [Hu87] H. 1991. [HHHB92] W. 1993. A complete solution to the general H∞ control problem: LMI existence conditions and state space formulas. [Jar93c] F. Robust control synthesis examples with real parameter uncertainty using the Popov criterion. Haddad.. 31(5):1360–1377. [Iwa93] T. and S. Technical report. A Unified Matrix Inequality Approach to Linear Control Design. 1967. S.. Hall. R.. and D. Skelton. [IS93a] T. [Jar91] F. Horn and C. P. An algorithm for rescaling a matrix positive definite. New York. V. Nav. [Jar93a] F. pages 1611–1621. Extensions of mixed-µ bounds to monotonic and odd monotonic nonlinearities using absolute stability theory. [How93] J. SIAM J. [Jar93b] F. Amsterdam. and Cont. California. Arizona. Linear Algebra and its Applications. C. 1987. Robust controllers for the Middeck Active Control Experiment using Popov controller synthesis. How. Linear quadratic suboptimal control via static output feedback. [IR64] E. [HUL93] J.. On a formula concerning stochastic differentials. How. In Proc. Optim. Iwasaki. [HW93] J. San Francisco. August 1993. Bound invariant Lyapunov functions: A means for enlarging the class of stabilizable uncertain systems. Woerdeman. San Francisco. Tech- nical report.. Ye. In Proc.. Cambridge. 1993. Ito.-B. Sensitivity analysis of all eigenvalues of a sym- metric matrix. PhD thesis. Geromel.. pages 605–609. Submitted to Syst. Springer-Verlag. Univ. E. 8700 Würzburg. 46(1):161–184. volume 2. Submitted to J. [HJ91] R. Haddad. PhD thesis. Skelton. Rekasius. Paul Sabatier. Int. P. Hall. pages 256–260. on Decision and Control. 1951. and J. Massachusetts Institute of Technology. [HUY92] J. [Hua67] P. June 1993. Hu.. Letters. 96:131–147. June 1964. Construction of Liapunov functions for non- linear feedback systems with several nonlinear elements. S. Hollot. Symmetric Hankel operators: Minimal norm extensions and eigenstructures. Jarre. An interior-point method for convex fractional programming. [IS93b] T. [HHH93b] J. Johnson. Resolution of mathematical programming with nonlinear constraints by the method of centers. North Holland. Skelton. and W. M. pages 1090–1095. J. December 1993. volume 305 of Grundlehren der mathematischen Wissenschaften. Jarre. An interior-point method for minimizing the maximum eigenvalue of a linear combination of matrices. J. Tucson. Jarre. University of Würzburg.-B. 
This electronic version is for personal use and may not be duplicated or distributed. 1991. Appl. Germany. 1987.

Directional interpolation approach to H∞ -optimization and robust stabilization. Fritz John. 49:201–205. [Kim87] H. Kats and N. 20:191–194. Control and Computing. Lee. Kharitonov. N. Johnson and M. On a new characterization of linear passive systems. on Communication. [Kal63a] R. John. Control. 32(12):1085–1093. c 1994 by the Society for Industrial and Applied Mathematics. First Annual Allerton Conf. 809. Prikl. pages 456–470. Latchman. on Matrix Analysis and Applications. Khargonekar. Rotea. 1991. editor. IEEE Trans. Dokl. Kalman. 1965. K. E. Massachussetts. Automation and Remote Control. 29(1):57–70. Moser. [KK60] I. Aut. A. April 1993. Lyapunov functions for the problem of Lur’e in automatic control. East European series. [Kha79] L. 44(12):1543–1552. New Jersey. [Kah68] W. Math. Kalman. E. 1985. Kaszkurewicz and A. Kamenetskii. Automation and Remote Control. Lundquist. [Kal64] R. 1984. [KL85a] B. 14(2):508–520. Control Letters. On the existence of positive diagonal P such that P A + AT P < 0. Submitted. [Kav94] D. Basic Eng.. Absolute stability and absolute instability of control systems with several nonlinear nonstationary elements. P. 1989. Khachiyan and B. J. 162:541–556. on Optimization. A new polynomial-time algorithm for linear programming. Ser. Collected Papers. Bull. Com- binatorica. J. Copyright ° . Kouvaritakis and D. SIAM J. 86:51–60. 1980. Necessary and sufficient stability criterion for systems with structured uncertainties: The major principal direction alignment principle. Boston. Bhaya. J. Extremum problems with inequalities as subsidiary conditions. [Kis91] M. Int. Kouvaritakis and D. pages 543–560. 1978. Kalantari. 1982. In Proc. 1963. Nat. D. Jury and B.. Automatica. 1994. [Kar84] N. [KL85b] B. Kamenetskii. Kailath. 1960. On the stability of systems with random pa- rameters. Linear Algebra and Appl. W. Robust stability and diagonal Lyapunov func- tions. Int. Sci. Canad. Acad. 2(4):668–672. 26:943–961. Singular value and eigenvalue techniques in the analysis of systems with structured perturbations. 1963. SIAM J. Proc. USA. Linear Systems. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. 1992. [Kal63b] R. R. Differential’nye Uraveniya. 1968. Computation of the optimal solution for the zeroth order L ∞ norm approximation problem. Kisielewicz. Khachiyan. Syst. Aut... 1979.. The absolute stability of systems with many nonlin- earities. 1985. 41(6):1381–1412. Diagonal matrix scaling and linear program- ming. 1964. Differential inclusions and optimal control. Kalman. Karmarkar. 1993. Mixed H2 /H∞ control for discrete-time systems via convex optimization. November 1992. An inertia formula for Hermitian matrices with sparse inverses. E. When is a linear control system optimal? Trans. Khalil. A polynomial algorithm in linear programming. Control. Mek. In J. 11(3):437–441. [Kha78] V. [Kam89] V. Control. Boston. AC-27:181–184. and M. Kimura. [KB93] E. Kaminer. Birkhauser. Krasovskii. [Kam83] V. Convolution method for matrix inequalities. IEEE Trans. 1985. P. Soviet Math. [Kha82] H. [KKR93] I. 50(5):598–607. Prentice-Hall.172 Bibliography [JL65] E. 1983. volume 44 of Mathe- matics and its applications. Kavranoğlu. Control. [Kai80] T. 4(4):373–395. 14(11):1483– 1485. A. Circumscribing an ellipsoid about the intersection of two ellipsoids. I. I. Latchman. Kluwer Academic Publishers. December 1987. L.. ASME. [JL92] C. 42(3):575–598. Mat. A. [Joh85] F. Automation and Remote Control. [KK92] L.. Kahan.

. P. A Schur method for solving algebraic Riccati equations. 1993. August 1969. A specification language approach to solving convex antenna weight design problems. IEEE Trans. Kraaijevanger. 8:445– 451. Siljak. Gradient method of constructing Lyapunov functions in problems of absolute stability. [Koz69] F. July 1992. Application of Lyapunov’s second method for equations with time delay. Petersen. Control Letters. Stochastic stability and control. [Kok92] P. Kamenetskii and E. Zhou. Khargonekar. [Kra91] J. Mathematical Programming. 1967. Stanford Uni- versity. 5:95–112. Mek. on Decision and Control. [Lei79] G. [KRB89] N. June 1992. IEEE Trans. S. Lau. J. A. Boyd. Lefschetz. pages 2412–2417. [KPZ90] P. [KP87a] V. 1:503–507. [KR91] P. [KR88] P. IEEE Trans. 61:137– 159. Kushner. Karmarkar and D. January 1989. Khraishi. Control.. Set-membership identification of systems with parametric and nonparametric uncertainty. . Academic Press. and S. 1956. AC-35(3):356–361. On the stability of linear stochastic systems. Laub. On the complexity of approximating the maximal inscribed ellipsoid for a polytope. Mixed H2 /H∞ control: A convex opti- mization approach. L. [Kra56] N. Control. 1991. Identification of systems with parametric and nonparametric uncertainty. May 1990. Trans. September 1979. Petersen. Rotea. and S. Kamenetskii and Y. Control. 1975. 151:245–254. Boyd. Lau. Control. Proc. Kokotovic. and S. Stabilization of uncertain systems with norm bounded uncertainty using control Lyapunov functions. and Control. Automation and Remote Control. IEEE Trans. P. 101:212–216. Control.. 1988. Aut. Stability of Nonlinear Control Systems. Leitmann. Linear Algebra and Appl. M. Potra. Khargonekar and M. [KPR88] P. San Diego. I. S. [Kus67] H. and Y. S. H∞ -optimal control with state-feedback. Regelungstechnik. Aut. 20(3):315–327. An iterative method of Lyapunov function construction for differential inclusions. [KPY91] K. 1987. Aut. Kleinman. and K. New York. Pyatnitskii. [KS75] V.Bibliography 173 [KLB90] R. 1991. Rotea. 152:169–189. Mat. IEEE Control Systems. Linear Algebra and Appl. This electronic version is for personal use and may not be duplicated or distributed. IEEE Conf. A. March 1990. R. Mea- surement. Maximization of absolute stability regions by mathematical programming methods. Todd. Kosut. December 1979. August 1988. V. R. New York. 1969. of the ASME: Journal of Dynamic Systems. Khargonekar and M. R. In Proc. Academic Press. A. J. Mathematics in Science and Engineering. IEEE Trans. [KP87b] V. Information Systems Laboratory. and M. M. Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory. On some efficient interior point methods for nonlinear convex programming. 2:59–61. Technical Report 8901. Electrical Engineering Department. Guaranteed asymptotic stability for some linear systems with bounded uncertainties.. A survey of stability of stochastic systems. Automatica. Control. Syst. AC-33(8):786–788. 1965. D. J. Aut. Boyd. A characterization of Lyapunov diagonal stability using Hadamard products. I. 1987. pages 1–9. F. [Lau79] A. [KT93] L. Prikl. AC-36(7):824–837. July 1991. N. Aut. Khargonekar. Kosut. [KLB92] R. Roy. AC-24(6):913–921. Kortanek. AC- 37(7):929–941. Ye. Rotea. Khachiyan and M. A. Kozin. P. The joy of feedback: Nonlinear and adaptive. [Lef65] S. G. Aut. J. [Kle69] D. P. A. IEEE Trans. Pyatnitskii. Krasovskii. American Control Conf.
