This action might not be possible to undo. Are you sure you want to continue?
For Use with MATLAB
Pascal Gahinet
Arkadi Nemirovski
Alan J. Laub
Mahmoud Chilali
®
User’s Guide
Version 1
LMI Control Toolbox
How to Contact The MathWorks:
www.mathworks.com Web
comp.softsys.matlab Newsgroup
support@mathworks.com Technical support
suggest@mathworks.com Product enhancement suggestions
bugs@mathworks.com Bug reports
doc@mathworks.com Documentation error reports
service@mathworks.com Order status, license renewals, passcodes
info@mathworks.com Sales, pricing, and general information
5086477000 Phone
5086477001 Fax
The MathWorks, Inc. Mail
3 Apple Hill Drive
Natick, MA 017602098
For contact information about worldwide offices, see the MathWorks Web site.
LMI Control Toolbox User’s Guide
COPYRIGHT 1995 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used
or copied only under the terms of the license agreement. No part of this manual may be photocopied or repro
duced in any form without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by
or for the federal government of the United States. By accepting delivery of the Program, the government
hereby agrees that this software qualifies as "commercial" computer software within the meaning of FAR
Part 12.212, DFARS Part 227.72021, DFARS Part 227.72023, DFARS Part 252.2277013, and DFARS Part
252.2277014. The terms and conditions of The MathWorks, Inc. Software License Agreement shall pertain
to the government’s use and disclosure of the Program and Documentation, and shall supersede any
conflicting contractual terms or conditions. If this license fails to meet the government’s minimum needs or
is inconsistent in any respect with federal procurement law, the government agrees to return the Program
and Documentation, unused, to MathWorks.
MATLAB, Simulink, Stateflow, Handle Graphics, and RealTime Workshop are registered trademarks, and
TargetBox is a trademark of The MathWorks, Inc.
Other product or brand names are trademarks or registered trademarks of their respective holders.
Printing History: May 1995 First printing New for Version 1
i
Contents
Preface
About the Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1
Introduction
Linear Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Toolbox Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
LMIs and LMI Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
The Three Generic LMI Problems . . . . . . . . . . . . . . . . . . . . . . . 15
Further Mathematical Background . . . . . . . . . . . . . . . . . . . . . 19
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
2
Uncertain Dynamical Systems
Linear TimeInvariant Systems . . . . . . . . . . . . . . . . . . . . . . . . 23
SYSTEM Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Time and Frequency Response Plots . . . . . . . . . . . . . . . . . . . 26
Interconnections of Linear Systems . . . . . . . . . . . . . . . . . . . . 29
ii Contents
Model Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Uncertain StateSpace Models . . . . . . . . . . . . . . . . . . . . . . . . . 214
Polytopic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Affine ParameterDependent Models . . . . . . . . . . . . . . . . . . . . 215
Quantification of Parameter Uncertainty . . . . . . . . . . . . . . . . 217
Simulation of ParameterDependent Systems . . . . . . . . . . . . . 219
From Affine to Polytopic Models . . . . . . . . . . . . . . . . . . . . . . . . 220
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
LinearFractional Models of Uncertainty . . . . . . . . . . . . . . . 223
How to Derive Such Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Specification of the Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . 226
From Affine to LinearFractional Models . . . . . . . . . . . . . . . . . 232
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
3
Robustness Analysis
Quadratic Lyapunov Functions . . . . . . . . . . . . . . . . . . . . . . . . 33
LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Quadratic Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Maximizing the Quadratic Stability Region . . . . . . . . . . . . . . . . 38
Decay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Quadratic H∞ Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
ParameterDependent Lyapunov Functions . . . . . . . . . . . . 312
Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
µ Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Structured Singular Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Robust Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Robust Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
The Popov Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
Real Parameter Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
iii
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
4
StateFeedback Synthesis
MultiObjective StateFeedback . . . . . . . . . . . . . . . . . . . . . . . . 43
Pole Placement in LMI Regions . . . . . . . . . . . . . . . . . . . . . . . . 45
LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Extension to the MultiModel Case . . . . . . . . . . . . . . . . . . . . . . . 49
The Function msfsyn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
5
Synthesis of H∞ Controllers
H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Riccati and LMIBased Approaches . . . . . . . . . . . . . . . . . . . . . . 57
H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Validation of the ClosedLoop System . . . . . . . . . . . . . . . . . . . 513
MultiObjective H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . 515
LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
The Function hinfmix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
LoopShaping Design with hinfmix . . . . . . . . . . . . . . . . . . . . . . 520
iv Contents
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
6
Loop Shaping
The LoopShaping Methodology . . . . . . . . . . . . . . . . . . . . . . . . 62
The LoopShaping Methodology . . . . . . . . . . . . . . . . . . . . . . . . 63
Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Specification of the Shaping Filters . . . . . . . . . . . . . . . . . . . . 610
Nonproper Filters and sderiv . . . . . . . . . . . . . . . . . . . . . . . . . . 612
Specification of the Control Structure . . . . . . . . . . . . . . . . . 614
Controller Synthesis and Validation . . . . . . . . . . . . . . . . . . . 616
Practical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
Loop Shaping with Regional Pole Placement . . . . . . . . . . . 619
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
7
Robust GainScheduled Controllers
GainScheduled Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Synthesis of GainScheduled H• Controllers . . . . . . . . . . . . . 77
Simulation of GainScheduled Control Systems . . . . . . . . . . 79
Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
v
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
8
The LMI Lab
Background and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Overview of the LMI Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Specifying a System of LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A Simple Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
setlmis and getlmis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
lmivar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
lmiterm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
The LMI Editor lmiedit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
How It All Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
Retrieving Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
lmiinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
lminbr and matnbr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
LMI Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822
From Decision to Matrix Variables and Vice Versa . . . . . . 828
Validating Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Modifying a System of LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
dellmi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
dellmi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
setmvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
Advanced Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
Structured Matrix Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
ComplexValued LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
Specifying c
T
x Objectives for mincx . . . . . . . . . . . . . . . . . . . . . 838
Feasibility Radius . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
vi Contents
WellPosedness Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
SemiDefinite B(x) in gevp Problems . . . . . . . . . . . . . . . . . . . . 841
Efficiency and Complexity Issues . . . . . . . . . . . . . . . . . . . . . . . 841
Solving M + PTXQ + QTXTP < 0 . . . . . . . . . . . . . . . . . . . . . . . . 842
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
9
Command Reference
List of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
H∞ Control and Loop Shaping . . . . . . . . . . . . . . . . . . . . . . . . . . 96
LMI Lab: Specifying and Solving LMIs . . . . . . . . . . . . . . . . . . 97
LMI Lab: Additional Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Preface
Preface
viii
About the Authors
Dr. Pascal Gahinet is a reserach fellow at INRIA Rocquencourt, France. His
research interests include robust control theory, linear matrix inequalities,
numerical linear algebra, and numerical software for control.
Prof. Arkadi Nemirovski is with the Faculty of Industrial Engineering and
Management at Technion, Haifa, Israel. His research interests include convex
optimization, complexity theory, and nonparametric statistics.
Prof. Alan J. Laub is with the Electrical and Computer Engineering
Department of the University of California at Santa Barbara, USA. His
research interests are in numerical analysis, mathematical software, scientific
computation, computeraided control system design, and linear and largescale
control and filtering theory.
Mahmoud Chilali is completing his Ph.D. at INRIA Rocquencourt, France.
His thesis is on the theory and applications of linear matrix inequalities in
control.
Acknowledgments
ix
Acknowledgments
The authors wish to express their gratitude to all colleagues who directly or
indirectly contributed to the making of the LMI Control Toolbox. Special
thanks to Pierre Apkarian, Gregory Becker, Hiroyuki Kajiwara, and Anca
Ignat for their help and contribution. Many thanks also to those who tested and
helped refine the software, including Bobby Bodenheimer, Markus
Brandstetter, Eric Feron, K.C. Goh, Anders Helmersson, Ted Iwasaki, Jianbo
Lu, Roy Lurie, Jason Ly, John Morris, Ravi Prasanth, Michael Safonov,
Carsten Scherer, Andy Sparks, Mario Rotea, Matthew Lamont Tyler, Jim
Tung, and John Wen. Apologies, finally, to those we may have omitted.
The work of Pascal Gahinet was supported in part by INRIA.
Preface
x
1
Introduction
Linear Matrix Inequalities . . . . . . . . . . . . . 12
Toolbox Features . . . . . . . . . . . . . . . . . . 13
LMIs and LMI Problems . . . . . . . . . . . . . . . 14
The Three Generic LMI Problems . . . . . . . . . . . . 15
Further Mathematical Background . . . . . . . . . 19
References . . . . . . . . . . . . . . . . . . . . . 110
1 Introduction
12
Linear Matrix Inequalities
Linear Matrix Inequalities (LMIs) and LMI techniques have emerged as
powerful design tools in areas ranging from control engineering to system
identification and structural design. Three factors make LMI techniques
appealing:
•A variety of design specifications and constraints can be expressed as LMIs.
•Once formulated in terms of LMIs, a problem can be solved exactly by
efficient convex optimization algorithms (the “LMI solvers”).
•While most problems with multiple constraints or objectives lack analytical
solutions in terms of matrix equations, they often remain tractable in the
LMI framework. This makes LMIbased design a valuable alternative to
classical “analytical” methods.
See [9] for a good introduction to LMI concepts. The LMI Control Toolbox is
designed as an easy and progressive gateway to the new and fastgrowing field
of LMIs:
•For users mainly interested in applying LMI techniques to control design,
the LMI Control Toolbox features a variety of highlevel tools for the analysis
and design of multivariable feedback loops (see Chapters 2 through 7).
•For users who occasionally need to solve LMI problems, the “LMI Editor” and
the tutorial introduction to LMI concepts and LMI solvers provide for quick
and easy problem solving.
•For more experienced LMI users, the “LMI Lab” (Chapter 8) offers a rich,
flexible, and fully programmable environment to develop customized
LMIbased tools.
The LMI Control Toolbox implements stateoftheart interiorpoint LMI
solvers. While these solvers are significantly faster than classical convex
optimization algorithms, it should be kept in mind that the complexity of LMI
computations remains higher than that of solving, say, a Riccati equation. For
instance, problems with a thousand design variables typically take over an
hour on today's workstations. However, research on LMI optimization is still
very active and substantial speedups can be expected in the future. Thanks to
its efficient “structured” representation of LMIs, the LMI Control Toolbox is
geared to making the most out of such improvements.
Toolbox Features
13
Toolbox Features
The LMI Control Toolbox serves two purposes:
•Provide stateoftheart tools for the LMIbased analysis and design of robust
control systems
•Offer a flexible and userfriendly environment to specify and solve general
LMI problems (the LMI Lab)
The control design tools can be used without a priori knowledge about LMIs or
LMI solvers. These are dedicated tools covering the following applications of
LMI techniques:
•Specification and manipulation of uncertain dynamical systems (lineartime
invariant, polytopic, parameterdependent, etc.)
•Robustness analysis. Various measures of robust stability and performance
are implemented, including quadratic stability, techniques based on
parameterdependent Lyapunov functions, analysis, and Popov analysis.
•Multimodel/multiobjective statefeedback design
•Synthesis of outputfeedback H
∞
controllers via Riccati and LMIbased
techniques, including mixed H
2
/H
∞
synthesis with regional pole placement
constraints
•Loopshaping design
•Synthesis of robust gainscheduled controllers for timevarying
parameterdependent systems
For users interested in developing their own applications, the LMI Lab
provides a generalpurpose and fully programmable environment to specify
and solve virtually any LMI problem. Note that the scope of this facility is by
no means restricted to controloriented applications.
1 Introduction
14
LMIs and LMI Problems
A linear matrix inequality (LMI) is any constraint of the form
(11)
where
•x = (x
1
, . . . , x
N
) is a vector of unknown scalars (the decision or optimization
variables)
•A
0
, . . . , A
N
are given symmetric matrices
•< 0 stands for “negative definite,” i.e., the largest eigenvalue of A(x) is
negative
Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of (11) since
they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0, respectively.
The LMI (11) is a convex constraint on x since A(y) < 0 and A(z) < 0 imply that
. As a result,
•Its solution set, called the feasible set, is a convex subset of R
N
•Finding a solution x to (11), if any, is a convex optimization problem.
Convexity has an important consequence: even though (11) has no analytical
solution in general, it can be solved numerically with guarantees of finding a
solution when one exists. Note that a system of LMI constraints can be
regarded as a single LMI since
where diag (A
1
(x), . . . , A
K
(x)) denotes the blockdiagonal matrix with
A
1
(x), . . . , A
K
(x) on its diagonal. Hence multiple LMI constraints can be
imposed on the vector of decision variables x without destroying convexity.
A x ( ) := A
0
x
1
A
1
… x
N
A
N
0 < + + +
A
y z +
2

\ .
 
0 <
A
1
x ( ) 0 <
.
.
.
A
K
x ( ) 0 <
is equivalent to
¹
¦
¦
¦
´
¦
¦
¦
¦
A x ( ) := diag A (
1
x ( ),… A
K
x ( ) ) 0 < ,
LMIs and LMI Problems
15
In most control applications, LMIs do not naturally arise in the canonical form
(11), but rather in the form
L(X
1
, . . . , X
n
) < R(X
1
, . . . , X
n
)
where L(
.
) and R(
.
) are affine functions of some structured matrix variables X
1
,
. . . , X
n
. A simple example is the Lyapunov inequality
(12)
where the unknown X is a symmetric matrix. Defining x
1
, . . . , x
N
as the
independent scalar entries of X, this LMI could be rewritten in the form (11).
Yet it is more convenient and efficient to describe it in its natural form (12),
which is the approach taken in the LMI Lab.
The Three Generic LMI Problems
Finding a solution x to the LMI system
(13)
is called the feasibility problem. Minimizing a convex objective under LMI
constraints is also a convex problem. In particular, the linear objective
minimization problem
(14)
plays an important role in LMIbased design. Finally, the generalized
eigenvalue minimization problem
(15)
is quasiconvex and can be solved by similar techniques. It owes its name to the
fact that is related to the largest generalized eigenvalue of the pencil
(A(x),B(x)).
Many control problems and design specifications have LMI formulations [9].
This is especially true for Lyapunovbased analysis and design, but also for
A
T
X XA 0 < +
A x ( ) 0 <
Minimize c
T
x subject to A x ( ) 0 <
Minimize λ subject to
A x ( ) λB x ( ) <
B x ( ) 0 >
C x ( ) 0 <
¹
¦
´
¦
¦
1 Introduction
16
optimal LQG control, H
∞
control, covariance control, etc. Further applications
of LMIs arise in estimation, identification, optimal design, structural design
[6, 7], matrix scaling problems, and so on. The main strength of LMI formu
lations is the ability to combine various design constraints or objectives in a
numerically tractable manner.
A nonexhaustive list of problems addressed by LMI techniques includes the
following:
•Robust stability of systems with LTI uncertainty (µanalysis) [24, 21, 27]
•Robust stability in the face of sectorbounded nonlinearities (Popov criterion)
[22, 28, 13, 16]
•Quadratic stability of differential inclusions [15, 8]
•Lyapunov stability of parameterdependent systems [12]
•Input/state/output properties of LTI systems (invariant ellipsoids, decay
rate, etc.) [9]
•Multimodel/multiobjective state feedback design [4, 17, 3, 9, 10]
•Robust pole placement
•Optimal LQG control [9]
•Robust H
∞
control [11, 14]
•Multiobjective H
∞
synthesis [17, 23, 10, 18]
•Design of robust gainscheduled controllers [5, 2]
•Control of stochastic systems [9]
•Weighted interpolation problems [9]
To hint at the principles underlying LMI design, let’s review the LMI
formulations of a few typical design objectives.
Stability. the stability of the dynamical system
is equivalent to the feasibility of
Find P = P
T
such that A
T
P + P A < 0, P > I.
x
·
Ax =
LMIs and LMI Problems
17
This can be generalized to linear differential inclusions (LDI)
where A(t) varies in the convex envelope of a set of LTI models:
A sufficient condition for the asymptotic stability of this LDI is the feasibility of
Find P = P
T
such that
RMS gain . the randommeansquares (RMS) gain of a stable LTI system
is the largest input/output gain over all bounded inputs u(t). This gain is the
global minimum of the following linear objective minimization problem [1, 26,
25].
Minimize γ over X = X
T
and γ such that
LQG performance . for a stable LTI system
x
·
A t ( )x =
A t ( ) Co A
1
,…,A
n
{ } ∈ a
i
A
i
:
i 1 =
n
∑
a
i
0 ≥ a
i
i 1 =
N
∑
1 = ,
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
=
A
i
T
P PA
i
0 P I. > , < +
x
·
Ax Bu + =
y Cx Du + =
¹
´
¦
A
T
X XA + XB C
T
B
T
X γ – I D
T
C D γ – I \ .




 
0 <
X 0 >
G
x
·
Ax Bw + =
y Cx =
¹
´
¦
1 Introduction
18
where w is a white noise disturbance with unit covariance, the LQG or H
2
performance G
2
is defined by
It can be shown that
Hence is the global minimum of the LMI problem
Minimize Trace (Q) over the symmetric matrices P,Q such that
Again this is a linear objective minimization problem since the objective Trace
(Q) is linear in the decision variables (free entries of P,Q).
G
2
2
: = E
T ∞ →
lim
1
T
 y
T
0
T
∫
t ( )y t ( )dt
¹ )
´ `
¦ ¹
1
2π
 G
H
∞ –
∞
∫
jω ( )G jω ( )dω =
G
2
2
inf Trace CPC
T
( ) { : AP PA
T
BB
T
0} < + + =
G
2
2
AP PA
T
BB
T
0 < + +
Q CP
PC
T
P
\ .

 
0 >
Further Mathematical Background
19
Further Mathematical Background
Efficient interiorpoint algorithms are now available to solve the three generic
LMI problems (13)–(15) defined in “The Three Generic LMI Problems” on
page 15. These algorithms have a polynomialtime complexity. That is, the
number N(ε) of flops needed to compute an εaccurate solution is bounded by
N(ε) ð M N
3
log(V/ε)
where M is the total row size of the LMI system, N is the total number of scalar
decision variables, and V is a datadependent scaling factor. The LMI Control
Toolbox implements the Projective Algorithm of Nesterov and Nemirovski [20,
19]. In addition to its polynomialtime complexity, this algorithm does not
require an initial feasible point for the linear objective minimization problem
(14) or the generalized eigenvalue minimization problem (15).
Some LMI problems are formulated in terms of inequalities rather than strict
inequalities. For instance, a variant of (14) is
Minimize c
T
x subject to A(x) ð 0.
While this distinction is immaterial in general, it matters when A(x) can be
made negative semidefinite but not negative definite. A simple example is
(16)
Such problems cannot be handled directly by interiorpoint methods which
require strict feasibility of the LMI constraints. A wellposed reformulation of
(16) would be
Minimize c
T
x subject to x Š 0.
Keeping this subtlety in mind, we always use strict inequalities in this manual.
Minimize c
T
x subject to
x x
x x \ .

 
0. ≥
1 Introduction
110
References
[1] Anderson, B.D.O, and S. Vongpanitlerd, Network Analysis, PrenticeHall,
Englewood Cliffs, 1973.
[2] Apkarian, P., P. Gahinet, and G. Becker, “SelfScheduled H
∞
Control of
Linear ParameterVarying Systems,” Proc. Amer. Contr. Conf., 1994, pp.
856860.
[3] Bambang, R., E. Shimemura, and K. Uchida, “Mixed H
2
/H
∞
Control with
Pole Placement,” StateFeedback Case,“ Proc. Amer. Contr. Conf., 1993, pp.
27772779.
[4] Barmish, B.R., “Stabilization of Uncertain Systems via Linear Control,“
IEEE Trans. Aut. Contr., AC–28 (1983), pp. 848850.
[5] Becker, G., Packard, P., “Robust Performance of LinearParametrically
Varying Systems Using ParametricallyDependent Linear Feedback,” Systems
and Control Letters, 23 (1994), pp. 205215.
[6] Bendsoe, M.P., A. BenTal, and J. Zowe, “Optimization Methods for Truss
Geometry and Topology Design,” to appear in Structural Optimization.
[7] BenTal, A., and A. Nemirovski, “Potential Reduction PolynomialTime
Method for Truss Topology Design,” to appear in SIAM J. Contr. Opt.
[8] Boyd, S., and Q. Yang, “Structured and Simultaneous Lyapunov Functions
for System Stability Problems,” Int. J. Contr., 49 (1989), pp. 22152240.
[9] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, Philadelphia, 1994.
[10] Chilali, M., and P. Gahinet, “H
∞
Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr. Also in Proc. Conf.
Dec. Contr., 1994, pp. 553558.
[11] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H
∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421448.
[12] Gahinet, P., P. Apkarian, and M. Chilali, “Affine ParameterDependent
Lyapunov Functions for Real Parametric Uncertainty,” Proc. Conf. Dec. Contr.,
1994, pp. 20262031.
References
111
[13] Haddad, W.M. and D.S. Berstein,“ParameterDependent Lyapunov
Functions, Constant Real Parameter Uncertainty, and the Popov Criterion in
Robust Analysis and Synthesis: Part 1 and 2,” Proc. Conf. Dec. Contr., 1991, pp.
22742279 and 26322633.
[14] Iwasaki, T., and R.E. Skelton, “All Controllers for the General H
∞
Control
Problem: LMI Existence Conditions and StateSpace Formulas,” Automatica,
30 (1994), pp. 13071317.
[15] Horisberger, H.P., and P.R. Belanger, “Regulators for Linear
TimeVarying Plants with Uncertain Parameters,” IEEE Trans. Aut. Contr.,
AC–21 (1976), pp. 705708.
[16] How, J.P., and S.R. Hall, “Connection between the Popov Stability
Criterion and Bounds for Real Parameter Uncertainty,” Proc. Amer. Contr.
Conf., 1993, pp. 10841089.
[17] Khargonekar, P.P., and M.A. Rotea,“Mixed H
2
/H
∞
Control: a Convex
Optimization Approach,” IEEE Trans. Aut. Contr., 39 (1991), pp. 824837.
[18] Masubuchi, I., A. Ohara, and N. Suda, “LMIBased Controller Synthesis:
A Unified Formulation and Solution,” submitted to Int. J. Robust and
Nonlinear Contr., 1994.
[19] Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, pp. 840844.
[20] Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in
Convex Programming: Theory and Applications, SIAM Books, Philadelphia,
1994.
[21] Packard, A., and J.C. Doyle, “The Complex Structured Singular Value,”
Automatica, 29 (1994), pp. 71109.
[22] Popov, V.M., “Absolute Stability of Nonlinear Systems of Automatic
Control,” Automation and Remote Control, 22 (1962), pp. 857875.
[23] Scherer, C., “Mixed H
2
H
∞
Control,” to appear in Trends in Control: A
European Perspective, volume of the special contributions to the ECC 1995.
[24] Stein, G. and J.C. Doyle, “Beyond Singular Values and Loop Shapes,” J.
Guidance, 14 (1991), pp. 516.
[25] Vidyasagar, M., Nonlinear System Analysis, PrenticeHall, Englewood
Cliffs, 1992.
1 Introduction
112
[26] Willems, J.C., “LeastSquares Stationary Optimal Control and the
Algebraic Riccati Equation,” IEEE Trans. Aut. Contr., AC–16 (1971), pp.
621634.
[27] Young, P. M., M. P. Newlin, and J. C. Doyle, “Let's Get Real,” in Robust
Control Theory, Springer Verlag, 1994, pp. 143174.
[28] Zames, G., “On the InputOutput Stability of TimeVarying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228238 and 465476.
2
Uncertain Dynamical
Systems
Linear TimeInvariant Systems . . . . . . . . . . . 23
SYSTEM Matrix . . . . . . . . . . . . . . . . . . . 23
Time and Frequency Response Plots . . . . . . . . . 26
Interconnections of Linear Systems . . . . . . . . . 29
Model Uncertainty . . . . . . . . . . . . . . . . .212
Uncertain StateSpace Models . . . . . . . . . . . .214
Polytopic Models . . . . . . . . . . . . . . . . . . .214
Affine ParameterDependent Models . . . . . . . . . . .215
Quantification of Parameter Uncertainty . . . . . . . . .217
Simulation of ParameterDependent Systems . . . . . . .219
From Affine to Polytopic Models . . . . . . . . . . . . .220
Example . . . . . . . . . . . . . . . . . . . . . .221
LinearFractional Models of Uncertainty . . . . . . .223
How to Derive Such Models . . . . . . . . . . . . . . .223
Specification of the Uncertainty . . . . . . . . . . . . .226
From Affine to LinearFractional Models . . . . . . . . .232
References . . . . . . . . . . . . . . . . . . . . .235
2 Uncertain Dynamical Systems
22
The LMI Control Toolbox offers a variety of tools to facilitate the description
and manipulation of uncertain dynamical systems. These include functions to:
•Manipulate the statespace realization of linear timeinvariant (LTI)
systems as a single SYSTEM matrix
•Form interconnections of linear systems
•Specify linear systems with uncertain statespace matrices (polytopic
differential inclusions, systems with uncertain or timevarying physical
parameters, etc.)
•Describe linearfractional models of uncertainty
The next sections provide a tutorial introduction to uncertain dynamical
systems and an overview of these facilities.
Linear TimeInvariant Systems
23
Linear TimeInvariant Systems
The LMI Control Toolbox provides streamlined tools to manipulate statespace
representations of linear timeinvariant (LTI) systems. These tools handle
general LTI models of the form
where A, B, C, D, E are real matrices and E is invertible, as well as their
discretetime counterpart
Recall that the vectors x(t), u(t), y(t) denote the state, input, and output
trajectories. Similarly, x
k
, u
k
, y
k
denote the values of the state, input, and
output vectors at the sample time k.
The “descriptor” formulation ( ) proves useful when specifying
parameterdependent systems (see “Affine ParameterDependent Models” on
page 215) and also avoids inverting E when this inversion is poorly
conditioned. Moreover, many dynamical systems are naturally written in
descriptor form. For instance, the secondorder system
.
admits the statespace representation
where .
SYSTEM Matrix
For convenience, the statespace realization of LTI systems is stored as a single
MATLAB matrix called a SYSTEM matrix. Specifically, a continuous or
E
dx
dt
 Ax t ( ) Bu t ( ) + =
y t ( ) Cx t ( ) Du t ( ) + =
Ex
k 1 +
Ax
k
Bu
k
+ =
y
k
Cx
k
Du
k
+ =
E I ≠
mx
··
fx
·
kx + + u y x = , =
1 0
0 m \ .

 
dX
dt

0 1
k – f – \ .

 
X
0
1 \ .

 
u y 1 0 , ( )X = , + =
X t ( ) :=
x t ( )
x
·
t ( ) \ .

 
2 Uncertain Dynamical Systems
24
discretetime LTI system with statespace matrices A, B, C, D, E is represented
by the structured matrix
where . The upper right entry n corresponds to the number of states
(i.e., A ∈ R
n×n
) while the entry Inf is used to differentiate SYSTEM matrices
from regular matrices. This data structure is similar to that used in the µ
Analysis and Synthesis Toolbox.
The functions ltisys and ltiss create SYSTEM matrices and extract state
space data from them. For instance,
sys = ltisys( 1,1,1,0)
specifies the LTI system
To retrieve the values of A, B, C, D from the SYSTEM matrix sys, type
[a,b,c,d] = ltiss(sys)
For singleinput/singleoutput (SISO) systems, the function ltitf returns the
numerator/denominator representation of the transfer function
Conversely, the command
sys = ltisys('tf',n,d)
returns a statespace realization of the SISO transfer function n(s)/d(s) in
SYSTEM format. Here n and d are the vector representation of n(s) and d(s) (type
help poly for details).
The number of states, inputs, and outputs of a linear system are retrieved from
the SYSTEM matrix with sinfo:
…
A j E I – ( ) + B
n
0
C D
0
0 –Inf
j 1 – =
x
·
x – u y x = , + =
G s ( ) D C sE A – ( )
1 –
B
n s ( )
d s ( )
 = + =
Linear TimeInvariant Systems
25
sinfo(sys)
System with 1 state(s), 1 input(s), and 1 output(s)
Similarly, the poles of the system are given by spol:
spol(sys)
ans =
1
The function ssub selects particular inputs and outputs of a system and
returns the corresponding subsystem. For instance, if G has two inputs and
three outputs, the subsystem mapping the first input to the second and third
outputs is given by
ssub(g,1,2:3)
The function sinv computes the inverse H(s) = G(s)
–1
of a system G(s) with
square invertible D matrix:
h = sinv(g)
Finally, the statespace realization of an LTI system can be “balanced” with
sbalanc. This function seeks a diagonal scaling similarity that reduces the
norms of A, B, C.
2 Uncertain Dynamical Systems
26
Time and Frequency Response Plots
Time and frequency responses of linear systems are plotted directly from the
SYSTEM matrix with the function splot. For the sake of illustration, consider
the secondorder system
with m = 2, f = 0.01, and k = 0.5. This system is specified in descriptor form by
sys = ltisys([0 1, 0.5 0.01],[0,1],[1 0],0,[1 0,0 2])
the last input argument being the E matrix. To plot its Bode diagram, type
splot(sys,'bo')
The second argument is the string consisting of the first two letters of “bode.”
This command produces the plot of Figure 21. A third argument can be used
to adjust the frequency range. For instance,
splot(sys,'bo',logspace( 1,1,50))
displays the Bode plot of Figure 22 instead.
Figure 21: splot (sys,'bo')
mx
··
fx
·
kx + + u y x = , =
10
−1
10
0
−50
0
50
Frequency (rad/sec)
G
a
i
n
d
B
10
−1
10
0
−90
−180
0
Frequency (rad/sec)
P
h
a
s
e
d
e
g
Time and Frequency Response Plots
27
Figure 22: splot (sys,'bo',logspace (–1,1,50))
To draw the singular value plot of this secondorder system, type
splot(sys,'sv').
For a system with transfer function G(s) = D + C(sE – A)
–1
B, the singular value
plot consists of the curves
(ω, σ
i
(G(jω)))
where σ
i
denotes the ith singular value of a matrix M in the order
σ
1
(M) Š σ
2
(M) Š . . . Š σ
p
(M).
Similarly, the step and impulse responses of this system are plotted by
splot(sys,'st') and splot(sys,'im'), respectively. See the “Command
Reference” chapter for a complete list of available diagrams.
The function splot is also applicable to discretetime systems. For such
systems, the sampling period T must be specified to plot frequency responses.
For instance, the Bode plot of the system
10
−1
10
0
10
1
−50
0
50
Frequency (rad/sec)
G
a
i
n
d
B
10
−1
10
0
10
1
−90
−180
0
Frequency (rad/sec)
P
h
a
s
e
d
e
g
2 Uncertain Dynamical Systems
28
with sampling period t = 0.01 s is drawn by
splot(ltisys(0.1,0.2, 1),0.01,'bo')
x
k 1 +
0.1x
k
0.2u
k
+ =
y
k
x –
k
=
¹
´
¦
Interconnections of Linear Systems
29
Interconnections of Linear Systems
Tools are provided to form series, parallel, and simple feedback inter
connections of LTI systems. More complex interconnections can be built either
incrementally with these basic facilities or directly with sconnect
(see“Polytopic Models” on page 214 for details). The interconnection functions
work on the SYSTEM matrix representation of dynamical systems and their
names start with an s. Note that most of these functions are also applicable to
polytopic or parameter dependent models (see “Polytopic Models” on
page 214).
Series and parallel interconnections are performed by sadd and smult. In
terms of transfer functions,
sadd(g1,g2)
returns the system with transfer function G
1
(s) + G
2
(s) while
smult(g1,g2)
returns the system with transfer function G
2
(s)G
1
(s). Both functions take up to
ten input arguments, at most one of which can be a parameterdependent
system. Similarly, the command
sdiag(g1,g2)
appends (concatenates) the systems G
1
and G
2
and returns the system with
transfer function
This corresponds to combining the input/output relations
y
1
= G
1
(s)u
1
, y
2
= G
2
(s)u
2
into the single relation
G s ( )
G
1
s ( ) 0
0 G
2
s ( )
\ .


 
=
y
1
y
2 \ .


 
G s ( )
u
1
u
2 \ .


 
=
2 Uncertain Dynamical Systems
210
Finally, the functions sloop and slft form basic feedback interconnections.
The function sloop computes the closedloop mapping between r and y in the
loop of Figure 23. The result is a statespace realization of the transfer
function
(I – εG
1
G
2
)
–1
G
1
where ε = ±1.
Figure 23: sloop
This function is useful to specify simple feedback loops. For instance, the
closedloop system clsys corresponding to the twodegreeoffreedom tracking
loop
is derived by setting G
1
(s) = G(s)K(s) and G
2
(s) = 1:
clsys = smult(c, sloop(smult(k,g),1))
The function slft forms the more general feedback interconnection of Figure
24 and returns the closedloop mapping from to . To form this
interconnection when u ∈ R
2
and y ∈ R
3
, the command is
slft(P1,P2,2,3)
r
y +
−ε
G
1
(s)
G
2
(s)
r y +
−
K G C
ω
1
ω
2
\ .
 
z
1
z
2
\ .
 
Interconnections of Linear Systems
211
The last two arguments dimension u and y. This function is useful to compute
linearfractional interconnections such as those arising in H
∞
theory and its
extensions (see “How to Derive Such Models” on page 223).
Figure 24: slft
w
1 z
1
u y
w
2
z
2
P
1
(s)
P
2
(s)
2 Uncertain Dynamical Systems
212
Model Uncertainty
The notion of uncertain dynamical system is central to robust control theory.
For control design purposes, the possibly complex behavior of dynamical
systems must be approximated by models of relatively low complexity. The gap
between such models and the true physical system is called the model
uncertainty. Another cause of uncertainty is the imperfect knowledge of some
components of the system, or the alteration of their behavior due to changes in
operating conditions, aging, etc. Finally, uncertainty also stems from physical
parameters whose value is only approximately known or varies in time. Note
that model uncertainty should be distinguished from exogenous actions such as
disturbances or measurement noise.
The LMI Control Toolbox focuses on the class of dynamical systems that can be
approximated by linear models up to some possibly nonlinear and/or
timevarying model uncertainty. When deriving the nominal model and
estimating the uncertainty, two fundamental principles must be remembered:
•Uncertainty should be small where high performance is desired (tradeoff
between performance and robustness). In other words, the linear model
should be sufficiently accurate in the control bandwidth.
•The more information you have about the uncertainty (phase, structure,
time invariance, etc.), the higher the achievable performance will be.
There are two major classes of uncertainty:
•Dynamical uncertainty, which consists of dynamical components neglected
in the linear model as well as of variations in the dynamical behavior during
operation. For instance, highfrequency flexible modes, nonlinearities for
large inputs, slow time variations, etc.
•Parameter uncertainty, which stems from imperfect knowledge of the
physical parameter values, or from variations of these parameters during
operation. Examples of physical parameters include stiffness and damping
coefficients in mechanical systems, aerodynamical coefficients in flying
devices, capacitors and inductors in electric circuits, etc.
Other important characteristics of uncertainty include whether it is linear or
nonlinear, and whether it is time invariant or time varying. Model uncertainty
is generally a combination of dynamical and parametric uncertainty, and may
arise at several different points in the control loop. For instance, there may be
Model Uncertainty
213
dynamical uncertainty on the system actuators, and parametric uncertainty on
some sensor coefficients. Two representations of model uncertainty are used in
the LMI Control Toolbox:
•Uncertain statespace models. This representation is relevant for systems
described by dynamical equations with uncertain and/or timevarying
coefficients.
•Linearfractional representation of uncertainty. Here the uncertain system is
described as an interconnection of known LTI systems with uncertain
components called “uncertainty blocks.” Each uncertainty block ∆
i
(
.
)
represents a family of systems of which only a few characteristics are known.
For instance, the only available information about ∆
i
may be that it is a
timeinvariant nonlinearity with gain less than 0.01.
Determinant factors in the choice of representation include the available model
(statespace equations, frequencydomain model, etc.) and the analysis or
synthesis tool to be used.
2 Uncertain Dynamical Systems
214
Uncertain StateSpace Models
Physical models of a system often lead to a statespace description of its
dynamical behavior. The resulting statespace equations typically involve
physical parameters whose value is only approximately known, as well as
approximations of complex and possibly nonlinear phenomena. In other words,
the system is described by an uncertain statespace model
E = Ax + Bu, y = Cx + Du
where the statespace matrices A, B, C, D, E depend on uncertain and/or
timevarying parameters or vary in some bounded sets of the space of matrices.
Of particular relevance to this toolbox are the classes of polytopic or
parameterdependent models discussed next. We collectively refer to such
models as Psystems, “P–” standing for “polytopic or parameterdependent.”
Psystems are specified with the function psys and manipulated like ordinary
LTI systems except for a few specific restrictions.
Polytopic Models
We call polytopic system a linear timevarying system
E(t) = A(t) x + B(t) u
y = C(t) x + D(t) u
whose SYSTEM matrix varies within a fixed polytope
of matrices, i.e.,
where S
1
, . . . , S
k
are given vertex systems:
(21)
In other words, S(t) is a convex combination of the SYSTEM matrices S
1
, . . . , S
k
.
The nonnegative numbers α
1
, . . . , α
k
are called the polytopic coordinates of S.
x
·
x
·
S t ( )
A t ( ) jE t ( ) + B t ( )
C t ( ) D t ( )
=
S t ( ) Co S
1
… S
k
, , { } ∈ := α
i
S
i
i 1 =
k
∑
: α
i
0 α
i
i 1 =
k
∑
1 = , ≥
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
S
1
A
1
jE
1
+ B
1
C
1
D
1
,…S
k
A
k
jE
k
+ B
k
C
k
D
k
= =
Uncertain StateSpace Models
215
Such models are also called polytopic linear differential inclusions in the
literature [3] and arise in many practical situations, including:
•Multimodel representation of a system, each model being derived around
particular operating conditions
•Nonlinear systems of the form
= A(x) x + B(x) u, y = C(x) x + D(x) u
•Statespace models depending affinely on timevarying parameters (see
“From Affine to Polytopic Models” on page 220)
A simple example is the system
= (sin x) x
whose state matrix A = sin x ranges in the polytope
A Œ Co{–1, 1} = [–1, 1].
Polytopic systems are specified by the list of their vertex systems, i.e., by the
SYSTEM matrices S
1
, . . . , S
k
in (21). For instance, a polytopic model taking
values in the convex envelope of the three LTI systems s1, s2, s3 is declared
by
polsys = psys([s1 s2 s3])
Affine ParameterDependent Models
The equations of physics often involve uncertain or timevarying coefficients.
When the system is linear, this naturally gives rise to parameterdependent
models (PDS) of the form
E(p) = A(p) x + B(p) u
y = C(p) x + D(p) u
where A(
.
), . . . , E(
.
) are known functions of some parameter vector
p = (p
1
, . . . , p
n
). Such models commonly arise from the equations of motion,
aerodynamics, circuits, etc.
The LMI Control Toolbox offers various tools to analyze the stability and
performance of parameterdependent systems with an affine dependence on
the parameter vector p = (p
1
, . . . , p
n
). That is, PDSs where
A(p) = A
0
+ p
1
A
1
+ . . .+ p
n
A
n
, B(p) = B
0
+ p
1
B
1
+ . . .+ p
n
B
n
,
x
·
x
·
x
·
2 Uncertain Dynamical Systems
216
and so on. Affine parameterdependent models are wellsuited for Lyapunov
based analysis and synthesis and are easily converted to linearfractional
uncertainty models for smallgainbased design (see the nonlinear spring
example in “SectorBounded Uncertainty” on page 230).
With the notation
the affine dependence on p is written more compactly in SYSTEM matrix terms
as
S(p) = S
0
+ p
1
S
1
+ . . . + p
n
S
n
.
The system “coefficients” S
0
, . . . , S
n
fully characterize the dependence on the
uncertain parameters p
1
, . . . , p
n
. Note that S
0
, . . . , S
n
need not represent
meaningful dynamical systems. Only their combination S(p) is a relevant
description of the problem.
Affine parameterdependent systems are specified with psys by providing
•A description of the parameter vector p in terms of bounds on the parameter
values and rates of variation
•The list of SYSTEM matrix coefficients S
0
, . . . , S
n
For instance, the system
S(p) = S
0
+ p
1
S
1
+ p
2
S
2
is defined by
s0 = ltisys(a0,b0,c0,d0,e0)
s1 = ltisys(a1,b1,c1,d1,e1)
s2 = ltisys(a2,b2,c2,d2,e2)
affsys = psys(pv, [s0 s1 s2])
where pv is the parameter description returned by pvec (see next subsection for
details). The output affsys is a structured matrix storing all relevant data.
S p ( )
A p ( ) jE p ( ) + B p ( )
C p ( ) D p ( ) \ .

 
S
i
A
i
jE
i
+ B
i
C
i
D
i \ .


 
, = , =
Uncertain StateSpace Models
217
Important: By default, ltisys sets the E matrix to the identity. Omitting the
arguments e0, e1, e2 altogether results in setting E(p) = I + (p
1
+ p
2
)I.
To specify an affine PDS with a parameterindependent E matrix (e.g.,
E(p) = I), you must explicitly set E
1
= E
2
= 0 by typing
s0 = ltisys(a0,b0,c0,d0)
s1 = ltisys(a1,b1,c1,d1,0)
s2 = ltisys(a2,b2,c2,d2,0)
Quantification of Parameter Uncertainty
Parameter uncertainty is quantified by the range of parameter values and
possibly the rates of parameter variation. This is done with the function pvec.
The characteristics of parameter vectors defined with pvec are retrieved with
pvinfo.
The parameter uncertainty range can be described as a box in the parameter
space. This corresponds to cases where each uncertain or timevarying
parameter p
i
ranges between two empirically determined extremal values
and :
(21)
If p = (p
1
, . . . , p
n
) is the vector of all uncertain parameters, (22) delimits a
hyperrectangle of the parameter space R
n
called the parameter box. Consider
the example of an electrical circuit with uncertain resistor ρ and capacitor c
ranging in
ρ ∈ [600, 1000] c ∈ [1, 5]
The corresponding parameter vector p = (ρ, c) takes values in the box drawn in
Figure 25. This uncertainty range is specified by the commands:
range = [600 1000,1 5]
p = pvec('box',range)
The ith row of the matrix range lists the lower and upper bounds on p
i
.
p
i
p
i
p
i
p
i
p
i
, [ ]. ∈
2 Uncertain Dynamical Systems
218
Similarly, bounds on the rate of variation of p
i
(t) are specified by adding a
third argument rate to the calling list of pvec. For instance, the constraints
are incorporated by
rate = [0.1 1, 0.001 0.001]
p = pvec('box',range,rate)
All parameters are assumed to be timeinvariant when rate is omitted. Slowly
varying parameters can be specified in this manner. In general, robustness
against fast parameter variations is more stringent than robustness against
constant but uncertain parameters.
Figure 25: Parameter box
Alternatively, uncertain parameter vectors can be specified as ranging in a
polytopic of the parameter space R
n
like the one drawn in Figure 26 for n = 2.
This polytope is characterized by the three vertices:
Π
1
= (1, 4), Π
2
= (3, 8), Π
3
= (10, 1).
An uncertain parameter vector p with this range of values is defined by
pi1=[1,4], pi2=[3,8], pi3=[10,1]
p = pvec('pol',[pi1,pi2,pi3])
p
·
i
0.1 ρ
·
t ( ) 1 c
·
t ( ) 0.001 ≤ , ≤ ≤
c
p
600 1000
1
5
range of parameter
values
Uncertain StateSpace Models
219
The string 'pol' indicates that the parameter range is defined as a polytope.
Figure 26: Polytopic parameter range
Simulation of ParameterDependent Systems
The function pdsimul simulates the time response of affine parameter
dependent systems along given parameter trajectories. The parameter vector
p(t) is required to range in a box (type 'box' of pvec). The parameter trajectory
is defined by a function p = fun(t) returning the value of p at time t. For
instance,
pdsimul(pds,'traj')
plots the step response of a singleinput parameterdependent system pds
along the parameter trajectory defined in traj.m. To obtain other time
responses, specify an input signal as in
pdsimul(pds,'traj',1,'sin')
This command plots the response to a sine wave between t = 0 and t = 1 s.
Multiinput multioutput (MIMO) systems are simulated similarly by
specifying an appropriate input function.
p
2
p
1
3 10
1
8
4
1
Π
2
Π
3
Π
1
range of values of p
2 Uncertain Dynamical Systems
220
From Affine to Polytopic Models
Affine parameterdependent models
are readily converted to polytopic ones. Suppose for instance that each
parameter p
i
ranges in some interval [ ]. The parameter vector
p = (p
1
, . . . , p
n
) then takes values in a parameter box with 2
n
corners
Π
1
, Π
2
, . . . . If the function S(p) is affine in p, it maps this parameter box to
some polytope of SYSTEM matrices. More precisely, this polytope is the convex
envelope of the images S(Π
1
), S(Π
2
), . . . of the parameter box corners
Π
1
, Π
2
, . . . as illustrated by the figure below.
Figure 27: Mapping the parameter box to a polytope of systems
Given an affine parameterdependent system, the function aff2pol performs
this mapping and returns an equivalent polytopic model. The syntax is
polsys = aff2pol(affsys)
where affsys is the affine model. The resulting polytopic model polsys
consists of the instances of affsys at the vertices of the parameter range.
S p ( )
A p ( ) jE p ( ) + B p ( )
C p ( ) D p ( ) \ .

 
=
p
i
p
i
,
p
1
p
2
Π
4
Π
3
Π
1 Π
2
parameter box
polytope of matrices
S(p)
Uncertain StateSpace Models
221
Example
We conclude with an example illustrating the manipulation of polytopic and
parameterdependent models in the LMI Control Toolbox.
Example 2.1. Consider a simple electrical circuit with equation
where the inductance L, the resistor R, and the capacitor C are uncertain
parameters ranging in
L ∈ [10, 20], R ∈ [1, 2], C ∈ [100, 150].
A statespace representation of its undriven response is
E(L, R, C) = A(L, R, C)x
where and
This affine system is specified with psys as follows:
a0 = [0 1;0 0]; e0 = [1 0;0 0]; s0 = ltisys(a0,e0)
aL = zeros(2); eL = [0 0;0 1]; sL = ltisys(aL,eL)
aR = [0 0; 1 0]; sR = ltisys(aR,0)
aC = [0 0;0 1]; sC = ltisys(aC,0)
pv=pvec('box',[10 20;1 2;100 150])
pds = psys(pv,[s0 sL sR sC])
The first SYSTEM matrix s0 contains the statespace data for L = R = C = 0 while
sL, sR, sC define the coefficient matrices of L, R, C. The range of parameter
values is specified by the pvec command.
The results can be checked with psinfo and pvinfo:
L
d
2
i
dt
 R
di
dt
 Ci + V = +
x
·
x
T
i
di
dt
 ,
\ .
 
=
A L R C , , ( )
0 1
R – C – \ .

 
0 1
0 0 \ .

 
L 0 R
0 0
1 – 0 \ .

 
C
0 0
0 1 – \ .

 
+ + × + = =
E L R C , , ( )
1 0
0 L \ .

 
1 0
0 0 \ .

 
L
0 0
0 1 \ .

 
R 0 C 0 × + × . + + = =
2 Uncertain Dynamical Systems
222
psinfo(pds)
Affine parameterdependent system with 3 parameters (4 systems)
Each system has 2 state(s), 0 input(s), and 0 output(s)
pvinfo(pv)
Vector of 3 parameters ranging in a box
The system can also be evaluated for given values of L, R, C:
sys = psinfo(pds,'eval',[15 1.2 150])
[a,b,c,d,e] = ltiss(sys)
The matrices a and e now contain the values of A(L, R, C) and E(L, R, C) for L
= 15, R = 1.2, and C = 150.
Finally, the polytopic counterpart of this affine model is given by
pols = aff2pol(pds)
psinfo(pols)
Polytopic model with 8 vertex systems
Each system has 2 state(s), 0 input(s), and 0 output(s)
LinearFractional Models of Uncertainty
223
LinearFractional Models of Uncertainty
For systems with both dynamical and parametric uncertainty, a more general
representation of uncertainty is the linearfractional model of Figure 28 [1, 2].
In this generic model:
•The LTI system P(s) gathers all known LTI components (controller, nominal
models of the system, sensors, and actuators, . . .)
•The input vector u includes all external actions on the system (disturbance,
noise, reference signal, . . .) and the vector y consists of all output signals
generated by the system
• ∆ is a structured description of the uncertainty. Specifically,
∆ = diag(∆
1
, . . . , ∆
r
)
where each uncertainty block ∆
i
accounts for one particular source of
uncertainty (neglected dynamics, nonlinearity, uncertain parameter, etc.).
The diagonal structure of ∆ reflects how each uncertainty component ∆
i
enters the loop and affects the overall behavior of the true system.
Figure 28: Linearfractional uncertainty
How to Derive Such Models
Before proceeding with the specification of such uncertainty models, we
illustrate how natural uncertainty descriptions can be recast into the standard
linearfractional model of Figure 28.
u
y
P(s)
∆
w
q
2 Uncertain Dynamical Systems
224
Example 2.2 . Consider the feedback loop
where
•G(s) is an uncertain LTI system approximated within 5% accuracy by the
secondorder nominal model
In other words, G(s) = G
0
(s)(I + ∆(s)) where the multiplicative dynamical
uncertainty ∆(s) is any stable LTI system with RMS gain less than 0.05
•k(t) is a fluctuating gain modeled by
k(t) = k
0
+ δ(t)
where k
0
is the nominal value and δ(t) < 0.1 k
0
.
To derive a linearfractional representation of this uncertain feedback system,
first separate the uncertainty from the nominal models by rewriting the loop as
r y +
−
G(s)
k(t)
G
0
s ( )
1
s
2
0.01s 1 + +
 =
r y +
−
δ(.)
k
0
+
+
G
0
+
+
∆
LinearFractional Models of Uncertainty
225
Then pull out all uncertain components and lump them together into a single
block as follows:
If and denote the input and output vectors of the uncertainty
block, the plant P(s) is simply the transfer function from to in the
diagram.
r
+
k
0
y
+
+
+
G
0
+
δ 0
0 ∆ \ .

 
−
e
q
1
q
2
P
2
p
1
p
1
p
2 \ .

 
q
1
q
2 \ .

 
q
1
q
2
r
\ .


  p
1
p
2
y
\ .


 
r
y
P(s)
δ 0
0 ∆ \ .

 
q
1
q
2
\ .
 
p
1
p
2
\ .
 
2 Uncertain Dynamical Systems
226
Hence the SYSTEM matrix of P(s) can be computed with sconnect or by
specifying the following interconnection:
Specification of the Uncertainty
In linearfractional uncertainty models, each uncertainty block ∆
i
is a
dynamical system characterized by
•Its dynamical nature: linear timeinvariant or timevarying, nonlinear
memoryless, arbitrary nonlinear
•Its dimensions and structure (full block or scalar block ∆
i
= δ
i
× I). Scalar
blocks are used to represent uncertain parameters.
•Whether αii is real or complex in the case of scalar uncertainty ∆
i
= δ
i
by I
•Quantitative information such as norm bounds or sector bounds
The properties of each individual uncertainty block ∆
i
are specified with the
function ublock. These individual descriptions are then appended with udiag
to form a complete description of the structured uncertainty
∆ = diag(∆
1
, . . . , ∆
r
)
Proper quantification of the uncertainty is important since this determines
achievable levels of robust stability and performance. The basics about
quantification issues are reviewed next.
r
+
k
0
y
+
+
+
G
0
+
−
p
2 q
2
q
1
p
1
e
LinearFractional Models of Uncertainty
227
NormBounded Uncertainty
Norm bounds specify the amount of uncertainty in terms of RMS gain. Recall
that the RMS gain or L
2
gain of a BIBOstable system is defined as the
worstcase ratio of the input and output energies:
where
For additive uncertainty
G = G
0
+ ∆,
the bound on ∆
∞
is an estimate of the worstcase gap G – G
0
between the true
system G and its linear model G
0
(s). For multiplicative uncertainty
G = G
0
(I + ∆),
this bound quantifies the relative gap between G and its model G
0
.
Norm bounds are also useful to quantify parametric uncertainty. For instance,
an uncertain parameter p can be represented as
p = p
0
(1 + δ), δ < δ
max
where p
0
is the nominal value and δ
max
bounds the relative deviation from p
0
.
If p ∈ [ ], then p
0
and δ
max
are given by
When ∆ is LTI (i.e., when the physical system G itself is LTI), the uncertainty
can be quantified more accurately by using frequencydependent bounds of the
form
(22)
∆
∞
sup
∆w
L
2
w
L
2
 =
w L
2
∈
w 0 ≠
w
L
2
2
w
T
t ( )w t ( ) t. d
0
∞
∫
=
p p ,
p
0
p p +
2
 δ
max
p p –
2p
0
 = , =
W
1 –
s ( )∆ s ( )
∞
1 <
2 Uncertain Dynamical Systems
228
where W(s) is some SISO shaping filter. Such bounds specify different amounts
of uncertainty across the frequency range and the uncertainty level at each
frequency is adjusted through theshaping filter shaping filter W(s). For
instance, choosing
results in the frequencyweighted uncertainty bound
fquan2.ps drawn in Figure 29.
Figure 29: Frequencydependent bound on ∆(jw)
These various norm bounds are easily specified with ublock. For instance, a
2by 3 LTI uncertainty block ∆(s) with RMS gain smaller than 10 is declared by
delta = ublock([2 3],10)
while a scalar nonlinearity Φ(
.
) with unit RMS gain is defined by
W s ( )
2s 1 +
s 100 +
 =
∆ jω ( ) β ω ( ) < ( ) :=
4ω
2
1 +
ω
2
10
4
+

10
−1
10
0
10
1
10
2
10
3
10
−2
10
−1
10
0
10
1
Frequency w
B
o
u
n
d
b
e
t
a
(
w
)
LinearFractional Models of Uncertainty
229
phi = ublock(1,1,'nl')
In these commands, the first argument dimensions the block, the second
argument quantifies the uncertainty by a norm bound or sector bounds, and
the optional third argument is a string specifying the nature and structure of
the uncertainty block. The default properties are full, complexvalued, and
linear time invariant (LTI).
Scalar blocks and real parameter uncertainty are denoted by the letters s and
r, respectively. For example, the command
delta = ublock(3,0.5,'sr')
declares a 3 −βψ−3 block ∆ = δ −βψ−I with repeated real parameter δ bounded
by δ ð 0.5. Finally, setting the second argument of ublock to a SISO system
W(s) specifies a frequencydependent norm bound on the LTI uncertainty block
∆(s). Thus,
delta = ublock(2, ltisys('tf',[100 0],[1 100]))
declares an LTI uncertainty ∆(s) ∈ C
2×2
satisfying
at all frequencies ω.
After specifying each block ∆
i
with ublock, call udiag to derive a complete
description of the uncertainty structure
∆ = diag(∆
1
, . . . , ∆
n
).
For instance, the commands
delta1 = ublock([3 1],0.1)
delta2 = ublock(2,2,'s')
delta3 = ublock([1 2],100,'nl')
delta = udiag(delta1,delta2,delta3)
specify the uncertainty structure
∆ = diag(∆
1
, ∆
2
, ∆
3
)
where
•∆
1
(s) ∈ C
3×1
and ∆
1

∞
< 0.1
σ
max
∆ jω ( ) ( ) W jω ( ) < < , W s ( )
100s
s 100 +
 =
2 Uncertain Dynamical Systems
230
•∆
2
= δ by I
2
with δ ∈ C and δ < 2
•∆
3
(
.
) is a nonlinear operator with two inputs and one output and satisfies
∆
3

∞
< 100.
Finally, uinfo displays the characteristics of uncertainty structures declared
with ublock and udiag.
uinfo(delta)
SectorBounded Uncertainty
The behavior of uncertain dynamical components can also be quantified in
terms of sector bounds on their timedomain response [5, 4]. A (possibly
nonlinear) BIBOstable system φ(
.
) is said to be in the sector {a, b} if the
mapping
φ : u ∈ L
2
α y∈L
2
satisfies a quadratic constraint of the form
As a special case, passive systems satisfy
which corresponds to the sector {0, +∞}. Note that sector bounds can always be
converted to norm bounds since φ is in the sector {a, b} if and only if
If φ is passive, a bilinear transformation maps φ to the operator (I – φ)(I + φ)
–1
of norm less than one.
block dims type real/cplx full/scal bounds
1 3x1 LTI c f norm <= 0.1
2 2x2 LTI c s norm <= 2
3 1x2 NL r f norm <= 100
y t ( ) au t ( ) – ( )
T
0
∞
∫
y t ( ) bu t ( ) – ( )dt 0 for all u L
2
∈ ≤
y t ( )
T
0
∞
∫
u t ( )dt 0 ≥
φ
a b +
2
 –
∞
a b –
2
 <
LinearFractional Models of Uncertainty
231
The system φ(
.
) is called memoryless if y(t) only depends on the input value u(t)
at time t, i.e.,
y(t) = φ(u(t)).
For memoryless systems, the sector condition reduces to the “instantaneous”
property
(y(t) – au(t))
T
(y(t) – bu(t)) ð 0 at all time t.
In the scalar case, this means that the response y = φ(u) lies in the sector
delimited by the two lines y = au and y = bu as illustrated below.
A simple example is the nonlinear spring y = k(u) with
The response of this spring lies in the sector {1, 2} as is seen from Figure 210.
This scalar memoryless nonlinearity is specified with ublock by
k = ublock(1,[1 2],'nlm')
y
u
y=au
y=bu
y=φ (υ)
k u ( )
¹
¦
´
¦
¦
=
u 1 – if u 1 – <
2u if u 1 ≤
u 1 + if u 1 >
2 Uncertain Dynamical Systems
232
Robustness in the face of sectorbounded uncertainty is analyzed by the circle
and Popov criteria.
From Affine to LinearFractional Models
Since some uncertain systems are more naturally specified as affine
parameterdependent systems, the function aff2lft is provided to derive a
linearfractional representation of their parametric uncertainty. All uncertain
parameters are assumed real and are repeated in the ∆ structure when
necessary. For instance, the system
(23)
Figure 210: Nonlinear spring y = k(u)
with constant p
1
∈ [2, 10] and timevarying p
2
∈ [–1, 3] is specified by
s0 = ltisys([0 1,0 0])
s1 = ltisys([1 0, 1 1],0) % p1
dx
dt

p
1
1
p –
1
p
1
2p
2
+
\ .


 
x =
1
1
2
k(u)
LinearFractional Models of Uncertainty
233
s2 = ltisys([0 0,0 2],0) % p2
pv = pvec('box',[2 10, 1 3],[0 0,Inf Inf])
pds = psys(pv,[s0 s1 s2])
To derive the linearfractional representation of the uncertainty on p
1
, p
2
, type
[P,delta] = aff2lft(pds)
On output:
•The nominal plant P corresponds to the average values of p
1
and p
2
.
Specifically, closing the interconnection of Figure 28 with ∆ ≡ 0 yields the
system (24) for p
1
= 6 and p
2
= 1, as confirmed by
cls = slft(zeros(3),P)
a = ltiss(cls)
a =
6 1
6 8
•The uncertainty structure delta consists of real scalar blocks as confirmed
by
uinfo(delta)
Note that the parameter p
1
is repeated and that the norm bound corresponds
to the maximal deviation from the average value of the parameter.
block dims type real/cplx full/sc
al
bounds
1 2by2 LTI r s norm <= 4
2 1by1 LTV r s norm <= 2
2 Uncertain Dynamical Systems
234
The corresponding block diagram appears in Figure 211.
Figure 211: Linearfractional representation of the system (24)
u
y
P(s)
δp
1
I × 0
0 δp
2 \ .


 
References
235
References
[1] Doyle, J.C. and G. Stein, “Multivariable Feedback Design: Concepts for a
Classical/Modern Synthesis,” IEEE Trans. Aut. Contr., AC26 (1981), pp. 416.
[2] Doyle, J.C., “Analysis of Feedback Systems with Structured Uncertainties,”
IEE Proc., vol. 129, pt. D (1982), pp. 242–250.
[3] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, Philadelphia, 1994.
[4] Vidyasagar, M., Nonlinear System Analysis, PrenticeHall, Englewood
Cliffs, 1992.
[5] Zames, G., “On the InputOutput Stability of TimeVarying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228238 and 465476.
2 Uncertain Dynamical Systems
236
3
Robustness Analysis
Quadratic Lyapunov Functions . . . . . . . . . . . 33
LMI Formulation . . . . . . . . . . . . . . . . . . . 34
Quadratic Stability . . . . . . . . . . . . . . . . . . 36
Maximizing the Quadratic Stability Region . . . . . . . . 38
Decay Rate . . . . . . . . . . . . . . . . . . . . . 39
Quadratic H∞ Performance . . . . . . . . . . . . . . .310
ParameterDependent Lyapunov Functions . . . . . .312
Stability Analysis . . . . . . . . . . . . . . . . . . .314
µ Analysis . . . . . . . . . . . . . . . . . . . . .317
Structured Singular Value . . . . . . . . . . . . . . .317
Robust Stability Analysis . . . . . . . . . . . . . . .319
Robust Performance . . . . . . . . . . . . . . . . . .321
The Popov Criterion . . . . . . . . . . . . . . . .324
Real Parameter Uncertainty . . . . . . . . . . . . . .325
Example . . . . . . . . . . . . . . . . . . . . . .328
References . . . . . . . . . . . . . . . . . . . . .332
3 Robustness Analysis
32
Control systems are often designed for a simplified model of the physical plant
that does not take into account all sources of uncertainty. Aposteriori
robustness analysis is then necessary to validate the design and obtain
guarantees of stability and performance in the face of plant uncertainty. The
LMI Control Toolbox offers a variety of tools to assess robust stability and
robust performance. These tools cover most available Lyapunovbased and
frequencydomain analysis techniques.
•Quadratic stability/performance
•Tests involving parameterdependent Lyapunov functions
•Mixedµ analysis
•The Popov criterion
Each test is geared to a particular class of uncertainty, for example, real
parameter uncertainty for parameterdependent Lyapunov functions, or
memoryless nonlinearities for the Popov criterion. Hence users should select
the most appropriate tool for their problem. Since all of these tests are based
on sufficient conditions, they are only productive when they succeed in
establishing robust stability or performance.
Quadratic Lyapunov Functions
33
Quadratic Lyapunov Functions
The notions of quadratic stability and quadratic performance are useful to
analyze linear timevarying systems
E(t) (t) = A(t)x(t), x(0) = x
0
.
Given such systems, a sufficient condition for asymptotic stability is the
existence of a positivedefinite quadratic Lyapunov function
V(x) =x
T
Px
such that
along all state trajectories. In terms of Q := P
–1
, this is equivalent to
(31)
at all times t.
Assessing quadratic stability is not tractable in general since (31) places an
infinite number of constraints on Q. However, (31) can be reduced to a finite
set of LMI contraints in the following cases:
1 A(t) and E(t) are fixed affine functions of some timevarying parameters
p
1
(t), . . . , p
n
(t).
A(t) = A
0
+ p
1
(t)A
1
+ . . . + p
n
(t)A
n
E(t) = E
0
+ p
1
(t)E
1
+ . . . + p
n
(t)E
n
.
This is referred to as an affine parameterdependent model.
2 A(t) + jE(t) ranges in a fixed polytope of matrices, that is,
A(t) = α
1
(t)A
1
+ . . . + α
n
(t)A
n
E(t) = α
1
(t)E
1
+ . . . + α
n
(t)E
n
with α
i
(t) Š 0 and . This is referred to as a polytopic model.
The first case corresponds to systems whose statespace equations depend
affinely on timevarying physical parameters, and the second case to
x
·
dV x t ( ) ( )
dt
 0 <
A t ( )QE t ( )
T
E t ( )QA t ( )
T
+ 0 <
α
i
t ( )
i 1 =
n
∑
1 =
3 Robustness Analysis
34
timevarying systems modeled by an envelope of LTI systems (see “Uncertain
StateSpace Models” on page 214 for details). Note that quadratic Lyapunov
functions guarantee stability for arbitrarily fast time variations, which is their
main source of conservatism.
Quadratic Lyapunov functions are also useful to assess robust RMS
performance. Specifically, a timevarying system
(32)
with zero initial state satisfies
y
L
2
< γu
L
2
for all bounded input u(t) if there exists a positivedefinite Lyapunov function
V(x) =x
T
Px such that
In terms of Q := P
–1
, this is equivalent to the family of “Bounded Real Lemma”
inequalities
The smallest γ for which such a Lyapunov function exists is called the quadratic
H
∞
performance. This is an upper bound on the worstcase RMS gain of the
system (32).
LMI Formulation
Sufficient LMI conditions for quadratic stability are as follows [8, 1, 2]:
Affine models: consider the parameterdependent model
¹
´
¦E t ( )x
·
A t ( )x B t ( )u + =
y C t ( )x D t ( )u + =
dV x ( )
dt
 y
T
y γ
2
u
T
u 0 < – +
A t ( )QE t ( )
T
E t ( )QA t ( )
T
+ B t ( ) E t ( )QC t ( )
T
B t ( )
T
γI – D t ( )
T
C t ( )QE t ( )
T
D t ( ) γI – \ .




 
0 <
Quadratic Lyapunov Functions
35
(33)
where p
i
(t) ∈ [ ], and let
denote the set of corners of the corresponding parameter box. The dynamical
system (33) is quadratically stable if there exist symmetric matrices Q and
such that
(34)
(35)
(36)
(37)
Polytopic models: the polytopic system
E(t) = A(t)x, A(t) + jE(t) ∈ Co{A
1
+ jE
1
, . . . , A
n
+ jE
n
},
is quadratically stable if there exist a symmetric matrix Q and scalars t
ij
= t
ji
such that
(38)
(39)
E p ( )x
·
A p ( )x = , A p ( ) A
0
p
1
A
1
… p
n
A
n
+ + + = ,
E t ( ) E
0
p
1
E
1
… p
n
E
n
+ + + =
p
i
p
i
,
V ω
1
… ω
n
, , ( ) : ω
i
p
i
p
i
, { } ∈ { } =
M
i
{ }
i 1 =
n
A ω ( )QE ω ( )
T
E ω ( )QA ω ( )
T
ω
i
2
i
∑
M
i
0 for all ω V ∈ < + +
A
i
PE
i
T
E
i
PA
i
T
M
i
0 for i 1,…,n = ≥ + +
M
i
0 ≥
Q I >
x
·
A
i
QE
j
T
E
j
QA
i
T
A
j
QE
i
T
E
i
QA
j
T
2t
ij
I for i j , 1,…,n { } ∈ < + + +
Q I >
3 Robustness Analysis
36
(310)
Note that these LMI conditions are in fact necessary and sufficient for
quadratic stability whenever
•In the affine case, no parameter p
i
enters both A(t) and E(t), that is, A
i
= 0 or
E
i
= 0 for all i. The conditions (35)–(36) can then be deleted and it suffices
to check (31) at the corners ω of the parameter box.
•In the polytopic case, either A(t) or E(t) is constant. It then suffices to solve
(38)–(39) for t
ij
= 0.
Similar sufficient conditions are available for robust RMS performance. For
polytopic systems, for instance, LMI conditions guaranteeing the RMS
performance γ are as follows.
A symmetric matrix Q and scalars t
ij
= t
ji
for i,j ∈ {1, . . . , n} exist such that
Quadratic Stability
The function quadstab tests the quadratic stability of polytopic or affine
parameterdependent systems. The corresponding LMI feasibility problem is
solved with feasp.
t
11
… t
1n
t
1n
… t
nn
0 <
.
.
.
… …
A
i
QE
j
T
E
j
QA
i
T
A
j
QE
i
T
E
j
QA
i
+ + + H H
B
i
T
B
j
T
+ 2γI – H
C
i
QE
i
T
C
i
QE
j
T
+ D
i
D
j
+ 2γI –
\ .





 
2t
ij
1 × <
t
11
… t
1n
t
1n
… t
nn
0 <
Q 0 >
… …
Quadratic Lyapunov Functions
37
Example 3.1. Consider the timevarying system
= A(t)x
where A(t) ∈ Co{A
1
, A
2
, A
3
} with
This polytopic system is specified by
s1 = ltisys(a1)
s2 = ltisys(a2)
s3 = ltisys(a3)
polsys = psys([s1 s2 s3])
To test its quadratic stability, type
[tmin,P] = quadstab(polsys)
Solver for LMI feasibility problems L(x) < R(x)
This solver minimizes t subject to L(x) < R(x) + t*I
The best value of t should be negative for feasibility
Iteration : Best value of t so far
1 0.040521
2 0.032876
Result: best value of t: 0.032876
fradius saturation: 0.000% of R = 1.00e+08
This system is quadratically stable
tmin =
3.2876e 02
Collectively denoting the LMI system (38)–(310) by A(x) < 0, quadstab
assesses its feasibility by minimizing τ subject to A(x) < τI and returns the
global minimum tmin of this problem. Hence the system is quadratically stable
if and only if tmin < 0. In this case, the second output P is the Lyapunov matrix
proving stability.
x
·
A
1
0 1
2 – 0.2 – \ .

 
= , A
2
0 1
2.2 – 0.3 – \ .

 
= , A
3
0 1
1.9 – 0.1 – \ .

 
=
3 Robustness Analysis
38
Maximizing the Quadratic Stability Region
This option of quadstab is only available for affine parameterdependent
systems where the E matrix is constant and each parameter p
i
(t) ranges in an
interval
p
i
(t) ∈ [ ]
Denoting the center and radius of each interval by and
, the maximal quadratic stability region is defined as the
largest portion of the parameter box where quadratic stability can be
established. In other words, the largest dilatation factor such that the system
is quadratically stable whenever
p
i
(t) ∈ [µi – θδ
i
, i + θδ
i
]
Computing this largest factor is a generalized eigenvalue minimization
problem (see gevp). This optimization is performed by quadstab when the first
entry of the options vector is set to 1.
Example 3.2. Consider the secondorder system
= A(k, f)x
where
This system is entered by
s0 = ltisys([0 1;0 0])
s1 = ltisys([0 0; 1 0],0) % k
s2 = ltisys([0 0;0 1],0) % f
pv = pvec('box',[10 12;0.5 0.7]) % parameter box
affsys = psys(pv,[s0 s1 s2])
To compute the largest dilatation of the parameter box where quadratic
stability holds, type
[marg,P] = quadstab(affsys,[1 0 0])
The result is
p
i
p
i
,
u
i
1
2

p ( =
i
p
i
) +
δ
i
1
2

p ( =
i
p
i
– )
x
·
A k f , ( )
0 1
k – f – \ .

 
k t ( ) 10 12 , [ ] f t ( ) 0.5 0.7 , [ ] ∈ , ∈ , =
Quadratic Lyapunov Functions
39
marg =
1.4908e+00
which means 149% of the specified box, that is,
k(t) ∈ [9.51, 12.49], f(t) ∈ [0.451, 0.749]
Note that marg Š 1 implies quadratic stability in the prescribed parameter box.
Decay Rate
For the timevarying system
E(t) = A(t)x,
the quadratic decay rate α* is the smallest α such that
A(t)QE(t)
T
+ E(t)QA(t)
T
< αE(t)QE(t)
T
holds at all times for some fixed Q > 0. The system is quadratically stable if and
only if α* < 0, in which case α* is an upper bound on the rate of return to
equilibrium. Specifically,
x
T
(t)Q
–1
x(t) < e
α*t
(x(0)
T
Q
–1
x(0)).
Note that is also the largest β such that the shifted system
E(t) = (A(t) + βE(t))x
is quadratically stable.
For affine or polytopic models, the decay rate computation is attacked as a
generalized eigenvalue minimization problem (see gevp). This task is
performed by the function decay.
Example 3.3. For the timevarying system considered in Example 3.1, the decay
rate is computed by
[drate,P] = decay(polsys)
This command returns
drate =
5.6016e 02
x
·
α*
2
 –
x
·
3 Robustness Analysis
310
P =
6.0707e 01 1.9400e 02
1.9400e 02 3.1098e 01
Quadratic H
∞
Performance
The function quadperf computes the quadratic H
∞
performance of an affine
parameterdependent system
E(p) = A(p)x + B(p)u
y = C(p)x + D(p)u
or of a polytopic system
E(t) = A(t)x + B(t)u
y = C(t)x + D(t)u
where
Recall that the quadratic H
∞
performance γ
qp
is an upper bound on the
worstcase input/output RMS gain of the system. Specifically
y
L
2
< γ
qp
u
L
2
holds for all bounded input u when the initial state is zero. The function
quadperf can also be used to test whether the worstcase RMS gain does not
exceed a given value γ > 0, or to maximize the portion of the parameter box
where this level γ is not exceeded.
Example 3.4. Consider the secondorder system with timevarying mass and
damping
This affine parameterdependent model is specified as
x
·
x
·
S t ( )
A t ( ) jE t ( ) + B t ( )
C t ( ) D t ( ) \ .

 
Co S
1
,…,S
n
{ } ∈ =
1 0
0 m t ( ) \ .

 
x
·
x
··
\ .

 
0 1
2 – f t ( ) – \ .

 
x
x
·
\ .

 
=
0
1 \ .

 
u +
y x m t ( ) 0.8 1.2 , [ ] f t ( ) 0.4 0.6 , [ ] ∈ , ∈ , =
Quadratic Lyapunov Functions
311
s0 = ltisys([0 1; 2 0],[0;1],[1 0],0,[1 0;0 0])
s1 = ltisys(zeros(2),[0;0],[0 0],0,[0 0;0 1]) % m
s2 = ltisys([0 0;0 1],[0;0],[0 0],0,0) % f
pv = pvec('box',[0.8 1.2 ; 0.4 0.6])
affsys = psys(pv,[s0 s1 s2])
and its quadratic H
∞
performance from u to y is computed by
qperf = quadperf(affsys)
qperf =
6.083e+00
This value is larger than the nominal LTI performance for m = 1 and f = 0.5 as
confirmed by
nomsys = psinfo(affsys,'eval',[1 0.5])
norminf(nomsys)
1.4311e+00
3 Robustness Analysis
312
ParameterDependent Lyapunov Functions
The robustness test discussed next is applicable to affine parameterdependent
systems or timeinvariant uncertain systems described by a polytope of models.
To prove the robust stability of such systems, we seek a quadratic Lyapunov
function that depends on the uncertain parameters or the polytopic coordinates
in the case of a polytopic model. The resulting tests are less conservative than
quadratic stability when the parameters are constant or slowly varying [6].
Moreover, available bounds on the rates of parameter variation can be
explicitly taken into account.
Affine models: for an affine parameterdependent system
E(p) = A(p)x
with parameter vector p = (p
1
, . . . , p
n
) ∈ R
n
, we seek positive definite Lyapunov
functions of the form
V(x, p) = x
T
Q(p)
–1
x
where
Q(p) = Q
0
+ p
1
Q
1
+ . . . + p
n
Q
n
For such Lyapunov functions, the stability condition is
equivalent to
(311)
Given interval bounds
on each p
i
and its time derivative , the vectors p and range in
ndimensional “boxes.” If V and T list the corners of these boxes, (311) holds
for all parameter trajectories if the following LMI problem is feasible [6].
Find symmetric matrices Q
0
, Q
1
, . . . , Q
n
, and such that
x
·
dV x p , ( )
dt
 0 <
E p ( )Q p ( )A
T
p ( ) A p ( )Q p ( )E p ( )
T
E p ( )
dQ
dt
E p ( )
T
0. < – +
p
i
p
i
p
i
, [ ] ν
i
ν
i
, [ ], ∈ , ∈
dp
i
dt

dp
dt

M
i
{ }
i 1 =
n
ParameterDependent Lyapunov Functions
313
•For all (ω, τ) ∈ V × T
•For ω ∈ V and i = 1, . . . , n
•Q(ω) > I for all ω ∈ V
•M
i
Š 0
Note that these conditions reduce to quadratic stability when the rates of
variation are allowed to range in (–∞, +∞). In such cases indeed,
Q
1
, . . . , Q
n
must be set to zero for feasibility.
Polytopic models: A similar extension of the quadratic stability test is
available for timeinvariant polytopic systems
E = Ax
where one of the matrices A, E is constant and the other uncertain. Assuming
that E is constant and A ranges in the polytope
A ∈ {α
1
A
1
+ . . .+ α
n
A
n
: α
i
Š 0, α
1
+ . . .+ α
n
= 1},
we seek a Lyapunov function of the form V(x, α) = x
T
Q(α)
–1
x where
Q(α) = α
1
Q
1
+ . . . + α
n
Q
n
Using such Lyapunov functions, sufficient conditions for stability over the
entire polytope are as follows.
E ω ( )Q ω ( )A ω ( )
T
A ω ( )Q ω ( )E ω ( )
T
E ω ( ) Q τ ( ) Q
0
– ( )E ω ( )
T
– ω
i
2
M
i
0 <
i
∑
+ +
A
i
T
Q
i
E ω ( ) E ω ( )
T
Q
i
A
i
A
i
T
Q ω ( )E
i
E
i
T
Q ω ( )A
i
+ + + +
A ω ( )
T
Q
i
E
i
E
i
T
Q
i
A ω ( ) E
i
T
Q τ ( ) Q
0
– ( ) – M
i
+ + 0 ≥
dp
i
dt

x
·
3 Robustness Analysis
314
There exist symmetric matrices Q
1
, . . . , Q
n
, and scalars t
ij
= t
ji
such that
Stability Analysis
Given an affine or polytopic system, the function pdlstab seeks a parameter
dependent Lyapunov function establishing robust stability over a given
parameter range or polytope of models. The syntax is similar to that of
quadstab. For an affine parameterdependent system with two uncertain
parameters p
1
and p
2
, for instance, pdlstab is invoked by
[tmin,Q0,Q1,Q2] = pdlstab(ps)
Here ps is the system description and includes available information on the
range of values and rates of variation of each parameter (see psys and pvec for
details). If no rates of variation are specified, the parameters are regarded as
timeinvariant. The function pdlstab determines whether the sufficient LMI
conditions listed above are feasible. Feasibility is established when tmin < 0. In
such cases, a parameterdependent Lyapunov function proving stability is V(x,
p) =x
T
Q(p)
–1
x with
Q(p) = Q
0
+ p
1
Q
1
+ p
2
Q
2
Remark In the case of timeinvariant uncertain parameters, pdlstab can be
combined with a branchandbound scheme to reduce conservatism and
enlarge the computed stability region. Specifically, if pdlstab fails to prove
stability over the entire parameter range, you can split the range into smaller
regions and try to establish stability on each subregion independently.
Example 3.5. Consider the secondorder system
A
i
Q
j
E
T
EQ
j
A
i
T
A
j
Q
i
E
T
EQ
i
A
j
T
2t
ij
< + + +
Q
j
I >
t
11
… t
1n
t
1n
… t
nn
0 <
… …
.
.
.
x
·
x
··
\ .

 
0 1
k t ( ) – f t ( ) – \ .

 
x
x
·
\ .

 
=
ParameterDependent Lyapunov Functions
315
where the stiffness k(t) and damping f(t) range in
k(t) ∈ [5, 10], f(t) ∈ [0.01, 0.1]
and their rates of variation are bounded by
(312)
This system and parameter data are specified by
s0 = ltisys([0 1;0 0])
s1 = ltisys([0 0; 1 0],0)
s2 = ltisys([0 0;0 1],0)
pv = pvec('box',[5 10 ; 0.01 0.1],[ 0.01 0.01 ; 1 1])
ps = psys(pv,[s0 s1 s2])
The second argument of pvec defines the range of parameter values while the
third argument specifies the bounds on their rate of variation.
This system is not quadratically stable over the specified parameter box as
confirmed by
tmin = quadstab(ps)
tmin =
8.0118e 04
Nevertheless, it turns out to be robustly stable when the parameter variations
do not exceed the maximum rates (312), as shown by
[tmin,Q0,Q1,Q2] = pdlstab(ps)
Solver for LMI feasibility problems L(x) < R(x)
This solver minimizes t subject to L(x) < R(x) + t*I
The best value of t should be negative for feasibility
Iteration : Best value of t so far
1 0.084914
2 0.011826
3 0.011826
4 1.878414e 03
dk
dt
 0.01
df
dt
 1 < , <
3 Robustness Analysis
316
5 1.878414e 03
6 4.846478e 04
7 1.528537e 04
8 7.832830e 04
Result: best value of t: 7.832830e 04
fradius saturation: 0.003% of R = 1.00e+07
This system is stable for the specified parameter trajectories
This makes pdlstab a useful tool to analyze systems with constant or
slowlyvarying parameters.
µ Analysis
317
µ Analysis
µ analysis investigates the robust stability or performance of systems with
linear timeinvariant linearfractional uncertainty. It is also applicable to
timeinvariant parameterdependent systems by first deriving an equivalent
linearfractional representation with aff2lft (see “From Affine to
LinearFractional Models” on page 232 for details). Nonlinear and/or
timevarying uncertainty is addressed by the Popov criterion discussed in the
next section.
Structured Singular Value
µ analysis is based on the concept of structured singular value (SSV or µ) [3,
12, 13, 10]. Consider the algebraic loop of Figure 31 where M ∈ C
n×m
and ∆ =
diag(∆
1
, . . ., ∆
n
) is a structured perturbation characterized by
•The dimensions of each block ∆
i
•Whether ∆
i
is a complex or realvalued matrix
•Whether ∆
i
is a full matrix or a scalar matrix of the form ∆
i
= δ
i
× I
We denote by ∆ the set of perturbations with this particular structure. The
algebraic loop of Figure 31 is well posed if and only if I – M∆ is invertible.
Measuring the smallest amount of perturbation ∆ ∈ ∆ needed to destroy
wellposedness is the purpose of the SSV.
Figure 31: Static µ problem
∆
M
q w
3 Robustness Analysis
318
Formally, the SSV of M with respect to the perturbation structure ∆ is defined
as [10]
with the convention µ
∆
(M) = 0 if I – M∆ is invertible for all ∆ ∈ ∆. From this
definition, I – M∆ remains invertible as long as ∆ ∈ ∆ satisfies
σ
max
(∆) < 1/µ
∆
(M)
That is, as long as the size of ∆ does not exceed K
∆
:= 1/µ
∆
(M). The critical size
K is called thewellposedness margin wellposedness margin. Note that µ
∆
(M)
= σ
max
(M) when ∆ = C
m×n
(unstructured case). Thus µ
∆
(M) extends the notion
of largest singular value to the case of structured perturbations.
Computing µ
∆
(M) is an NPhard problem in general. However, a conservative
estimate of the margin K
∆
(upper bound µ) can be computed by solving an LMI
problem [4]. Assuming for simplicity that the blocks ∆
i
are square, the variables
in this LMI problem are two scaling matrices
D = diag(D
1
, . . . , D
n
), G = diag(G
1
, . . . , G
n
)
with the same blockdiagonal structure as ∆ and where
•D
i
= d
i
× I with d
i
> 0 if ∆
i
is a full block, and otherwise.
•G
i
= 0 if ∆
i
is complexvalued, G
i
= g
i
× I with g
i
∈ R if ∆
i
is a realvalued full
block, and if ∆
i
= δ
i
× I with δ
i
real.
Denoting by D and G the sets of D, G scalings with such properties, an upper
bound on µ
∆
(M) is given by
where α* is the global minimum of the following generalized eigenvalue
minimization problem [4]:
Minimize α over D ∈ D and G ∈ G subject to
M
H
DM + j(GM – M
H
G) < αD, D > I
The µ upper bound ν
∆
(M) and the optimal D, G scalings are computed by the
function mubnd. Its syntax is
[mu,D,G] = mubnd(M,delta)
µ M ( ) :=
1
min σ
max
∆ ( ) : ∆ ∈ ∆ and I M – ∆ is singular { }

D
i
D
i
H
= 0 >
G
i
G
i
H
=
ν
∆
max 0 α* , ( ) =
µ Analysis
319
where delta specifies the perturbation structure ∆ (use ublock and udiag to
define delta, see “NormBounded Uncertainty” on page 227 and
“SectorBounded Uncertainty” for details). When each block ∆
i
is bounded by 1,
the output mu is equal to ν
∆
(M). If different bounds β
i
are used for each ∆
i
, the
interpretation of mu is as follows:
The interconnection of Figure 31 is well posed for all ∆ ∈ ∆ satisfying
For instance, 1/mu = 0.9 means that wellposedness is guaranteed for
perturbation sizes that do not exceed 90% of the prescribed bounds.
Robust Stability Analysis
The structured singular value µ is useful to assess the robust stability of
timeinvariant linearfractional interconnections (see Figure 32). Here P(s) is
a given LTI system and ∆(s) = diag(∆
1
(s), . . . , ∆
n
(s)) is a norm or sector
bounded linear timeinvariant uncertainty with some prescribed structure.
Figure 32: Robust stability analysis
This interconnection is stable for all structured ∆(s) satisfying ∆
∞
< 1 if and
only if [3, 12, 10, 16]
The reciprocal K
m
= 1/µ
m
represents the robust stability margin, that is, the
largest amount of uncertainty ∆ that can be tolerated without losing stability.
σ
max
∆
i
( )
β
i
mu
 <
∆(s)
P(s)
w
q
u
y
µ
m
:= sup µ
∆
P jω ( ) ( ) 1. <
ω
3 Robustness Analysis
320
As mentioned earlier, we can only compute a conservative estimate of K
m
based on the upper bound ν
∆
of µ
∆
. We refer to as the guaranteed stability
margin. A gridbased estimation of is performed by mustab:
[margin,peakf] = mustab(P,delta,freqs)
On input, delta describes the uncertainty ∆ (see ublock and udiag for details)
and freqs is an optional vector of frequencies where to evaluate ν
∆
(P(jω)). On
output, margin is the computed value of and peakf is the frequency near
which ν
∆
(P(jω) peaks, i.e., where the margin is the smallest.
The uncertainty quantification may involve different bounds for each block ∆
i
.
In this case, margin represents the relative stability margin as a percentage of
the specified bounds. More precisely, margin = θ means that stability is guaran
teed as long as each block ∆
i
satisfies
•For normbounded blocks
σ
max
(∆
i
(jω)) < θ β
i
(ω)
where β
i
(ω) is the (possibly frequencydependent) norm bound specified with
ublock
•For sectorbounded blocks
where a < b are the sector bounds specified with ublock. In the special case
b = +∞, this condition reads
for θ ð 1 and
for θ > 1
For instance, margin = 0.7 for a singleblock uncertainty ∆(s) bounded by 3
means that stability is guaranteed for ∆
∞
< 0.7by3 = 2.1.
The syntax
K
ˆ
m
K
ˆ
m
K
ˆ
m
K
ˆ
m
∆
i
is in the sector
a b +
2
 θ
b a –
2
 – ,
a b +
2
 θ
b a –
2
 +
¹ )
´ `
¦ ¹
∆
i
is in the sector a
1 0 –
1 0 +
 + ,a
1 0 +
1 0 –
 +
¹ )
´ `
¦ ¹
∆
i
is outside the sector a
1 0 –
1 0 +
 + ,a
1 0 +
1 0 –
 +
¹ )
´ `
¦ ¹
outside
µ Analysis
321
[margin,peakf,fs,ds,gs] = mustab(P,delta)
also returns the D, G scaling matrices at the tested frequencies fs. To retrieve
the values of D and G at the frequency fs(i), type
Di = getdg(ds,i), Gi = getdg(gs,i)
Note that Di and Gi are not necessarily the optimal scalings at fs(i). Indeed,
the LMI optimization is often stopped before completion to speed up compu
tations.
Caution: is computed by frequency sweep, i.e., by evaluating ν
∆
(P(jω))
over a finite grid of frequencies. The coarser the gridding, the higher the risk
of underestimating the peak value of ν
∆
(P(jω)). Worse, this function may be
discontinuous at its peaks when the uncertainty contains real blocks. In such
cases, any “blind” frequency sweep is likely to miss the peak value and yield
an overoptimistic stability margin. The function mustab has builtin tests to
detect such discontinuities and minimize the risks of missing the critical
peaks.
Robust Performance
In robust control, it is customary to formulate the design specifications as
abstract disturbance rejection objectives. The performance of a control system
is then measured in terms of the closedloop RMS gain from disturbances to
outputs (see “H• Control” on page 53 for details). The presence of uncertainty
typically deteriorates performance. For an LTI system with linearfractional
uncer tainty (Figure 33), the robust performance γ
rob
is defined as the
worstcase RMS gain from u to y in the face of the uncertainty ∆(s).
K
ˆ
m
3 Robustness Analysis
322
This robust performance is generally worse (larger) than the nominal
performance, i.e., the RMS gain from u to y for (s) ≡ 0.
Figure 33: Robust performance problem
Assessing whether the performance γ can be robustly sustained, that is,
whether γ
rob
< γ, is equivalent to a robust stability problem. Indeed,
y
L
2
< γu
L
2
holds for all ∆(s) in the prescribed uncertainty set if and only if
the interconnection of Figure 34 is stable for all ∆(s) and for ∆
perf
(s) satisfying
∆
perf
(s)
∞
< γ
–1
Given a norm or sectorbounded linear timeinvariant uncertainty ∆(s), the
function muperf assesses the robust performance γ
rob
via this equivalent robust
stability problem.
Figure 34: Equivalent robust stability problem
Two problems can be solved with muperf:
•Compute the robust performance γ
rob
for the specified uncertainty ∆(s). This
is done by
∆(s)
P(s)
u y
P(s)
∆
perf
s ( ) 0
0 ∆ s ( )
\ .

 
µ Analysis
323
grob = muperf(P,delta)
The value grob is finite if and only if the interconnection of Figure 33 is
robustly stable.
•Assess the robustness of a given performance γ = g > 0 in the face of the
uncertainty ∆(s). The command
margin = muperf(P,delta,g)
computes a fraction margin of the specified uncertainty bounds for which the
RMS gain from u to y is guaranteed not to exceed γ.
Note that g should be larger than the nominal RMS gain for ∆ = 0. The
performance g is robust if and only if margin Š 1.
The function muperf uses the following characterization of γ
rob
. Assuming ∆
∞
< 1 and denoting by D and G the related scaling sets,
where γ
rob
(ω) is the global minimum of the LMI problem
Minimize γ over D ∈ D, G ∈ G such that D > 0 and
γ
rob
= max γ
rob
ω ( )
ω
P jω ( )
H
D 0
0 I \ .

 
P jω ( ) j
G 0
0 0 \ .

 
P jω ( ) P jω ( )
H
G 0
0 0 \ .

 
–
¹ )
´ `
¦ ¹ D 0
0 γ
2
I
\ .

 
< +
3 Robustness Analysis
324
The Popov Criterion
The Popov criterion gives a sufficient condition for the robust stability of the
interconnection of Figure 35 where G(s) is a given LTI system and φ = diag (φ
1
,
. . . , φ
n
) is a sectorbounded BIBOstable uncertainty satisfying φ(0) = 0. The
operator φ can be nonlinear and/or timevarying which makes the Popov
criterion more general than the µ stability test. For purely linear time
invariant uncertainty, however, the µ test is generally less conservative.
Figure 35: Popov criterion
A statespace representation of this interconnection is
or equivalently in blockpartitioned form:
Suppose that φ
j
(
.
) satisfies the sector bound
(313)
(this includes norm bounds as the special case β
j
= –α
j
). To establish robust
stability, the Popov criterion seeks a Lyapunov function of the form
G(s)
w
q
u
y
φ
·
( )
¹
¦
´
¦
¦
x
·
Ax Bw + =
q Cx Dw + =
w φ q ( ) =
¹
¦
´
¦
¦
x
·
Ax
Σj
B
j
w
j
+ =
q
j
C
j
x D
j
w ∈ R
r
j
( ) + =
w φ q
j
( ) =
T ∀ 0 w
j
α
j
q
j
– ( )
T
0
T
∫
w
j
β
j
q
j
– ( )dt 0 ≤ , >
The Popov Criterion
325
(314)
where σ
j
> 0 and ν
j
are scalars and ν
j
= 0 unless D
j
= 0 and φ
j
(
.
) is a scalar
memoryless nonlinearity (i.e., w
j
(t) depends only on q
j
(t); in such case, the
sector constraint (313) reduces to .
Letting
Kα := diag(α
j
I
r
j
), K
β
:= diag(β
j
I
r
j
)
S := diag(σ
j
I
r
j
), N := diag(ν
j
I
r
j
)
and assuming nominal stability, the conditions V(x, t) > 0 and dV/dt < 0 are
equivalent to the following LMI feasibility problem (with the notation
H(M) := M + M
T
):
Find P = P
T
and structured matrices S, N such that S > 0 and
(315)
When φ(
.
) is nonlinear timevarying, (315) reduces to the scaled smallgain
criterion. The Popov stability test is performed by the command
[t,P,S,N] = popov(G,phi)
where G is the SYSTEM matrix representation of G(s) and phi is the uncertainty
description specified with ublock and udiag. The function popov returns the
output t of the LMI solver feasp. Hence the interconnection is robust stable if
t < 0. In this case, P, S, and N are solutions of the LMI problem (315).
Real Parameter Uncertainty
A sharper version of the Popov criterion can be used to analyze systems with
uncertain real parameters. Assume that D
j
= 0 and
w
j
= φ
j
(q
j
) := δ
j
q
j
where δ
j
is an uncertain real parameter ranging in [α
j
, β
j
]. Then a more general
form for the corresponding term in the Lyapunov function (314) is
V x t , ( )
1
2
x
T
Px ν
j
j
∑
φ
j
z ( ) z d
0
q
j
∫
σ
j
j
∑
w
j
α
j
q
j
– ( )
T
w
j
β
j
q
j
– ( ) τ d
0
t
∫
– + =
α
j
q
j
2
q
j
φ
j
q
j
( ) β
j
q
j
2
) ≤ ≤
H
I
0 \ .

 
P A B , ( )
0
I \ .

 
N CA CB , ( )
C
T
K
α
D
T
K
α
1 – \ .


 
– +
¹
¦
´
¦
¦
S
C
T
K
β
D
T
K
β
1 – \ .


 
T
)
¦
`
¦
¹
0 <
3 Robustness Analysis
326
(316)
where S
j
and N
j
are r
j
× r
j
matrices subject to , and N
j
= 0
when the parameter δ
j
is time varying. As a result, the variables S and N in the
Popov condition (315) assume a blockdiagonal structure where σ
j
I
rj
and ν
j
I
rj
are replaced by the full blocks S
j
and N
j
, respectively.
When all φ
j
(
.
) are real timeinvariant parameters, the Lyapunov function (314)
becomes
The first term can be interpreted as a parameterdependent quadratic function
of the state. This is analogous to the notion of parameterdependent Lyapunov
function discussed in “ParameterDependent Lyapunov Functions” on
page 312, even though the Popov condition (315) is not equivalent to the
robust stability conditions discussed in that section..
Finally, it is possible to obtain an even sharper test by applying the Popov
criterion to the equivalent interconnection
(317)
The Lyapunov function now becomes
with S
j
, N
j
subject to . The resulting test is discussed
in [5, 6] and often performs as well as the real stability test while eliminating
the hazards of a frequency sweep (see “Caution” on page 321).
To use this refined test, invoke popov as follows.
φ z ( )
T
N
j
z d
0
q
j
∫
w
j
α
j
q
j
– ( )
T
S
j
w
j
β
j
q
j
– ( ) τ d
0
t
∫
= –
δ
j
x
T
C
T
N
j
Cx δ
j
α
j
– ( ) δ
j
β
j
– ( ) q
j
T
0
t
∫
S
j
S
j
T
+ ( )q
j
τ d –
S
j
S
j
T
O N , N
j
T
= > +
V x t , ( )
1
2
x
T
P δ
j
j
∑
C
T
N
j
C +
\ .

 
x δ
j
α
j
– ( ) δ
j
β
j
– ( )
j
∑
– q
j
T
0
t
∫
S
j
S
j
T
+ ( )q
j
τ d =
¹
¦
´
¦
¦
x
·
= Ax
Σj
B
j
C
j
w
˜
j
+ ( )
q
˜
j
= x
w
˜
j
δ
j
q
˜
j
=
V x t , ( )
1
2
x
T
P δ
j
j
∑
N
j
+
\ .

 
x δ
j
α
j
– ( ) δ
j
β
j
– ( )
j
∑
– q
j
T
0
t
∫
S
j
S
j
T
+ ( )q
j
τ d =
S
j
S
j
T
O and N
j
N
j
T
= > +
The Popov Criterion
327
[t,P,S,N] = popov(G,phi,1)
The third input 1 signals popov to perform the loop transformation (317)
before solving the feasibility problem (315).
3 Robustness Analysis
328
Example
The use of these various robustness tests is illustrated on the benchmark
problem proposed in [14]. The physical system is sketched in Figure 36.
Figure 36: Benchmark problem
The goal is to design an outputfeedback law u = K(s)x
2
that adequately rejects
the disturbances w
1
, w
2
and guarantees stability for values of the stiffness
parameter k ranging in [0.5, 2.0]. For m
1
= m
2
= 1, a statespace description of
this system is
(318)
A solution of this problem proposed in [15] is the fourthorder controller
u = C
K
(sI – A
K
)
–1
B
K
x
2
where
u
w
1
w
2
k
x
1
x
2
m
1
m
2
x
·
1
x
·
2
x
··
1
x
··
2
0 0 1 0
0 0 0 1
k – k 0 0
k k – 0 0
x
1
x
2
x
·
1
x
·
2
0
0
1
0
u w
1
+ ( )
0
0
0
1
w
2
+ + =
A
K
0 0.7195 – 1 0
0 2.9732 – 0 1
2.5133 – 4.8548 1.7287 – 0.9616 –
1.0063 5.4097 – 0.0081 – 0.0304
B
K
0.720
2.973
3.37 –
4.419
= , =
C
K
1.506 – 0.494 1.738 – 0.932 –
=
Example
329
The concern here is to assess, for this particular controller, the closedloop
stability margin with respect to the uncertain real parameter k. Since the plant
equations (318) depend affinely on k, we can define the uncertain physical
system as an affine parameterdependent system G with psys.
A0=[0 0 1 0;0 0 0 1;0 0 0 0;0 0 0 0];
B0=[0;0;1;0]; C0=[0 1 0 0]; D0=0;
S0=ltisys(A0,B0,C0,D0); % system for k=0
A1=[0 0 0 0;0 0 0 0; 1 1 0 0;1 1 0 0];
B1=zeros(4,1); C1=zeros(1,4); D1=0;
S1=ltisys(A1,B1,C1,D1,0); % k component
pv=pvec('box',[0.5 2]) % range of values of k
G=psys(pv,[S0,S1])
After entering the controller data as a SYSTEM matrix K, close the loop with
slft.
Cl = slft(G,K)
The result Cl is a parameterdependent description of the closedloop system.
To estimate the robust stability margin with respect to k, we can now apply to
Cl the various tests described in this chapter:
•Quadratic stability: to determine the portion of the interval [0.5, 2] where
quadratic stability holds, type
marg = quadstab(Cl,[1 0 0])
marg =
4.1919e 01
The value marg = 0.419 means 41% of [0.5, 2] (with respect to the center 1.25).
That is, quadratic stability in the interval [0.943, 1.557]. Since quadratic
stability assumes arbitrarily fast time variations of the parameter k, we can
expect this answer to be conservative when k is time invariant.
•Parameterdependent Lyapunov functions: when k does not vary in
time, a less conservative estimate is provided by pdlstab. To test stability for
k ∈ [0.5, 2], type
3 Robustness Analysis
330
t = pdlstab(Cl)
t =
2.1721e 01
Since t < 0, the closed loop is robustly stable for this range of values of k.
Assume now that k slowly varies with a rate of variation bounded by 0.1. To
test if the closed loop remains stable in the face of such slow variations,
redefine the parameter vector and update the description of the closedloop
system by
pv1 = pvec('box',[0.5 2],[ 0.1 0.1])
G1 = psys(pv1,[S0,S1])
Cl1 = slft(G1,K)
Then call pdlstab as earlier.
t = pdlstab(Cl1)
t =
2.0089e 02
Since t is again negative, this level of time variations does not destroy robust
stability.
• µ analysis: to perform µ analysis, first convert the affine
parameterdependent model Cl to an equivalent linearfractional
uncertainty model.
[P0,deltak] = aff2lft(Cl)
uinfo(deltak)
Here P0 is the closed loop system for the nominal value k
0
= 1.25 of k and the
uncertainty on k is represented as a real scalar block deltak. Note that k
must be assumed time invariant in the µ framework.
To get the relative parameter margin, type
block dims type real/cplx full/scal bounds
1 1x1 LTI r s norm <=
0.75
Example
331
[pmarg,peakf] = mustab(P0,deltak)
pmarg =
1.0670e+00
peakf =
6.3959e 01
The value pmarg = 1.064 means stability as long as the deviation δk from
k
0
= 1.25 does not exceed 0.75 × 1.067 ≈ 0.80. That is, as long as k remains in
the interval [0.45, 2.05]. This estimate is sharp since the closed loop becomes
unstable for δk = –0.81, that is, for k = k
0
– 0.81 = 0.44:
spol(slft(P0, 0.81))
ans =
1.0160e+00 + 1.9208e+00i
1.0160e+00 1.9208e+00i
1.0512e+00 + 9.7820e 01i
1.0512e+00 9.7820e 01i
6.2324e 03 + 6.3884e 01i
6.2324e 03 6.3884e 01i
2.7484e 01 + 1.9756e 01i
2.7484e 01 1.9756e 01i
•Popov criterion: finally, we can apply the Popov criterion to the
linearfractional model returned by aff2lft
[t,P,D,N] = popov(P0,deltak)
t =
1.3767e 02
This test is also successful since t < 0. Note that the Popov criterion also
proves stability for k = k
0
+ δk(
.
) where δk(
.
) is any memoryless nonlinearity
with gain less than 0.75.
3 Robustness Analysis
332
References
[1] Barmish, B.R., “Stabilization of Uncertain Systems via Linear Control,”
IEEE Trans. Aut. Contr., AC–28 (1983), pp. 848–850.
[2] Boyd, S., and Q. Yang, “Structured and Simultaneous Lyapunov Functions
for System Stability Problems,” Int. J. Contr., 49 (1989), pp. 2215–2240.
[3] Doyle, J.C., “Analysis of Feedback Systems with Structured Uncertainties,”
IEE Proc., vol. 129, pt. D (1982), pp. 242–250.
[4] Fan, M.K.H., A.L. Tits, and J.C. Doyle,“Robustness in the Presence of Mixed
Parametric Uncertainty and Unmodeled Dynamics,” IEEE Trans. Aut. Contr.,
36 (1991), pp. 25–38.
[5] Feron, E, P. Apkarian, and P. Gahinet, “SProcedure for the Analysis of
Control Systems with Parametric Uncertainties via ParameterDependent
Lyapunov Functions,” Third SIAM Conf. on Contr. and its Applic., St. Louis,
Missouri, 1995.
[6] Gahinet, P., P. Apkarian, and M. Chilali, “Affine ParameterDependent
Lyapunov Functions for Real Parametric Uncertainty,” Proc. Conf. Dec. Contr.,
1994, pp. 2026–2031.
[7] Haddad, W.M. and D.S. Berstein,“ParameterDependent Lyapunov
Functions, Constant Real Parameter Uncertainty, and the Popov Criterion in
Robust Analysis and Synthesis: Part 1 and 2,” Proc. Conf. Dec. Contr., 1991, pp.
2274–2279 and 2632–2633.
[8] Horisberger, H.P., and P.R. Belanger, “Regulators for Linear TimeVarying
Plants with Uncertain Parameters,” IEEE Trans. Aut. Contr., AC–21 (1976),
pp. 705–708.
[9] How, J.P., and S.R. Hall, “Connection between the Popov Stability Criterion
and Bounds for Real Parameter Uncertainty,” Proc. Amer. Contr. Conf., 1993,
pp. 1084–1089.
[10] Packard, A., and J.C. Doyle, “The Complex Structured Singular Value,”
Automatica, 29 (1994), pp. 71–109.
[11] Popov, V.M., “Absolute Stability of Nonlinear Systems of Automatic
Control,” Automation and Remote Control, 22 (1962), pp. 857–875.
[12] Safonov, M.G., “L1 Optimal Sensitivity vs. Stability Margin,” Proc. Conf.
Dec. Contr., 1983.
References
333
[13] Stein, G. and J.C. Doyle, “Beyond Singular Values and Loop Shapes,” J.
Guidance, 14 (1991), pp. 5–16.
[14] Wie, B., and D.S. Berstein, “Benchmark Problem for Robust Control
Design,” J. Guidance and Contr., 15 (1992), pp. 1057–1059.
[15] Wie, B., Q. Liu, and K.W. Byun, “Robust H
∞
Control Synthesis Method
and Its Application to Benchmark Problems,” J. Guidance and Contr., 15
(1992), pp 1140–1148.
[16] Young, P. M., M. P. Newlin, and J. C. Doyle, “Let's Get Real,” in Robust
Control Theory, Springer Verlag, 1994, pp. 143–174.
[17] Zames, G., “On the InputOutput Stability of TimeVarying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228–238 and 465–476.
3 Robustness Analysis
334
4
StateFeedback Synthesis
MultiObjective StateFeedback . . . . . . . . . . . 43
Pole Placement in LMI Regions . . . . . . . . . . . 45
LMI Formulation . . . . . . . . . . . . . . . . . . 47
Extension to the MultiModel Case . . . . . . . . . . . 49
The Function msfsyn . . . . . . . . . . . . . . . . 411
Design Example . . . . . . . . . . . . . . . . . . 413
References . . . . . . . . . . . . . . . . . . . . . 418
4 StateFeedback Synthesis
42
In many control problems, the design specifications are a mix of performance
and robustness objectives expressed both in the time and frequency domains.
These various objectives are rarely encompassed by a single synthesis
criterion. While some tracking and robustness are best captured by an H
∞
criterion, noise insensitivity is more naturally expressed in LQG terms, and
transient behaviors are more easily tuned in terms of l
1
norm or closedloop
damping.
The LMI framework is particularly well suited to multiobjective
statefeedback synthesis. As an illustration, the LMI Control Toolbox offers
tools for statefeedback design with a combination of the following objectives:
•H
∞
performance (for tracking, disturbance rejection, or robustness aspects)
•H
2
performance (for LQG aspects)
•Robust pole placement specifications (to ensure fast and welldamped
transient responses, reasonable feedback gain, etc.)
These tools apply to multimodel problems, i.e., when the objectives are to be
robustly achieved over a polytopic set of plant models.
MultiObjective StateFeedback
43
MultiObjective StateFeedback
The function msfsyn performs multimodel H
2
/H
∞
statefeedback synthesis
with pole placement constraints. For simplicity, we describe the underlying
problem in the case of a single LTI model. The control structure is depicted by
Figure 41. The plant P(s) is a given LTI system and we assume full
measurement of its state vector x.
Figure 41: Statefeedback control
Denoting by T
∞
(s) and T
2
(s) the closedloop transfer functions from w to z
∞
and
z
2
, respectively, our goal is to design a statefeedback law u = Kx that
•Maintains the RMS gain (H
∞
norm) of T
∞
below some prescribed value γ
0
> 0
•Maintains the H
2
norm of T
2
(LQG cost) below some prescribed value ν
0
> 0
•Minimizes an H
2
/H
∞
tradeoff criterion of the form
•Places the closedloop poles in a prescribed region D of the open lefthalf
plane.
This abstract formulation encompasses many practical situations. For
instance, consider a regulation problem with disturbance d and white
measurement noise n, and let e denote the regulation error. Setting
z
2
P(s)
K
w
u
x
z
∞
α T
∞
∞
2
β T
2
2
2
+
w
d
n \ .

 
, z
∞
e, z
2
x
u \ .

 
, = = =
4 StateFeedback Synthesis
44
the mixed H
2
/H
∞
criterion takes into account both the disturbance rejection
aspects (RMS gain from d to e) and the LQG aspects (H
2
norm from n to z
2
). In
addition, the closedloop poles can be forced into some sector of the stable
halfplane to obtain welldamped transient responses.
Pole Placement in LMI Regions
45
Pole Placement in LMI Regions
The concept of LMI region [3] is useful to formulate pole placement objectives
in LMI terms. LMI regions are convex subsets D of the complex plane
characterized by
where M and L = L
T
are fixed real matrices. The matrixvalued function
is called the characteristic function of the region D. The class of LMI regions is
fairly general since its closure is the set of convex regions symmetric with
respect to the real axis. More practically, LMI regions include relevant regions
such as sectors, disks, conics, strips, etc., as well as any intersection of the
above.
Another strength of LMI regions is the availability of a “Lyapunov theorem” for
such regions. Specifically, if denote the
entries of the matrices L and M, a matrix A has all its eigenvalues in D if and
only if there exists a positive definite matrix P such that [3]
with the notation
Note that this condition is an LMI in P and that the classical Lyapunov
theorem corresponds to the special case
Next we list a few examples of useful LMI regions along with their
characteristic function f
D
:
D z C ∈ : L Mz M
T
z 0 < + + { } =
f
D
z ( ): L Mz M
T
z + + =
λ
ij
{ }
1 i ≤ ,j m ≤
and µ
ij
{ }
1 i ≤ ,j m ≤
λ
ij
P µ
ij
AP µ
ji
PA
T
+ + [ ]
1 i ≤ ,j m ≤
0 <
S
ij
[ ]
1 i ≤ ,j m ≤
:=
S
11
… S
i1
S
m1
… S
mm
\ .



 
… …
.
.
.
f
D
z ( ) z z + =
4 StateFeedback Synthesis
46
•Disk with center at (–q, 0) and radius r:
•Conic sector centered at the origin and with inner angle θ:
The damping ratio of poles lying in this sector is at least cos .
•Vertical strip h
1
< x < h
2
:
r
–q
f
D
z ( )
r – z q +
z q + r – \ .

 
=
θ
f
D
z ( )
θ
2

z z + ( ) sin cos –
θ
2

z z – ( )
cos
θ
2

z z – ( )
θ
2

z z – ( ) sin
\ .


 
=
θ
2

h
1
h
2
f
D
z ( )
2h
1
z z + ( ) – 0
0 z z + ( ) 2h
2
–
\ .


 
=
LMI Formulation
47
LMI Formulation
Given a statespace realization
of the plant P, the closedloop system is given in statespace form by
Taken separately, our three design objectives have the following LMI
formulation:
•H
∞
performance: the closedloop RMS gain from w to z
∞
does not exceed γ
if and only if there exists a symmetric matrix X
∞
such that [5]
•H
2
performance: the closedloop H
2
norm of T
2
does not exceed ν if there
exist two symmetric matrices X
2
and Q such that
¹
¦
´
¦
¦
x
·
= Ax B
1
w B
2
u + +
z
∞
= C
1
x D
11
w D
12
u + +
z
2
= C
2
x D
22
u +
¹
¦
´
¦
¦
x
·
= A B
2
K + ( )x B
1
w +
z
∞
= C
1
D
12
K + ( )x D
11
w +
z
2
= C
2
D
22
K + ( )x
A B
2
K + ( )X
∞
X
∞
A B
2
K + ( )
T
+ B
1
X
∞
C
1
D
12
K + ( )
T
B
1
T
I – D
11
T
C
1
D
12
K + ( )X
∞
D
11
γ
2
I –
\ .





 
0 <
X
∞
0 >
4 StateFeedback Synthesis
48
(see “LQG performance” on page 17 for details)
•Pole placement: the closedloop poles lie in the LMI region
where
if and only if there exists a symmetric matrix X
pol
satisfying
These three sets of conditions add up to a nonconvex optimization problem with
variables Q, K, X
∞
, X
2
and X
pol
. For tractability in the LMI framework, we seek
a single Lyapunov matrix
(41)
that enforces all three objectives. With the change of variable Y := KX, this
leads to the following suboptimal LMI formulation of our multiobjective
statefeedback synthesis problem [4, 3, 2]:
A B
2
K + ( )X
2
X
2
A B
2
K + ( )
T
+ B
1
B
1
T
I –
\ .



 
0 <
Q C
2
D
22
K + ( )X
2
X
2
C
2
D
22
K + ( )
T
X
2
\ .


 
0 >
Trace Q ( ) ν
2
<
D z C ∈ : L Mz M
T
z 0 < + + { } =
L L
T
λ
ij
{ } = =
1 i ≤ ,j m ≤
M µ
ij
{ } =
1 i ≤ ,j m ≤
λ
ij
X
pol
µ
ij
A B
2
K + ( )X
pol
µ
ij
X
pol
µ
ji
X
pol
A B
2
K + ( )
T
+ + + [ ]
1 i ≤ ,j m ≤
0 <
X
pol
0 >
X := X
∞
X
2
X
pol
= =
LMI Formulation
49
Minimize α γ
2
+ β Trace(Q) over Y, X, Q, and γ
2
satisfying
(42)
(43)
(44)
(45)
(46)
Denoting the optimal solution by (X*, Y*, Q*, γ* ), the corresponding
statefeedback gain is given by
K* = Y*(X* )
–1
and this gain guarantees the worstcase performances:
Note that K* does not yield the globally optimal tradeoff in general due to the
conservatism of Assumption (41).
Extension to the MultiModel Case
The LMI approach outlined above extends to uncertain linear timevarying
plants
AX XA
T
B +
2
Y + Y
T
B
2
T
+ B
1
XC
1
T
Y
T
D
12
T
+
B
1
T
I – D
11
T
C
1
X D
12
Y + D
11
γ
2
I –
\ .





 
0 <
Q C
2
X D
22
Y +
XC
2
T
Y
T
D
22
T
+ X
\ .


 
0 >
λ
ij
µ
ij
AX B
2
Y + ( )X
pol
µ
ji
XA
T
Y
T
B
2
T
+ ( ) + + [ ]
1 i ≤ ,j m ≤
0 <
Trace Q ( ) ν
0
2
<
γ
2
γ
0
2
<
T
∞ ∞
γ* T
2 2
Trace Q* ( ) ≤ , ≤
4 StateFeedback Synthesis
410
with statespace matrices varying in a polytope:
Such polytopic models are useful to represent plants with uncertain and/or
timevarying parameters (see “Polytopic Models” on page 214 and the design
example below). Seeking a single quadratic Lyapunov function that enforces
the design objectives for all plants in the polytope leads to the following
multimodel counterpart of the LMI conditions (42)–(46):
Minimize α γ
2
+ β Trace(Q) over Y, X, Q, and γ
2
subject to
P:
¹
¦
´
¦
¦x
·
= A t ( )x B
1
t ( )w B
2
t ( )u + +
z
∞
= C
1
t ( )x D
11
t ( )w D
12
t ( )u + +
z
2
= C
2
t ( )x D
22
t ( )u +
A t ( ) B
1
t ( ) B
2
t ( )
C
1
t ( ) D
11
t ( ) D
12
t ( )
C
2
t ( ) 0 D
22
t ( )
\ .




 
Co
A
k
B
1k
B
2k
C
1k
D
11k
D
12k
C
2k
0 D
22k \ .




 
: k 1 …
·
, , = K
¹ )
¦ ¦
¦ ¦
´ `
¦ ¦
¦ ¦
¦ ¹
∈
A
k
X XA
k
T
B +
2k
Y + Y
T
B
2k
T
+ B
1k
XC
1k
T
Y
T
D
12k
T
+
B
1k
T
I – D
11k
T
C
1k
X D
12k
Y + D
11k
γ
2
I –
\ .





 
0 <
Q C
2k
X D
22k
Y +
XC
2k
T
Y
T
D
22
T
k + X
\ .


 
0 >
γ
2
γ
0
2
<
Trace Q ( ) ν
0
2
<
λ
ij
µ
ij
A
k
X B
2k
Y + ( ) µ
ji
XA
k
T
Y
T
B
2k
T
+ ( ) + + [ ]
1 i ≤ ,j m ≤
0 <
The Function msfsyn
411
The Function msfsyn
The function msfsyn implements the LMI approach to multimodel H
2
/H
∞
synthesis outlined above. The pole placement objectives are expressed in terms
of LMI regions
characterized by the two matrices L and M. LMI regions are specified
interactively with the function lmireg.
Denoting the closedloop transfer functions from w to z
∞
and z
2
by T
∞
and T
2
,
msfsyn computes a suboptimal solution to the mixed problem:
Minimize subject to
•T
∞

∞
< γ
0
•T
2

2
< ν0
•The closedloop poles lie in D.
The syntax is
[gopt,h2opt,K,Pcl] = msfsyn(P,r,obj,region)
where
•P is the plant SYSTEM matrix in the singlemodel case, or the
polytopic/parameterdependent description of the plant in the multimodel
case (see psys for details). The dimensions of the D
22
matrix are specified by
r
•obj = [γ
0
, ν
0
, α, β] is a fourentry vector specifying the H
2
/H
∞
criterion
•region specifies the LMI region to be used for pole placement, the default
being the open lefthalf plane. Use lmireg to generate the matrix region, or
set it to [L, M] if the characteristic matrices L and M are readily available.
On output, gopt and h2opt are the guaranteed H
∞
and H
2
performances, K is
the statefeedback gain, and Pcl is the closedloop system in SYSTEM matrix or
polytopic model format.
D z C ∈ : L Mz Mz 0 < + + { } =
α T
∞ 0
2
β T
2 2
2
+
4 StateFeedback Synthesis
412
Several mixed or unmixed designs can be performed with msfsyn. The various
possibilities are summarized in the table below.
obj Corresponding Design
[0 0 0 0] pole placement only
[0 0 1 0] H
∞
optimal design
[0 0 0 1] H
2
optimal design
[g 0 0 1] minimize T
2

2
subject to T
∞

∞
< g
[0 h 1 0] minimize T
∞

∞
subject to T
2

2
< h
[0 0 a b] minimize a T
∞ ∞
2
b T
2 2
2
+
Design Example
413
Design Example
This example is adapted from [1] and covered by the demo sateldem. The
system is a satellite consisting of two rigid bodies (main body and instrumen
tation module) joined by a flexible link (the “boom”). The boom is modeled as a
spring with torque constant k and viscous damping f and finiteelement
analysis gives the following uncertainty ranges for k and f :
0.09 ð k 0.4
0.0038 ð f 0.04:
The dynamical equations are
where θ
1
and θ
2
are the yaw angles for the main body and the sensor module,
T is the control torque, and w is a torque disturbance on the main body.
Figure 41: Satellite
The control purpose is to minimize the influence of the disturbance w on the
angular position θ
2
. This goal is expressed through the following objectives:
J
1
θ
··
1
f θ
·
1
θ
·
2
– ( ) k θ
1
θ
2
– ( ) T w + = + +
J
2
θ
··
2
f θ
·
2
θ
·
1
– ( ) k θ
2
θ
1
– ( ) 0 = + + ¹ )
´ `
¦ ¹
θ
1
θ
2
main body
sensor module
4 StateFeedback Synthesis
414
•Obtain a good tradeoff between the RMS gain from w to θ
2
and the H
2
norm
of the transfer function from w to
(LQG cost of control)
•Place the closedloop poles in the region shown in Figure 43 to guarantee
some minimum decay rate and closedloop damping
Figure 42: Pole placement region
•Achieve these objectives for all possible values of the varying parameters k
and f. Since these parameters enter the plant state matrix in an affine
manner, we can model the parameter uncertainty by a polytopic system with
four vertices corresponding to the four combinations of extremal parameter
values (see “From Affine to Polytopic Models” on page 220).
θ
1
θ
2
T
\ .



 
–0.1
Design Example
415
To solve this design problem with the LMI Control Toolbox, first specify the
plant as a parameterdependent system with affine dependence on k and f. A
statespace description is readily derived from the dynamical equations as:
This parameterdependent model is entered by the commands
a0 = [zeros(2) eye(2); zeros(2,4)]
ak = [zeros(2,4) ; [1 1;1 1] zeros(2)]
af = [zeros(2,4) ; zeros(2) [1 1;1 1]]
e0 = diag([1 1 J1 J2])
b = [0 0;0 0;1 1;0 0] % b = [b1 b2]
c = [0 1 0 0;1 0 0 0;0 1 0 0;0 0 0 0] % c = [c1;c2]
d = [0 0;0 0;0 0;0 1]
% range of parameter values
pv = pvec('box',[0.09 0.4 ; 0.0038 0.04])
% parameterdependent plant
P = psys(pv,[ ltisys(a0,b,c,d,e0) , ...
ltisys(ak,0*b,0*c,0*d,0) , ...
ltisys(af,0*b,0*c,0*d,0) ])
Next, specify the LMI region for pole placement as the intersection of the
halfplane x < –0.1 and of the sector centered at the origin and with inner angle
3π/4. This is done interactively with the function lmireg:
1 0 0 0
0 1 0 0
0 0 J
1
0
0 0 0 J
2
θ
·
1
θ
·
2
θ
··
1
θ
··
2
0 0 1 0
0 0 0 1
k – k f – f
k k – f f –
θ
1
θ
2
θ
·
1
θ
·
2
0
0
0
1
w ( T) + + + =
z
∞
θ
2
, z
2
1 0 0 0
0 1 0 0
0 0 0 0
θ
1
θ
2
θ
·
1
θ
·
2
0
0
1
T + = =
4 StateFeedback Synthesis
416
region = lmireg
To assess the tradeoff between the H
∞
and H
2
performances, first compute the
optimal quadratic H
∞
performance subject to the pole placement constraint by
gopt = msfsyn(P,[1 1],[0 0 1 0],region)
This yields gopt ≈ 0. For a prescribed H
∞
performance g > 0, the best H
2
performance h2opt is computed by
[gopt,h2opt,K,Pcl] = msfsyn(P,[1 1],[g 0 0 1],region)
Here obj = [g 0 0 1] asks to optimize the H
2
performance subject to
T
∞

∞
< g and the pole placement constraint. Repeating this operation for the
values g ∈ {0.01, 0.1, 0.2, 0.5} yields the Paretolike tradeoff curve shown in
Figure 44.
By inspection of this curve, the statefeedback gain K obtained for g = 0.1 yields
the best compromise between the H
∞
and H
2
objectives. For this choice of K,
Figure 45 superimposes the impulse responses from w to θ
2
for the four
combinations of extremal values of k and f.
Finally, the closedloop poles for these four extremal combinations are
displayed in Figure 46. Note that they are robustly placed in the prescribed
LMI region.
Figure 43: Tradeoff between the H
∞
and H
2
performances
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
2.5
3
3.5
4
4.5
5
5.5
6
6.5
H−infinity performance
H
2
p
e
r
f
o
r
m
a
n
c
e
Design Example
417
Figure 44: Impulse responses for the extremal values of k and f
Figure 45: Corresponding closedloop poles
0 1 2 3 4 5 6 7 8 9 10
−2
0
2
4
6
8
10
12
x 10
−3
Time (secs)
A
m
p
l
i
t
u
d
e
Impulse responses w −> x2 for the robust design
−14 −12 −10 −8 −6 −4 −2 0
−10
−8
−6
−4
−2
0
2
4
6
8
10
Closed−loop poles for the four extremal sets of parameter values
4 StateFeedback Synthesis
418
References
[1] Biernacki, R.M., H. Hwang, and S.P. Battacharyya, “Robust Stability with
Structured Real Parameter Perturbations,” IEEE Trans. Aut. Contr., AC–32
(1987), pp. 495–506.
[2] Boyd, S., L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix
Inequalities in Systems and Control Theory, SIAM books, 1994.
[3] Chilali, M., and P. Gahinet, “H
∞
Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr. Also in Proc. Conf.
Dec. Contr., 1994, pp. 553–558.
[4] Khargonekar, P.P., and M.A. Rotea,“Mixed H
2
/H
∞
Control: A Convex
Optimization Approach,” IEEE Trans. Aut. Contr., 39 (1991), pp. 824837.
[5] Scherer, C., “H
∞
Optimization without Assumptions on Finite or Infinite
Zeros,” SIAM J. Contr. Opt., 30 (1992), pp. 143–166.
5
Synthesis of H
∞
Controllers
H∞ Control . . . . . . . . . . . . . . . . . . . . . 53
Riccati and LMIBased Approaches . . . . . . . . . . . 57
H∞ Synthesis . . . . . . . . . . . . . . . . . . . . 510
Validation of the ClosedLoop System . . . . . . . . . . 513
MultiObjective H∞ Synthesis . . . . . . . . . . . . 515
LMI Formulation . . . . . . . . . . . . . . . . . . . 516
The Function hinfmix . . . . . . . . . . . . . . . . . 520
LoopShaping Design with hinfmix . . . . . . . . . . . 520
References . . . . . . . . . . . . . . . . . . . . . 522
521
5 Synthesis of H∞ Controllers
52
Disturbance rejection, input/output decoupling, and general robust stability or
performance objectives can all be formulated as gain attenuation problems in
the face of dynamical or parametric uncertainty. H
∞
control is the cornerstone
of design techniques for this class of problems.
The LMI Control Toolbox addresses three variants of outputfeedback H
∞
synthesis:
•Standard H
∞
control for continuoustime systems. Both the Riccatibased
and LMIbased solutions of this problem are implemented
•Multiobjective H
∞
design (mixed H
2
/H
∞
synthesis with closedloop pole
placement in LMI regions)
•Discretetime H
∞
control via either Riccati or LMIbased approaches
Additional facilities for loopshaping design are described in Chapter 6, “Loop
Shaping.”
H• Control
53
H
∞
Control
The H
∞
norm of a stable transfer function G(s) is its largest input/output
RMS gain, i.e.,
where L
2
is the space of signals with finite energy and y(t) is the output of the
system G for a given input u(t). This norm also corresponds to the peak gain of
the frequency response G(jω), that is,
In its abstract “standard” formulation, the H
∞
control problem is one of
disturbance rejection. Specifically, it consists of minimizing the closedloop
RMS gain from w to z in the control loop of Figure 51. This can be interpreted
as minimizing the effect of the worstcase disturbance w on the output z.
Figure 51: H
∞
control
Partitioning the plant P(s) as
G
∞
sup
y
L
2
u
L
2
 =
u L
2
∈
u 0 ≠
G
∞
sup
ω
σ
max
G jω ( ) ( ) =
K(s)
P(s)
u
y
w z
Z s ( )
Y s ( ) \ .

  P
11
s ( ) P
12
s ( )
P
21
s ( ) P
22
s ( )
\ .


 
W s ( )
U s ( ) \ .

 
, =
5 Synthesis of H∞ Controllers
54
the closedloop transfer function from w to z is given by the linearfractional
expression
(51)
Hence the optimal H
∞
control seeks to minimize F(P, K)
∞
over all stabilizing
LTI controllers K(s). Alternatively, we can specify some maximum value γ for
the closedloop RMS gain and ask the following question:
Does there exist a stabilizing controller K(s) that ensures F(P, K)
∞
< γ ?
This is known as the suboptimal H
∞
control problem, and γ is called the
prescribed H
∞
performance.
A number of control problems can be recast in this standard formulation.
Example 5.1. Consider the following disturbance rejection problem relative to
the tracking loop of Figure 52:
For disturbances d with spectral density concentrated between 0 and
1 rad/s, maintain the closedloop RMS gain from d to the output y below 1%.
Since the RMS gain is the largest gain over all squareintegrable inputs,
external signals with uneven spectral density must be specified by means of
shaping filters. For instance, a disturbance d with all its energy in the
Figure 52: Tracking loop with external disturbance
frequency band [0,1] is described as
where is an arbitrary squareintegrable signal and W
lp
(s) is a lowpass filter
with cutoff frequency at 1 rad/s. From the closedloop equation
F P K , ( ) := P
11
P
12
K I P
22
K – ( )
1 –
P
21
+
r
y +
−
K G
d
e u +
d s ( ) W
l p
s ( )d
˜
s ( ) =
d
˜
H• Control
55
y = Sd + Tr with S := (I + GK)
–1
and T := GKS,
the transfer function from the equalized disturbance to y is SW
lp
. Hence our
disturbance rejection objective is equivalent to finding a stabilizing controller
K(s) such that
S W
lp

∞
< 1.
This RMS gain constraint is readily turned into a standard H
∞
problem by
observing that, in conformance with (51),
Example 5.2. Decoupling constraints are handled in a similar fashion. For
instance, consider the loop of Figure 53 and the problem of decoupling the
action of r
1
on y
1
from that of r
2
on y
2
in the control bandwidth ω ð 10 rad/s.
The closedloop transfer function from to
is T = GK(I + GK)
–1
. Decoupling is achieved if T(jω) ≈ I in the bandwidth ω ð 10
rad/s, or equivalently if S(jω) = I – T(jω) has a small gain in this bandwidth.
This can be expressed as
d
˜
SW
l p
F P K , ( ) with Ps :=
W
l p
s ( ) G – s ( )
W
l p
s ( ) G – s ( )
\ .


 
=
r
1
r
2 \ .


 
y
1
y
2 \ .


 
10
s 10 +
S s ( )
∞
ε, <
5 Synthesis of H∞ Controllers
56
Figure 53: Decoupling problem
which is again of the standard form
Example 5.3. H
∞
optimization is also useful for the design of robust controllers
in the face of unstructured uncertainty. Consider the uncertain system
where the uncertainty ∆(
.
) is an arbitrary BIBOstable system satisfying the
norm bound ∆
∞
< 1. From the small gain theorem, the controller u = K(s)y
robustly stabilizes this uncertain system if the nominal closedloop transfer
function F (P, K) from w to z satisfies
F (P, K)
∞
< 1.
Finally, a useful application of H
∞
synthesis is the loop shaping design
procedure discussed in the next chapter.
r
1
y
1
+
−
K(s) G(s)
u
y
2
−
+
r
2
F P K , ( )
∞
ε < with P s ( ) :=
10
s 10 +
I
2
10
s 10 +
 – G s ( )
I
2
G s ( ) –
\ .



 
.
P(s)
w
z
u y
∆
·
( )
H• Control
57
Riccati and LMIBased Approaches
Since the continuous and discretetime cases are essentially similar, we
review only continuoustime H
∞
synthesis. Both the Riccatibased and
LMIbased approaches are implemented in the LMI Control Toolbox. While the
LMIbased approach is computationally more involved for large problems, it
has the merit of eliminating the regularity restrictions attached to the
Riccatibased solution. Since both approaches are statespace methods, the
plant P(s) is given in statespace form by
The Riccatibased approach is applicable to plants P satisfying
(A1) D
12
and D
21
have full rank,
(A2) P
12
(s) and P
21
(s) have no zeros on the jωaxis.
Given γ > 0, it gives necessary and sufficient conditions for the existence of
internally stabilizing controllers K(s) such that
F (P, K)
∞
< γ
Specifically, the H
∞
performance γ is achievable if and only if [2]:
(i) γ > max
x
·
= Ax B
1
w B
2
u + +
z = C
1
x D
11
w D
12
u + +
y = C
2
x D
21
w D +
22
u +
σ
max
I D
12
D
12
+
– ( )D
11
), σ
max
D
11
I ( D
21
+
D
21
– ( ) ) ( )
5 Synthesis of H∞ Controllers
58
(ii) the Riccati equations associated with the Hamiltonian pencils
have stabilizing solutions X
∞
and Y
∞
, respectively
(iii) X
∞
and Y
∞
further satisfy
X
∞
Š 0, Y
∞
Š 0, ρ(X
∞
Y
∞
) < γ
2
Using this characterization, the optimal H
∞
performance γ
opt
can be computed
by bisection (the socalled γiterations). An H
∞
controller with performance
γ ≥ γ
opt
is then given by explicit formulas involving X
∞
, Y
∞
, and the plant
statespace matrices. Singular H
∞
problems (those violating (A1)–(A2)) require
regularization by small perturbation. A notable exception is the direct
computation of the optimal H
∞
performance when D
12
or D
21
is singular [5].
In comparison, the LMI approach is applicable to any plant and does not
involve γiterations. Rather, the H
∞
performance is directly optimized by
solving the following LMI problem [4, 6]:
H λε
A 0 0 γ
1 –
B
1
B
2
0 A
T
– C
1
T
– 0 0
C
1
0 I – γ
1 –
D
11
D
12
0 γ
1 –
B
1
T
γ
1 –
D
11
T
I – 0
0 B
2
T
D
12
T
0 0
\ .










 
λ
I 0
0 0 \ .

 
– = –
J λε
A
T
0 0 γ
1 –
C
1
T
C
2
T
0 A – B
1
– 0 0
B
1
T
0 I – γ
1 –
D
11
T
D
21
T
0 γ
1 –
C
1
γ
1 –
D
11
I – 0
0 C
2
D
21
0 0
\ .









 
λ
I 0
0 0 \ .

 
– = –
H• Control
59
Minimize γ over R = R
T
and S = S
T
such that
(52)
(53)
(54)
where N
12
and N
21
denote bases of the null spaces of and (C
2
, D
21
),
respectively.
This problem falls within the scope of the LMI solver mincx. Note that the LMI
constraints (52)–(53) amount to the inequality counterpart of the H
∞
Riccati
equations, R
–1
and S
–1
being solutions of these inequalities. Again, explicit
formulas are available to derive H
∞
controllers from any solution (R, S, γ) of
(52)–(54) [3].
N
12
0
0 I
\ .

 
T
AR RA
T
+ RC
1
T
B
1
C
1
R γI – D
11
B
1
T
D
11
T
γI –
\ .





 
N
12
0
0 I
\ .

 
0 <
N
21
0
0 I
\ .

 
T
A
T
S SA + SB
1
C
1
T
B
1
T
S γI – D
11
T
C
1
D
11
γI –
\ .





 
N
21
0
0 I
\ .

 
0 <
R I
I S \ .

 
0 ≥
B
2
T
D
12
T
, ( )
5 Synthesis of H∞ Controllers
510
H
∞
Synthesis
The LMI Control Toolbox supports continuous and discretetime H
∞
synthesis
using either Riccati or LMIbased approaches. Transparency and numerical
reliability were primary concerns in the development of the related tools.
Original features of the Riccatibased functions include:
•The use of pencilbased Riccati solvers for highly accurate computation of
Riccati solutions (see care, ricpen, and their discretetime counterparts)
•Direct computation of the optimal H
∞
performance when D
12
and D
21
are
rankdeficient (without using regularization) [5]
•Automatic regularization of singular problems for the controller
computation.
In addition, reducedorder controllers are computed whenever the matrices I –
γ
–2
X
∞
Y
∞
or I – RS are nearly rankdeficient. The functions for standard H
∞
synthesis are listed in Table 51.
To illustrate the use of hinfric and hinflmi, consider the simple firstorder
plant P(s) with statespace equations
This plant is specified as a SYSTEM matrix by
a=0; b1=1; b2=2; c1=1; d11=0; d12=0; c2= 1; d21=1; d22=0;
P=ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22])
Note that the plant P(s) is “singular” since D
12
= 0.
Table 51: Functions for H
∞
synthesis
H
∞
synthesis Riccatibased LMIbased
Continuoustime hinfric hinflmi
Discretetime dhinfric dhinflmi
¹
¦
´
¦
¦
x
·
w 2u + =
z x =
y x – w + =
H• Synthesis
511
To determine the optimal H
∞
performance over stabilizing control laws
u = K(s)y, type
gopt = hinfric(P,[1 1])
where the second argument [1 1] specifies the numbers of measurements and
controls (lengths of the vectors y and u). The γiterations performed by hinfric
are displayed on the screen as follows:
GammaIteration:
Best closedloop gain (GAMMA_OPT): 1.008789
The optimal H
∞
performance for P(s) is therefore approximately 1. For each
tested value of γ, hinfric gives a feasible/infeasible diagnosis as well as the
cause for infeasibility when applicable.
Alternatively, you can use the LMIbased function hinflmi with the same
syntax:
gopt = hinflmi(P,[1 1])
This function optimizes the H
∞
performance using mincx:
Gamma Diagnosis
10.0000 : feasible
1.0000 : infeasible (Y is not positive semidefinite)
5.5000 : feasible
3.2500 : feasible
2.1250 : feasible
1.5625 : feasible
1.2812 : feasible
1.1406 : feasible
1.0703 : feasible
1.0352 : feasible
1.0176 : feasible
1.0088 : feasible
5 Synthesis of H∞ Controllers
512
Minimization of gamma:
Solver for linear objective minimization under LMI constraints
Iterations : Best objective value so far
Result:feasible solution of required accuracy
best objective value: 1.002517
guaranteed relative accuracy: 9.11e 03
fradius saturation: 0.018% of R = 1.00e+08
The value displayed in the second column is the current best estimate of γ
opt
.
The optimal value 1.0025 approximately matches the value found by hinfric.
When a second output argument is provided, these two functions also return
an optimal H
∞
controller K(s):
[gopt,K] = hinfric(P,[1 1])
1 2.196542
*** new lower bound: 0.571370
2 1.381964
3 1.381964
4 1.094665
5 1.052469
6 1.008499
7 1.008499
8 1.008499
*** new lower bound: 0.656673
9 1.002517
*** new lower bound: 0.878172
10 1.002517
*** new lower bound: 0.982223
11 1.002517
*** new lower bound: 0.993382
H• Synthesis
513
The output K is the SYSTEM matrix of K(s). To compute a suboptimal H
∞
controller with guaranteed performance γ < 10, type
[g,K] = hinfric(P,[1 1],10,10)
In this case only the value γ = 10 is tested. In the LMI approach, the same
problem is solved by the command
[g,K] = hinflmi(P,[1 1],10)
Finally, the syntax
[g,K,x1,x2,y1,y2] = hinfric(P,[1,1],10,10)
also returns the stabilizing solutions X
∞
= x2/x1 and Y
∞
= y2/y1 of the H
∞
Riccati equations for γ = 10. Similarly,
[g,K,x1,x2,y1,y2] = hinflmi(P,[1,1],10)
returns solutions X = x2/x1 and Y = y2/y1 of the H
∞
Riccati inequalities.
Equivalently, R = x1 and S = y1 are solutions of the characteristic LMIs
(52)(54) since x2 and y2 are always set to g × I. Further options and control
parameters are discussed in the “Command Reference” chapter.
Finally, the counterpart of these functions for discretetime H
∞
synthesis are
called dhinfric, dhinflmi, and dnorminf and follow the exact same syntax.
Validation of the ClosedLoop System
Given the H
∞
controller K computed by hinfric or hinflmi, the closedloop
mapping represented in Figure 54 is formed with the function slft:
Figure 54: Closedloop system
P(s)
u
y
w
z
K(s)
5 Synthesis of H∞ Controllers
514
Closedloop stability is then checked by inspecting the closedloop poles with
spol
spol(clsys)
while the closedloop RMS gain from w to z is computed by
norminf(clsys)
You can also plot the time and frequencydomain responses of the closed loop
clsys with splot.
Finally, hinfpar extracts the statespace matrices A, B
1
, . . . from the plant
SYSTEM matrix P:
[a,b1,b2,c1,c2,d11,d12,d21,d22] = hinfpar(P,r)
Set the second argument r to [p2 m2] if D
22
∈ R
p
2
×m
2
.
MultiObjective H• Synthesis
515
MultiObjective H
∞
Synthesis
In many realworld applications, standard H
∞
synthesis cannot adequately
capture all design specifications. For instance, noise attenuation or regulation
against random disturbances are more naturally expressed in LQG terms.
Similarly, pure H
∞
synthesis only enforces closedloop stability and does not
allow for direct placement of the closedloop poles in more specific regions of the
lefthalf plane. Since the pole location is related to the time response and
transient behavior of the feedback system, it is often desirable to impose
additional damping and clustering constraints on the closedloop dynamics.
This makes multiobjective synthesis highly desirable in practice, and LMI
theory offers powerful tools to attack such problems.
Mixed H
2
/H
∞
synthesis with regional pole placement is one example of
multiobjective design addressed by the LMI Control Toolbox. The control
problem is sketched in Figure 55. The output channel z
∞
is associated with the
H
∞
performance while the channel z
2
is associated with the LQG aspects (H
2
performance).
Denoting by T
∞
(s) and T
2
(s) the closedloop transfer functions from w to z
∞
and
z
2
, respectively, we consider the following multiobjective synthesis problem:
Figure 55: Multiobjective H
∞
synthesis
Design an outputfeedback controller u = K(s)y that
•Maintains the H
∞
norm of T
∞
(s) (RMS gain) below some prescribed value
γ
0
> 0
•Maintains the H
2
norm of T
2
(s) (LQG cost) below some prescribed value ν
0
> 0
P(s)
u
y
w
z
2
K(s)
z
∞
5 Synthesis of H∞ Controllers
516
•Minimizes a tradeoff criterion of the form
with α Š 0 and β ≥ 0
•Places the closedloop poles in some prescribed LMI region D
Recall that LMI regions are general convex subregions of the open lefthalf
plane (see “Pole Placement in LMI Regions” on page 45 for details).
LMI Formulation
Let
and
be statespace realizations of the plant P(s) and controller K(s), respectively,
and let
be the corresponding closedloop statespace equations.
Our three design objectives can be expressed as follows:
α T
∞ ∞
2
β T
2
2
2
+
x
·
= Ax B
1
w B
2
u + +
z
∞
= C
∞
x D
∞1
w D
∞2
u + +
z
2
= C
2
x D
21
w D
22
u + +
y = C
y
x D
y1
w +
¹ )
¦ ¦
¦ ¦
¦ ¦
´ `
¦ ¦
¦ ¦
¦ ¦
¦ ¹
ζ
·
A
Kζ
B
Ky
+ =
u
·
C
Kζ
D
Ky
+ =
¹ )
´ `
¦ ¹
¹
¦
´
¦
¦
x
·
cl
= A
cl
x
cl
B
cl
w +
z
∞
= C
cl1
x
cl
D
cl1
w +
z
2
= C
cl2
x
cl
D
cl2
w +
MultiObjective H• Synthesis
517
•H
∞
performance: the closedloop RMS gain from w to z
∞
does not exceed γ
if and only if there exists a symmetric matrix X
∞
such that
•H
2
performance: the H
2
norm of the closedloop transfer function from w to
z
2
does not exceed ν if and only if D
cl2
= 0 and there exist two symmetric
matrices χ
2
and Q such that
•Pole placement: the closedloop poles lie in the LMI region
with L = L
T
= and M = if and only if there exists a
symmetric matrix χ
pol
satisfying
A
cl
χ
∞
χ
∞
A
cl
T
+ B
cl
X
∞
C
cl1
T
B
cl
T
I – D
cl1
T
C
cl1
χ
∞
D
cl1
γ –
2
I
\ .





 
0 <
χ
∞
0 >
A
cl
χ
2
χ
2
A
cl
T
+ B
cl
B
cl
T
I –
\ .



 
0 <
Q C
cl2
χ
2
χ
2
C
cl2
T
χ
2
\ .


 
0 >
Trace Q ( ) ν
2
<
D z C ∈ : L Mz M
T
z 0 < + + { } =
λ
ij
{ }
1 i ≤ ,j m ≤
µ
ij
[ ]
1 i ≤ ,j m ≤
5 Synthesis of H∞ Controllers
518
For tractability in the LMI framework, we must seek a single Lyapunov
matrix
χ := χ
∞
= χ
2
= χ
pol
that enforces all three sets of constraints. Factorizing
χ
as
and introducing the change of controller variables [3]:
the inequality constraints on
χ
are readily turned into LMI constraints in the
variables R, S, Q, A
K
, B
K
, C
K
, and D
K
[8, 1]. This leads to the following
suboptimal LMI formulation of our multiobjective synthesis problem:
λ
ij
χ
pol
µ
ij
A
cl
χ
pol
µ
ji
X
pol
A
cl
T
+ + [ ]
1 i ≤ ,j m ≤
0 <
χ
pol
0 >
χ χ
1
χ
2
1 –
, χ
1 :=
R I
M
T
0
\ .

 
, χ
2 :=
0 S
I N
T
\ .

 
=
¹
¦
´
¦
¦
B
K
:= NB
K
SB
2
D
K
+
C
K
:= C
K
M
T
D
K
C
y
R +
A
K
:= NA
K
M
T
NB
K
C
y
R SB
2
C
K
M
T
S A B
2
D
K
C
y
+ ( )R, + + +
MultiObjective H• Synthesis
519
Minimize α γ
2
+ β Trace(Q) over R, S, Q, A
K
, B
K
, C
K
, D
K
, and γ
2
satisfying:
Given optimal solutions γ*, Q* of this LMI problem, the closedloop H
∞
and
H
2
performances are bounded by
AR RA
T
B
2
C
K
C
K
T
B
2
T
+ + + A
K
T
A B +
2
D
K
C
y
+ B
1
B
2
D
K
D
y1
+ H
H A
T
S SA + B
K
C
y
C
y
T
B
K
T
+ + SB
1
B
K
D
y1
+ H
H H I – H
C
∞
R D
∞2
C
K
+ C
∞
D
∞2
D
K
C
y
+ D
∞1
D
∞2
D
K
D
y1
+ γ –
2
I
\ .







 
0 <
µ
ji
RA
T
C
K
T
B
2
T
+ A
K
T
A B +
2
D
K
C
y
( )
T
A
T
S C
y
T
B
K
T
+
\ .



 
1 i ≤ ,j m ≤
0 <
Q C
2
R D
22
C
K
+ C
2
D
22
D
K
C
y
+
H R I
H I S
\ .



 
0 >
λ
ij
R I
I S \ .

 
µ
ij
AR B
2
C
K
+ A B +
2
D
K
C
y
A
K
SA B
K
C
y
+
\ .


 
+ +
γ
2
γ
0
2
<
TraceQ ν
0
2
<
D
21
D
22
D
K
D
y1
+ 0 =
T
∞ ∞
γ* T
2 2
Trace Q* ( ). ≤ , ≤
5 Synthesis of H∞ Controllers
520
The Function hinfmix
The function hinfmix implements the LMI approach to mixed H
2
/H
∞
synthesis
with regional pole placement described above. Its syntax is
[gopt,h2opt,K,R,S] = hinfmix(P,r,obj,region)
where
•P is the SYSTEM matrix of the LTI plant P(s). Note that z
2
or z
∞
can be empty,
or even both when performing pure pole placement
•r is a threeentry vector listing the lengths of z
2
, y, and u
•obj = [γ
0
, ν
0
, α, β] is a fourentry vector specifying the H
2
/H
∞
constraints and
criterion
•region specifies the LMI region for pole placement, the default being the
open lefthalf plane. Use lmireg to interactively generate the matrix region.
The outputs gopt and h2opt are the guaranteed H
∞
and H
2
performances, K is
the controller SYSTEM matrix, and R,S are optimal values of the variables R, S
(see “LMI Formulation” on page 516).
You can perform the following mixed and unmixed designs by setting obj
appropriately:
LoopShaping Design with hinfmix
In the loopshaping context (see next chapter), the closedloop poles also
include the poles of the shaping filters. Since these poles are not affected by the
obj Corresponding Design
[0 0 0 0] pole placement only
[0 0 1 0] H
∞
optimal design
[0 0 0 1] H
2
optimal design
[g 0 0 1] minimize T
2

2
subject to T
∞

∞
< g
[0 h 1 0] minimize T
∞

∞
subject to T
2

2
< h
[0 0 a b] minimize
[g h a b] most general problem
a T
∞ 0
2
b T
2 2
2
+
MultiObjective H• Synthesis
521
controller, they impose hard limitations on achievable closedloop clustering
objectives. In particular, shaping filters with modes close to the imaginary axis
will doom any attempt to push the closedloop modes into a welldamped
region.
When the purpose of such filters is to enforce integral control action, you can
eliminate this difficulty by moving the integrator from the shaping filter to the
control loop itself. This amounts to reparametrizing the controller as
K(s) = (s)
and performing the synthesis for . An example can be found at the end of
Chapter 6.
1
s
K
˜
K
˜
5 Synthesis of H∞ Controllers
522
References
[1] Chilali, M., and P. Gahinet, “H
∞
Design with Pole Placement Constraints:
an LMI Approach,” to appear in IEEE Trans. Aut. Contr., 1995.
[2] Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “StateSpace
Solutions to Standard H
2
and H
∞
Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
[3] Gahinet, P., “Explicit Controller Formulas for LMIbased H
∞
Synthesis,”
submitted to Automatica. Also in Proc. Amer. Contr. Conf., 1994, pp. 2396–
2400.
[4] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H
∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
[5] Gahinet, P., and A.J. Laub, “Reliable Computation of γ
opt
in Singular H
∞
Control,” to appear in SIAM J. Contr. Opt., 1995. Also in Proc. Conf. Dec.
Contr., Lake Buena Vista, Fl., 1994, pp. 1527–1532.
[6] Iwasaki, T., and R.E. Skelton, “All Controllers for the General H
∞
Control
Problem: LMI Existence Conditions and StateSpace Formulas,” Automatica,
30 (1994), pp. 1307–1317.
[7] Scherer, C., “H
∞
Optimization without Assumptions on Finite or Infinite
Zeros,” SIAM J. Contr. Opt., 30 (1992), pp. 143–166.
[8] Scherer, C., “Mixed H
2
H
∞
Control,” to appear in Trends in Control: A
European Perspective, volume of the special contributions to the ECC 1995.
6
Loop Shaping
The LoopShaping Methodology . . . . . . . . . . . 62
The LoopShaping Methodology . . . . . . . . . . . 63
Design Example . . . . . . . . . . . . . . . . . . 65
Specification of the Shaping Filters . . . . . . . . . 610
Nonproper Filters and sderiv . . . . . . . . . . . . . . 612
Specification of the Control Structure . . . . . . . . 614
Controller Synthesis and Validation . . . . . . . . . 616
Practical Considerations . . . . . . . . . . . . . . 618
Loop Shaping with Regional Pole Placement . . . . . 619
References . . . . . . . . . . . . . . . . . . . . . 624
6 Loop Shaping
62
The LoopShaping Methodology
Loop shaping is a popular technique for combining specifications such as
tracking performance, bandwidth, disturbance rejection, rolloff, robustness to
model uncertainty, gain limitation, etc. In this frequency domain method, the
design specifications are reflected as gain constraints on the various
closedloop transfer functions.
Loopshaping design involves four steps:
1 Express the design specifications as constraints on the gain response of the
closedloop transfer functions. This defines ideal gain profiles called loop
shapes.
2 Specify the desired loop shapes graphically with the GUI magshape.
3 Specify the control structure, i.e., how the feedback loop is organized and
which input/output transfer functions are relevant to the loopshaping
objectives.
4 Perform a standard H
∞
synthesis on the resulting control structure to
compute an adequate controller.
This chapter contains a tutorial introduction to the loopshaping methodology
as well as the available tools for loopshaping design.
The LoopShaping Methodology
63
The LoopShaping Methodology
Loop shaping is a design procedure to formulate frequencydomain
specifications as H
∞
constraints and standard H
∞
synthesis problems. In
loopshaping design, the performance and robustness specifications are first
expressed in terms of loop shapes, i.e., of desired gain responses for the various
closedloop transfer functions. These shaping objectives are then turned into
uniform H
∞
bounds by means of shaping filters, and standard H
∞
synthesis
algorithms are applied to compute an adequate controller.
To get a feeling for the loopshaping methodology, consider the MIMO tracking
loop of Figure 61 and suppose that we want to design a controller with integral
behavior. This requirement is naturally expressed in terms of the openloop
response GK(s), for instance by a gain constraint of the form
(61)
Figure 61: Simple tracking loop
Here the main closedloop transfer functions are:
•The sensitivity function S = (I + GK)
–1
(the transfer function from r to e)
•The complementary sensitivity function T = GK(I + GK)
–1
(the transfer
function from r to y).
To express the openloop specification (61) in terms of these closedloop
transfer functions, observe that σ
min
(GK(jω)) ≈ 1/σ
max
(S(jω)) whenever the
openloop gain is large, i.e., σ
min
(GK(jω)) >> 1. Thus, (61) is equivalent to
requiring that
(62)
σ
min
GK jω ( ) ( ) 1 ω ⁄ for ω 1 « >
+
−
K G
r
r
1
r
2
\ .
  =
y
y
1
y
2
\ .
  =
e u
σ
max
S jω ( ) ( ) < ω for ω 1 «
6 Loop Shaping
64
To turn this frequencydependent gain constraint into a normalized H
∞
bound,
a useful tool is the notion of shaping filter. Shaping filters are rational filters
whose magnitude plot reflects the desired loop shape. Here for instance, the
shaping filter
W(s) = I
2
can be used to normalize the constraint (62) to
(63)
Indeed, (63) is equivalent to
(1 + GK(jω)) for all ω,
which in turn constrains GK to
σ
min
(GK(jω)) > for ω << 1
Likewise, rolloff requirements on GK(jω) are enforced by H
∞
bounds of the
form
W(s)T(s)
∞
< 1
where W(s) is some appropriate highpass filter. This reformulation is rooted in
the approximation σ
max
(GK(jω)) ≈ σ
max
(T(jω)) valid whenever the openloop
gain is small, i.e., σ
max
(GK(jω)) << 1. Such constraints are also associated with
robust stability in the face of plant uncertainty of the form G(s) = (I +∆(s))G
0
(s)
where ∆ satisfies the frequencydependent bound
σ
max
(∆(jω)) < W(jω)
Finally, shaping filters are useful to incorporate information about the spectral
density of exogenous signals such as disturbances or measurement noise (see
“Example 5.1” on page 54).
1
s

W s ( )S s ( )
∞
1 <
1
ω
 σ
min
<
1
ω
 1
1
ω
 ≈ –
Design Example
65
Design Example
Loop shaping design with the LMI Control Toolbox involves four steps:
1 Express the design specifications in terms of loop shapes and shaping filters.
2 Specify the shaping filters by their magnitude profile. This is done
interactively with the graphical user interface magshape.
3 Specify the control loop structure with the functions sconnect and smult, or
alternatively with Simulink.
4 Solve the resulting H
∞
problem with one of the H
∞
synthesis functions.
A simple tracking problem is used to illustrate this design procedure. Type
radardem to run the corresponding demo.
Example 6.1. Figure 62 shows a simplified mechanical model of a radar
antenna. The stiffness K accounts for flexibilities in the coupling between the
motor and the antenna.
The corresponding nominal transfer function from the motor torque u = T to
the angular velocity y = of the antenna is θ
·
a
G
0
s ( )
30000
s 0.02 + ( ) s
2
0.99s 30030 + + ( )

·
. =
6 Loop Shaping
66
Our goal is to control through the torque T. The control structure is the
tracking loop of Figure 63 where the dynamical uncertainty ∆(s) accounts for
neglected highfrequency dynamics and flexibilities.
Figure 62: Secondorder model of a radar antenna
The design specifications are as follows:
(i)integral control action, openloop gain of at least 30 dB at 1 rad/s, and
bandwidth of at least 10 rad/s,
(ii)99% rejection of output disturbances d with spectral density in [0, 1] rad/s,
(iii)robustness against the neglected highfrequency dynamics represented by
the multiplicative model uncertainty ∆(s). A bound on ∆(jω) determined from
experimental data is given in Figure 64.
Figure 63: Velocity loop for the radar antenna
θ
·
a
θ
a
θ
m
K
J
m
J
a
antenna
motor
T
r
+
−
G
0
(s)
K
d
+ +
G(s)
∆
y θ
·
=
Design Example
67
Figure 64: Empirical bound on the uncertainty ∆(s)
We first convert these specifications into RMS gain constraints by means of
shaping filters. While the performance requirement (i) amounts to a lower
bound on the openloop gain, the disturbance rejection target (ii) imposes
S(jω) < 0.01 for ω < 1 rad/s. Both requirements can be combined into a single
constraint
(64)
where S = 1 = (1 + G
0
K) and W
1
(s) is a lowpass filter whose gain lies above the
shaded region in Figure 65. From the small gain theorem [2], robust stability
in the face of the uncertainty ∆(s) is equivalent to
(65)
10
1
10
2
10
3
10
4
−30
−20
−10
0
10
20
30
40
Frequency (rad/sec)
M
a
g
n
i
t
u
d
e
(
d
B
)
Frequency−dependent bound on Delta
W
1
s ( )S s ( )
∞
1 <
W
2
s ( )T s ( )
∞
1 <
6 Loop Shaping
68
where T = G
0
K = (1 + G
0
K) and W
2
is a high pass filter whose gain follows the
profile of Figure 64.
For tractability in the H
∞
framework, (64)–(65) are replaced by the single
RMS gain constraint
(66)
Figure 65: Magnitude profile for W
1
(s)
The transfer function
maps r to the signals , in the control loop of Figure 66. Hence our original
design problem is equivalent to the following H
∞
synthesis problem:
Find a stabilizing controller K(s) that makes the closedloop RMS
gain from r to less than one.
W
1
S
W
2
T
∞
1. <
dB
rad/s
1
10
0
40
30
integral action
disturbance rejection
bandwidth
W
1
S
W
2
T
\ .


 
e
˜
y
˜
e
˜
y
˜
\ .

 
Design Example
69
Figure 66: Control structure
To perform this H
∞
synthesis, we need to
1 Compute two shaping filters W
1
(s) and W
2
(s) whose magnitude responses
match the profiles of Figures 65 and 64.
2 Specify the control structure of Figure 66 and derive the corresponding
plant.
3 Call one of the H
∞
optimization functions to compute an adequate controller
K(s) for this plant.
The next three sections discuss these steps.
r
y
+
−
K G
0
e
W
2
W
1
u
e
e
˜
y
˜
6 Loop Shaping
610
Specification of the Shaping Filters
The graphical user interface magshape allows you to specify shaping filters
directly by their magnitude profile. Typing magshape brings up the worksheet
shown in Figure 67. To construct the two shaping filters W
1
(s) and W
2
(s)
involved in our problem:
1 Type w1,w2 after the prompt Filter names. This tells magshape to write the
filter SYSTEM matrices in MATLAB variables called w1 and w2.
2 Indicate which filter you are about to specify by clicking on the
corresponding button on the righthand side of the window.
3 Click on the add point button: you can now specify the desired magnitude
profile with the mouse. This profile is sketched with a few characteristic
points marking the asymptotes and the slope changes. Clicking on the
delete point or move point buttons allows you to delete or move particular
points.
4 Once the desired profile is sketched, specify the filter order and click the fit
data button to interpolate the data points by a rational filter. The gain of the
resulting filter appears in solid on the plot, and its realization is written in
the MATLAB variable of the same name. To make adjustments, you can
move some of the points or change the filter order, and then redo the fitting.
5 If w1 is used as a filter name, magshape checks if a SYSTEM matrix of the same
name already exists in the MATLAB environment. When this is the case, its
magnitude plot is drawn when the w1 button is clicked. This is useful to load
existing filters in the magshape window for modification or comparison.
Figure 67 is a snapshot of the magshape window after specifying the profiles of
W
1
and W
2
and fitting the data points (marked by o in the plot). The solid lines
show the rational fits obtained with an order 3 for W
1
and an order 1 for W
2
.
Specification of the Shaping Filters
611
The transfer functions of these filters (as given by ltitf(w1) and ltitf(w2))
are approximately
Note that horizontal asymptotes have been added in both filters. Such
asymptotes prevent humps in S and T near the crossover frequency, thus
reducing the overshoot in the step response.
The function magshape always returns stable and proper filters. If you do not
use magshape to generate your filters, make sure that they are stable with poles
sufficiently far from the imaginary axis. To avoid difficulties in the subsequent
H
∞
optimization, their poles should satisfy
W
1
s ( )
0.64s
3
10.45s
2
149.28s 25.87 + + +
s
3
1.20s
2
0.83s 8.31 10
4 –
× + + +
 =
W
2
s ( ) 100
3.41s 217.75 +
s 3.54 10
4
× +
 =
Re s ( ) 10
3 –
– ≤
6 Loop Shaping
612
Figure 67: The magshape interface
Nonproper Filters and sderiv
To impose a given rolloff rate in the openloop response, it is often desirable to
use nonproper shaping filters, i.e., filters with a derivative action in the
highfrequency range. Because such filters cannot be represented in SYSTEM
matrix form, magshape approximates them by highpass filters. A drawback of
this approximation is the introduction of fast parasitic modes in the filter and
augmented plant realizations, which in turn may cause numerical difficulties.
Alternatively, you can use sderiv to include nonproper shaping filters in the
loopshaping criterion. This function appends a SISO proportionalderivator
Specification of the Shaping Filters
613
component ns + d to selected inputs and/or outputs of a given LTI system. For
instance, the interconnection
is specified by
Pd = sderiv(P,[ 1 2],[0.01 1])
In the calling list, [ 1 2] lists the input and output channels to be filtered by
ns + d (here 1 for “first input” and 2 for “second output”) and the vector [0.01
1] lists the values of n and d. An error is generated if the resulting system Pd
is not proper.
To specify more complex nonproper filters,
1 Specify the proper “lowfrequency” part of the filters with magshape.
2 Augment the plant with these lowpass filters.
3 Add the derivative action of the nonproper filters by applying sderiv to the
augmented plant.
This procedure is illustrated in the design example presented in “Loop Shaping
with Regional Pole Placement” on page 619.
0.01s+1
0.01s+1
P(s)
6 Loop Shaping
614
Specification of the Control Structure
To perform the H
∞
synthesis, the control structure of Figure 66 must be put in
the standard linearfractional form of Figure 68. In this equivalent
representation, P(s) is the nominal plant determined by G
0
(s) and the control
structure, and P
aug
(s) is the augmented plant obtained by appending the
shaping filters W
1
(s) and W
2
(s). While the H
∞
synthesis is performed on P
aug
(s),
the physical closedloop transfer functions from r to e and y are fully specified
by P(s) and K(s) (recall that the shaping filters are not part of the physical
control loop).
Figure 68: Equivalent standard H
∞
problem
The function sconnect computes the SYSTEM matrices of P(s) or P
aug
(s) given
G
0
, W
1
, W
2
, and the qualitative description of the control structure (Figure 66).
With this function, general control structures are described by listing the input
and output signals and by specifying the input of each dynamical system. In
our problem, the plant P(s) corresponding to the control loop
P(s)
K(s)
W
1
W
2
e
y
r
u
y
P
aug
(s)
e
˜
y
˜
r
y
+
−
K G
0
e
e
Specification of the Control Structure
615
is specified by
g0 = ltisys('tf',3e4,conv([1.02],[1.99 3.03e4]))
P = sconnect('r(1)','e=rG0 ; G0','K:e','G0:K',g0)
The ltisys command returns a statespace realization of G
0
(s). In the
sconnect command, the input arguments are strings or SYSTEM matrices
specifying the control structure as follows:
•The first argument 'r(1)' lists and dimensions the input signals. Here the
only input is the scalar reference signal r.
•The second argument 'e=rG0 ; G0' lists the two output signals separated
by a semicolon. Outputs are defined as combinations of the input signals and
the outputs of the dynamical systems. For instance, the first output e is
specified as r minus the output y of G
0
. For systems with several outputs, the
syntax G0([1,3:4]) would select the first, third, and fourth outputs.
•The third argument 'K:e' names the controller and specifies its inputs. A
twodegreesoffreedom controller with inputs e and r would be specified by
'K: e;r'.
•The remaining arguments come in pairs and specify, for each known LTI
system in the loop, its input list and its SYSTEM matrix. Here 'G0:K' means
that the input of G
0
is the output of K, and g0 is the SYSTEM matrix of G
0
(s).
You can give arbitrary names to G
0
and K in the string definitions, provided
that you use the same names throughout.
Similarly, the augmented plant P
aug
(s) corresponding to the loop of Figure 66
would be specified by
Paug = sconnect('r(1)','W1;W2','K:e = rG0','G0:K',g0,...
'W1:e',w1,'W2:G0',w2)
where w1 and w2 are the shaping filter SYSTEM matrices returned by magshape.
However, the same result is obtained more directly by appending the shaping
filters to P(s) according to Figure 68:
Paug = smult(P,sdiag(w1,w2,1))
Finally, note that sconnect is also useful to compute the SYSTEM matrix of
ordinary system interconnections. In such cases, the third argument should be
set to [] to signal the absence of a controller.
6 Loop Shaping
616
Controller Synthesis and Validation
After forming the augmented plant P
aug
(s) with sconnect, it suffices to call
hinfric or hinflmi to test whether the specifications are feasible and compute
an adequate controller K(s). The loopshaping design is feasible if and only if
the H
∞
performance γ = 1 is achievable for P
aug
. With our choice of shaping
filters, we obtain
[gopt,k] = hinfric(Paug,[1 1],1,1);
GammaIteration:
Gamma Diagnosis
1.0000 : feasible
Best closedloop gain (GAMMA_OPT): 1.000000
** regularizing D12 or D21 to compute K(s):
..
Hence the specifications are met by the controller k returned by hinfric. The
last two lines indicate that the H
∞
problem associated with P
aug
is singular and
has been regularized.
To validate the controller k, you can form the openloop response GK by typing
gk = slft(k,g)
or the closedloop transfer function from r to e and y by typing
Pcl = slft(P,k)
The Bode diagram of GK and the nominal step response are then plotted by
splot(gk,'bo')
splot(ssub(Pcl,1,2),'st')
Controller Synthesis and Validation
617
In the second command, ssub(Pcl,1,2) selects the transfer function from r to
y as a subsystem of Pcl. The step response with the controller computed above
appears in Figure 69.
Figure 69: Step response of the closedloop system
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
0
0.2
0.4
0.6
0.8
1
1.2
Time (secs)
A
m
p
l
i
t
u
d
e
6 Loop Shaping
618
Practical Considerations
When the model G
0
(s) of the system has poles on the imaginary axis, loop
shaping design often leads to “singular” H
∞
problems with jωaxis zeros in the
(1,2) or (2,1) blocks of P
aug
. While immaterial in the LMI approach, jωaxis
zeros bar the direct use of standard Riccatibased synthesis algorithms [1]. The
function hinfric automatically regularizes such problems by perturbing the A
matrix of P
aug
to A + εI for some positive ε. The message
** jwaxis zeros of P12(s) regularized
or
** jwaxis zeros of P21(s) regularized
is then displayed to inform users of the presence of a singularity.
In doing so, however, hinfric shifts the shaping filter poles as part of the poles
of P
aug
. Since the shaping filters must remain stable for closedloop
stabilizability, the amount of regularization ε is thereby limited. When some
shaping filters have modes close to the jωaxis, this is likely to result in poor
closedloop damping because the central controller computed by hinfric tends,
in such problems, to place closedloop poles within ε of the jωaxis.
If this difficulty arises, a simple remedy consists of perturbing the poles of the
plant P(s) prior to appending the shaping filters. Provided that all poles of G
0
(s)
are controllable and observable, this typically tolerates much larger ε, which in
turn results in better closedloop damping. In our example, this would amount
to forming P
aug
as follows:
P = sconnect('r(1)','e=rG0 ; G0','K:e','G0:K',g0)
% preregularization
[a,b,c,d] = ltiss(P)
a = a + reg * eye(size(a))
Preg = ltisys(a,b,c,d)
Paug = smult(Preg, sdiag(w1,w2,1))
Note that the regularization level reg should always be chosen positive.
Finally, a careful validation of the resulting controller on the true plant P is
necessary since the H
∞
synthesis is performed on a perturbed system.
Loop Shaping with Regional Pole Placement
619
Loop Shaping with Regional Pole Placement
A weakness of the design outlined above is the plant inversion performed by
the controller K(s). Specifically, the resonant poles of G(s) are cancelled by
zeros of K(s) as seen when comparing the magnitude plots of G and K (see
Figure 610). Since such cancellations leave resonant modes in the closedloop
system, oscillatory responses may result when these modes are excited, e.g., by
a disturbance at the plant input (see Figure 611).
There are several ways of circumventing this difficulty. One possible approach
is to impose some minimum closedloop damping via an additional regional
pole placement constraint. In addition to avoiding cancellations, this is helpful
to tune the transient responses and prevent fast controller dynamics.
Loopshaping design with pole placement is addressed by the function hinfmix.
As mentioned in the previous chapter, using hinfmix requires extra care
because of possible interferences between the root clustering objective and the
shaping filter poles. Indeed, recall that the “closedloop” dynamics of the
loopshaping criterion always comprise the shaping filter modes.
Figure 610: Polezero cancellation between G and K
10
−1
10
0
10
1
10
2
10
3
−100
−50
0
50
Frequency (rad/sec)
S
i
n
g
u
l
a
r
V
a
l
u
e
s
d
B
G(s)
K(s)
6 Loop Shaping
620
Figure 611: Response y for an impulse disturbance at the plant input
For “Example 6.1” on page 65, we chose as pole placement region the disk
centered at (–1000, 0) and with radius 1000. By inspection of the shaping filters
used in the previous design, this region clashes with the pseudointegrator in
W
1
(s) and the fast pole at s = –3.54 × 10
4
in W
2
(s). To alleviate this difficulty,
1 Remove the pseudointegrator from W
1
(s) and move it to the main loop as
shown in Figure 612. This defines an equivalent loopshaping problem
where the controller is reparametrized as K(s) = (s)/s and the integrator is
now assignable via output feedback.
0 1 2 3 4 5 6 7 8 9 10
−1.5
−1
−0.5
0
0.5
1
1.5
Time (secs)
A
m
p
l
i
t
u
d
e
K
˜
Loop Shaping with Regional Pole Placement
621
2 Specify W
2
(s) as a nonproper filter with derivative action. This is done with
sderiv.
Figure 612: Modified control structure
A satisfactory design was obtained for the following values of W
1
and W
2
:
A summary of the commands needed to specify the control structure and solve
the constrained H
∞
problem is listed below. Run the demo radardem to perform
this design interactively.
% control structure, sint = integrator
sint = ltisys('tf',1,[1 0])
[P,r] = sconnect('r','Int;G','Kt:Int','G:Kt',G0,'Int:e=rG',
sint)
% add w1
w1=ltisys('tf',[1 16 200],[1 1.2 0.8])
Paug = smult(P,sdiag(w1,1,1))
% add w2
Paug = sderiv(Paug,2,[1/200 0.9])
% interactively specify the region for pole placement
region = lmireg
% perform the Hinfinity synthesis with pole placement
r = [0 1 1]
r
y +
−
G
0
e
e
1/s
W
1
W
2
e
˜
y
˜
K
˜
W
1
s ( )
s
2
16s 200 + +
s
2
1.2s 0.8 + +
, W
2
s ( ) 0.9
s
200
  + = =
6 Loop Shaping
622
[gopt,xx,Kt] = hinfmix(Paug,r,[0 0 1 0],region)
The full controller is then given by
K = smult(sint,Kt)
The resulting openloop response and the time response to a step r and an
impulse disturbance at the plant input appear in Figure 613 and 614. Note
that the new controller no longer inverts the plant and that input disturbances
are now rejected with satisfactory transient behavior.
Figure 613: Openloop response GK(s)
10
−1
10
0
10
1
10
2
10
3
−50
0
50
100
Frequency (rad/sec)
G
a
i
n
d
B
10
−1
10
0
10
1
10
2
10
3
−360
−720
0
Frequency (rad/sec)
P
h
a
s
e
d
e
g
Loop Shaping with Regional Pole Placement
623
Figure 614: Transient behaviors with the new controller
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
−0.5
0
0.5
1
1.5
Time (secs)
A
m
p
l
i
t
u
d
e
Response to a step r
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
−0.5
0
0.5
1
1.5
2
Time (secs)
A
m
p
l
i
t
u
d
e
Response to an impulse disturbance on u
6 Loop Shaping
624
References
[1] Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “StateSpace
Solutions to Standard H
2
and H
∞
Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
[2] Zames, G., “On the InputOutput Stability of TimeVarying Nonlinear
Feedback Systems, Part I and II,” IEEE Trans. Aut. Contr., AC–11 (1966), pp.
228–238 and 465–476.
7
Robust GainScheduled
Controllers
GainScheduled Control . . . . . . . . . . . . . . . 73
Synthesis of GainScheduled H• Controllers . . . . . 77
Simulation of GainScheduled Control Systems . . . . 79
Design Example . . . . . . . . . . . . . . . . . . 710
References . . . . . . . . . . . . . . . . . . . . . .
7 Robust GainScheduled Controllers
72
Gain scheduling is a widely used technique for controlling certain classes of
nonlinear or linear timevarying systems. Rather than seeking a single robust
LTI controller for the entire operating range, gain scheduling consists in
designing an LTI controller for each operating point and in switching controller
when the operating conditions change.
This section presents systematic tools to design gainscheduled H
∞
controllers
for linear parameterdependent systems. These tools are applicable to
timevarying and/or nonlinear systems whose linearized dynamics are
reasonably well approximated by affine parameterdependent models.
GainScheduled Control
73
GainScheduled Control
The synthesis technique discussed below is applicable to affine
parameterdependent plants with equations
where
p(t) = (p
1
(t), . . ., p
n
(t)),
is a timevarying vector of physical parameters (velocity, angle of attack,
stiffness,. . .) and A(
.
), B
1
(
.
), C
1
(
.
), D
11
(
.
) are affine functions of p(t). This is a
simple model of systems whose dynamical equations depend on physical
coefficients that vary during operation. When these coefficients undergo large
variations, it is often impossible to achieve high performance over the entire
operating range with a single robust LTI controller. Provided that the
parameter values are measured in real time, it is then desirable to use
controllers that incorporate such measurements to adjust to the current
operating conditions [4]. Such controllers are said to be scheduled by the
parameter measurements. This control strategy typically achieves higher
performance in the face of large variations in operating conditions. Note that
p(t) may include part of the state vector x itself provided that the corresponding
states are accessible to measurement.
If the parameter vector p(t) takes values in a box of R
n
with corners
, the plant SYSTEM matrix
P .,p ( )
¹
¦
´
¦
¦
x
·
= A p ( )x B
1
p ( )w B
2
u + +
z = C
1
p ( )x D
11
p ( )w D
12
u + +
y = C
2
x D
21
w D
22
u + +
pi pi t ( ) pi ≤ ≤
Π
i
{ }
i 1 =
N
N 2
n
= ( )
S p ( ) :=
A p ( ) B
1
p ( ) B
2
C
1
p ( ) D
11
p ( ) D
12
C
2
D
21
D
22 \ .




 
7 Robust GainScheduled Controllers
74
ranges in a matrix polytope with vertices S(Π
i
). Specifically, given any convex
decomposition
of p over the corners of the parameter box, the SYSTEM matrix S(p) is given by
S(p) = α
1
S(Π
1
) + . . . + α
N
S(Π
N
)
This suggests seeking parameterdependent controllers with equations
and having the following vertex property:
Given the convex decomposition of the current
parameter value p(t), the values of A
K
(p), B
K
(p), . . . are derived from the
values A
K
(Π
i
), B
K
(Π
i
), . . . at the corners of the parameter box by
In other words, the controller statespace matrices at the operating point p(t)
are obtained by convex interpolation of the LTI vertex controllers
This yields a smooth scheduling of the controller matrices by the parameter
measurements p(t).
p t ( ) α
1
Π
1
… α
N
Π
N
, α
i
0, α
i
i 1 =
N
∑
1 = ≥ + + =
K .,p ( )
ζ A
K
= p ( )ζ B
K
+ p ( )y
u C
K
= p ( )ζ D
K
+ p ( )y
¹
´
¦
p t ( ) α
i
Π
i
i 1 =
N
∑
=
A
K
p ( ) B
K
p ( )
C
K
p ( ) D
K
p ( )
\ .


 
α
i
i 1 =
N
∑
A
K
Π
i
( ) B
K
Π
i
( )
C
K
Π
i
( ) D
K
Π
i
( )
\ .


 
=
K
i
:=
A
K
Π
i
( ) B
K
Π
i
( )
C
K
Π
i
( ) D
K
Π
i
( )
\ .


 
GainScheduled Control
75
For this class of controllers, consider the following H
∞
like synthesis problem
relative to the interconnection of Figure 71:
Figure 71: Gainscheduled H
∞
problem
Design a gainscheduled controller K(., p) satisfying the vertex property and
such that
•The closedloop system is stable for all admissible parameter trajectories p(t)
•The worstcase closedloop RMS gain from w to z does not exceed some level
γ > 0.
Using the notion of quadratic H
∞
performance to enforce the RMS gain
constraint, and observing that the closedloop system is a polytopic system, this
synthesis problem can be reduced to the following LMI problem [3, 2]
Find two symmetric matrices R and S such that
(71)
u y
P(. ,p)
K(., p)
w
z
p(t)
N
12
0
0 I
\ .

 
T
A
i
R RA
i
T
+ RC
1i
T
B
1i
C
1i
R γI – D
11i
B
1
T
D
11i
T
γI –
\ .





 
N
12
0
0 I
\ .

 
0, i 1 … N , , = <
7 Robust GainScheduled Controllers
76
(72)
(73)
where
and N
12
and N
21
are bases of the null spaces of ( ) and (C
2
, D
21
),
respectively.
Remark: Requiring the plant matrices B
2
, C
2
, D
12
, D
21
to be independent of
p(t) may be restrictive in practice. When B
2
and D
12
depend on p(t), a simple
trick to enforce these requirements consists of filtering u through a lowpass
filter with sufficiently large bandwidth. Appending this filter to the plant
rejects the parameter dependence of B
2
and D
12
into the A matrix of the
augmented plant. Similarly, parameter dependences in C
2
and D
21
can be
eliminated by filtering the output (see [1] for details).
N
21
0
0 I
\ .

 
T
A
i
T
S SA + SB
1i
C
1i
T
B
1i
T
S γI – D
11i
T
C
1i
D
11i
γI –
\ .





 
N
21
0
0 I
\ .

 
0, i 1 … N , , = <
R I
I S \ .

 
0 ≥
A
i
B
1i
C
1i
D
11i \ .


 
:=
A Π
i
( ) B
1
Π
i
( )
C
1
Π
i
( ) D
11
Π
i
( )
\ .


 
B
2
T
D
12
T
, ( )
Synthesis of GainScheduled H• Controllers
77
Synthesis of GainScheduled H
∞
Controllers
The synthesis of gainscheduled H
∞
controllers is performed by the function
hinfgs. Since such controllers are characterized by their vertex values, both
the plant and the controller are treated as polytopic models by hinfgs. The
calling sequence is
[gopt,pdK] = hinfgs(pdP,r)
Here pdP is an affine or polytopic model of the plant (see psys) and the two
entry vector r specifies the dimensions of the D
22
matrix (set r=[p2 m2] if
y ∈ R
p
2
and u ∈ R
m
2
).
On output, gopt is the best achievable RMS gain and pdK is the polytopic
system consisting of the vertex controllers.
Given any value p of the parameter vector p(t), the corresponding statespace
parameters
of the gainscheduled controller are returned in SYSTEM matrix format by
c = polydec(pv,p)
Kp = psinfo(pdK,'eval',c)
The function polydec computes the convex decomposition
and returns c = (α
1
, . . ., α
N
). The controller statespace data
at time t is then computed by psinfo. Note that polydec
ensures the continuity of the polytopic coordinates α
i
as a function of p, thereby
guaranteeing the continuity of the controller statespace matrices along
parameter trajectories.
K
i
=
A
K
Π
i
( ) B
K
Π
i
( )
C
K
Π
i
( ) D
K
Π
i
( )
\ .


 
K
i
=
A
K
Π
i
( ) B
K
Π
i
( )
C
K
Π
i
( ) D
K
Π
i
( )
\ .


 
K p ( ) =
A
K
p ( ) B
K
p ( )
C
K
p ( ) D
K
p ( )
\ .


 
p t ( ) α
i
Π
i
i 1 =
N
∑
=
K p ( ) α
i
K
i
i 1 =
N
∑
=
7 Robust GainScheduled Controllers
78
Given the polytopic controller representation pdK, a polytopic model of the
closedloop system is formed by
pcl = slft(pdP,pdK)
and evaluated for a given parameter value p of p(t) by
pclt = psinfo(pcl,'eval',polydec(pv,p))
Simulation of GainScheduled Control Systems
79
Simulation of GainScheduled Control Systems
Being timevarying systems, gainscheduled control systems cannot be
simulated with splot. The function pdsimul is provided to simulate the time
response of such systems along a given parameter trajectory.
With pdsimul, you must first define the parameter trajectory and the input
signal as two functions of time with syntax p=trajfun(t) and u=inputfun(t).
You can then plot the corresponding time response by
pdsimul(pcl,'trajfun',tf,'inputfun')
where pcl is the polytopic representation of the closedloop system and tf is
the desired duration of the simulation. If 'inputfun' is omitted, the step
response is plotted by default. See the next paragraph for an example.
7 Robust GainScheduled Controllers
710
Design Example
This example is drawn from [2] and deals with the design of a gainscheduled
autopilot for the pitch axis of a missile. Type misldem to run the corresponding
demo.
The missile dynamics are highly dependent on the angle of attack α, the speed
V, and the altitude H. These three parameters undergo large variations during
operation. A simple model of the linearized dynamics of the missile is the
parameterdependent model
(74)
where a
zv
is the normalized vertical acceleration, q the pitch rate, δm the fin
deflection, and Z
α
, M
α
are aerodynamical coefficients depending on α, V, and
H. These two coefficients are “measured” in real time.
As V, H, and α vary in
V ∈ [0.5, 4] Mach, H ∈ [0, 18000] m, α ∈ [0, 40] degrees
during operation, the coefficients Z
α
and M
α
range in
Z
α
∈ [0.5, 4], M
α
∈ [0, 106].
Our goal is to control the vertical acceleration a
zv
over this operating range.
The stringent performance specifications (settling time < 0.5 s) and the
bandwidth limitation imposed by unmodeled high frequency dynamics make
gain scheduling desirable in this context.
The control structure is sketched in Figure 72. To enforce the performance and
robustness requirements, we can use the loopshaping criterion
(75)
G
α
·
q
·
\ .

 
Zα – 1
Mα – 0 \ .

 
α
q \ .

 
0
1 \ .

 
δ
m
+ =
a
zv
q
\ .

 
1 – 0
0 1 \ .

 
α
q \ .

 
=
¹
¦
¦
¦
´
¦
¦
¦
¦
W
1
S
W
2
KS
∞
1 <
Design Example
711
Note that S = (I + GK)
–1
is now a timevarying system and that the H
∞
norm
must be understood in terms of input/output RMS gain. Adequate shaping
filters are derived with magshape as
Figure 72: Missile autopilot
The gainscheduled controller synthesis parallels the standard loopshaping
procedure described in Chapter 6. First enter the parameterdependent model
(74) of the missile pitch axis by:
% parameter range:
Zmin=.5; Zmax=4; Mmin=0; Mmax=106;
pv = pvec('box',[Zmin Zmax ; Mmin Mmax])
% affine model:
s0 = ltisys([0 1;0 0],[0;1],[1 0;0 1],[0;0])
s1 = ltisys([1 0;0 0],[0;0],zeros(2),[0;0],0) % Z_al
s2 = ltisys([0 0;1 0],[0;0],zeros(2),[0;0],0) % M_al
pdG = psys(pv,[s0 s1 s2])
For loopshaping purposes, we must form the augmented plant associated with
the criterion (75). This is done with sconnect and smult as follows:
W
1
s ( )
2.01
s 0.201 +
 =
W
2
s ( )
9.678s
3
0.029s
2
+
s
3
1.206 10
4
s
2
1.136 10
7
s 1.066 10
10
× + × + × +
 =
r
a
zv
+
−
K G
autopilot
missile
e
u
q
7 Robust GainScheduled Controllers
712
% form the plant interconnection
[pdP,r] = sconnect('r','e=rGP1;K','K:e;G(2)','G:K',pdG);
% append the shaping filters
Paug = smult(pdP,sdiag(w1,w2,eye(2)))
Note that pdP and Paug are polytopic models at this stage.
We are now ready to perform the gainscheduled controller synthesis with
hinfgs:
[gopt,pdK] = hinfgs(Paug,r)
gopt =
0.205
Our specifications are achievable since the best performance γ
opt
= 0.205 is
smaller than 1, and the polytopic description of a controller with such
performance is returned in pdK.
To validate the design, form the closedloop system with
pCL = slft(pdP,pdK)
You can now simulate the step response of the gainscheduled system along
particular parameter trajectories. For the sake of illustration, consider the
spiral trajectory
Z
α
(t) = 2.25 + 1.70 e
–4t
cos(100 t)
M
α
(t) = 50 + 49 e
–4t
sin(100 t)
shown in Figure 73. After defining this trajectory is a function spiralt.m:
function p = spiral(t)
p = [2.25 + 1.70*exp(4*t).*cos(100*t) ; ...
50 + 49*exp(4*t).*sin(100*t)];
you can plot the closedloop step response along p(t) by
[t,x,y]=pdsimul(pCL,'spiralt',0.5)
plot(t,1y(:,1))
Design Example
713
The function pdsimul returns the output trajectories y(t) = (e(t), u(t))
T
and the
response a
zv
(t) is deduced by
a
zv
(t) = 1 – e(t)
The second command plots a
zv
(t) versus time as shown in Figure 74.
Figure 73: Parameter trajectory
0.5 1 1.5 2 2.5 3 3.5 4
0
10
20
30
40
50
60
70
80
90
100
Z_alpha
M
_
a
l
p
h
a
Parameter trajectory (duration : 1 s)
7 Robust GainScheduled Controllers
714
Figure 74: Corresponding step response
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
0
0.2
0.4
0.6
0.8
1
1.2
a
z
v
step response of the gain−scheduled autopilot
References
715
References
[1] Apkarian, P., and P. Gahinet,“A Convex Characterization of
GainScheduled H
∞
Controllers,” to appear in IEEE Trans. Aut. Contr., 1995.
[2] Apkarian, P., P. Gahinet, and G. Becker, “SelfScheduled H
∞
Control of
Linear ParameterVarying Systems,” to appear in Automatica, 1995.
[3] Becker, G., Packard, P., “Robust Performance of Linear Parametrically
Varying Systems Using Parametrically Dependent Linear Feedback,” Systems
and Control Letters, 23 (1994), pp. 205–215.
[4] Packard, A., “Gain Scheduling via Linear Fractional Transformations,”
Syst. Contr. Letters, 22 (1994), pp. 79–92.
7 Robust GainScheduled Controllers
716
8
The LMI Lab
Background and Terminology . . . . . . . . . . . . 83
Overview of the LMI Lab . . . . . . . . . . . . . . 86
Specifying a System of LMIs . . . . . . . . . . . . . 88
Retrieving Information . . . . . . . . . . . . . . 821
LMI Solvers . . . . . . . . . . . . . . . . . . . 822
From Decision to Matrix Variables and Vice Versa . . 828
Validating Results . . . . . . . . . . . . . . . . 829
Modifying a System of LMIs . . . . . . . . . . . . 830
Advanced Topics . . . . . . . . . . . . . . . . . 833
References . . . . . . . . . . . . . . . . . . . . 844
8 The LMI Lab
82
The LMI Lab is a highperformance package for solving general LMI problems.
It blends simple tools for the specification and manipulation of LMIs with
powerful LMI solvers for three generic LMI problems. Thanks to a
structureoriented representation of LMIs, the various LMI constraints can be
described in their natural blockmatrix form. Similarly, the optimization
variables are specified directly as matrix variables with some given structure.
Once an LMI problem is specified, it can be solved numerically by calling the
appropriate LMI solver. The three solvers feasp, mincx, and gevp constitute
the computational engine of the LMI Control Toolbox. Their high performance
is achieved through CMEX implementation and by taking advantage of the
particular structure of each LMI.
The LMI Lab offers tools to
•Specify LMI systems either symbolically with the LMI Editor or
incrementally with the lmivar and lmiterm commands
•Retrieve information about existing systems of LMIs
•Modify existing systems of LMIs
•Solve the three generic LMI problems (feasibility problem, linear objective
minimization, and generalized eigenvalue minimization)
•Validate results
This chapter gives a tutorial introduction to the LMI Lab as well as more
advanced tips for making the most out of its potential. The tutorial material is
also covered by the demo lmidem.
Background and Terminology
83
Background and Terminology
Any linear matrix inequality can be expressed in the canonical form
L(x) = L
0
+ x
1
L
1
+ . . . + x
N
L
N
< 0
where
•L
0
, L
1
, . . . , L
N
are given symmetric matrices
•x = (x
1
, . . . , x
N
)
T
∈ R
N
is the vector of scalar variables to be determined. We
refer to x
1
, . . . , x
N
as the decision variables. The names “design variables”
and “optimization variables” are also found in the literature.
Even though this canonical expression is generic, LMIs rarely arise in this form
in control applications. Consider for instance the Lyapunov inequality
(81)
where and the variable is a symmetric matrix.
Here the decision variables are the free entries x
1
, x
2
, x
3
of X and the canonical
form of this LMI reads
(82)
Clearly this expression is less intuitive and transparent than (81). Moreover,
the number of matrices involved in (82) grows roughly as n
2
/2 if n is the size
of the A matrix. Hence, the canonical form is very inefficient from a storage
viewpoint since it requires storing o(n
2
/2) matrices of size n when the single
nbyn matrix A would be sufficient. Finally, working with the canonical form
is also detrimental to the efficiency of the LMI solvers. For these various
reasons, the LMI Lab uses a structured representation of LMIs. For instance,
the expression A
T
X + XA in the Lyapunov inequality (81) is explicitly
described as a function of the matrix variable X, and only the A matrix is
stored.
A
T
X XA 0 < +
A
1 – 2
0 2 – \ .

 
= X
x
1
x
2
x
2
x
3 \ .


 
=
x
1
2 – 2
2 0 \ .

 
x
2
0 3 –
3 – 4 \ .

 
x
3
0 0
0 4 – \ .

 
0 < + +
8 The LMI Lab
84
In general, LMIs assume a block matrix form where each block is an affine
combination of the matrix variables. As a fairly typical illustration, consider
the following LMI drawn from H
∞
theory
(83)
where A, B, C, D, N are given matrices and the problem variables are
X = X
T
∈ R
n×n
and γ ∈ R. We use the following terminology to describe such
LMIs:
•N is called the outer factor, and the block matrix
is called the inner factor. The outer factor needs not be square and is often
absent.
•X and γ are the matrix variables of the problem. Note that scalars are
considered as 1by1 matrices.
•The inner factor L(X, γ) is a symmetric block matrix, its block structure being
characterized by the sizes of its diagonal blocks. By symmetry, L(X, γ) is
entirely specified by the blocks on or above the diagonal.
•Each block of L(X, γ) is an affine expression in the matrix variables X and γ.
This expression can be broken down into a sum of elementaryterm terms. For
instance, the block (1,1) contains two elementary terms: A
T
X and XA.
•Terms are either constant or variable. Constant terms are fixed matrices like
B and D above. Variable terms involve one of the matrix variables, like XA,
XC
T
, and –γI above.
The LMI (83) is specified by the list of terms in each block, as is any LMI
regardless of its complexity.
N
T
A
T
X XA + XC
T
B
CX γI – D
B
T
D
T
γI –
\ .




 
N 0 <
L X γ , ( )
A
T
X XA + XC
T
B
CX γI – D
B
T
D
T
γI –
\ .




 
=
Background and Terminology
85
As for the matrix variables X and γ, they are characterized by their dimensions
and structure. Common structures include rectangular unstructured,
symmetric, skewsymmetric, and scalar. More sophisticated structures are
sometimes encountered in control problems. For instance, the matrix variable
X could be constrained to the blockdiagonal structure
Another possibility is the symmetric Toeplitz structure
Summing up, structured LMI problems are specified by declaring the matrix
variables and describing the term content of each LMI. This termoriented
description is systematic and accurately reflects the specific structure of the
LMI constraints. There is no builtin limitation on the number of LMIs that can
be specified or on the number of blocks and terms in any given LMI. LMI
systems of arbitrary complexity can therefore, be defined in the LMI Lab.
X
x
1
0 0
0 x
2
x
3
0 x
3
x
4 \ .




 
=
X
x
1
x
2
x
3
x
2
x
1
x
2
x
3
x
2
x
1 \ .




 
=
8 The LMI Lab
86
Overview of the LMI Lab
The LMI Lab offers tools to specify, manipulate, and numerically solve LMIs.
Its main purpose is to
•Allow for straightforward description of LMIs in their natural blockmatrix
form
•Provide easy access to the LMI solvers (optimization codes)
•Facilitate result validation and problem modification
The structureoriented description of a given LMI system is stored as a single
vector called the internal representation and generically denoted by LMISYS in
the sequel. This vector encodes the structure and dimensions of the LMIs and
matrix variables, a description of all LMI terms, and the related numerical
data. It must be stressed that users need not attempt to read or understand the
content of LMISYS since all manipulations involving this internal
representation can be performed in a transparent manner with LMILab tools.
The LMI Lab supports the following functionalities:
Specification of a System of LMIs
LMI systems can be either specified as symbolic matrix expressions with the
interactive graphical user interface lmiedit, or assembled incrementally with
the two commands lmivar and lmiterm. The first option is more intuitive and
transparent while the second option is more powerful and flexible.
Information Retrieval
The interactive function lmiinfo answers qualitative queries about LMI
systems created with lmiedit or lmivar and lmiterm. You can also use
lmiedit to visualize the LMI system produced by a particular sequence of
lmivar/lmiterm commands.
Solvers for LMI Optimization Problems
Generalpurpose LMI solvers are provided for the three generic LMI problems
defined on page 15. These solvers can handle very general LMI systems and
matrix variable structures. They return a feasible or optimal vector of decision
variables x*. The corresponding values of the matrix variables are
given by the function dec2mat.
X
1
*
… X
K
*
, ,
Overview of the LMI Lab
87
Result Validation
The solution x* produced by the LMI solvers is easily validated with the
functions evallmi and showlmi. This allows a fast check and/or analysis of the
results. With evallmi, all variable terms in the LMI system are evaluated for
the value x* of the decision variables. The left and righthand sides of each
LMI then become constant matrices that can be displayed with showlmi.
Modification of a System of LMIs
An existing system of LMIs can be modified in two ways:
•An LMI can be removed from the system with dellmi.
•A matrix variable X can be deleted using delmvar. It can also be instantiated,
that is, set to some given matrix value. This operation is performed by
setmvar and allows, for example, to fix some variables and solve the LMI
problem with respect to the remaining ones.
8 The LMI Lab
88
Specifying a System of LMIs
The LMI Lab can handle any system of LMIs of the form
N
T
L(X
1
, . . . , X
K
) N < MT R(X
1
, . . . , X
K
) M
where
•X
1
, . . . , X
K
are matrix variables with some prescribed structure
•The left and right outer factors N and M are given matrices with identical
dimensions
•The left and right inner factors L(
.
) and R(
.
) are symmetric block matrices
with identical block structures, each block being an affine combination of X
1
,
. . . , X
K
and their transposes.
Important: Throughout this chapter, “lefthand side” refers to what is on the
“smaller” side of the inequality, and “righthand side” to what is on the
“larger” side. Accordingly, X is called the righthand side and 0 the lefthand
side of the LMI 0 < X even when this LMI is written as X > 0.
The specification of an LMI system involves two steps:
1 Declare the dimensions and structure of each matrix variable X
1
, . . . , X
K
.
2 Describe the term content of each LMI.
This process creates the socalled internal representation of the LMI system.
This computer description of the problem is used by the LMI solvers and in all
subsequent manipulations of the LMI system. It is stored as a single vector
which we consistently denote by LMISYS in the sequel (for the sake of clarity).
There are two ways of generating the internal description of a given LMI
system: (1) by a sequence of lmivar/lmiterm commands that build it
incrementally, or (2) via the LMI Editor lmiedit where LMIs can be specified
directly as symbolic matrix expressions. Though somewhat less flexible and
powerful than the commandbased description, the LMI Editor is more
straightforward to use, hence particularly wellsuited for beginners. Thanks to
its coding and decoding capabilities, it also constitutes a good tutorial
introduction to lmivar and lmiterm. Accordingly, beginners may elect to skip
Specifying a System of LMIs
89
the subsections on lmivar and lmiterm and to concentrate on the GUIbased
specification of LMIs with lmiedit.
A Simple Example
The following tutorial example is used to illustrate the specification of LMI
systems with the LMI Lab tools. Run the demo lmidem to see a complete
treatment of this example.
Example 8.1. Consider a stable transfer function
(84)
with four inputs, four outputs, and six states, and consider the set of
input/output scaling matrices D with blockdiagonal structure
(85)
The following problem arises in the robust stability analysis of systems with
timevarying uncertainty [4 ]:
Find, if any, a scaling D of structure (85) such that the largest gain across
frequency of D G(s) D
–1
is less than one.
G s ( ) C sI A – ( )
1 –
B =
D
d
1
0 0 0
0 d
1
0 0
0 0 d
2
d
3
0 0 d
4
d
5
\ .






 
=
8 The LMI Lab
810
This problem has a simple LMI formulation: there exists an adequate scaling
D if the following feasibility problem has solutions:
Find two symmetric matrices X ∈ R
6×6
and S = D
T
D ∈ R
4×4
such that
(86)
(87)
(88)
The LMI system (86)–(88) can be described with the LMI Editor as outlined
below. Alternatively, its internal description can be generated with lmivar and
lmiterm commands as follows:
setlmis([])
X=lmivar(1,[6 1])
S=lmivar(1,[2 0;2 1])
% 1st LMI
lmiterm([1 1 1 X],1,A,'s')
lmiterm([1 1 1 S],C',C)
lmiterm([1 1 2 X],1,B)
lmiterm([1 2 2 S], 1,1)
% 2nd LMI
lmiterm([ 2 1 1 X],1,1)
% 3rd LMI
lmiterm([ 3 1 1 S],1,1)
lmiterm([3 1 1 0],1)
LMISYS = getlmis
Here the lmivar commands define the two matrix variables X and S while the
lmiterm commands describe the various terms in each LMI. Upon completion,
A
T
X XA C
T
SC + + XB
B
T
X S – \ .



 
< 0
X 0 >
S 1 >
Specifying a System of LMIs
811
getlmis returns the internal representation LMISYS of this LMI system. The
next three subsections give more details on the syntax and usage of these
various commands. More information on how the internal representation is
updated by lmivar/lmiterm can also be found in “How It All Works” on
page 818.
setlmis and getlmis
The description of an LMI system should begin with setlmis and end with
getlmis. The function setlmis initializes the LMI system description. When
specifying a new system, type
setlmis([])
To add on to an existing LMI system with internal representation LMIS0, type
setlmis(LMIS0)
When the LMI system is completely specified, type
LMISYS = getlmis
This returns the internal representation LMISYS of this LMI system. This
MATLAB description of the problem can be forwarded to other LMILab
functions for subsequent processing. The command getlmis must be used only
once and after declaring all matrix variables and LMI terms.
lmivar
The matrix variables are declared one at a time with lmivar and are
characterized by their structure. To facilitate the specification of this structure,
the LMI Lab offers two predefined structure types along with the means to
describe more general structures:
Type 1: Symmetric block diagonal structure. This corresponds to matrix
variables of the form
X
D
1
0 … 0
0 D
2
0
0 … 0 D
r
\ .





 
=
…
…
.
.
.
.
.
.
.
.
.
8 The LMI Lab
812
where each diagonal block D
j
is square and is either zero, a full symmetric
matrix, or a scalar matrix
D
j
= d × I, d ∈ R
This type encompasses ordinary symmetric matrices (single block) and scalar
variables (one block of size one).
Type 2: Rectangular structure. This corresponds to arbitrary rectangular
matrices without any particular structure.
Type 3: General structures. This third type is used to describe more
sophisticated structures and/or correlations between the matrix
variables. The principle is as follows: each entry of X is specified
independently as either 0, x
n
, or –x
n
where x
n
denotes the nth
decision variable in the problem. For details on how to use Type 3,
see “Structured Matrix Variables” on page 833 below as well as the
lmivar entry of the “Command Reference” chapter.
In “Example 8.1” on page 89, the matrix variables X and S are of Type 1.
Indeed, both are symmetric and S inherits the blockdiagonal structure (85) of
D. Specifically, S is of the form
After initializing the description with the command setlmis([]), these two
matrix variables are declared by
lmivar(1,[6 1]) % X
lmivar(1,[2 0;2 1]) % S
In both commands, the first input specifies the structure type and the second
input contains additional information about the structure of the variable:
•For a matrix variable X of Type 1, this second input is a matrix with two
columns and as many rows as diagonal blocks in X. The first column lists the
S
s
1
0 0 0
0 s
1
0 0
0 0 s
2
s
3
0 0 s
3
s
4
\ .






 
=
Specifying a System of LMIs
813
sizes of the diagonal blocks and the second column specifies their nature with
the following convention:
1 → full symmetric block
0 → scalar block
–1 → zero block
In the second command, for instance, [2 0;2 1] means that S has two
diagonal blocks, the first one being a 2by2 scalar block and the second one
a 2−βψ−2 full block.
•For matrix variables of Type 2, the second input of lmivar is a twoentry
vector listing the row and column dimensions of the variable. For instance, a
3by5 rectangular matrix variable would be defined by
lmivar(2,[3 5])
For convenience, lmivar also returns a “tag” that identifies the matrix variable
for subsequent reference. For instance, X and S in “Example 8.1” could be
defined by
X = lmivar(1,[6 1])
S = lmivar(1,[2 0;2 1])
The identifiers X and S are integers corresponding to the ranking of X and S in
the list of matrix variables (in the order of declaration). Here their values
would be X=1 and S=2. Note that these identifiers still point to X and S after
deletion or instantiation of some of the matrix variables. Finally, lmivar can
also return the total number of decision variables allocated so far as well as the
entrywise dependence of the matrix variable on these decision variables (see
the lmivar entry of the “Command Reference” chapter for more details).
lmiterm
After declaring the matrix variables with lmivar, we are left with specifying
the term content of each LMI. Recall that LMI terms fall into three categories:
•The constant terms, i.e., fixed matrices like I in the lefthand side of the LMI
S > I
•The variable terms, i.e., terms involving a matrix variable. For instance, A
T
X
and C
T
SC in (86). Variable terms are of the form PXQ where X is a variable
and P, Q are given matrices called the left and right coefficients, respectively.
8 The LMI Lab
814
•The outer factors
The following rule should be kept in mind when describing the term content of
an LMI:
Important: Specify only the terms in the blocks on or above the diagonal. The
inner factors being symmetric, this is sufficient to specify the entire LMI.
Specifying all blocks results in the duplication of offdiagonal terms, hence in
the creation of a different LMI. Alternatively, you can describe the blocks on or
below the diagonal.
LMI terms are specified one at a time with lmiterm. For instance, the LMI
is described by
lmiterm([1 1 1 1],1,A,'s')
lmiterm([1 1 1 2],C',C)
lmiterm([1 1 2 1],1,B)
lmiterm([1 2 2 2], 1,1)
These commands successively declare the terms A
T
X + XA, C
T
SC, XB, and –S.
In each command, the first argument is a fourentry vector listing the term
characteristics as follows:
•The first entry indicates to which LMI the term belongs. The value m means
“lefthand side of the mth LMI,” and m means “righthand side of the mth
LMI”
•The second and third entries identify the block to which the term belongs.
For instance, the vector [1 1 2 1] indicates that the term is attached to the
(1, 2) block
•The last entry indicates which matrix variable is involved in the term. This
entry is 0 for constant terms, k for terms involving the kth matrix variable
A
T
X XA C
T
SC + + XB
B
T
X S – \ .



 
< 0
Specifying a System of LMIs
815
X
k
, and k for terms involving (here X and S are first and second
variables in the order of declaration).
Finally, the second and third arguments of lmiterm contain the numerical data
(values of the constant term, outer factor, or matrix coefficients P and Q for
variable terms PXQ or PX
T
Q). These arguments must refer to existing
MATLAB variables and be realvalued. See“ComplexValued LMIs” on
page 835 for the specification of LMIs with complexvalued coefficients.
Some shorthand is provided to simplify term specification. First, blocks are
zero by default. Second, in diagonal blocks the extra argument 's' allows you
to specify the conjugated expression AXB + B
T
X
T
A
T
with a single lmiterm
command. For instance, the first command specifies A
T
X + XA as the
“symmetrization” of XA. Finally, scalar values are allowed as shorthand for
scalar matrices, i.e., matrices of the form αI with α scalar. Thus, a constant
term of the form αI can be specified as the “scalar” α. This also applies to the
coefficients P and Q of variable terms. The dimensions of scalar matrices are
inferred from the context and set to 1 by default. For instance, the third LMI S
> I in “Example 8.3” on page 833 is described by
Recall that by convention S is considered as the righthand side of the
inequality, which justifies the –3 in the first command.
Finally, to improve readability it is often convenient to attach an identifier
(tag) to each LMI and matrix variable. The variable identifiers are returned by
lmivar and the LMI identifiers are set by the function newlmi. These
identifiers can be used in lmiterm commands to refer to a given LMI or matrix
variable. For the LMI system of “Example 8.1”, this would look like:
setlmis([])
X = lmivar(1,[6 1])
S = lmivar(1,[2 0;2 1])
BRL = newlmi
lmiterm([BRL 1 1 X],1,A,'s')
lmiterm([BRL 1 1 S],C',C)
lmiterm([BRL 1 2 X],1,B)
lmiterm([BRL 2 2 S], 1,1)
lmiterm([–3 1 1 2],1,1) % 1*S*1 = S
lmiterm([3 1 1 0],1) % 1*I = I
X
k
T
8 The LMI Lab
816
Xpos = newlmi
lmiterm([Xpos 1 1 X],1,1)
Slmi = newlmi
lmiterm([Slmi 1 1 S],1,1)
lmiterm([Slmi 1 1 0],1)
LMISYS = getlmis
Here the identifiers X and S point to the variables X and S while the tags BRL,
Xpos, and Slmi point to the first, second, and third LMI, respectively. Note that
Xpos refers to the righthand side of the second LMI. Similarly, X would
indicate transposition of the variable X.
The LMI Editor lmiedit
The LMI Editor lmiedit is a graphical user interface (GUI) to specify LMI
systems in a straightforward symbolic manner. Typing
lmiedit
calls up a window with several editable text areas and various pushbuttons. To
specify your LMI system,
1 Declare each matrix variable (name and structure) in the upper half of the
worksheet. The structure is characterized by its type (S for symmetric block
diagonal, R for unstructured, and G for other structures) and by an additional
“structure” matrix. This matrix contains specific information about the
structure and corresponds to the second argument of lmivar (see “lmivar” on
page 811 for details).
Please use one line per matrix variable in the text editing areas.
Specifying a System of LMIs
817
2 Specify the LMIs as MATLAB expressions in the lower half of the
worksheet. For instance, the LMI
is entered by typing
[a'*x+x*a x*b; b'*x 1] < 0
if x is the name given to the matrix variable X in the upper half of the
worksheet. The left and righthand sides of the LMIs should be valid
MATLAB expressions.
Once the LMI system is fully specified, the following tasks can be performed by
pressing the corresponding button:
•Visualize the sequence of lmivar/lmiterm commands needed to describe this
LMI system (view commands buttons). Conversely, the LMI system defined
by a particular sequence of lmivar/lmiterm commands can be displayed as a
MATLAB expression by clicking on the describe... buttons.
Beginners can use this facility as a tutorial introduction to the lmivar and
lmiterm commands.
•Save the symbolic description of the LMI system as a MATLAB string (save
button). This description can be reloaded later on by pressing the load
button.
•Read a sequence of lmivar/lmiterm commands from a file (read button). You
can then click on “describe the matrix variables” or “describe the
LMIs...” to visualize the symbolic expression of the LMI system specified by
these commands. The file should describe a single LMI system but may
otherwise contain any sequence of MATLAB commands.
This feature is useful for code validation and debugging.
Write in a file the sequence of lmivar/lmiterm commands needed to describe
a particular LMI system (write button).
This is helpful to develop code and prototype MATLAB functions based on
the LMI Lab.
A
T
X XA + XB
B
T
X I – \ .


 
< 0
8 The LMI Lab
818
•Generate the internal representation of the LMI system by pressing create.
The result is written in a MATLAB variable named after the LMI system (if
the “name of the LMI system” is set to mylmi, the internal representation is
written in the MATLAB variable mylmi). Note that all LMIrelated data
should be defined in the MATLAB workspace at this stage.
The internal representation can be passed directly to the LMI solvers or any
other LMI Lab function.
As with lmiterm, you can use various shortcuts when entering LMI expressions
at the keyboard. For instance, zero blocks can be entered simply as 0 and need
not be dimensioned. Similarly, the identity matrix can be entered as 1 without
dimensioning. Finally, upper diagonal LMI blocks need not be fully specified.
Rather, you can just type (*) in place of each such block.
Though fairly general, lmiedit is not as flexible as lmiterm and the following
limitations should be kept in mind:
•Parentheses cannot be used around matrix variables. For instance, the
expression
(a*x+b)'*c + c'*(a*x+b)
is invalid when x is a variable name. By contrast,
(a+b)'*x + x'*(a+b)
is perfectly valid.
•Loops and if statements are ignored.
•When turning lmiterm commands into a symbolic description of the LMI
system, an error is issued if the first argument of lmiterm cannot be
evaluated. Use the LMI and variable identifiers supplied by newlmi and
lmivar to avoid such difficulties.
Figure 81 shows how to specify the feasibility problem of “Example 8.1” on
page 89 with lmiedit.
How It All Works
Users familiar with MATLAB may wonder how lmivar and lmiterm physically
update the internal representation LMISYS since LMISYS is not an argument to
these functions. In fact, all updating is performed through global variables for
maximum speed. These global variables are initialized by setlmis, cleared by
Specifying a System of LMIs
819
getlmis, and not visible in the workspace. Even though this artifact is
transparent from the user's viewpoint, be sure to
•Invoke getlmis only once and after completely specifying the LMI system
•Refrain from using the command clear global before the LMI system
description is ended with getlmis
8 The LMI Lab
820
Figure 81: Graphical user interface lmiedit
Retrieving Information
821
Retrieving Information
Recall that the full description of an LMI system is stored as a single vector
called the internal representation. The user should not attempt to read or
retrieve information directly from this vector. Three functions called lmiinfo,
lminbr, and matnbr are provided to extract and display all relevant
information in a userreadable format.
lmiinfo
lmiinfo is an interactive facility to retrieve qualitative information about LMI
systems. This includes the number of LMIs, the number of matrix variables
and their structure, the term content of each LMI block, etc. To invoke lmiinfo,
enter
lmiinfo(LMISYS)
where LMISYS is the internal representation of the LMI system produced by
getlmis.
lminbr and matnbr
These two functions return the number of LMIs and the number of matrix
variables in the system. To get the number of matrix variables, for instance,
enter
matnbr(LMISYS)
8 The LMI Lab
822
LMI Solvers
LMI solvers are provided for the following three generic optimization problems
(here x denotes the vector of decision variables, i.e., of the free entries of the
matrix variables X
1
, . . . , X
K
):
•Feasibility problem
Find x ∈ R
N
(or equivalently matrices X
1
, . . . , X
K
with prescribed structure)
that satisfies the LMI system
A(x) < B(x)
The corresponding solver is called feasp.
•Minimization of a linear objective under LMI constraints
Minimize c
T
x over x ∈ R
N
subject to A(x) < B(x)
The corresponding solver is called mincx.
•Generalized eigenvalue minimization problem
Minimize λ over x ∈ R
N
subject to
C(x) < D(x)
0 < B(x)
A(x) < λB(x).
The corresponding solver is called gevp.
Note that A(x) < B(x) above is a shorthand notation for general structured LMI
systems with decision variables x = (x
1
, . . . , x
N
).
The three LMI solvers feasp, mincx, and gevp take as input the internal
representation LMISYS of an LMI system and return a feasible or optimizing
value x* of the decision variables. The corresponding values of the matrix
variables X
1
, . . . , X
K
are derived from x* with the function dec2mat. These
solvers are CMEX implementations of the polynomialtime Projective
Algorithm Projective Algorithm of Nesterov and Nemirovski [3, 2].
For generalized eigenvalue minimization problems, it is necessary to
distinguish between the standard LMI constraints C(x) < D(x) and the
linearfractional LMIs
A(x) < λB(x)
LMI Solvers
823
attached to the minimization of the generalized eigenvalue λ. When using
gevp, you should follow these three rules to ensure proper specification of the
problem:
•Specify the LMIs involving λ as A(x) < B(x) (without the λ)
•Specify them last in the LMI system. gevp systematically assumes that the
last L LMIs are linearfractional if L is the number of LMIs involving λ
•Add the constraint 0 < B(x) or any other constraint that enforces it. This
positivity constraint is required for wellposedness of the problem and is not
automatically added by gevp (see the “Command Reference” chapter for
details).
An initial guess xinit for x can be supplied to mincx or gevp. Use mat2dec to
derive xinit from given values of the matrix variables X
1
, . . . , X
K
. Finally,
various options are available to control the optimization process and the solver
behavior. These options are described in detail in the “Command Reference”
chapter.
The following example illustrates the use of the mincx solver.
Example 8.2. Consider the optimization problem
Minimize Trace(X) subject to
(89)
with data
It can be shown that the minimizer X* is simply the stabilizing solution of the
algebraic Riccati equation
A
T
X + XA + XBB
T
X + Q = 0
This solution can be computed directly with the Riccati solver care and
compared to the minimizer returned by mincx.
A
T
X XA XBB
T
X Q + + + <
A
1 – 2 – 1
3 2 1
1 2 – 1 –
\ .



 
; B
1
0
1
\ .



 
; Q
1 1 – 0
1 – 3 – 12 –
0 12 – 36 –
\ .



 
= = =
8 The LMI Lab
824
From an LMI optimization standpoint, problem (89) is equivalent to the
following linear objective minimization problem:
(810)
Since Trace(X) is a linear function of the entries of X, this problem falls within
the scope of the mincx solver and can be numerically solved as follows:
1 Define the LMI constraint (89) by the sequence of commands
setlmis([])
X = lmivar(1,[3 1]) % variable X, full symmetric
lmiterm([1 1 1 X],1,a,'s')
lmiterm([1 1 1 0],q)
lmiterm([1 2 2 0], 1)
lmiterm([1 2 1 X],b',1)
LMIs = getlmis
2 Write the objective Trace(X) as c
T
x where x is the vector of free entries of X.
Since c should select the diagonal entries of X, it is obtained as the decision
vector corresponding to X = I, that is,
c = mat2dec(LMIs,eye(3))
Note that the function defcx provides a more systematic way of specifying
such objectives (see “Specifying c
T
x Objectives for mincx” on page 838 for
details).
3 Call mincx to compute the minimizer xopt and the global minimum
copt = c'*xopt of the objective:
options = [1e 5,0,0,0,0]
Minimize Trace(X) subject to
A
T
X XA Q + + XB
B
T
X I – \ .


 
< 0
LMI Solvers
825
[copt,xopt] = mincx(LMIs,c,options)
Here 1e 5 specifies the desired relative accuracy on copt.
The following trace of the iterative optimization performed by mincx appears
on the screen:
Solver for linear objective minimization under LMI constraints
Iterations : Best objective value so far
1
2 8.511476
3 13.063640
*** new lower bound: 34.023978
4 15.768450
*** new lower bound: 25.005604
5 17.123012
*** new lower bound: 21.306781
6 17.882558
*** new lower bound: 19.819471
7 18339853
*** new lower bound: 19.189417
8 18.552558
*** new lower bound: 18.919668
9 18.646811
*** new lower bound: 18.803708
10 18.687324
*** new lower bound: 18.753903
11 18.705715
*** new lower bound: 18.732574
12 18.712175
8 The LMI Lab
826
Result: feasible solution of required accuracy
best objective value: 18.716695
guaranteed relative accuracy: 9.50e 06
fradius saturation: 0.000% of R = 1.00e+09
The iteration number and the best value of c
T
x at the current iteration
appear in the left and right columns, respectively. Note that no value is
displayed at the first iteration, which means that a feasible x satisfying the
constraint (810) was found only at the second iteration. Lower bounds on
the global minimum of c
T
x are sometimes detected as the optimization
progresses. These lower bounds are reported by the message
*** new lower bound: xxx
Upon termination, mincx reports that the global minimum for the objective
Trace(X) is –18.716695 with relative accuracy of at least 9.5by10
–6
. This is
the value copt returned by mincx.
4 mincx also returns the optimizing vector of decision variables xopt. The
corresponding optimal value of the matrix variable X is given by
*** new lower bound: 18.723491
13 18.714880
*** new lower bound: 18.719624
14 18.716094
*** new lower bound: 18.717986
15 18.716509
*** new lower bound: 18.717297
16 18.716695
*** new lower bound: 18.716873
LMI Solvers
827
Xopt = dec2mat(LMIs,xopt,X)
which returns
This result can be compared with the stabilizing Riccati solution computed
by care:
Xst = care(a,b,q, 1)
norm(XoptXst)
ans =
6.5390e 05
X
opt
6.3542 – 5.8895 – 2.2046
5.8895 – 6.2855 – 2.2201
2.2046 2.2201 6.0771 –
\ .



 
=
8 The LMI Lab
828
From Decision to Matrix Variables and Vice Versa
While LMIs are specified in terms of their matrix variables X
1
, . . . , X
K
, the
LMI solvers optimize the vector x of free scalar entries of these matrices, called
the decision variables. The two functions mat2dec and dec2mat perform the
conversion between these two descriptions of the problem variables.
Consider an LMI system with three matrix variables X
1
, X
2
, X
3
. Given
particular values X1, X2, X3 of these variables, the corresponding value xdec of
the vector of decision variables is returned by mat2dec:
xdec = mat2dec(LMISYS,X1,X2,X3)
An error is issued if the number of arguments following LMISYS differs from the
number of matrix variables in the problem (see matnbr).
Conversely, given a value xdec of the vector of decision variables, the
corresponding value of the kth matrix is given by dec2mat. For instance, the
value X2 of the second matrix variable is extracted from xdec by
X2 = dec2mat(LMISYS,xdec,2)
The last argument indicates that the second matrix variable is requested. It
could be set to the matrix variable identifier returned by lmivar.
The total numbers of matrix variables and decision variables are returned by
matnbr and decnbr, respectively. In addition, the function decinfo provides
precise information about the mapping between decision variables and matrix
variable entries (see the “Command Reference” chapter).
Validating Results
829
Validating Results
The LMI Lab offers two functions to analyze and validate the results of an LMI
optimization. The function evallmi evaluates all variable terms in an LMI
system for a given value of the vector of decision variables, for instance, the
feasible or optimal vector returned by the LMI solvers. Once this evaluation is
performed, the left and righthand sides of a particular LMI are returned by
showlmi.
In the LMI problem considered in “Example 8.2” on page 823, you can verify
that the minimizer xopt returned by mincx satisfies the LMI constraint (810)
as follows:
evlmi = evallmi(LMIs,xopt)
[lhs,rhs] = showlmi(evlmi,1)
The first command evaluates the system for the value xopt of the decision
variables, and the second command returns the left and righthand sides of the
first (and only) LMI. The negative definiteness of this LMI is checked by
eig(lhsrhs)
ans =
2.0387e 04
3.9333e 05
1.8917e 07
4.6680e+01
8 The LMI Lab
830
Modifying a System of LMIs
Once specified, a system of LMIs can be modified in several ways with the
functions dellmi, delmvar, and setmvar.
dellmi
The first possibility is to remove an entire LMI from the system with dellmi.
For instance, suppose that the LMI system of “Example 8.1” on page 89 is
described in LMISYS and that we want to remove the positivity constraint on X.
This is done by
NEWSYS = dellmi(LMISYS,2)
where the second argument specifies deletion of the second LMI. The resulting
system of two LMIs is returned in NEWSYS.
The LMI identifiers (initial ranking of the LMI in the LMI system) are not
altered by deletions. As a result, the last LMI
S > I
remains known as the third LMI even though it now ranks second in the
modified system. To avoid confusion, it is safer to refer to LMIs via the
identifiers returned by newlmi. If BRL, Xpos, and Slmi are the identifiers
attached to the three LMIs (86)–(88), Slmi keeps pointing to S > I even after
deleting the second LMI by
NEWSYS = dellmi(LMISYS,Xpos)
dellmi
Another way of modifying an LMI system is to delete a matrix variable, that is,
to remove all variable terms involving this matrix variable. This operation is
performed by delmvar. For instance, consider the LMI
A
T
X + XA + BW + W
T
B
T
+ I < 0
with variables X = X
T
∈ R
4×4
and W ∈ R
2×4
. This LMI is defined by
setlmis([])
X = lmivar(1,[4 1]) % X
W = lmivar(2,[2 4]) % W
Modifying a System of LMIs
831
lmiterm([1 1 1 X],1,A,'s')
lmiterm([1 1 1 W],B,1,'s')
lmiterm([1 1 1 0],1)
LMISYS = getlmis
To delete the variable W, type the command
NEWSYS = delmvar(LMISYS,W)
The resulting NEWSYS now describes the Lyapunov inequality
A
T
X + XA + I < 0
Note that delmvar automatically removes all LMIs that depended only on the
deleted matrix variable.
The matrix variable identifiers are not affected by deletions and continue to
point to the same matrix variable. For subsequent manipulations, it is
therefore advisable to refer to the remaining variables through their identifier.
Finally, note that deleting a matrix variable is equivalent to setting it to the
zero matrix of the same dimensions with setmvar.
setmvar
The function setmvar is used to set a matrix variable to some given value. As
a result, this variable is removed from the problem and all terms involving it
become constant terms. This is useful, for instance, to fixsetmvar some
variables and optimize with respect to the remaining ones.
Consider again “Example 8.1” on page 89 and suppose we want to know if the
peak gain of G itself is less than one, that is, if
G
∞
< 1
This amounts to setting the scaling matrix D (or equivalently, S = D
T
D) to a
multiple of the identity matrix. Keeping in mind the constraint S > I, a
legitimate choice is S = 2−βψ−I. To set S to this value, enter
NEWSYS = setmvar(LMISYS,S,2)
8 The LMI Lab
832
The second argument is the variable identifier S, and the third argument is the
value to which S should be set. Here the value 2 is shorthand for 2−βψ−I. The
resulting system NEWSYS reads
Note that the last LMI is now free of variable and trivially satisfied. It could,
therefore, be deleted by
NEWSYS = dellmi(NEWSYS,3)
or
NEWSYS = dellmi(NEWSYS,Slmi)
if Slmi is the identifier returned by newlmi.
A
T
X XA 2C
T
C + + XB
B
T
X 2I – \ .



 
< 0
X 0 >
2I I >
Advanced Topics
833
Advanced Topics
This last section gives a few hints for making the most out of the LMI Lab. It
is directed toward users who are comfortable with the basics described above.
Structured Matrix Variables
Fairly complex matrix variable structures and interdependencies can be
specified with lmivar. Recall that the symmetric blockdiagonal or rectangular
structures are covered by Types 1 and 2 of lmivar provided that the matrix
variables are independent. To describe more complex structures or correlations
between variables, you must use Type 3 and specify each entry of the matrix
variables directly in terms of the free scalar variables of the problem (the
socalled decision variables).
With Type 3, each entry is specified as either 0 or ±x
n
where x
n
is the nth
decision variable. The following examples illustrate how to specify nontrivial
matrix variable structures with lmivar. We first consider the case of
uncorrelated matrix variables.
Example 8.3. Suppose that the problem variables include a 3by3 symmetric
matrix X and a 3by3 symmetric Toeplitz matrix
The variable Y has three independent entries, hence involves three decision
variables. Since Y is independent of X, these decision variables should be
labeled n + 1, n + 2, n + 3 where n is the number of decision variables involved
in X. To retrieve this number, define the variable X (Type 1) by
setlmis([])
[X,n] =
lmivar(1,[3 1])
The second output argument n gives the total number of decision variables
used so far (here n = 6). Given this number, Y can be defined by
Y = lmivar(3,n+[1 2 3;2 1 2;3 2 1])
Y
y
1
y
2
y
3
y
2
y
1
y
2
y
3
y
2
y
1 \ .




 
=
8 The LMI Lab
834
or equivalently by
Y = lmivar(3,toeplitz(n+[1 2 3]))
where toeplitz is a standard MATLAB function. For verification purposes, we
can visualize the decision variable distributions in X and Y with decinfo:
lmis = getlmis
decinfo(lmis,X)
ans =
1 2 4
2 3 5
4 5 6
decinfo(lmis,Y)
ans =
7 8 9
8 7 8
9 8 7
The next example is a problem with interdependent matrix variables.
Example 8.4. Consider three matrix variables X, Y, Z with structure
where x, y, z, t are independent scalar variables. To specify such a triple, first
define the two independent variables X and Y (both of Type 1) as follows:
setlmis([])
[X,n,sX] = lmivar(1,[1 0;1 0])
[Y,n,sY] = lmivar(1,[1 0;1 0])
The third output of lmivar gives the entrywise dependence of X and Y on the
decision variables (x
1
, x
2
, x
3
, x
4
) := (x, y, z, t):
sX =
1 0
0 2
X
x 0
0 y \ .

 
, Y
z 0
0 t \ .

 
, Z
0 x –
t – 0 \ .

 
= = =
Advanced Topics
835
sY =
3 0
0 4
Using Type 3 of lmivar, you can now specify the structure of Z in terms of the
decision variables x
1
= x and x
4
= t:
[Z,n,sZ] = lmivar(3,[0 sX(1,1);–sY(2,2) 0])
Since sX(1,1) refers to x
1
while sY(2,2) refers to x
4
, this defines the variable
as confirmed by checking its entrywise dependence on the decision variables:
sZ =
0 1
4 0
ComplexValued LMIs
The LMI solvers are written for realvalued matrices and cannot directly
handle LMI problems involving complexvalued matrices. However,
complexvalued LMIs can be turned into realvalued LMIs by observing that a
complex Hermitian matrix L(x) satisfies
L(x) < 0
if and only if
This suggests the following systematic procedure for turning complex LMIs
into real ones:
•Decompose every complex matrix variable X as
X = X
1
+ jX
2
where X
1
and X
2
are real
Z
0 x
1
–
x
4
– 0
\ .


 
0 x –
t – 0 \ .

 
= =
Re L x ( ) ( ) Im L x ( ) ( )
Im – L x ( ) ( ) Re L x ( ) ( ) \ .

 
0 <
8 The LMI Lab
836
•Decompose every complex matrix coefficient A as
A = A
1
+ jA
2
where A
1
and A
2
are real
•Carry out all complex matrix products. This yields affine expressions in X
1
,
X
2
for the real and imaginary parts of each LMI, and an equivalent
realvalued LMI is readily derived from the above observation.
For LMIs without outer factor, a streamlined version of this procedure consists
of replacing any occurrence of the matrix variable X = X
1
+ jX
2
by
and any fixed matrix A = A
1
+ jA
2
, including real ones,by
For instance, the real counterpart of the LMI system
(811)
reads (given the decompositions M = M
1
+ jM
2
and X = X
1
+ jX
2
with M
j
, X
j
real):
Note that X = X
H
in turn requires that and .
Consequently, X
1
and X
2
should be declared as symmetric and skew
symmetric matrix variables, respectively.
Assuming, for instance, that M ∈ C
5
×
5
, the LMI system (811) would be
specified as follows:
X
1
X
2
X –
2
X
1
\ .


 
A
1
A
2
A –
2
A
1 \ .


 
M
H
XM X, X X
H
I > = <
M
1
M
2
M –
2
M
1 \ .


 
T
X
1
X
2
X –
2
X
1 \ .


 
M
1
M
2
M –
2
M
1 \ .


 
X
1
X
2
X –
2
X
1 \ .


 
<
X
1
X
2
X –
2
X
1 \ .


 
I <
X
1
X
H
= X
2
X
2
T
+ 2 0 =
Advanced Topics
837
M1=real(M), M2=imag(M)
bigM=[M1 M2;M2 M1]
setlmis([])
% declare bigX=[X1 X2;X2 X1] with X1=X1' and X2+X2'=0:
[X1,n1,sX1] = lmivar(1,[5 1])
[X2,n2,sX2] = lmivar(3,skewdec(5,n1))
bigX = lmivar(3,[sX1 sX2;sX2 sX1])
% describe the real counterpart of (1.12):
lmiterm([1 1 1 0],1)
lmiterm([ 1 1 1 bigX],1,1)
lmiterm([2 1 1 bigX],bigM',bigM)
lmiterm([ 2 1 1 bigX],1,1)
lmis = getlmis
Note the threestep declaration of the structured matrix variable bigX =
1 Specify X
1
as a (real) symmetric matrix variable and save its structure
description sX1 as well as the number n1 of decision variables used in X
1
.
2 Specify X
2
as a skewsymmetric matrix variable using Type 3 of lmivar and
the utility skewdec. The command skewdec(5,n1) creates a 5by–5
skewsymmetric structure depending on the decision variables n1 + 1,
n1 + 2,...
3 Define the structure of bigX in terms of the structures sX1 and sX2 of X
1
and
X
2
.
See the previous subsection for more details on such structure manipulations.
X
1
X
2
X –
2
X
1 \ .


 
:
8 The LMI Lab
838
Specifying c
T
x Objectives for mincx
The LMI solver mincx minimizes linear objectives of the form c
T
x where x is the
vector of decision variables. In most control problems, however, such objectives
are expressed in terms of the matrix variables rather than of x. Examples
include Trace(X) where X is a symmetric matrix variable, or u
T
Xu where u is a
given vector.
The function defcx facilitates the derivation of the c vector when the objective
is an affine function of the matrix variables. For the sake of illustration,
consider the linear objective
Trace(X) + Px
0
where X and P are two symmetric variables and x
0
is a given vector. If lmisys
is the internal representation of the LMI system and if x
0
, X, P have been
declared by
x0 = [1;1]
setlmis([])
X = lmivar(1,[3 0])
P = lmivar(1,[2 1])
:
:
lmisys = getlmis
the c vector such that c
T
x = Trace(X) + Px
0
can be computed as follows:
n = decnbr(lmisys)
c = zeros(n,1)
for j=1:n,
[Xj,Pj] = defcx(lmisys,j,X,P)
c(j) = trace(Xj) + x0'*Pj*x0
end
The first command returns the number of decision variables in the problem and
the second command dimensions c accordingly. Then the for loop performs the
following operations:
1 Evaluate the matrix variables X and P when all entries of the decision vector
x are set to zero except x
j
:= 1. This operation is performed by the function
defcx. Apart from lmisys and j, the inputs of defcx are the identifiers X and
x
0
T
x
0
T
Advanced Topics
839
P of the variables involved in the objective, and the outputs Xj and Pj are the
corresponding values.
2 Evaluate the objective expression for X := Xj and P := Pj. This yields the
jth entry of c by definition.
In our example the result is
c =
3
1
2
1
Other objectives are handled similarly by editing the following generic
skeleton:
n = decnbr( LMI system )
c = zeros(n,1)
for j=1:n,
[ matrix values ] = defcx( LMI system,j,
matrix identifiers)
c(j) = objective(matrix values)
end
Feasibility Radius
When solving LMI problems with feasp, mincx, or gevp, it is possible to
constrain the solution x to lie in the ball
x
T
x < R
2
where R > 0 is called the feasibility radius. This specifies a maximum
(Euclidean norm) magnitude for x and avoids getting solutions of very large
norm. This may also speed up computations and improve numerical stability.
Finally, the feasibility radius bound regularizes problems with redundant
variable sets. In rough terms, the set of scalar variables is redundant when an
equivalent problem could be formulated with a smaller number of variables.
The feasibility radius R is set by the third entry of the options vector of the LMI
solvers. Its default value is R = 109 . Setting R to a negative value means “no
rigid bound,” in which case the feasibility radius is increased during the
8 The LMI Lab
840
optimization if necessary. This “flexible bound” mode may yield solutions of
large norms.
WellPosedness Issues
The LMI solvers used in the LMI Lab are based on interiorpoint optimization
techniques. To compute feasible solutions, such techniques require that the
system of LMI constraints be strictly feasible, that is, the feasible set has a
nonempty interior. As a result, these solvers may encounter difficulty when the
LMI constraints are feasible but not strictly feasible, that is, when the LMI
L(x) ð 0
has solutions while
L(x) < 0
has no solution.
For feasibility problems, this difficulty is automatically circumvented by
feasp, which reformulates the problem
(812)
as
(813)
In this modified problem, the LMI constraint is always strictly feasible in x, t
and the original LMI (812) is feasible if and only if the global minimum t
min
of
(813) satisfies
t
min
ð 0
For feasible but not strictly feasible problems, however, the computational
effort is typically higher as feasp strives to approach the global optimum t
min
= 0 to a high accuracy.
For the LMI problems addressed by mincx and gevp, nonstrict feasibility
generally causes the solvers to fail and to return an “infeasibility” diagnosis.
Although there is no universal remedy for this difficulty, it is sometimes
possible to eliminate underlying algebraic constraints to obtain a strictly
feasible problem with fewer variables.
Find x such that L x ( ) 0 ≤
Minimize t subject to Lx t I. × <
Advanced Topics
841
Another issue has to do with homogeneous feasibility problems such as
A
T
P + P A < 0, P > 0
While this problem is technically wellposed, the LMI optimization is likely to
produce solutions close to zero (the trivial solution of the nonstrict problem). To
compute a nontrivial Lyapunov matrix and easily differentiate between
feasibility and infeasibility, replace the constraint P > 0byP > αI with α > 0.
Note that this does not alter the problem due to its homogeneous nature.
SemiDefinite B(x) in gevp Problems
Consider the generalized eigenvalue minimization problem
(814)
Technically, the positivity of B(x) for some x ∈ R
n
is required for the
wellposedness of the problem and the applicability of polynomialtime
interiorpoint methods. Hence problems where
cannot be directly solved with gevp. A simple remedy consists of replacing the
constraints
A(x) < B(x), B(x) > 0
by
where Y is an additional symmetric variable of proper dimensions. The
resulting problem is equivalent to (814) and can be solved directly with gevp.
Efficiency and Complexity Issues
As explained in the beginning of the chapter, the termoriented description of
LMIs used in the LMI Lab typically leads to higher efficiency than the
canonical representation
Minimize λ subject to A x ( ) λB x ( ), < B x ( ) 0, > C x ( ) 0. <
B x ( )
B
1
x ( ) 0
0 0
\ .

 
, with B
1
x ( ) 0 > strictly feasible =
A x ( )
Y 0
0 0 \ .

 
, < Y λ < B
1
x ( ) B
1
x ( ) 0 > ,
8 The LMI Lab
842
(815)
This is no longer true, however, when the number of variable terms is nearly
equal to or greater than the number N of decision variables in the problem. If
your LMI problem has few free scalar variables but many terms in each LMI,
it is therefore preferable to rewrite it as (815) and to specify it in this form.
Each scalar variable x
j
is then declared independently and the LMI terms are
of the form x
j
A
j
.
If M denotes the total row size of the LMI system and N the total number of
scalar decision variables, the flop count per iteration for the feasp and mincx
solvers is proportional to
•N
3
when the leastsquares problem is solved via. Cholesly factorization of the
Hessian matrix (default) [2 ]
•MbyN
2
when numerical instabilities warrant the use of QR factorization
instead
While the theory guarantees a worstcase iteration count proportional to M, the
number of iterations actually performed grows slowly with M in most
problems. Finally, while feasp and mincx are comparable in complexity, gevp
typically demands more computational effort. Make sure that your LMI
problem cannot be solved with mincx before using gevp.
Solving M + P
T
XQ + Q
T
X
T
P < 0
In many outputfeedback synthesis problems, the design can be performed in
two steps:
1 Compute a closedloop Lyapunov function via LMI optimization.
2 Given this Lyapunov function, derive the controller statespace matrices by
solving an LMI of the form
(816)
where M, P, Q are given matrices and X is an unstructured mbyn matrix
variable.
A
0
x
1
A
1
… x
N
A
N
+ + + 0. <
M P
T
XQ Q
T
X
T
P + + 0 <
Advanced Topics
843
It turns out that a particular solution X
c
of (816) can be computed via simple
linear algebra manipulations [1 ]. Typically, X
c
corresponds to the center of the
ellipsoid of matrices defined by (816).
The function basiclmi returns the “explicit” solution X
c
:
Xc = basiclmi(M,P,Q)
Since this central solution sometimes has large norm, basiclmi also offers the
option of computing an approximate leastnorm solution of (816). This is done
by
X = basiclmi(M,P,Q,'Xmin')
and involves LMI optimization to minimize X.
8 The LMI Lab
844
References
[1] Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H
∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
[2] Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, pp. 840–844.
[3] Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in
Convex Programming: Theory and Applications, SIAM Books, Philadelphia,
1994.
[4] Shamma, J.S., “Robustness Analysis for TimeVarying Systems,” Proc.
Conf. Dec. Contr., 1992, pp. 3163–3168.
9
Command Reference
9 Command Reference
92
This chapter gives a detailed description of all LMI Control Toolbox functions.
Functions are grouped by application in tables at the beginning of this chapter.
In addition, information on each function is aavailable through the online Help
facility.
List of Functions
93
List of Functions
Linear Algebra
basiclmi Solve M + P
T
XQ + Q
T
X
T
P < 0
care Continuoustime Riccati solver
dare Discretetime Riccati solver
dricpen Generalized Riccati solver (discretetime)
ricpen Generalized Riccati solver (continuoustime)
Linear TimeInvariant Systems
ltisys Specify a SYSTEM matrix
ltiss Extract statespace data
ltitf Compute the transfer function (SISO)
sbalanc Balance a statespace realization via diagonal similarity
sinfo Return information about a SYSTEM matrix
sinv Invert an LTI system
splot Plot the time and frequency responses of LTI systems
spol Return the poles of an LTI system
sresp Frequency response of an LTI system
ssub Extract a subsystem from a system
Interconnection of Systems
sadd Form the parallel interconnection of systems
sconnect Describe general control loops
9 Command Reference
94
sdiag Append (concatenate) several systems
slft Form linearfractional interconnections
sloop Form elementary feedback interconnections
smult Form the series interconnection of systems
Polytopic and ParameterDependent Systems (PSystems)
aff2pol Polytopic representation of an affine parameterdependent
system
pdsimul Simulation of parameterdependent systems along a
parameter trajectory
psinfo Retrieve information about a Psystem
psys Specify a Psystem
pvec Specify a vector of uncertain or timevarying parameters
pvinfo Interpret the output of pvec
Dynamical Uncertainty
ublock Specify an uncertainty block
udiag Specify blockdiagonal uncertainty from individual
uncertainty blocks
uinfo Display the characteristics of uncertainty structures
aff2lft Linearfractional representation of an affine
parameterdependent system
Interconnection of Systems
List of Functions
95
Robustness Analysis
decay Quadratic decay rate
dnorminf RMS gain of a discretetime LTI system
muperf Robust performance of an uncertain LTI system
mustab Robust stability of an uncertain LTI system
norminf H
∞
norm (RMS gain) of an LTI system
norm2 H
2
performance of an LTI system
quadperf Quadratic performance
quadstab Quadratic stability
pdlstab Robust stability analysis via parameterdependent
Lyapunov functions
popov Popov criterion
9 Command Reference
96
H
∞
Control and Loop Shaping
Continuous Time
hinflmi LMIbased H
∞
synthesis
hinfmix Mixed H
2
/H
∞
synthesis with regional pole placement
hinfric Riccatibased H
∞
synthesis
hinfpar Extract statespace data from the plant SYSTEM matrix
lmireg Specify LMI regions for pole placement
magshape Graphical specification of the shaping filters
msfsyn Multimodel/objective statefeedback synthesis
sconnect Specify general control structures
Discrete Time
dhinflmi LMIbased H
∞
synthesis
dhinfric Riccatibased H
∞
synthesis
hinfpar Retrieve the plant statespace data
Gain Scheduling
hinfgs Synthesis of robust gainscheduled controllers
pdsimul Simulation along parameter trajectories
LMI Lab: Specifying and Solving LMIs
97
LMI Lab: Specifying and Solving LMIs
Specification of LMIs
lmiedit GUI for LMI specification
setlmis Initialize the LMI description
lmivar Define a new matrix variable
lmiterm Specify the term content of an LMI
newlmi Attach an identifying tag to new LMIs
getlmis Get the internal description of the LMI system
LMI Solvers
feasp Feasibility of a system of LMIs
gevp Generalized eigenvalue minimization under LMI
constraints
mincx Minimization of a linear objective under LMI constraints
dec2mat Convert the output of the solvers into values of the matrix
variables
defcx Help define c
T
x objectives for mincx
Evaluation of LMIs/Validation of Results
evallmi Evaluate all variable terms for given values of the decision
variables
showlmi Return the left and righthand sides of an evaluated LMI
setmvar Set a matrix variable to a given value
9 Command Reference
98
LMI Lab: Additional Facilities
Information Retrieval
decinfo Visualize the mapping between matrix variable entries and
decision variables
decnbr Get the number of decision variables, that is, the number of
free scalar variables in a problem
lmiinfo Inquire about an existing system of LMIs
lminbr Get the number of LMIs in a problem
matnbr Get the number of matrix variables in a problem
Manipulations of LMI Variables
decinfo Express each matrix variable entry in terms of the decision
variables
dec2mat Return the matrix variable values given a vector of decision
variables
mat2dec Return the vector of decision variables given the matrix
variable values
Modification of LMI Systems
dellmi Delete an LMI from the system
delmvar Remove a matrix variable from the problem
setmvar Instantiate a matrix variable
aff2lft
99
9aff2lft
Purpose Compute the linearfractional representation of an affine parameter
dependent system
Syntax [sys,delta] = aff2lft(affsys)
Description The input argument affsys describes an affine parameterdependent system
of the form
E(p) = A(p)x + B(p)u
y = C(p)x + D(p)u
where p = (p
1
, . . ., p
n
) is a vector of uncertain or timevarying real parameters
taking values in a box:
Such systems are specified with psys.
Given this affine parameterdependent model, the function aff2lft returns
the equivalent linearfractional representation
where
•P(s) is an LTI system called the nominal plant
•The blockdiagonal operator ∆ = diag(δ
1
I
r
1
, . . ., δ
n
I
r
n
) accounts for the
uncertainty on the real parameters p
j
. Each scalar δ
j
is real, possibly
repeated (r
j
> 1), and bounded by
x
·
p
j
pj p
j
≤ ≤
∆
P(s)
u y
δ
j
p
j
p
j
–
2
 ≤
aff2lft
910
The SYSTEM matrix of P and the description of ∆ are returned in sys and delta.
Closing the loop with ∆ ≡ 0 yields the LTI system
E(p
av
) = A(p
av
)x + B(p
av
)u
y = C(p
av
)x + D(p
av
)u
where p
av
= is the vector of average parameter values.
Example See example in “SectorBounded Uncertainty” on page 230.
See Also psys, pvec, ublock, udiag
x
·
1
2

p p + ( )
aff2pol
911
9aff2pol
Purpose Convert affine parameterdependent models to polytopic ones
Syntax polsys = aff2pol(affsys)
Description aff2pol derives a polytopic representation polsys of the affine parameter
dependent system
(91)
(92)
where p = (p
1
, . . ., p
n
) is a vector of uncertain or timevarying real parameters
taking values in a box or a polytope. The description affsys of this system
should be specified with psys.
The vertex systems of polsys are the instances of (91)–(92) at the vertices p
ex
of the parameter range, i.e., the SYSTEM matrices
for all corners p
ex
of the parameter box or all vertices p
ex
of the polytope of
parameter values.
Example “Example 2.1” on page 221.
See Also psys, pvec, aff2lft
E p ( )x
·
A p ( )x B p ( )u + =
y C p ( )x D p ( )u + =
A p
ex
( ) jE p
ex
( ) + B p
ex
( )
C p
ex
( ) D p
ex
( )
\ .


 
basiclmi
912
9basiclmi
Purpose Compute a solution X of M + P
T
XQ + Q
T
X
T
P < 0
Syntax X = basiclmi(M,P,Q,option)
Description basiclmi computes a solution X of the LMI
M + P
T
XQ + Q
T
X
T
P < 0
where M is symmetric and P,Q are matrices of compatible dimensions. If W
P
and W
Q
are orthonormal bases of the null space of P and Q, this LMI is solvable
if and only if
W
P
< 0 and W
Q
< 0
The output is [] when these conditions are not satisfied.
The solution X is derived by standard linear algebra manipulations. When
option is set to 'Xmin', basiclmi computes the leastnorm solution by solving
the LMI problem
Minimize τ
subject to M + P
T
XQ + Q
T
X
T
P < 0
See Also feasp, mincx
W
P
T
M W
Q
T
M
τI X
X
T
τI
\ .

 
0 ≥
care
913
9care
Purpose Solve continuoustime Riccati equations (CARE)
Syntax X = care(A,B,Q,R,S,E)
[X,L,G,RR] = care(A,B,Q,R,S,E)
Description This function computes the stabilizing solution X of the Riccati equation
A
T
XE + E
T
XA – (E
T
XB + S)R
–1
(B
T
XE + S
T
) + Q = 0
where E is an invertible matrix. The solution is computed by unitary deflation
of the associated Hamiltonian pencil
For solvability, the pencil H – λL must have no finite generalized eigenvalue
on the imaginary axis.
If the input arguments S and E are omitted, the matrices S and E are set to the
default values S = 0 and E = I. The function care also returns the vector L of
closedloop eigenvalues, the gain matrix G given by
G = –R
–1
(B
T
XE + S
T
),
and the Frobenius norm RR of the relative residual matrix.
Reference Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” IEEE
Trans. Aut. Contr., AC–24 (1979), pp. 913–921.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
See Also ricpen, cares, dare
H λL –
A 0 B
Q – A –
T
S –
S
T
B
T
R
λ
E 0 0
0 E
T
0
0 0 0
– =
dare
914
9dare
Purpose Solve discretetime Riccati equations (DARE)
Syntax X = dare(A,B,Q,R,S,E)
[X,L,G,RR] = dare(A,B,Q,R,S,E)
Description dare computes the stabilizing solution X of the algebraic Riccati equation
(93)
where E is an invertible matrix. The solution is computed by unitary deflation
of the associated symplectic pencil
For solvability, the pencil H – λL must have no finite generalized eigenvalue
on the unit circle.
If the input arguments S and E are omitted, the matrices S and E are set to the
default values S = 0 and E = I. The function dare also returns the vector L of
closedloop eigenvalues, the gain matrix G given by
G = –(B
T
XB + R)
–1
(B
T
XA + S
T
),
and the Frobenius norm RR of the relative residual matrix.
Remark The function dricpen is a variant of dare where the inputs are the two matrices
H and L defined above. The syntax is
X = dricpen(H,L)
or
[X1,X2] = dricpen(H,L)
This second syntax returns two matrices X
1
, X
2
such that
• has orthonormal columns,
A
T
XA E
T
XE – A
T
XB S + ( ) B
T
XB R + ( )
1 –
B
T
XA S
T
+ ( ) – Q + 0 =
H λL –
A 0 B
Q – E –
T
S –
S
T
0 R
λ
E 0 0
0 A
T
0
0 B –
T
0
– =
X
1
X
2
dare
915
• is the stabilizing solution of (93).
Reference Pappas, T., A.J. Laub, and N.R. Sandell, “On the Numerical Solution of the
DiscreteTime Algebraic Riccati Equation,” IEEE Trans. Aut. Contr., AC–25
(1980), pp. 631–641.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
See Also dricpen, dares, care
X X
2
X
1
1 –
=
decay
916
9decay
Purpose Quadratic decay rate of polytopic or affine Psystems
Syntax [drate,P] = decay(ps,options)
Description For affine parameterdependent systems
E(p) = A(p)x, p(t) = (p
1
(t), . . ., p
n
(t))
or polytopic systems
E(t) = A(t)x, (A, E) ∈ Co{(A
1
, E
1
), . . ., (A
n
, E
n
)},
decay returns the quadratic decay rate drate, i.e., the smallest α ∈ R such that
A
T
QE + EQA
T
< αQ
holds for some Lyapunov matrix Q > 0 and all possible values of (A, E). Two
control parameters can be reset via options(1) and options(2):
•If options(1)=0 (default), decay runs in fast mode, using the least expensive
sufficient conditions. Set options(1)=1 to use the least conservative
conditions.
•options(2) is a bound on the condition number of the Lyapunov matrix P.
The default is 10
9
.
Example See “Example 3.3” on page 39.
See Also quadstab, pdlstab, mustab, psys
x
·
x
·
decinfo
917
9decinfo
Purpose Describe how the entries of a matrix variable X relate to the decision variables
Syntax decinfo(lmisys)
decX = decinfo(lmisys,X)
Description The function decinfo expresses the entries of a matrix variable X in terms of
the decision variables x
1
, . . ., x
N
. Recall that the decision variables are the free
scalar variables of the problem, or equivalently, the free entries of all matrix
variables described in lmisys. Each entry of X is either a hard zero, some
decision variable x
n
, or its opposite –x
n
.
If X is the identifier of X supplied by lmivar, the command
decX = decinfo(lmisys,X)
returns an integer matrix decX of the same dimensions as X whose (i, j) entry is
•0 if X(i, j) is a hard zero
•n if X(i, j) = x
n
(the nth decision variable)
•–n if X(i, j) = –x
n
decX clarifies the structure of X as well as its entrywise dependence on
x
1
, . . ., x
N
. This is useful to specify matrix variables with atypical structures
(see lmivar).
decinfo can also be used in interactive mode by invoking it with a single
argument. It then prompts the user for a matrix variable and displays in return
the decision variable content of this variable.
Example 1 Consider an LMI with two matrix variables X and Y with structure:
•X = x I
3
with x scalar
•Y rectangular of size 2by1
If these variables are defined by
setlmis([])
X = lmivar(1,[3 0])
Y = lmivar(2,[2 1])
:
:
lmis = getlmis
decinfo
918
the decision variables in X and Y are given by
dX = decinfo(lmis,X)
dX =
1 0 0
0 1 0
0 0 1
dY = decinfo(lmis,Y)
dY =
2
3
This indicates a total of three decision variables x
1
, x
2
, x
3
that are related to the
entries of X and Y by
Note that the number of decision variables corresponds to the number of free
entries in X and Y when taking structure into account.
Example 2 Suppose that the matrix variable X is symmetric block diagonal with one 2by2
full block and one 2by2 scalar block, and is declared by
setlmis([])
X = lmivar(1,[2 1;2 0])
:
lmis = getlmis
The decision variable distribution in X can be visualized interactively as
follows:
decinfo(lmis)
There are 4 decision variables labeled x1 to x4 in this problem.
X
x
1
0 0
0 x
1
0
0 0 x
1 \ .




 
, Y
x
2
x
3 \ .


 
= =
decinfo
919
Matrix variable Xk of interest (enter k between 1 and 1, or 0 to
quit):
?> 1
The decision variables involved in X1 are among {x1,...,x4}.
Their entrywise distribution in X1 is as follows
(0,j>0,j<0 stand for 0,xj,xj, respectively):
X1 :
1 2 0 0
2 3 0 0
0 0 4 0
0 0 0 4
*********
Matrix variable Xk of interest (enter k between 1 and 1, or 0 to
quit):
?> 0
See Also lmivar, mat2dec, dec2mat, decnbr, decinfo
decnbr
920
9decnbr
Purpose Give the total number of decision variables in a system of LMIs
Syntax ndec = decnbr(lmisys)
Description The function decnbr returns the number ndec of decision variables (free scalar
variables) in the LMI problem described in lmisys. In other words, ndec is the
length of the vector of decision variables.
Example For an LMI system lmis with two matrix variables X and Y such that
•X is symmetric block diagonal with one 2by2 full block, and one 2by2
scalar block
•Y is 2by3 rectangular,
the number of decision variables is
ndec = decnbr(LMIs)
ndec =
10
This is exactly the number of free entries in X and Y when taking structure into
account (see decinfo for more details).
See Also dec2mat, decinfo, mat2dec
dec2mat
921
9dec2mat
Purpose Given values of the decision variables, derive the corresponding values of the
matrix variables
Syntax valX = dec2mat(lmisys,decvars,X)
Description Given a value decvars of the vector of decision variables, dec2mat computes the
corresponding value valX of the matrix variable with identifier X. This
identifier is returned by lmivar when declaring the matrix variable.
Recall that the decision variables are all free scalar variables in the LMI
problem and correspond to the free entries of the matrix variables X
1
, . . ., X
K
.
Since LMI solvers return a feasible or optimal value of the vector of decision
variables, dec2mat is useful to derive the corresponding feasible or optimal
values of the matrix variables.
Example See the description of feasp.
See Also mat2dec, decnbr, decinfo
defcx
922
9defcx
Purpose Help specify c
T
x objectives for the mincx solver
Syntax [V1,...,Vk] = defcx(lmisys,n,X1,...,Xk)
Description defcx is useful to derive the c vector needed by mincx when the objective is
expressed in terms of the matrix variables.
Given the identifiers X1,...,Xk of the matrix variables involved in this
objective, defcx returns the values V1,...,Vk of these variables when the nth
decision variable is set to one and all others to zero. See the example in
“Specifying cTx Objectives for mincx” on page 838 for more details on how to
use this function to derive the c vector.
See Also mincx, decinfo
dellmi
923
9dellmi
Purpose Remove an LMI from a given system of LMIs
Syntax newsys = dellmi(lmisys,n)
Description dellmi deletes the nth LMI from the system of LMIs described in lmisys. The
updated system is returned in newsys.
The ranking n is relative to the order in which the LMIs were declared and
corresponds to the identifier returned by newlmi. Since this ranking is not
modified by deletions, it is safer to refer to the remaining LMIs by their
identifiers. Finally, matrix variables that only appeared in the deleted LMI are
removed from the problem.
Example Suppose that the three LMIs
X
1
+ X
1
A
1
+ Q
1
< 0
X
2
+ X
2
A
2
+ Q
2
< 0
X
3
+ X
3
A
3
+ Q
3
< 0
have been declared in this order, labeled LMI1, LMI2, LMI3 with newlmi, and
stored in lmisys. To delete the second LMI, type
lmis = dellmi(lmisys,LMI2)
lmis now describes the system of LMIs
(94)
(95)
and the second variable X
2
has been removed from the problem since it no
longer appears in the system (94)–(95).
To further delete (95), type
lmis = dellmi(lmis,LMI3)
or equivalently
lmis = dellmi(lmis,3)
A
1
T
A
2
T
A
3
T
A
1
T
X
1
X
1
A
1
Q
1
+ + 0 <
A
3
T
X
3
X
3
A
3
Q
3
+ + 0 <
dellmi
924
Note that (95) has retained its original ranking after the first deletion.
See Also newlmi, lmiedit, lmiinfo
delmvar
925
9delmvar
Purpose Delete one of the matrix variables of an LMI problem
Syntax newsys = delmvar(lmisys,X)
Description delmvar removes the matrix variable X with identifier X from the list of
variables defined in lmisys. The identifier X should be the second argument
returned by lmivar when declaring X. All terms involving X are automatically
removed from the list of LMI terms. The description of the resulting system of
LMIs is returned in newsys.
Example Consider the LMI
involving two variables X and Y with identifiers X and Y. To delete the variable
X, type
lmisys = delmvar(lmisys,X)
Now lmisys describes the LMI
with only one variable Y. Note that Y is still identified by the label Y.
See Also lmivar, setmvar, lmiinfo
0
A
T
Y B
T
YA Q + + CX D +
X
T
C
T
D
T
+ X X
T
+ ( ) – \ .


 
<
0
A
T
YB B
T
YA Q + + D
D
T
0 \ .


 
<
dhinflmi
926
9dhinflmi
Purpose LMIbased H
∞
synthesis for discretetime systems
Syntax gopt = dhinflmi(P,r)
[gopt,K] = dhinflmi(P,r)
[gopt,K,x1,x2,y1,y2] = dhinflmi(P,r,g,tol,options)
Description dhinflmi is the counterpart of hinflmi for discretetime plants. Its syntax and
usage mirror those of hinflmi.
The H
∞
performance γ is achievable if and only if the following system of LMIs
has symmetric solutions R and S:
where N
12
and N
21
denote bases of the null spaces of and (C
2
, D
21
),
respectively. The function dhinflmi returns solutions R = x1 and S = y1 of this
LMI system for γ = gopt (the matrices x2 and y2 are set to goptbyI).
Reference Gahinet, P. and P. Apkarian, “A Linear Matrix Inequality Approach to H
∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
See Also dhinfric, hinflmi
N
12
0
0 I
\ .

 
T
ARA
T
R – ARC
1
T
B
1
C
1
RA
T
γI – C
1
RC
1
T
+ D
11
B
1
T
D
11
T
γI –
\ .





 
N
12
0
0 I
\ .

 
0 <
N
21
0
0 I
\ .

 
T
A
T
SA S – A
T
SB
1
C
1
T
B
1
T
SA γI – B
1
T
SB
1
+ D
11
T
C
1
D
11
γI –
\ .





 
N
21
0
0 I
\ .

 
0 <
R I
I S \ .

 
0 ≥
B
2
T
D
12
T
, ( )
dhinfric
927
9dhinfric
Purpose Riccatibased H
∞
synthesis for discretetime systems
Syntax gopt = dhinfric(P,r)
[gopt,K] = dhinfric(P,r)
[gopt,K,x1,x2,y1,y2] = dhinfric(P,r,gmin,gmax,tol,options)
Description dhinfric is the counterpart of hinfric for discretetime plants P(s) with
equations:
x
k+1
= Ax
k
+ B
1
w
k
+ B
2
u
k
z
k
= C
1
x
k
+ D
11
w
k
+ D
12
u
k
y
k
= C
2
x
k
+ D
21
w
k
+ D
22
u
k
.
See dnorminf for a definition of the RMS gain (H
∞
performance) of
discretetime systems. The syntax and usage of dhinfric are identical to those
of hinfric.
Restriction D
12
and D
21
must have full rank.
See Also dhinflmi, hinfric
dnorminf
928
9dnorminf
Purpose Compute the random meansquares (RMS) gain of discretetime systems
Syntax [gain,peakf] = dnorminf(g,tol)
Description For sampleddata LTI systems with statespace equations
Ex
k+1
= Ax
k
+ Bu
k
y
k
+1 = Cx
k
+ Du
k
and transfer function G(z) = D + C(zE – A)
–1
B, dnorminf computes the peak
gain
(96)
of the response on the unit circle. When G(z) is stable, this peak gain coincides
with the RMS gain or H
∞
norm of G, that is,
The input g is a SYSTEM matrix containing the statespace data A, B, C, D, E.
On output, dnorminf returns the RMS gain gain and the “frequency” peakf at
which this gain is attained, i.e., the value of ω maximizing (96). The relative
accuracy on the computed RMS gain can be adjusted with the optional input
tol (the default value is 10
–2
).
Reference Boyd, S., V. Balakrishnan, and P. Kabamba, “A Bisection Method for
Computing the H
∞
Norm of a Transfer Matrix and Related Problems,” Math.
Contr. Sign. Syst., 2 (1989), pp. 207–219.
Bruisma, N.A., and M. Steinbuch, “A Fast Algorithm to Compute the H
∞
Norm
of a Transfer Function Matrix,” Syst. Contr. Letters, 14 (1990), pp. 287–293.
Robel, G., “On Computing the Infinity Norm,” IEEE Trans. Aut. Contr., AC–34
(1989), pp. 882–884.
See Also norminf
G
∞
sup σ
max
G e
jω
( ) ( ) =
ω 0 2π , [ ] ∈
G
∞
sup
y
l
2
u
l
2
 =
u t
2
∈
u 0 ≠
evallmi
929
9evallmi
Purpose Given a particular instance of the decision variables, evaluate all variable
terms in the system of LMIs
Syntax evalsys = evallmi(lmisys,decvars)
Description evallmi evaluates all LMI constraints for a particular instance decvars of the
vector of decision variables. Recall that decvars fully determines the values of
the matrix variables X
1
, . . ., X
K
. The “evaluation” consists of replacing all terms
involving X
1
, . . ., X
K
by their matrix value. The output evalsys is an LMI
system containing only constant terms.
The function evallmi is useful for validation of the LMI solvers’ output. The
vector returned by these solvers can be fed directly to evallmi to evaluate all
variable terms. The matrix values of the left and righthand sides of each LMI
are then returned by showlmi.
Observation evallmi is meant to operate on the output of the LMI solvers. To evaluate all
LMIs for particular instances of the matrix variables X
1
, . . ., X
K
, first form the
corresponding decision vector x with mat2dec and then call evallmi with x as
input.
Example Consider the feasibility problem of finding X > 0 such that
A
T
XA – X + I < 0
where This LMI system is defined by:
setlmis([])
X = lmivar(1,[2 1]) % full symmetric X
lmiterm([1 1 1 X],A',A) % LMI #1: A'*X*A
lmiterm([1 1 1 X],1,1) % LMI #1: X
lmiterm([1 1 1 0],1) % LMI #1: I
lmiterm([2 1 1 X],1,1) % LMI #2: X
lmis = getlmis
To compute a solution xfeas, call feasp by
[tmin,xfeas] = feasp(lmis)
A
0.5 0.2 –
0.1 0.7 – \ .

 
. =
evallmi
930
The result is
tmin =
4.7117e+00
xfeas' =
1.1029e+02 1.1519e+01 1.1942e+02
The LMI constraints are therefore feasible since tmin < 0. The solution X
corresponding to the feasible decision vector xfeas would be given by
X = dec2mat(lmis,xfeas,X).
To check that xfeas is indeed feasible, evaluate all LMI constraints by typing
evals = evallmi(lmis,xfeas)
The left and righthand sides of the first and second LMIs are then given by
[lhs1,rhs1] = showlmi(evals,1)
[lhs2,rhs2] = showlmi(evals,2)
and the test
eig(lhs1rhs1)
ans =
8.2229e+01
5.8163e+01
confirms that the first LMI constraint is satisfied by xfeas.
See Also showlmi, setmvar, dec2mat, mat2dec
feasp
931
9feasp
Purpose Find a solution to a given system of LMIs
Syntax [tmin,xfeas] = feasp(lmisys,options,target)
Description The function feasp computes a solution xfeas (if any) of the system of LMIs
described by lmisys. The vector xfeas is a particular value of the decision
variables for which all LMIs are satisfied.
Given the LMI system
(97)
xfeas is computed by solving the auxiliary convex program:
Minimize t subject to .
The global minimum of this program is the scalar value tmin returned as first
output argument by feasp. The LMI constraints are feasible if tmin ð 0 and
strictly feasible if tmin < 0. If the problem is feasible but not strictly feasible,
tmin is positive and very small. Some postanalysis may then be required to
decide whether xfeas is close enough to feasible.
The optional argument target sets a target value for tmin. The optimization
code terminates as soon as a value of t below this target is reached. The default
value is target = 0.
Note that xfeas is a solution in terms of the decision variables and not in terms
of the matrix variables of the problem. Use dec2mat to derive feasible values of
the matrix variables from xfeas.
Control
Parameters
The optional argument options gives access to certain control parameters for
the optimization algorithm. This fiveentry vector is organized as follows:
•options(1) is not used
•options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default)
•options(3) resets the feasibility radius. Setting options(3) to a value R >
0 further constrains the decision vector x = (x
1
, . . ., x
N
) to lie within the ball
N
T
LxN M
T
≤ R x ( )M,
N
T
L x ( )N M
T
R x ( ) – M tI ≤
feasp
932
In other words, the Euclidean norm of xfeas should not exceed R. The
feasibility radius is a simple means of controlling the magnitude of solutions.
Upon termination, feasp displays the fradius saturation, that is, the norm
of the solution as a percentage of the feasibility radius R.
The default value is R = 10
9
. Setting options(3) to a negative value
activates the “flexible bound” mode. In this mode, the feasibility radius is
initially set to 10
8
, and increased if necessary during the course of
optimization
•options(4) helps speed up termination. When set to an integer value J > 0,
the code terminates if t did not decrease by more than one percent in relative
terms during the last J iterations. The default value is 10. This parameter
trades off speed vs. accuracy. If set to a small value (< 10), the code
terminates quickly but without guarantee of accuracy. On the contrary, a
large value results in natural convergence at the expense of a possibly large
number of iterations.
•options(5) = 1 turns off the trace of execution of the optimization
procedure. Resetting options(5) to zero (default value) turns it back on.
Setting option(i) to zero is equivalent to setting the corresponding control
parameter to its default value. Consequently, there is no need to redefine the
entire vector when changing just one control parameter. To set the maximum
number of iterations to 10, for instance, it suffices to type
options=zeros(1,5) % default value for all parameters
options(2)=10
Memory
Problems
When the leastsquares problem solved at each iteration becomes ill
conditioned, the feasp solver switches from Choleskybased to QRbased linear
algebra (see“Memory Problems” on page 974 for details). Since the QR mode
typically requires much more memory, MATLAB may run out of memory and
display the message
??? Error using ==> feaslv
Out of memory. Type HELP MEMORY for your options.
x
i
2
i 1 =
N
∑
R
2
<
feasp
933
You should then ask your system manager to increase your swap space or, if no
additional swap space is available, set options(4) = 1. This will prevent
switching to QR and feasp will terminate when Cholesky fails due to
numerical instabilities.
Example Consider the problem of finding P > I such that
(98)
(99)
(910)
with data
This problem arises when studying the quadratic stability of the polytope of
matrices Co{A
1
, A
2
, A
3
} (see“Quadratic Stability” on page 36 for details).
To assess feasibility with feasp, first enter the LMIs (98)–(910) by:
setlmis([])
p = lmivar(1,[2 1])
lmiterm([1 1 1 p],1,a1,'s') % LMI #1
lmiterm([2 1 1 p],1,a2,'s') % LMI #2
lmiterm([3 1 1 p],1,a3,'s') % LMI #3
lmiterm([4 1 1 p],1,1) % LMI #4: P
lmiterm([4 1 1 0],1) % LMI #4: I
lmis = getlmis
Then call feasp to find a feasible decision vector:
[tmin,xfeas] = feasp(lmis)
A
1
T
P PA
1
+ 0 <
A
2
T
P PA
2
+ 0 <
A
3
T
P PA
3
+ 0 <
A
1
1 – 2
1 3 – \ .

 
= , A
2
0.8 – 1.5
1.3 2.7 – \ .

 
= , A
3
1.4 – 0.9
0.7 2.0 – \ .

 
=
feasp
934
This returns tmin = 3.1363. Hence (98)–(910) is feasible and the dynamical
system = A(t)x is quadratically stable for A(t) ∈ Co{A
1
, A
2
, A
3
}.
To obtain a Lyapunov matrix P proving the quadratic stability, type
P = dec2mat(lmis,xfeas,p)
This returns
It is possible to add further constraints on this feasibility problem. For
instance, you can bound the Frobenius norm of P by 10 while asking tmin to be
less than or equal to –1. This is done by
[tmin,xfeas] = feasp(lmis,[0,0,10,0,0],1)
The third entry 10 of options sets the feasibility radius to 10 while the third
argument 1 sets the target value for tmin. This yields tmin = 1.1745 and a
matrix P with largest eigenvalue λ
max
(P) = 9.6912.
Reference The feasibility solver feasp is based on Nesterov and Nemirovski’s Projective
Method described in
Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp.
840–844.
The optimization is performed by the CMEX file feaslv.mex.
See Also mincx, gevp, dec2mat
x
·
P
270.8 126.4
126.4 155.1 \ .

 
=
getdg
935
9getdg
Purpose Retrieve the D, G scaling matrices in problems
Syntax D = getdg(ds,i)
G = getdg(gs,i)
Description The function mustab computes the mixedµ upper bound over a grid of
frequencies fs and returns the corresponding D, G scalings compacted in two
matrices ds and gs. The values of D, G at the frequency fs(i) are extracted
from ds and gs by getdg.
See Also mustab
getlmis
936
9getlmis
Purpose Get the internal description of an LMI system
Syntax lmisys = getlmis
Description After completing the description of a given LMI system with lmivar and
lmiterm, its internal representation lmisys is obtained with the command
lmisys = getlmis
This MATLAB representation of the LMI system can be forwarded to the LMI
solvers or any other LMILab function for subsequent processing.
See Also setlmis, lmivar, lmiterm, newlmi
gevp
937
9gevp
Purpose Generalized eigenvalue minimization under LMI constraints
Syntax [lopt,xopt] = gevp(lmisys,nlfc,options,linit,xinit,target)
Description gevp solves the generalized eigenvalue minimization problem
Minimize λ subject to:
(911)
(912)
(913)
where C(x) < D(x) and A(x) < λB(x) denote systems of LMIs. Provided that
(911)–(912) are jointly feasible, gevp returns the global minimum lopt and
the minimizing value xopt of the vector of decision variables x. The
corresponding optimal values of the matrix variables are obtained with
dec2mat.
The argument lmisys describes the system of LMIs (911)–(913) for λ = 1. The
LMIs involving λ are called the linearfractional constraints while (911)–(912)
are regular LMI constraints. The number of linearfractional constraints (913)
is specified by nlfc. All other input arguments are optional. If an initial
feasible pair (λ
0
, x
0
) is available, it can be passed to gevp by setting linit to λ
0
and xinit to x
0
. Note that xinit should be of length decnbr(lmisys) (the
number of decision variables). The initial point is ignored when infeasible.
Finally, the last argument target sets some target value for λ. The code
terminates as soon as it has found a feasible pair (λ, x) with λ ð target.
Caution When setting up your gevp problem, be cautious to
•Always specify the linearfractional constraints (913) last in the LMI
system. gevp systematically assumes that the last nlfc LMI constraints are
linear fractional
•Add the constraint B(x) > 0 or any other LMI constraint that enforces it (see
Remark below). This positivity constraint is required for regularity and
wellposedness of the optimization problem (see the discussion in
“WellPosedness Issues” on page 840).
C x ( ) D x ( ) <
0 B x ( ) <
A x ( ) λB x ( ) <
gevp
938
Control
Parameters
The optional argument options gives access to certain control parameters of
the optimization code. In gevp, this is a fiveentry vector organized as follows:
•options(1) sets the desired relative accuracy on the optimal value lopt
(default = 10
–2
).
•options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default).
•options(3) sets the feasibility radius. Its purpose and usage are as for
feasp.
•options(4) helps speed up termination. If set to an integer value J > 0, the
code terminates when the progress in λ over the last J iterations falls below
the desired relative accuracy. By progress, we mean the amount by which λ
decreases. The default value is 5 iterations.
•options(5) = 1 turns off the trace of execution of the optimization
procedure. Resetting options(5) to zero (default value) turns it back on.
Setting option(i) to zero is equivalent to setting the corresponding control
parameter to its default value.
Example Given
consider the problem of finding a single Lyapunov function V(x) = x
T
Px that
proves stability of
= A
i
x (i = 1, 2, 3)
and maximizes the decay rate . This is equivalent to minimizing
α subject to
(914)
(915)
A
1
1 – 2
1 3 – \ .

 
= , A
2
0.8 – 1.5
1.3 2.7 – \ .

 
= , A
3
1.4 – 0.9
0.7 2.0 – \ .

 
, =
x
·
dV x ( )
dt
 –
I P <
A
1
T
P PA
1
+ αP <
gevp
939
(916)
(917)
To set up this problem for gevp, first specify the LMIs (915)–(917) with α = 1:
setlmis([]);
p = lmivar(1,[2 1])
lemiterm([1 1 1 0],1) % P > I : I
lemiterm([ 1 1 1 p],1,1) % P > I : P
lemiterm([2 1 1 p],1,a1,'s') % LFC # 1 (lhs)
lemiterm([ 2 1 1 p],1,1) % LFC # 1 (rhs)
lemiterm([3 1 1 p],1,a2,'s') % LFC # 2 (lhs)
lemiterm([ 3 1 1 p],1,1) % LFC # 2 (rhs)
lemiterm([4 1 1 p],1,a3,'s') % LFC # 3 (lhs)
lemiterm([ 4 1 1 p],1,1) % LFC # 3 (rhs)
lmis = getlmis
Note that the linear fractional constraints are defined last as required. To
minimize α subject to (915)–(917), call gevp by
[alpha,popt]=gevp(lmis,3)
This returns alpha = 0.122 as optimal value (the largest decay rate is
therefore 0.122). This value is achieved for
Remark Generalized eigenvalue minimization problems involve standard LMI
constraints (911) and linear fractional constraints (913). For wellposedness,
the positive definiteness of B(x) must be enforced by adding the constraint
B(x) > 0 to the problem. Although this could be done automatically from inside
the code, this is not desirable for efficiency reasons. For instance, the set of
constraints (912) may reduce to a single constraint as in the example above.
In this case, the single extra LMI “P > I” is enough to enforce positivity of all
A
2
T
P PA
2
+ αP <
A
3
T
P PA
3
+ αP <
P
5.58 8.35 –
8.35 – 18.64 \ .

 
=
gevp
940
linearfractional righthand sides. It is therefore left to the user to devise the
least costly way of enforcing this positivity requirement.
Reference The solver gevp is based on Nesterov and Nemirovski’s Projective Method
described in
Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
The optimization is performed by the CMEX file fpds.mex.
See Also dec2mat, decnbr, feasp, mincx
hinfgs
941
9hinfgs
Purpose Synthesis of gainscheduled H
∞
controllers
Syntax [gopt,pdK,R,S] = hinfgs(pdP,r,gmin,tol,tolred)
Description Given an affine parameterdependent plant
where the timevarying parameter vector p(t) ranges in a box and is measured
in real time, hinfgs seeks an affine parameterdependent controller
scheduled by the measurements of p(t) and such that
•K stabilizes the closedloop system
for all admissible parameter trajectories p(t)
•K minimizes the closedloop quadratic H
∞
performance from w to z.
The description pdP of the parameterdependent plant P is specified with psys
and the vector r gives the number of controller inputs and outputs (set
r=[p
2
,m2] if y ∈ R
p2
and u ∈ R
m2
). Note that hinfgs also accepts the polytopic
model of P returned, e.g., by aff2pol or sconnect.
hinfgs returns the optimal closedloop quadratic performance gopt and a
polytopic description of the gainscheduled controller pdK. To test if a
P
¹
¦
´
¦
¦
x
·
= A p ( )x B
1
p ( )w B
2
u + +
z = C
1
p ( )x D
11
p ( )w D
12
u + +
y = C
2
x D
21
w D
22
u + +
K
ζ
·
A
K
p ( )ζ B
K
p ( )y + =
u C
K
p ( )ζ D
K
p ( )y + =
¹
´
¦
P
u
y
w z
K
hinfgs
942
closedloop quadratic performance γ is achievable, set the third input gmin to γ.
The arguments tol and tolred control the required relative accuracy on gopt
and the threshold for order reduction (see hinflmi for details). Finally, hinfgs
also returns solutions R, S of the characteristic LMI system.
Controller
Implementation
The gainscheduled controller pdK is parametrized by p(t) and characterized by
the values K
Π
j
of at the corners ³
j
of the parameter box. The
command
Kj = psinfo(pdK,'sys',j)
returns the jth vertex controller K
Π
j
while
pv = psinfo(pdP,'par')
vertx = polydec(pv)
Pj = vertx(:,j)
gives the corresponding corner ³
j
of the parameter box (pv is the parameter
vector description).
The controller scheduling should be performed as follows. Given the
measurements p(t) of the parameters at time t,
3 Express p(t) as a convex combination of the ³
j
:
p(t) = α
1
³
1
+ . . .+ α
N
³
N
, α
j
Š 0,
This convex decomposition is computed by polydec.
4 Compute the controller statespace matrices at time t as the convex
combination of the vertex controllers K
Π
j
:
5 Use A
K
(t), B
K
(t), C
K
(t), D
K
(t) to update the controller statespace equations.
A
K
p ( ) B
K
p ( )
C
K
p ( ) D
K
p ( )
\ .


 
α
j
i 1 =
N
∑
1 =
A
K
t ( ) B
K
t ( )
C
K
t ( ) D
K
t ( )
\ .


 
α
j
i 1 =
N
∑
K
Π
j
. =
hinfgs
943
Reference Apkarian, P., P. Gahinet, and G. Becker, “SelfScheduled H
∞
Control of Linear
ParameterVarying Systems,” submitted to Automatica, October 1995.
Becker, G., Packard, P., “Robust Performance of LinearParametrically
Varying Systems Using ParametricallyDependent Linear Feedback,” Systems
and Control Letters, 23 (1994), pp. 205–215.
Packard, A., “Gain Scheduling via Linear Fractional Transformations,” Syst.
Contr. Letters, 22 (1994), pp. 79–92.
Example See “Design Example” on page 710.
See Also psys, pvec, pdsimul, polydec, hinflmi
hinflmi
944
9hinflmi
Purpose LMIbased H
∞
synthesis for continuoustime plants
Syntax gopt = hinflmi(P,r)
[gopt,K] = hinflmi(P,r)
[gopt,K,x1,x2,y1,y2] = hinflmi(P,r,g,tol,options)
Description hinflmi computes an internally stabilizing controller K(s) that minimizes the
closedloop H
∞
performance in the control loop
The H
∞
performance is the RMS gain of the closedloop transfer function
F (P, K) from w to z. This function implements the LMIbased approach to H
∞
synthesis. The SYSTEM matrix P contains a statespace realization of the plant
P(s) and the row vector r specifies the number of measurements and controls
(set r = [p
2
m2] when y ∈ R
p
2
and u ∈ R
m
2
).
The optimal H
∞
performance gopt and an optimal H
∞
controller K are returned
by
[gopt,K] = hinflmi(P,r)
Alternatively, a suboptimal controller K that guarantees F (P, K)
∞
< g is
obtained with the syntax
[gopt,K] = hinflmi(P,r,g)
The optional argument tol specifies the relative accuracy on the computed
optimal performance gopt. The default value is 10
–2
. Finally,
[gopt,K,x1,x2,y1,y2] = hinflmi(P,r)
also returns solutions R = x1 and S = y1 of the characteristic LMIs for γ = gopt
(see “Practical Considerations” on page 618 for details).
K(s)
P(s)
u
y
w
z
hinflmi
945
Control
Parameters
The optional argument options resets the following control parameters:
•options(1) is valued in [0, 1] with default value 0. Increasing its value
reduces the norm of the LMI solution R. This typically yields controllers with
slower dynamics in singular problems (rankdeficient D
12
). This also
improves closedloop damping when p
12
(s) has jωaxis zeros.
•options(2): same as options(1) for S and singular problems with
rankdeficient D
21
•when options(3) is set to ε > 0, reducedorder synthesis is performed
whenever
ρ(R
–1
S
–1
) Š (1 – ε) gopt
2
The default value is ε = 10
–3
Remark The LMIbased approach is directly applicable to singular plants without
preliminary regularization.
Reference Scherer, C., “H
∞
Optimization without Assumptions on Finite or Infinite
Zeros,” SIAM J. Contr. Opt., 30 (1992), pp. 143–166.
Gahinet, P., and P. Apkarian, “A Linear Matrix Inequality Approach to H
∞
Control,” Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.
Gahinet, P., “Explicit Controller Formulas for LMIbased H
∞
Synthesis,” in
Proc. Amer. Contr. Conf., 1994, pp. 2396–2400.
Iwasaki, T., and R.E. Skelton, “All Controllers for the General H
∞
Control
Problem: LMI Existence Conditions and StateSpace Formulas,” Automatica,
30 (1994), pp. 1307–1317.
See Also hinfric, hinfmix, dhinflmi, ltisys, slft
hinfmix
946
9hinfmix
Purpose Mixed H
2
/H
∞
synthesis with pole placement constraints
Syntax [gopt,h2opt,K,R,S] = hinfmix(P,r,obj,region,dkbnd,tol)
Description hinfmix performs multiobjective outputfeedback synthesis. The control
problem is sketched in Figure 9.1.
Figure 91: Mixed H
2
/H
∞
synthesis
If T
∞
(s) and T
2
(s) denote the closedloop transfer functions from w to z
∞
and z
2
,
respectively, hinfmix computes a suboptimal solution of the following
synthesis problem:
Design an LTI controller K(s) that minimizes the mixed H
2
/H
∞
criterion
subject to
•T
∞

∞
< γ
0
•T
2

2
< ν
0
•The closedloop poles lie in some prescribed LMI region D.
Recall that 
.

∞
and 
.

2
denote the H
∞
norm (RMS gain) and H
2
norm of transfer
functions. More details on the motivations and statement of this problem can
be found in “MultiObjective H• Synthesis” on page 515.
On input, P is the SYSTEM matrix of the plant P(s) and r is a threeentry vector
listing the lengths of z
2
, y, and u. Note that z
∞
and/or z
2
can be empty. The
K(s)
P(s)
u
y
w
z
∞
z
2
α T
∞ ∞
2
β T
2
2
2
+
hinfmix
947
fourentry vector obj = [γ
0
, ν
0
, α, β] specifies the H
2
/H
∞
constraints and
tradeoff criterion, and the remaining input arguments are optional:
•region specifies the LMI region for pole placement (the default region = []
is the open lefthalf plane). Use lmireg to interactively build the LMI region
description region
•dkbnd is a userspecified bound on the norm of the controller feedthrough
matrix D
K
(see the definition in “LMI Formulation” on page 516). The
default value is 100. To make the controller K(s) strictly proper, set dkbnd =
0.
•tol is the required relative accuracy on the optimal value of the tradeoff
criterion (the default is 10
–2
).
The function hinfmix returns guaranteed H
∞
and H
2
performances gopt and
h2opt as well as the SYSTEM matrix K of the LMIoptimal controller. You can
also access the optimal values of the LMI variables R, S via the extra output
arguments R and S.
A variety of mixed and unmixed problems can be solved with hinfmix (see list
in “The Function hinfmix” on page 520). In particular, you can use hinfmix to
perform pure pole placement by setting obj = [0 0 0 0]. Note that both z
∞
and z
2
can be empty in such case.
Example See the example in“H• Synthesis” on page 510.
Reference Chilali, M., and P. Gahinet, “H
∞
Design with Pole Placement Constraints: An
LMI Approach,” to appear in IEEE Trans. Aut. Contr., 1995.
Scherer, C., “Mixed H
2
H
∞
Control,” to appear in Trends in Control: A European
Perspective, volume of the special contributions to the ECC 1995.
See Also lmireg, hinflmi, msfsyn
hinfpar
948
9hinfpar
Purpose Extract the statespace matrices A, B
1
, B
2
, . . .from the SYSTEM matrix
representation of an H
∞
plant
Syntax [a,b1,b2,c1,c2,d11,d12,d21,d22] = hinfpar(P,r)
b1 = hinfpar(P,r,'b1')
Description In H
∞
control problems, the plant P(s) is specified by its statespace equations
= Ax + B
1
w + B
2
u
z = C
1
x + D
11
w + D
12
u
y = C
2
x + D
21
w + D
22
u
or their discretetime counterparts. For design purposes, this data is stored in
a single SYSTEM matrix P created by
P = ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22])
The function hinfpar retrieves the statespace matrices A, B
1
, B
2
, . . . from P.
The second argument r specifies the size of the D
22
matrix. Set r = [ny,nu]
where ny is the number of measurements (length of y) and nu is the number of
controls (length of u).
The syntax
hinfpar(P,r,'b1')
can be used to retrieve just the B
1
matrix. Here the third argument should be
one of the strings 'a', 'b1', 'b2',....
See Also hinfric, hinflmi, ltisys
x
·
hinfric
949
9hinfric
Purpose Riccatibased H
∞
synthesis for continuoustime plants
Syntax gopt = hinfric(P,r) [gopt,K] = hinfric(P,r)
[gopt,K,x1,x2,y1,y2,Preg] = hinfric(P,r,gmin,gmax,...
tol,options)
Description hinfric computes an H
∞
controller K(s) that internally stabilizes the control
loop
and minimizes the RMS gain (H
∞
performance) of the closedloop transfer
function T
wz
. The function hinfric implements the Riccatibased approach.
The SYSTEM matrix P contains a statespace realization
= Ax + B
1
w + B
2
u
z = C
1
x + D
11
w + D
12
u
y = C
2
x + D
21
w + D
22
u
of the plant P(s) and is formed by the command
P = ltisys(a,[b1 b2],[c1;c2],[d11 d12;d21 d22])
The row vector r specifies the dimensions of D
22
. Set r = [p
2
m2] when y ∈ R
p2
and u ∈ R
m2
.
To minimize the H
∞
performance, call hinfric with the syntax
gopt = hinfric(P,r)
K(s)
P(s)
u
y
w
z
x
·
hinfric
950
This returns the smallest achievable RMS gain from w to z over all stabilizing
controllers K(s). An H
∞
optimal controller K(s) with performance gopt is
computed by
[gopt,K] = hinfric(P,r)
The H
∞
performance can also be minimized within prescribed bounds gmin and
gmax and with relative accuracy tol by typing
[gopt,K] = hinfric(P,r,gmin,gmax,tol)
Setting both gmin and gmax to γ > 0 tests whether the H
∞
performance γ is
achievable and returns a controller K(s) such that T
wz

∞
< γ when γ is
achievable. Setting gmin or gmax to zero is equivalent to specifying no lower or
upper bound.
Finally, the full syntax
[gopt,K,x1,x2,y1,y2,Preg] = hinfric(P,r)
also returns the solutions X
∞
= x2/x1 and Y
∞
= y2/y1 of the H
∞
Riccati
equations for γ = gopt , as well as the regularized plant Preg used to compute
the controller in singular problems.
Given the controller SYSTEM matrix K returned by hinfric, a realization of the
closedloop transfer function from w to z is given by
Twz = slft(P,K)
Control
Parameters
hinfric automatically regularizes singular H
∞
problems and computes
reducedorder controllers when the matrix I – γ
–2
X
∞
Y
∞
is nearly singular. The
optional input options gives access to the control parameters that govern the
regularization and order reduction:
•options(1) controls the amount of regularization used when D
12
or D
21
is
rankdeficient. This tuning parameter is valued in [0,1] with default value 0.
Increasing options(1) augments the amount of regularization. This
generally results in less control effort and in controller statespace matrices
of smaller norm, possibly at the expense of a higher closedloop RMS gain.
•options(2) controls the amount of regularization used when P
12
(s) or P
21
(s)
has jωaxis zero(s). Such problems are regularized by replacing A by A + εI
where ε is a small positive number. Increasing options(2) increases ε and
improves the closedloop damping. Beware that making ε too large may
hinfric
951
destroy stabilizability or detectability when the plant P(s) includes shaping
filters with poles close to the imaginary axis. In such cases the optimal H
∞
performance will be infinite (see “Practical Considerations” on page 618 for
more details).
•when options(3) is set to ε > 0, reducedorder synthesis is performed
whenever
ρ(X
∞
Y
∞
) Š (1 – ε) gopt
2
The default value is ε = 10
–3
Remark Setting gmin = gmax = Inf computes the optimal LQG controller.
Example See the example in “H• Synthesis” on page 510.
Reference Doyle, J.C., Glover, K., Khargonekar, P., and Francis, B., “StateSpace
Solutions to Standard H
2
and H
∞
Control Problems,” IEEE Trans. Aut. Contr.,
AC–34 (1989), pp. 831–847.
Gahinet, P., and A.J. Laub, “Reliable Computation of flopt in Singular H
∞
Control,” in Proc. Conf. Dec. Contr., Lake Buena Vista, Fl., 1994, pp. 1527–
1532.
Gahinet, P., “Explicit Controller Formulas for LMIbased H
∞
Synthesis,” in
Proc. Amer. Contr. Conf., 1994, pp. 2396–2400.
See Also hinflmi, hinfmix, dhinfric, ltisys, slft
lmiedit
952
9lmiedit
Purpose Specify or display systems of LMIs as MATLAB expressions
Syntax lmiedit
Description lmiedit is a graphical user interface for the symbolic specification of LMI
problems. Typing lmiedit calls up a window with two editable text areas and
various pushbuttons. To specify an LMI system,
1 Give it a name (top of the window).
2 Declare each matrix variable (name and structure) in the upper half of the
window. The structure is characterized by its type (S for symmetric block
diagonal, R for unstructured, and G for other structures) and by an
additional structure matrix similar to the second input argument of lmivar.
Please use one line per matrix variable in the text editing areas.
3 Specify the LMIs as MATLAB expressions in the lower half of the window.
An LMI can stretch over several lines. However, do not specify more than
one LMI per line.
Once the LMI system is fully specified, you can perform the following
operations by pressing the corresponding pushbutton:
•Visualize the sequence of lmivar/lmiterm commands needed to describe this
LMI system (view commands buttons)
•Conversely, display the symbolic expression of the LMI system produced by
a particular sequence of lmivar/lmiterm commands (click the describe...
buttons)
•Save the symbolic description of the LMI system as a MATLAB string (save
button). This description can be reloaded later on by pressing the load
button
•Read a sequence of lmivar/lmiterm commands from a file (read button). The
matrix expression of the LMI system specified by these commands is then
displayed by clicking on describe the LMIs...
•Write in a file the sequence of lmivar/lmiterm commands needed to specify
a particular LMI system (write button)
•Generate the internal representation of the LMI system by pressing create.
The result is written in a MATLAB variable with the same name as the LMI
system
lmiedit
953
Remark Editable text areas have builtin scrolling capabilities. To activate the scroll
mode, click in the text area, maintain the mouse button down, and move the
mouse up or down. The scroll mode is only active when all visible lines have
been used.
Example An example and some limitations of lmiedit can be found in “The LMI Editor
lmiedit” on page 816.
See Also lmivar, lmiterm, newlmi, lmiinfo
lmiinfo
954
9lmiinfo
Purpose Interactively retrieve information about the variables and term content of
LMIs
Syntax lmiinfo(lmisys)
Description lmiinfo provides qualitative information about the system of LMIs lmisys.
This includes the type and structure of the matrix variables, the number of
diagonal blocks in the inner factors, and the term content of each block.
lmiinfo is an interactive facility where the user seeks specific pieces of
information. General LMIs are displayed as
N' * L(x) * N < M' * R(x) * M
where N,M denote the outer factors and L,R the left and right inner factors. If
the outer factors are missing, the LMI is simply written as
L(x) < R(x)
If its righthand side is zero, it is displayed as
N' * L(x) * N < 0
Information on the block structure and term content of L(x) and R(x) is also
available. The term content of a block is symbolically displayed as
C1 + A1*X2*B1 + B1'*X2*A1' + a2*X1 + x3*Q1
with the following conventions:
•X1, X2, x3 denote the problem variables. Uppercase X indicates matrix
variables while lowercase x indicates scalar variables. The labels 1,2,3 refer
to the first, second, and third matrix variable in the order of declaration.
•Cj refers to constant terms. Special cases are I and I (I = identity matrix).
•Aj, Bj denote the left and right coefficients of variable terms. Lowercase
letters such as a2 indicate a scalar coefficient.
•Qj is used exclusively with scalar variables as in x3*Q1.
The index j in Aj, Bj, Cj, Qj is a dummy label. Hence C1 may appear in
several blocks or several LMIs without implying any connection between the
corresponding constant terms. Exceptions to this rule are the notations
lmiinfo
955
A1*X2*A1' and A1*X2*B1 + B1'*X2'*A1' which indicate symmetric terms and
symmetric pairs in diagonal blocks.
Example Consider the LMI
where the matrix variables are X of Type 1, Y of Type 2, and z scalar. If this
LMI is described in lmis, information about X and the LMI block structure can
be obtained as follows:
lmiinfo(lmis)
LMI ORACLE

This is a system of 1 LMI with 3 variable matrices
Do you want information on
(v) matrix variables (l) LMIs (q) quit
?> v
Which variable matrix (enter its index k between 1 and 3) ? 1
X1 is a 2x2 symmetric block diagonal matrix
its (1,1)block is a full block of size 2

This is a system of 1 LMI with 3 variable matrices
Do you want information on
(v) matrix variables (l) LMIs (q) quit
?> l
Which LMI (enter its number k between 1 and 1) ? 1
This LMI is of the form
0 ð
2X – A +
T
YB B
T
Y
T
A I + + XC
C
T
X zI – \ .


 
lmiinfo
956
0 < R(x)
where the inner factor(s) has 2 diagonal block(s)
Do you want info on the right inner factor ?
(w) whole factor (b) only one block
(o) other LMI (t) back to top level
?> w
Info about the right inner factor
block (1,1) : I + a1*X1 + A2*X2*B2 + B2'*X2'*A2'
block (2,1) : A3*X1
block (2,2) : x3*A4
(w) whole factor (b) only one block
(o) other LMI (t) back to top level

This is a system of 1 LMI with 3 variable matrices
Do you want information on
(v) matrix variables (l) LMIs (q) quit
?> q
It has been a pleasure serving you!
Note that the prompt symbol is ?> and that answers are either indices or
letters. All blocks can be displayed at once with option (w), or you can prompt
for specific blocks with option (b).
Remark lmiinfo does not provide access to the numerical value of LMI coefficients.
See Also decinfo, lminbr, matnbr, decnbr
lminbr
957
9lminbr
Purpose Return the number of LMIs in an LMI system
Syntax k = lminbr(lmisys)
Description lminbr returns the number k of linear matrix inequalities in the LMI problem
described in lmisys.
See Also lmiinfo, matnbr
lmireg
958
9lmireg
Purpose Specify LMI regions for pole placement purposes
Syntax region = lmireg
region = lmireg(reg1,reg2,...)
Description lmireg is an interactive facility to specify the LMI regions involved in
multiobjective H
∞
synthesis with pole placement constraints (see msfsyn and
hinfmix). Recall that an LMI region is any convex subset D of the complex
plane that can be characterized by an LMI in z and , i.e.,
D = {z ∈ C : L + Mz + M
T
< 0}
for some fixed real matrices M and L = L
T
. This class of regions encompasses
half planes, strips, conic sectors, disks, ellipses, and any intersection of the
above.
Calling lmireg without argument starts an interactive query/answer session
where you can specify the region of your choice. The matrix region = [L, M] is
returned upon termination. This matrix description of the LMI region can be
passed directly to msfsyn or hinfmix for synthesis purposes.
The function lmireg can also be used to intersect previously defined LMI
regions reg1, reg2,.... The output region is then the [L, M] description of
the intersection of these regions.
See Also msfsyn, hinfmix
z
z
lmiterm
959
9lmiterm
Purpose Specify the term content of LMIs
Syntax lmiterm(termID,A,B,flag)
Description lmiterm specifies the term content of an LMI one term at a time. Recall that
LMI term refers to the elementary additive terms involved in the blockmatrix
expression of the LMI. Before using lmiterm, the LMI description must be
initialized with setlmis and the matrix variables must be declared with
lmivar. Each lmiterm command adds one extra term to the LMI system
currently described.
LMI terms are one of the following entities:
•outer factors
•constant terms (fixed matrices)
•variable terms AXB or AX
T
B where X is a matrix variable and A and B are
given matrices called the term coefficients.
When describing an LMI with several blocks, remember to specify only the
terms in the blocks on or below the diagonal (or equivalently, only the terms
in blocks on or above the diagonal). For instance, specify the blocks (1,1), (2,1),
and (2,2) in a twoblock LMI.
In the calling of limterm, termID is a fourentry vector of integers specifying
the term location and the matrix variable involved.
where positive p is for terms on the lefthand side of the pth LMI and
negative p i s for terms on the righthand side of the pth LMI.
Recall that, by convention, the lefthand side always refers to the smaller side
of the LMI. The index p is relative to the order of declaration and corresponds
to the identifier returned by newlimi.
termID (1)
+p
p
¹
´
¦
=
termID (2:3)
0 0 , [ ] for outer factors
i j , [ ] for terms in the i j , ( )th
block of the left or right inner factor
¹
¦
´
¦
¦
=
lmiterm
960
where x is the identifier of the matrix variable X as returned by lmivar.
The arguments A and B contain the numerical data and are set according to:
Note that identity outer factors and zero constant terms need not be specified.
The extra argument flag is optional and concerns only conjugated expressions
of the form
(AXB) + (AXB)
T
= AXB + B
T
X
(T)
A
T
in diagonal blocks. Setting flag = 's' allows you to specify such expressions
with a single lmiterm command. For instance,
lmiterm([1 1 1 X],A,1,'s')
adds the symmetrized expression AX + X
T
A
T
to the (1,1) block of the first LMI
and summarizes the two commands
lmiterm([1 1 1 X],A,1)
lmiterm([1 1 1 X],1,A')
Aside from being convenient, this shortcut also results in a more efficient
representation of the LMI.
Type of Term A B
outer factor N matrix value of N omit
constant term C matrix value of C omit
variable term
AXB or AX
T
B
matrix value of A
(1 if A is absent)
matrix value of B
(1 if B is absent)
termID (4)
0 for outer factors
x for variable terms AXB
x – for variable terms AX
T
B ¹
¦
´
¦
¦
=
lmiterm
961
Example Consider the LMI
where X
1
, X
2
are matrix variables of Types 2 and 1, respectively, and x
3
is a
scalar variable (Type 1).
After initializing the LMI description with setlmis and declaring the matrix
variables with lmivar, the terms on the lefthand side of this LMI are specified
by:
lmiterm([1 1 1 X2],2*A,A') % 2*A*X2*A'
lmiterm([1 1 1 x3],1,E) % x3*E
lmiterm([1 1 1 0],D*D') % D*D'
lmiterm([1 2 1 X1],1,B) % X1'*B
lmiterm([1 2 2 0],1) % I
Here X1, X2, X3 should be the variable identifiers returned by lmivar.
Similarly, the term content of the righthand side is specified by:
lmiterm([1 0 0 0],M) % outer factor M
lmiterm([1 1 1 X1],C,C','s') % C*X1*C'+C*X1'*C'
lmiterm([1 2 2 X2],f,1) % f*X2
Note that CX
1
C
T
+ C C
T
is specified by a single lmiterm command with the
flag 's' to ensure proper symmetrization.
See Also setlmis, lmivar, getlmis, lmiedit, newlmi
2AX
2
A
T
x
3
E – DD
T
+ B
T
X
1
X
1
T
B I –
\ .



 
M
T CX
1
C
T
CX
1
T
C
T
+ 0
0 f – X
2
\ .


 
M <
X
1
T
lmivar
962
9lmivar
Purpose Specify the matrix variables in an LMI problem
Syntax X = lmivar(type,struct)
[X,n,sX] = lmivar(type,struct)
Description lmivar defines a new matrix variable X in the LMI system currently described.
The optional output X is an identifier that can be used for subsequent reference
to this new variable.
The first argument type selects among available types of variables and the
second argument struct gives further information on the structure of X
depending on its type. Available variable types include:
type=1: Symmetric matrices with a blockdiagonal structure. Each diagonal
block is either full (arbitrary symmetric matrix), scalar (a multiple of the
identity matrix), or identically zero.
If X has R diagonal blocks, struct is an Rby2 matrix where
•struct(r,1) is the size of the rth block
•struct(r,2) is the type of the rth block (1 for full, 0 for scalar, 1 for zero
block).
type=2: Full mbyn rectangular matrix. Set struct = [m,n] in this case.
type=3: Other structures. With Type 3, each entry of X is specified as zero or
±x
n
where x
n
is the nth decision variable.
Accordingly, struct is a matrix of the same dimensions as X such that
•struct(i,j)=0 if X(i, j) is a hard zero
•struct(i,j)=n if X(i, j) = x
n
struct(i,j)= n if X(i, j) = –x
n
Sophisticated matrix variable structures can be defined with Type 3. To specify
a variable X of Type 3, first identify how many free independent entries are
involved in X. These constitute the set of decision variables associated with X.
If the problem already involves n decision variables, label the new free
variables as x
n+1
, . . ., x
n+p
. The structure of X is then defined in terms of
x
n+1
, . . ., x
n+p
as indicated above. To help specify matrix variables of Type 3,
lmivar optionally returns two extra outputs: (1) the total number n of scalar
decision variables used so far and (2) a matrix sX showing the entrywise
lmivar
963
dependence of X on the decision variables x
1
, . . ., x
n
. For more details, see
Example 2 below and “Structured Matrix Variables” on page 833.
Example 1 Consider an LMI system with three matrix variables X
1
, X
2
, X
3
such that
•X
1
is a 3 × 3 symmetric matrix (unstructured),
•X
2
is a 2 × 4 rectangular matrix (unstructured),
•X
3
=
where ∆ is an arbitrary 5 × 5 symmetric matrix, δ
1
and δ
2
are scalars, and I
2
denotes the identity matrix of size 2.
These three variables are defined by
setlmis([])
X1 = lmivar(1,[3 1]) % Type 1
X2 = lmivar(2,[2 4]) % Type 2 of dim. 2x4
X3 = lmivar(1,[5 1;1 0;2 0]) % Type 1
The last command defines X
3
as a variable of Type 1 with one full block of size
5 and two scalar blocks of sizes 1 and 2, respectively.
Example 2 Combined with the extra outputs n and sX of lmivar, Type 3 allows you to
specify fairly complex matrix variable structures. For instance, consider a
matrix variable X with structure
where X
1
and X
2
are 2by3 and 3by2 rectangular matrices, respectively. You
can specify this structure as follows:
1 Define the rectangular variables X
1
and X
2
by
setlmis([])
[X1,n,sX1] = lmivar(2,[2 3])
∆ 0 0
0 δ
1
0
0 0 δ
2
I
2
\ .



 
X
X
1
0
0 X
2 \ .


 
=
lmivar
964
[X2,n,sX2] = lmivar(2,[3 2])
The outputs sX1 and sX2 give the decision variable content of X
1
and X
2
:
sX1
sX1 =
1 2 3
4 5 6
sX2
sX2 =
7 8
9 10
11 12
For instance, sX2(1,1)=7 means that the (1,1) entry of X
2
is the seventh
decision variable.
2 Use Type 3 to specify the matrix variable X and define its structure in terms
of those of X
1
and X
2
:
[X,n,sX] = lmivar(3,[sX1,zeros(2);zeros(3),sX2])
The resulting variable X has the prescribed structure as confirmed by
sX
sX =
1 2 3 0 0
4 5 6 0 0
0 0 0 7 8
0 0 0 9 10
0 0 0 11 12
See Also setlmis, lmiterm, getlmis, lmiedit, skewdec, symdec, delmvar, setmvar
ltiss
965
9ltiss
Purpose Extract the statespace realization of a linear system from its SYSTEM matrix
description
Syntax [a,b,c,d,e] = ltiss(sys)
Description Given a SYSTEM matrix sys created with ltisys, this function returns the
statespace matrices A, B, C, D, E.
Example The system G(s) = is specified in transfer function form by
sys = ltisys('tf', 2,[1 1])
A realization of G(s) is then obtained as
[a,b,c,d] = ltiss(sys)
a =
1
b =
1
c =
2
d =
0
Important If the output argument e is not supplied, ltiss returns the realization
(E
–1
A,E
–1
B,C,D).
While this is immaterial for systems with E = I, this should be remembered
when manipulating descriptor systems.
See Also ltisys, ltitf, sinfo
2 –
s 1 +

ltisys
966
9ltisys
Purpose Pack the statespace realization (A, B, C, D, E) into a single SYSTEM matrix
Syntax sys = ltisys(a)
sys = ltisys(a,e)
sys = ltisys(a,b,c)
sys = ltisys(a,b,c,d)
sys = ltisys(a,b,c,d,e)
sys = ltisys('tf',num,den)
Description For continuoustime systems,
E = Ax + Bu, y = Cx + Du
or discretetime systems
Ex
k+1
= Ax
k
+ Bu
k
, y
k
= Cx
k
+ Du
k
,
ltisys stores A, B, C, D, E into a single matrix sys with structure
The upper right entry n gives the number of states (i.e., A ∈ R
n
×
n
). This data
structure is called a SYSTEM matrix. When omitted, d and e are set to the default
values D = 0 and E = I.
The syntax sys = ltisys(a) specifies the autonomous system = Ax while
sys = ltisys(a,e) specifies E = Ax. Finally, you can specify SISO systems
by their transfer function
G(s) =
The syntax is then
sys = ltisys('tf',num,den)
where num and den are the vectors of coefficients of the polynomials n(s) and
d(s) in descending order.
x
·
A j E I – ( ) + B
n
0
C D
0
0 Inf
.
.
.
x
·
x
·
n s ( )
d s ( )

ltisys
967
Examples The SYSTEM matrix representation of the descriptor system
is created by
sys = ltisys(1,2,1,0,eps)
Similarly, the SISO system with transfer function G(s) = is specified by
sys = ltisys('tf',[1 4],[1 7])
See Also ltiss, ltitf, sinfo
εx
·
x – 2u + =
y x =
¹
´
¦
s 4 +
s 7 +

ltitf
968
9ltitf
Purpose Compute the transfer function of a SISO system
Syntax [num,den] = ltitf(sys)
Description Given a singleinput/singleoutput system defined in the SYSTEM matrix sys,
ltitf returns the numerator and denominator of its transfer function
G(s) =
The output arguments num and den are vectors listing the coefficients of the
polynomials n(s) and d(s) in descending order. An error is issued if sys is not a
SISO system.
Example The transfer function of the system
= –x + 2u, y = x – 1
is given by
sys = ltisys(1,2,1,1)
[n,d] = ltitf(sys)
n =
1 1
d =
1 1
The vectors n and d represent the function
See Also ltisys, ltiss, sinfo
n s ( )
d s ( )

x
·
G s ( )
s – 1 +
s 1 +
  =
magshape
969
9magshape
Purpose Graphical specification of the shaping filters
Syntax magshape
Description magshape is a graphical user interface to construct the shaping filters needed
in loopshaping design. Typing magshape brings up a window where the
magnitude profile of each filter can be specified graphically with the mouse.
Once a profile is sketched, magshape computes an interpolating SISO filter and
returns its statespace realization. Several filters can be specified in the same
magshape window as follows:
1 First enter the filter names after Filter names. This tells magshape to write
the SYSTEM matrices of the interpolating filters in MATLAB variables with
these names.
2 Click on the button showing the name of the filter you are about to specify.
To sketch its profile, use the mouse to specify a few characteristic points that
define the asymptotes and the slope changes. For accurate fitting by a
rational filter, make sure that the slopes are integer multiples of ±20 dB/dec.
3 Specify the filter order after filter order and click on the fit data button
to perform the fitting. The magnitude response of the computed filter is then
drawn as a solid line and its SYSTEM matrix is written in the MATLAB
variable with the same name.
4 Adjustments can be made by moving or deleting some of the specified points.
Click on delete point or move point to activate those modes. You can also
increase the order for a better fit. Press fit data to recompute the
interpolating filter.
If some of the filters are already defined in the MATLAB environment, their
magnitude response is plotted when the button bearing their name is clicked.
This allows for easy modification of existing filters.
Remark See sderiv for the specification of nonproper shaping filters with a derivative
action.
Acknowledg
ment
The authors are grateful to Prof. Gary Balas and Prof. Andy Packard for
providing the cepstrum algorithm used for phase reconstruction.
See Also sderiv, frfit, mrfit
matnbr
970
9matnbr
Purpose Return the number of matrix variables in a system of LMIs
Syntax K = matnbr(lmisys)
Description matnbr returns the number K of matrix variables in the LMI problem described
by lmisys.
See Also decnbr, lmiinfo, decinfo
mat2dec
971
9mat2dec
Purpose Return the vector of decision variables corresponding to particular values of
the matrix variables
Syntax decvec = mat2dec(lmisys,X1,X2,X3,...)
Description Given an LMI system lmisys with matrix variables X
1
, . . ., X
K
and given
values X1,...,Xk of X
1
, . . ., X
K
, mat2dec returns the corresponding value
decvec of the vector of decision variables. Recall that the decision variables are
the independent entries of the matrices X
1
, . . ., X
K
and constitute the free
scalar variables in the LMI problem.
This function is useful, for example, to initialize the LMI solvers mincx or gevp.
Given an initial guess for X
1
, . . ., X
K
, mat2dec forms the corresponding vector
of decision variables xinit.
An error occurs if the dimensions and structure of X1,...,Xk are inconsistent
with the description of X
1
, . . ., X
K
in lmisys.
Example Consider an LMI system with two matrix variables X and Y such that
•X is A symmetric block diagonal with one 2by2 full block and one 2by2
scalar block
•Y is a 2by3 rectangular matrix
Particular instances of X and Y are
and the corresponding vector of decision variables is given by
decv = mat2dec(lmisys,X0,Y0)
decv'
ans =
1 3 1 5 1 2 3 4 5 6
X
0
1 3 0 0
3 1 – 0 0
0 0 5 0
0 0 0 5
\ .




 
, Y
0
1 2 3
4 5 6 \ .

 
= =
mat2dec
972
Note that decv is of length 10 since Y has 6 free entries while X has 4
independent entries due to its structure. Use decinfo to obtain more
information about the decision variable distribution in X and Y.
See Also dec2mat, decinfo, decnbr
mincx
973
9mincx
Purpose Minimize a linear objective under LMI constraints
Syntax [copt,xopt] = mincx(lmisys,c,options,xinit,target)
Description The function mincx solves the convex program
(918)
where x denotes the vector of scalar decision variables.
The system of LMIs (918) is described by lmisys. The vector c must be of the
same length as x. This length corresponds to the number of decision variables
returned by the function decnbr. For linear objectives expressed in terms of the
matrix variables, the adequate c vector is easily derived with defcx.
The function mincx returns the global minimum copt for the objective c
T
x, as
well as the minimizing value xopt of the vector of decision variables. The
corresponding values of the matrix variables is derived from xopt with
dec2mat.
The remaining arguments are optional. The vector xinit is an initial guess of
the minimizer xopt. It is ignored when infeasible, but may speed up compu
tations otherwise. Note that xinit should be of the same length as c. As for
target, it sets some target for the objective value. The code terminates as soon
as this target is achieved, that is, as soon as some feasible x such that
c
T
x ð target is found. Set options to [] to use xinit and target with the
default options.
Control
Parameters
The optional argument options gives access to certain control parameters of
the optimization code. In mincx, this is a fiveentry vector organized as follows:
•options(1) sets the desired relative accuracy on the optimal value lopt
(default = 10
–2
).
•options(2) sets the maximum number of iterations allowed to be performed
by the optimization procedure (100 by default).
•options(3) sets the feasibility radius. Its purpose and usage are as for
feasp.
minimize c
T
x subject to N
T
L x ( )N M
T
≤ R x ( )M
mincx
974
•options(4) helps speed up termination. If set to an integer value J > 0, the
code terminates when the objective c
T
x has not decreased by more than the
desired relative accuracy during the last J iterations.
•options(5) = 1 turns off the trace of execution of the optimization
procedure. Resetting options(5) to zero (default value) turns it back on.
Setting option(i) to zero is equivalent to setting the corresponding control
parameter to its default value. See feasp for more detail.
Tip for
SpeedUp
In LMI optimization, the computational overhead per iteration mostly comes
from solving a leastsquares problem of the form
where x is the vector of decision variables. Two methods are used to solve this
problem: Cholesky factorization of A
T
A (default), and QR factorization of A
when the normal equation becomes ill conditioned (when close to the solution
typically). The message
* switching to QR
is displayed when the solver has to switch to the QR mode.
Since QR factorization is incrementally more expensive in most problems, it is
sometimes desirable to prevent switching to QR. This is done by setting
options(4) = 1. While not guaranteed to produce the optimal value, this
generally achieves a good tradeoff between speed and accuracy.
Memory
Problems
QRbased linear algebra (see above) is not only expensiveLMI solvers,memory
shortagememory shortage in terms of computational overhead, but also in
terms of memory requirement. As a result, the amount of memory required by
QR may exceed your swap space for large problems with numerous LMI
constraints. In such case, MATLAB issues the error
??? Error using ==> pds
Out of memory. Type HELP MEMORY for your options.
You should then ask your system manager to increase your swap space or, if no
additional swap space is available, set options(4) = 1. This will prevent
x
min Ax b –
mincx
975
switching to QR and mincx will terminate when Cholesky fails due to
numerical instabilities.
Example See “Example 8.2” on page 823.
Reference The solver mincx implements Nesterov and Nemirovski’s Projective Method as
described in
Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in Convex
Programming: Theory and Applications, SIAM, Philadelphia, 1994.
Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear
Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp.
840–844.
The optimization is performed by the CMEX file pds.mex.
See Also defcx, dec2mat, decnbr, feasp, gevp
msfsyn
976
9msfsyn
Purpose Multimodel/multiobjective statefeedback synthesis
Syntax [gopt,h2opt,K,Pcl,X] = msfsyn(P,r,obj,region,tol)
Description Given an LTI plant P with statespace equations
msfsyn computes a statefeedback control u = Kx that
•Maintains the RMS gain (H
∞
norm) of the closedloop transfer function T∞
from w to z
∞
below some prescribed value
•Maintains the H
2
norm of the closedloop transfer function T
2
from w to z
2
below some prescribed value
•Minimizes an H
2
/H
∞
tradeoff criterion of the form
•Places the closedloop poles inside the LMI region specified by region (see
lmireg for the specification of such regions). The default is the open lefthalf
plane.
Set r = size(d22) and obj = [γ
0
, ν
0
, α, β] to specify the problem dimensions
and the design parameters γ
0
, ν
0
, α, and β. You can perform pure pole
placement by setting obj = [0 0 0 0]. Note also that z
∞
or z
2
can be empty.
On output, gopt and h2opt are the guaranteed H
∞
and H
2
performances, K is
the optimal statefeedback gain, Pcl the closedloop transfer function from w
to , and X the corresponding Lyapunov matrix.
The function msfsyn is also applicable to multimodel problems where P is a
polytopic model of the plant:
x
·
= Ax B
1
w B
2
u + +
z
∞
= C
1
x D
11
w D
12
u + +
z
2
= C
2
x D
22
u +
¹
¦
´
¦
¦
γ
0
0 >
υ
0
0 >
α T
∞ ∞
2
β T
2 2
2
+
z
∞
z
2
\ .

 
msfsyn
977
with timevarying statespace matrices ranging in the polytope
In this context, msfsyn seeks a statefeedback gain that robustly enforces the
specifications over the entire polytope of plants. Note that polytopic plants
should be defined with psys and that the closedloop system Pcl is itself
polytopic in such problems. Affine parameterdependent plants are also
accepted and automatically converted to polytopic models.
Example See “Design Example” on page 413.
See Also lmireg, psys, hinfmix
x
·
= A t ( )x B
1
t ( )w B
2
t ( )u + +
z
∞
= C
1
t ( )x D
11
t ( )w D
12
t ( )u + +
z
2
= C
2
t ( )x D
22
t ( )u +
¹
¦
´
¦
¦
A t ( ) B
1
t ( ) B
2
t ( )
C
1
t ( ) D
11
t ( ) D
12
t ( )
C
2
t ( ) 0 D
22
t ( )
\ .




 
Co
A
k
B
k
B
2k
C
1k
D
11k
D
12k
C
2k
0 D
22k \ .




 
: k 1 … K , , =
¹ )
¦ ¦
¦ ¦
´ `
¦ ¦
¦ ¦
¦ ¹
∈
mubnd
978
9mubnd
Purpose Compute the upper bound for the µ structured singular value
Syntax [mu,D,G] = mubnd(M,delta,target)
Description mubnd computes the mixedµ upper bound mu = ν
∆
(M) for the matrix M ∈ C
m
×
n
and the normbounded structured perturbation ∆ = diag(∆
1
, . . ., ∆n) (see
“Structured Singular Value” on page 317). The reciprocal of mu is a guaranteed
wellposedness margin for the algebraic loop
Specifically, this loop remains wellposed as long as
mu x σ
max
(∆
i
) < β
i
where β
i
is the specified bound on ∆
i
.
The uncertainty ∆ is described by delta (see ublock and udiag). The function
mubnd also returns the optimal scaling matrices D and G arising in the
computation of mu.
To test whether where τ > 0 is a given scalar, set the optional argument
target to τ .
See Also mustab, muperf
∆
M
w
q
mu τ ≤
muperf
979
9muperf
Purpose Estimate the robust H
∞
performance of uncertain dynamical systems
Syntax [grob,peakf] = muperf(P,delta,0,freqs)
[margin,peakf] = muperf(P,delta,g,freqs)
Description muperf estimates the robust H
∞
performance (worstcase RMS gain) of the
uncertain LTI system
where ∆(s) = diag(∆
1
(s), . . ., ∆
n
(s))
is a linear timeinvariant uncertainty quantified by norm bounds
σ
max
(∆
i
(jω)) < β
i
(ω) or sector bounds ∆
i
∈ {a, b}. The robust performance γ
rob
is
the smallest γ > 0 such that
y
L
2
< γ u
L
2
for all bounded input u(t) and instance ∆(s) of the uncertainty. It is finite if and
only if the interconnection is robustly stable.
muperf converts the robust performance estimation into a robust stability
problem for some augmented ∆ structure (see “Robust Performance” on
page 321). The input arguments P and delta contain the SYSTEM matrix of P(s)
and the description of the LTI uncertainty ∆(s) (see ublock and udiag). To
compute the robust H
∞
performance grob for the prescribed uncertainty delta,
call muperf with the syntax
[grob,peakf] = muperf(P,delta)
muperf returns grob = Inf when the interconnection is not robustly stable.
Alternatively,
[margin,peakf] = muperf(P,delta,g)
∆(s)
P(s)
u
y
muperf
980
estimates the robustness of a given H
∞
performance g, that is, how much
uncertainty ∆(s) can be tolerated before the RMS gain from u to y exceeds g.
Provided that
σ
max
(∆
i
(jω)) < margin × β
i
(ω)
in the normbounded case and
in the sectorbounded case, the RMS gain is guaranteed to remain below g.
With both syntaxes, peakf is the frequency where the worst RMS performance
or robustness margin is achieved. An optional vector of frequencies freqs to be
used for the µ upper bound evaluation can be specified as fourth input
argument.
See Also mustab, norminf, quadperf
∆
i
a b +
2

b a –
2
 – margin,
a b +
2
 ×
b a –
2
 + margin ×
¹ )
´ `
¦ ¹
∈
mustab
981
9mustab
Purpose Estimate the robust stability margin of uncertain LTI systems
Syntax [margin,peak,fs,ds,gs] = mustab(sys,delta,freqs)
Description mustab computes an estimate margin of the robust stability margin for the
interconnection
where G(s) is a given LTI system and ∆ = diag(∆
1
, . . ., ∆
n
) is a linear
timeinvariant uncertainty quantified by norm bounds σ
max
(∆i(jω)) < β
i
(ω) or
sector bounds ∆
i
∈ {a, b}. In the normbounded case, the stability of this
interconnection is guaranteed for all ∆(s) satisfying
σ
max
(∆
i
(jω)) < margin × β
i
(ω)
See “Robust Stability Analysis” on page 319 for the interpretation of margin in
the sectorbounded case.
The value margin is obtained by computing the mixedµ upper bound ν
∆
(G(jω
i
))
over a grid of frequencies ω
i
. The command
[margin,peak] = mustab(sys,delta)
returns the guaranteed stability margin margin and the frequency peakf
where the margin is weakest. The uncertainty ∆(s) is specified by delta (see
ublock and udiag). The optional third argument freqs is a usersupplied vector
of frequencies where to evaluate ν
∆
(G(jω)).
To obtain the D, G scaling matrices at all tested frequencies, use the syntax
[margin,peak,fs,ds,gs] = mustab(sys,delta)
Here fs is the vector of frequencies used to evaluate ν
∆
and the D, G scalings
at the frequency fs(i) are retrieved by
∆(s)
G(s)
mustab
982
D = getdg(ds,i)
G = getdg(gs,i)
Caution margin may overestimate the robust stability margin when the frequency grid
freqs is too coarse or when the µ upper bound is discontinuous due to real
parameter uncertainty. See “Caution” on page 321 for more details.
Example See “Example” on page 328.
See Also mubnd, muperf, ublock, udiag
newlmi
983
9newlmi
Purpose Attach an identifying tag to LMIs
Syntax tag = newlmi
Description newlmi adds a new LMI to the LMI system currently described and returns an
identifier tag for this LMI. This identifier can be used in lmiterm, showlmi, or
dellmi commands to refer to the newly declared LMI. Tagging LMIs is optional
and only meant to facilitate code development and readability.
Identifiers can be given mnemonic names to help keep track of the various
LMIs. Their value is simply the ranking of each LMI in the system (in the order
of declaration). They prove useful when some LMIs are deleted from the LMI
system. In such cases, the identifiers are the safest means of referring to the
remaining LMIs.
See Also setlmis, lmivar, lmiterm, getlmis, lmiedit, dellmi
norminf
984
9norminf
Purpose Compute the random meansquares (RMS) gain of continuoustime systems
Syntax [gain,peakf] = norminf(g,tol)
Description Given the SYSTEM matrix g of an LTI system with statespace equations
E = Ax + Bu
y = Cx + Du
and transfer function G(s) = D + C(sE – A)
–1
B, norminf computes the peak gain
of the frequency response G(jω). When G(s) is stable, this peak gain coincides
with the RMS gain or H
∞
norm of G, that is,
The function norminf returns the peak gain and the frequency at which it is
attained in gain and peakf, respectively. The optional input tol specifies the
relative accuracy required on the computed RMS gain (the default value is
10
–2
).
Example The peak gain of the transfer function
is given by
norminf(ltisys('tf',100,[1 0.01 100]))
ans =
1.0000e+03
Reference Boyd, S., V. Balakrishnan, and P. Kabamba, “A Bisection Method for
Computing the H
∞
Norm of a Transfer Matrix and Related Problems,” Math.
Contr. Sign. Syst., 2 (1989), pp. 207–219.
dx
dt

G
∞
sup σ
max
G jω ( ) ( ) =
ω
G
∞
sup
y
L
2
u
L
2
 =
u L
2
∈
u 0 ≠
G s ( )
100
s
2
0.01s 100 + +
 =
norminf
985
Bruisma, N.A., and M. Steinbuch, “A Fast Algorithm to Compute the H
∞
Norm
of a Transfer Function Matrix,” Syst. Contr. Letters, 14 (1990), pp. 287–293.
Robel, G., “On Computing the Infinity Norm,” IEEE Trans. Aut. Contr., AC–34
(1989), pp. 882–884.
See Also dnorminf, muperf, quadperf
norm2
986
9norm2
Purpose Compute the H
2
norm of a continuoustime LTI system
Syntax h2norm = norm2(g)
Description Given a stable LTI system
driven by a white noise w with unit covariance, norm2 computes the H
2
norm
(LQG performance) of G defined by
where G(s) = C(sI – A)
–1
B.
This norm is computed as
where P is the solution of the Lyapunov equation
AP + PA
T
+ BB
T
= 0
See Also norminf
G:
x
·
Ax Bw + =
y Cx =
¹
´
¦
G
2
2
lim E
1
T
  y
T
0
T
∫
t ( )y t ( )dt
¹ )
´ `
¦ ¹
=
1
2π
 G
H
∞ –
∞
∫
jω ( )G jω ( )dω =
T ∞ →
G
2
Trace CPC
T
( ) =
pdlstab
987
9pdlstab
Purpose Assess the robust stability of a polytopic or parameterdependent system
Syntax [tau,Q0,Q1,...] = pdlstab(pds,options)
Description pdlstab uses parameterdependent Lyapunov functions to establish the
stability of uncertain statespace models over some parameter range or
polytope of systems. Only sufficient conditions for the existence of such
Lyapunov functions are available in general. Nevertheless, the resulting
robust stability tests are always less conservative than quadratic stability tests
when the parameters are either timeinvariant or slowly varying.
For an affine parameterdependent system
E(p) = A(p)x + B(p)u
y = C(p)x + D(p)u
with p = (p
1
, . . ., p
n
) ∈ R
n
, pdlstab seeks a Lyapunov function of the form
V(x, p) = x
T
Q(p)
–1
x, Q(p) = Q
0
+ p
1
Q
1
+ . . .p
n
Q
n
such that dV(x, p)/dt < 0 along all admissible parameter trajectories. The
system description pds is specified with psys and contains information about
the range of values and rate of variation of each parameter p
i
.
For a timeinvariant polytopic system
E = Ax + Bu
y = Cx + Du
with
(919)
pdlstab seeks a Lyapunov function of the form
V(x, α) = x
T
Q(α)
–1
x, Q(α) = α
1
Q
1
+ . . .+ α
n
Q
n
such that dV(x, α)/dt < 0 for all polytopic decompositions (919).
x
·
x
·
A jE + B
C D \ .

 
α
i
A jE
i
+ B
i
C
i
D
i \ .


 
i 1 =
n
∑
, α
i
0 ≥ , α
i
i 1 =
n
∑
1 = , =
pdlstab
988
Several options and control parameters are accessible through the optional
argument options:
•Setting options(1)=0 tests robust stability (default)
•When options(2)=0, pdlstab uses simplified sufficient conditions for faster
running times. Set options(2)=1 to use the least conservative conditions
Remark For affine parameterdependent systems with timeinvariant parameters,
there is equivalence between the robust stability of
(920)
and that of the dual system
(921)
However, the second system may admit an affine parameterdependent
Lyapunov function while the first does not.
In such case, pdlstab automatically restarts and tests stability on the dual
system (921) when it fails on (920).
See Also quadstab, mustab
E p ( )x
·
A p ( )x =
E p ( )
T
z
·
A p ( )
T
z =
pdsimul
989
9pdsimul
Purpose Time response of a parameterdependent system along a given parameter
trajectory
Syntax pdsimul(pds,'traj',tf,'ut',xi,options)
[t,x,y] = pdsimul(pds,pv,'traj',tf,'ut',xi,options)
Description pdsimul simulates the time response of an affine parameterdependent system
E(p) = A(p)x + B(p)u
y = C(p)x + D(p)u
along a parameter trajectory p(t) and for an input signal u(t). The parameter
trajectory and input signals are specified by two time functions p=traj(t) and
u=ut(t). If 'ut' is omitted, the response to a step input is computed by default.
The affine system pds is specified with psys. The function pdsimul also accepts
the polytopic representation of such systems as returned by aff2pol(pds) or
hinfgs. The final time and initial state vector can be reset through tf and xi
(their respective default values are 5 seconds and 0). Finally, options gives
access to the parameters controlling the ODE integration (type help gear for
details).
When invoked without output arguments, pdsimul plots the output trajectories
y(t). Otherwise, it returns the vector of integration time points t as well as the
state and output trajectories x,y.
Example See “Design Example” on page 710.
See Also psys, pvec, gear
x
·
popov
990
9popov
Purpose Perform the Popov robust stability test
Syntax [t,P,S,N] = popov(sys,delta,flag)
Description popov uses the Popov criterion to test the robust stability of dynamical systems
with possibly nonlinear and/or timevarying uncertainty (see “The Popov
Criterion” on page 324 for details). The uncertain system must be described as
the interconnection of a nominal LTI system sys and some uncertainty delta
(see “NormBounded Uncertainty” on page 227). Use ublock and udiag to
specify delta.
The command
[t,P,S,N] = popov(sys,delta)
tests the robust stability of this interconnection. Robust stability is guaranteed
if t < 0. Then P determines the quadratic part x
T
Px of the Lyapunov function
and D and S are the Popov multipliers (see p. ?? for details).
If the uncertainty delta contains real parameter blocks, the conservatism of
the Popov criterion can be reduced by first performing a simple loop
transformation (see “Real Parameter Uncertainty” on page 325). To use this
refined test, call popov with the syntax
[t,P,S,N] = popov(sys,delta,1)
Example See “Design Example” on page 710.
See Also mustab, quadstab, pdlstab, ublock, aff2lft
psinfo
991
9psinfo
Purpose Inquire about polytopic or parameterdependent systems created with psys
Syntax psinfo(ps)
[type,k,ns,ni,no] = psinfo(ps)
pv = psinfo(ps,'par')
sk = psinfo(ps,'sys',k)
sys = psinfo(ps,'eval',p)
Description psinfo is a multiusage function for queries about a polytopic or
parameterdependent system ps created with psys. It performs the following
operations depending on the calling sequence:
•psinfo(ps) displays the type of system (affine or polytopic); the number k of
SYSTEM matrices involved in its definition; and the numbers of ns, ni, no of
states, inputs, and outputs of the system. This information can be optionally
stored in MATLAB variables by providing output arguments.
•pv = psinfo(ps,'par') returns the parameter vector description (for
parameterdependent systems only).
•sk = psinfo(ps,'sys',k) returns the kth SYSTEM matrix involved in the
definition of ps. The ranking k is relative to the list of systems syslist used
in psys.
•sys = psinfo(ps,'eval',p) instantiates the system for a given vector p of
parameter values or polytopic coordinates.
For affine parameterdependent systems defined by the SYSTEM matrices S
0
,
S
1
, . . ., S
n
, the entries of p should be real parameter values p
1
, . . ., p
n
and
the result is the LTI system of SYSTEM matrix
S(p) = S
0
+ p
1
S
1
+ . . .+ p
n
S
n
For polytopic systems with SYSTEM matrix ranging in
Co{S
1
, . . ., S
n
},
the entries of p should be polytopic coordinates p
1
, . . ., p
n
satisfying p
j
Š 0
and the result is the interpolated LTI system of SYSTEM matrix
See Also psys, ltisys
S
p
1
S
1
… p
n
S
n
+ +
p
1
… p
n
+ +
 =
psys
992
9psys
Purpose Specify polytopic or parameterdependent linear systems
Syntax pols = psys(syslist)
affs = psys(pv,syslist)
Description psys specifies statespace models where the statespace matrices can be
uncertain, timevarying, or parameterdependent.
Two types of uncertain statespace models can be manipulated in the LMI
Control Toolbox:
•Polytopic systems
E(t) = A(t)x + B(t)u
y = C(t)x + D(t)u
whose SYSTEM matrix takes values in a fixed polytope:
where S
1
, . . ., S
k
are given “vertex” systems and
Co{S
1
, . . ., S
k
} =
denotes the convex hull of S
1
, . . ., S
k
(polytope of matrices with vertices
S
1
, . . ., S
k
)
•Affine parameterdependent systems
E(p) = A(p)x + B(p)u
x
·
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S t ( )
S
1
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S
k
A t ( ) jE t ( ) + B t ( )
C t ( ) D t ( )
Co
A
1
jE
1
+ B
1
C
1
D
1
, . . .,
A
k
jE
k
+ B
k
C
k
D
k
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
∈
α
i
S
i
i 1 =
k
∑
: α
i
0 ≥ , α
i
i 1 =
k
∑
1 =
¹ )
¦ ¦
´ `
¦ ¦
¦ ¹
x
·
psys
993
y = C(p)x + D(p)u
where A(
.
); B(
.
), . . ., E(
.
) are fixed affine functions of some vector
p = (p
1
, . . ., p
n
) of real parameters, i.e.,
where S
0
, S
1
, . . ., S
n
are given SYSTEM matrices. The parameters p
i
can be
timevarying or constant but uncertain.
Both types of models are specified with the function psys. The argument
syslist lists the SYSTEM matrices S
i
characterizing the polytopic value set or
parameter dependence. In addition, the description pv of the parameter vector
(range of values and rate of variation) is required for affine parameter
dependent models (see pvec for details). Thus, a polytopic model with vertex
systems S
1
, . . ., S
4
is created by
pols = psys([s1,s2,s3,s4])
while an affine parameterdependent model with 4 real parameters is defined
by
affs = psys(pv,[s0,s1,s2,s3,s4])
The output is a structured matrix storing all the relevant information.
To specify the autonomous parameterdependent system
= A(p)x, A(p) = A
0
+ p
1
A
1
+ p
2
A
2
,
type
s0 = ltisys(a0)
s1 = ltisys(a1,0)
s2 = ltisys(a2,0)
ps = psys(pv,[s0 s1 s2])
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S
0
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S p ( )
A p ( ) jE p ( ) + B p ( )
C p ( ) D p ( )
=
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S
1
¹
)
¦
¦
¦
¦
¦
¦
¦
¦
´
`
¦
¦
¦
¦
¦
¦
¦
¦
¦
¹
S
n
A
0
jE
0
+ B
0
C
0
D
0
+ p
1
A
1
jE
1
+ B
1
C
1
D
1
+ . . . + p
n
A
n
jE
n
+ B
n
C
n
D
n
x
·
psys
994
Do not forget the 0 in the second and third commands. This 0 marks the
independence of the E matrix on p
1
, p
2
. Typing
s0 = ltisys(a0)
s1 = ltisys(a1)
s2 = ltisys(a2)
ps = psys(pv,[s0 s1 s2])
instead would specify the system
E(p) = A(p)x, E(p) = (1+p
1
+ p
2
)I, A(p) = A
0
+p
1
A
1
+p
2
A
2
Example See “Example 2.1” on page 221.
See Also psinfo, pvec, aff2pol, ltisys
x
·
pvec
995
9pvec
Purpose Specify the range and rate of variation of uncertain or timevarying parameters
Syntax pv = pvec('box',range,rates)
pv = pvec('pol',vertices)
Description pvec is used in conjunction with psys to specify parameterdependent systems.
Such systems are parametrized by a vector p = (p
1
, . . ., p
n
) of uncertain or
timevarying real parameters p
i
. The function pvec defines the range of values
and the rates of variation of these parameters.
The type 'box' corresponds to independent parameters ranging in intervals
The parameter vector p then takes values in a hyperrectangle of R
n
called the
parameter box. The second argument range is an nby2 matrix that stacks up
the extremal values and of each p
j
. If the third argument rates is
omitted, all parameters are assumed timeinvariant. Otherwise, rates is also
an nby2 matrix and its jth row specifies lower and upper bounds and
on :
Set = Inf and = Inf if p
j
(t) can vary arbitrarily fast or discontinuously.
The type 'pol' corresponds to parameter vectors p ranging in a polytope of the
parameter space R
n
. This polytope is defined by a set of vertices V
1
, . . ., V
n
corresponding to “extremal” values of the vector p. Such parameter vectors are
declared by the command
pv = pvec('pol',[v1,v2, . . ., vn])
where the second argument is the concatenation of the vectors v1,...,vn.
The output argument pv is a structured matrix storing the parameter vector
description. Use pvinfo to read the contents of pv.
Example Consider a problem with two timeinvariant parameters
p
1
∈ [–1, 2], p
2
∈ [20, 50]
p
j
p
j
p
j
≤ ≤
p
j
p
j
ν
j
ν
j
dp
j
dt

ν
j
dp
j
dt
 ν
j
≤ ≤
ν
j
ν
j
pvec
996
The corresponding parameter vector p = (p
1
, p
2
) is specified by
pv = pvec('box',[1 2;20 50])
Alternatively, this vector can be regarded as taking values in the rectangle
drawn in Figure 9.2. The four corners of this rectangle are the four vectors
Hence, you could also specify p by
pv = pvec('pol',[v1,v2,v3,v4])
Figure 92: Parameter box
See Also pvinfo, psys
v
1
1 –
20 \ .

 
= , v
2
1 –
50 \ .

 
= , v
3
2
20 \ .

 
= , v
4
2
50 \ .

 
=
1 2
20
50
p
1
p
2
pvinfo
997
9pvinfo
Purpose Describe a parameter vector specified with pvec
Syntax [typ,k,nv] = pvinfo(pv)
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)
vj = pvinfo(pv,'par',j)
p = pvinfo(pv,'eval',c)
Description pvec retrieves information about a vector p = (p
1
, . . ., p
n
) of real parameters
declared with pvec and stored in pv. The command pvinfo(pv) displays the
type of parameter vector ('box' or 'pol'), the number n of scalar parameters,
and for the type 'pol', the number of vertices used to specify the parameter
range.
For the type 'box':
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)
returns the bounds on the value and rate of variations of the jth real
parameter p
j
. Specifically,
,
For the type 'pol':
pvinfo(pv,'par',j)
returns the jth vertex of the polytope of R
n
in which p ranges, while
pvinfo(pv,'eval',c)
returns the value of the parameter vector p given its barycentric coordinates c
with respect to the polytope vertices (V
1
, . . .,V
k
). The vector c must be of length
k and have nonnegative entries. The corresponding value of p is then given by
See Also pvec, psys
pmin p
j
t ( ) pmax ≤ ≤ dpmin
dp
j
dt
 dpmax ≤ ≤
p
c
i
i 1 =
k
∑
V
i
c
i
i 1 =
k
∑
 =
quadperf
998
9quadperf
Purpose Compute the quadratic H
∞
performance of a polytopic or parameterdependent
system
Syntax [perf,P] = quadperf(ps,g,options)
Description The RMS gain of the timevarying system
(922)
is the smallest γ > 0 such that
(923)
for all input u(t) with bounded energy. A sufficient condition for (923) is the
existence of a quadratic Lyapunov function
V(x) = x
T
Px, P > 0
such that
Minimizing γ over such quadratic Lyapunov functions yields the quadratic H
∞
performance, an upper bound on the true RMS gain.
The command
[perf,P] = quadperf(ps)
computes the quadratic H
∞
performance perf when (922) is a polytopic or
affine parameterdependent system ps (see psys). The Lyapunov matrix P
yielding the performance perf is returned in P.
The optional input options gives access to the following task and control
parameters:
•If options(1)=1, perf is the largest portion of the parameter box where the
quadratic RMS gain remains smaller than the positive value g (for affine
parameterdependent systems only). The default value is 0
E t ( )x
·
A t ( )x = B t ( )u, y C t ( )x = D t ( )u + +
y
L
2
γ u
L
2
≤
u ∀ L
2
∈ ,
dV
dt
 y
T
y γ
2
u
T
u – + 0 <
quadperf
999
•If options(2)=1, quadperf uses the least conservative quadratic
performance test. The default is options(2)=0 (fast mode)
•options(3) is a userspecified upper bound on the condition number of P (the
default is 10
9
).
Example See “Example 3.4” on page 310.
See Also muperf, quadstab, psys
quadstab
9100
9quadstab
Purpose Quadratic stability of polytopic or affine parameterdependent systems
Syntax [tau,P] = quadstab(ps,options)
Description For affine parameterdependent systems
E(p) = A(p)x, p(t) = (p
1
(t), . . ., p
n
(t))
or polytopic systems
E(t) = A(t)x, (A, E) ∈ Co{(A
1
, E
1
), . . ., (A
n
, E
n
)},
quadstab seeks a fixed Lyapunov function V(x) = x
T
Px with P > 0 that
establishes quadratic stability (see “Quadratic Lyapunov Functions” on
page 33 for details). The affine or polytopic model is described by ps (see psys).
The task performed by quadstab is selected by options(1):
•if options(1)=0 (default), quadstab assesses quadratic stability by solving
the LMI problem
Minimize τ over Q = Q
T
such that
A
T
QE + EQA
T
< τI for all admissible values of (A, E)
Q > I
The global minimum of this problem is returned in tau and the system is
quadratically stable if tau < 0
•if options(1)=1, quadstab computes the largest portion of the specified
parameter range where quadratic stability holds (only available for affine
models). Specifically, if each parameter p
i
varies in the interval
p
i
∈ [p
i0
– δ
i
, p
i0
+ δ
i
],
quadstab computes the largest θ > 0 such that quadratic stability holds over
the parameter box
p
i
∈ [p
i0
– θδ
i
, p
i0
+ θδ
i
]
This “quadratic stability margin” is returned in tau and ps is quadratically
stable if tau Š 1.
Given the solution Q
opt
of the LMI optimization, the Lyapunov matrix P is
given by P = . This matrix is returned in P.
x
·
x
·
Q
opt
1 –
quadstab
9101
Other control parameters can be accessed through options(2) and
options(3):
•if options(2)=0 (default), quadstab runs in fast mode, using the least
expensive sufficient conditions. Set options(2)=1 to use the least
conservative conditions
•options(3) is a bound on the condition number of the Lyapunov matrix P.
The default is 10
9
.
Example See “Example 3.1” on page 37 and “Example 3.2” on page 38.
See Also pdlstab, mustab, decay, quadperf, psys
ricpen
9102
9ricpen
Purpose Solve continuoustime Riccati equations (CARE)
Syntax X = ricpen(H,L)
[X1,X2] = ricpen(H,L)
Description This function computes the stabilizing solution X of the Riccati equation
(924)
where E is an invertible matrix. The solution is computed by unitary deflation
of the associated Hamiltonian pencil
The matrices H and L are specified as input arguments. For solvability, the
pencil H – λL must have no finite generalized eigenvalue on the imaginary
axis.
When called with two output arguments, ricpen returns two matrices X
1
, X
2
such that has or thonormal columns, and X = X
2
is the stabilizing
solution of (924).
If X
1
is invertible, the solution X is returned directly by the syntax
X = ricpen(H,L).
ricpen is adapted from care and implements the generalized Schur method for
solving CAREs.
Reference Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” IEEE
Trans. Aut. Contr., AC–24 (1979), pp. 913–921.
Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and
Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746–
1754.
A
T
XE E
T
XA E
T
XFXE E (
T
XB – S)R
1 –
B (
T
XE S
T
) Q + + + + + 0 =
H λL –
A F B
Q – A –
T
S –
S
T
B
T
R
λ
E 0 0
0 E
T
0
0 0 0
– =
X
1
X
2
X
1
1 –
ricpen
9103
See Also care, dricpen
sadd
9104
9sadd
Purpose Parallel interconnection of linear systems
Syntax sys = sadd(g1,g2,...)
Description sadd forms the parallel interconnection
of the linear systems G
1
, . . ., G
n
. The arguments g1, g2,... are either SYSTEM
matrices (dynamical systems) or constant matrices (static gains). In addition,
one (and at most one) of these systems can be polytopic or parameter
dependent, in which case the resulting system sys is of the same nature.
In terms of transfer functions, sadd performs the sum
G(s) = G
1
(s) + G
2
(s) + . . .+ G
n
(s)
See Also smult, sdiag, sloop, ltisys
G
1
G
2
G
n
.
.
.
+
+
+
G
sbalanc
9105
9sbalanc
Purpose Numerically balance the statespace realization of a linear system
Syntax [a,b,c] = sbalanc(a,b,c,condnbr)
sys = sbalanc(sys,condnbr)
Description sbalanc computes a diagonal invertible matrix T that balances the statespace
realization (A, B, C) by reducing the magnitude of the offdiagonal entries of A
and by scaling the norms of B and C. The resulting realization is of the form
(T
–1
A
T
, T
–1
B, CT)
The optional argument condnbr specifies an upper bound on the condition
number of T. The default value is 10
8
.
Example Computing a realization of
with the function tf2ss yields
[a,b,c,d]=tf2ss([1 1 1],[1 1000 500^2])
a =
1000 250000
1 0
b =
1
0
c =
999 249999
d =
1
To reduce the numerical range of this data and improve numerical stability of
subsequent computations, you can balance this realization by
[a,b,c]=sbalanc(a,b,c)
a =
1000.00 3906.25
64.00 0
W s ( )
s
2
s 1 + +
s 500 + ( )
2
 =
sbalanc
9106
b =
64.00
0
c =
15.61 61.03
See Also ltisys
sconnect
9107
9sconnect
Purpose Specify general control structures or system interconnections
Syntax [P,r] = sconnect(inputs,outputs,Kin,G1in,g1,G2in,g2,...)
Description sconnect is useful in loopshaping problems to turn general control structures
into the standard linearfractional interconnection used in H
∞
synthesis. In
this context, sconnect returns:
•The SYSTEM matrix P of the standard H
∞
plant P(s)
•The vector r = [p2 m2] where p2 and m2 are the number of input and outputs
of the controller, respectively.
More generally, sconnect is useful to specify complex interconnections of LTI
systems (see Example 1 below).
General control loops or system interconnections are described in terms of
signal flows. Specifically, you must specify
•The exogenous inputs, i.e., what signals enter the loop or interconnection
•The ouput signals, i.e., the signals generated by the loop or interconnection
•How the inputs of each dynamical system relate to the exogenous inputs and
to the outputs of other systems.
The outputs of a system of name G are collectively denoted by G. To refer to
particular outputs, e.g., the second and third, use the syntax G(2:3).
The arguments of sconnect are as follows:
•The first argument inputs is a string listing the exogenous inputs. For
instance, inputs='r(2), d' specifies two inputs, a vector r of size 2 and a
scalar input d
•The second argument outputs is a string listing the outputs generated by the
control loop. Outputs are defined as combinations of the exogenous inputs
and the outputs of the dynamical systems. For instance, outputs='e=rS;
S+d' specifies two outputs e = r – y and y + d where y is the output of the
system of name S.
•The third argument Kin names the controller and specifies its inputs. For
instance, Kin='K:e' inserts a controller of name K and input e. If no name is
specified as in Kin='e', the default name K is given to the controller.
sconnect
9108
•The remaining arguments come in pairs and specify, for each known LTI
system in the loop, its input list Gkin and its SYSTEM matrix gk. The input list
is a string of the form
system name : input1 ; input2 ; ... ; input n
For instance, G1in='S: K(2);d' inserts a system called S whose inputs
consist of the second output of the controller K and of the input signal d.
Note that the names given to the various systems are immaterial provided that
they are used consistently throughout the definitions.
Remark One of the dynamical systems can be polytopic or affine parameterdependent.
In this case, sconnect always returns a polytopic model of the interconnection.
Example 1
This simple example illustrates the use of sconnect to compute
interconnections of systems without controller. The SYSTEM matrix of this
series interconnection is returned by
sys1 = ltisys('tf',1,[1 1])
sys2 = ltisys('tf',[1 0],[1 2])
S = sconnect('r','S2',[],'S1:r',sys1,'S2:S1',sys2)
Note that inputK is set to [] to mark the absence of controller. The same result
would be obtained with
S = smult(sys1,sys2)
r
y
1
s + 1
1
s + 2
sconnect
9109
Example 2
The standard plant associated with this simple tracking loop is given by
g = ltisys('tf',1,[1 1])
P = sconnect('r','y=G','C:ry','G:C',g)
Example 3
If the SYSTEM matrices of the system G and filters W
1
, W
2
, W
3
, W
d
, W
n
are stored
in the variables g, w1, w2, w3, wd, wn, respectively, the corresponding
standard plant P(s) is formed by
inputs = 'r;n;d'
outputs = 'W1;W2;W3'
Kin = 'K: e=ryWn'
W3in = 'W3: y=G+Wd'
1
s + 1
C(s)
r
e u y
+
−
r
+
−
K G
W
1
W
2
W
3
W
n
W
d
d
z
3
n
+
+
+
+
z
2
z
3
z
1
e
sconnect
9110
[P,r] = sconnect(inputs,outputs,Kin,'G:K',g,'W1:e',w1,...
'W2:K',w2,W3in,w3,'Wd:d',wd,'Wn:n',wn)
See Also ltisys, slft, ssub, sadd, smult
sderiv
9111
9sderiv
Purpose Apply proportionalderivative action to some inputs/outputs of an LTI system
Syntax dsys = sderiv(sys,chan,pd)
Description sderiv multiplies selected inputs and/or outputs of the LTI system sys by the
proportionalderivator ns + d. The coefficients n and d are specified by setting
pd = [n , d]
The second argument chan lists the input and output channels to be filtered by
ns + d. Input channels are denoted by their ranking preceded by a minus sign.
On output, dsys is the SYSTEM matrix of the corresponding interconnection. An
error is issued if the resulting system is not proper.
Example Consider a SISO loopshaping problem where the plant P(s) has three outputs
corresponding to the transfer functions S, KS, and T . Given the shaping filters
w
1
(s) = , w
2
(s) = 100, w
3
(s) = 1 + 0.01s,
the augmented plant associated with the criterion
is formed by
w1 = ltisys('tf',1,[1 0])
w2 = 100
paug = smult(p,sdiag(w1,w2,1))
paug = sderiv(paug,3,[0.01 1])
This last command multiplies the third output of P by the filter w
3
.
See Also ltisys, magshape
1
s

w
1
S
w
2
KS
w
3
T
\ .




 
∞
sdiag
9112
9sdiag
Purpose Append (concatenate) linear systems
Syntax g = sdiag(g1,g2,...)
Description sdiag returns the system obtained by stacking up the inputs and outputs of the
systems g1,g2,... as in the diagram.
If G
1
(s),G
2
(s), . . . are the transfer functions of g1,g2,..., the transfer function
of g is
The function sdiag takes up to 10 input arguments. One (and at most one) of
the systems g1,g2,... can be polytopic or parameterdependent, in which case
g is of the same nature.
.
.
.
G
1
G
n
u
1
y
1
u
n
y
n
u y
G
G s ( )
G
1
s ( ) 0 0
0 G
2
s ( ) 0
0 0
\ .



 
=
.
.
.
sdiag
9113
Example Let p be a system with two inputs u
1
, u
2
and three outputs y
1
, y
2
, y
3
. The
augmented system
with weighting filters
w
1
(s) = , w
2
(s) = 10, w
3
(s) =
is created by the commands
w1 = ltisys('tf',1,[1 0])
w3 = ltisys('tf',[100 0],[1 100])
paug = smult(p,sdiag(w1,10,w3))
See Also ltisys, ssub, sadd, smult, psys
P
w
1
w
2
w
3
u
1
u
2
y
1
y
2
y
3
y
1
˜
y
2
˜
y
3
˜
P
aug
1
s

100s
s 100 +

setlmis
9114
9setlmis
Purpose Initialize the description of an LMI system
Syntax setlmis(lmi0)
Description Before starting the description of a new LMI system with lmivar and lmiterm,
type
setlmis([])
to initialize its internal representation.
To add on to an existing LMI system, use the syntax
setlmis(lmi0)
where lmi0 is the internal representation of this LMI system. Subsequent
lmivar and lmiterm commands will then add new variables and terms to the
initial LMI system lmi0.
See Also getlmis, lmivar, lmiterm, newlmi
setmvar
9115
9setmvar
Purpose Instantiate a matrix variable and evaluate all LMI terms involving this matrix
variable
Syntax newsys = setmvar(lmisys,X,Xval)
Description setmvar sets the matrix variable X with identifier X to the value Xval. All terms
involving X are evaluated, the constant terms are updated accordingly, and X
is removed from the list of matrix variables. A description of the resulting LMI
system is returned in newsys.
The integer X is the identifier returned by lmivar when X is declared.
Instantiating X with setmvar does not alter the identifiers of the remaining
matrix variables.
The function setmvar is useful to freeze certain matrix variables and optimize
with respect to the remaining ones. It saves time by avoiding partial or
complete redefinition of the set of LMI constraints.
Example Consider the system
= Ax + Bu
and the problem of finding a stabilizing statefeedback law u = Kx where K is
an unknown matrix.
By the Lyapunov Theorem, this is equivalent to finding P > 0 and K such that
(A + BK)P + P(A + BK)
T
+ I < 0.
With the change of variable Y := KP, this condition reduces to the LMI
AP + PA
T
+ BY + Y
T
B
T
+ I < 0.
This LMI is entered by the commands
n = size(A,1) % number of states
ncon = size(B,2) % number of inputs
setlmis([])
P = lmivar(1,[n 1]) % P full symmetric
Y = lmivar(2,[ncon n]) % Y rectangular
lmiterm([1 1 1 P],A,1,'s') % AP+PA'
x
·
setmvar
9116
lmiterm([1 1 1 Y],B,1,'s') % BY+Y'B'
lmiterm([1 1 1 0],1) % I
lmis = getlmis
To find out whether this problem has a solution K for the particular Lyapunov
matrix P = I, set P to I by typing
news = setmvar(lmis,P,1)
The resulting LMI system news has only one variable Y = K. Its feasibility is
assessed by calling feasp:
[tmin,xfeas] = feasp(news)
Y = dec2mat(news,xfeas,Y)
The computed Y is feasible whenever tmin < 0.
See Also evallmi, delmvar
showlmi
9117
9showlmi
Purpose Return the left and righthand sides of an LMI after evaluation of all variable
terms
Syntax [lhs,rhs] = showlmi(evalsys,n)
Description For given values of the decision variables, the function evallmi evaluates all
variable terms in a system of LMIs. The left and righthand sides of the nth
LMI are then constant matrices that can be displayed with showlmi. If evalsys
is the output of evallmi, the values lhs and rhs of these left and righthand
sides are given by
[lhs,rhs] = showlmi(evalsys,n)
An error is issued if evalsys still contains variable terms.
Example See the description of evallmi.
See Also evallmi, setmvar
sinfo
9118
9sinfo
Purpose Return the number of states, inputs, and outputs of an LTI system
Syntax sinfo(sys)
[ns,ni,no] = sinfo(sys)
Description For a linear system
E = Ax + Bu
y = Cx + Du
specified in the SYSTEM matrix sys, sinfo returns the order ns of the system
(number of states) and the numbers ni and no of its inputs and outputs,
respectively. Note that ns, ni, and no are the lengths of the vectors x, u, y,
respectively.
Without output arguments, sinfo displays this information on the screen and
also indicates whether the system is in descriptor form (E ¦ I) or not.
See Also ltisys, ltiss, psinfo
x
·
sinv
9119
9sinv
Purpose Compute the inverse G(s)
–1
, if it exists, of a system G(s)
Syntax h = sinv(g)
Description Given a system g with transfer function
G(s) = D + C(sE – A)
–1
B
with D square and invertible, sinv returns the inverse system with transfer
function H(s) = G(s)
–1
. This corresponds to exchanging inputs and outputs:
y = G(s)u ⇔ u = H(s)y
See Also ltisys, sadd, smult, sdiag
slft
9120
9slft
Purpose Form the linearfractional interconnection of two timeinvariant systems
(Redheffer’s star product)
Syntax sys = slft(sys1,sys2,udim,ydim)
Description slft forms the linearfractional feedback interconnection
of the two systems sys1 and sys2 and returns a realization sys of the
closedloop transfer function from to . The optional arguments
udim and ydim specify the lengths of the vectors u and y. Note that u enters
sys1 while y is an output of this system.When udim and ydim are omitted, slft
forms one of the two interconnections:
w
1
z
1
u y
w
2
z
2
sys1
sys2
w
1
w
2 \ .


 
z
1
z
2 \ .


 
slft
9121
or
depending on which system has the larger number of inputs/outputs. An error
is issued if neither of these interconnections can be performed.
slft also performs interconnections of polytopic or affine
parameterdependent systems. An error is issued if the result is neither
polytopic nor affine parameterdependent.
u
y
sys1
sys2
w
1
z
1
u y
sys2
sys1
z
2 w
2
slft
9122
Example Consider the interconnection
where ∆(s), P(s), K(s) are LTI systems. A realization of the closedloop transfer
function from w to z is obtained as
clsys = slft(delta,slft(p,k))
or equivalently as
clsys = slft(slft(delta,p),k)
See Also ltisys, sloop, sconnect
u y
P(s)
K(s)
w
z
∆(s)
sloop
9123
9sloop
Purpose Form the feedback interconnection of two systems
Syntax sys = sloop(g1,g2,sgn)
Description sloop forms the interconnection
The output sys is a realization of the closedloop transfer function from r to y.
The third argument sgn specifies either negative feedback (sgn = 1) or
positive feedback (sgn = +1). The default value is 1.
In terms of the transfer functions G
1
(s) and G
2
(s), sys corresponds to the
transfer function (I – εG
1
G
2
)
–1
G
1
where ε =sgn.
Example The closedloop transfer function from r to y in
is obtained by
sys = sloop( ltisys('tf',1,[1 1]) , ltisys('tf',1,[1 0]))
[num,den] = ltitf(sys)
num =
r
y
+
sgn
G
1
G
2
r
y
+
−
1/(s+1)
1/s
sloop
9124
0 1.00 0.00
den =
1.00 1.00 1.00
See Also ltisys, slft, sconnect
smult
9125
9smult
Purpose Series interconnection of linear systems
Syntax sys = smult(g1,g2,...)
Description smult forms the series interconnection
The arguments g1, g2,... are SYSTEM matrices containing the statespace
data for the systems G
1
, . . ., G
n
. Constant matrices are also allowed as a
representation of static gains. In addition, one (and at most one) of the input
arguments can be a polytopic or affine parameterdependent model, in which
case the output sys is of the same nature.
In terms of transfer functions, this interconnection corresponds to the product
G(s) = G
n
(s) × . . . × G
1
(s)
Note the reverse order of the terms.
Example Consider the interconnection
where W
i
(s), W
0
(s) are given LTI filters and G is the affine
parameterdependent system
= θ
1
x + u
y = x – θ
2
u
parametrized by θ
1
, θ
2
. The parameterdependent system defined by this
interconnection is returned by
y
G
2
G
1
G
n
u
........
G
G
W
i
(s) W
o
(s)
x
·
smult
9126
sys = smult(wi,g,w0)
While the SYSTEM matrices wi and w0 are created with ltisys, the
parameterdependent system g should be specified with psys.
See Also sadd, sdiag, sloop, slft, ltisys
splot
9127
9splot
Purpose Plot the various frequency and time responses of LTI systems
Syntax splot(sys,type,xrange)
splot(sys,T,type,xrange)
Description The first syntax is for continuoustime systems
E = Ax + Bu
y = Cx + Du
The first argument sys is the SYSTEM matrix representation of a system. The
optional argument xrange is a vector of frequencies or times used to control the
xaxis and the number of points in the plot. Finally, the string type consists of
the first two characters of the diagram to be plotted. Available choices include:
Frequency Response
type plot
'bo' Bode plot
'sv' Singular value plot
'ny' Nyquist plot
'li' linlog Nyquist plot
'ni' Black/Nichols chart
Time Response
type plot
'st' Step response
'im' Impulse response
'sq' Response to a square signal
'si' Response to a sinusoidal signal
x
·
splot
9128
The syntax splot(sys,T,type,xrange) is used for discretetime systems, in
which case T is the sampling period in seconds.
Remark The Control System Toolbox is needed to run splot.
Example Consider the thirdorder SISO system with transfer function
To plot the Nyquist diagram of the frequency response G(jω), type
g = ltisys('tf',1,[1 0.001 1 0])
splot(g,'ny')
The resulting plot appears in Figure 93. Due to the integrator and poor
damping of the natural frequency at 1 rd/s, this plot is not very informative. In
such cases, the linlog Nyquist plot is more appropriate. Given the gain/phase
decomposition
G(jω) = γ(ω)e
jφ(ω)
of the frequency response, the linlog Nyquist plot is the curve described by
x = ρ(ω) cos ϕ(ω), y = ρ(ω) sin ϕ(ω)
where
This plot is drawn by the command
splot(g,'li')
and appears in Figure 94. The resulting aspect ratio is more satisfactory
thanks to the logarithmic scale for gains larger than one. A more complete
contour is drawn in Figure 95 by
splot(g,'li',logspace(3,2))
G s ( )
1
s s
2
0.001s 1 + + ( )
 =
p ω ( )
γω if γω 1 ≥
1 log
10
γω if γω 1 > +
¹
´
¦
=
splot
9129
Figure 93: splot(g,'ny')
Figure 94: splot(g,'li')
splot
9130
Figure 95: splot(g,'li',logspace(3,2))
See Also ltisys, bode, nyquist, sigma, step, impulse
spol
9131
9spol
Purpose Return the poles of an LTI system
Syntax poles = spol(sys)
Description The function spol computes the poles of the LTI system of SYSTEM matrix sys.
If (A, B, C, D, E) denotes the statespace realization of this system, the poles
are the generalized eigenvalues of the pencil (A, E) in general, and the
eigenvalues of A when E = I.
See Also ltisys, ltiss
sresp
9132
9sresp
Purpose Frequency response of a continuoustime system
Syntax resp = sresp(sys,f)
Description Given a continuoustime system G(s) of realization (A, B, C, D, E), sresp
computes its frequency response
G(jω) = D + C(jωE – A)
–1
B
at the frequency ω = f. The first input sys is the SYSTEM matrix of G.
See Also ltisys, splot, spol
ssub
9133
9ssub
Purpose Select particular inputs and outputs in a MIMO system
Syntax subsys = ssub(sys,inputs,outputs)
Description Given an LTI or polytopic/parameterdependent system sys, ssub extracts the
subsystem subsys mapping the inputs selected by inputs to the outputs
selected by outputs. The resulting subsystem has the same number of states
as sys even though some of these states may no longer be
controllable/observable from the remaining inputs and outputs.
Example If p is a system with 3 inputs and 6 outputs, a realization of the transfer
function from the input channels 1,3 to the output channels 4,5,6 is obtained
as:
ssub(p,[1,3],4:6)
The second and third arguments are the vectors of indices of the selected
inputs/outputs. Inputs and outputs are labeled 1,2,3,... starting from the top of
the block diagram:
See Also ltisys, psys
u
1
u
2
u
3
y
1
y
2
y
3
y
4
y
5
y
6
P
ublock
9134
9ublock
Purpose Specify the characteristics of an uncertainty block
Syntax block = ublock(dims,bound,type)
Description ublock is used to describe individual uncertainty blocks ∆i in a blockdiagonal
uncertainty operator
∆ = diag(∆
1
, . . ., ∆
n
)
Such operators arise in the linearfractional representation of uncertain
dynamical systems (see “How to Derive Such Models” on page 223).
The first argument dims specifies the dimensions of ∆i. Set dim = [m,n] if ∆
i
is
mbyn, i.e., has m outputs and n inputs. The second argument bound contains
the quantitative information about the uncertainty (norm bound or sector
bounds). Finally, type is a string specifying the dynamical and structural
characteristics of ∆
i
.
The string type consists of one or twoletter identifiers marking the
uncertainty block properties. Characteristic properties include:
•The dynamical nature: linear time invariant ('lti'), linear time varying
('ltv'), nonlinear memoryless ('nlm'), or arbitrary nonlinear ('nl'). The
default is 'lti'
•The structure: 'f' for full (unstructured) blocks and 's' for scalar blocks of
the form
∆
i
= δ
i
x I
r
i
Scalar blocks often arise when representing parameter uncertainty. They
are also referred to as repeated scalars. The default is 'f'
•The phase information: the block can be complexvalued ('c') or realvalued
('r'). Complexvalued means arbitrary phase. The default is 'c'.
For instance, a realvalued timevarying scalar block is specified by setting
type = 'tvsr'. The order of the property identifiers is immaterial.
Finally, the quantitative information can be of two types:
•Norm bound: here the second argument bound is either a scalar β for uniform
bounds,
ublock
9135
∆
∞
< β,
or a SISO shaping filter W(s) for frequencyweighted bounds W
–1
∆
∞
< 1,
that is,
σ
max
(∆(jω)) < W(jω) for all ω
•Sector bounds: set bound = [a b] to indicate that the response of ∆(
.
) lies in
the sector {a, b}. Valid values for a and b include Inf and Inf.
Example See the example on p. 225 of this manual.
See Also udiag, uinfo, slft, aff2lft
udiag
9136
9udiag
Purpose Form blockdiagonal uncertainty structures
Syntax delta = udiag(delta1,delta2,....)
Description The function udiag appends individual uncertainty blocks specified with
ublock and returns a complete description of the blockdiagonal uncertainty
operator
∆ = diag(∆
1
, ∆
2
, . . .)
The input arguments delta1, delta2,... are the descriptions of ∆
1
, ∆
2
, . . . .
If udiag is called with k arguments, the output delta is a matrix with k
columns, the kth column being the description of ∆
k
.
See Also ublock, udiag, slft, aff2lft
uinfo
9137
9uinfo
Purpose Display the characteristics of a blockdiagonal uncertainty structure
Syntax uinfo(delta)
Description Given the description delta of a blockdiagonal uncertainty operator
∆ = diag(∆
1
, . . ., ∆
n
).
as returned by ublock and udiag, the function uinfo lists the characteristics of
each uncertainty block ∆
i
.
See Also ublock, udiag
uinfo
9138
How to Contact The MathWorks:
www.mathworks.com comp.softsys.matlab support@mathworks.com suggest@mathworks.com bugs@mathworks.com doc@mathworks.com service@mathworks.com info@mathworks.com
Web Newsgroup Technical support Product enhancement suggestions Bug reports Documentation error reports Order status, license renewals, passcodes Sales, pricing, and general information Phone Fax Mail
5086477000 5086477001 The MathWorks, Inc. 3 Apple Hill Drive Natick, MA 017602098
For contact information about worldwide offices, see the MathWorks Web site. LMI Control Toolbox User’s Guide COPYRIGHT 1995 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc. FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by or for the federal government of the United States. By accepting delivery of the Program, the government hereby agrees that this software qualifies as "commercial" computer software within the meaning of FAR Part 12.212, DFARS Part 227.72021, DFARS Part 227.72023, DFARS Part 252.2277013, and DFARS Part 252.2277014. The terms and conditions of The MathWorks, Inc. Software License Agreement shall pertain to the government’s use and disclosure of the Program and Documentation, and shall supersede any conflicting contractual terms or conditions. If this license fails to meet the government’s minimum needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to MathWorks. MATLAB, Simulink, Stateflow, Handle Graphics, and RealTime Workshop are registered trademarks, and TargetBox is a trademark of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders.
Printing History: May 1995
First printing
New for Version 1
Contents
Preface
About the Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Introduction
1
Linear Matrix Inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Toolbox Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 LMIs and LMI Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 The Three Generic LMI Problems . . . . . . . . . . . . . . . . . . . . . . . 15 Further Mathematical Background . . . . . . . . . . . . . . . . . . . . . 19 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Uncertain Dynamical Systems
2
Linear TimeInvariant Systems . . . . . . . . . . . . . . . . . . . . . . . . 23 SYSTEM Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Time and Frequency Response Plots . . . . . . . . . . . . . . . . . . . 26 Interconnections of Linear Systems . . . . . . . . . . . . . . . . . . . . 29
i
Model Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 Uncertain StateSpace Models . . . . . . . . . . . . . . . . . . . . . . . . . Polytopic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Affine ParameterDependent Models . . . . . . . . . . . . . . . . . . . . Quantification of Parameter Uncertainty . . . . . . . . . . . . . . . . Simulation of ParameterDependent Systems . . . . . . . . . . . . . From Affine to Polytopic Models . . . . . . . . . . . . . . . . . . . . . . . . Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LinearFractional Models of Uncertainty . . . . . . . . . . . . . . . How to Derive Such Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . Specification of the Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . From Affine to LinearFractional Models . . . . . . . . . . . . . . . . .
214 214 215 217 219 220 221 223 223 226 232
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Robustness Analysis
3
Quadratic Lyapunov Functions . . . . . . . . . . . . . . . . . . . . . . . . 33 LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Quadratic Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 Maximizing the Quadratic Stability Region . . . . . . . . . . . . . . . . 38 Decay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Quadratic H∞ Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 ParameterDependent Lyapunov Functions . . . . . . . . . . . . 312 Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 µ Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Structured Singular Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . Robust Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Robust Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
317 317 319 321
The Popov Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 Real Parameter Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
ii
Contents
Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
StateFeedback Synthesis
4
MultiObjective StateFeedback . . . . . . . . . . . . . . . . . . . . . . . . 43 Pole Placement in LMI Regions . . . . . . . . . . . . . . . . . . . . . . . . 45 LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Extension to the MultiModel Case . . . . . . . . . . . . . . . . . . . . . . . 49 The Function msfsyn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
Synthesis of H∞ Controllers
5
H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Riccati and LMIBased Approaches . . . . . . . . . . . . . . . . . . . . . . 57 H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510 Validation of the ClosedLoop System . . . . . . . . . . . . . . . . . . . 513 MultiObjective H∞ Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . LMI Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . The Function hinfmix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . LoopShaping Design with hinfmix . . . . . . . . . . . . . . . . . . . . . .
515 516 520 520
iii
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
Loop Shaping
6
The LoopShaping Methodology . . . . . . . . . . . . . . . . . . . . . . . . 62 The LoopShaping Methodology . . . . . . . . . . . . . . . . . . . . . . . . 63 Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Specification of the Shaping Filters . . . . . . . . . . . . . . . . . . . . 610 Nonproper Filters and sderiv . . . . . . . . . . . . . . . . . . . . . . . . . . 612 Specification of the Control Structure . . . . . . . . . . . . . . . . . 614 Controller Synthesis and Validation . . . . . . . . . . . . . . . . . . . 616 Practical Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618 Loop Shaping with Regional Pole Placement . . . . . . . . . . . 619 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
Robust GainScheduled Controllers
7
GainScheduled Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Synthesis of GainScheduled H• Controllers . . . . . . . . . . . . . 77 Simulation of GainScheduled Control Systems . . . . . . . . . . 79 Design Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
iv
Contents
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
The LMI Lab
8
Background and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . 83 Overview of the LMI Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 Specifying a System of LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 A Simple Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 setlmis and getlmis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811 lmivar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811 lmiterm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813 The LMI Editor lmiedit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816 How It All Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818 Retrieving Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821 lmiinfo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821 lminbr and matnbr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821 LMI Solvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 822 From Decision to Matrix Variables and Vice Versa . . . . . . 828 Validating Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829 Modifying a System of LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . dellmi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dellmi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . setmvar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Advanced Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Structured Matrix Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . ComplexValued LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Specifying cTx Objectives for mincx . . . . . . . . . . . . . . . . . . . . . Feasibility Radius . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
830 830 830 831 833 833 835 838 839
v
. . SemiDefinite B(x) in gevp Problems . . 840 841 841 842 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .WellPosedness Issues . . . . . . . . . . . . . . . Efficiency and Complexity Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Solving M + PTXQ + QTXTP < 0 . . 844 Command Reference 9 List of Functions . . . . . . . . . . . . 98 vi Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 LMI Lab: Additional Facilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 LMI Lab: Specifying and Solving LMIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 H∞ Control and Loop Shaping . . . . .
Preface .
D. linear matrix inequalities. Haifa. His thesis is on the theory and applications of linear matrix inequalities in control. Prof. at INRIA Rocquencourt. Arkadi Nemirovski is with the Faculty of Industrial Engineering and Management at Technion. His research interests include robust control theory. France. and numerical software for control. and linear and largescale control and filtering theory. Prof. Alan J. Laub is with the Electrical and Computer Engineering Department of the University of California at Santa Barbara. Israel. Pascal Gahinet is a reserach fellow at INRIA Rocquencourt. France. computeraided control system design. scientific computation. mathematical software. complexity theory.Preface About the Authors Dr. His research interests are in numerical analysis. and nonparametric statistics. numerical linear algebra. USA. viii . His research interests include convex optimization. Mahmoud Chilali is completing his Ph.
Andy Sparks. to those we may have omitted. Goh. Carsten Scherer. Eric Feron. K. ix .C. Jason Ly. Ravi Prasanth. Markus Brandstetter. Apologies. The work of Pascal Gahinet was supported in part by INRIA. Anders Helmersson.Acknowledgments Acknowledgments The authors wish to express their gratitude to all colleagues who directly or indirectly contributed to the making of the LMI Control Toolbox. Michael Safonov. Ted Iwasaki. and Anca Ignat for their help and contribution. Special thanks to Pierre Apkarian. Gregory Becker. Matthew Lamont Tyler. Many thanks also to those who tested and helped refine the software. Jianbo Lu. Mario Rotea. Jim Tung. Hiroyuki Kajiwara. finally. including Bobby Bodenheimer. and John Wen. Roy Lurie. John Morris.
Preface x .
. . . . .1 Introduction Linear Matrix Inequalities . . . . . . . . . . . . . . 110 . . . . . . . 12 Toolbox Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 LMIs and LMI Problems . . . . 15 Further Mathematical Background . . . . . . . . . . . . . . . . . . . 14 The Three Generic LMI Problems . . . . 19 References . . . . . . .
However. Three factors make LMI techniques appealing: • A variety of design specifications and constraints can be expressed as LMIs.1 Introduction Linear Matrix Inequalities Linear Matrix Inequalities (LMIs) and LMI techniques have emerged as powerful design tools in areas ranging from control engineering to system identification and structural design. • For more experienced LMI users. a problem can be solved exactly by efficient convex optimization algorithms (the “LMI solvers”). • For users who occasionally need to solve LMI problems. Thanks to its efficient “structured” representation of LMIs. 12 . a Riccati equation. the “LMI Lab” (Chapter 8) offers a rich. This makes LMIbased design a valuable alternative to classical “analytical” methods. problems with a thousand design variables typically take over an hour on today's workstations. The LMI Control Toolbox is designed as an easy and progressive gateway to the new and fastgrowing field of LMIs: • For users mainly interested in applying LMI techniques to control design. • While most problems with multiple constraints or objectives lack analytical solutions in terms of matrix equations. it should be kept in mind that the complexity of LMI computations remains higher than that of solving. the “LMI Editor” and the tutorial introduction to LMI concepts and LMI solvers provide for quick and easy problem solving. The LMI Control Toolbox implements stateoftheart interiorpoint LMI solvers. While these solvers are significantly faster than classical convex optimization algorithms. the LMI Control Toolbox is geared to making the most out of such improvements. and fully programmable environment to develop customized LMIbased tools. • Once formulated in terms of LMIs. flexible. they often remain tractable in the LMI framework. say. the LMI Control Toolbox features a variety of highlevel tools for the analysis and design of multivariable feedback loops (see Chapters 2 through 7). research on LMI optimization is still very active and substantial speedups can be expected in the future. See [9] for a good introduction to LMI concepts. For instance.
) • Robustness analysis. the LMI Lab provides a generalpurpose and fully programmable environment to specify and solve virtually any LMI problem. polytopic. Note that the scope of this facility is by no means restricted to controloriented applications. etc. including mixed H2 /H∞ synthesis with regional pole placement constraints • Loopshaping design • Synthesis of robust gainscheduled controllers for timevarying parameterdependent systems For users interested in developing their own applications.and LMIbased techniques. including quadratic stability.Toolbox Features Toolbox Features The LMI Control Toolbox serves two purposes: • Provide stateoftheart tools for the LMIbased analysis and design of robust control systems • Offer a flexible and userfriendly environment to specify and solve general LMI problems (the LMI Lab) The control design tools can be used without a priori knowledge about LMIs or LMI solvers. parameterdependent. analysis. Various measures of robust stability and performance are implemented. 13 . • Multimodel/multiobjective statefeedback design • Synthesis of outputfeedback H∞ controllers via Riccati. These are dedicated tools covering the following applications of LMI techniques: • Specification and manipulation of uncertain dynamical systems (lineartime invariant. and Popov analysis. techniques based on parameterdependent Lyapunov functions.
1 Introduction LMIs and LMI Problems A linear matrix inequality (LMI) is any constraint of the form A ( x ) := A 0 + x 1 A 1 + … + x N A N < 0 where • x = (x1. As a result. . if any. AK(x) on its diagonal.. AK ( x ) < 0 where diag (A1(x). Note that a system of LMI constraints can be regarded as a single LMI since A1 ( x ) < 0 . . is equivalent to A ( x ) := diag ( A 1 ( x ). The LMI (11) is a convex constraint on x since A(y) < 0 and A(z) < 0 imply that +A y 2 z < 0 . . • Its solution set. .….e. respectively. is a convex subset of RN • Finding a solution x to (11). AK(x)) denotes the blockdiagonal matrix with A1(x). A K ( x ) ) < 0 . AN are given symmetric matrices • < 0 stands for “negative definite. . is a convex optimization problem. . . . .” i. . . the largest eigenvalue of A(x) is negative Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of (11) since they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0. . called the feasible set. (11) 14 . Hence multiple LMI constraints can be imposed on the vector of decision variables x without destroying convexity. it can be solved numerically with guarantees of finding a solution when one exists. . . . . . Convexity has an important consequence: even though (11) has no analytical solution in general. xN) is a vector of unknown scalars (the decision or optimization variables) • A0.
Minimizing a convex objective under LMI constraints is also a convex problem. . Xn) where L(. this LMI could be rewritten in the form (11). . but also for 15 . the linear objective minimization problem Minimize c x subject to A ( x ) < 0 plays an important role in LMIbased design. Many control problems and design specifications have LMI formulations [9]. xN as the independent scalar entries of X. the generalized eigenvalue minimization problem A ( x ) < λB ( x ) Minimize λ subject to B ( x ) > 0 C(x) < 0 T (14) (15) is quasiconvex and can be solved by similar techniques. . .B(x)). . . .LMIs and LMI Problems In most control applications. Xn. which is the approach taken in the LMI Lab.) are affine functions of some structured matrix variables X1. This is especially true for Lyapunovbased analysis and design. . Finally.) and R(. . A simple example is the Lyapunov inequality A X + XA < 0 T (12) where the unknown X is a symmetric matrix. . Xn) < R(X1. Defining x1. Yet it is more convenient and efficient to describe it in its natural form (12). . . . In particular. but rather in the form L(X1. . LMIs do not naturally arise in the canonical form (11). It owes its name to the fact that is related to the largest generalized eigenvalue of the pencil (A(x). . . The Three Generic LMI Problems Finding a solution x to the LMI system A(x) < 0 (13) is called the feasibility problem.
the stability of the dynamical system · x = Ax is equivalent to the feasibility of Find P = PT such that AT P + P A < 0. 27] • Robust stability in the face of sectorbounded nonlinearities (Popov criterion) [22. 9. matrix scaling problems. 16 . etc. 17.) [9] • Multimodel/multiobjective state feedback design [4. decay rate. A nonexhaustive list of problems addressed by LMI techniques includes the following: • Robust stability of systems with LTI uncertainty (µanalysis) [24. 14] • Multiobjective H∞ synthesis [17. 16] • Quadratic stability of differential inclusions [15. 28. 8] • Lyapunov stability of parameterdependent systems [12] • Input/state/output properties of LTI systems (invariant ellipsoids. H∞ control. 10. Further applications of LMIs arise in estimation. 2] • Control of stochastic systems [9] • Weighted interpolation problems [9] To hint at the principles underlying LMI design. structural design [6. optimal design. and so on. 21. 10] • Robust pole placement • Optimal LQG control [9] • Robust H∞ control [11. 18] • Design of robust gainscheduled controllers [5. 23. etc. let’s review the LMI formulations of a few typical design objectives. 13. 7]. covariance control.1 Introduction optimal LQG control. P > I. The main strength of LMI formulations is the ability to combine various design constraints or objectives in a numerically tractable manner. identification. Stability. 3.
LMIs and LMI Problems This can be generalized to linear differential inclusions (LDI) · x = A ( t )x where A(t) varies in the convex envelope of a set of LTI models: N n a i A i : a i ≥ 0. P > I. This gain is the global minimum of the following linear objective minimization problem [1.…. 25]. RMS gain .A n } = i=1 i = 1 ∑ ∑ A sufficient condition for the asymptotic stability of this LDI is the feasibility of Find P = PT such that A i P + PA i < 0. the randommeansquares (RMS) gain of a stable LTI system T · x = Ax + Bu y = Cx + Du is the largest input/output gain over all bounded inputs u(t). 26. Minimize γ over X = XT and γ such that T T A X + XA XB C T T B X –γ I D < 0 C D –γ I X > 0 LQG performance . for a stable LTI system · x = Ax + Bw G y = Cx 17 . ai = 1 A ( t ) ∈ Co { A 1 .
the LQG or H2 performance G2 is defined by 1 T T 2 G 2 : = lim E . T T 2 2 T T T 18 .Q such that AP + PA + BB < 0 Q CP >0 PC T P Again this is a linear objective minimization problem since the objective Trace (Q) is linear in the decision variables (free entries of P.1 Introduction where w is a white noise disturbance with unit covariance.Q).y ( t )y ( t )dt T → ∞ T 0 1 ∞ H = G ( jω )G ( jω )dω 2π – ∞ ∫ ∫ It can be shown that G 2 = inf { Trace ( CPC ) : AP + PA + BB < 0 } Hence G 2 is the global minimum of the LMI problem Minimize Trace (Q) over the symmetric matrices P.
xx (16) Such problems cannot be handled directly by interiorpoint methods which require strict feasibility of the LMI constraints. A simple example is T Minimize c x subject to x x ≥ 0. 19 . it matters when A(x) can be made negative semidefinite but not negative definite. A wellposed reformulation of (16) would be Minimize cTx subject to x Š 0. 19]. Keeping this subtlety in mind. For instance.Further Mathematical Background Further Mathematical Background Efficient interiorpoint algorithms are now available to solve the three generic LMI problems (13)–(15) defined in “The Three Generic LMI Problems” on page 15. we always use strict inequalities in this manual. this algorithm does not require an initial feasible point for the linear objective minimization problem (14) or the generalized eigenvalue minimization problem (15). The LMI Control Toolbox implements the Projective Algorithm of Nesterov and Nemirovski [20. In addition to its polynomialtime complexity. the number N(ε) of flops needed to compute an εaccurate solution is bounded by N(ε) ð M N3 log(V/ε) where M is the total row size of the LMI system. That is. N is the total number of scalar decision variables. and V is a datadependent scaling factor. While this distinction is immaterial in general. Some LMI problems are formulated in terms of inequalities rather than strict inequalities. These algorithms have a polynomialtime complexity. a variant of (14) is Minimize cTx subject to A(x) ð 0.
“Potential Reduction PolynomialTime Method for Truss Topology Design.O. J. 205215. Yang. and Q. pp.. Uchida.. Aut.“ IEEE Trans.” to appear in SIAM J. A. [5] Becker. [2] Apkarian. pp. M. Contr. 20262031. [9] Boyd.. “H∞ Design with Pole Placement Constraints: an LMI Approach. “Mixed H2 /H∞ Control with Pole Placement. G. “Structured and Simultaneous Lyapunov Functions for System Stability Problems. [4] Barmish. 1993. Conf.” Proc. Network Analysis. Contr. Contr. 1994..D. Also in Proc. P. Gahinet. “A Linear Matrix Inequality Approach to H∞ Control.” to appear in IEEE Trans. V.. Conf. pp. “Optimization Methods for Truss Geometry and Topology Design. 856860.P.” Int. and A. [10] Chilali. B. and G.” Int. P. and M. and P. Amer. 1994. Philadelphia. E. Robust and Nonlinear Contr. pp.. S.. 1994. Opt. [12] Gahinet.. Contr. pp. Nemirovski. Aut. P. Chilali. [3] Bambang. J. PrenticeHall.. 1973. Amer. Conf. Contr.. R.. Zowe. 27772779.“ Proc. Gahinet. Conf. pp. S..R. 23 (1994). [6] Bendsoe.. “Affine ParameterDependent Lyapunov Functions for Real Parametric Uncertainty. Dec. P. 421448. “SelfScheduled H∞ Control of Linear ParameterVarying Systems. and S. BenTal. Apkarian. “Stabilization of Uncertain Systems via Linear Control. Packard. Shimemura.. pp.. 22152240..” StateFeedback Case. Becker. P. and P. [8] Boyd. Contr. L. 1994. 848850. E. and K. SIAM books. M. Dec.” to appear in Structural Optimization. 49 (1989).” Proc.. 553558. B. and J. Vongpanitlerd. pp. Linear Matrix Inequalities in Systems and Control Theory. “Robust Performance of LinearParametrically Varying Systems Using ParametricallyDependent Linear Feedback. El Ghaoui.. Apkarian. Contr. Balakrishnan. Englewood Cliffs.1 Introduction References [1] Anderson.. [7] BenTal. [11] Gahinet. 4 (1994). Contr. 110 . P.” Systems and Control Letters. A. AC–28 (1983). Feron.
. pp. Dec. 1993.“ParameterDependent Lyapunov Functions. Rotea. [19] Nemirovski. “The Complex Structured Singular Value.E. pp. [15] Horisberger.” Automatica. pp.” submitted to Int. PrenticeHall. volume of the special contributions to the ECC 1995. and S. 1994. and R. Suda. 29 (1994). “Beyond Singular Values and Loop Shapes. 71109. P. 1994.” Proc.. 516. G. Yu.. 14 (1991). and M.M. and P.” Automation and Remote Control. pp. 1991.” to appear in Trends in Control: A European Perspective. pp. SIAM Books. “The Projective Method for Solving Linear Matrix Inequalities. Hall.” Proc. “All Controllers for the General H∞ Control Problem: LMI Existence Conditions and StateSpace Formulas. [24] Stein. “Regulators for Linear TimeVarying Plants with Uncertain Parameters. 10841089.. I. Amer. [17] Khargonekar.” Automatica. Skelton. and N. Englewood Cliffs. A. [16] How. A.. C. [25] Vidyasagar. Ohara. and A.References [13] Haddad.S. 22742279 and 26322633. [18] Masubuchi.P.P. pp. Berstein. Contr. 111 . pp. M.” IEEE Trans. [20] Nesterov. 705708. and J. Interior Point Polynomial Methods in Convex Programming: Theory and Applications.. Doyle.” J. Aut. Conf. J.R. 840844. Conf. Guidance. [23] Scherer. W.. Robust and Nonlinear Contr. “Absolute Stability of Nonlinear Systems of Automatic Control. Contr. and D. V.. 824837.” IEEE Trans. Conf. “Connection between the Popov Stability Criterion and Bounds for Real Parameter Uncertainty. [22] Popov.. 857875. and J. T. Contr. [14] Iwasaki.. pp. Doyle.M.C.” Proc. J.A.R. [21] Packard. Contr. Nonlinear System Analysis. H. Amer.C.P. Nemirovski. pp. 30 (1994). Aut. “LMIBased Controller Synthesis: A Unified Formulation and Solution. A.. 1992. and P. Constant Real Parameter Uncertainty. Philadelphia. 13071317.. and the Popov Criterion in Robust Analysis and Synthesis: Part 1 and 2. 39 (1991)... 22 (1962). Gahinet. AC–21 (1976). 1994.“Mixed H2 /H∞ Control: a Convex Optimization Approach. “Mixed H2 H∞ Control.. Belanger. Contr..
pp. Newlin. and J. Contr.. [28] Zames. G. “On the InputOutput Stability of TimeVarying Nonlinear Feedback Systems.C. Doyle. “Let's Get Real. 1994. [27] Young. pp.. Springer Verlag. Contr.1 Introduction [26] Willems. Part I and II. J. “LeastSquares Stationary Optimal Control and the Algebraic Riccati Equation. C. M.” in Robust Control Theory.. 228238 and 465476.” IEEE Trans. 143174. Aut. 621634. AC–11 (1966). 112 . P.” IEEE Trans.. Aut. M. AC–16 (1971). P. pp..
29 . . . . . . Simulation of ParameterDependent Systems From Affine to Polytopic Models . . . . .220 . Polytopic Models . .212 . . . . . . . .215 . . . . . . . . 23 Time and Frequency Response Plots . . . . . . . . . . . . . .235 . . . . . . . . . . .223 . . . . . . . . . . . . . . . . . . .223 . . .232 Uncertain StateSpace Models . . . . . References . . . . . . . . LinearFractional Models of Uncertainty How to Derive Such Models . . . . .2 Uncertain Dynamical Systems Linear TimeInvariant Systems . . . . Affine ParameterDependent Models . . . . . . . . . . . . . . . . . . . . . . . . .214 . .214 . . . . . . . . . .226 . . . . . Quantification of Parameter Uncertainty . . . . . . . . . . . . From Affine to LinearFractional Models . . . . . 26 Interconnections of Linear Systems Model Uncertainty . . . . . . . . . . . . .217 . . . .219 . 23 SYSTEM Matrix . . . . . . . . Example . . . . . . . .221 . . . . . . . Specification of the Uncertainty . . . . . . . . . . . . . . . . . . . . . . . . .
systems with uncertain or timevarying physical parameters.2 Uncertain Dynamical Systems The LMI Control Toolbox offers a variety of tools to facilitate the description and manipulation of uncertain dynamical systems. etc. These include functions to: • Manipulate the statespace realization of linear timeinvariant (LTI) systems as a single SYSTEM matrix • Form interconnections of linear systems • Specify linear systems with uncertain statespace matrices (polytopic differential inclusions.) • Describe linearfractional models of uncertainty The next sections provide a tutorial introduction to uncertain dynamical systems and an overview of these facilities. 22 .
C. y(t) denote the state. These tools handle general LTI models of the form dx E .Linear TimeInvariant Systems Linear TimeInvariant Systems The LMI Control Toolbox provides streamlined tools to manipulate statespace representations of linear timeinvariant (LTI) systems. y = x. the secondorder system ·· · mx + f x + kx = u. the statespace realization of LTI systems is stored as a single MATLAB matrix called a SYSTEM matrix. as well as their discretetime counterpart Ex k + 1 = Ax k + Bu k y k = Cx k + Du k Recall that the vectors x(t).or 23 .= Ax ( t ) + Bu ( t ) dt y ( t ) = Cx ( t ) + Du ( t ) where A. 0 )X SYSTEM Matrix For convenience. B. Specifically. and output vectors at the sample time k. E are real matrices and E is invertible. 1 y = ( 1. many dynamical systems are naturally written in descriptor form. For instance. The “descriptor” formulation ( E ≠ I ) proves useful when specifying parameterdependent systems (see “Affine ParameterDependent Models” on page 215) and also avoids inverting E when this inversion is poorly conditioned. input. D. Similarly. uk. input. and output trajectories. xk. admits the statespace representation dX 0 1 . yk denote the values of the state. Moreover. u(t). · (t) x 1 0 0 m 0 X + u. a continuous.= dt –k –f where X ( t ) := x ( t ) .
0) specifies the LTI system · x = – x + u. C. A ∈ Rn×n) while the entry Inf is used to differentiate SYSTEM matrices from regular matrices. the command sys = ltisys('tf'. For instance.d] = ltiss(sys) For singleinput/singleoutput (SISO) systems. This data structure is similar to that used in the µ Analysis and Synthesis Toolbox.2 Uncertain Dynamical Systems discretetime LTI system with statespace matrices A. D.d) returns a statespace realization of the SISO transfer function n(s)/d(s) in SYSTEM format. D from the SYSTEM matrix sys. The upper right entry n corresponds to the number of states (i.n.. the function ltitf returns the numerator/denominator representation of the transfer function –1 n(s) G ( s ) = D + C ( sE – A ) B = d(s) Conversely. B. E is represented by the structured matrix A + j(E – I) B C 0 D n 0 0 –Inf where j = – 1 .b. y=x To retrieve the values of A. B. and outputs of a linear system are retrieved from the SYSTEM matrix with sinfo: … 24 .e.1. type [a. inputs.1. The functions ltisys and ltiss create SYSTEM matrices and extract statespace data from them. sys = ltisys( 1. The number of states.c. Here n and d are the vector representation of n(s) and d(s) (type help poly for details). C.
2:3) The function sinv computes the inverse H(s) = G(s)–1 of a system G(s) with square invertible D matrix: h = sinv(g) Finally. 25 .1. the subsystem mapping the first input to the second and third outputs is given by ssub(g. For instance. This function seeks a diagonal scaling similarity that reduces the norms of A. if G has two inputs and three outputs. the poles of the system are given by spol: spol(sys) ans = 1 The function ssub selects particular inputs and outputs of a system and returns the corresponding subsystem. the statespace realization of an LTI system can be “balanced” with sbalanc. B. 1 input(s). C. and 1 output(s) Similarly.Linear TimeInvariant Systems sinfo(sys) System with 1 state(s).
'bo') The second argument is the string consisting of the first two letters of “bode. 0.0. f = 0.01].5 0. For instance.5.” This command produces the plot of Figure 21. and k = 0.[1 0]. To plot its Bode diagram. 50 Gain dB 0 −50 −1 10 Frequency (rad/sec) 0 10 0 Phase deg −90 −180 10 −1 10 Frequency (rad/sec) 0 Figure 21: splot (sys. splot(sys.logspace( 1.[1 0.1.2 Uncertain Dynamical Systems Time and Frequency Response Plots Time and frequency responses of linear systems are plotted directly from the SYSTEM matrix with the function splot. consider the secondorder system ·· · mx + f x + kx = u. y=x with m = 2. This system is specified in descriptor form by sys = ltisys([0 1.1].01.'bo') 26 .50)) displays the Bode plot of Figure 22 instead.0 2]) the last input argument being the E matrix. type splot(sys. A third argument can be used to adjust the frequency range.'bo'.[0. For the sake of illustration.
.'sv').Time and Frequency Response Plots 50 Gain dB 0 −50 −1 10 10 Frequency (rad/sec) 0 10 1 0 Phase deg −90 −180 10 −1 10 Frequency (rad/sec) 0 10 1 Figure 22: splot (sys. the singular value plot consists of the curves (ω.'im'). For such systems. type splot(sys.logspace (–1. . Similarly. The function splot is also applicable to discretetime systems.1. For instance.'st') and splot(sys. Š σp(M). σi(G(jω))) where σi denotes the ith singular value of a matrix M in the order σ1(M) Š σ2(M) Š . the step and impulse responses of this system are plotted by splot(sys.'bo'. the sampling period T must be specified to plot frequency responses. See the “Command Reference” chapter for a complete list of available diagrams. the Bode plot of the system 27 . respectively.50)) To draw the singular value plot of this secondorder system. For a system with transfer function G(s) = D + C(sE – A)–1B.
2 Uncertain Dynamical Systems x k + 1 = 0.0.01.2.'bo') 28 .2u k yk = –xk with sampling period t = 0.01 s is drawn by splot(ltisys(0.1.1x k + 0.0. 1).
Similarly.g2) returns the system with transfer function G1(s) + G2(s) while smult(g1. and simple feedback interconnections of LTI systems. at most one of which can be a parameterdependent system. y2 = G2(s)u2 into the single relation y 1 = G( s ) u1 y u 2 2 29 .g2) appends (concatenates) the systems G1 and G2 and returns the system with transfer function G (s) 0 G(s) = 1 0 G2 ( s ) This corresponds to combining the input/output relations y1 = G1(s)u1.Interconnections of Linear Systems Interconnections of Linear Systems Tools are provided to form series. Note that most of these functions are also applicable to polytopic or parameter. Both functions take up to ten input arguments. More complex interconnections can be built either incrementally with these basic facilities or directly with sconnect (see“Polytopic Models” on page 214 for details).g2) returns the system with transfer function G2(s)G1(s). parallel.dependent models (see “Polytopic Models” on page 214). Series and parallel interconnections are performed by sadd and smult. sadd(g1. The interconnection functions work on the SYSTEM matrix representation of dynamical systems and their names start with an s. In terms of transfer functions. the command sdiag(g1.
2. The result is a statespace realization of the transfer function (I – εG1G2)–1G1 where ε = ±1.3) 210 .P2. r −ε + G1(s) y G2(s) Figure 23: sloop This function is useful to specify simple feedback loops. To form this ω 2 z 2 interconnection when u ∈ R2 and y ∈ R3.2 Uncertain Dynamical Systems Finally.g). For instance. The function sloop computes the closedloop mapping between r and y in the loop of Figure 23. sloop(smult(k.1)) The function slft forms the more general feedback interconnection of Figure ω z 24 and returns the closedloop mapping from 1 to 1 . the functions sloop and slft form basic feedback interconnections. the command is slft(P1. the closedloop system clsys corresponding to the twodegreeoffreedom tracking loop r C − + K G y is derived by setting G1(s) = G(s)K(s) and G2(s) = 1: clsys = smult(c.
w1 P1(s) u y z1 w2 P2(s) z2 Figure 24: slft 211 . This function is useful to compute linearfractional interconnections such as those arising in H∞ theory and its extensions (see “How to Derive Such Models” on page 223).Interconnections of Linear Systems The last two arguments dimension u and y.
• The more information you have about the uncertainty (phase. structure. Another cause of uncertainty is the imperfect knowledge of some components of the system. the possibly complex behavior of dynamical systems must be approximated by models of relatively low complexity.). time invariance. There are two major classes of uncertainty: • Dynamical uncertainty. highfrequency flexible modes. aerodynamical coefficients in flying devices. Model uncertainty is generally a combination of dynamical and parametric uncertainty. For control design purposes. the linear model should be sufficiently accurate in the control bandwidth. aging. there may be 212 . Other important characteristics of uncertainty include whether it is linear or nonlinear. Examples of physical parameters include stiffness and damping coefficients in mechanical systems. which stems from imperfect knowledge of the physical parameter values. Note that model uncertainty should be distinguished from exogenous actions such as disturbances or measurement noise. For instance. and may arise at several different points in the control loop. or from variations of these parameters during operation. or the alteration of their behavior due to changes in operating conditions. and whether it is time invariant or time varying. capacitors and inductors in electric circuits.2 Uncertain Dynamical Systems Model Uncertainty The notion of uncertain dynamical system is central to robust control theory. nonlinearities for large inputs. etc. uncertainty also stems from physical parameters whose value is only approximately known or varies in time. The gap between such models and the true physical system is called the model uncertainty. slow time variations. In other words. Finally. The LMI Control Toolbox focuses on the class of dynamical systems that can be approximated by linear models up to some possibly nonlinear and/or timevarying model uncertainty. etc. the higher the achievable performance will be. two fundamental principles must be remembered: • Uncertainty should be small where high performance is desired (tradeoff between performance and robustness). For instance. When deriving the nominal model and estimating the uncertainty. • Parameter uncertainty. etc. etc. which consists of dynamical components neglected in the linear model as well as of variations in the dynamical behavior during operation.
For instance. 213 . and parametric uncertainty on some sensor coefficients.01. frequencydomain model. • Linearfractional representation of uncertainty. Here the uncertain system is described as an interconnection of known LTI systems with uncertain components called “uncertainty blocks.) and the analysis or synthesis tool to be used. Determinant factors in the choice of representation include the available model (statespace equations. This representation is relevant for systems described by dynamical equations with uncertain and/or timevarying coefficients. Two representations of model uncertainty are used in the LMI Control Toolbox: • Uncertain statespace models.) represents a family of systems of which only a few characteristics are known.” Each uncertainty block ∆i(. etc. the only available information about ∆i may be that it is a timeinvariant nonlinearity with gain less than 0.Model Uncertainty dynamical uncertainty on the system actuators.
. . Sk are given vertex systems: S1 = A 1 + jE 1 B 1 A k + jE k B k . as well as approximations of complex and possibly nonlinear phenomena. 214 . E depend on uncertain and/or timevarying parameters or vary in some bounded sets of the space of matrices. .” Psystems are specified with the function psys and manipulated like ordinary LTI systems except for a few specific restrictions.. k k S ( t ) ∈ Co { S 1. The nonnegative numbers α1. αk are called the polytopic coordinates of S. αi = 1 i=1 i = 1 ∑ ∑ where S1. S k } := α i S i : α i ≥ 0. Polytopic Models We call polytopic system a linear timevarying system · E(t) x = A(t) x + B(t) u y = C(t) x + D(t) u whose SYSTEM matrix S ( t ) = A ( t ) + jE ( t ) B ( t ) varies within a fixed polytope C(t) D(t) of matrices. In other words. “P–” standing for “polytopic or parameterdependent. . Sk. . .2 Uncertain Dynamical Systems Uncertain StateSpace Models Physical models of a system often lead to a statespace description of its dynamical behavior. …. the system is described by an uncertain statespace model · E x = Ax + Bu. B. i. D. S(t) is a convex combination of the SYSTEM matrices S1.e. Of particular relevance to this toolbox are the classes of polytopic or parameterdependent models discussed next. . . . . y = Cx + Du where the statespace matrices A. We collectively refer to such models as Psystems. . C.…S k = C1 D1 Ck Dk (21) In other words. . The resulting statespace equations typically involve physical parameters whose value is only approximately known.
Such models commonly arise from the equations of motion.+ pnAn. by the SYSTEM matrices S1. . The LMI Control Toolbox offers various tools to analyze the stability and performance of parameterdependent systems with an affine dependence on the parameter vector p = (p1.) are known functions of some parameter vector p = (p1. 1} = [–1. .+ pnBn. For instance. That is. When the system is linear. B(p) = B0 + p1B1 + . . etc. E(. this naturally gives rise to parameterdependent models (PDS) of the form · E(p) x = A(p) x + B(p) u y = C(p) x + D(p) u where A(. a polytopic model taking values in the convex envelope of the three LTI systems s1. . .. aerodynamics. . .). . • Statespace models depending affinely on timevarying parameters (see “From Affine to Polytopic Models” on page 220) A simple example is the system · x = (sin x) x whose state matrix A = sin x ranges in the polytope A Œ Co{–1. pn). . including: • Multimodel representation of a system. s3 is declared by polsys = psys([s1 s2 s3]) Affine ParameterDependent Models The equations of physics often involve uncertain or timevarying coefficients. . . 215 . . . 1]. . each model being derived around particular operating conditions • Nonlinear systems of the form · y = C(x) x + D(x) u x = A(x) x + B(x) u. pn). i. . circuits. Sk in (21). s2. .Uncertain StateSpace Models Such models are also called polytopic linear differential inclusions in the literature [3] and arise in many practical situations. . . .e. . PDSs where A(p) = A0 + p1A1 + . Polytopic systems are specified by the list of their vertex systems.
+ pnSn.b0. . The system “coefficients” S0. Only their combination S(p) is a relevant description of the problem.2 Uncertain Dynamical Systems and so on. Sn fully characterize the dependence on the uncertain parameters p1. . 216 . .b1. . . . Sn need not represent meaningful dynamical systems. . .c0.d1.e2) affsys = psys(pv. Ci Di the affine dependence on p is written more compactly in SYSTEM matrix terms as S(p) = S0 + p1S1 + . Affine parameterdependent models are wellsuited for Lyapunovbased analysis and synthesis and are easily converted to linearfractional uncertainty models for smallgainbased design (see the nonlinear spring example in “SectorBounded Uncertainty” on page 230). . .d0. . . the system S(p) = S0 + p1S1 + p2S2 is defined by s0 = ltisys(a0.c1. Note that S0. The output affsys is a structured matrix storing all relevant data.c2. [s0 s1 s2]) where pv is the parameter description returned by pvec (see next subsection for details). C(p) D(p) A + jE B i i Si = i . Affine parameterdependent systems are specified with psys by providing • A description of the parameter vector p in terms of bounds on the parameter values and rates of variation • The list of SYSTEM matrix coefficients S0. . . Sn For instance. With the notation S ( p ) = A ( p ) + jE ( p ) B ( p ) .e1) s2 = ltisys(a2. . . . pn.e0) s1 = ltisys(a1. .d2.b2.
This is done with the function pvec. .0) Quantification of Parameter Uncertainty Parameter uncertainty is quantified by the range of parameter values and possibly the rates of parameter variation. . you must explicitly set E1 = E2 = 0 by typing s0 = ltisys(a0. p i ].b0.range) The ith row of the matrix range lists the lower and upper bounds on pi.d2. The characteristics of parameter vectors defined with pvec are retrieved with pvinfo. This corresponds to cases where each uncertain or timevarying parameter pi ranges between two empirically determined extremal values p i and p i : p i ∈ [ p i.c2. 5] The corresponding parameter vector p = (ρ. Omitting the arguments e0. e2 altogether results in setting E(p) = I + (p1 + p2)I. Consider the example of an electrical circuit with uncertain resistor ρ and capacitor c ranging in ρ ∈ [600.d1.c0. 1000] c ∈ [1. E(p) = I). e1.1 5] p = pvec('box'. 217 . (21) If p = (p1. To specify an affine PDS with a parameterindependent E matrix (e.b2. . pn) is the vector of all uncertain parameters. The parameter uncertainty range can be described as a box in the parameter space. (22) delimits a hyperrectangle of the parameter space Rn called the parameter box.d0) s1 = ltisys(a1.g. . c) takes values in the box drawn in Figure 25..0) s2 = ltisys(a2. This uncertainty range is specified by the commands: range = [600 1000. ltisys sets the E matrix to the identity.Uncertain StateSpace Models Important: By default.c1.b1.
In general. pi3=[10.1 ≤ ρ ( t ) ≤ 1.1 1.2 Uncertain Dynamical Systems · Similarly. the constraints · 0.001 0. robustness against fast parameter variations is more stringent than robustness against constant but uncertain parameters.001] p = pvec('box'.rate) · c ( t ) ≤ 0.8]. For instance. Π3 = (10.4]. c 5 range of parameter values 1 p 600 Figure 25: Parameter box 1000 Alternatively. bounds on the rate of variation p i of pi(t) are specified by adding a third argument rate to the calling list of pvec.[pi1. 8). 1). An uncertain parameter vector p with this range of values is defined by pi1=[1. are incorporated by rate = [0. pi2=[3.range. 4). uncertain parameter vectors can be specified as ranging in a polytopic of the parameter space Rn like the one drawn in Figure 26 for n = 2.001 All parameters are assumed to be timeinvariant when rate is omitted.pi2. Slowly varying parameters can be specified in this manner. 0. This polytope is characterized by the three vertices: Π1 = (1. Π2 = (3.1] p = pvec('pol'.pi3]) 218 .
To obtain other time responses.'traj'. pdsimul(pds. Multiinput multioutput (MIMO) systems are simulated similarly by specifying an appropriate input function.'traj') plots the step response of a singleinput parameterdependent system pds along the parameter trajectory defined in traj.'sin') This command plots the response to a sine wave between t = 0 and t = 1 s. The parameter vector p(t) is required to range in a box (type 'box' of pvec). 219 .1. The parameter trajectory is defined by a function p = fun(t) returning the value of p at time t.m. p2 8 range of values of p 4 Π2 Π1 1 Π3 1 3 10 p1 Figure 26: Polytopic parameter range Simulation of ParameterDependent Systems The function pdsimul simulates the time response of affine parameterdependent systems along given parameter trajectories. For instance. specify an input signal as in pdsimul(pds.Uncertain StateSpace Models The string 'pol' indicates that the parameter range is defined as a polytope.
this polytope is the convex envelope of the images S(Π1).2 Uncertain Dynamical Systems From Affine to Polytopic Models Affine parameterdependent models S ( p ) = A ( p ) + jE ( p ) B ( p ) C(p) D(p) are readily converted to polytopic ones. . The syntax is polsys = aff2pol(affsys) where affsys is the affine model. If the function S(p) is affine in p. it maps this parameter box to some polytope of SYSTEM matrices. . . The parameter vector p = (p1. p1 Π4 Π3 Π1 Π2 p2 polytope of matrices S(p) parameter box Figure 27: Mapping the parameter box to a polytope of systems Given an affine parameterdependent system. as illustrated by the figure below. . . . Π2. p i ]. . 220 . S(Π2). . pn) then takes values in a parameter box with 2n corners Π1. . the function aff2pol performs this mapping and returns an equivalent polytopic model. . . More precisely. Π2. of the parameter box corners Π1. . . . The resulting polytopic model polsys consists of the instances of affsys at the vertices of the parameter range. Suppose for instance that each parameter pi ranges in some interval [ p i.
sC = ltisys(aC. C ) = 1 0 = 1 0 + L 0 0 + R × 0 + C × 0 .[s0 sL sR sC]) The first SYSTEM matrix s0 contains the statespace data for L = R = C = 0 while sL. C.100 150]) pds = psys(pv.0 1].0 0]. The results can be checked with psinfo and pvinfo: 221 .0) pv=pvec('box'. sL = ltisys(aL. R. 2 A statespace representation of its undriven response is · E(L.1 2. C ∈ [100. 2]. C ) = 0 1 = 0 1 + L × 0 + R 0 0 + C 0 0 –R –C 0 0 –1 0 0 –1 E ( L. Example 2. The range of parameter values is specified by the pvec command. 20]. 0 L 0 0 0 1 This affine system is specified with psys as follows: a0 aL aR aC = = = = [0 1. C)x T di where x = i. e0 = [1 0. R. and the capacitor C are uncertain parameters ranging in L ∈ [10.eL) [0 0. Consider a simple electrical circuit with equation di d i L . sR.+ R . R.1. 150].+ Ci = V dt dt where the inductance L. R ∈ [1. R. sR = ltisys(aR.e0) zeros(2).0 0]. C) x = A(L. s0 = ltisys(a0. R. 1 0].0 1].[10 20.  and dt A ( L . eL = [0 0. sC define the coefficient matrices of L. the resistor R.Uncertain StateSpace Models Example We conclude with an example illustrating the manipulation of polytopic and parameterdependent models in the LMI Control Toolbox.0) [0 0.
Finally. 0 input(s).2. R.b.e] = ltiss(sys) The matrices a and e now contain the values of A(L. 0 input(s). C: sys = psinfo(pds. C) for L = 15. and C = 150.[15 1. R. and 0 output(s) 222 .c.d. the polytopic counterpart of this affine model is given by pols = aff2pol(pds) psinfo(pols) Polytopic model with 8 vertex systems Each system has 2 state(s). R. and 0 output(s) pvinfo(pv) Vector of 3 parameters ranging in a box The system can also be evaluated for given values of L.'eval'.2 Uncertain Dynamical Systems psinfo(pds) Affine parameterdependent system with 3 parameters (4 systems) Each system has 2 state(s). C) and E(L. R = 1.2 150]) [a.
nominal models of the system.LinearFractional Models of Uncertainty LinearFractional Models of Uncertainty For systems with both dynamical and parametric uncertainty. . In this generic model: • The LTI system P(s) gathers all known LTI components (controller. reference signal. a more general representation of uncertainty is the linearfractional model of Figure 28 [1. .) and the vector y consists of all output signals generated by the system • ∆ is a structured description of the uncertainty. ∆ w q u P(s) y Figure 28: Linearfractional uncertainty How to Derive Such Models Before proceeding with the specification of such uncertainty models.) • The input vector u includes all external actions on the system (disturbance. nonlinearity. . Specifically. . . uncertain parameter. etc. 223 . . ∆r) where each uncertainty block ∆i accounts for one particular source of uncertainty (neglected dynamics. we illustrate how natural uncertainty descriptions can be recast into the standard linearfractional model of Figure 28.). noise. 2]. . ∆ = diag(∆1. . . The diagonal structure of ∆ reflects how each uncertainty component ∆i enters the loop and affects the overall behavior of the true system. . sensors. and actuators.
2 .2 Uncertain Dynamical Systems Example 2. G(s) = G0(s)(I + ∆(s)) where the multiplicative dynamical uncertainty ∆(s) is any stable LTI system with RMS gain less than 0.1 k0.) k0 224 . Consider the feedback loop r + y G(s) − k(t) where • G(s) is an uncertain LTI system approximated within 5% accuracy by the secondorder nominal model 1 G 0 ( s ) = 2 s + 0.01s + 1 In other words. To derive a linearfractional representation of this uncertain feedback system. first separate the uncertainty from the nominal models by rewriting the loop as ∆ r + + + G0 y − + + δ(.05 • k(t) is a fluctuating gain modeled by k(t) = k0 + δ(t) where k0 is the nominal value and δ(t) < 0.
LinearFractional Models of Uncertainty Then pull out all uncertain components and lump them together into a single block as follows: q1 q2 δ 0 0 ∆ p1 P2 r + − e + + y G0 + + k0 p q If 1 and 1 denote the input and output vectors of the uncertainty p2 q2 q1 p1 block. the plant P(s) is simply the transfer function from q 2 to p 2 in the r y diagram. q 1 q 2 δ 0 0 ∆ p 1 p 2 r P(s) y 225 .
2 Uncertain Dynamical Systems Hence the SYSTEM matrix of P(s) can be computed with sconnect or by specifying the following interconnection: p2 q2 y G0 p1 r + e q1 + + − + + k0 Specification of the Uncertainty In linearfractional uncertainty models. nonlinear memoryless. Scalar blocks are used to represent uncertain parameters. The basics about quantification issues are reviewed next. . • Whether αii is real or complex in the case of scalar uncertainty ∆i = δi by I • Quantitative information such as norm bounds or sector bounds The properties of each individual uncertainty block ∆i are specified with the function ublock. 226 . ∆r) Proper quantification of the uncertainty is important since this determines achievable levels of robust stability and performance. . arbitrary nonlinear • Its dimensions and structure (full block or scalar block ∆i = δi × I). . These individual descriptions are then appended with udiag to form a complete description of the structured uncertainty ∆ = diag(∆1. . each uncertainty block ∆i is a dynamical system characterized by • Its dynamical nature: linear timeinvariant or timevarying.
LinearFractional Models of Uncertainty NormBounded Uncertainty Norm bounds specify the amount of uncertainty in terms of RMS gain..e. If p ∈ [ p. then p0 and δmax are given by p+p p 0 = . Norm bounds are also useful to quantify parametric uncertainty. the bound on ∆∞ is an estimate of the worstcase gap G – G0 between the true system G and its linear model G0(s).. For multiplicative uncertainty G = G0(I + ∆). the uncertainty can be quantified more accurately by using frequencydependent bounds of the form W ( s )∆ ( s ) ∞ < 1 –1 (22) 227 . this bound quantifies the relative gap between G and its model G0. For additive uncertainty G = G0 + ∆. p ]. when the physical system G itself is LTI). an uncertain parameter p can be represented as p = p0 (1 + δ). Recall that the RMS gain or L2 gain of a BIBOstable system is defined as the worstcase ratio of the input and output energies: ∆ ∞ = sup ∆w L 2 w L2 w ∈ L2 w≠0 where w L = 2 2 ∫0 w ∞ T ( t )w ( t ) dt. For instance. δ < δmax where p0 is the nominal value and δmax bounds the relative deviation from p0. 2 p–p δ max = 2p 0 When ∆ is LTI (i.
For instance.10) while a scalar nonlinearity Φ(. choosing 2s + 1 W ( s ) = s + 100 results in the frequencyweighted uncertainty bound ( ∆ ( jω ) < β ( ω ) ) := 4ω + 1 2 4 ω + 10 2 fquan2.) with unit RMS gain is defined by 228 . Such bounds specify different amounts of uncertainty across the frequency range and the uncertainty level at each frequency is adjusted through theshaping filter shaping filter W(s).ps drawn in Figure 29. 10 1 10 Bound beta(w) 10 0 −1 10 −2 10 −1 10 0 10 Frequency w 1 10 2 10 3 Figure 29: Frequencydependent bound on ∆(jw) These various norm bounds are easily specified with ublock. For instance.2 Uncertain Dynamical Systems where W(s) is some SISO shaping filter. a 2by.3 LTI uncertainty block ∆(s) with RMS gain smaller than 10 is declared by delta = ublock([2 3].
Finally. delta = ublock(2.[1 100])) declares an LTI uncertainty ∆(s) ∈ C2×2 satisfying σ max < ( ∆ ( jω ) ) < W ( jω ) .'s') delta3 = ublock([1 2]. . For instance. the first argument dimensions the block.1 229 .1) delta2 = ublock(2. respectively.'nl') delta = udiag(delta1. After specifying each block ∆i with ublock.[100 0].delta2. the command delta = ublock(3.0.2.delta3) 100s W ( s ) = s + 100 specify the uncertainty structure ∆ = diag(∆1. ∆2. ∆n).0. . Scalar blocks and real parameter uncertainty are denoted by the letters s and r. at all frequencies ω. setting the second argument of ublock to a SISO system W(s) specifies a frequencydependent norm bound on the LTI uncertainty block ∆(s).'nl') In these commands.1. the commands delta1 = ublock([3 1]. . . The default properties are full. call udiag to derive a complete description of the uncertainty structure ∆ = diag(∆1. complexvalued. the second argument quantifies the uncertainty by a norm bound or sector bounds.5. and the optional third argument is a string specifying the nature and structure of the uncertainty block. Thus.100. ∆3) where • ∆1(s) ∈ C3×1 and ∆1∞ < 0.5.'sr') declares a 3 −βψ−3 block ∆ = δ −βψ−I with repeated real parameter δ bounded by δ ð 0. and linear time invariant (LTI). For example. ltisys('tf'.LinearFractional Models of Uncertainty phi = ublock(1.
< 2 2 ∞ If φ is passive.) is a nonlinear operator with two inputs and one output and satisfies ∆3∞ < 100.2 Uncertain Dynamical Systems • ∆2 = δ by I2 with δ ∈ C and δ < 2 • ∆3(. uinfo(delta) block 1 2 3 dims 3x1 2x2 1x2 type LTI LTI NL real/cplx c c r full/scal f s f bounds norm <= 0. b} if the mapping φ : u ∈ L2 α y∈L2 satisfies a quadratic constraint of the form ∫0 ( y ( t ) – au ( t ) ) ∫0 y ( t ) ∞ T ∞ T ( y ( t ) – bu ( t ) )dt ≤ 0 for all u ∈ L 2 As a special case. 4]. Finally.1 norm <= 2 norm <= 100 SectorBounded Uncertainty The behavior of uncertain dynamical components can also be quantified in terms of sector bounds on their timedomain response [5. passive systems satisfy u ( t )dt ≥ 0 which corresponds to the sector {0. Note that sector bounds can always be converted to norm bounds since φ is in the sector {a. uinfo displays the characteristics of uncertainty structures declared with ublock and udiag.) is said to be in the sector {a. A (possibly nonlinear) BIBOstable system φ(. +∞}. a bilinear transformation maps φ to the operator (I – φ)(I + φ)–1 of norm less than one. 230 . b} if and only if a–b a+b φ – .
this means that the response y = φ(u) lies in the sector delimited by the two lines y = au and y = bu as illustrated below. 2} as is seen from Figure 210.LinearFractional Models of Uncertainty The system φ(. For memoryless systems.'nlm') 231 . y y=bu y=φ (υ) y=au u A simple example is the nonlinear spring y = k(u) with 2u if u ≤ 1 k ( u ) = u + 1 if u > 1 u – 1 if u < – 1 The response of this spring lies in the sector {1.[1 2]. y(t) = φ(u(t))..) is called memoryless if y(t) only depends on the input value u(t) at time t. i. the sector condition reduces to the “instantaneous” property (y(t) – au(t))T (y(t) – bu(t)) ð 0 at all time t.e. In the scalar case. This scalar memoryless nonlinearity is specified with ublock by k = ublock(1.
2 Uncertain Dynamical Systems Robustness in the face of sectorbounded uncertainty is analyzed by the circle and Popov criteria. the function aff2lft is provided to derive a linearfractional representation of their parametric uncertainty. 10] and timevarying p2 ∈ [–1. From Affine to LinearFractional Models Since some uncertain systems are more naturally specified as affine parameterdependent systems. 3] is specified by s0 = ltisys([0 1. For instance.0 0]) s1 = ltisys([1 0.0) % p1 232 . 1 1]. All uncertain parameters are assumed real and are repeated in the ∆ structure when necessary. the system p 1 dx .= 1 – p p + 2p dt 1 1 2 x (23) k(u) 2 1 1 Figure 210: Nonlinear spring y = k(u) with constant p1 ∈ [2.
[0 0.delta] = aff2lft(pds) On output: • The nominal plant P corresponds to the average values of p1 and p2 . p2. closing the interconnection of Figure 28 with ∆ ≡ 0 yields the system (24) for p1 = 6 and p2 = 1.Inf Inf]) pds = psys(pv.LinearFractional Models of Uncertainty s2 = ltisys([0 0. Specifically.0 2]. 1 3].P) a = ltiss(cls) a = 6 6 1 8 • The uncertainty structure delta consists of real scalar blocks as confirmed by uinfo(delta) block 1 2 dims 2by2 1by1 type LTI LTV real/cplx r r full/sc al s s bounds norm <= 4 norm <= 2 Note that the parameter p1 is repeated and that the norm bound corresponds to the maximal deviation from the average value of the parameter. type [P.0) % p2 pv = pvec('box'.[2 10. 233 .[s0 s1 s2]) To derive the linearfractional representation of the uncertainty on p1. as confirmed by cls = slft(zeros(3).
2 Uncertain Dynamical Systems The corresponding block diagram appears in Figure 211. δp 1 × I 0 0 δp 2 u P(s) y Figure 211: Linearfractional representation of the system (24) 234 .
El Ghaoui. [5] Zames. 235 . G.. AC26 (1981). pp. “Analysis of Feedback Systems with Structured Uncertainties.” IEEE Trans. PrenticeHall. [4] Vidyasagar. 1992. pp. L. M.. Nonlinear System Analysis. AC–11 (1966). S. 1994. SIAM books. E.. “Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis. Linear Matrix Inequalities in Systems and Control Theory. Englewood Cliffs. 242–250.C... Stein. D (1982). Feron. [3] Boyd.” IEEE Trans. pp.References References [1] Doyle. [2] Doyle. Contr.C. 228238 and 465476. vol. pt..” IEE Proc. Contr. Balakrishnan. Part I and II. “On the InputOutput Stability of TimeVarying Nonlinear Feedback Systems. 129. and G.. J. 416. Aut. J. V. Philadelphia. Aut.
2 Uncertain Dynamical Systems 236 .
. . . . . . . 33 . . .324 Real Parameter Uncertainty . . . . . . . . . . . . . . . . . . . . . . .328 References . . . . . . . . . . . . Quadratic Stability . . . . . . . . . . . . . . . . . . . . . . . . . Maximizing the Quadratic Stability Region Decay Rate . . . . . . . . . . . . . . . . . . . . . . . .312 Stability Analysis . Quadratic H∞ Performance . . . . . . . . .314 µ Analysis . . . . . . . . . . . . . 34 . . . . . Structured Singular Value Robust Stability Analysis Robust Performance . . 39 . . . . . . . . . . . . . . . . . . . . . . . . . . . .325 Example . . 36 . . . . . . . .317 . . . . . . . .310 ParameterDependent Lyapunov Functions . . . . . . . . . . . . . . . . . . . . 38 . . . . . . . . . . .317 . . . LMI Formulation . . .321 The Popov Criterion . . . . . . . . . . . . . . . . . .3 Robustness Analysis Quadratic Lyapunov Functions . .332 . . . . . . . . . . . . . . . . .319 . . . . . . . . . . . . . . . .
Aposteriori robustness analysis is then necessary to validate the design and obtain guarantees of stability and performance in the face of plant uncertainty. The LMI Control Toolbox offers a variety of tools to assess robust stability and robust performance. for example. Since all of these tests are based on sufficient conditions. real parameter uncertainty for parameterdependent Lyapunov functions. These tools cover most available Lyapunovbased and frequencydomain analysis techniques. Hence users should select the most appropriate tool for their problem. or memoryless nonlinearities for the Popov criterion. they are only productive when they succeed in establishing robust stability or performance. • Quadratic stability/performance • Tests involving parameterdependent Lyapunov functions • Mixedµ analysis • The Popov criterion Each test is geared to a particular class of uncertainty.3 Robustness Analysis Control systems are often designed for a simplified model of the physical plant that does not take into account all sources of uncertainty. 32 .
A(t) = α1(t)A1 + . that is. . 2 A(t) + jE(t) ranges in a fixed polytope of matrices.Quadratic Lyapunov Functions Quadratic Lyapunov Functions The notions of quadratic stability and quadratic performance are useful to analyze linear timevarying systems · E(t) x (t) = A(t)x(t). + αn(t)En with αi(t) Š 0 and ∑i = 1 α i ( t ) n = 1 . and the second case to 33 . Given such systems. This is referred to as a polytopic model. . + pn(t)En. this is equivalent to A ( t )QE ( t ) + E ( t )QA ( t ) < 0 at all times t. + pn(t)An E(t) = E0 + p1(t)E1 + . The first case corresponds to systems whose statespace equations depend affinely on timevarying physical parameters. However. pn(t). . . . + αn (t)An E(t) = α1(t)E1 + . x(0) = x0. . Assessing quadratic stability is not tractable in general since (31) places an infinite number of constraints on Q. a sufficient condition for asymptotic stability is the existence of a positivedefinite quadratic Lyapunov function V(x) =xTPx such that dV ( x ( t ) ) . In terms of Q := P –1. . . .< 0 dt along all state trajectories. . (31) can be reduced to a finite set of LMI contraints in the following cases: 1 A(t) and E(t) are fixed affine functions of some timevarying parameters T T (31) p1(t). A(t) = A0 + p1(t)A1 + . . . This is referred to as an affine parameterdependent model.
Specifically. 2]: Affine models: consider the parameterdependent model 34 . Quadratic Lyapunov functions are also useful to assess robust RMS performance. which is their main source of conservatism. this is equivalent to the family of “Bounded Real Lemma” inequalities T T T A ( t )QE ( t ) + E ( t )QA ( t ) B ( t ) E ( t )QC ( t ) T T B(t) – γI D(t) T C ( t )QE ( t ) D(t) – γI <0 (32) The smallest γ for which such a Lyapunov function exists is called the quadratic H∞ performance. 1. LMI Formulation Sufficient LMI conditions for quadratic stability are as follows [8. a timevarying system · E ( t )x = A ( t )x + B ( t )u y = C ( t )x + D ( t )u with zero initial state satisfies yL2 < γuL2 for all bounded input u(t) if there exists a positivedefinite Lyapunov function V(x) =xTPx such that dV ( x ) + y T y – γ 2 u T u < 0 dt In terms of Q := P –1.3 Robustness Analysis timevarying systems modeled by an envelope of LTI systems (see “Uncertain StateSpace Models” on page 214 for details). This is an upper bound on the worstcase RMS gain of the system (32). Note that quadratic Lyapunov functions guarantee stability for arbitrarily fast time variations.
E ( t ) = E0 + p1 E1 + … + pn En where pi(t) ∈ [ p i. and let V = { ( ω 1. The dynamical system (33) is quadratically stable if there exist symmetric matrices Q and n { M i } i = 1 such that A ( ω )QE ( ω ) + E ( ω )QA ( ω ) + T T ∑ ωi Mi < 0 for all ω ∈ V 2 i (34) T A i PE i + E i PA i + M i ≥ 0 for i = 1. . . T (35) (36) (37) is quadratically stable if there exist a symmetric matrix Q and scalars tij = tji such that A i QE j + E j QA i + A j QE i + E i QA j < 2t ij I for i.n } Q>I T T T T (38) (39) 35 . .…. ….…. A(t) + jE(t) ∈ Co{A1 + jE1.Quadratic Lyapunov Functions · E ( p )x = A ( p )x . ω n ) : ω i ∈ { p i. p i ]. An + jEn }. p i } } (33) denote the set of corners of the corresponding parameter box. . A ( p ) = A 0 + p 1 A 1 + … + p n A n .n Mi ≥ 0 Q>I Polytopic models: the polytopic system · E(t) x = A(t)x. j ∈ { 1.
n} exist such that T T T H H A i QE j + E j QA i + A j QE i + E j QA i T T Bi + Bj – 2γI H T T C i QE i + C i QE j D i + D j – 2γI < 2t ij × 1 Q>0 t 11 … t 1n t 1n … t nn … … … … (310) <0 Quadratic Stability The function quadstab tests the quadratic stability of polytopic or affine parameterdependent systems. no parameter pi enters both A(t) and E(t). • In the polytopic case. A symmetric matrix Q and scalars tij = tji for i. It then suffices to solve (38)–(39) for tij = 0. 36 . For polytopic systems. The corresponding LMI feasibility problem is solved with feasp. that is. . either A(t) or E(t) is constant. . . for instance. The conditions (35)–(36) can then be deleted and it suffices to check (31) at the corners ω of the parameter box.3 Robustness Analysis t 11 … t 1n . <0 t 1n … t nn Note that these LMI conditions are in fact necessary and sufficient for quadratic stability whenever • In the affine case. Ai = 0 or Ei = 0 for all i...j ∈ {1. Similar sufficient conditions are available for robust RMS performance. . LMI conditions guaranteeing the RMS performance γ are as follows.
032876 Result: best value of t: 0. Hence the system is quadratically stable if and only if tmin < 0.2 – 0.040521 0.P] = quadstab(polsys) Solver for LMI feasibility problems L(x) < R(x) This solver minimizes t subject to L(x) < R(x) + t*I The best value of t should be negative for feasibility Iteration : Best value of t so far 1 2 0.2876e 02 Collectively denoting the LMI system (38)–(310) by A(x) < 0. A3} with 1 . the second output P is the Lyapunov matrix proving stability.2 – 2.1. quadstab assesses its feasibility by minimizing τ subject to A(x) < τI and returns the global minimum tmin of this problem.032876 fradius saturation: 0. A2 = 0 3 – 2 – 0. A = 0 1 A1 = 0 1 . type [tmin.00e+08 This system is quadratically stable tmin = 3.1 This polytopic system is specified by s1 = ltisys(a1) s2 = ltisys(a2) s3 = ltisys(a3) polsys = psys([s1 s2 s3]) To test its quadratic stability. A2.3 – 1.000% of R = 1. Consider the timevarying system · x = A(t)x where A(t) ∈ Co{A1.Quadratic Lyapunov Functions Example 3.9 – 0. 37 . In this case.
[1 0 0]) The result is 38 .P] = quadstab(affsys.5 0. Example 3.7]) % parameter box affsys = psys(pv.[10 12. k ( t ) ∈ [ 10. i + θδi] Computing this largest factor is a generalized eigenvalue minimization problem (see gevp). f ( t ) ∈ [ 0.7 ] –k –f This system is entered by s0 = ltisys([0 1. This optimization is performed by quadstab when the first entry of the options vector is set to 1. p i ] Denoting the center and radius of each interval by u i = 1 ( p i + p i ) and 2 δ i = 1 ( p i – p i ) .0. In other words. 12 ].2. the maximal quadratic stability region is defined as the 2 largest portion of the parameter box where quadratic stability can be established. type [marg. f ) = 0 1 .0 1].0) % k s2 = ltisys([0 0.3 Robustness Analysis Maximizing the Quadratic Stability Region This option of quadstab is only available for affine parameterdependent systems where the E matrix is constant and each parameter pi(t) ranges in an interval pi(t) ∈ [ p i. the largest dilatation factor such that the system is quadratically stable whenever pi(t) ∈ [µi – θδi. 0. f)x where A ( k.0) % f pv = pvec('box'.5.[s0 s1 s2]) To compute the largest dilatation of the parameter box where quadratic stability holds. Consider the secondorder system · x = A(k. 1 0].0 0]) s1 = ltisys([0 0.
k(t) ∈ [9.P] = decay(polsys) This command returns drate = 5. f(t) ∈ [0.49].6016e 02 39 . For the timevarying system considered in Example 3. For affine or polytopic models. the decay rate is computed by [drate.Quadratic Lyapunov Functions marg = 1. that is. the quadratic decay rate α* is the smallest α such that A(t)QE(t)T + E(t)QA(t)T < αE(t)QE(t)T holds at all times for some fixed Q > 0.51. xT(t)Q–1x(t) < eα*t (x(0)TQ–1x(0)).4908e+00 which means 149% of the specified box.is also the largest β such that the shifted system 2 · E(t) x = (A(t) + βE(t))x is quadratically stable.1. Decay Rate For the timevarying system · E(t) x = A(t)x. the decay rate computation is attacked as a generalized eigenvalue minimization problem (see gevp). Specifically. This task is performed by the function decay.451.749] Note that marg Š 1 implies quadratic stability in the prescribed parameter box. 0.3. Example 3. The system is quadratically stable if and only if α* < 0. in which case α* is an upper bound on the rate of return to equilibrium. 12. α* Note that – .
damping 1 0 x 0 1 x 0 · = + · ·· 0 m ( t ) x –2 –f ( t ) x 1 y = x. Specifically yL2 < γqp uL2 holds for all bounded input u when the initial state is zero. m ( t ) ∈ [ 0.S n } C(t) D(t) Recall that the quadratic H∞ performance γqp is an upper bound on the worstcase input/output RMS gain of the system. Consider the secondorder system with timevarying mass and f ( t ) ∈ [ 0. 0.9400e 02 1.1098e 01 Quadratic H∞ Performance The function quadperf computes the quadratic H∞ performance of an affine parameterdependent system · E(p) x = A(p)x + B(p)u y = C(p)x + D(p)u or of a polytopic system · E(t) x = A(t)x + B(t)u y = C(t)x + D(t)u where S ( t ) = A ( t ) + jE ( t ) B ( t ) ∈ Co { S 1 .4. The function exceed a given value γ > 0.0707e 01 1.….2 ].8.9400e 02 3. u quadperf can also be used to test whether the worstcase RMS gain does not Example 3.6 ] This affine parameterdependent model is specified as 310 .3 Robustness Analysis P = 6. 1. or to maximize the portion of the parameter box where this level γ is not exceeded.4.
0 0]) s1 = ltisys(zeros(2).[1 0.'eval'.[0.1].[0 0].0 1].[1 0].[0.0.4 0.[0.[0 0].[1 0. 2 0].0 1]) % m s2 = ltisys([0 0.[0 0.2 . 0.0.0].[0.Quadratic Lyapunov Functions s0 = ltisys([0 1.5]) norminf(nomsys) 1.5 as confirmed by nomsys = psinfo(affsys.4311e+00 311 .0.8 1.0].0) % f pv = pvec('box'.083e+00 This value is larger than the nominal LTI performance for m = 1 and f = 0.[s0 s1 s2]) and its quadratic H∞ performance from u to y is computed by qperf = quadperf(affsys) qperf = 6.6]) affsys = psys(pv.
p ) For such Lyapunov functions.< 0 is dt equivalent to T T T dQ E ( p )Q ( p )A ( p ) + A ( p )Q ( p )E ( p ) – E ( p ) . + pnQn (311) Given interval bounds p i ∈ [ p i. . Q1. . .E ( p ) < 0. ν i ]. The resulting tests are less conservative than quadratic stability when the parameters are constant or slowly varying [6]. p i ]. (311) holds for all parameter trajectories if the following LMI problem is feasible [6]. To prove the robust stability of such systems. the stability condition . we seek a quadratic Lyapunov function that depends on the uncertain parameters or the polytopic coordinates in the case of a polytopic model. . Qn. . the vectors p and . and { M i } i = 1 such that n 312 . p) = xTQ(p)–1x where dV ( x.range in dt dt ndimensional “boxes..3 Robustness Analysis ParameterDependent Lyapunov Functions The robustness test discussed next is applicable to affine parameterdependent systems or timeinvariant uncertain systems described by a polytope of models. Find symmetric matrices Q0. Affine models: for an affine parameterdependent system · E(p) x = A(p)x with parameter vector p = (p1. dt Q(p) = Q0 + p1Q1 + . . . ∈ [ ν i. pn) ∈ Rn. available bounds on the rates of parameter variation can be explicitly taken into account.” If V and T list the corners of these boxes. . dp i dp on each pi and its time derivative . . we seek positive definite Lyapunov functions of the form V(x. Moreover. .
Assuming that E is constant and A ranges in the polytope A ∈ {α1A1 + . .+ αnAn : αi Š 0. .are allowed to range in (–∞. . In such cases indeed. Qn must be set to zero for feasibility. . . T T T T T T T T T T ∑ ωi Mi < 0 2 i 313 . . . + αnQn Using such Lyapunov functions.ParameterDependent Lyapunov Functions • For all (ω. . n A i Q i E ( ω ) + E ( ω ) Q i A i + A i Q ( ω )E i + E i Q ( ω )A i + A ( ω ) Qi Ei + Ei Qi A ( ω ) – Ei ( Q ( τ ) – Q0 ) + Mi ≥ 0 • Q(ω) > I for all ω ∈ V • Mi Š 0 Note that these conditions reduce to quadratic stability when the rates of dp i variation . α) = xTQ(α)–1x where Q(α) = α1Q1 + . . τ) ∈ V × T E ( ω )Q ( ω )A ( ω ) + A ( ω )Q ( ω )E ( ω ) – E ( ω ) ( Q ( τ ) – Q 0 )E ( ω ) + • For ω ∈ V and i = 1. +∞). . sufficient conditions for stability over the entire polytope are as follows. α1+ . . dt Q1. . we seek a Lyapunov function of the form V(x. .+ αn = 1}. Polytopic models: A similar extension of the quadratic stability test is available for timeinvariant polytopic systems · E x = Ax where one of the matrices A. E is constant and the other uncertain. .
Example 3. the parameters are regarded as timeinvariant. For an affine parameterdependent system with two uncertain parameters p1 and p2. p) =xTQ(p)–1x with Q(p) = Q0 + p1Q1 + p2Q2 Remark In the case of timeinvariant uncertain parameters.5.. and scalars tij = tji such that A i Q j E + EQ j A i + A j Q i E + EQ i A j < 2t ij Qj > I t 11 … t 1n . . pdlstab is invoked by [tmin. The syntax is similar to that of quadstab. for instance. the function pdlstab seeks a parameterdependent Lyapunov function establishing robust stability over a given parameter range or polytope of models.Q1. if pdlstab fails to prove stability over the entire parameter range.Q0. In such cases. .. Specifically. Qn. you can split the range into smaller regions and try to establish stability on each subregion independently. . a parameterdependent Lyapunov function proving stability is V(x.Q2] = pdlstab(ps) Here ps is the system description and includes available information on the range of values and rates of variation of each parameter (see psys and pvec for details). <0 t 1n … t nn … … T T T T Stability Analysis Given an affine or polytopic system. Consider the secondorder system · x = 0 1 x · ·· –k ( t ) –f ( t ) x x 314 . pdlstab can be combined with a branchandbound scheme to reduce conservatism and enlarge the computed stability region. . The function pdlstab determines whether the sufficient LMI conditions listed above are feasible.3 Robustness Analysis There exist symmetric matrices Q1. Feasibility is established when tmin < 0. If no rates of variation are specified.
011826 1. 0.0) pvec('box'.878414e 03 315 .[5 10 . This system is not quadratically stable over the specified parameter box as confirmed by tmin = quadstab(ps) tmin = 8. 1 0].0118e 04 Nevertheless. 10]. dt df . 0.[s0 s1 s2]) 1 1]) The second argument of pvec defines the range of parameter values while the third argument specifies the bounds on their rate of variation.0) ltisys([0 0.0 1].< 0.0 0]) ltisys([0 0. as shown by [tmin.Q2] = pdlstab(ps) Solver for LMI feasibility problems L(x) < R(x) This solver minimizes t subject to L(x) < R(x) + t*I The best value of t should be negative for feasibility Iteration : Best value of t so far 1 2 3 4 0. it turns out to be robustly stable when the parameter variations do not exceed the maximum rates (312).01 0.ParameterDependent Lyapunov Functions where the stiffness k(t) and damping f(t) range in k(t) ∈ [5.01 . f(t) ∈ [0.01 0.1] and their rates of variation are bounded by dk .01.< 1 dt (312) This system and parameter data are specified by s0 s1 s2 pv ps = = = = = ltisys([0 1.084914 0.1]. psys(pv.Q1.011826 0.Q0.[ 0.01.
832830e 04 Result: best value of t: 7.846478e 04 1.878414e 03 4.003% of R = 1.528537e 04 7.00e+07 This system is stable for the specified parameter trajectories This makes pdlstab a useful tool to analyze systems with constant or slowlyvarying parameters.3 Robustness Analysis 5 6 7 8 1.832830e 04 fradius saturation: 0. 316 .
. 10]. Consider the algebraic loop of Figure 31 where M ∈ Cn×m and ∆ = diag(∆1. . The algebraic loop of Figure 31 is well posed if and only if I – M∆ is invertible. . 12. ∆ w M q Figure 31: Static µ problem 317 . 13. ∆n) is a structured perturbation characterized by • The dimensions of each block ∆i • Whether ∆i is a complex or realvalued matrix • Whether ∆i is a full matrix or a scalar matrix of the form ∆i = δi × I We denote by ∆ the set of perturbations with this particular structure. Nonlinear and/or timevarying uncertainty is addressed by the Popov criterion discussed in the next section. Structured Singular Value µ analysis is based on the concept of structured singular value (SSV or µ) [3. .µ Analysis µ Analysis µ analysis investigates the robust stability or performance of systems with linear timeinvariant linearfractional uncertainty. Measuring the smallest amount of perturbation ∆ ∈ ∆ needed to destroy wellposedness is the purpose of the SSV. It is also applicable to timeinvariant parameterdependent systems by first deriving an equivalent linearfractional representation with aff2lft (see “From Affine to LinearFractional Models” on page 232 for details).
. as long as the size of ∆ does not exceed K∆ := 1/µ∆(M). Dn). . Denoting by D and G the sets of D. From this definition. D > I The µ upper bound ν∆(M) and the optimal D. .3 Robustness Analysis Formally. Computing µ∆(M) is an NPhard problem in general. However. Thus µ∆(M) extends the notion of largest singular value to the case of structured perturbations. the variables in this LMI problem are two scaling matrices D = diag(D1. α* ) where α* is the global minimum of the following generalized eigenvalue minimization problem [4]: Minimize α over D ∈ D and G ∈ G subject to MHDM + j(GM – MHG) < αD. . • Gi = 0 if ∆i is complexvalued. . G scalings are computed by the function mubnd.delta) 318 . Gi = gi × I with gi ∈ R if ∆i is a realvalued full H block. Note that µ∆(M) = σmax(M) when ∆ = Cm×n (unstructured case). .G] = mubnd(M. The critical size K is called thewellposedness margin wellposedness margin. . and G i = G i if ∆i = δi × I with δi real. Gn) H with the same blockdiagonal structure as ∆ and where • Di = di × I with di > 0 if ∆i is a full block. Its syntax is [mu. an upper bound on µ∆(M) is given by ν∆ = max ( 0. G = diag(G1.D. a conservative estimate of the margin K∆ (upper bound µ) can be computed by solving an LMI problem [4]. Assuming for simplicity that the blocks ∆i are square. the SSV of M with respect to the perturbation structure ∆ is defined as [10] 1 µ ( M ) := min { σ max ( ∆ ) : ∆ ∈ ∆ and I – M ∆ is singular } with the convention µ∆(M) = 0 if I – M∆ is invertible for all ∆ ∈ ∆. and D i = D i > 0 otherwise. . I – M∆ remains invertible as long as ∆ ∈ ∆ satisfies σmax(∆) < 1/µ∆(M) That is. G scalings with such properties.
319 . 10. 12. . the largest amount of uncertainty ∆ that can be tolerated without losing stability.9 means that wellposedness is guaranteed for perturbation sizes that do not exceed 90% of the prescribed bounds.µ Analysis where delta specifies the perturbation structure ∆ (use ublock and udiag to define delta. the output mu is equal to ν∆(M). Here P(s) is a given LTI system and ∆(s) = diag(∆1(s). 16] µ m := sup µ ∆ ( P ( jω ) ) < 1. . If different bounds βi are used for each ∆i. see “NormBounded Uncertainty” on page 227 and “SectorBounded Uncertainty” for details). Robust Stability Analysis The structured singular value µ is useful to assess the robust stability of timeinvariant linearfractional interconnections (see Figure 32). ∆n (s)) is a norm. When each block ∆i is bounded by 1. ∆(s) w q P(s) y u Figure 32: Robust stability analysis This interconnection is stable for all structured ∆(s) satisfying ∆∞ < 1 if and only if [3. ω The reciprocal Km = 1/µm represents the robust stability margin. .or sectorbounded linear timeinvariant uncertainty with some prescribed structure. that is. 1/mu = 0. the interpretation of mu is as follows: The interconnection of Figure 31 is well posed for all ∆ ∈ ∆ satisfying βi σ max ( ∆ i ) < mu For instance. .
A gridbased estimation of K m is performed by mustab: [margin.a + . margin is the computed value of K m and peakf is the frequency near which ν∆(P(jω) peaks.– θ . 2 2 2 2 where a < b are the sector bounds specified with ublock...1. margin = 0.7by3 = 2.. The uncertainty quantification may involve different bounds for each block ∆i. We refer to K m ˆ margin.delta. The syntax 320 .peakf] = mustab(P.a + . 1+0 1 – 0 for θ > 1 For instance. margin represents the relative stability margin as a percentage of the specified bounds. where the margin is the smallest.. this condition reads 1–0 1 + 0 ∆ i is in the sector a + . we can only compute a conservative estimate K m of Km ˆ as the guaranteed stability based on the upper bound ν∆ of µ∆. On ˆ output.+ θ . margin = θ means that stability is guaranteed as long as each block ∆i satisfies • For normbounded blocks σmax(∆i(jω)) < θ βi(ω) where βi(ω) is the (possibly frequencydependent) norm bound specified with ublock • For sectorbounded blocks a + b b–a a+b b – a ∆ i is in the sector . In the special case b = +∞. delta describes the uncertainty ∆ (see ublock and udiag for details) and freqs is an optional vector of frequencies where to evaluate ν∆(P(jω)). i.7 for a singleblock uncertainty ∆(s) bounded by 3 means that stability is guaranteed for ∆∞ < 0.e. .freqs) On input.3 Robustness Analysis ˆ As mentioned earlier. More precisely. 1+0 1 – 0 for θ ð 1 and 1–0 1 + 0 outside ∆ i is outside the sector a + . In this case.
Gi = getdg(gs.peakf. G scaling matrices at the tested frequencies fs. i.. In such cases. The presence of uncertainty typically deteriorates performance. Worse. The coarser the gridding. Indeed. this function may be discontinuous at its peaks when the uncertainty contains real blocks. ˆ Caution: K m is computed by frequency sweep.delta) also returns the D.i). The function mustab has builtin tests to detect such discontinuities and minimize the risks of missing the critical peaks. any “blind” frequency sweep is likely to miss the peak value and yield an overoptimistic stability margin. Robust Performance In robust control. To retrieve the values of D and G at the frequency fs(i). 321 . the robust performance γrob is defined as the worstcase RMS gain from u to y in the face of the uncertainty ∆(s). it is customary to formulate the design specifications as abstract disturbance rejection objectives.ds. The performance of a control system is then measured in terms of the closedloop RMS gain from disturbances to outputs (see “H• Control” on page 53 for details).gs] = mustab(P. type Di = getdg(ds. the higher the risk of underestimating the peak value of ν∆(P(jω)).µ Analysis [margin. by evaluating ν∆(P(jω)) over a finite grid of frequencies. For an LTI system with linearfractional uncer.i) Note that Di and Gi are not necessarily the optimal scalings at fs(i).e.tainty (Figure 33).fs. the LMI optimization is often stopped before completion to speed up computations.
. Indeed. yL2 < γuL2 holds for all ∆(s) in the prescribed uncertainty set if and only if the interconnection of Figure 34 is stable for all ∆(s) and for ∆perf(s) satisfying ∆perf(s)∞ < γ–1 Given a norm.3 Robustness Analysis This robust performance is generally worse (larger) than the nominal performance. i. whether γrob < γ. is equivalent to a robust stability problem. that is.e. the function muperf assesses the robust performance γrob via this equivalent robust stability problem. This is done by 322 .or sectorbounded linear timeinvariant uncertainty ∆(s). the RMS gain from u to y for (s) ≡ 0. ∆ perf ( s ) 0 0 ∆(s) P(s) Figure 34: Equivalent robust stability problem Two problems can be solved with muperf: • Compute the robust performance γrob for the specified uncertainty ∆(s). ∆(s) u P(s) y Figure 33: Robust performance problem Assessing whether the performance γ can be robustly sustained.
g) computes a fraction margin of the specified uncertainty bounds for which the RMS gain from u to y is guaranteed not to exceed γ. γ rob = max γ rob ( ω ) ω where γrob(ω) is the global minimum of the LMI problem Minimize γ over D ∈ D.µ Analysis grob = muperf(P. Assuming ∆∞ < 1 and denoting by D and G the related scaling sets. The performance g is robust if and only if margin Š 1. Note that g should be larger than the nominal RMS gain for ∆ = 0. • Assess the robustness of a given performance γ = g > 0 in the face of the uncertainty ∆(s). The command margin = muperf(P.delta) The value grob is finite if and only if the interconnection of Figure 33 is robustly stable. G ∈ G such that D > 0 and D 0 H H P ( jω ) D 0 P ( jω ) + j G 0 P ( jω ) – P ( jω ) G 0 < 0 I 0 0 0 γ2 I 0 0 323 . The function muperf uses the following characterization of γrob .delta.
3 Robustness Analysis The Popov Criterion The Popov criterion gives a sufficient condition for the robust stability of the interconnection of Figure 35 where G(s) is a given LTI system and φ = diag (φ1.) satisfies the sector bound ∀T > 0. ∫0 ( w j – α j q j ) T T ( w j – β j q j )dt ≤ 0 (313) (this includes norm bounds as the special case βj = –αj). . . the µ test is generally less conservative. To establish robust stability. For purely linear timeinvariant uncertainty. the Popov criterion seeks a Lyapunov function of the form 324 . The operator φ can be nonlinear and/or timevarying which makes the Popov criterion more general than the µ stability test. however. . φn) is a sectorbounded BIBOstable uncertainty satisfying φ(0) = 0. φ( · ) w u q G(s) y Figure 35: Popov criterion A statespace representation of this interconnection is x = Ax + Bw · q = Cx + Dw w = φ ( q ) or equivalently in blockpartitioned form: x = Ax + B w · Σj j j rj q j = C j x + D j w ( ∈ R ) w = φ(q ) j Suppose that φj(. .
The Popov stability test is performed by the command [t.The Popov Criterion 1 T V ( x. wj(t) depends only on qj(t). (315) reduces to the scaled smallgain criterion. S. The function popov returns the output t of the LMI solver feasp. the conditions V(x. Letting Kα := diag(αj Irj).e. Hence the interconnection is robust stable if t < 0. CB ) – T I 0 D Kα – 1 T S C Kβ T D Kβ – 1 T <0 (315) When φ(. B ) + 0 N ( CA. Real Parameter Uncertainty A sharper version of the Popov criterion can be used to analyze systems with uncertain real parameters. the sector constraint (313) reduces to α j q j ≤ q j φ j ( q j ) ≤ β j q j ) . S := diag(σj Irj). t) > 0 and dV/dt < 0 are equivalent to the following LMI feasibility problem (with the notation H(M) := M + MT): Find P = PT and structured matrices S. Then a more general form for the corresponding term in the Lyapunov function (314) is 325 . Assume that Dj = 0 and wj = φj(qj) := δjqj where δj is an uncertain real parameter ranging in [αj.) is a scalar memoryless nonlinearity (i.N] = popov(G. βj]. N such that S > 0 and T C Kα H I P ( A.phi) where G is the SYSTEM matrix representation of G(s) and phi is the uncertainty description specified with ublock and udiag. In this case. P..x Px + 2 ∑ ∫0 φ j ( z ) d z – ∑ σ j ∫0 ( w j – α j q j ) νj j j qj t T ( w j – β j q j ) dτ (314) where σj > 0 and νj are scalars and νj = 0 unless Dj = 0 and φj (. t ) = .) is nonlinear timevarying. in such case. and N are solutions of the LMI problem (315).P.S. Kβ := diag(βj Irj) N := diag(νj Irj) 2 2 and assuming nominal stability.
x P + δ j C N j C x – ( δ j – α j ) ( δ j – β j ) q j ( S j + S j )q j dτ 2 0 ∑ j ∑ j ∫ The first term can be interpreted as a parameterdependent quadratic function of the state.. t ) = . it is possible to obtain an even sharper test by applying the Popov criterion to the equivalent interconnection x = ( Ax + B C w ) · Σj j j ˜ j ˜ qj = x ˜ ˜ wj = δj qj The Lyapunov function now becomes 1 T V ( x.) are real timeinvariant parameters. respectively. t (316) where Sj and Nj are rj × rj matrices subject to S j + > O. This is analogous to the notion of parameterdependent Lyapunov function discussed in “ParameterDependent Lyapunov Functions” on page 312. invoke popov as follows. the variables S and N in the Popov condition (315) assume a blockdiagonal structure where σjIrj and νjIrj are replaced by the full blocks Sj and Nj. 326 . To use this refined test. even though the Popov condition (315) is not equivalent to the robust stability conditions discussed in that section.x P + 2 (317) ∑ δj Nj x –∑ ( δj – αj ) ( δj – βj ) ∫0 qj ( Sj + Sj )qj dτ T T j j T T t with Sj. Finally. As a result. the Lyapunov function (314) becomes t T T T 1 T V ( x. The resulting test is discussed in [5. t ) = . 6] and often performs as well as the real stability test while eliminating the hazards of a frequency sweep (see “Caution” on page 321).3 Robustness Analysis ∫0 qj φ ( z ) N j dz – T T T ∫0 ( w j – α j q j ) t T S j ( w j – β j q j ) dτ = δ j x C N j Cx – ( δ j – α j ) ( δ j – β j ) ∫0 qj ( Sj + Sj )qj dτ T T T Sj T Nj . When all φj(. N = and Nj = 0 when the parameter δj is time varying. Nj subject to S j + S j > O and N j = N j .
N] = popov(G.1) The third input 1 signals popov to perform the loop transformation (317) before solving the feasibility problem (315).The Popov Criterion [t.S.phi.P. 327 .
B K = 2.494 – 1.973 4.4097 – 0.5.720 – 2. w2 and guarantees stability for values of the stiffness parameter k ranging in [0.506 0. The physical system is sketched in Figure 36.3 Robustness Analysis Example The use of these various robustness tests is illustrated on the benchmark problem proposed in [14].0063 – 0.9732 0 1 .9616 – 3.738 – 0.37 – 5.0304 4.8548 – 1.932 328 . a statespace description of this system is · x1 · x 0 2 = 0 ·· –k x1 k ·· x2 0 0 k –k 1 0 0 0 x 0 1 0 x2 1 + 0 ( u + w1 ) + · 0 x1 1 0 x 0 · 2 0 0 w 2 0 1 (318) A solution of this problem proposed in [15] is the fourthorder controller u = CK(sI – AK)–1 BKx2 where 0 0 AK = – 2.419 C K = – 1.0].0081 0.5133 1. 2.7287 – 0. x1 u k w1 m1 m2 w2 x2 Figure 36: Benchmark problem The goal is to design an outputfeedback law u = K(s)x2 that adequately rejects the disturbances w1. For m1 = m2 = 1.7195 1 0 0.
C1.5.0). 2].1 1 0 0]. 1 1 0 0.K) The result Cl is a parameterdependent description of the closedloop system.B0.943.5.[1 0 0]) marg = 4. That is.0 0 0 0].25).0 0 0 0.[0. 2] (with respect to the center 1. A0=[0 0 1 0.1. Since quadratic stability assumes arbitrarily fast time variations of the parameter k.C0.S1]) After entering the controller data as a SYSTEM matrix K. C0=[0 1 0 0].4).Example The concern here is to assess. % k component pv=pvec('box'. B0=[0. C1=zeros(1. 1. we can define the uncertain physical system as an affine parameterdependent system G with psys. D0=0.1919e 01 The value marg = 0. for this particular controller.0].0 0 0 1. D1=0.0 0 0 0. To test stability for k ∈ [0. To estimate the robust stability margin with respect to k. the closedloop stability margin with respect to the uncertain real parameter k.557].B1. Since the plant equations (318) depend affinely on k.5. type marg = quadstab(Cl. Cl = slft(G. % system for k=0 A1=[0 0 0 0.D1.419 means 41% of [0.0. close the loop with slft. we can now apply to Cl the various tests described in this chapter: • Quadratic stability: to determine the portion of the interval [0.5 2]) % range of values of k G=psys(pv. quadratic stability in the interval [0.[S0. • Parameterdependent Lyapunov functions: when k does not vary in time.1). S0=ltisys(A0.D0). S1=ltisys(A1. a less conservative estimate is provided by pdlstab. B1=zeros(4. we can expect this answer to be conservative when k is time invariant. 2] where quadratic stability holds. type 329 .
5 2]. [P0.[S0.75 Here P0 is the closed loop system for the nominal value k0 = 1.3 Robustness Analysis t = pdlstab(Cl) t = 2. Assume now that k slowly varies with a rate of variation bounded by 0.1721e 01 Since t < 0.[0. this level of time variations does not destroy robust stability. To get the relative parameter margin.K) Then call pdlstab as earlier. type 330 .1]) G1 = psys(pv1.25 of k and the uncertainty on k is represented as a real scalar block deltak.[ 0.S1]) Cl1 = slft(G1. redefine the parameter vector and update the description of the closedloop system by pv1 = pvec('box'. t = pdlstab(Cl1) t = 2.1 0. To test if the closed loop remains stable in the face of such slow variations.1. • µ analysis: to perform µ analysis. Note that k must be assumed time invariant in the µ framework. the closed loop is robustly stable for this range of values of k.deltak] = aff2lft(Cl) uinfo(deltak) block 1 dims 1x1 type LTI real/cplx r full/scal s bounds norm <= 0. first convert the affine parameterdependent model Cl to an equivalent linearfractional uncertainty model.0089e 02 Since t is again negative.
2. 0.7820e 01i 9.peakf] = mustab(P0.P.7484e 01 + 1.81)) ans = 1.N] = popov(P0.75 × 1.) is any memoryless nonlinearity with gain less than 0.9208e+00i 1.2324e 03 6.) where δk(.44: spol(slft(P0.3767e 02 This test is also successful since t < 0.05].7820e 01i + 6.0512e+00 1. for k = k0 – 0. This estimate is sharp since the closed loop becomes unstable for δk = –0.2324e 03 2. 331 .deltak) t = 1. Note that the Popov criterion also proves stability for k = k0 + δk(.81 = 0.25 does not exceed 0. that is.0160e+00 1.9756e 01i • Popov criterion: finally.3959e 01 The value pmarg = 1.0512e+00 6.45.0670e+00 peakf = 6.81.9208e+00i + 9.064 means stability as long as the deviation δk from k0 = 1.Example [pmarg. as long as k remains in the interval [0.9756e 01i 1.D.7484e 01 2. we can apply the Popov criterion to the linearfractional model returned by aff2lft [t.deltak) pmarg = 1.75.0160e+00 1.3884e 01i 6.3884e 01i + 1.067 ≈ 0. That is.80.
B. A.. [12] Safonov. V. Conf. P.“ParameterDependent Lyapunov Functions. Contr. [4] Fan. 129. “Absolute Stability of Nonlinear Systems of Automatic Control.” IEEE Trans. Berstein.” Proc. pp... [9] How.G. and Q.. Aut. 22 (1962). Conf. “SProcedure for the Analysis of Control Systems with Parametric Uncertainties via ParameterDependent Lyapunov Functions. 36 (1991). Yang. 1983. 71–109. Tits..R. J.R. M.” Proc. 705–708. Dec. Contr. and P. and its Applic.. Conf. P.P. Hall. 848–850. Chilali. Louis.. and P. “The Complex Structured Singular Value... J. Belanger. 1991. pp.. pt. St. Constant Real Parameter Uncertainty.” IEEE Trans.L. on Contr.“Robustness in the Presence of Mixed Parametric Uncertainty and Unmodeled Dynamics. Contr. Dec. “Affine ParameterDependent Lyapunov Functions for Real Parametric Uncertainty.K.. pp. and M. Contr. [8] Horisberger. [10] Packard. pp. 2274–2279 and 2632–2633. pp. Dec. Contr. Contr..S..M. “L1 Optimal Sensitivity vs. 1084–1089.C.. [11] Popov.” Third SIAM Conf. 242–250. and J. 1993. Missouri. H. Amer. Aut. Apkarian. 857–875. 1994. W. Contr. 332 . pp. [2] Boyd. Doyle. 29 (1994)..C.H. pp.” Int.P. pp.” Automatica.. M. AC–21 (1976). [6] Gahinet. Aut. 2215–2240. and the Popov Criterion in Robust Analysis and Synthesis: Part 1 and 2.. D (1982). Conf.” Proc. Doyle.” IEE Proc. A.” IEEE Trans. Gahinet. 25–38. [5] Feron. Stability Margin. S. 2026–2031. and D. and J.C.M. P. “Analysis of Feedback Systems with Structured Uncertainties. 49 (1989). Apkarian. pp. pp.R. “Stabilization of Uncertain Systems via Linear Control.. “Connection between the Popov Stability Criterion and Bounds for Real Parameter Uncertainty..” Proc.. “Regulators for Linear TimeVarying Plants with Uncertain Parameters.” Automation and Remote Control. and S. “Structured and Simultaneous Lyapunov Functions for System Stability Problems. Contr. [7] Haddad. 1995. vol. E.3 Robustness Analysis References [1] Barmish. [3] Doyle. AC–28 (1983). J.
Byun. M. 143–174. 1994. 1057–1059. “Robust H∞ Control Synthesis Method and Its Application to Benchmark Problems. “Benchmark Problem for Robust Control Design.. Berstein. 5–16.. Contr. 15 (1992). [15] Wie. and J. C. Part I and II. pp.” IEEE Trans. “Beyond Singular Values and Loop Shapes. Guidance. B. Guidance and Contr. “Let's Get Real. [14] Wie. Springer Verlag.C. pp 1140–1148. [16] Young. 333 . M.. Newlin. 228–238 and 465–476. 14 (1991)..” J. B. Liu. Guidance and Contr. and J.. Doyle. P.References [13] Stein. “On the InputOutput Stability of TimeVarying Nonlinear Feedback Systems. Doyle. and K. [17] Zames. Aut.. pp. P. AC–11 (1966). G.S.” in Robust Control Theory.” J. pp.. and D. pp. G.” J.W. Q. 15 (1992).
3 Robustness Analysis 334 .
. . . . . 43 . . . . . 47 Extension to the MultiModel Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 The Function msfsyn . . . . 411 Design Example . . . 45 LMI Formulation . . . . 418 . . .4 StateFeedback Synthesis MultiObjective StateFeedback Pole Placement in LMI Regions . . . . . . . . . . . . . . 413 References . . . . . . . . . . . . . . . . . . .
the LMI Control Toolbox offers tools for statefeedback design with a combination of the following objectives: • H∞ performance (for tracking. or robustness aspects) • H2 performance (for LQG aspects) • Robust pole placement specifications (to ensure fast and welldamped transient responses.. disturbance rejection. and transient behaviors are more easily tuned in terms of l1 norm or closedloop damping. 42 . etc.4 StateFeedback Synthesis In many control problems. i. While some tracking and robustness are best captured by an H∞ criterion. These various objectives are rarely encompassed by a single synthesis criterion. noise insensitivity is more naturally expressed in LQG terms.) These tools apply to multimodel problems. reasonable feedback gain.e. when the objectives are to be robustly achieved over a polytopic set of plant models. the design specifications are a mix of performance and robustness objectives expressed both in the time and frequency domains. The LMI framework is particularly well suited to multiobjective statefeedback synthesis. As an illustration.
The plant P(s) is a given LTI system and we assume full measurement of its state vector x. u 2 2 43 . n z ∞ = e. For instance. respectively. This abstract formulation encompasses many practical situations. The control structure is depicted by Figure 41. and let e denote the regulation error. consider a regulation problem with disturbance d and white measurement noise n. we describe the underlying problem in the case of a single LTI model. w P(s) z∞ z2 u K x Figure 41: Statefeedback control Denoting by T∞(s) and T2(s) the closedloop transfer functions from w to z∞ and z2. Setting w = d . our goal is to design a statefeedback law u = Kx that • Maintains the RMS gain (H∞ norm) of T∞ below some prescribed value γ0 > 0 • Maintains the H2 norm of T2 (LQG cost) below some prescribed value ν0 > 0 • Minimizes an H2/H∞ tradeoff criterion of the form α T∞ ∞ + β T2 2 • Places the closedloop poles in a prescribed region D of the open lefthalf plane. z2 = x . For simplicity.MultiObjective StateFeedback MultiObjective StateFeedback The function msfsyn performs multimodel H2/H∞ statefeedback synthesis with pole placement constraints.
4 StateFeedback Synthesis the mixed H2/H∞ criterion takes into account both the disturbance rejection aspects (RMS gain from d to e) and the LQG aspects (H2 norm from n to z2 ). 44 . the closedloop poles can be forced into some sector of the stable halfplane to obtain welldamped transient responses. In addition.
strips. [ S ij ] 1 ≤ i .j ≤ m denote the entries of the matrices L and M. conics. More practically. disks. The class of LMI regions is fairly general since its closure is the set of convex regions symmetric with respect to the real axis. if { λ ij } 1 ≤ i . . Specifically. etc.j ≤ m and { µ ij } 1 ≤ i . Another strength of LMI regions is the availability of a “Lyapunov theorem” for such regions.j ≤ m < 0 with the notation S … S i1 11 . a matrix A has all its eigenvalues in D if and only if there exists a positive definite matrix P such that [3] [ λ ij P + µ ij AP + µ ji PA ] 1 ≤ i . as well as any intersection of the above. LMI regions include relevant regions such as sectors. The matrixvalued function f D ( z ): = L + Mz + M z is called the characteristic function of the region D.j ≤ m := S m1 … S mm … T T Note that this condition is an LMI in P and that the classical Lyapunov theorem corresponds to the special case fD ( z ) = z + z Next we list a few examples of useful LMI regions along with their characteristic function fD: … 45 ..Pole Placement in LMI Regions Pole Placement in LMI Regions The concept of LMI region [3] is useful to formulate pole placement objectives in LMI terms. LMI regions are convex subsets D of the complex plane characterized by D = { z ∈ C : L + Mz + M z < 0 } T where M and L = LT are fixed real matrices..
( z + z ) – cos . 0) and radius r: fD ( z ) = –r z + q z + q –r r –q • Conic sector centered at the origin and with inner angle θ: θ θ θ sin .( z – z ) sin .4 StateFeedback Synthesis • Disk with center at (–q. 2 • Vertical strip h1 < x < h2: h1 h2 2h – ( z + z ) 0 1 fD ( z ) = 0 ( z + z ) – 2h 2 46 .( z – z ) 2 2 θ The damping ratio of poles lying in this sector is at least cos ..( z – z ) 2 2 fD ( z ) = θ θ cos .
the closedloop system is given in statespace form by · x = z∞ = z = 2 ( A + B 2 K )x + B 1 w ( C 1 + D 12 K )x + D 11 w ( C 2 + D 22 K )x Taken separately. our three design objectives have the following LMI formulation: • H∞ performance: the closedloop RMS gain from w to z∞ does not exceed γ if and only if there exists a symmetric matrix X∞ such that [5] T T ( A + B 2 K )X ∞ + X ∞ ( A + B 2 K ) B 1 X ∞ ( C 1 + D 12 K ) T T B1 –I D 11 2 ( C 1 + D 12 K )X ∞ D 11 –γ I <0 X∞ > 0 • H2 performance: the closedloop H2 norm of T2 does not exceed ν if there exist two symmetric matrices X2 and Q such that 47 .LMI Formulation LMI Formulation Given a statespace realization x = · z∞ = z = 2 Ax + B 1 w + B 2 u C 1 x + D 11 w + D 12 u C 2 x + D 22 u of the plant P.
X2 and Xpol.4 StateFeedback Synthesis T ( A + B 2 K )X 2 + X 2 ( A + B 2 K ) B 1 <0 T B1 –I ( C 2 + D 22 K )X 2 Q T X2 X 2 ( C 2 + D 22 K ) >0 2 Trace ( Q ) < ν (see “LQG performance” on page 17 for details) • Pole placement: the closedloop poles lie in the LMI region D = { z ∈ C : L + Mz + M T z < 0 } where L = L = { λ ij } 1 ≤ i . we seek a single Lyapunov matrix X := X ∞ = X 2 = X pol T (41) that enforces all three objectives. K.j ≤ m T M = { µ ij } 1 ≤ i . this leads to the following suboptimal LMI formulation of our multiobjective statefeedback synthesis problem [4. 3.j ≤ m if and only if there exists a symmetric matrix Xpol satisfying [ λ ij X pol + µ ij ( A + B 2 K )X pol + µ ij X pol + µ ji X pol ( A + B 2 K ) ] 1 ≤ i . X∞. With the change of variable Y := KX. 2]: 48 .j ≤ m < 0 X pol > 0 These three sets of conditions add up to a nonconvex optimization problem with variables Q. For tractability in the LMI framework.
X. Q. and γ2 satisfying T T T T T T AX + XA + B 2 Y + Y B 2 B 1 XC 1 + Y D 12 T T B1 –I D 11 2 C 1 X + D 12 Y D 11 –γ I Q C 2 X + D 22 Y T T T X XC 2 + Y D 22 <0 (42) >0 (43) [ λ ij + µ ij ( AX + B 2 Y )X pol + µ ji ( XA + Y B 2 ) ] 1 ≤ i . Extension to the MultiModel Case The LMI approach outlined above extends to uncertain linear timevarying plants 49 . Y*.j ≤ m < 0 Trace ( Q ) < ν 0 γ < γ0 Denoting the optimal solution by (X*. the corresponding statefeedback gain is given by K* = Y*(X* )–1 and this gain guarantees the worstcase performances: T ∞ ∞ ≤ γ*. γ* ).LMI Formulation Minimize α γ2 + β Trace(Q) over Y. T 2 2 ≤ Trace ( Q* ) 2 2 2 T T T (44) (45) (46) Note that K* does not yield the globally optimal tradeoff in general due to the conservatism of Assumption (41). Q*.
4 StateFeedback Synthesis · x = P: z ∞ = z = 2 A ( t )x + B 1 ( t )w + B 2 ( t )u C 1 ( t )x + D 11 ( t )w + D 12 ( t )u C 2 ( t )x + D 22 ( t )u with statespace matrices varying in a polytope: A(t) B (t) B (t) A k B 1k B 2k 1 2 C ( t ) D ( t ) D ( t ) ∈ Co C D 1k 11k D 12k 11 12 1 C (t) 0 D 22 ( t ) C 2k 0 D 22k 2 · : k = 1. … . K Such polytopic models are useful to represent plants with uncertain and/or timevarying parameters (see “Polytopic Models” on page 214 and the design example below). Q. and γ2 subject to T T T T T T A k X + XA k + B 2k Y + Y B 2k B 1k XC 1k + Y D 12k T T –I D 11k B 1k 2 C 1k X + D 12k Y D 11k –γ I Q C 2k X + D 22k Y T T T X XC 2k + Y D 22 k T T T <0 >0 [ λ ij + µ ij ( A k X + B 2k Y ) + µ ji ( XA k + Y B 2k ) ] 1 ≤ i . Seeking a single quadratic Lyapunov function that enforces the design objectives for all plants in the polytope leads to the following multimodel counterpart of the LMI conditions (42)–(46): Minimize α γ2 + β Trace(Q) over Y.j ≤ m < 0 Trace ( Q ) < ν 0 γ < γ0 2 2 2 410 . X.
ν0. Use lmireg to generate the matrix region. K is the statefeedback gain. and Pcl is the closedloop system in SYSTEM matrix or polytopic model format.Pcl] = msfsyn(P. msfsyn computes a suboptimal solution to the mixed problem: Minimize α T ∞ 0 + β T 2 2 subject to • T∞∞ < γ0 • T2 2 < ν0 • The closedloop poles lie in D. gopt and h2opt are the guaranteed H∞ and H2 performances. α. The dimensions of the D22 matrix are specified by r • obj = [γ0.K. M] if the characteristic matrices L and M are readily available. The pole placement objectives are expressed in terms of LMI regions D = { z ∈ C : L + Mz + Mz < 0 } characterized by the two matrices L and M. The syntax is [gopt.obj. β] is a fourentry vector specifying the H2/H∞ criterion • region specifies the LMI region to be used for pole placement.region) 2 2 where • P is the plant SYSTEM matrix in the singlemodel case. LMI regions are specified interactively with the function lmireg. or the polytopic/parameterdependent description of the plant in the multimodel case (see psys for details). 411 . the default being the open lefthalf plane. On output.h2opt. Denoting the closedloop transfer functions from w to z∞ and z2 by T∞ and T2.r.The Function msfsyn The Function msfsyn The function msfsyn implements the LMI approach to multimodel H2/H∞ synthesis outlined above. or set it to [L.
obj [0 0 0 0] [0 0 1 0] [0 0 0 1] [g 0 0 1] Corresponding Design pole placement only H∞optimal design H2optimal design minimize T22 subject to T∞∞ < g minimize T∞∞ subject to T22 < h minimize a T ∞ + b T 2 ∞ 2 2 2 [0 h 1 0] [0 0 a b] 412 .4 StateFeedback Synthesis Several mixed or unmixed designs can be performed with msfsyn. The various possibilities are summarized in the table below.
The boom is modeled as a spring with torque constant k and viscous damping f and finiteelement analysis gives the following uncertainty ranges for k and f : 0. θ1 main body sensor module θ2 Figure 41: Satellite The control purpose is to minimize the influence of the disturbance w on the angular position θ2. and w is a torque disturbance on the main body. The system is a satellite consisting of two rigid bodies (main body and instrumentation module) joined by a flexible link (the “boom”). T is the control torque.04: The dynamical equations are ·· · · J1 θ1 + f ( θ1 – θ2 ) + k ( θ1 – θ2 ) = T + w ·· · · J2 θ2 + f ( θ2 – θ1 ) + k ( θ2 – θ1 ) = 0 where θ1 and θ2 are the yaw angles for the main body and the sensor module.0038 ð k 0.09 ð 0. This goal is expressed through the following objectives: 413 .4 f 0.Design Example Design Example This example is adapted from [1] and covered by the demo sateldem.
Since these parameters enter the plant state matrix in an affine manner. we can model the parameter uncertainty by a polytopic system with four vertices corresponding to the four combinations of extremal parameter values (see “From Affine to Polytopic Models” on page 220).1 Figure 42: Pole placement region • Achieve these objectives for all possible values of the varying parameters k and f.4 StateFeedback Synthesis • Obtain a good tradeoff between the RMS gain from w to θ2 and the H2 norm of the transfer function from w to θ 1 θ 2 T (LQG cost of control) • Place the closedloop poles in the region shown in Figure 43 to guarantee some minimum decay rate and closedloop damping –0. 414 .
b.1 1] zeros(2)] [zeros(2. A statespace description is readily derived from the dynamical equations as: 1 0 0 0 0 1 0 0 0 0 J1 0 · θ1 0 0 · θ2 = 0 0 ·· –k k θ1 k –k 0 0 0 J 2 ·· θ2 1 0 –f f θ1 0 0 1 + θ2 + 0 ( w + T ) · f 0 θ1 –f 1 · θ2 θ1 z∞ = θ2 .0*b.0) ]) Next.0 0] % b = [b1 b2] c = [0 1 0 0.0 0 0 0] % c = [c1.0) .1 1. ltisys(ak.04]) % parameterdependent plant P = psys(pv. This is done interactively with the function lmireg: 415 .1 0 0 0.. . .0038 0.0*d. 0.09 0.0 1 0 0..0*c.d.4)] [zeros(2.e0) .1 1]] diag([1 1 J1 J2]) b = [0 0.[ ltisys(a0.0*b.0*c. ltisys(af. first specify the plant as a parameterdependent system with affine dependence on k and f. 1 0 0 0 θ 0 2 + 0 T z2 = 0 1 0 0 · θ 0 0 0 0 1 1 · θ2 This parameterdependent model is entered by the commands a0 ak af e0 = = = = [zeros(2) eye(2). specify the LMI region for pole placement as the intersection of the halfplane x < –0.c2] d = [0 0..0 0.4) .0 0.c. [1 1. zeros(2. zeros(2) [1 1.0 0.0*d..[0.0 1] % range of parameter values pv = pvec('box'.4) .1 and of the sector centered at the origin and with inner angle 3π/4.Design Example To solve this design problem with the LMI Control Toolbox.4 .
region) This yields gopt ≈ 0.2 0. For this choice of K.1 0.35 0. 0. first compute the optimal quadratic H∞ performance subject to the pole placement constraint by gopt = msfsyn(P. By inspection of this curve.5 6 5. Figure 45 superimposes the impulse responses from w to θ2 for the four combinations of extremal values of k and f. 6. the best H2 performance h2opt is computed by [gopt.[1 1].01.4 StateFeedback Synthesis region = lmireg To assess the tradeoff between the H∞ and H2 performances.2.05 0.15 0. 0.1. Finally.5 Figure 43: Tradeoff between the H∞ and H2 performances 416 . Repeating this operation for the values g ∈ {0. Note that they are robustly placed in the prescribed LMI region. For a prescribed H∞ performance g > 0.[g 0 0 1]. the statefeedback gain K obtained for g = 0.K.5 3 2.h2opt.[0 0 1 0]. the closedloop poles for these four extremal combinations are displayed in Figure 46.5 0 0.region) Here obj = [g 0 0 1] asks to optimize the H2 performance subject to T∞∞ < g and the pole placement constraint.5 H2 performance 5 4.3 H−infinity performance 0.Pcl] = msfsyn(P.5} yields the Paretolike tradeoff curve shown in Figure 44.25 0.[1 1].4 0.5 4 3. 0.1 yields the best compromise between the H∞ and H2 objectives.45 0.
Design Example 12 x 10 −3 Impulse responses w −> x2 for the robust design 10 8 Amplitude 6 4 2 0 −2 0 1 2 3 4 5 6 Time (secs) 7 8 9 10 Figure 44: Impulse responses for the extremal values of k and f Closed−loop poles for the four extremal sets of parameter values 10 8 6 4 2 0 −2 −4 −6 −8 −10 −14 −12 −10 −8 −6 −4 −2 0 Figure 45: Corresponding closedloop poles 417 .
1994. Aut. 1994.M. Aut. [3] Chilali.4 StateFeedback Synthesis References [1] Biernacki. P. H.A. V.. Feron. L. Gahinet. El Ghaoui. Dec. AC–32 (1987). [2] Boyd.“Mixed H2/H∞ Control: A Convex Optimization Approach.. Opt. Linear Matrix Inequalities in Systems and Control Theory.” IEEE Trans.” SIAM J. SIAM books.. S. Contr.. Contr. 553–558.P. Contr. Also in Proc. “Robust Stability with Structured Real Parameter Perturbations.” to appear in IEEE Trans. pp.. 824837. “H∞ Optimization without Assumptions on Finite or Infinite Zeros. and P. Battacharyya. Aut. Hwang. 418 . and M. pp. pp. 30 (1992). Balakrishnan. C. [4] Khargonekar.. [5] Scherer.. Contr.. and S.P. Conf.” IEEE Trans. E. R. 495–506. M. pp. 39 (1991). “H∞ Design with Pole Placement Constraints: an LMI Approach. Rotea. 143–166.. Contr.
. . . . . . . . . . . . . . . . . . . . . . . . . . 510 Validation of the ClosedLoop System . . . . . .5 Synthesis of H∞ Controllers H∞ Control . . . . . . . . . . . . . . . . . . . . . . 53 Riccati. . 57 H∞ Synthesis . . . 520 . . . . LMI Formulation . . . . . . . . . . . . . . . . 516 . . . . . LoopShaping Design with hinfmix . 515 . . . . . . . . . . . 522 521 . . . . . . . . . . . . . . . . . . . . The Function hinfmix . 520 References . . . . . . . . . . . . . . . . . . . . . . .and LMIBased Approaches . . 513 MultiObjective H∞ Synthesis . .
5 Synthesis of H∞ Controllers Disturbance rejection. H∞ control is the cornerstone of design techniques for this class of problems. Both the Riccatibased and LMIbased solutions of this problem are implemented • Multiobjective H∞ design (mixed H2/H∞ synthesis with closedloop pole placement in LMI regions) • Discretetime H∞ control via either Riccati. input/output decoupling. and general robust stability or performance objectives can all be formulated as gain attenuation problems in the face of dynamical or parametric uncertainty. “Loop Shaping. The LMI Control Toolbox addresses three variants of outputfeedback H∞ synthesis: • Standard H∞ control for continuoustime systems.or LMIbased approaches Additional facilities for loopshaping design are described in Chapter 6.” 52 .
. This norm also corresponds to the peak gain of the frequency response G(jω).H• Control H∞ Control The H∞ norm of a stable transfer function G(s) is its largest input/output RMS gain. w P(s) y K(s) z u Figure 51: H∞ control Partitioning the plant P(s) as P (s) P (s) Z(s) 12 = 11 P (s) P (s) Y(s) 22 21 W(s) . G ∞ = sup σ max ( G ( jω ) ) ω In its abstract “standard” formulation. G ∞ = y L2 sup u L u∈L u≠0 2 2 where L2 is the space of signals with finite energy and y(t) is the output of the system G for a given input u(t). i. Specifically. U(s) 53 .e. the H∞ control problem is one of disturbance rejection. that is. it consists of minimizing the closedloop RMS gain from w to z in the control loop of Figure 51. This can be interpreted as minimizing the effect of the worstcase disturbance w on the output z.
A number of control problems can be recast in this standard formulation. Alternatively. Since the RMS gain is the largest gain over all squareintegrable inputs. K ) := P11 + P 12 K ( I – P22 K ) –1 P21 (51) Hence the optimal H∞ control seeks to minimize F(P. Consider the following disturbance rejection problem relative to the tracking loop of Figure 52: For disturbances d with spectral density concentrated between 0 and 1 rad/s. K)∞ < γ ? This is known as the suboptimal H∞ control problem.1. maintain the closedloop RMS gain from d to the output y below 1%.1] is described as ˜ d ( s ) = W l p ( s )d ( s ) ˜ where d is an arbitrary squareintegrable signal and Wlp(s) is a lowpass filter with cutoff frequency at 1 rad/s.5 Synthesis of H∞ Controllers the closedloop transfer function from w to z is given by the linearfractional expression F ( P. we can specify some maximum value γ for the closedloop RMS gain and ask the following question: Does there exist a stabilizing controller K(s) that ensures F(P. K)∞ over all stabilizing LTI controllers K(s). a disturbance d with all its energy in the d r − + e K u G + y Figure 52: Tracking loop with external disturbance frequency band [0. For instance. From the closedloop equation 54 . external signals with uneven spectral density must be specified by means of shaping filters. Example 5. and γ is called the prescribed H∞ performance.
2. Hence our disturbance rejection objective is equivalent to finding a stabilizing controller K(s) such that S Wlp ∞ < 1. This RMS gain constraint is readily turned into a standard H∞ problem by observing that.H• Control y = Sd + Tr with S := (I + GK)–1 and T := GKS. For instance. ˜ the transfer function from the equalized disturbance d to y is SWlp . in conformance with (51). Decoupling constraints are handled in a similar fashion. s + 10 ∞ 55 . K ) W ( s ) –G ( s ) with Ps := l p W ( s ) –G ( s ) lp Example 5. The closedloop transfer function from r 1 r2 to y 1 y 2 is T = GK(I + GK)–1. SW l p = F ( P. This can be expressed as 10 . consider the loop of Figure 53 and the problem of decoupling the action of r1 on y1 from that of r2 on y2 in the control bandwidth ω ð 10 rad/s. Decoupling is achieved if T(jω) ≈ I in the bandwidth ω ð 10 rad/s. or equivalently if S(jω) = I – T(jω) has a small gain in this bandwidth.S ( s ) < ε.
) is an arbitrary BIBOstable system satisfying the norm bound ∆∞ < 1. Consider the uncertain system ∆( · ) w u z P(s) y where the uncertainty ∆(.5 Synthesis of H∞ Controllers r1 r2 + − + − u K(s) G(s) y1 y2 Figure 53: Decoupling problem which is again of the standard form 10 10 . the controller u = K(s)y robustly stabilizes this uncertain system if the nominal closedloop transfer function F (P. K)∞ < 1. H∞ optimization is also useful for the design of robust controllers in the face of unstructured uncertainty. From the small gain theorem.G ( s ) F ( P. Example 5. Finally. a useful application of H∞ synthesis is the loop shaping design procedure discussed in the next chapter. K) from w to z satisfies F (P.3.I – . K ) ∞ < ε with P ( s ) := s + 10 2 s + 10 I2 –G ( s ) . 56 .
(A2) P12(s) and P21(s) have no zeros on the jωaxis.and LMIBased Approaches Since the continuous. K)∞ < γ Specifically. we review only continuoustime H∞ synthesis. the plant P(s) is given in statespace form by · x = Ax + B 1 w + B 2 u C 1 x + D 11 w + D 12 u C 2 x + D 21 w + D 22 u z = y = The Riccatibased approach is applicable to plants P satisfying (A1) D12 and D21 have full rank. Given γ > 0. Since both approaches are statespace methods. the H∞ performance γ is achievable if and only if [2]: (i) γ > max ( σ max ( I – D 12 D 12 )D 11 ). it gives necessary and sufficient conditions for the existence of internally stabilizing controllers K(s) such that F (P. While the LMIbased approach is computationally more involved for large problems. Both the Riccatibased and LMIbased approaches are implemented in the LMI Control Toolbox. σ max ( D 11 ( I – D 21 D 21 ) ) ) + + 57 .and discretetime cases are essentially similar.H• Control Riccati. it has the merit of eliminating the regularity restrictions attached to the Riccatibased solution.
Y∞. Singular H∞ problems (those violating (A1)–(A2)) require regularization by small perturbation. Rather. and the plant statespace matrices. respectively (iii) X∞ and Y∞ further satisfy X∞ Š 0. Y∞ Š 0. A notable exception is the direct computation of the optimal H∞ performance when D12 or D21 is singular [5]. the H∞ performance is directly optimized by solving the following LMI problem [4. ρ(X∞Y∞) < γ2 Using this characterization. 6]: 58 . the LMI approach is applicable to any plant and does not involve γiterations. the optimal H∞ performance γopt can be computed by bisection (the socalled γiterations). An H∞ controller with performance γ ≥ γopt is then given by explicit formulas involving X∞.5 Synthesis of H∞ Controllers (ii) the Riccati equations associated with the Hamiltonian pencils H – λε = J – λε = A 0 C1 0 –A 0 –1 T T 0 γ B1 0 A T B2 0 T –1 γ B1 B2 T –C1 0 0 I 0 –1 – I γ D 11 D 12 – λ 0 0 –1 T γ D 11 – I 0 T D 12 0 0 0 0 0 B1 0 0 T T –1 T γ C1 C2 –A –B1 0 0 I 0 T –1 T 0 – I γ D 11 D 21 – λ 0 0 –1 –1 γ C 1 γ D 11 – I 0 D 21 0 0 C2 have stabilizing solutions X∞ and Y∞. In comparison.
59 . D21). This problem falls within the scope of the LMI solver mincx. Note that the LMI constraints (52)–(53) amount to the inequality counterpart of the H∞ Riccati equations. D 12 ) and (C2.H• Control Minimize γ over R = RT and S = ST such that T N 12 0 0 I T T AR + RA RC 1 B 1 C1 R – γI D 11 T T B1 D 11 – γI T T A S + SA SB 1 C 1 T T B1 S – γI D 11 C1 D 11 – γI N12 0 0 <0 I (52) N21 0 0 I T N 21 0 0 <0 I (53) R I ≥0 I S T T (54) where N12 and N21 denote bases of the null spaces of ( B 2 . γ) of (52)–(54) [3]. S. explicit formulas are available to derive H∞ controllers from any solution (R. R–1 and S–1 being solutions of these inequalities. Again. respectively.
d21 d22]) Note that the plant P(s) is “singular” since D12 = 0. and their discretetime counterparts) • Direct computation of the optimal H∞ performance when D12 and D21 are rankdeficient (without using regularization) [5] • Automatic regularization of singular problems for the controller computation. P=ltisys(a. Original features of the Riccatibased functions include: • The use of pencilbased Riccati solvers for highly accurate computation of Riccati solutions (see care. d22=0. d12=0. reducedorder controllers are computed whenever the matrices I – γ–2X∞Y∞ or I – RS are nearly rankdeficient. c2= 1. b2=2.c2].[b1 b2]. c1=1. Table 51: Functions for H∞ synthesis H∞ synthesis Riccatibased hinfric dhinfric LMIbased hinflmi dhinflmi Continuoustime Discretetime To illustrate the use of hinfric and hinflmi. d11=0.[c1. b1=1.[d11 d12. consider the simple firstorder plant P(s) with statespace equations · x = w + 2u z = x y = –x+w This plant is specified as a SYSTEM matrix by a=0. d21=1.or LMIbased approaches. 510 .5 Synthesis of H∞ Controllers H∞ Synthesis The LMI Control Toolbox supports continuous. In addition. ricpen. Transparency and numerical reliability were primary concerns in the development of the related tools.and discretetime H∞ synthesis using either Riccati. The functions for standard H∞ synthesis are listed in Table 51.
5000 3. The γiterations performed by hinfric are displayed on the screen as follows: GammaIteration: Gamma 10. Alternatively.0703 1.[1 1]) This function optimizes the H∞ performance using mincx: 511 .H• Synthesis To determine the optimal H∞ performance over stabilizing control laws u = K(s)y.0176 1.008789 The optimal H∞ performance for P(s) is therefore approximately 1.0088 : : : : : : : : : : : : Diagnosis feasible infeasible (Y is not positive semidefinite) feasible feasible feasible feasible feasible feasible feasible feasible feasible feasible Best closedloop gain (GAMMA_OPT): 1.1250 1.0000 1.2500 2.[1 1]) where the second argument [1 1] specifies the numbers of measurements and controls (lengths of the vectors y and u). you can use the LMIbased function hinflmi with the same syntax: gopt = hinflmi(P.1406 1.0000 5. For each tested value of γ.2812 1. hinfric gives a feasible/infeasible diagnosis as well as the cause for infeasibility when applicable. type gopt = hinfric(P.0352 1.5625 1.
656673 0.094665 1. The optimal value 1.K] = hinfric(P.008499 1.878172 0.993382 0.0025 approximately matches the value found by hinfric.11e 03 fradius saturation: 0.002517 new lower bound: 1. When a second output argument is provided.381964 1. these two functions also return an optimal H∞ controller K(s): [gopt.002517 new lower bound: 0.196542 new lower bound: 1.5 Synthesis of H∞ Controllers Minimization of gamma: Solver for linear objective minimization under LMI constraints Iterations : Best objective value so far 1 *** 2 3 4 5 6 7 8 *** 9 *** 10 *** 11 *** 2.002517 guaranteed relative accuracy: 9.052469 1.[1 1]) 512 .571370 Result:feasible solution of required accuracy best objective value: 1.00e+08 The value displayed in the second column is the current best estimate of γopt .002517 new lower bound: 1.008499 new lower bound: 1.381964 1.008499 1.018% of R = 1.982223 0.
Further options and control parameters are discussed in the “Command Reference” chapter.[1.y1. Similarly. the counterpart of these functions for discretetime H∞ synthesis are called dhinfric.10) returns solutions X = x2/x1 and Y = y2/y1 of the H∞ Riccati inequalities.y1.1]. dhinflmi.1].[1 1].H• Synthesis The output K is the SYSTEM matrix of K(s). the closedloop mapping represented in Figure 54 is formed with the function slft: w P(s) y K(s) Figure 54: Closedloop system z u 513 . Finally. Equivalently. the same problem is solved by the command [g.[1 1]. the syntax [g.x1.y2] = hinflmi(P. To compute a suboptimal H∞ controller with guaranteed performance γ < 10.K] = hinflmi(P. Validation of the ClosedLoop System Given the H∞ controller K computed by hinfric or hinflmi.10) also returns the stabilizing solutions X∞ = x2/x1 and Y∞ = y2/y1 of the H∞ Riccati equations for γ = 10.10.[1. type [g.x2.K.10) In this case only the value γ = 10 is tested. R = x1 and S = y1 are solutions of the characteristic LMIs (52)(54) since x2 and y2 are always set to g × I. and dnorminf and follow the exact same syntax.K.x2.10.y2] = hinfric(P.10) Finally.x1.K] = hinfric(P. In the LMI approach. [g.
d21. 514 .b2.loop clsys with splot. Finally.d11.d12.5 Synthesis of H∞ Controllers Closedloop stability is then checked by inspecting the closedloop poles with spol spol(clsys) while the closedloop RMS gain from w to z is computed by norminf(clsys) You can also plot the time. hinfpar extracts the statespace matrices A. . .c1. . B1.d22] = hinfpar(P. from the plant SYSTEM matrix P: [a.b1.c2.and frequencydomain responses of the closed.r) Set the second argument r to [p2 m2] if D22 ∈ Rp2×m2.
pure H∞ synthesis only enforces closedloop stability and does not allow for direct placement of the closedloop poles in more specific regions of the lefthalf plane. Since the pole location is related to the time response and transient behavior of the feedback system. standard H∞ synthesis cannot adequately capture all design specifications. noise attenuation or regulation against random disturbances are more naturally expressed in LQG terms. Mixed H2/H∞ synthesis with regional pole placement is one example of multiobjective design addressed by the LMI Control Toolbox.MultiObjective H• Synthesis MultiObjective H∞ Synthesis In many realworld applications. The control problem is sketched in Figure 55. we consider the following multiobjective synthesis problem: w z∞ P(s) y K(s) z2 u Figure 55: Multiobjective H ∞ synthesis Design an outputfeedback controller u = K(s)y that • Maintains the H∞ norm of T∞(s) (RMS gain) below some prescribed value γ0 > 0 • Maintains the H2 norm of T2(s) (LQG cost) below some prescribed value ν0 > 0 515 . For instance. The output channel z∞ is associated with the H∞ performance while the channel z2 is associated with the LQG aspects (H2 performance). Denoting by T∞(s) and T2(s) the closedloop transfer functions from w to z∞ and z2. This makes multiobjective synthesis highly desirable in practice. Similarly. and LMI theory offers powerful tools to attack such problems. it is often desirable to impose additional damping and clustering constraints on the closedloop dynamics. respectively.
Our three design objectives can be expressed as follows: 516 .5 Synthesis of H∞ Controllers • Minimizes a tradeoff criterion of the form α T∞ ∞ + β T2 2 with α Š 0 and β ≥ 0 • Places the closedloop poles in some prescribed LMI region D Recall that LMI regions are general convex subregions of the open lefthalf plane (see “Pole Placement in LMI Regions” on page 45 for details). 2 2 LMI Formulation Let and · x = Ax + B 1 w + B 2 u C ∞ x + D ∞1 w + D ∞2 u C 2 x + D 21 w + D 22 u C y x + D y1 w z∞ = z2 = y = · ζ = A Kζ + B Ky · u = C Kζ + D Ky be statespace realizations of the plant P(s) and controller K(s). respectively. and let · x cl z∞ z 2 = = = A cl x cl + B cl w C cl1 x cl + D cl1 w C cl2 x cl + D cl2 w be the corresponding closedloop statespace equations.
j ≤ m and M = [ µ ij ] 1 ≤ i .j ≤ m if and only if there exists a symmetric matrix χpol satisfying 517 .MultiObjective H• Synthesis • H∞ performance: the closedloop RMS gain from w to z∞ does not exceed γ if and only if there exists a symmetric matrix X∞ such that T T A cl χ ∞ + χ ∞ A cl B cl X ∞ C cl1 T T B cl –I D cl1 2 C cl1 χ ∞ D cl1 – γ I <0 χ∞ > 0 • H2 performance: the H2 norm of the closedloop transfer function from w to z2 does not exceed ν if and only if Dcl2 = 0 and there exist two symmetric matrices χ2 and Q such that T A cl χ 2 + χ 2 A cl B cl <0 T B cl –I Q C cl2 χ 2 T χ 2 C cl2 χ 2 >0 2 Trace ( Q ) < ν • Pole placement: the closedloop poles lie in the LMI region D = { z ∈ C : L + Mz + M T z < 0 } with L = LT = { λ ij } 1 ≤ i .
CK.5 Synthesis of H∞ Controllers [ λ ij χ pol + µ ij A cl χ pol + µ ji X pol A cl ] 1 ≤ i . 1]. MT 0 0 S χ 2 := I NT and introducing the change of controller variables [3]: B := NB K + SB 2 D K K T C K := C K M + D K C y R T T A K := NA K M + NB K C y R + SB 2 C K M + S ( A + B 2 D K C y )R.j ≤ m < 0 χ pol > 0 For tractability in the LMI framework. This leads to the following suboptimal LMI formulation of our multiobjective synthesis problem: 518 . we must seek a single Lyapunov matrix χ := χ∞ = χ2 = χpol that enforces all three sets of constraints. AK. S. Factorizing χ as χ = χ1 χ2 . and DK [8. BK. the inequality constraints on χ are readily turned into LMI constraints in the variables R. –1 T R I χ 1 := . Q.
519 . the closedloop H∞ and H2 performances are bounded by T ∞ ∞ ≤ γ*. T 2 2 ≤ Trace ( Q* ).j ≤ m TraceQ < ν 0 γ < γ0 2 2 2 D 21 + D 22 D K D y1 = 0 Given optimal solutions γ*. CK. S.MultiObjective H• Synthesis Minimize α γ2 + β Trace(Q) over R. DK. Q* of this LMI problem. AK. BK. and γ2 satisfying: T T T T AK + A + B2 DK Cy B 1 + B 2 D K D y1 H AR + RA + B 2 C K + C K B 2 T T T H A S + SA + B K C y + C y B K SB 1 + B K D y1 H H H –I H 2 C ∞ R + D ∞2 C K C ∞ + D ∞2 D K C y D ∞1 + D ∞2 D K D y1 – γ I <0 >0 Q C R+D C C +D D C 2 22 K 2 22 K y H R I H I S AR + B C A + B D C 2 K 2 K y λ ij R I + µ ij I S AK SA + B K C y + T T T T AK RA + C K B 2 µ ji ( A + B D C )T AT S + CT BT 2 K y y K <0 1 ≤ i . Q.
S are optimal values of the variables R. y.R.K. and u • obj = [γ0.obj.5 Synthesis of H∞ Controllers The Function hinfmix The function hinfmix implements the LMI approach to mixed H2/H∞ synthesis with regional pole placement described above. Note that z2 or z∞ can be empty. α.h2opt. S (see “LMI Formulation” on page 516). β] is a fourentry vector specifying the H2/H∞ constraints and criterion • region specifies the LMI region for pole placement. the closedloop poles also include the poles of the shaping filters. the default being the open lefthalf plane. ν0.S] = hinfmix(P. and R. You can perform the following mixed and unmixed designs by setting obj appropriately: obj [0 0 0 0] [0 0 1 0] [0 0 0 1] [g 0 0 1] Corresponding Design pole placement only H∞optimal design H2optimal design minimize T22 subject to T∞∞ < g minimize T∞∞ subject to T22 < h minimize a T ∞ 0 + b T 2 2 most general problem 2 2 [0 h 1 0] [0 0 a b] [g h a b] LoopShaping Design with hinfmix In the loopshaping context (see next chapter). or even both when performing pure pole placement • r is a threeentry vector listing the lengths of z2.region) where • P is the SYSTEM matrix of the LTI plant P(s). Its syntax is [gopt. K is the controller SYSTEM matrix. Use lmireg to interactively generate the matrix region. Since these poles are not affected by the 520 .r. The outputs gopt and h2opt are the guaranteed H∞ and H2 performances.
K (s) s ˜ and performing the synthesis for K . they impose hard limitations on achievable closedloop clustering objectives. shaping filters with modes close to the imaginary axis will doom any attempt to push the closedloop modes into a welldamped region. When the purpose of such filters is to enforce integral control action. you can eliminate this difficulty by moving the integrator from the shaping filter to the control loop itself.MultiObjective H• Synthesis controller. An example can be found at the end of Chapter 6. 521 . This amounts to reparametrizing the controller as 1˜ K(s) = . In particular.
” IEEE Trans. Contr.. and P.. P. Dec. [4] Gahinet. [7] Scherer. “H∞ Optimization without Assumptions on Finite or Infinite Zeros. Gahinet. Aut. [5] Gahinet. 831–847. Contr. Laub. J.C.. Opt. Lake Buena Vista. Also in Proc. pp.” submitted to Automatica. Contr.. K. Conf. volume of the special contributions to the ECC 1995. Contr. Glover. Fl.. Contr. 1994. and P. 421–448. Aut. pp. “Explicit Controller Formulas for LMIbased H∞ Synthesis. Skelton. pp. “All Controllers for the General H∞ Control Problem: LMI Existence Conditions and StateSpace Formulas. “Reliable Computation of γopt in Singular H∞ Control. AC–34 (1989).” to appear in SIAM J. Contr.” Automatica.” SIAM J. 522 .. Also in Proc...5 Synthesis of H∞ Controllers References [1] Chilali. “H∞ Design with Pole Placement Constraints: an LMI Approach. P. Opt. pp. [6] Iwasaki.. 1995. “StateSpace Solutions to Standard H2 and H∞ Control Problems. 143–166. 30 (1994). J.” to appear in Trends in Control: A European Perspective. Apkarian. Robust and Nonlinear Contr. and R. 1994. [8] Scherer... pp.” to appear in IEEE Trans. B... Amer. T... 1527–1532. 30 (1992). C. Khargonekar. 4 (1994). [2] Doyle. M. 1995. “Mixed H2H∞ Control..” Int. pp. P. 1307–1317. and A..E. Conf.. “A Linear Matrix Inequality Approach to H∞ Control. [3] Gahinet. P. 2396– 2400. C.J.. and Francis.
. . . 614 Controller Synthesis and Validation . . . . . . . . . . . . . . . 624 . . . . . . . . . . . . . . . . . . 612 Specification of the Control Structure . . . . . . . . . . . . . . . .6 Loop Shaping The LoopShaping Methodology . . . . . . 619 References . . . . . . . . . . . . . . . . . . . . . . . 610 Nonproper Filters and sderiv . . . . . . . 62 The LoopShaping Methodology . . . . 63 Design Example . . . 65 Specification of the Shaping Filters . . . . . . . . 616 Practical Considerations . . . . . . . . 618 Loop Shaping with Regional Pole Placement . . . . . . . . . .
3 Specify the control structure. robustness to model uncertainty. This chapter contains a tutorial introduction to the loopshaping methodology as well as the available tools for loopshaping design..6 Loop Shaping The LoopShaping Methodology Loop shaping is a popular technique for combining specifications such as tracking performance. how the feedback loop is organized and which input/output transfer functions are relevant to the loopshaping objectives. the design specifications are reflected as gain constraints on the various closedloop transfer functions. In this frequency domain method. rolloff.e. gain limitation. Loopshaping design involves four steps: 1 Express the design specifications as constraints on the gain response of the closedloop transfer functions. etc. i. 62 . 4 Perform a standard H∞ synthesis on the resulting control structure to compute an adequate controller. This defines ideal gain profiles called loop shapes. 2 Specify the desired loop shapes graphically with the GUI magshape. disturbance rejection. bandwidth.
. To express the openloop specification (61) in terms of these closedloop transfer functions. i. σmin(GK(jω)) >> 1. Thus..e. consider the MIMO tracking loop of Figure 61 and suppose that we want to design a controller with integral behavior. the performance and robustness specifications are first expressed in terms of loop shapes.e. This requirement is naturally expressed in terms of the openloop response GK(s). In loopshaping design. i. To get a feeling for the loopshaping methodology. and standard H∞ synthesis algorithms are applied to compute an adequate controller.The LoopShaping Methodology The LoopShaping Methodology Loop shaping is a design procedure to formulate frequencydomain specifications as H∞ constraints and standard H∞ synthesis problems. for instance by a gain constraint of the form σ min ( GK ( jω ) ) > 1 ⁄ ω for ω « 1 r r = 1 r 2 + − y y = 1 y 2 (61) e K u G Figure 61: Simple tracking loop Here the main closedloop transfer functions are: • The sensitivity function S = (I + GK)–1 (the transfer function from r to e) • The complementary sensitivity function T = GK(I + GK)–1 (the transfer function from r to y). of desired gain responses for the various closedloop transfer functions. observe that σmin(GK(jω)) ≈ 1/σmax(S(jω)) whenever the openloop gain is large. These shaping objectives are then turned into uniform H∞ bounds by means of shaping filters. (61) is equivalent to requiring that σ max ( S ( jω ) ) < ω for ω « 1 (62) 63 .
Here for instance.for ω << 1 ω ω Likewise. 64 . i. σmax(GK(jω)) << 1. the shaping filter 1 W(s) = . (63) is equivalent to 1 . rolloff requirements on GK(jω) are enforced by H∞ bounds of the form W(s)T(s)∞ < 1 where W(s) is some appropriate highpass filter. a useful tool is the notion of shaping filter.e. Such constraints are also associated with robust stability in the face of plant uncertainty of the form G(s) = (I +∆(s))G0(s) where ∆ satisfies the frequencydependent bound σmax(∆(jω)) < W(jω) Finally.– 1 ≈ . This reformulation is rooted in the approximation σmax(GK(jω)) ≈ σmax(T(jω)) valid whenever the openloop gain is small.I2 s can be used to normalize the constraint (62) to W ( s )S ( s ) ∞ < 1 (63) Indeed. ω which in turn constrains GK to 1 1 σmin(GK(jω)) > ..1” on page 54). Shaping filters are rational filters whose magnitude plot reflects the desired loop shape. shaping filters are useful to incorporate information about the spectral density of exogenous signals such as disturbances or measurement noise (see “Example 5.< σ min (1 + GK(jω)) for all ω.6 Loop Shaping To turn this frequencydependent gain constraint into a normalized H∞ bound.
The stiffness K accounts for flexibilities in the coupling between the motor and the antenna. Example 6. The corresponding nominal transfer function from the motor torque u = T to · the angular velocity y = θ a of the antenna is · 30000 G 0 ( s ) = . 2 Specify the shaping filters by their magnitude profile. 4 Solve the resulting H∞ problem with one of the H∞ synthesis functions. or alternatively with Simulink. Type radardem to run the corresponding demo.99s + 30030 ) 65 . This is done interactively with the graphical user interface magshape. A simple tracking problem is used to illustrate this design procedure.Design Example Design Example Loop shaping design with the LMI Control Toolbox involves four steps: 1 Express the design specifications in terms of loop shapes and shaping filters..1. 2 ( s + 0. Figure 62 shows a simplified mechanical model of a radar antenna.02 ) ( s + 0. 3 Specify the control loop structure with the functions sconnect and smult.
6 Loop Shaping · Our goal is to control θ a through the torque T. The control structure is the tracking loop of Figure 63 where the dynamical uncertainty ∆(s) accounts for neglected highfrequency dynamics and flexibilities. Ja θa antenna K Jm motor θm T Figure 62: Secondorder model of a radar antenna The design specifications are as follows: (i)integral control action. A bound on ∆(jω) determined from experimental data is given in Figure 64. (iii)robustness against the neglected highfrequency dynamics represented by the multiplicative model uncertainty ∆(s). (ii)99% rejection of output disturbances d with spectral density in [0. and bandwidth of at least 10 rad/s. G(s) ∆ r + − K G0(s) + d + · y = θ Figure 63: Velocity loop for the radar antenna 66 . openloop gain of at least 30 dB at 1 rad/s. 1] rad/s.
Design Example Frequency−dependent bound on Delta 40 30 20 Magnitude (dB) 10 0 −10 −20 −30 1 10 10 2 10 Frequency (rad/sec) 3 10 4 Figure 64: Empirical bound on the uncertainty ∆(s) We first convert these specifications into RMS gain constraints by means of shaping filters. the disturbance rejection target (ii) imposes S(jω) < 0.01 for ω < 1 rad/s. From the small gain theorem [2]. Both requirements can be combined into a single constraint W 1 ( s )S ( s ) ∞ < 1 (64) where S = 1 = (1 + G0K) and W1(s) is a lowpass filter whose gain lies above the shaded region in Figure 65. robust stability in the face of the uncertainty ∆(s) is equivalent to W 2 ( s )T ( s ) ∞ < 1 (65) 67 . While the performance requirement (i) amounts to a lower bound on the openloop gain.
˜ y 68 . (64)–(65) are replaced by the single RMS gain constraint W1 S W2 T ∞ < 1. For tractability in the H∞ framework. Hence our original design problem is equivalent to the following H∞ synthesis problem: Find a stabilizing controller K(s) that makes the closedloop RMS ˜ gain from r to e less than one.6 Loop Shaping where T = G0K = (1 + G0K) and W2 is a high pass filter whose gain follows the profile of Figure 64. y in the control loop of Figure 66. (66) dB integral action disturbance rejection 40 30 bandwidth 0 1 10 rad/s Figure 65: Magnitude profile for W1(s) W S 1 W2 T The transfer function ˜ ˜ maps r to the signals e .
The next three sections discuss these steps. 69 .Design Example e + − e K y G0 W1 W2 ˜ e ˜ y r u Figure 66: Control structure To perform this H∞ synthesis. 3 Call one of the H∞optimization functions to compute an adequate controller K(s) for this plant. 2 Specify the control structure of Figure 66 and derive the corresponding plant. we need to 1 Compute two shaping filters W1(s) and W2(s) whose magnitude responses match the profiles of Figures 65 and 64.
Clicking on the delete point or move point buttons allows you to delete or move particular points.6 Loop Shaping Specification of the Shaping Filters The graphical user interface magshape allows you to specify shaping filters directly by their magnitude profile. you can move some of the points or change the filter order. The gain of the resulting filter appears in solid on the plot. magshape checks if a SYSTEM matrix of the same name already exists in the MATLAB environment. This profile is sketched with a few characteristic points marking the asymptotes and the slope changes. and then redo the fitting. 610 . To make adjustments. The solid lines show the rational fits obtained with an order 3 for W1 and an order 1 for W2.w2 after the prompt Filter names. This tells magshape to write the filter SYSTEM matrices in MATLAB variables called w1 and w2. To construct the two shaping filters W1(s) and W2(s) involved in our problem: 1 Type w1. Typing magshape brings up the worksheet shown in Figure 67. and its realization is written in the MATLAB variable of the same name. 3 Click on the add point button: you can now specify the desired magnitude profile with the mouse. 2 Indicate which filter you are about to specify by clicking on the corresponding button on the righthand side of the window. specify the filter order and click the fit data button to interpolate the data points by a rational filter. When this is the case. 4 Once the desired profile is sketched. Figure 67 is a snapshot of the magshape window after specifying the profiles of W1 and W2 and fitting the data points (marked by o in the plot). its magnitude plot is drawn when the w1 button is clicked. This is useful to load existing filters in the magshape window for modification or comparison. 5 If w1 is used as a filter name.
41s + 217.31 × 10 3.Specification of the Shaping Filters The transfer functions of these filters (as given by ltitf(w1) and ltitf(w2)) are approximately 0. If you do not use magshape to generate your filters.75 W 2 ( s ) = 100 4 s + 3.28s + 25. To avoid difficulties in the subsequent H∞ optimization.83s + 8.87 W 1 ( s ) = 3 2 –4 s + 1. make sure that they are stable with poles sufficiently far from the imaginary axis.64s + 10. Such asymptotes prevent humps in S and T near the crossover frequency.20s + 0. The function magshape always returns stable and proper filters. their poles should satisfy Re ( s ) ≤ – 10 –3 3 2 611 . thus reducing the overshoot in the step response.45s + 149.54 × 10 Note that horizontal asymptotes have been added in both filters.
Because such filters cannot be represented in SYSTEM matrix form.. it is often desirable to use nonproper shaping filters.6 Loop Shaping Figure 67: The magshape interface Nonproper Filters and sderiv To impose a given rolloff rate in the openloop response. you can use sderiv to include nonproper shaping filters in the loopshaping criterion. Alternatively. This function appends a SISO proportionalderivator 612 . A drawback of this approximation is the introduction of fast parasitic modes in the filter and augmented plant realizations. magshape approximates them by highpass filters. which in turn may cause numerical difficulties.e. filters with a derivative action in the highfrequency range. i.
To specify more complex nonproper filters. This procedure is illustrated in the design example presented in “Loop Shaping with Regional Pole Placement” on page 619.01s+1 is specified by Pd = sderiv(P. [ 1 2] lists the input and output channels to be filtered by ns + d (here 1 for “first input” and 2 for “second output”) and the vector [0. 3 Add the derivative action of the nonproper filters by applying sderiv to the augmented plant. An error is generated if the resulting system Pd is not proper.[ 1 2].01 1] lists the values of n and d. 2 Augment the plant with these lowpass filters.Specification of the Shaping Filters component ns + d to selected inputs and/or outputs of a given LTI system.[0. 1 Specify the proper “lowfrequency” part of the filters with magshape. 613 . the interconnection 0.01s+1 P(s) 0. For instance.01 1]) In the calling list.
the physical closedloop transfer functions from r to e and y are fully specified by P(s) and K(s) (recall that the shaping filters are not part of the physical control loop). and Paug(s) is the augmented plant obtained by appending the shaping filters W1(s) and W2(s). With this function.6 Loop Shaping Specification of the Control Structure To perform the H∞ synthesis. W1. and the qualitative description of the control structure (Figure 66). Paug(s) e r y W2 ˜ y W1 ˜ e P(s) u K(s) Figure 68: Equivalent standard H∞ problem y The function sconnect computes the SYSTEM matrices of P(s) or Paug(s) given G0. general control structures are described by listing the input and output signals and by specifying the input of each dynamical system. In our problem. W2. the control structure of Figure 66 must be put in the standard linearfractional form of Figure 68. In this equivalent representation. the plant P(s) corresponding to the control loop e + r − e K G0 y 614 . While the H∞ synthesis is performed on Paug(s). P(s) is the nominal plant determined by G0(s) and the control structure.
w1. for each known LTI system in the loop. and fourth outputs. G0' lists the two output signals separated by a semicolon. • The third argument 'K:e' names the controller and specifies its inputs. the first output e is specified as r minus the output y of G0. For instance.'W1. G0'.sdiag(w1.03e4])) P = sconnect('r(1)'. Similarly.1)) Finally. In the sconnect command. and g0 is the SYSTEM matrix of G0(s).[1.g0. the syntax G0([1.r'.g0) The ltisys command returns a statespace realization of G0(s).'K:e = rG0'. the same result is obtained more directly by appending the shaping filters to P(s) according to Figure 68: Paug = smult(P.W2'. provided that you use the same names throughout..'K:e'. Here the only input is the scalar reference signal r. the third argument should be set to [] to signal the absence of a controller. its input list and its SYSTEM matrix. Here 'G0:K' means that the input of G0 is the output of K.. 'W1:e'. third. You can give arbitrary names to G0 and K in the string definitions.3e4.'W2:G0'. 615 . A twodegreesoffreedom controller with inputs e and r would be specified by 'K: e. For systems with several outputs.'e=rG0 .conv([1.'G0:K'.3:4]) would select the first. Outputs are defined as combinations of the input signals and the outputs of the dynamical systems.99 3.Specification of the Control Structure is specified by g0 = ltisys('tf'. the input arguments are strings or SYSTEM matrices specifying the control structure as follows: • The first argument 'r(1)' lists and dimensions the input signals.w2) where w1 and w2 are the shaping filter SYSTEM matrices returned by magshape. • The remaining arguments come in pairs and specify. the augmented plant Paug(s) corresponding to the loop of Figure 66 would be specified by Paug = sconnect('r(1)'. • The second argument 'e=rG0 . note that sconnect is also useful to compute the SYSTEM matrix of ordinary system interconnections. However.02]. In such cases..w2.'G0:K'.
The loopshaping design is feasible if and only if the H∞ performance γ = 1 is achievable for Paug.1).'st') 616 . we obtain [gopt. you can form the openloop response GK by typing gk = slft(k.000000 ** regularizing D12 or D21 to compute K(s): .. it suffices to call hinfric or hinflmi to test whether the specifications are feasible and compute an adequate controller K(s).6 Loop Shaping Controller Synthesis and Validation After forming the augmented plant Paug(s) with sconnect. GammaIteration: Gamma Diagnosis 1. To validate the controller k. With our choice of shaping filters.1. Hence the specifications are met by the controller k returned by hinfric.[1 1]. The last two lines indicate that the H∞ problem associated with Paug is singular and has been regularized.2).0000 : feasible Best closedloop gain (GAMMA_OPT): 1.1.g) or the closedloop transfer function from r to e and y by typing Pcl = slft(P.k) The Bode diagram of GK and the nominal step response are then plotted by splot(gk.k] = hinfric(Paug.'bo') splot(ssub(Pcl.
The step response with the controller computed above appears in Figure 69.05 0.6 0.2 0 0 0.3 Time (secs) 0.2) selects the transfer function from r to y as a subsystem of Pcl.8 Amplitude 0.2 1 0.25 0.1 0.15 0.1.4 0.Controller Synthesis and Validation In the second command.45 0. ssub(Pcl.2 0.5 Figure 69: Step response of the closedloop system 617 . 1.4 0.35 0.
jωaxis zeros bar the direct use of standard Riccatibased synthesis algorithms [1]. In our example.d) Paug = smult(Preg.1)) Note that the regularization level reg should always be chosen positive. this would amount to forming Paug as follows: P = sconnect('r(1)'. to place closedloop poles within ε of the jωaxis. While immaterial in the LMI approach. which in turn results in better closedloop damping.1) blocks of Paug. In doing so.w2. G0'. 618 . hinfric shifts the shaping filter poles as part of the poles of Paug. this is likely to result in poor closedloop damping because the central controller computed by hinfric tends.g0) % preregularization [a. in such problems. this typically tolerates much larger ε. Since the shaping filters must remain stable for closedloop stabilizability.2) or (2. Finally. The message ** jwaxis zeros of P12(s) regularized or ** jwaxis zeros of P21(s) regularized is then displayed to inform users of the presence of a singularity. sdiag(w1. loopshaping design often leads to “singular” H∞ problems with jωaxis zeros in the (1.b. The function hinfric automatically regularizes such problems by perturbing the A matrix of Paug to A + εI for some positive ε.c. a simple remedy consists of perturbing the poles of the plant P(s) prior to appending the shaping filters.'e=rG0 .b. Provided that all poles of G0(s) are controllable and observable.c. the amount of regularization ε is thereby limited.6 Loop Shaping Practical Considerations When the model G0(s) of the system has poles on the imaginary axis.d] = ltiss(P) a = a + reg * eye(size(a)) Preg = ltisys(a.'K:e'.'G0:K'. a careful validation of the resulting controller on the true plant P is necessary since the H∞ synthesis is performed on a perturbed system. however. If this difficulty arises. When some shaping filters have modes close to the jωaxis.
recall that the “closedloop” dynamics of the loopshaping criterion always comprise the shaping filter modes. this is helpful to tune the transient responses and prevent fast controller dynamics. e.g. As mentioned in the previous chapter. There are several ways of circumventing this difficulty.Loop Shaping with Regional Pole Placement Loop Shaping with Regional Pole Placement A weakness of the design outlined above is the plant inversion performed by the controller K(s). using hinfmix requires extra care because of possible interferences between the root clustering objective and the shaping filter poles. oscillatory responses may result when these modes are excited.. Indeed. Loopshaping design with pole placement is addressed by the function hinfmix. In addition to avoiding cancellations. 50 K(s) Singular Values dB 0 G(s) −50 −100 −1 10 10 0 10 Frequency (rad/sec) 1 10 2 10 3 Figure 610: Polezero cancellation between G and K 619 . One possible approach is to impose some minimum closedloop damping via an additional regional pole placement constraint. Since such cancellations leave resonant modes in the closedloop system. the resonant poles of G(s) are cancelled by zeros of K(s) as seen when comparing the magnitude plots of G and K (see Figure 610). Specifically. by a disturbance at the plant input (see Figure 611).
1 Remove the pseudointegrator from W1(s) and move it to the main loop as shown in Figure 612. By inspection of the shaping filters used in the previous design. this region clashes with the pseudointegrator in W1(s) and the fast pole at s = –3. we chose as pole placement region the disk centered at (–1000.5 0 1 2 3 4 5 6 Time (secs) 7 8 9 10 Figure 611: Response y for an impulse disturbance at the plant input For “Example 6.5 Amplitude 0 −0.6 Loop Shaping 1. This defines an equivalent loopshaping problem ˜ where the controller is reparametrized as K(s) = K (s)/s and the integrator is now assignable via output feedback.54 × 104 in W2(s).5 −1 −1. To alleviate this difficulty. 620 .5 1 0. 0) and with radius 1000.1” on page 65.
Loop Shaping with Regional Pole Placement 2 Specify W2(s) as a nonproper filter with derivative action.9 + 200 A summary of the commands needed to specify the control structure and solve the constrained H∞ problem is listed below.8 2 s W 2 ( s ) = 0.8]) Paug = smult(P. sint) % add w1 w1=ltisys('tf'.9]) % interactively specify the region for pole placement region = lmireg % perform the Hinfinity synthesis with pole placement r = [0 1 1] 621 .[1/200 0. % control structure.2 0. This is done with sderiv.1)) % add w2 Paug = sderiv(Paug. e r + − e 1/s W1 ˜ K y G0 W2 ˜ e ˜ y Figure 612: Modified control structure A satisfactory design was obtained for the following values of W1 and W2: s + 16s + 200 W 1 ( s ) = .'Int.'Kt:Int'. Run the demo radardem to perform this design interactively. sint = integrator sint = ltisys('tf'.2.'G:Kt'.sdiag(w1.1.1.2s + 0.G0. 2 s + 1.r] = sconnect('r'.[1 0]) [P.[1 16 200].G'..'Int:e=rG'.[1 1.
[0 0 1 0]. Note that the new controller no longer inverts the plant and that input disturbances are now rejected with satisfactory transient behavior.6 Loop Shaping [gopt.r. 100 Gain dB 50 0 −50 −1 10 10 0 10 Frequency (rad/sec) 1 10 2 10 3 0 Phase deg −360 −720 10 −1 10 0 10 Frequency (rad/sec) 1 10 2 10 3 Figure 613: Openloop response GK(s) 622 .xx.region) The full controller is then given by K = smult(sint.Kt) The resulting openloop response and the time response to a step r and an impulse disturbance at the plant input appear in Figure 613 and 614.Kt] = hinfmix(Paug.
05 0.35 0.5 1 Amplitude 0.45 0.4 0.35 0.15 0.5 0 −0.5 Amplitude 1 0.3 Time (secs) 0.1 0.5 Response to an impulse disturbance on u 2 1.2 0.15 0.25 0.5 0 0.5 0 0.5 0 −0.45 0.4 0.25 0.Loop Shaping with Regional Pole Placement Response to a step r 1.1 0.3 Time (secs) 0.5 Figure 614: Transient behaviors with the new controller 623 .2 0.05 0.
and Francis. pp. AC–11 (1966). “StateSpace Solutions to Standard H2 and H∞ Control Problems.. pp. AC–34 (1989). 831–847. B. Part I and II..6 Loop Shaping References [1] Doyle. P. Glover.. K. J. Khargonekar. G.” IEEE Trans. Aut. Aut. [2] Zames... “On the InputOutput Stability of TimeVarying Nonlinear Feedback Systems. 228–238 and 465–476.. Contr. 624 . Contr.” IEEE Trans.C..
. . . . . . . . . 79 Design Example . . . . . . . . . . . . . . 73 Synthesis of GainScheduled H• Controllers . . . 77 Simulation of GainScheduled Control Systems . . . 710 References . . . . . . . . . . . . . . . . . . . .7 Robust GainScheduled Controllers GainScheduled Control . . . . . . . . . . . . . . . .
gain scheduling consists in designing an LTI controller for each operating point and in switching controller when the operating conditions change.7 Robust GainScheduled Controllers Gain scheduling is a widely used technique for controlling certain classes of nonlinear or linear timevarying systems. These tools are applicable to timevarying and/or nonlinear systems whose linearized dynamics are reasonably well approximated by affine parameterdependent models. 72 . Rather than seeking a single robust LTI controller for the entire operating range. This section presents systematic tools to design gainscheduled H∞ controllers for linear parameterdependent systems.
. Note that p(t) may include part of the state vector x itself provided that the corresponding states are accessible to measurement.. D11(. B1(. Such controllers are said to be scheduled by the parameter measurements. it is often impossible to achieve high performance over the entire operating range with a single robust LTI controller. .) are affine functions of p(t).). If the parameter vector p(t) takes values in a box of Rn with corners N n { Π i } i = 1 ( N = 2 ) .).p ) z = C 1 ( p )x + D 11 ( p )w + D 12 u y = C x + D w + D u 2 21 22 where p(t) = (p1(t). the plant SYSTEM matrix A(p) B (p) B 1 2 S ( p ) := C 1 ( p ) D 11 ( p ) D 12 C D 21 D 22 2 73 . . angle of attack. Provided that the parameter values are measured in real time. . stiffness. This is a simple model of systems whose dynamical equations depend on physical coefficients that vary during operation. When these coefficients undergo large variations. C1(. . it is then desirable to use controllers that incorporate such measurements to adjust to the current operating conditions [4].). This control strategy typically achieves higher performance in the face of large variations in operating conditions. pn(t)).) and A(.GainScheduled Control GainScheduled Control The synthesis technique discussed below is applicable to affine parameterdependent plants with equations · x = A ( p )x + B ( p )w + B u 1 2 P ( . pi ≤ pi ( t ) ≤ pi is a timevarying vector of physical parameters (velocity. ..
. the values of AK(p). at the corners of the parameter box by A (p) B (p) K K C (p) D (p) K K N A (Π ) B (Π ) K i = αi K i C (Π ) D (Π ) K i K i i=1 ∑ In other words. ∑ αi = 1 i=1 of p over the corners of the parameter box. BK(Πi). the SYSTEM matrix S(p) is given by S(p) = α1S(Π1) + . α i ≥ 0. BK(p). the controller statespace matrices at the operating point p(t) are obtained by convex interpolation of the LTI vertex controllers A (Π ) B (Π ) K i K i := K i C (Π ) D (Π ) K i K i This yields a smooth scheduling of the controller matrices by the parameter measurements p(t). 74 . . . given any convex decomposition N p ( t ) = α1 Π1 + … + αN ΠN . . . ..p ) u = C K ( p )ζ + D K ( p )y and having the following vertex property: Given the convex decomposition p ( t ) = ∑i = 1 α i Π i N of the current parameter value p(t). + αN S(ΠN) This suggests seeking parameterdependent controllers with equations ζ = A K ( p )ζ + B K ( p )y K ( . . .7 Robust GainScheduled Controllers ranges in a matrix polytope with vertices S(Πi). are derived from the values AK(Πi). Specifically.
2] Find two symmetric matrices R and S such that T T A i R + RA i RC 1i B 1i N 0 12 C 1i R – γI D 11i 0 I T T B1 D 11i – γI T N12 0 0 < 0.. Using the notion of quadratic H∞ performance to enforce the RMS gain constraint. ….p) z u K(. p) y p(t) Figure 71: Gainscheduled H∞ problem Design a gainscheduled controller K(. . this synthesis problem can be reduced to the following LMI problem [3.. and observing that the closedloop system is a polytopic system. N I (71) 75 . p) satisfying the vertex property and such that • The closedloop system is stable for all admissible parameter trajectories p(t) • The worstcase closedloop RMS gain from w to z does not exceed some level γ > 0. consider the following H∞like synthesis problem relative to the interconnection of Figure 71: w P(. i = 1.GainScheduled Control For this class of controllers.
D12. N I (72) R I ≥0 I S where A B i 1i C D 1i 11i := A ( Π i ) B 1 ( Π i ) C (Π ) D (Π ) 11 i 1 i T T (73) and N12 and N21 are bases of the null spaces of ( ( B 2 . D21 to be independent of p(t) may be restrictive in practice. Similarly. C2. …. 76 . When B2 and D12 depend on p(t). D 12 ) ) and (C2. i = 1.7 Robust GainScheduled Controllers T T A i S + SA SB 1i C 1i N 0 T T 21 B 1i S – γI D 11i 0 I C 1i D 11i – γI T N21 0 0 < 0. Remark: Requiring the plant matrices B2. Appending this filter to the plant rejects the parameter dependence of B2 and D12 into the A matrix of the augmented plant. a simple trick to enforce these requirements consists of filtering u through a lowpass filter with sufficiently large bandwidth. respectively. D21). parameter dependences in C2 and D21 can be eliminated by filtering the output (see [1] for details).
r) Here pdP is an affine or polytopic model of the plant (see psys) and the twoentry vector r specifies the dimensions of the D22 matrix (set r=[p2 m2] if y ∈ Rp2 and u ∈ Rm2).pdK] = hinfgs(pdP. The calling sequence is [gopt.c) The function polydec computes the convex decomposition p ( t ) = αΠ i=1 i i and returns c = (α1. A (Π ) B (Π ) K i Ki = K i C (Π ) D (Π ) K i K i A (Π ) B (Π ) K i Ki = K i C (Π ) D (Π ) K i K i Given any value p of the parameter vector p(t).'eval'. Note that polydec i=1 i i ensures the continuity of the polytopic coordinates αi as a function of p. . . Since such controllers are characterized by their vertex values. gopt is the best achievable RMS gain and pdK is the polytopic system consisting of the vertex controllers. both the plant and the controller are treated as polytopic models by hinfgs. The controller statespace data N K(p) = α K at time t is then computed by psinfo. On output.Synthesis of GainScheduled H• Controllers Synthesis of GainScheduled H∞ Controllers The synthesis of gainscheduled H∞ controllers is performed by the function hinfgs. ∑ N ∑ 77 .. αN). .p) Kp = psinfo(pdK. the corresponding statespace parameters A (p) B (p) K K(p) = K C (p) D (p) K K of the gainscheduled controller are returned in SYSTEM matrix format by c = polydec(pv. thereby guaranteeing the continuity of the controller statespace matrices along parameter trajectories.
pdK) and evaluated for a given parameter value p of p(t) by pclt = psinfo(pcl. a polytopic model of the closedloop system is formed by pcl = slft(pdP.7 Robust GainScheduled Controllers Given the polytopic controller representation pdK.'eval'.p)) 78 .polydec(pv.
The function pdsimul is provided to simulate the time response of such systems along a given parameter trajectory. 79 . You can then plot the corresponding time response by pdsimul(pcl. With pdsimul.'trajfun'. See the next paragraph for an example. the step response is plotted by default. gainscheduled control systems cannot be simulated with splot.tf. If 'inputfun' is omitted.'inputfun') where pcl is the polytopic representation of the closedloop system and tf is the desired duration of the simulation. you must first define the parameter trajectory and the input signal as two functions of time with syntax p=trajfun(t) and u=inputfun(t).Simulation of GainScheduled Control Systems Simulation of GainScheduled Control Systems Being timevarying systems.
we can use the loopshaping criterion W1 S W 2 KS ∞ <1 (75) 710 . 4]. δm the fin deflection. V. These two coefficients are “measured” in real time. 4] Mach. Type misldem to run the corresponding demo. Zα ∈ [0. α ∈ [0. The stringent performance specifications (settling time < 0. Mα are aerodynamical coefficients depending on α. 40] degrees during operation. H ∈ [0. These three parameters undergo large variations during operation.5.7 Robust GainScheduled Controllers Design Example This example is drawn from [2] and deals with the design of a gainscheduled autopilot for the pitch axis of a missile. The missile dynamics are highly dependent on the angle of attack α. and the altitude H. q the pitch rate. A simple model of the linearized dynamics of the missile is the parameterdependent model G · α = – Zα 1 α + 0 δ m · – Mα 0 q 1 q –1 0 α a zv = 0 1 q q (74) where azv is the normalized vertical acceleration. and Zα. the coefficients Zα and Mα range in Our goal is to control the vertical acceleration azv over this operating range. The control structure is sketched in Figure 72. and H.5 s) and the bandwidth limitation imposed by unmodeled high frequency dynamics make gain scheduling desirable in this context. To enforce the performance and robustness requirements. and α vary in V ∈ [0. 106]. H. 18000] m. As V. the speed V.5. Mα ∈ [0.
Mmin Mmax]) % affine model: s0 = ltisys([0 1.029s W 2 ( s ) = 3 4 2 7 10 s + 1.[0.1 0].0].0) % Z_al s2 = ltisys([0 0.[1 0. Adequate shaping filters are derived with magshape as 2.[Zmin Zmax .5.[0.0 0].[0.0].[s0 s1 s2]) For loopshaping purposes. Zmax=4.[0. First enter the parameterdependent model (74) of the missile pitch axis by: % parameter range: Zmin=.136 × 10 s + 1.zeros(2). Mmin=0.201 9.1].066 × 10 3 2 + r − e K autopilot azv u G missile q Figure 72: Missile autopilot The gainscheduled controller synthesis parallels the standard loopshaping procedure described in Chapter 6.0) % M_al pdG = psys(pv.[0.zeros(2).0]. pv = pvec('box'.0]) s1 = ltisys([1 0.0].678s + 0.206 × 10 s + 1. Mmax=106.01 W 1 ( s ) = s + 0.0 0].0 1]. This is done with sconnect and smult as follows: 711 .[0. we must form the augmented plant associated with the criterion (75).Design Example Note that S = (I + GK)–1 is now a timevarying system and that the H∞ norm must be understood in terms of input/output RMS gain.
m: function p = spiral(t) p = [2..pdK] = hinfgs(Paug.0.pdG).'G:K'.*cos(100*t) . .5) plot(t.'spiralt'. We are now ready to perform the gainscheduled controller synthesis with hinfgs: [gopt.'e=rGP1. For the sake of illustration.70*exp(4*t).25 + 1.x.y]=pdsimul(pCL.pdK) You can now simulate the step response of the gainscheduled system along particular parameter trajectories. and the polytopic description of a controller with such performance is returned in pdK.205 is smaller than 1.r) gopt = 0.K'.7 Robust GainScheduled Controllers % form the plant interconnection [pdP. After defining this trajectory is a function spiralt. To validate the design. 50 + 49*exp(4*t). you can plot the closedloop step response along p(t) by [t. % append the shaping filters Paug = smult(pdP.sdiag(w1.70 e–4t cos(100 t) Mα(t) = 50 + 49 e–4t sin(100 t) shown in Figure 73.1y(:.w2.205 Our specifications are achievable since the best performance γopt = 0.'K:e. consider the spiral trajectory Zα(t) = 2.eye(2))) Note that pdP and Paug are polytopic models at this stage.25 + 1.*sin(100*t)]..r] = sconnect('r'.G(2)'.1)) 712 . form the closedloop system with pCL = slft(pdP.
Parameter trajectory (duration : 1 s) 100 90 80 70 60 M_alpha 50 40 30 20 10 0 0.Design Example The function pdsimul returns the output trajectories y(t) = (e(t).5 4 Figure 73: Parameter trajectory 713 .5 1 1.5 2 Z_alpha 2. u(t))T and the response azv(t) is deduced by azv(t) = 1 – e(t) The second command plots azv(t) versus time as shown in Figure 74.5 3 3.
45 0.2 0.6 0.2 0 0 0.35 0.1 0.05 0.5 Figure 74: Corresponding step response 714 .8 azv 0.2 1 0.4 0.25 0.15 0.4 0.3 0.7 Robust GainScheduled Controllers step response of the gain−scheduled autopilot 1.
[4] Packard. and P. Aut. pp. “SelfScheduled H∞ Control of Linear ParameterVarying Systems.References References [1] Apkarian.” Syst. 22 (1994). Contr.” to appear in IEEE Trans. Contr.. 79–92..” Systems and Control Letters. [3] Becker. P. “Gain Scheduling via Linear Fractional Transformations. Gahinet.“A Convex Characterization of GainScheduled H∞ Controllers. P. Becker. 715 . P. G. Packard. [2] Apkarian. 1995.. and G. 23 (1994).. A..” to appear in Automatica. “Robust Performance of Linear Parametrically Varying Systems Using Parametrically Dependent Linear Feedback. pp. 205–215. P. Letters. Gahinet. 1995..
7 Robust GainScheduled Controllers 716 .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . From Decision to Matrix Variables and Vice Versa . . . References . . . . . . . . . . .8 The LMI Lab Background and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 Overview of the LMI Lab . . . . . . . . . . . . . 86 Specifying a System of LMIs . . . Validating Results . . . . . . . . . . . . . . . . . . . Modifying a System of LMIs . 821 822 828 829 830 833 844 . . . . . 88 Retrieving Information LMI Solvers . . . . Advanced Topics . . . . . . . . .
and gevp constitute the computational engine of the LMI Control Toolbox. Thanks to a structureoriented representation of LMIs.8 The LMI Lab The LMI Lab is a highperformance package for solving general LMI problems. It blends simple tools for the specification and manipulation of LMIs with powerful LMI solvers for three generic LMI problems. and generalized eigenvalue minimization) • Validate results This chapter gives a tutorial introduction to the LMI Lab as well as more advanced tips for making the most out of its potential. The three solvers feasp. The tutorial material is also covered by the demo lmidem. 82 . mincx. The LMI Lab offers tools to • Specify LMI systems either symbolically with the LMI Editor or incrementally with the lmivar and lmiterm commands • Retrieve information about existing systems of LMIs • Modify existing systems of LMIs • Solve the three generic LMI problems (feasibility problem. Once an LMI problem is specified. the various LMI constraints can be described in their natural blockmatrix form. linear objective minimization. it can be solved numerically by calling the appropriate LMI solver. Their high performance is achieved through CMEX implementation and by taking advantage of the particular structure of each LMI. Similarly. the optimization variables are specified directly as matrix variables with some given structure.
. LMIs rarely arise in this form in control applications. We refer to x1. the number of matrices involved in (82) grows roughly as n2 /2 if n is the size of the A matrix. 83 . Consider for instance the Lyapunov inequality A X + XA < 0 T (81) x x where A = – 1 2 and the variable X = 1 2 is a symmetric matrix. x x 0 –2 2 3 Here the decision variables are the free entries x1. . The names “design variables” and “optimization variables” are also found in the literature. Finally. working with the canonical form is also detrimental to the efficiency of the LMI solvers. . x2. . LN are given symmetric matrices • x = (x1. xN)T ∈ RN is the vector of scalar variables to be determined. . For instance. . the canonical form is very inefficient from a storage viewpoint since it requires storing o(n2 /2) matrices of size n when the single nbyn matrix A would be sufficient. . + xNLN < 0 where • L0. . . . the LMI Lab uses a structured representation of LMIs. xN as the decision variables. and only the A matrix is stored. For these various reasons. L1. Hence.Background and Terminology Background and Terminology Any linear matrix inequality can be expressed in the canonical form L(x) = L0 + x1L1 + . Moreover. the expression ATX + XA in the Lyapunov inequality (81) is explicitly described as a function of the matrix variable X. x3 of X and the canonical form of this LMI reads x1 –2 2 + x2 0 –3 + x3 0 0 < 0 2 0 –3 4 0 –4 (82) Clearly this expression is less intuitive and transparent than (81). Even though this canonical expression is generic. . . . .
γ) is entirely specified by the blocks on or above the diagonal. and –γI above. XCT . consider the following LMI drawn from H∞ theory T T A X + XA XC B CX – γI D T T B D – γI N < 0 (83) N T where A. D. Constant terms are fixed matrices like B and D above. We use the following terminology to describe such LMIs: • N is called the outer factor. The outer factor needs not be square and is often absent. B. and the block matrix T T A X + XA XC B L ( X. The LMI (83) is specified by the list of terms in each block. like XA. As a fairly typical illustration.1) contains two elementary terms: ATX and XA. γ ) = CX – γI D T T B D – γI is called the inner factor. C. γ) is an affine expression in the matrix variables X and γ. Variable terms involve one of the matrix variables. • Terms are either constant or variable. γ) is a symmetric block matrix. N are given matrices and the problem variables are X = XT ∈ Rn×n and γ ∈ R. For instance. the block (1. • Each block of L(X. • X and γ are the matrix variables of the problem. L(X. Note that scalars are considered as 1by1 matrices.8 The LMI Lab In general. This expression can be broken down into a sum of elementaryterm terms. its block structure being characterized by the sizes of its diagonal blocks. 84 . as is any LMI regardless of its complexity. • The inner factor L(X. By symmetry. LMIs assume a block matrix form where each block is an affine combination of the matrix variables.
For instance. they are characterized by their dimensions and structure. There is no builtin limitation on the number of LMIs that can be specified or on the number of blocks and terms in any given LMI. Common structures include rectangular unstructured. symmetric.Background and Terminology As for the matrix variables X and γ. LMI systems of arbitrary complexity can therefore. structured LMI problems are specified by declaring the matrix variables and describing the term content of each LMI. skewsymmetric. 85 . be defined in the LMI Lab. This termoriented description is systematic and accurately reflects the specific structure of the LMI constraints. the matrix variable X could be constrained to the blockdiagonal structure x 0 0 1 X = 0 x2 x3 0 x x 3 4 Another possibility is the symmetric Toeplitz structure x x x 1 2 3 X = x2 x1 x2 x x x 3 2 1 Summing up. More sophisticated structures are sometimes encountered in control problems. and scalar.
The LMI Lab supports the following functionalities: Specification of a System of LMIs LMI systems can be either specified as symbolic matrix expressions with the interactive graphical user interface lmiedit. This vector encodes the structure and dimensions of the LMIs and matrix variables. 86 . Information Retrieval The interactive function lmiinfo answers qualitative queries about LMI systems created with lmiedit or lmivar and lmiterm. These solvers can handle very general LMI systems and matrix variable structures. or assembled incrementally with the two commands lmivar and lmiterm. a description of all LMI terms. Its main purpose is to • Allow for straightforward description of LMIs in their natural blockmatrix form • Provide easy access to the LMI solvers (optimization codes) • Facilitate result validation and problem modification The structureoriented description of a given LMI system is stored as a single vector called the internal representation and generically denoted by LMISYS in the sequel. The corresponding values X * . The first option is more intuitive and transparent while the second option is more powerful and flexible. …. and numerically solve LMIs. manipulate. You can also use lmiedit to visualize the LMI system produced by a particular sequence of lmivar/lmiterm commands.8 The LMI Lab Overview of the LMI Lab The LMI Lab offers tools to specify. It must be stressed that users need not attempt to read or understand the content of LMISYS since all manipulations involving this internal representation can be performed in a transparent manner with LMILab tools. Solvers for LMI Optimization Problems Generalpurpose LMI solvers are provided for the three generic LMI problems defined on page 15. and the related numerical data. X * of the matrix variables are 1 K given by the function dec2mat. They return a feasible or optimal vector of decision variables x*.
The left. that is. all variable terms in the LMI system are evaluated for the value x* of the decision variables. Modification of a System of LMIs An existing system of LMIs can be modified in two ways: • An LMI can be removed from the system with dellmi. With evallmi. for example. • A matrix variable X can be deleted using delmvar.and righthand sides of each LMI then become constant matrices that can be displayed with showlmi. 87 . to fix some variables and solve the LMI problem with respect to the remaining ones. This allows a fast check and/or analysis of the results. set to some given matrix value.Overview of the LMI Lab Result Validation The solution x* produced by the LMI solvers is easily validated with the functions evallmi and showlmi. It can also be instantiated. This operation is performed by setmvar and allows.
XK) N < MT R(X1. . each block being an affine combination of X1. . XK) M where • X1. or (2) via the LMI Editor lmiedit where LMIs can be specified directly as symbolic matrix expressions. it also constitutes a good tutorial introduction to lmivar and lmiterm. . It is stored as a single vector which we consistently denote by LMISYS in the sequel (for the sake of clarity). Accordingly. . hence particularly wellsuited for beginners.8 The LMI Lab Specifying a System of LMIs The LMI Lab can handle any system of LMIs of the form NT L(X1. This computer description of the problem is used by the LMI solvers and in all subsequent manipulations of the LMI system. . . XK . beginners may elect to skip 88 . . XK and their transposes. Accordingly. . “lefthand side” refers to what is on the “smaller” side of the inequality. and “righthand side” to what is on the “larger” side. . Thanks to its coding and decoding capabilities. Important: Throughout this chapter. . . . There are two ways of generating the internal description of a given LMI system: (1) by a sequence of lmivar/lmiterm commands that build it incrementally. . . The specification of an LMI system involves two steps: 1 Declare the dimensions and structure of each matrix variable X1. . Though somewhat less flexible and powerful than the commandbased description. . . 2 Describe the term content of each LMI. X is called the righthand side and 0 the lefthand side of the LMI 0 < X even when this LMI is written as X > 0. . . XK are matrix variables with some prescribed structure • The left and right outer factors N and M are given matrices with identical dimensions • The left and right inner factors L(. . the LMI Editor is more straightforward to use.) are symmetric block matrices with identical block structures. This process creates the socalled internal representation of the LMI system.) and R(.
1. Example 8. a scaling D of structure (85) such that the largest gain across frequency of D G(s) D–1 is less than one. Run the demo lmidem to see a complete treatment of this example. four outputs. A Simple Example The following tutorial example is used to illustrate the specification of LMI systems with the LMI Lab tools. and six states. 89 .Specifying a System of LMIs the subsections on lmivar and lmiterm and to concentrate on the GUIbased specification of LMIs with lmiedit. and consider the set of input/output scaling matrices D with blockdiagonal structure D = d1 0 0 0 0 d1 0 0 0 0 d2 d3 0 0 d4 d5 –1 (84) (85) The following problem arises in the robust stability analysis of systems with timevarying uncertainty [4 ]: Find. Consider a stable transfer function G ( s ) = C ( sI – A ) B with four inputs. if any.
1) LMISYS = getlmis Here the lmivar commands define the two matrix variables X and S while the lmiterm commands describe the various terms in each LMI. its internal description can be generated with lmivar and lmiterm commands as follows: setlmis([]) X=lmivar(1. Upon completion. 810 .1) % 2nd LMI lmiterm([ 2 1 1 X].C'.1) % 3rd LMI lmiterm([ 3 1 1 S].1.A.1.8 The LMI Lab This problem has a simple LMI formulation: there exists an adequate scaling D if the following feasibility problem has solutions: Find two symmetric matrices X ∈ R6×6 and S = DT D ∈ R4×4 such that A T X + XA + C T SC XB <0 T B X –S (86) X>0 S>1 (87) (88) The LMI system (86)–(88) can be described with the LMI Editor as outlined below.1.1. 1.B) S]. Alternatively.C) X].[2 0.'s') S].[6 1]) S=lmivar(1.1) lmiterm([3 1 1 0].2 1]) % 1st LMI lmiterm([1 lmiterm([1 lmiterm([1 lmiterm([1 1 1 1 2 1 1 2 2 X].
… 811 . This corresponds to matrix variables of the form D 0 … 0 1 0 D 2 X = 0 0 … 0 Dr … . .Specifying a System of LMIs getlmis returns the internal representation LMISYS of this LMI system... When specifying a new system. This MATLAB description of the problem can be forwarded to other LMILab functions for subsequent processing. type LMISYS = getlmis This returns the internal representation LMISYS of this LMI system... The function setlmis initializes the LMI system description. . The next three subsections give more details on the syntax and usage of these various commands. setlmis and getlmis The description of an LMI system should begin with setlmis and end with getlmis.. The command getlmis must be used only once and after declaring all matrix variables and LMI terms. To facilitate the specification of this structure. the LMI Lab offers two predefined structure types along with the means to describe more general structures: Type 1: Symmetric block diagonal structure. type setlmis([]) To add on to an existing LMI system with internal representation LMIS0. lmivar The matrix variables are declared one at a time with lmivar and are characterized by their structure. type setlmis(LMIS0) When the LMI system is completely specified. More information on how the internal representation is updated by lmivar/lmiterm can also be found in “How It All Works” on page 818..
This corresponds to arbitrary rectangular matrices without any particular structure. a full symmetric matrix. The first column lists the 812 . General structures. or a scalar matrix Dj = d × I. d∈R This type encompasses ordinary symmetric matrices (single block) and scalar variables (one block of size one). see “Structured Matrix Variables” on page 833 below as well as the lmivar entry of the “Command Reference” chapter. these two matrix variables are declared by lmivar(1. the matrix variables X and S are of Type 1. S is of the form S = s1 0 0 0 0 s1 0 0 0 0 s2 s3 0 0 s3 s4 After initializing the description with the command setlmis([]). The principle is as follows: each entry of X is specified independently as either 0. Type 2: Type 3: Rectangular structure.2 1]) % S In both commands. this second input is a matrix with two columns and as many rows as diagonal blocks in X.1” on page 89.[2 0. the first input specifies the structure type and the second input contains additional information about the structure of the variable: • For a matrix variable X of Type 1.[6 1]) % X lmivar(1. Indeed. This third type is used to describe more sophisticated structures and/or correlations between the matrix variables.8 The LMI Lab where each diagonal block Dj is square and is either zero. or –xn where xn denotes the nth decision variable in the problem. For details on how to use Type 3. xn. Specifically. both are symmetric and S inherits the blockdiagonal structure (85) of D. In “Example 8.
• For matrix variables of Type 2. X and S in “Example 8. i.e.[6 1]) S = lmivar(1. [2 0.. For instance. we are left with specifying the term content of each LMI. For instance.[2 0.e. lmivar can also return the total number of decision variables allocated so far as well as the entrywise dependence of the matrix variable on these decision variables (see the lmivar entry of the “Command Reference” chapter for more details). 813 . the second input of lmivar is a twoentry vector listing the row and column dimensions of the variable.[3 5]) For convenience. terms involving a matrix variable.2 1] means that S has two diagonal blocks. For instance. Recall that LMI terms fall into three categories: • The constant terms. i. fixed matrices like I in the lefthand side of the LMI S>I • The variable terms. lmiterm After declaring the matrix variables with lmivar. the first one being a 2by2 scalar block and the second one a 2−βψ−2 full block.Specifying a System of LMIs sizes of the diagonal blocks and the second column specifies their nature with the following convention: 1 → full symmetric block 0 → scalar block –1 → zero block In the second command. Variable terms are of the form PXQ where X is a variable and P. Finally. Q are given matrices called the left and right coefficients. Note that these identifiers still point to X and S after deletion or instantiation of some of the matrix variables. for instance. respectively.. ATX and CTSC in (86). lmivar also returns a “tag” that identifies the matrix variable for subsequent reference.2 1]) The identifiers X and S are integers corresponding to the ranking of X and S in the list of matrix variables (in the order of declaration).1” could be defined by X = lmivar(1. Here their values would be X=1 and S=2. a 3by5 rectangular matrix variable would be defined by lmivar(2.
8 The LMI Lab • The outer factors The following rule should be kept in mind when describing the term content of an LMI: Important: Specify only the terms in the blocks on or above the diagonal. 2) block • The last entry indicates which matrix variable is involved in the term.1. this is sufficient to specify the entire LMI. hence in the creation of a different LMI. XB.C) 1]. For instance. CTSC. This entry is 0 for constant terms. In each command. For instance. Alternatively.B) 2]. you can describe the blocks on or below the diagonal. 1. the first argument is a fourentry vector listing the term characteristics as follows: • The first entry indicates to which LMI the term belongs. the vector [1 1 2 1] indicates that the term is attached to the (1. The inner factors being symmetric.'s') 2].1) These commands successively declare the terms ATX + XA.” and m means “righthand side of the mth LMI” • The second and third entries identify the block to which the term belongs. k for terms involving the kth matrix variable 814 . LMI terms are specified one at a time with lmiterm.1. The value m means “lefthand side of the mth LMI. Specifying all blocks results in the duplication of offdiagonal terms. the LMI A T X + XA + C T SC XB <0 T –S B X is described by lmiterm([1 lmiterm([1 lmiterm([1 lmiterm([1 1 1 1 2 1 1 2 2 1].A.C'. and –S.
scalar values are allowed as shorthand for scalar matrices. a constant term of the form αI can be specified as the “scalar” α. outer factor. which justifies the –3 in the first command. These identifiers can be used in lmiterm commands to refer to a given LMI or matrix variable. This also applies to the coefficients P and Q of variable terms. These arguments must refer to existing MATLAB variables and be realvalued.A.1. blocks are zero by default. For instance.Specifying a System of LMIs Xk .1. this would look like: setlmis([]) X = lmivar(1..1) 815 . For instance. Finally. Finally. and k for terms involving X k (here X and S are first and second variables in the order of declaration). Finally.1) T % 1*S*1 = S % 1*I = I lmiterm([3 1 1 0]. The dimensions of scalar matrices are inferred from the context and set to 1 by default. the first command specifies ATX + XA as the “symmetrization” of XA.[2 0. Thus.1. the third LMI S > I in “Example 8. First.C) X]. For the LMI system of “Example 8. The variable identifiers are returned by lmivar and the LMI identifiers are set by the function newlmi. Second.1) Recall that by convention S is considered as the righthand side of the inequality.3” on page 833 is described by lmiterm([–3 1 1 2]. Some shorthand is provided to simplify term specification.'s') S].1”.C'.[6 1]) S = lmivar(1. or matrix coefficients P and Q for variable terms PXQ or PXTQ). See“ComplexValued LMIs” on page 835 for the specification of LMIs with complexvalued coefficients.e. the second and third arguments of lmiterm contain the numerical data (values of the constant term. in diagonal blocks the extra argument 's' allows you to specify the conjugated expression AXB + BTXTAT with a single lmiterm command.2 1]) BRL = newlmi lmiterm([BRL lmiterm([BRL lmiterm([BRL lmiterm([BRL 1 1 1 2 1 1 2 2 X]. matrices of the form αI with α scalar. 1.B) S]. to improve readability it is often convenient to attach an identifier (tag) to each LMI and matrix variable. i.
1. Similarly.1) LMISYS = getlmis Here the identifiers X and S point to the variables X and S while the tags BRL. The LMI Editor lmiedit The LMI Editor lmiedit is a graphical user interface (GUI) to specify LMI systems in a straightforward symbolic manner. X would indicate transposition of the variable X.8 The LMI Lab Xpos = newlmi lmiterm([Xpos 1 1 X]. respectively. This matrix contains specific information about the structure and corresponds to the second argument of lmivar (see “lmivar” on page 811 for details). To specify your LMI system. 816 . and G for other structures) and by an additional “structure” matrix.1. Note that Xpos refers to the righthand side of the second LMI. and Slmi point to the first. second. Please use one line per matrix variable in the text editing areas.1) lmiterm([Slmi 1 1 0]. The structure is characterized by its type (S for symmetric block diagonal. Typing lmiedit calls up a window with several editable text areas and various pushbuttons. 1 Declare each matrix variable (name and structure) in the upper half of the worksheet. R for unstructured. Xpos. and third LMI.1) Slmi = newlmi lmiterm([Slmi 1 1 S].
. • Read a sequence of lmivar/lmiterm commands from a file (read button). Beginners can use this facility as a tutorial introduction to the lmivar and lmiterm commands. Once the LMI system is fully specified. This feature is useful for code validation and debugging.Specifying a System of LMIs 2 Specify the LMIs as MATLAB expressions in the lower half of the worksheet. the LMI system defined by a particular sequence of lmivar/lmiterm commands can be displayed as a MATLAB expression by clicking on the describe. b'*x 1] < 0 if x is the name given to the matrix variable X in the upper half of the worksheet. Write in a file the sequence of lmivar/lmiterm commands needed to describe a particular LMI system (write button). This is helpful to develop code and prototype MATLAB functions based on the LMI Lab. the LMI T A X + XA XB < 0 T B X –I is entered by typing [a'*x+x*a x*b. the following tasks can be performed by pressing the corresponding button: • Visualize the sequence of lmivar/lmiterm commands needed to describe this LMI system (view commands buttons). This description can be reloaded later on by pressing the load button.. You can then click on “describe the matrix variables” or “describe the LMIs. 817 . The left. • Save the symbolic description of the LMI system as a MATLAB string (save button). The file should describe a single LMI system but may otherwise contain any sequence of MATLAB commands.” to visualize the symbolic expression of the LMI system specified by these commands..and righthand sides of the LMIs should be valid MATLAB expressions. For instance.. buttons. Conversely.
Note that all LMIrelated data should be defined in the MATLAB workspace at this stage. upper diagonal LMI blocks need not be fully specified. Rather. As with lmiterm. the identity matrix can be entered as 1 without dimensioning. Figure 81 shows how to specify the feasibility problem of “Example 8. you can use various shortcuts when entering LMI expressions at the keyboard. you can just type (*) in place of each such block. zero blocks can be entered simply as 0 and need not be dimensioned. Though fairly general. lmiedit is not as flexible as lmiterm and the following limitations should be kept in mind: • Parentheses cannot be used around matrix variables. The internal representation can be passed directly to the LMI solvers or any other LMI Lab function. the expression (a*x+b)'*c + c'*(a*x+b) is invalid when x is a variable name. How It All Works Users familiar with MATLAB may wonder how lmivar and lmiterm physically update the internal representation LMISYS since LMISYS is not an argument to these functions. In fact. Finally. cleared by 818 . • Loops and if statements are ignored.1” on page 89 with lmiedit. • When turning lmiterm commands into a symbolic description of the LMI system. Similarly. Use the LMI and variable identifiers supplied by newlmi and lmivar to avoid such difficulties. These global variables are initialized by setlmis. the internal representation is written in the MATLAB variable mylmi). The result is written in a MATLAB variable named after the LMI system (if the “name of the LMI system” is set to mylmi. For instance. an error is issued if the first argument of lmiterm cannot be evaluated.8 The LMI Lab • Generate the internal representation of the LMI system by pressing create. For instance. (a+b)'*x + x'*(a+b) is perfectly valid. all updating is performed through global variables for maximum speed. By contrast.
and not visible in the workspace.Specifying a System of LMIs getlmis. be sure to • Invoke getlmis only once and after completely specifying the LMI system • Refrain from using the command clear global before the LMI system description is ended with getlmis 819 . Even though this artifact is transparent from the user's viewpoint.
8 The LMI Lab Figure 81: Graphical user interface lmiedit 820 .
for instance. lminbr. The user should not attempt to read or retrieve information directly from this vector. etc. enter lmiinfo(LMISYS) where LMISYS is the internal representation of the LMI system produced by getlmis. and matnbr are provided to extract and display all relevant information in a userreadable format. To invoke lmiinfo. enter matnbr(LMISYS) 821 . lminbr and matnbr These two functions return the number of LMIs and the number of matrix variables in the system. To get the number of matrix variables. the number of matrix variables and their structure. the term content of each LMI block. lmiinfo lmiinfo is an interactive facility to retrieve qualitative information about LMI systems. Three functions called lmiinfo.Retrieving Information Retrieving Information Recall that the full description of an LMI system is stored as a single vector called the internal representation. This includes the number of LMIs.
• Minimization of a linear objective under LMI constraints Minimize cTx over x ∈ RN subject to A(x) < B(x) The corresponding solver is called mincx. . . i. Note that A(x) < B(x) above is a shorthand notation for general structured LMI systems with decision variables x = (x1. . . and gevp take as input the internal representation LMISYS of an LMI system and return a feasible or optimizing value x* of the decision variables. . For generalized eigenvalue minimization problems. . XK): • Feasibility problem Find x ∈ RN (or equivalently matrices X1. . The three LMI solvers feasp. mincx.. . XK are derived from x* with the function dec2mat. . . 2]. it is necessary to distinguish between the standard LMI constraints C(x) < D(x) and the linearfractional LMIs A(x) < λB(x) 822 . of the free entries of the matrix variables X1. xN). . . .e. The corresponding values of the matrix variables X1. These solvers are CMEX implementations of the polynomialtime Projective Algorithm Projective Algorithm of Nesterov and Nemirovski [3. The corresponding solver is called gevp.8 The LMI Lab LMI Solvers LMI solvers are provided for the following three generic optimization problems (here x denotes the vector of decision variables. . . XK with prescribed structure) that satisfies the LMI system A(x) < B(x) The corresponding solver is called feasp. • Generalized eigenvalue minimization problem Minimize λ over x ∈ RN subject to C(x) < D(x) 0 < B(x) A(x) < λB(x). .
. Consider the optimization problem Minimize Trace(X) subject to A X + XA + XBB X + Q < with data 1 –1 –2 1 A = 3 2 1 . Example 8. you should follow these three rules to ensure proper specification of the problem: • Specify the LMIs involving λ as A(x) < B(x) (without the λ) • Specify them last in the LMI system. Use mat2dec to derive xinit from given values of the matrix variables X1. These options are described in detail in the “Command Reference” chapter. . gevp systematically assumes that the last L LMIs are linearfractional if L is the number of LMIs involving λ • Add the constraint 0 < B(x) or any other constraint that enforces it. Finally. This positivity constraint is required for wellposedness of the problem and is not automatically added by gevp (see the “Command Reference” chapter for details). 823 .LMI Solvers attached to the minimization of the generalized eigenvalue λ. . XK . .2. The following example illustrates the use of the mincx solver. Q = – 1 – 3 – 12 0 – 12 – 36 T T (89) It can be shown that the minimizer X* is simply the stabilizing solution of the algebraic Riccati equation ATX + XA + XBBTX + Q = 0 This solution can be computed directly with the Riccati solver care and compared to the minimizer returned by mincx. various options are available to control the optimization process and the solver behavior. An initial guess xinit for x can be supplied to mincx or gevp. When using gevp. B = 0 1 1 –2 –1 1 –1 0 .
Since c should select the diagonal entries of X.1) LMIs = getlmis 2 Write the objective Trace(X) as cTx where x is the vector of free entries of X.1. this problem falls within the scope of the mincx solver and can be numerically solved as follows: 1 Define the LMI constraint (89) by the sequence of commands setlmis([]) X = lmivar(1.0.0.q) 0].8 The LMI Lab From an LMI optimization standpoint. 3 Call mincx to compute the minimizer xopt and the global minimum copt = c'*xopt of the objective: options = [1e 5. c = mat2dec(LMIs. that is. problem (89) is equivalent to the following linear objective minimization problem: T Minimize Trace(X) subject to A X + XA + Q XB < 0 T B X –I (810) Since Trace(X) is a linear function of the entries of X.a.'s') 0].0] 824 .[3 1]) % variable X. full symmetric lmiterm([1 lmiterm([1 lmiterm([1 lmiterm([1 1 1 2 2 1 1 2 1 X].0.eye(3)) Note that the function defcx provides a more systematic way of specifying such objectives (see “Specifying cTx Objectives for mincx” on page 838 for details). 1) X].b'. it is obtained as the decision vector corresponding to X = I.
xopt] = mincx(LMIs.005604 34.552558 new lower bound: 18.919668 19.511476 13.LMI Solvers [copt.189417 19.023978 825 .712175 18.705715 new lower bound: 18.687324 new lower bound: 18.646811 new lower bound: 18. The following trace of the iterative optimization performed by mincx appears on the screen: Solver for linear objective minimization under LMI constraints Iterations : Best objective value so far 1 2 3 *** 4 *** 5 *** 6 *** 7 *** 8 *** 9 *** 10 *** 11 *** 12 8.819471 21.063640 new lower bound: 15.882558 new lower bound: 18339853 new lower bound: 18.732574 18.768450 new lower bound: 17.options) Here 1e 5 specifies the desired relative accuracy on copt.803708 18.753903 18.123012 new lower bound: 17.c.306781 25.
716873 Result: feasible solution of required accuracy best objective value: 18. 4 mincx also returns the optimizing vector of decision variables xopt. This is the value copt returned by mincx.717297 18. which means that a feasible x satisfying the constraint (810) was found only at the second iteration.8 The LMI Lab *** 13 *** 14 *** 15 *** 16 *** new lower bound: 18.716695 new lower bound: 18.717986 18.716509 new lower bound: 18.50e 06 fradius saturation: 0.716695 with relative accuracy of at least 9. Lower bounds on the global minimum of cTx are sometimes detected as the optimization progresses.723491 18.716094 new lower bound: 18.5by10–6. The corresponding optimal value of the matrix variable X is given by 826 . Note that no value is displayed at the first iteration. These lower bounds are reported by the message *** new lower bound: xxx Upon termination. mincx reports that the global minimum for the objective Trace(X) is –18.719624 18.000% of R = 1.714880 new lower bound: 18.716695 guaranteed relative accuracy: 9. respectively.00e+09 The iteration number and the best value of cTx at the current iteration appear in the left and right columns.
X) which returns – 6.2201 2.b.5390e 05 827 .3542 – 5.8895 2.2046 2. 1) norm(XoptXst) ans = 6.xopt.2201 – 6.2046 X opt = – 5.0771 This result can be compared with the stabilizing Riccati solution computed by care: Xst = care(a.8895 – 6.LMI Solvers Xopt = dec2mat(LMIs.2855 2.q.
respectively.X2.xdec. . XK .X3) An error is issued if the number of arguments following LMISYS differs from the number of matrix variables in the problem (see matnbr). X2. In addition.2) The last argument indicates that the second matrix variable is requested. called the decision variables. The total numbers of matrix variables and decision variables are returned by matnbr and decnbr. Given particular values X1. Consider an LMI system with three matrix variables X1. X3. X2. Conversely. 828 . the value X2 of the second matrix variable is extracted from xdec by X2 = dec2mat(LMISYS. . For instance. The two functions mat2dec and dec2mat perform the conversion between these two descriptions of the problem variables. given a value xdec of the vector of decision variables.X1. the corresponding value xdec of the vector of decision variables is returned by mat2dec: xdec = mat2dec(LMISYS. . the corresponding value of the kth matrix is given by dec2mat. the function decinfo provides precise information about the mapping between decision variables and matrix variable entries (see the “Command Reference” chapter). . It could be set to the matrix variable identifier returned by lmivar. X3 of these variables.8 The LMI Lab From Decision to Matrix Variables and Vice Versa While LMIs are specified in terms of their matrix variables X1. the LMI solvers optimize the vector x of free scalar entries of these matrices.
the left. the feasible or optimal vector returned by the LMI solvers. for instance.and righthand sides of the first (and only) LMI.1) The first command evaluates the system for the value xopt of the decision variables. and the second command returns the left. Once this evaluation is performed.0387e 04 3.9333e 05 1.xopt) [lhs. The negative definiteness of this LMI is checked by eig(lhsrhs) ans = 2.6680e+01 829 . you can verify that the minimizer xopt returned by mincx satisfies the LMI constraint (810) as follows: evlmi = evallmi(LMIs.8917e 07 4.rhs] = showlmi(evlmi. The function evallmi evaluates all variable terms in an LMI system for a given value of the vector of decision variables.Validating Results Validating Results The LMI Lab offers two functions to analyze and validate the results of an LMI optimization.2” on page 823. In the LMI problem considered in “Example 8.and righthand sides of a particular LMI are returned by showlmi.
Xpos. If BRL. As a result. The LMI identifiers (initial ranking of the LMI in the LMI system) are not altered by deletions. and Slmi are the identifiers attached to the three LMIs (86)–(88). that is. to remove all variable terms involving this matrix variable. The resulting system of two LMIs is returned in NEWSYS. consider the LMI ATX + XA + BW + WTBT + I < 0 with variables X = XT ∈ R4×4 and W ∈ R2×4. suppose that the LMI system of “Example 8. the last LMI S>I remains known as the third LMI even though it now ranks second in the modified system. This LMI is defined by setlmis([]) X = lmivar(1.[4 1]) % X W = lmivar(2. This is done by NEWSYS = dellmi(LMISYS. To avoid confusion. Slmi keeps pointing to S > I even after deleting the second LMI by NEWSYS = dellmi(LMISYS. For instance.[2 4]) % W 830 . a system of LMIs can be modified in several ways with the functions dellmi. For instance.1” on page 89 is described in LMISYS and that we want to remove the positivity constraint on X.8 The LMI Lab Modifying a System of LMIs Once specified. delmvar. it is safer to refer to LMIs via the identifiers returned by newlmi. and setmvar. dellmi The first possibility is to remove an entire LMI from the system with dellmi.2) where the second argument specifies deletion of the second LMI. This operation is performed by delmvar.Xpos) dellmi Another way of modifying an LMI system is to delete a matrix variable.
note that deleting a matrix variable is equivalent to setting it to the zero matrix of the same dimensions with setmvar. if G∞ < 1 This amounts to setting the scaling matrix D (or equivalently. Keeping in mind the constraint S > I. for instance.B.S. As a result. For subsequent manipulations.1” on page 89 and suppose we want to know if the peak gain of G itself is less than one. a legitimate choice is S = 2−βψ−I.Modifying a System of LMIs lmiterm([1 1 1 X]. S = DTD) to a multiple of the identity matrix. type the command NEWSYS = delmvar(LMISYS. to fixsetmvar some variables and optimize with respect to the remaining ones. The matrix variable identifiers are not affected by deletions and continue to point to the same matrix variable.W) The resulting NEWSYS now describes the Lyapunov inequality ATX + XA + I < 0 Note that delmvar automatically removes all LMIs that depended only on the deleted matrix variable.1. setmvar The function setmvar is used to set a matrix variable to some given value.A.1) LMISYS = getlmis To delete the variable W. Consider again “Example 8. this variable is removed from the problem and all terms involving it become constant terms. it is therefore advisable to refer to the remaining variables through their identifier.2) 831 . This is useful.'s') lmiterm([1 1 1 W]. Finally.'s') lmiterm([1 1 1 0]. that is. To set S to this value.1. enter NEWSYS = setmvar(LMISYS.
Slmi) if Slmi is the identifier returned by newlmi. 832 . It could. and the third argument is the value to which S should be set.8 The LMI Lab The second argument is the variable identifier S.3) or NEWSYS = dellmi(NEWSYS. The resulting system NEWSYS reads A T X + XA + 2C T C XB <0 T B X – 2I X>0 2I > I Note that the last LMI is now free of variable and trivially satisfied. Here the value 2 is shorthand for 2−βψ−I. therefore. be deleted by NEWSYS = dellmi(NEWSYS.
Since Y is independent of X.n] = lmivar(1. Example 8.3. you must use Type 3 and specify each entry of the matrix variables directly in terms of the free scalar variables of the problem (the socalled decision variables). To retrieve this number. hence involves three decision variables. these decision variables should be labeled n + 1. With Type 3. each entry is specified as either 0 or ±xn where xn is the nth decision variable. We first consider the case of uncorrelated matrix variables. Recall that the symmetric blockdiagonal or rectangular structures are covered by Types 1 and 2 of lmivar provided that the matrix variables are independent. Y can be defined by Y = lmivar(3. Given this number. To describe more complex structures or correlations between variables. Structured Matrix Variables Fairly complex matrix variable structures and interdependencies can be specified with lmivar. Suppose that the problem variables include a 3by3 symmetric matrix X and a 3by3 symmetric Toeplitz matrix y y y 1 2 3 Y = y2 y1 y2 y y y 3 2 1 The variable Y has three independent entries. define the variable X (Type 1) by setlmis([]) [X. The following examples illustrate how to specify nontrivial matrix variable structures with lmivar.2 1 2.3 2 1]) 833 .[3 1]) The second output argument n gives the total number of decision variables used so far (here n = 6). n + 2.Advanced Topics Advanced Topics This last section gives a few hints for making the most out of the LMI Lab. n + 3 where n is the number of decision variables involved in X.n+[1 2 3. It is directed toward users who are comfortable with the basics described above.
Y) ans = 7 8 9 8 7 8 9 8 7 The next example is a problem with interdependent matrix variables. Example 8.X) ans = 1 2 4 2 3 5 4 5 6 decinfo(lmis. t are independent scalar variables. Consider three matrix variables X.1 0]) The third output of lmivar gives the entrywise dependence of X and Y on the decision variables (x1. 0 t Z = 0 –x –t 0 where x. Y.[1 0. To specify such a triple. x4) := (x. x3. 0 y Y = z 0 . y.sX] = lmivar(1. x2. Z with structure X = x 0 .sY] = lmivar(1.n.[1 0.n.8 The LMI Lab or equivalently by Y = lmivar(3. For verification purposes.4. t): sX = 1 0 0 2 834 . we can visualize the decision variable distributions in X and Y with decinfo: lmis = getlmis decinfo(lmis.toeplitz(n+[1 2 3])) where toeplitz is a standard MATLAB function. y. z. z. first define the two independent variables X and Y (both of Type 1) as follows: setlmis([]) [X.1 0]) [Y.
complexvalued LMIs can be turned into realvalued LMIs by observing that a complex Hermitian matrix L(x) satisfies L(x) < 0 if and only if Re ( L ( x ) ) Im ( L ( x ) ) <0 – Im ( L ( x ) ) Re ( L ( x ) ) This suggests the following systematic procedure for turning complex LMIs into real ones: • Decompose every complex matrix variable X as X = X1 + jX2 where X1 and X2 are real 835 .n. However.sZ] = lmivar(3.2) refers to x4 .2) 0]) Since sX(1. you can now specify the structure of Z in terms of the decision variables x1 = x and x4 = t: [Z.–sY(2.Advanced Topics sY = 3 0 0 4 Using Type 3 of lmivar.[0 sX(1.1) refers to x1 while sY(2.1). this defines the variable 0 –x 1 Z = = 0 –x –x 0 –t 0 4 as confirmed by checking its entrywise dependence on the decision variables: sZ = 0 4 0 1 ComplexValued LMIs The LMI solvers are written for realvalued matrices and cannot directly handle LMI problems involving complexvalued matrices.
by A A 1 2 –A A 2 1 For instance. For LMIs without outer factor. and an equivalent realvalued LMI is readily derived from the above observation. Assuming. H X = X >I H (811) reads (given the decompositions M = M1 + jM2 and X = X1 + jX2 with Mj. for instance.8 The LMI Lab • Decompose every complex matrix coefficient A as A = A1 + jA2 where A1 and A2 are real • Carry out all complex matrix products. that M ∈ C5×5. X2 for the real and imaginary parts of each LMI. This yields affine expressions in X1. Xj real): M M 1 2 –M M 2 1 T X X 1 2 –X X 2 1 M M 1 2 –M M 2 1 X X 1 2 < –X X 2 1 X X 1 2 <I –X X 2 1 Note that X = XH in turn requires that X 1 = X and X 2 + X 2 2 = 0 . Consequently. the real counterpart of the LMI system M XM < X. a streamlined version of this procedure consists of replacing any occurrence of the matrix variable X = X1 + jX2 by X 1 –X 2 X2 X1 and any fixed matrix A = A1 + jA2. including real ones. respectively. X1 and X2 should be declared as symmetric and skewsymmetric matrix variables. the LMI system (811) would be specified as follows: H T 836 .
n1) creates a 5by–5 skewsymmetric structure depending on the decision variables n1 + 1.12): lmiterm([1 1 1 0].1) lmiterm([ 1 1 1 bigX].n1.1) lmis = getlmis Note the threestep declaration of the structured matrix variable bigX = X X 1 2 –X X 2 1 : 1 Specify X1 as a (real) symmetric matrix variable and save its structure description sX1 as well as the number n1 of decision variables used in X1 .bigM'. M2=imag(M) bigM=[M1 M2.1) lmiterm([2 1 1 bigX].1.Advanced Topics M1=real(M).sX2 sX1]) % describe the real counterpart of (1.1.n2.X2 X1] with X1=X1' and X2+X2'=0: [X1. n1 + 2.bigM) lmiterm([ 2 1 1 bigX].sX2] = lmivar(3.[sX1 sX2.[5 1]) [X2. See the previous subsection for more details on such structure manipulations. 837 .skewdec(5.. The command skewdec(5. 2 Specify X2 as a skewsymmetric matrix variable using Type 3 of lmivar and the utility skewdec.M2 M1] setlmis([]) % declare bigX=[X1 X2..sX1] = lmivar(1.n1)) bigX = lmivar(3. 3 Define the structure of bigX in terms of the structures sX1 and sX2 of X1 and X2 ..
j.8 The LMI Lab Specifying cTx Objectives for mincx The LMI solver mincx minimizes linear objectives of the form cTx where x is the vector of decision variables.1] setlmis([]) X = lmivar(1.Pj] = defcx(lmisys. Examples include Trace(X) where X is a symmetric matrix variable.P) c(j) = trace(Xj) + x0'*Pj*x0 end T The first command returns the number of decision variables in the problem and the second command dimensions c accordingly. X. This operation is performed by the function defcx.X. consider the linear objective Trace(X) + x 0 Px0 where X and P are two symmetric variables and x0 is a given vector. such objectives are expressed in terms of the matrix variables rather than of x. or uTXu where u is a given vector. Then the for loop performs the following operations: 1 Evaluate the matrix variables X and P when all entries of the decision vector x are set to zero except xj := 1. P have been declared by x0 = [1.[3 0]) P = lmivar(1. The function defcx facilitates the derivation of the c vector when the objective is an affine function of the matrix variables. If lmisys is the internal representation of the LMI system and if x0. Apart from lmisys and j. the inputs of defcx are the identifiers X and 838 .[2 1]) : : lmisys = getlmis T the c vector such that cTx = Trace(X) + x 0 Px0 can be computed as follows: n = decnbr(lmisys) c = zeros(n. [Xj. For the sake of illustration. however.1) for j=1:n. In most control problems.
the feasibility radius bound regularizes problems with redundant variable sets. In rough terms. This may also speed up computations and improve numerical stability.Advanced Topics P of the variables involved in the objective. [ matrix values ] = defcx( LMI system. it is possible to constrain the solution x to lie in the ball xTx < R2 where R > 0 is called the feasibility radius. In our example the result is c = 3 1 2 1 Other objectives are handled similarly by editing the following generic skeleton: n = decnbr( LMI system ) c = zeros(n. or gevp. mincx. Its default value is R = 109 . 2 Evaluate the objective expression for X := Xj and P := Pj. This specifies a maximum (Euclidean norm) magnitude for x and avoids getting solutions of very large norm. Finally. matrix identifiers) c(j) = objective(matrix values) end Feasibility Radius When solving LMI problems with feasp.j. and the outputs Xj and Pj are the corresponding values.” in which case the feasibility radius is increased during the 839 .1) for j=1:n. the set of scalar variables is redundant when an equivalent problem could be formulated with a smaller number of variables. The feasibility radius R is set by the third entry of the options vector of the LMI solvers. Setting R to a negative value means “no rigid bound. This yields the jth entry of c by definition.
For the LMI problems addressed by mincx and gevp. Although there is no universal remedy for this difficulty. such techniques require that the system of LMI constraints be strictly feasible. the feasible set has a nonempty interior.8 The LMI Lab optimization if necessary. As a result. To compute feasible solutions. these solvers may encounter difficulty when the LMI constraints are feasible but not strictly feasible. (812) (813) In this modified problem. which reformulates the problem Find x such that L ( x ) ≤ 0 as Minimize t subject to Lx < t × I. that is. it is sometimes possible to eliminate underlying algebraic constraints to obtain a strictly feasible problem with fewer variables. 840 . nonstrict feasibility generally causes the solvers to fail and to return an “infeasibility” diagnosis. the computational effort is typically higher as feasp strives to approach the global optimum tmin = 0 to a high accuracy. however. For feasibility problems. that is. the LMI constraint is always strictly feasible in x. WellPosedness Issues The LMI solvers used in the LMI Lab are based on interiorpoint optimization techniques. when the LMI L(x) ð 0 has solutions while L(x) < 0 has no solution. This “flexible bound” mode may yield solutions of large norms. t and the original LMI (812) is feasible if and only if the global minimum tmin of (813) satisfies tmin ð 0 For feasible but not strictly feasible problems. this difficulty is automatically circumvented by feasp.
the positivity of B(x) for some x ∈ Rn is required for the wellposedness of the problem and the applicability of polynomialtime interiorpoint methods. Technically. Y < λ B 1 ( x ). P>0 While this problem is technically wellposed. SemiDefinite B(x) in gevp Problems Consider the generalized eigenvalue minimization problem Minimize λ subject to A ( x ) < λB ( x ). To compute a nontrivial Lyapunov matrix and easily differentiate between feasibility and infeasibility. B1 ( x ) > 0 B(x) > 0 (814) where Y is an additional symmetric variable of proper dimensions. A simple remedy consists of replacing the constraints A(x) < B(x). replace the constraint P > 0byP > αI with α > 0. the LMI optimization is likely to produce solutions close to zero (the trivial solution of the nonstrict problem). the termoriented description of LMIs used in the LMI Lab typically leads to higher efficiency than the canonical representation 841 . Efficiency and Complexity Issues As explained in the beginning of the chapter. B ( x ) > 0. Note that this does not alter the problem due to its homogeneous nature. Hence problems where B (x) 0 B(x) = 1 . with B 1 ( x ) > 0 strictly feasible 0 0 cannot be directly solved with gevp. The resulting problem is equivalent to (814) and can be solved directly with gevp.Advanced Topics Another issue has to do with homogeneous feasibility problems such as ATP + P A < 0. C ( x ) < 0. by A(x) < Y 0 0 0 .
(815) This is no longer true. the number of iterations actually performed grows slowly with M in most problems. 842 . Solving M + PTXQ + QTXTP < 0 In many outputfeedback synthesis problems.8 The LMI Lab A 0 + x 1 A 1 + … + x N A N < 0. the flop count per iteration for the feasp and mincx solvers is proportional to • N3 when the leastsquares problem is solved via. If M denotes the total row size of the LMI system and N the total number of scalar decision variables. while feasp and mincx are comparable in complexity. If your LMI problem has few free scalar variables but many terms in each LMI. derive the controller statespace matrices by solving an LMI of the form M + P XQ + Q X P < 0 T T T (816) where M. when the number of variable terms is nearly equal to or greater than the number N of decision variables in the problem. Each scalar variable xj is then declared independently and the LMI terms are of the form xjAj. it is therefore preferable to rewrite it as (815) and to specify it in this form. Finally. Make sure that your LMI problem cannot be solved with mincx before using gevp. 2 Given this Lyapunov function. Q are given matrices and X is an unstructured mbyn matrix variable. P. however. Cholesly factorization of the Hessian matrix (default) [2 ] • MbyN2 when numerical instabilities warrant the use of QR factorization instead While the theory guarantees a worstcase iteration count proportional to M. gevp typically demands more computational effort. the design can be performed in two steps: 1 Compute a closedloop Lyapunov function via LMI optimization.
'Xmin') and involves LMI optimization to minimize X.Q) Since this central solution sometimes has large norm.Advanced Topics It turns out that a particular solution Xc of (816) can be computed via simple linear algebra manipulations [1 ].P.Q. basiclmi also offers the option of computing an approximate leastnorm solution of (816). 843 . The function basiclmi returns the “explicit” solution Xc: Xc = basiclmi(M. Typically. Xc corresponds to the center of the ellipsoid of matrices defined by (816).P. This is done by X = basiclmi(M.
“The Projective Method for Solving Linear Matrix Inequalities. [2] Nemirovski. Dec. J. pp. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. Apkarian. J. Contr. [4] Shamma. and P. Philadelphia. Amer.. “A Linear Matrix Inequality Approach to H∞ Control.. 840–844. 1994.” Proc. Conf.S. 421–448. pp. P. pp.” Proc. and P. SIAM Books. and A. Yu. Conf. “Robustness Analysis for TimeVarying Systems. 1994.” Int..8 The LMI Lab References [1] Gahinet.. 1992. Contr. 844 . Gahinet. Robust and Nonlinear Contr. 4 (1994). [3] Nesterov... 3163–3168. A. Nemirovski.
9 Command Reference .
Functions are grouped by application in tables at the beginning of this chapter. In addition.9 Command Reference This chapter gives a detailed description of all LMI Control Toolbox functions. information on each function is aavailable through the online Help facility. 92 .
List of Functions List of Functions Linear Algebra basiclmi care dare dricpen ricpen Solve M + PTXQ + QTXTP < 0 Continuoustime Riccati solver Discretetime Riccati solver Generalized Riccati solver (discretetime) Generalized Riccati solver (continuoustime) Linear TimeInvariant Systems ltisys ltiss ltitf sbalanc sinfo sinv splot spol sresp ssub Specify a SYSTEM matrix Extract statespace data Compute the transfer function (SISO) Balance a statespace realization via diagonal similarity Return information about a SYSTEM matrix Invert an LTI system Plot the time and frequency responses of LTI systems Return the poles of an LTI system Frequency response of an LTI system Extract a subsystem from a system Interconnection of Systems sadd sconnect Form the parallel interconnection of systems Describe general control loops 93 .
9 Command Reference Interconnection of Systems sdiag slft sloop smult Append (concatenate) several systems Form linearfractional interconnections Form elementary feedback interconnections Form the series interconnection of systems Polytopic and ParameterDependent Systems (PSystems) aff2pol Polytopic representation of an affine parameterdependent system Simulation of parameterdependent systems along a parameter trajectory Retrieve information about a Psystem Specify a Psystem Specify a vector of uncertain or timevarying parameters Interpret the output of pvec pdsimul psinfo psys pvec pvinfo Dynamical Uncertainty ublock udiag Specify an uncertainty block Specify blockdiagonal uncertainty from individual uncertainty blocks Display the characteristics of uncertainty structures Linearfractional representation of an affine parameterdependent system uinfo aff2lft 94 .
List of Functions Robustness Analysis decay dnorminf muperf mustab norminf norm2 quadperf quadstab pdlstab Quadratic decay rate RMS gain of a discretetime LTI system Robust performance of an uncertain LTI system Robust stability of an uncertain LTI system H∞ norm (RMS gain) of an LTI system H2 performance of an LTI system Quadratic performance Quadratic stability Robust stability analysis via parameterdependent Lyapunov functions Popov criterion popov 95 .
9 Command Reference H∞ Control and Loop Shaping Continuous Time hinflmi hinfmix hinfric hinfpar lmireg magshape msfsyn sconnect LMIbased H∞ synthesis Mixed H2/H∞ synthesis with regional pole placement Riccatibased H∞ synthesis Extract statespace data from the plant SYSTEM matrix Specify LMI regions for pole placement Graphical specification of the shaping filters Multimodel/objective statefeedback synthesis Specify general control structures Discrete Time dhinflmi dhinfric hinfpar LMIbased H∞ synthesis Riccatibased H∞ synthesis Retrieve the plant statespace data Gain Scheduling hinfgs pdsimul Synthesis of robust gainscheduled controllers Simulation along parameter trajectories 96 .
and righthand sides of an evaluated LMI Set a matrix variable to a given value 97 .LMI Lab: Specifying and Solving LMIs LMI Lab: Specifying and Solving LMIs Specification of LMIs lmiedit setlmis lmivar lmiterm newlmi getlmis GUI for LMI specification Initialize the LMI description Define a new matrix variable Specify the term content of an LMI Attach an identifying tag to new LMIs Get the internal description of the LMI system LMI Solvers feasp gevp mincx dec2mat defcx Feasibility of a system of LMIs Generalized eigenvalue minimization under LMI constraints Minimization of a linear objective under LMI constraints Convert the output of the solvers into values of the matrix variables Help define cTx objectives for mincx Evaluation of LMIs/Validation of Results evallmi showlmi setmvar Evaluate all variable terms for given values of the decision variables Return the left.
the number of free scalar variables in a problem Inquire about an existing system of LMIs Get the number of LMIs in a problem Get the number of matrix variables in a problem Manipulations of LMI Variables decinfo dec2mat mat2dec Express each matrix variable entry in terms of the decision variables Return the matrix variable values given a vector of decision variables Return the vector of decision variables given the matrix variable values Modification of LMI Systems dellmi delmvar setmvar Delete an LMI from the system Remove a matrix variable from the problem Instantiate a matrix variable 98 .9 Command Reference LMI Lab: Additional Facilities Information Retrieval decinfo decnbr lmiinfo lminbr matnbr Visualize the mapping between matrix variable entries and decision variables Get the number of decision variables. that is.
and bounded by pj – pj δ j ≤ 2 99 .. the function aff2lft returns the equivalent linearfractional representation ∆ u P(s) y where • P(s) is an LTI system called the nominal plant • The blockdiagonal operator ∆ = diag(δ1Ir1. pn) is a vector of uncertain or timevarying real parameters taking values in a box: p j ≤ pj ≤ p j Such systems are specified with psys. Given this affine parameterdependent model. δnIrn) accounts for the uncertainty on the real parameters pj. . .delta] = aff2lft(affsys) The input argument affsys describes an affine parameterdependent system of the form · E(p) x = A(p)x + B(p)u y = C(p)x + D(p)u where p = (p1. Each scalar δj is real. . . .. .aff2lft Purpose Syntax Description 9aff2lft Compute the linearfractional representation of an affine parameterdependent system [sys. possibly repeated (rj > 1).
ublock. psys. udiag 910 . 2 Example See Also See example in “SectorBounded Uncertainty” on page 230. Closing the loop with ∆ ≡ 0 yields the LTI system · E(pav) x = A(pav)x + B(pav)u y = C(pav)x + D(pav)u where pav = 1 ( p + p ) is the vector of average parameter values.aff2lft The SYSTEM matrix of P and the description of ∆ are returned in sys and delta. pvec.
1” on page 221. pvec. psys. the SYSTEM matrices A ( p ) + jE ( p ) B ( p ) ex ex ex D ( p ex ) C ( p ex ) for all corners pex of the parameter box or all vertices pex of the polytope of parameter values... The vertex systems of polsys are the instances of (91)–(92) at the vertices pex of the parameter range. pn) is a vector of uncertain or timevarying real parameters taking values in a box or a polytope. aff2lft 911 . i. The description affsys of this system should be specified with psys. .aff2pol Purpose Syntax Description 9aff2pol Convert affine parameterdependent models to polytopic ones polsys = aff2pol(affsys) aff2pol derives a polytopic representation polsys of the affine parameter dependent system · E ( p )x = A ( p )x + B ( p )u y = C ( p )x + D ( p )u (91) (92) where p = (p1. .e. . Example See Also “Example 2.
If WP and WQ are orthonormal bases of the null space of P and Q.Q. When option is set to 'Xmin'. The solution X is derived by standard linear algebra manipulations.P. basiclmi computes the leastnorm solution by solving the LMI problem Minimize τ subject to M + PTXQ + QTXTP < 0 τI X ≥0 X T τI T T See Also feasp.option) basiclmi computes a solution X of the LMI M + PTXQ + QTXTP < 0 where M is symmetric and P.basiclmi Compute a solution X of M + PTXQ + QTXTP < 0 9basiclmi Purpose Syntax Description X = basiclmi(M. mincx 912 . this LMI is solvable if and only if W P M WP < 0 and W Q M WQ < 0 The output is [] when these conditions are not satisfied.Q are matrices of compatible dimensions.
G. Contr.care Purpose Syntax Description 9care Solve continuoustime Riccati equations (CARE) X = care(A. Arnold.R. Laub. “Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations.S. the gain matrix G given by G = –R–1(BTXE + ST).Q. pp. “A Schur Method for Solving Algebraic Riccati Equations.S. and the Frobenius norm RR of the relative residual matrix.. Aut.E) [X. If the input arguments S and E are omitted. 913–921. J.E) This function computes the stabilizing solution X of the Riccati equation ATXE + ETXA – (ETXB + S)R–1(BTXE + ST) + Q = 0 where E is an invertible matrix.” Proc.Q.RR] = care(A. W.R.J. pp.F. the matrices S and E are set to the default values S = 0 and E = I. IEEE. Reference Laub. 72 (1984).B. the pencil H – λL must have no finite generalized eigenvalue on the imaginary axis.” IEEE Trans.L. The function care also returns the vector L of closedloop eigenvalues.B. and A. The solution is computed by unitary deflation of the associated Hamiltonian pencil H – λ L = –Q –A E 0 0 T –λ –S 0 E 0 T T 0 0 0 S B R T A 0 B For solvability. 1746– 1754. AC–24 (1979). A.. See Also ricpen.. cares. dare 913 .
E) dare computes the stabilizing solution X of the algebraic Riccati equation –1 T T T T T T A XA – E XE – ( A XB + S ) ( B XB + R ) ( B XA + S ) + Q = 0 (93) where E is an invertible matrix.Q.S.S.X2] = dricpen(H.dare Purpose Syntax Description 9dare Solve discretetime Riccati equations (DARE) X = dare(A.G.Q.E) [X. Remark The function dricpen is a variant of dare where the inputs are the two matrices H and L defined above.RR] = dare(A. X2 such that X • 1 has orthonormal columns. the gain matrix G given by G = –(BTXB + R)–1(BTXA + ST). the pencil H – λL must have no finite generalized eigenvalue on the unit circle.L. The solution is computed by unitary deflation of the associated symplectic pencil A 0 T 0 0 T –S – λ 0 A 0 T 0 –B 0 R B E H – λ L = –Q –E S T 0 For solvability. The syntax is X = dricpen(H. the matrices S and E are set to the default values S = 0 and E = I.B.R.B. and the Frobenius norm RR of the relative residual matrix. If the input arguments S and E are omitted.L) or [X1.R. The function dare also returns the vector L of closedloop eigenvalues. X2 914 .L) This second syntax returns two matrices X1.
Contr. “On the Numerical Solution of the DiscreteTime Algebraic Riccati Equation. care 915 . 631–641.. IEEE.J. pp..” Proc.dare –1 • X = X 2 X 1 is the stabilizing solution of (93).J. pp. Reference Pappas. Laub. and A. dares. 72 (1984). “Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations. T.. and N. Laub. AC–25 (1980).R. 1746– 1754. See Also dricpen. Sandell. Arnold.F. W. Aut.” IEEE Trans. A.
. . Two control parameters can be reset via options(1) and options(2): • If options(1)=0 (default). pn(t)) decay returns the quadratic decay rate drate. E1). E). . En)}. decay runs in fast mode. E) ∈ Co{(A1. the smallest α ∈ R such that ATQE + EQAT < αQ holds for some Lyapunov matrix Q > 0 and all possible values of (A.decay Purpose Syntax Description 9decay Quadratic decay rate of polytopic or affine Psystems [drate. Set options(1)=1 to use the least conservative conditions. The default is 109. (A. . psys 916 .3” on page 39. • options(2) is a bound on the condition number of the Lyapunov matrix P. . pdlstab. or polytopic systems · E(t) x = A(t)x. . (An.. i. mustab.options) For affine parameterdependent systems · E(p) x = A(p)x. . quadstab.. p(t) = (p1(t).P] = decay(ps. Example See Also See “Example 3.e. using the least expensive sufficient conditions.
the free entries of all matrix variables described in lmisys.. j) = xn (the nth decision variable) • –n if X(i. Each entry of X is either a hard zero..[2 1]) : : lmis = getlmis 917 . . xN.X) The function decinfo expresses the entries of a matrix variable X in terms of the decision variables x1.decinfo Purpose Syntax Description 9decinfo Describe how the entries of a matrix variable X relate to the decision variables decinfo(lmisys) decX = decinfo(lmisys. some decision variable xn. Recall that the decision variables are the free scalar variables of the problem. j) is a hard zero • n if X(i. It then prompts the user for a matrix variable and displays in return the decision variable content of this variable. the command decX = decinfo(lmisys. . This is useful to specify matrix variables with atypical structures (see lmivar). . or its opposite –xn. . xN.[3 0]) Y = lmivar(2. j) = –xn decX clarifies the structure of X as well as its entrywise dependence on x1.X) returns an integer matrix decX of the same dimensions as X whose (i. j) entry is • 0 if X(i. Example 1 Consider an LMI with two matrix variables X and Y with structure: • X = x I3 with x scalar • Y rectangular of size 2by1 If these variables are defined by setlmis([]) X = lmivar(1. If X is the identifier of X supplied by lmivar. . . or equivalently. decinfo can also be used in interactive mode by invoking it with a single argument.
Y) dY = 2 3 This indicates a total of three decision variables x1. and is declared by setlmis([]) X = lmivar(1.X) dX = 1 0 0 0 1 0 0 0 1 dY = decinfo(lmis.2 0]) : lmis = getlmis The decision variable distribution in X can be visualized interactively as follows: decinfo(lmis) There are 4 decision variables labeled x1 to x4 in this problem.decinfo the decision variables in X and Y are given by dX = decinfo(lmis. x2. x Y= 2 x 3 Note that the number of decision variables corresponds to the number of free entries in X and Y when taking structure into account. x3 that are related to the entries of X and Y by x 0 0 1 X = 0 x1 0 0 0 x 1 .[2 1. Example 2 Suppose that the matrix variable X is symmetric block diagonal with one 2by2 full block and one 2by2 scalar block. 918 .
xj....x4}.xj. or 0 to quit): ?> 1 The decision variables involved in X1 are among {x1.j<0 stand for 0.. respectively): X1 : 1 2 0 0 2 3 0 0 0 0 4 0 0 0 0 4 ********* Matrix variable Xk of interest (enter k between 1 and 1. decinfo 919 .j>0. Their entrywise distribution in X1 is as follows (0. mat2dec.decinfo Matrix variable Xk of interest (enter k between 1 and 1. dec2mat. decnbr. or 0 to quit): ?> 0 See Also lmivar.
ndec is the length of the vector of decision variables. decinfo. mat2dec 920 . For an LMI system lmis with two matrix variables X and Y such that • X is symmetric block diagonal with one 2by2 full block. and one 2by2 scalar block • Y is 2by3 rectangular. See Also dec2mat.decnbr Purpose Syntax Description 9decnbr Give the total number of decision variables in a system of LMIs ndec = decnbr(lmisys) The function decnbr returns the number ndec of decision variables (free scalar variables) in the LMI problem described in lmisys. the number of decision variables is ndec = decnbr(LMIs) ndec = 10 Example This is exactly the number of free entries in X and Y when taking structure into account (see decinfo for more details). In other words.
decvars. dec2mat is useful to derive the corresponding feasible or optimal values of the matrix variables. derive the corresponding values of the matrix variables valX = dec2mat(lmisys. .. . XK. Recall that the decision variables are all free scalar variables in the LMI problem and correspond to the free entries of the matrix variables X1. decnbr. mat2dec. dec2mat computes the corresponding value valX of the matrix variable with identifier X. Example See Also See the description of feasp. Since LMI solvers return a feasible or optimal value of the vector of decision variables. decinfo 921 .dec2mat Purpose Syntax Description 9dec2mat Given values of the decision variables. . This identifier is returned by lmivar when declaring the matrix variable.X) Given a value decvars of the vector of decision variables.
Given the identifiers X1... defcx returns the values V1..Xk) defcx is useful to derive the c vector needed by mincx when the objective is expressed in terms of the matrix variables..Vk of these variables when the nth decision variable is set to one and all others to zero......Vk] = defcx(lmisys. decinfo 922 .n...defcx Purpose Syntax Description Help specify cTx objectives for the mincx solver 9defcx [V1. See the example in “Specifying cTx Objectives for mincx” on page 838 for more details on how to use this function to derive the c vector...Xk of the matrix variables involved in this objective...X1. See Also mincx..
Since this ranking is not modified by deletions. To delete the second LMI. LMI2. type lmis = dellmi(lmis. type lmis = dellmi(lmisys. and stored in lmisys.LMI2) lmis now describes the system of LMIs T T T A1 X1 + X1 A1 + Q1 < 0 A3 X3 + X3 A3 + Q3 < 0 T T (94) (95) and the second variable X2 has been removed from the problem since it no longer appears in the system (94)–(95). LMI3 with newlmi. it is safer to refer to the remaining LMIs by their identifiers. labeled LMI1.LMI3) or equivalently lmis = dellmi(lmis. The ranking n is relative to the order in which the LMIs were declared and corresponds to the identifier returned by newlmi. matrix variables that only appeared in the deleted LMI are removed from the problem.n) dellmi deletes the nth LMI from the system of LMIs described in lmisys.dellmi Purpose Syntax Description 9dellmi Remove an LMI from a given system of LMIs newsys = dellmi(lmisys.3) 923 . The updated system is returned in newsys. To further delete (95). Finally. Example Suppose that the three LMIs A 1 X1 + X1A1 + Q1 < 0 A 2 X2 + X2A2 + Q2 < 0 A 3 X3 + X3A3 + Q3 < 0 have been declared in this order.
See Also newlmi.dellmi Note that (95) has retained its original ranking after the first deletion. lmiedit. lmiinfo 924 .
The identifier X should be the second argument returned by lmivar when declaring X. type lmisys = delmvar(lmisys. All terms involving X are automatically removed from the list of LMI terms.X) Now lmisys describes the LMI T T 0 < A YB + B YA + Q D T D 0 with only one variable Y. The description of the resulting system of LMIs is returned in newsys. setmvar. To delete the variable X. Note that Y is still identified by the label Y.delmvar Purpose Syntax Description 9delmvar Delete one of the matrix variables of an LMI problem newsys = delmvar(lmisys. Example Consider the LMI T T 0 < A Y + B YA + Q CX + D T T T T –( X + X ) X C +D involving two variables X and Y with identifiers X and Y. See Also lmivar.X) delmvar removes the matrix variable X with identifier X from the list of variables defined in lmisys. lmiinfo 925 .
” Int. Its syntax and usage mirror those of hinflmi. J.g.r) [gopt. D21). 4 (1994).y2] = dhinflmi(P. hinflmi 926 .x1.dhinflmi Purpose Syntax 9dhinflmi LMIbased H∞ synthesis for discretetime systems gopt = dhinflmi(P.y1. “A Linear Matrix Inequality Approach to H∞ Control. Apkarian. Robust and Nonlinear Contr.r.tol. pp.K] = dhinflmi(P. respectively. P.K.x2. The function dhinflmi returns solutions R = x1 and S = y1 of this LMI system for γ = gopt (the matrices x2 and y2 are set to goptbyI). and P.options) dhinflmi is the counterpart of hinflmi for discretetime plants. D 12 ) and (C2.. 421–448.r) [gopt. dhinfric. Description The H∞ performance γ is achievable if and only if the following system of LMIs has symmetric solutions R and S: N 12 0 0 I 0 I T T ARA – R C 1 RA B1 T T T T B1 T – γI + C 1 RC 1 D 11 T D 11 – γI ARC 1 T T C1 T T – γI + B 1 SB 1 D 11 D 11 – γI R I ≥0 I S N 12 0 0 <0 I N 21 0 A SA – S T B 1 SA A SB 1 T N 21 0 0 <0 I C1 where N12 and N21 denote bases of the null spaces of ( B 2 . T T Reference See Also Gahinet.
K] = dhinfric(P. dhinflmi.K.r) [gopt. The syntax and usage of dhinfric are identical to those of hinfric.x2.gmin.y1.r) [gopt.dhinfric Purpose Syntax 9dhinfric Riccatibased H∞ synthesis for discretetime systems gopt = dhinfric(P.options) dhinfric is the counterpart of hinfric for discretetime plants P(s) with Description equations: xk+1 = Axk + B1 wk + B2uk zk = C1xk + D11 wk + D12uk yk = C2xk + D21 wk + D22uk. hinfric 927 . See dnorminf for a definition of the RMS gain (H∞ performance) of discretetime systems.y2] = dhinfric(P. Restriction See Also D12 and D21 must have full rank.r.gmax.x1.tol.
peakf] = dnorminf(g. Bruisma. and P. Robel. C. 2π ] of the response on the unit circle. G ∞ = y l sup 2 u l u∈t u≠0 2 2 The input g is a SYSTEM matrix containing the statespace data A. Kabamba. 882–884. this peak gain coincides with the RMS gain or H∞ norm of G..e.dnorminf Purpose Syntax Description 9dnorminf Compute the random meansquares (RMS) gain of discretetime systems [gain. pp. D.tol) For sampleddata LTI systems with statespace equations Exk+1 = Axk + Buk yk+1 = Cxk + Duk and transfer function G(z) = D + C(zE – A)–1B. Reference Boyd. See Also norminf 928 .A. E. dnorminf computes the peak gain G ∞ = sup σ max ( G ( e ) ) (96) jω ω ∈ [ 0. Contr. On output. G. AC–34 (1989). “On Computing the Infinity Norm. S. dnorminf returns the RMS gain gain and the “frequency” peakf at which this gain is attained. that is. “A Fast Algorithm to Compute the H∞Norm of a Transfer Function Matrix. Aut. The relative accuracy on the computed RMS gain can be adjusted with the optional input tol (the default value is 10–2).. the value of ω maximizing (96).” IEEE Trans. Steinbuch.. Contr. 2 (1989).. N.” Syst. Sign. Letters. 287–293. Balakrishnan. 14 (1990). “A Bisection Method for Computing the H∞ Norm of a Transfer Matrix and Related Problems.. pp. When G(z) is stable. i.” Math. V. and M.. 207–219. Syst. Contr. pp. B.
evallmi Purpose Syntax Description 9evallmi Given a particular instance of the decision variables.[2 1]) % full symmetric X lmiterm([1 1 1 X].5 – 0. This LMI system is defined by: 0. Example Consider the feasibility problem of finding X > 0 such that ATXA – X + I < 0 where A = 0.1) % LMI #1: I lmiterm([2 1 1 X].. To evaluate all LMIs for particular instances of the matrix variables X1. XK. call feasp by [tmin. ..1) % LMI #2: X lmis = getlmis To compute a solution xfeas.7 setlmis([]) X = lmivar(1. .A) % LMI #1: A'*X*A lmiterm([1 1 1 X].1 – 0.and righthand sides of each LMI are then returned by showlmi.decvars) evallmi evaluates all LMI constraints for a particular instance decvars of the vector of decision variables. . evaluate all variable terms in the system of LMIs evalsys = evallmi(lmisys. Observation evallmi is meant to operate on the output of the LMI solvers. The matrix values of the left. The output evalsys is an LMI system containing only constant terms. The vector returned by these solvers can be fed directly to evallmi to evaluate all variable terms.1. The function evallmi is useful for validation of the LMI solvers’ output. . . XK. Recall that decvars fully determines the values of the matrix variables X1. .2 .. . first form the corresponding decision vector x with mat2dec and then call evallmi with x as input.1) % LMI #1: X lmiterm([1 1 1 0].A'. . .1. The “evaluation” consists of replacing all terms involving X1. XK by their matrix value.xfeas] = feasp(lmis) 929 .
mat2dec 930 . To check that xfeas is indeed feasible.2229e+01 5.X). evaluate all LMI constraints by typing evals = evallmi(lmis.rhs2] = showlmi(evals.1942e+02 The LMI constraints are therefore feasible since tmin < 0.1) [lhs2.evallmi The result is tmin = 4.rhs1] = showlmi(evals. See Also showlmi.2) and the test eig(lhs1rhs1) ans = 8. The solution X corresponding to the feasible decision vector xfeas would be given by X = dec2mat(lmis.1029e+02 1.8163e+01 confirms that the first LMI constraint is satisfied by xfeas.7117e+00 xfeas' = 1.1519e+01 1. setmvar.xfeas.and righthand sides of the first and second LMIs are then given by [lhs1. dec2mat.xfeas) The left.
feasp Purpose Syntax Description 9feasp Find a solution to a given system of LMIs [tmin. If the problem is feasible but not strictly feasible. The default value is target = 0. The vector xfeas is a particular value of the decision variables for which all LMIs are satisfied. This fiveentry vector is organized as follows: • options(1) is not used • options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default) • options(3) resets the feasibility radius. tmin is positive and very small. Given the LMI system T T N LxN ≤ M R ( x )M. Setting options(3) to a value R > 0 further constrains the decision vector x = (x1.. . . The LMI constraints are feasible if tmin ð 0 and strictly feasible if tmin < 0. The optimization code terminates as soon as a value of t below this target is reached.options. xN) to lie within the ball 931 . Note that xfeas is a solution in terms of the decision variables and not in terms of the matrix variables of the problem. The optional argument target sets a target value for tmin. Some postanalysis may then be required to decide whether xfeas is close enough to feasible. Control Parameters The optional argument options gives access to certain control parameters for the optimization algorithm. . Use dec2mat to derive feasible values of the matrix variables from xfeas.xfeas] = feasp(lmisys. The global minimum of this program is the scalar value tmin returned as first output argument by feasp.target) The function feasp computes a solution xfeas (if any) of the system of LMIs described by lmisys. xfeas is computed by solving the auxiliary convex program: T T (97) Minimize t subject to N L ( x )N – M R ( x ) M ≤ tI .
The default value is R = 109. a large value results in natural convergence at the expense of a possibly large number of iterations. Type HELP MEMORY for your options. The default value is 10. 932 . the code terminates quickly but without guarantee of accuracy. Setting option(i) to zero is equivalent to setting the corresponding control parameter to its default value. for instance. Upon termination. it suffices to type options=zeros(1. Setting options(3) to a negative value activates the “flexible bound” mode. there is no need to redefine the entire vector when changing just one control parameter. and increased if necessary during the course of optimization • options(4) helps speed up termination. To set the maximum number of iterations to 10. that is.5) options(2)=10 % default value for all parameters Memory Problems When the leastsquares problem solved at each iteration becomes ill conditioned. the norm of the solution as a percentage of the feasibility radius R. If set to a small value (< 10). Since the QR mode typically requires much more memory. Resetting options(5) to zero (default value) turns it back on. MATLAB may run out of memory and display the message ??? Error using ==> feaslv Out of memory. On the contrary. the code terminates if t did not decrease by more than one percent in relative terms during the last J iterations. When set to an integer value J > 0. the feasp solver switches from Choleskybased to QRbased linear algebra (see“Memory Problems” on page 974 for details). feasp displays the fradius saturation. The feasibility radius is a simple means of controlling the magnitude of solutions. accuracy. Consequently.feasp N ∑ xi < R 2 i=1 2 In other words. the Euclidean norm of xfeas should not exceed R. This parameter trades off speed vs. the feasibility radius is initially set to 108. • options(5) = 1 turns off the trace of execution of the optimization procedure. In this mode.
This will prevent switching to QR and feasp will terminate when Cholesky fails due to numerical instabilities.1.'s') % LMI lmiterm([4 1 1 p].1) % LMI #4: lmiterm([4 1 1 0]. A 3 = – 1. Example Consider the problem of finding P > I such that A 1 P + PA 1 < 0 A 2 P + PA 2 < 0 A 3 P + PA 3 < 0 with data A 1 = – 1 2 .feasp You should then ask your system manager to increase your swap space or.'s') % LMI lmiterm([2 1 1 p]. To assess feasibility with feasp.9 1 –3 1.1.1.4 0.7 0.'s') % LMI lmiterm([3 1 1 p].a3. if no additional swap space is available.a2. A2.5 .xfeas] = feasp(lmis) 933 . A 2 = – 0. A3} (see“Quadratic Stability” on page 36 for details).8 1.a1. first enter the LMIs (98)–(910) by: setlmis([]) p = lmivar(1.1) % LMI #4: I lmis = getlmis #1 #2 #3 P T T T (98) (99) (910) Then call feasp to find a feasible decision vector: [tmin. set options(4) = 1.0 This problem arises when studying the quadratic stability of the polytope of matrices Co{A1.7 – 2.3 – 2.1.[2 1]) lmiterm([1 1 1 p].
A.p) This returns P = 270.[0. Baltimore.6912. “The Projective Method for Solving Linear Matrix Inequalities. SIAM.1745 and a matrix P with largest eigenvalue λmax(P) = 9. gevp.feasp This returns tmin = 3..1363. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. Amer. Nemirovski. 1994. pp. type P = dec2mat(lmis.. you can bound the Frobenius norm of P by 10 while asking tmin to be less than or equal to –1. A3}.xfeas] = feasp(lmis.0. See Also mincx.0. Philadelphia. Gahinet.0].8 126. Y. 1994.xfeas. For instance.1) The third entry 10 of options sets the feasibility radius to 10 while the third argument 1 sets the target value for tmin. This yields tmin = 1.1 It is possible to add further constraints on this feasibility problem. Hence (98)–(910) is feasible and the dynamical · system x = A(t)x is quadratically stable for A(t) ∈ Co{A1. Conf.mex.10. This is done by [tmin. 840–844. Contr. The optimization is performed by the CMEX file feaslv. and P.. and A.4 155. Reference The feasibility solver feasp is based on Nesterov and Nemirovski’s Projective Method described in Nesterov.4 126.” Proc. dec2mat 934 . A2. Maryland. Nemirovski. To obtain a Lyapunov matrix P proving the quadratic stability.
mustab See Also 935 . G scaling matrices in problems D = getdg(ds. G at the frequency fs(i) are extracted from ds and gs by getdg.getdg Purpose Syntax Description 9getdg Retrieve the D. G scalings compacted in two matrices ds and gs.i) The function mustab computes the mixedµ upper bound over a grid of frequencies fs and returns the corresponding D. The values of D.i) G = getdg(gs.
lmivar. See Also setlmis.getlmis Purpose Syntax Description 9getlmis Get the internal description of an LMI system lmisys = getlmis After completing the description of a given LMI system with lmivar and lmiterm. newlmi 936 . its internal representation lmisys is obtained with the command lmisys = getlmis This MATLAB representation of the LMI system can be forwarded to the LMI solvers or any other LMILab function for subsequent processing. lmiterm.
Finally. Note that xinit should be of length decnbr(lmisys) (the number of decision variables). All other input arguments are optional.options. gevp returns the global minimum lopt and the minimizing value xopt of the vector of decision variables x. The code terminates as soon as it has found a feasible pair (λ. The number of linearfractional constraints (913) is specified by nlfc. gevp systematically assumes that the last nlfc LMI constraints are linear fractional • Add the constraint B(x) > 0 or any other LMI constraint that enforces it (see Remark below). the last argument target sets some target value for λ. The argument lmisys describes the system of LMIs (911)–(913) for λ = 1. Provided that (911)–(912) are jointly feasible. If an initial feasible pair (λ0.xinit.nlfc.xopt] = gevp(lmisys. The corresponding optimal values of the matrix variables are obtained with dec2mat. The initial point is ignored when infeasible.linit.target) gevp solves the generalized eigenvalue minimization problem Minimize λ subject to: C(x) < D(x) B(x) (911) (912) (913) 0 < A(x) < λB(x) where C(x) < D(x) and A(x) < λB(x) denote systems of LMIs. 937 . be cautious to • Always specify the linearfractional constraints (913) last in the LMI system. x0) is available. This positivity constraint is required for regularity and wellposedness of the optimization problem (see the discussion in “WellPosedness Issues” on page 840). x) with λ ð target. it can be passed to gevp by setting linit to λ0 and xinit to x0. Caution When setting up your gevp problem.gevp Purpose Syntax Description 9gevp Generalized eigenvalue minimization under LMI constraints [lopt. The LMIs involving λ are called the linearfractional constraints while (911)–(912) are regular LMI constraints.
5 . • options(4) helps speed up termination. 3) (914) (915) 938 . 1 –3 1.9 . Its purpose and usage are as for feasp. If set to an integer value J > 0. 2.4 0. In gevp. • options(3) sets the feasibility radius.7 0. This is equivalent to minimizing dt α subject to I<P A 1 P + PA 1 < αP T · x = Aix (i = 1. • options(5) = 1 turns off the trace of execution of the optimization procedure. we mean the amount by which λ decreases. The default value is 5 iterations..3 – 2. By progress. Setting option(i) to zero is equivalent to setting the corresponding control parameter to its default value. Example Given A 1 = – 1 2 .gevp Control Parameters The optional argument options gives access to certain control parameters of the optimization code.7 – 2. this is a fiveentry vector organized as follows: • options(1) sets the desired relative accuracy on the optimal value lopt (default = 10–2). A 2 = – 0. Resetting options(5) to zero (default value) turns it back on.0 consider the problem of finding a single Lyapunov function V(x) = xTPx that proves stability of dV ( x ) and maximizes the decay rate – . • options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default).8 1. A 3 = – 1. the code terminates when the progress in λ over the last J iterations falls below the desired relative accuracy.
In this case.a2.64 Remark Generalized eigenvalue minimization problems involve standard LMI constraints (911) and linear fractional constraints (913). the positive definiteness of B(x) must be enforced by adding the constraint B(x) > 0 to the problem.1) % P > I : lemiterm([2 1 1 p].35 – 8.gevp A 2 P + PA 2 < αP A 3 P + PA 3 < αP T T (916) (917) To set up this problem for gevp. This value is achieved for P = 5. the single extra LMI “P > I” is enough to enforce positivity of all 939 .1) % P > I : I lemiterm([ 1 1 1 p].122).1.1) % LFC # 3 lmis = getlmis P # 1 (lhs) (rhs) # 2 (lhs) (rhs) # 3 (lhs) (rhs) Note that the linear fractional constraints are defined last as required.35 18.'s') % LFC lemiterm([ 4 1 1 p]. For wellposedness.popt]=gevp(lmis. p = lmivar(1.1.1. the set of constraints (912) may reduce to a single constraint as in the example above. this is not desirable for efficiency reasons.'s') % LFC lemiterm([ 2 1 1 p].1.[2 1]) lemiterm([1 1 1 0].'s') % LFC lemiterm([ 3 1 1 p].3) This returns alpha = 0. call gevp by [alpha.122 as optimal value (the largest decay rate is therefore 0.58 – 8.a3.1) % LFC # 2 lemiterm([4 1 1 p].1. first specify the LMIs (915)–(917) with α = 1: setlmis([]).1.a1. To minimize α subject to (915)–(917).1. Although this could be done automatically from inside the code. For instance.1) % LFC # 1 lemiterm([3 1 1 p].
SIAM. The optimization is performed by the CMEX file fpds. Reference The solver gevp is based on Nesterov and Nemirovski’s Projective Method described in Nesterov. Philadelphia. 1994. and A. decnbr..mex. feasp. mincx 940 . Nemirovski. Interior Point Polynomial Methods in Convex Programming: Theory and Applications. See Also dec2mat. It is therefore left to the user to devise the least costly way of enforcing this positivity requirement.gevp linearfractional righthand sides. Y.
hinfgs seeks an affine parameterdependent controller · ζ = A K ( p )ζ + B K ( p )y K u = C K ( p )ζ + D K ( p )y scheduled by the measurements of p(t) and such that • K stabilizes the closedloop system w P y K for all admissible parameter trajectories p(t) • K minimizes the closedloop quadratic H∞ performance from w to z. e. by aff2pol or sconnect. The description pdP of the parameterdependent plant P is specified with psys and the vector r gives the number of controller inputs and outputs (set r=[p2.R. To test if a z u 941 .r.tolred) Given an affine parameterdependent plant · x = A ( p )x + B ( p )w + B u 1 2 P z = C 1 ( p )x + D 11 ( p )w + D 12 u y = C x + D w + D u 2 21 22 where the timevarying parameter vector p(t) ranges in a box and is measured in real time.gmin.g. hinfgs returns the optimal closedloop quadratic performance gopt and a polytopic description of the gainscheduled controller pdK.hinfgs Purpose Syntax Description 9hinfgs Synthesis of gainscheduled H∞ controllers [gopt. Note that hinfgs also accepts the polytopic model of P returned.pdK.tol.m2] if y ∈ Rp2 and u ∈ Rm2).S] = hinfgs(pdP..
'sys'. S of the characteristic LMI system. .'par') vertx = polydec(pv) Pj = vertx(:. Given the measurements p(t) of the parameters at time t. 942 . CK(t). 4 Compute the controller statespace matrices at time t as the convex combination of the vertex controllers KΠj: A (t) B (t) K K CK ( t ) DK ( t ) = N ∑ αj KΠ . hinfgs also returns solutions R. Controller Implementation The gainscheduled controller pdK is parametrized by p(t) and characterized by A (p) B (p) K at the corners ³ of the parameter box.j) gives the corresponding corner ³j of the parameter box (pv is the parameter vector description).j) returns the jth vertex controller KΠj while pv = psinfo(pdP. set the third input gmin to γ. j i=1 5 Use AK(t). The controller scheduling should be performed as follows. Finally. DK(t) to update the controller statespace equations. 3 Express p(t) as a convex combination of the ³j: N p(t) = α1³1 + . The arguments tol and tolred control the required relative accuracy on gopt and the threshold for order reduction (see hinflmi for details).+ αN³N. αj Š 0. BK(t). ∑ αj i=1 = 1 This convex decomposition is computed by polydec. .hinfgs closedloop quadratic performance γ is achievable. The the values KΠj of K j CK ( p ) DK ( p ) command Kj = psinfo(pdK.
pdsimul.. Contr. Example See Also See “Design Example” on page 710. pp... 22 (1994). “Robust Performance of LinearParametrically Varying Systems Using ParametricallyDependent Linear Feedback. psys. P. Becker.” Syst.hinfgs Reference Apkarian. Becker. pvec. hinflmi 943 . 23 (1994).” submitted to Automatica.” Systems and Control Letters. “Gain Scheduling via Linear Fractional Transformations. 79–92. October 1995. Packard. G. “SelfScheduled H∞ Control of Linear ParameterVarying Systems.. pp. P. A. 205–215. Packard. and G. Letters. P. Gahinet. polydec.
K.K] = hinflmi(P. The default value is 10–2. a suboptimal controller K that guarantees F (P.r.g.x2.x1. K)∞ < g is obtained with the syntax [gopt.r) also returns solutions R = x1 and S = y1 of the characteristic LMIs for γ = gopt (see “Practical Considerations” on page 618 for details).K] = hinflmi(P.tol.K] = hinflmi(P.options) hinflmi computes an internally stabilizing controller K(s) that minimizes the Description closedloop H∞ performance in the control loop w P(s) y K(s) z u The H∞ performance is the RMS gain of the closedloop transfer function F (P.r) [gopt.hinflmi Purpose Syntax 9hinflmi LMIbased H∞ synthesis for continuoustime plants gopt = hinflmi(P. 944 . [gopt.r) [gopt.y1. K) from w to z. The SYSTEM matrix P contains a statespace realization of the plant P(s) and the row vector r specifies the number of measurements and controls (set r = [p2 m2] when y ∈ Rp2 and u ∈ Rm2).x2.g) The optional argument tol specifies the relative accuracy on the computed optimal performance gopt.y2] = hinflmi(P.K.r.y2] = hinflmi(P.x1. Finally. This function implements the LMIbased approach to H∞ synthesis.r) Alternatively. The optimal H∞ performance gopt and an optimal H∞ controller K are returned by [gopt.y1.
“A Linear Matrix Inequality Approach to H∞ Control.” SIAM J. and R. Contr. 4 (1994). 1994. pp. 30 (1994). and P. “Explicit Controller Formulas for LMIbased H∞ Synthesis. Scherer. 2396–2400. 1307–1317. Skelton.E.hinflmi Control Parameters The optional argument options resets the following control parameters: • options(1) is valued in [0. hinfmix. Conf. • options(2): same as options(1) for S and singular problems with rankdeficient D21 • when options(3) is set to ε > 0. pp. Gahinet. pp.. Amer. This typically yields controllers with slower dynamics in singular problems (rankdeficient D12). Apkarian.. C. T. pp. P. P.. Gahinet. This also improves closedloop damping when p12(s) has jωaxis zeros. 30 (1992).. Robust and Nonlinear Contr.. 1] with default value 0. Contr. Iwasaki. “H∞ Optimization without Assumptions on Finite or Infinite Zeros. dhinflmi. 421–448.” Automatica. J. reducedorder synthesis is performed whenever ρ(R–1 S–1) Š (1 – ε) gopt2 The default value is ε = 10–3 Remark Reference The LMIbased approach is directly applicable to singular plants without preliminary regularization.. Increasing its value reduces the norm of the LMI solution R.” Int. 143–166. Opt. “All Controllers for the General H∞ Control Problem: LMI Existence Conditions and StateSpace Formulas.” in Proc. ltisys. slft 945 . See Also hinfric..
obj. The 2 2 946 .1. and u. The control problem is sketched in Figure 9. More details on the motivations and statement of this problem can be found in “MultiObjective H• Synthesis” on page 515. On input. P is the SYSTEM matrix of the plant P(s) and r is a threeentry vector listing the lengths of z2.K.r.S] = hinfmix(P. w P(s) y K(s) z∞ z2 u Figure 91: Mixed H2/H∞ synthesis If T∞(s) and T2(s) denote the closedloop transfer functions from w to z∞ and z2. Recall that .hinfmix Purpose Syntax Description 9hinfmix Mixed H2/H∞ synthesis with pole placement constraints [gopt.dkbnd.R. respectively.∞ and .h2opt. Note that z∞ and/or z2 can be empty.region.2 denote the H∞ norm (RMS gain) and H2 norm of transfer functions. y.tol) hinfmix performs multiobjective outputfeedback synthesis. hinfmix computes a suboptimal solution of the following synthesis problem: Design an LTI controller K(s) that minimizes the mixed H2/H∞ criterion α T∞ ∞ + β T2 2 subject to • T∞∞ < γ0 • T22 < ν0 • The closedloop poles lie in some prescribed LMI region D.
Contr. • tol is the required relative accuracy on the optimal value of the tradeoff criterion (the default is 10–2).” to appear in IEEE Trans. volume of the special contributions to the ECC 1995. Aut. “Mixed H2 H∞ Control. msfsyn 947 . you can use hinfmix to perform pure pole placement by setting obj = [0 0 0 0]. S via the extra output arguments R and S. set dkbnd = 0.” to appear in Trends in Control: A European Perspective. The default value is 100. Scherer. A variety of mixed and unmixed problems can be solved with hinfmix (see list in “The Function hinfmix” on page 520). 1995.. To make the controller K(s) strictly proper.hinfmix fourentry vector obj = [γ0. Gahinet. Chilali. α. See Also lmireg. and the remaining input arguments are optional: • region specifies the LMI region for pole placement (the default region = [] is the open lefthalf plane).. Example Reference See the example in“H• Synthesis” on page 510. Note that both z∞ and z2 can be empty in such case. You can also access the optimal values of the LMI variables R. hinflmi. In particular. ν0. The function hinfmix returns guaranteed H∞ and H2 performances gopt and h2opt as well as the SYSTEM matrix K of the LMIoptimal controller. C. and P. β] specifies the H2/H∞ constraints and tradeoff criterion. M. “H∞ Design with Pole Placement Constraints: An LMI Approach. Use lmireg to interactively build the LMI region description region • dkbnd is a userspecified bound on the norm of the controller feedthrough matrix DK (see the definition in “LMI Formulation” on page 516)..
c2. 'b2'.d12.b1. B2. The syntax hinfpar(P.d21.'b1') can be used to retrieve just the B1 matrix.. the plant P(s) is specified by its statespace equations · x = Ax + B1w + B2u z = C1x + D11w + D12u y = C2x + D21w + D22u or their discretetime counterparts.d11.r) b1 = hinfpar(P.d22] = hinfpar(P. hinflmi.d21 d22]) The function hinfpar retrieves the statespace matrices A. from P.. For design purposes. The second argument r specifies the size of the D22 matrix.[c1. . B1.nu] where ny is the number of measurements (length of y) and nu is the number of controls (length of u).b2. . Here the third argument should be one of the strings 'a'. Set r = [ny.c2]. ..r.r. See Also hinfric. .c1. B2.[b1 b2].[d11 d12. ..'b1') In H∞ control problems. 'b1'. B1.from the SYSTEM matrix representation of an H∞ plant [a.hinfpar Purpose Syntax Description 9hinfpar Extract the statespace matrices A. ltisys 948 . this data is stored in a single SYSTEM matrix P created by P = ltisys(a. .
x1.y2.d21 d22]) The row vector r specifies the dimensions of D22.Preg] = hinfric(P.options) hinfric computes an H∞ controller K(s) that internally stabilizes the control Description loop w P(s) y K(s) z u and minimizes the RMS gain (H∞ performance) of the closedloop transfer function Twz.r.x2. To minimize the H∞ performance. tol.. The SYSTEM matrix P contains a statespace realization · x = Ax + B1w + B2u z = C1x + D11w + D12u y = C2x + D21w + D22u of the plant P(s) and is formed by the command P = ltisys(a.hinfric Purpose Syntax 9hinfric Riccatibased H∞ synthesis for continuoustime plants gopt = hinfric(P.r) [gopt. The function hinfric implements the Riccatibased approach.[b1 b2].K] = hinfric(P. Set r = [p2 m2] when y ∈ Rp2 and u ∈ Rm2.c2].[d11 d12.[c1. call hinfric with the syntax gopt = hinfric(P.gmin...gmax.r) [gopt.K.y1.r) 949 .
K] = hinfric(P.y1.y2.1] with default value 0. This generally results in less control effort and in controller statespace matrices of smaller norm.r) The H∞ performance can also be minimized within prescribed bounds gmin and gmax and with relative accuracy tol by typing [gopt.hinfric This returns the smallest achievable RMS gain from w to z over all stabilizing controllers K(s). • options(2) controls the amount of regularization used when P12(s) or P21(s) has jωaxis zero(s). a realization of the closedloop transfer function from w to z is given by Twz = slft(P.gmin. possibly at the expense of a higher closedloop RMS gain.Preg] = hinfric(P. An H∞optimal controller K(s) with performance gopt is computed by [gopt. Finally. This tuning parameter is valued in [0.tol) Setting both gmin and gmax to γ > 0 tests whether the H∞ performance γ is achievable and returns a controller K(s) such that Twz∞ < γ when γ is achievable. the full syntax [gopt.K] = hinfric(P.K.r) also returns the solutions X∞ = x2/x1 and Y∞ = y2/y1 of the H∞ Riccati equations for γ = gopt . Beware that making ε too large may hinfric automatically regularizes singular H∞ problems and computes 950 . Given the controller SYSTEM matrix K returned by hinfric.K) Control Parameters reducedorder controllers when the matrix I – γ–2X∞Y∞ is nearly singular.x2.r. Such problems are regularized by replacing A by A + εI where ε is a small positive number.x1. Increasing options(2) increases ε and improves the closedloop damping. Increasing options(1) augments the amount of regularization.gmax. as well as the regularized plant Preg used to compute the controller in singular problems. The optional input options gives access to the control parameters that govern the regularization and order reduction: • options(1) controls the amount of regularization used when D12 or D21 is rankdeficient. Setting gmin or gmax to zero is equivalent to specifying no lower or upper bound.
Gahinet. Aut... In such cases the optimal H∞ performance will be infinite (see “Practical Considerations” on page 618 for more details). See the example in “H• Synthesis” on page 510. Conf. reducedorder synthesis is performed whenever ρ(X∞Y∞) Š (1 – ε) gopt2 The default value is ε = 10–3 Remark Example Reference Setting gmin = gmax = Inf computes the optimal LQG controller. Glover. Doyle. and A. P. Gahinet. and Francis. AC–34 (1989). P.. dhinfric. Conf. B.” in Proc. K.” in Proc. “StateSpace Solutions to Standard H2 and H∞ Control Problems.” IEEE Trans. pp. See Also hinflmi.. • when options(3) is set to ε > 0. Lake Buena Vista. “Reliable Computation of flopt in Singular H∞ Control.. pp.hinfric destroy stabilizability or detectability when the plant P(s) includes shaping filters with poles close to the imaginary axis. ltisys.. Contr. Laub. hinfmix. Amer. “Explicit Controller Formulas for LMIbased H∞ Synthesis. 1527– 1532.. Contr. slft 951 .. Fl. Contr. 2396–2400.. 1994.. 831–847.C. pp. J. Khargonekar.J. Dec. 1994. P.
Please use one line per matrix variable in the text editing areas. • Write in a file the sequence of lmivar/lmiterm commands needed to specify a particular LMI system (write button) • Generate the internal representation of the LMI system by pressing create. do not specify more than one LMI per line.. 3 Specify the LMIs as MATLAB expressions in the lower half of the window. 2 Declare each matrix variable (name and structure) in the upper half of the window. An LMI can stretch over several lines. R for unstructured. buttons) • Save the symbolic description of the LMI system as a MATLAB string (save button). To specify an LMI system. and G for other structures) and by an additional structure matrix similar to the second input argument of lmivar. 1 Give it a name (top of the window). you can perform the following operations by pressing the corresponding pushbutton: • Visualize the sequence of lmivar/lmiterm commands needed to describe this LMI system (view commands buttons) • Conversely. The matrix expression of the LMI system specified by these commands is then displayed by clicking on describe the LMIs.lmiedit Purpose Syntax Description 9lmiedit Specify or display systems of LMIs as MATLAB expressions lmiedit lmiedit is a graphical user interface for the symbolic specification of LMI problems.. However. The structure is characterized by its type (S for symmetric block diagonal. This description can be reloaded later on by pressing the load button • Read a sequence of lmivar/lmiterm commands from a file (read button). Once the LMI system is fully specified. Typing lmiedit calls up a window with two editable text areas and various pushbuttons.. display the symbolic expression of the LMI system produced by a particular sequence of lmivar/lmiterm commands (click the describe. The result is written in a MATLAB variable with the same name as the LMI system 952 ..
To activate the scroll mode.lmiedit Remark Editable text areas have builtin scrolling capabilities. lmiterm. An example and some limitations of lmiedit can be found in “The LMI Editor lmiedit” on page 816. The scroll mode is only active when all visible lines have been used. maintain the mouse button down. newlmi. lmivar. click in the text area. and move the mouse up or down. lmiinfo Example See Also 953 .
lmiinfo Purpose Syntax Description 9lmiinfo Interactively retrieve information about the variables and term content of LMIs lmiinfo(lmisys) lmiinfo provides qualitative information about the system of LMIs lmisys.2.R the left and right inner factors. lmiinfo is an interactive facility where the user seeks specific pieces of information. and the term content of each block. This includes the type and structure of the matrix variables. The labels 1. Cj. Hence C1 may appear in several blocks or several LMIs without implying any connection between the corresponding constant terms. the number of diagonal blocks in the inner factors. Exceptions to this rule are the notations 954 . The term content of a block is symbolically displayed as C1 + A1*X2*B1 + B1'*X2*A1' + a2*X1 + x3*Q1 with the following conventions: • X1. • Aj. the LMI is simply written as L(x) < R(x) If its righthand side is zero. x3 denote the problem variables. Bj. • Cj refers to constant terms. Uppercase X indicates matrix variables while lowercase x indicates scalar variables. • Qj is used exclusively with scalar variables as in x3*Q1. and third matrix variable in the order of declaration. Special cases are I and I (I = identity matrix). it is displayed as N' * L(x) * N < 0 Information on the block structure and term content of L(x) and R(x) is also available. Qj is a dummy label. X2. Bj denote the left and right coefficients of variable terms. Lowercase letters such as a2 indicate a scalar coefficient.M denote the outer factors and L. If the outer factors are missing. second. General LMIs are displayed as N' * L(x) * N < M' * R(x) * M where N. The index j in Aj.3 refer to the first.
1)block is a full block of size 2 This is a system of 1 LMI with 3 variable matrices Do you want information on (v) matrix variables (l) LMIs (q) quit ?> l Which LMI (enter its number k between 1 and 1) ? 1 This LMI is of the form 955 . and z scalar. Y of Type 2.lmiinfo A1*X2*A1' and A1*X2*B1 + B1'*X2'*A1' which indicate symmetric terms and symmetric pairs in diagonal blocks. information about X and the LMI block structure can be obtained as follows: lmiinfo(lmis) LMI ORACLE This is a system of 1 LMI with 3 variable matrices Do you want information on (v) matrix variables (l) LMIs (q) quit ?> v Which variable matrix (enter its index k between 1 and 3) ? 1 X1 is a 2x2 symmetric block diagonal matrix its (1. If this LMI is described in lmis. Example Consider the LMI T T T 0 ð – 2X + A YB + B Y A + I XC T C X – zI where the matrix variables are X of Type 1.
lminbr. decinfo. or you can prompt for specific blocks with option (b).lmiinfo 0 < R(x) where the inner factor(s) has 2 diagonal block(s) Do you want info on the right inner factor ? (w) whole factor (b) only one block (o) other LMI (t) back to top level ?> w Info about the right inner factor block (1.2) : x3*A4 (w) whole factor (b) only one block (o) other LMI (t) back to top level This is a system of 1 LMI with 3 variable matrices Do you want information on (v) matrix variables (l) LMIs (q) quit ?> q It has been a pleasure serving you! Note that the prompt symbol is ?> and that answers are either indices or letters. matnbr. decnbr 956 .1) : I + a1*X1 + A2*X2*B2 + B2'*X2'*A2' block (2.1) : A3*X1 block (2. Remark See Also lmiinfo does not provide access to the numerical value of LMI coefficients. All blocks can be displayed at once with option (w).
lminbr Purpose Syntax Description See Also 9lminbr Return the number of LMIs in an LMI system k = lminbr(lmisys) lminbr returns the number k of linear matrix inequalities in the LMI problem described in lmisys. matnbr 957 . lmiinfo.
The function lmireg can also be used to intersect previously defined LMI regions reg1.lmireg Purpose Syntax Description 9lmireg Specify LMI regions for pole placement purposes region = lmireg region = lmireg(reg1.. Recall that an LMI region is any convex subset D of the complex plane that can be characterized by an LMI in z and z . See Also msfsyn. Calling lmireg without argument starts an interactive query/answer session where you can specify the region of your choice. i. conic sectors. hinfmix 958 . This class of regions encompasses half planes.. ellipses... D = {z ∈ C : L + Mz + MT z < 0} for some fixed real matrices M and L = LT. strips. M] is returned upon termination..reg2.. and any intersection of the above.) lmireg is an interactive facility to specify the LMI regions involved in multiobjective H∞ synthesis with pole placement constraints (see msfsyn and hinfmix).e. The output region is then the [L... The matrix region = [L. This matrix description of the LMI region can be passed directly to msfsyn or hinfmix for synthesis purposes. M] description of the intersection of these regions. disks. reg2.
1). and (2. Recall that. Each lmiterm command adds one extra term to the LMI system currently described. In the calling of limterm. termID (1) = +p p where positive p is for terms on the lefthand side of the pth LMI and negative p i s for terms on the righthand side of the pth LMI. 0 ] for outer factors termID (2:3) = [ i.1). j )th block of the left or right inner factor 959 . (2.flag) lmiterm specifies the term content of an LMI one term at a time. Before using lmiterm. LMI terms are one of the following entities: • outer factors • constant terms (fixed matrices) • variable terms AXB or AXTB where X is a matrix variable and A and B are given matrices called the term coefficients. Recall that LMI term refers to the elementary additive terms involved in the blockmatrix expression of the LMI.2) in a twoblock LMI. termID is a fourentry vector of integers specifying the term location and the matrix variable involved. For instance. the lefthand side always refers to the smaller side of the LMI. When describing an LMI with several blocks. only the terms in blocks on or above the diagonal). The index p is relative to the order of declaration and corresponds to the identifier returned by newlimi. the LMI description must be initialized with setlmis and the matrix variables must be declared with lmivar. specify the blocks (1.B. by convention. [ 0.A. remember to specify only the terms in the blocks on or below the diagonal (or equivalently. j ] for terms in the ( i.lmiterm Purpose Syntax Description 9lmiterm Specify the term content of LMIs lmiterm(termID.
lmiterm 0 for outer factors termID (4) = x for variable terms AXB T – x for variable terms AX B where x is the identifier of the matrix variable X as returned by lmivar.A.'s') adds the symmetrized expression AX + XTAT to the (1. For instance. lmiterm([1 1 1 X].1.A') Aside from being convenient. this shortcut also results in a more efficient representation of the LMI. Setting flag = 's' allows you to specify such expressions with a single lmiterm command. The arguments A and B contain the numerical data and are set according to: Type of Term A B outer factor N constant term C variable term AXB or AXTB matrix value of N matrix value of C matrix value of A (1 if A is absent) omit omit matrix value of B (1 if B is absent) Note that identity outer factors and zero constant terms need not be specified. The extra argument flag is optional and concerns only conjugated expressions of the form (AXB) + (AXB)T = AXB + BTX(T)AT in diagonal blocks.A.1) block of the first LMI and summarizes the two commands lmiterm([1 1 1 X].1.1) lmiterm([1 1 1 X]. 960 .
X2.C.1. X2 are matrix variables of Types 2 and 1.lmiterm Example Consider the LMI T T T 2AX 2 A – x 3 E + DD B X 1 T X1 B –I T T T T CX 1 C + CX 1 C 0 <M 0 –f X2 M where X1. lmivar.E) % x3*E 0]. Similarly.B) % X1'*B 0]. respectively. the terms on the lefthand side of this LMI are specified by: lmiterm([1 1 lmiterm([1 1 lmiterm([1 1 lmiterm([1 2 lmiterm([1 2 Here X1. After initializing the LMI description with setlmis and declaring the matrix variables with lmivar.f.C'. lmiedit.1. T See Also setlmis. the term content of the righthand side is specified by: lmiterm([1 0 0 0]. X3 1 1 1 1 2 X2]. getlmis.D*D') % D*D' X1].1) % f*X2 Note that CX1CT + C X 1 CT is specified by a single lmiterm command with the flag 's' to ensure proper symmetrization.1) % I should be the variable identifiers returned by lmivar.2*A.A') % 2*A*X2*A' x3]. and x3 is a scalar variable (Type 1).'s') % C*X1*C'+C*X1'*C' lmiterm([1 2 2 X2].M) % outer factor M lmiterm([1 1 1 X1]. newlmi 961 .
struct is an Rby2 matrix where • struct(r.n.struct) lmivar defines a new matrix variable X in the LMI system currently described..j)=0 if X(i. . The structure of X is then defined in terms of xn+1.n] in this case. . 0 for scalar. xn+p.1) is the size of the rth block • struct(r. scalar (a multiple of the identity matrix). j) is a hard zero • struct(i. struct is a matrix of the same dimensions as X such that • struct(i. If the problem already involves n decision variables.2) is the type of the rth block (1 for full. 1 for zero block). lmivar optionally returns two extra outputs: (1) the total number n of scalar decision variables used so far and (2) a matrix sX showing the entrywise 962 .struct) [X. first identify how many free independent entries are involved in X. Accordingly. . label the new free variables as xn+1. Set struct = [m.sX] = lmivar(type. To specify a variable X of Type 3. xn+p as indicated above.. If X has R diagonal blocks. type=3: Other structures. j) = xn struct(i. .j)=n if X(i. Each diagonal block is either full (arbitrary symmetric matrix). j) = –xn Sophisticated matrix variable structures can be defined with Type 3.lmivar Purpose Syntax Description 9lmivar Specify the matrix variables in an LMI problem X = lmivar(type. type=2: Full mbyn rectangular matrix. To help specify matrix variables of Type 3. The first argument type selects among available types of variables and the second argument struct gives further information on the structure of X depending on its type.j)= n if X(i. These constitute the set of decision variables associated with X. With Type 3. . Available variable types include: type=1: Symmetric matrices with a blockdiagonal structure. The optional output X is an identifier that can be used for subsequent reference to this new variable. . each entry of X is specified as zero or ±xn where xn is the nth decision variable. or identically zero.
For more details. . X2.. X3 such that • X1 is a 3 × 3 symmetric matrix (unstructured).1 0. Example 1 Consider an LMI system with three matrix variables X1.[3 1]) % Type 1 X2 = lmivar(2. These three variables are defined by setlmis([]) X1 = lmivar(1.[2 3]) 963 . and I2 denotes the identity matrix of size 2. δ1 and δ2 are scalars. Type 3 allows you to specify fairly complex matrix variable structures. • X2 is a 2 × 4 rectangular matrix (unstructured).lmivar dependence of X on the decision variables x1. . • X3 = ∆ 0 0 0 δ1 0 0 0 δ2 I2 where ∆ is an arbitrary 5 × 5 symmetric matrix. consider a matrix variable X with structure X 0 X = 1 0 X 2 where X1 and X2 are 2by3 and 3by2 rectangular matrices. Example 2 Combined with the extra outputs n and sX of lmivar.2 0]) % Type 1 The last command defines X3 as a variable of Type 1 with one full block of size 5 and two scalar blocks of sizes 1 and 2. 2x4 X3 = lmivar(1. xn.n.[2 4]) % Type 2 of dim.sX1] = lmivar(2.[5 1. see Example 2 below and “Structured Matrix Variables” on page 833. respectively. respectively. You can specify this structure as follows: 1 Define the rectangular variables X1 and X2 by setlmis([]) [X1. For instance. .
n. sX2(1.[sX1.1)=7 means that the (1. setmvar 964 .sX2] = lmivar(2.n.sX2]) The resulting variable X has the prescribed structure as confirmed by sX sX = 1 4 0 0 0 2 5 0 0 0 3 6 0 0 0 0 0 0 0 7 8 9 10 11 12 See Also setlmis.sX] = lmivar(3.zeros(3).lmivar [X2. symdec.[3 2]) The outputs sX1 and sX2 give the decision variable content of X1 and X2: sX1 sX1 = 1 4 sX2 sX2 = 7 9 11 2 5 3 6 8 10 12 For instance. 2 Use Type 3 to specify the matrix variable X and define its structure in terms of those of X1 and X2 : [X.zeros(2). lmiedit. delmvar. skewdec.1) entry of X2 is the seventh decision variable. getlmis. lmiterm.
[1 1]) A realization of G(s) is then obtained as [a. ltiss returns the realization (E–1A.D).d] = ltiss(sys) a = 1 b = 1 c = 2 d = 0 Important If the output argument e is not supplied. While this is immaterial for systems with E = I. 2.c.is specified in transfer function form by s+1 sys = ltisys('tf'. D.e] = ltiss(sys) Given a SYSTEM matrix sys created with ltisys. this function returns the statespace matrices A. B. –2 The system G(s) = .b. E.b.ltiss Purpose Syntax Description Example 9ltiss Extract the statespace realization of a linear system from its SYSTEM matrix description [a. See Also ltisys. ltitf. sinfo 965 . C.C.E–1B. this should be remembered when manipulating descriptor systems.c.d.
e) ltisys('tf'..c.den) Description For continuoustime systems.c) ltisys(a.b. Finally.d) ltisys(a. · The syntax sys = ltisys(a) specifies the autonomous system x = Ax while · = Ax.e. d and e are set to the default values D = 0 and E = I. E) into a single SYSTEM matrix sys sys sys sys sys sys = = = = = = ltisys(a) ltisys(a. .d. ltisys stores A. B.ltisys Purpose Syntax 9ltisys Pack the statespace realization (A. E into a single matrix sys with structure A + j(E – I) B C 0 D n 0 0 Inf The upper right entry n gives the number of states (i.b..e) ltisys(a.e) specifies E x by their transfer function n(s) G(s) = d(s) The syntax is then sys = ltisys('tf'.num.c.. B.den) where num and den are the vectors of coefficients of the polynomials n(s) and d(s) in descending order. you can specify SISO systems sys = ltisys(a. D. y = Cx + Du or discretetime systems Exk+1 = Axk + Buk. · E x = Ax + Bu. D. This data structure is called a SYSTEM matrix. A ∈ Rn×n). When omitted.b. yk = Cxk + Duk. C. C.num. 966 .
sinfo 967 .2.[1 7]) See Also ltiss.is specified by s+7 sys = ltisys('tf'.0.[1 4].1. ltitf.eps) s+4 Similarly.ltisys Examples The SYSTEM matrix representation of the descriptor system · εx = – x + 2u y = x is created by sys = ltisys(1. the SISO system with transfer function G(s) = .
1) [n. An error is issued if sys is not a SISO system.den] = ltitf(sys) Given a singleinput/singleoutput system defined in the SYSTEM matrix sys.d] = ltitf(sys) n = 1 d = 1 1 1 Example y=x–1 The vectors n and d represent the function –s+1 G ( s ) = s+1 See Also ltisys. ltiss. The transfer function of the system · x = –x + 2u.ltitf Purpose Syntax Description 9ltitf Compute the transfer function of a SISO system [num.1. is given by sys = ltisys(1. sinfo 968 . ltitf returns the numerator and denominator of its transfer function n(s) G(s) = d(s) The output arguments num and den are vectors listing the coefficients of the polynomials n(s) and d(s) in descending order.2.
magshape
Purpose Syntax Description
9magshape
Graphical specification of the shaping filters
magshape magshape is a graphical user interface to construct the shaping filters needed in loopshaping design. Typing magshape brings up a window where the magnitude profile of each filter can be specified graphically with the mouse. Once a profile is sketched, magshape computes an interpolating SISO filter and returns its statespace realization. Several filters can be specified in the same magshape window as follows:
1 First enter the filter names after Filter names. This tells magshape to write
the SYSTEM matrices of the interpolating filters in MATLAB variables with these names.
2 Click on the button showing the name of the filter you are about to specify.
To sketch its profile, use the mouse to specify a few characteristic points that define the asymptotes and the slope changes. For accurate fitting by a rational filter, make sure that the slopes are integer multiples of ±20 dB/dec.
3 Specify the filter order after filter order and click on the fit data button
to perform the fitting. The magnitude response of the computed filter is then drawn as a solid line and its SYSTEM matrix is written in the MATLAB variable with the same name.
4 Adjustments can be made by moving or deleting some of the specified points.
Click on delete point or move point to activate those modes. You can also increase the order for a better fit. Press fit data to recompute the interpolating filter. If some of the filters are already defined in the MATLAB environment, their magnitude response is plotted when the button bearing their name is clicked. This allows for easy modification of existing filters.
Remark Acknowledgment See Also
See sderiv for the specification of nonproper shaping filters with a derivative action. The authors are grateful to Prof. Gary Balas and Prof. Andy Packard for providing the cepstrum algorithm used for phase reconstruction.
sderiv, frfit, mrfit
969
matnbr
Purpose Syntax Description See Also
9matnbr
Return the number of matrix variables in a system of LMIs
K = matnbr(lmisys) matnbr returns the number K of matrix variables in the LMI problem described by lmisys. decnbr, lmiinfo, decinfo
970
mat2dec
Purpose Syntax Description
9mat2dec
Return the vector of decision variables corresponding to particular values of the matrix variables
decvec = mat2dec(lmisys,X1,X2,X3,...)
Given an LMI system lmisys with matrix variables X1, . . ., XK and given values X1,...,Xk of X1, . . ., XK, mat2dec returns the corresponding value decvec of the vector of decision variables. Recall that the decision variables are the independent entries of the matrices X1, . . ., XK and constitute the free scalar variables in the LMI problem. This function is useful, for example, to initialize the LMI solvers mincx or gevp. Given an initial guess for X1, . . ., XK, mat2dec forms the corresponding vector of decision variables xinit. An error occurs if the dimensions and structure of X1,...,Xk are inconsistent with the description of X1, . . ., XK in lmisys.
Example
Consider an LMI system with two matrix variables X and Y such that • X is A symmetric block diagonal with one 2by2 full block and one 2by2 scalar block • Y is a 2by3 rectangular matrix Particular instances of X and Y are X0 = 1 3 0 0 3 –1 0 0 0 0 5 0 0 0 0 5 , Y0 = 1 2 3 4 5 6
and the corresponding vector of decision variables is given by
decv = mat2dec(lmisys,X0,Y0) decv' ans = 1 3 1 5 1 2 3 4 5 6
971
mat2dec
Note that decv is of length 10 since Y has 6 free entries while X has 4 independent entries due to its structure. Use decinfo to obtain more information about the decision variable distribution in X and Y.
See Also
dec2mat, decinfo, decnbr
972
mincx
Purpose Syntax Description
9mincx
Minimize a linear objective under LMI constraints
[copt,xopt] = mincx(lmisys,c,options,xinit,target)
The function mincx solves the convex program minimize c x subject to
T
N L( x )N ≤ MTR ( x ) M
T
(918)
where x denotes the vector of scalar decision variables. The system of LMIs (918) is described by lmisys. The vector c must be of the same length as x. This length corresponds to the number of decision variables returned by the function decnbr. For linear objectives expressed in terms of the matrix variables, the adequate c vector is easily derived with defcx. The function mincx returns the global minimum copt for the objective cTx, as well as the minimizing value xopt of the vector of decision variables. The corresponding values of the matrix variables is derived from xopt with dec2mat. The remaining arguments are optional. The vector xinit is an initial guess of the minimizer xopt. It is ignored when infeasible, but may speed up computations otherwise. Note that xinit should be of the same length as c. As for target, it sets some target for the objective value. The code terminates as soon as this target is achieved, that is, as soon as some feasible x such that cTx ð target is found. Set options to [] to use xinit and target with the default options.
Control Parameters
The optional argument options gives access to certain control parameters of the optimization code. In mincx, this is a fiveentry vector organized as follows: • options(1) sets the desired relative accuracy on the optimal value lopt (default = 10–2). • options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default). • options(3) sets the feasibility radius. Its purpose and usage are as for feasp.
973
mincx
• options(4) helps speed up termination. If set to an integer value J > 0, the code terminates when the objective cTx has not decreased by more than the desired relative accuracy during the last J iterations. • options(5) = 1 turns off the trace of execution of the optimization procedure. Resetting options(5) to zero (default value) turns it back on. Setting option(i) to zero is equivalent to setting the corresponding control parameter to its default value. See feasp for more detail.
Tip for SpeedUp
In LMI optimization, the computational overhead per iteration mostly comes from solving a leastsquares problem of the form min Ax – b
x
where x is the vector of decision variables. Two methods are used to solve this problem: Cholesky factorization of ATA (default), and QR factorization of A when the normal equation becomes ill conditioned (when close to the solution typically). The message
* switching to QR
is displayed when the solver has to switch to the QR mode. Since QR factorization is incrementally more expensive in most problems, it is sometimes desirable to prevent switching to QR. This is done by setting options(4) = 1. While not guaranteed to produce the optimal value, this generally achieves a good tradeoff between speed and accuracy.
Memory Problems
QRbased linear algebra (see above) is not only expensiveLMI solvers,memory shortagememory shortage in terms of computational overhead, but also in terms of memory requirement. As a result, the amount of memory required by QR may exceed your swap space for large problems with numerous LMI constraints. In such case, MATLAB issues the error
??? Error using ==> pds Out of memory. Type HELP MEMORY for your options.
You should then ask your system manager to increase your swap space or, if no additional swap space is available, set options(4) = 1. This will prevent
974
mincx
switching to QR and mincx will terminate when Cholesky fails due to numerical instabilities.
Example Reference
See “Example 8.2” on page 823. The solver mincx implements Nesterov and Nemirovski’s Projective Method as described in Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994. Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp. 840–844. The optimization is performed by the CMEX file pds.mex.
See Also
defcx, dec2mat, decnbr, feasp, gevp
975
On output. and X the corresponding Lyapunov matrix. z2 2 2 The function msfsyn is also applicable to multimodel problems where P is a polytopic model of the plant: 976 . Note also that z∞ or z2 can be empty.X] = msfsyn(P. α.msfsyn Purpose Syntax Description 9msfsyn Multimodel/multiobjective statefeedback synthesis [gopt.region. You can perform pure pole placement by setting obj = [0 0 0 0]. K is the optimal statefeedback gain.obj.K. ν0. ν0.r.h2opt. and β. β] to specify the problem dimensions and the design parameters γ0. The default is the open lefthalf plane. Pcl the closedloop transfer function from w to z ∞ .tol) Given an LTI plant P with statespace equations · x = Ax + B 1 w + B 2 u C 1 x + D 11 w + D 12 u C 2 x + D 22 u z∞ = z2 = msfsyn computes a statefeedback control u = Kx that • Maintains the RMS gain (H∞ norm) of the closedloop transfer function T∞ from w to z∞ below some prescribed value γ 0 > 0 • Maintains the H2 norm of the closedloop transfer function T2 from w to z2 below some prescribed value υ 0 > 0 • Minimizes an H2/H∞ tradeoff criterion of the form α T∞ ∞ + β T2 2 • Places the closedloop poles inside the LMI region specified by region (see lmireg for the specification of such regions). α. Set r = size(d22) and obj = [γ0.Pcl. gopt and h2opt are the guaranteed H∞ and H2 performances.
K In this context. psys. Affine parameterdependent plants are also accepted and automatically converted to polytopic models. lmireg. Example See Also See “Design Example” on page 413.msfsyn · x = A ( t )x + B 1 ( t )w + B 2 ( t )u C 1 ( t )x + D 11 ( t )w + D 12 ( t )u C 2 ( t )x + D 22 ( t )u z∞ = z2 = with timevarying statespace matrices ranging in the polytope A(t) B (t) B (t) A k B k B 2k 1 2 C ( t ) D ( t ) D ( t ) ∈ Co C D 1k 11k D 12k 11 12 1 C (t) 0 D 22 ( t ) C 2k 0 D 22k 2 : k = 1. hinfmix 977 . …. Note that polytopic plants should be defined with psys and that the closedloop system Pcl is itself polytopic in such problems. msfsyn seeks a statefeedback gain that robustly enforces the specifications over the entire polytope of plants.
set the optional argument See Also mustab. .G] = mubnd(M. . this loop remains wellposed as long as mu x σmax(∆i) < βi where βi is the specified bound on ∆i.mubnd Purpose Syntax Description 9mubnd Compute the upper bound for the µ structured singular value [mu.target) mubnd computes the mixedµ upper bound mu = ν∆(M) for the matrix M ∈ Cm×n and the normbounded structured perturbation ∆ = diag(∆1.D. . target to τ . To test whether mu ≤ τ where τ > 0 is a given scalar. The reciprocal of mu is a guaranteed wellposedness margin for the algebraic loop ∆ w M q Specifically.. The uncertainty ∆ is described by delta (see ublock and udiag). muperf 978 . The function mubnd also returns the optimal scaling matrices D and G arising in the computation of mu.delta. ∆n) (see “Structured Singular Value” on page 317).
It is finite if and only if the interconnection is robustly stable.delta.freqs) [margin.muperf Purpose Syntax Description 9muperf Estimate the robust H∞ performance of uncertain dynamical systems [grob.delta) muperf returns grob = Inf when the interconnection is not robustly stable.peakf] = muperf(P.0.peakf] = muperf(P.g. . To compute the robust H∞ performance grob for the prescribed uncertainty delta. [margin. ∆n(s)) is a linear timeinvariant uncertainty quantified by norm bounds σmax(∆i(jω)) < βi(ω) or sector bounds ∆i ∈ {a.peakf] = muperf(P. . b}. Alternatively. muperf converts the robust performance estimation into a robust stability problem for some augmented ∆ structure (see “Robust Performance” on page 321). The robust performance γrob is the smallest γ > 0 such that yL2 < γ uL2 for all bounded input u(t) and instance ∆(s) of the uncertainty.peakf] = muperf(P. call muperf with the syntax [grob. .delta.delta.g) 979 .. The input arguments P and delta contain the SYSTEM matrix of P(s) and the description of the LTI uncertainty ∆(s) (see ublock and udiag).freqs) muperf estimates the robust H∞ performance (worstcase RMS gain) of the uncertain LTI system ∆(s) u P(s) y where ∆(s) = diag(∆1(s).
See Also mustab.muperf estimates the robustness of a given H∞ performance g.+ b – a × margin . An optional vector of frequencies freqs to be used for the µ upper bound evaluation can be specified as fourth input argument. Provided that σmax(∆i(jω)) < margin × βi(ω) in the normbounded case and a+b . how much uncertainty ∆(s) can be tolerated before the RMS gain from u to y exceeds g. With both syntaxes. quadperf 980 .∆ i ∈ a + b – b – a × margin .2 2 2 2 in the sectorbounded case. peakf is the frequency where the worst RMS performance or robustness margin is achieved. the RMS gain is guaranteed to remain below g. . norminf. that is.
G scaling matrices at all tested frequencies.ds. b}. In the normbounded case.peak. The command [margin.freqs) mustab computes an estimate margin of the robust stability margin for the interconnection ∆(s) G(s) where G(s) is a given LTI system and ∆ = diag(∆1. . The optional third argument freqs is a usersupplied vector of frequencies where to evaluate ν∆(G(jω)). ∆n) is a linear timeinvariant uncertainty quantified by norm bounds σmax (∆i(jω)) < βi(ω) or sector bounds ∆i ∈ {a.gs] = mustab(sys. the stability of this interconnection is guaranteed for all ∆(s) satisfying σmax(∆i(jω)) < margin × βi(ω) See “Robust Stability Analysis” on page 319 for the interpretation of margin in the sectorbounded case.ds.fs. To obtain the D.gs] = mustab(sys. G scalings at the frequency fs(i) are retrieved by 981 .mustab Purpose Syntax Description 9mustab Estimate the robust stability margin of uncertain LTI systems [margin. . The uncertainty ∆(s) is specified by delta (see ublock and udiag). The value margin is obtained by computing the mixedµ upper bound ν∆(G(jωi)) over a grid of frequencies ωi. .delta) Here fs is the vector of frequencies used to evaluate ν∆ and the D..peak.delta.fs.delta) returns the guaranteed stability margin margin and the frequency peakf where the margin is weakest.peak] = mustab(sys. use the syntax [margin.
Example See Also See “Example” on page 328.i) G = getdg(gs. See “Caution” on page 321 for more details. muperf. udiag 982 .i) Caution margin may overestimate the robust stability margin when the frequency grid freqs is too coarse or when the µ upper bound is discontinuous due to real parameter uncertainty.mustab D = getdg(ds. mubnd. ublock.
lmiedit. dellmi 983 . See Also setlmis. or dellmi commands to refer to the newly declared LMI. lmivar. lmiterm. showlmi. Tagging LMIs is optional and only meant to facilitate code development and readability.newlmi Purpose Syntax Description 9newlmi Attach an identifying tag to LMIs tag = newlmi newlmi adds a new LMI to the LMI system currently described and returns an identifier tag for this LMI. getlmis. Their value is simply the ranking of each LMI in the system (in the order of declaration). Identifiers can be given mnemonic names to help keep track of the various LMIs. In such cases. This identifier can be used in lmiterm. the identifiers are the safest means of referring to the remaining LMIs. They prove useful when some LMIs are deleted from the LMI system.
01s + 100 is given by norminf(ltisys('tf'. “A Bisection Method for Computing the H∞ Norm of a Transfer Matrix and Related Problems. The optional input tol specifies the relative accuracy required on the computed RMS gain (the default value is 10–2). that is. Contr. 207–219. 984 . norminf computes the peak gain G ∞ = sup σ max ( G ( jω ) ) ω of the frequency response G(jω).= Ax + Bu dt y = Cx + Du and transfer function G(s) = D + C(sE – A)–1B. Kabamba. G ∞ = sup u ∈ L2 u≠0 y L2 u L2 The function norminf returns the peak gain and the frequency at which it is attained in gain and peakf. 2 (1989). this peak gain coincides with the RMS gain or H∞ norm of G.tol) Given the SYSTEM matrix g of an LTI system with statespace equations dx E . When G(s) is stable.peakf] = norminf(g.. Sign.01 100])) ans = 1.. and P.” Math.0000e+03 Reference Boyd.100.norminf Purpose Syntax Description 9norminf Compute the random meansquares (RMS) gain of continuoustime systems [gain. S. Syst. Example The peak gain of the transfer function 100 G ( s ) = 2 s + 0.[1 0. Balakrishnan. respectively. V. pp.
882–884.” IEEE Trans. N. quadperf 985 ..norminf Bruisma. See Also dnorminf. Letters. muperf. pp. Aut..A. “A Fast Algorithm to Compute the H∞Norm of a Transfer Function Matrix. Steinbuch. “On Computing the Infinity Norm.. Robel. AC–34 (1989). G. Contr. 14 (1990). Contr. 287–293. and M. pp.” Syst.
y ( t )y ( t )dt T 0 T→∞ ∫ 1 ∞ H G ( jω )G ( jω )dω = 2π –∞ ∫ where G(s) = C(sI – A)–1B.norm2 Purpose Syntax Description 9norm2 Compute the H2 norm of a continuoustime LTI system h2norm = norm2(g) Given a stable LTI system · x = Ax + Bw G: y = Cx driven by a white noise w with unit covariance. This norm is computed as G 2 = Trace ( CPC ) T where P is the solution of the Lyapunov equation AP + PAT + BBT = 0 See Also norminf 986 . norm2 computes the H2 norm (LQG performance) of G defined by 1 T T 2 G 2 = lim E .
Q(p) = Q0 + p1Q1 + .Q1. α)/dt < 0 for all polytopic decompositions (919). ∑ αi = 1 .] = pdlstab(pds. . ..+ αnQn such that dV(x. Q(α) = α1Q1 + .pnQn such that dV(x. 987 . For a timeinvariant polytopic system · E x = Ax + Bu y = Cx + Du with A + jE B = C D A + jE B i i . .. the resulting robust stability tests are always less conservative than quadratic stability tests when the parameters are either timeinvariant or slowly varying.pdlstab Purpose Syntax Description 9pdlstab Assess the robust stability of a polytopic or parameterdependent system [tau. . pdlstab seeks a Lyapunov function of the form V(x.Q0.. αi Ci Di i=1 n n ∑ αi ≥ 0 .options) pdlstab uses parameterdependent Lyapunov functions to establish the stability of uncertain statespace models over some parameter range or polytope of systems. pn) ∈ Rn. p)/dt < 0 along all admissible parameter trajectories. . Nevertheless. For an affine parameterdependent system · E(p) x = A(p)x + B(p)u y = C(p)x + D(p)u with p = (p1. p) = xTQ(p)–1x. The system description pds is specified with psys and contains information about the range of values and rate of variation of each parameter pi. .. Only sufficient conditions for the existence of such Lyapunov functions are available in general. i=1 (919) pdlstab seeks a Lyapunov function of the form V(x. α) = xTQ(α)–1x. .
pdlstab Several options and control parameters are accessible through the optional argument options: • Setting options(1)=0 tests robust stability (default) • When options(2)=0. See Also quadstab. mustab 988 . In such case. there is equivalence between the robust stability of · E ( p )x = A ( p )x and that of the dual system T· T E(p) z = A(p) z (920) (921) However. Set options(2)=1 to use the least conservative conditions Remark For affine parameterdependent systems with timeinvariant parameters. pdlstab automatically restarts and tests stability on the dual system (921) when it fails on (920). pdlstab uses simplified sufficient conditions for faster running times. the second system may admit an affine parameterdependent Lyapunov function while the first does not.
'traj'.xi.xi.x. The final time and initial state vector can be reset through tf and xi (their respective default values are 5 seconds and 0).tf. pdsimul plots the output trajectories y(t).y. options gives access to the parameters controlling the ODE integration (type help gear for details). The function pdsimul also accepts the polytopic representation of such systems as returned by aff2pol(pds) or hinfgs. Otherwise.'traj'. If 'ut' is omitted. psys. When invoked without output arguments.options) [t. The parameter trajectory and input signals are specified by two time functions p=traj(t) and u=ut(t). gear 989 .options) pdsimul simulates the time response of an affine parameterdependent system · E(p) x = A(p)x + B(p)u y = C(p)x + D(p)u along a parameter trajectory p(t) and for an input signal u(t).y] = pdsimul(pds.tf.'ut'. it returns the vector of integration time points t as well as the state and output trajectories x. The affine system pds is specified with psys. pvec.pdsimul Purpose Syntax Description 9pdsimul Time response of a parameterdependent system along a given parameter trajectory pdsimul(pds.pv. Finally. the response to a step input is computed by default.'ut'. Example See Also See “Design Example” on page 710.
S.N] = popov(sys.delta.P. If the uncertainty delta contains real parameter blocks.S. the conservatism of the Popov criterion can be reduced by first performing a simple loop transformation (see “Real Parameter Uncertainty” on page 325). ublock. aff2lft 990 .S.N] = popov(sys.P.delta) tests the robust stability of this interconnection. quadstab. ?? for details). Then P determines the quadratic part xTPx of the Lyapunov function and D and S are the Popov multipliers (see p.popov Purpose Syntax Description 9popov Perform the Popov robust stability test [t.1) Example See Also See “Design Example” on page 710. Robust stability is guaranteed if t < 0. The command [t. The uncertain system must be described as the interconnection of a nominal LTI system sys and some uncertainty delta (see “NormBounded Uncertainty” on page 227). To use this refined test. Use ublock and udiag to specify delta. pdlstab. mustab.P. call popov with the syntax [t.flag) popov uses the Popov criterion to test the robust stability of dynamical systems with possibly nonlinear and/or timevarying uncertainty (see “The Popov Criterion” on page 324 for details).delta.N] = popov(sys.
It performs the following Description operations depending on the calling sequence: • psinfo(ps) displays the type of system (affine or polytopic). .'eval'. pn and the result is the LTI system of SYSTEM matrix S(p) = S0 + p1S1 + . This information can be optionally stored in MATLAB variables by providing output arguments. . . and outputs of the system. inputs. .'eval'. no of states. ni. . the entries of p should be polytopic coordinates p1. The ranking k is relative to the list of systems syslist used in psys..'par') sk = psinfo(ps..'sys'.ns. Sn}. . the entries of p should be real parameter values p1.. • sys = psinfo(ps. .p) instantiates the system for a given vector p of parameter values or polytopic coordinates. .ni.. Sn. • sk = psinfo(ps.p) psinfo is a multiusage function for queries about a polytopic or parameterdependent system ps created with psys. . For affine parameterdependent systems defined by the SYSTEM matrices S0.+ pnSn For polytopic systems with SYSTEM matrix ranging in Co{S1. the number k of SYSTEM matrices involved in its definition. .psinfo Purpose Syntax 9psinfo Inquire about polytopic or parameterdependent systems created with psys psinfo(ps) [type. S1.k) sys = psinfo(ps.k) returns the kth SYSTEM matrix involved in the definition of ps. . pn satisfying pj Š 0 and the result is the interpolated LTI system of SYSTEM matrix p1 S1 + … + pn Sn S = p1 + … + pn See Also psys. .'par') returns the parameter vector description (for parameterdependent systems only). ltisys 991 .'sys'. and the numbers of ns. .k. • pv = psinfo(ps.no] = psinfo(ps) pv = psinfo(ps. .
Sk (polytope of matrices with vertices S1. . Two types of uncertain statespace models can be manipulated in the LMI Control Toolbox: • Polytopic systems · E(t) x = A(t)x + B(t)u y = C(t)x + D(t)u whose SYSTEM matrix takes values in a fixed polytope: A ( t ) + jE ( t ) B ( t ) ∈ Co A 1 + jE 1 B 1 . .. . . . . . or parameterdependent.. Sk) • Affine parameterdependent systems · E(p) x = A(p)x + B(p)u 992 . . . αi = 1 i=1 i = 1 ∑ ∑ denotes the convex hull of S1. timevarying. .psys Purpose Syntax Description 9psys Specify polytopic or parameterdependent linear systems pols = psys(syslist) affs = psys(pv.. ... A k + jE k B k C(t) D(t) C1 D1 Ck Dk S(t) S1 Sk where S1. . .syslist) psys specifies statespace models where the statespace matrices can be uncertain. Sk} = αi Si : αi ≥ 0 . . . Sk are given “vertex” systems and k k Co{S1.
S1. .[s0.).s1. .s3.[s0 s1 s2]) A(p) = A0 + p1A1 + p2A2. In addition. . A ( p ) + jE ( p ) B ( p ) = C(p) D(p) S(p) A 0 + jE 0 B 0 A + jE 1 B 1 A + jE n B n + p1 1 + . i. . pn) of real parameters.e. The parameters pi can be timevarying or constant but uncertain. S4 is created by pols = psys([s1. . . .s4]) while an affine parameterdependent model with 4 real parameters is defined by affs = psys(pv.. .0) psys(pv.s3. . Both types of models are specified with the function psys. E(. To specify the autonomous parameterdependent system · x = A(p)x...s2.0) ltisys(a2. Sn 993 . B(.s2..) are fixed affine functions of some vector p = (p1. . type s0 s1 s2 ps = = = = ltisys(a0) ltisys(a1. the description pv of the parameter vector (range of values and rate of variation) is required for affine parameterdependent models (see pvec for details). + pn n C0 D0 C1 D1 Cn Dn S0 S1 where S0. The argument syslist lists the SYSTEM matrices Si characterizing the polytopic value set or parameter dependence.). . . a polytopic model with vertex systems S1. .s4]) The output is a structured matrix storing all the relevant information.psys y = C(p)x + D(p)u where A(. Thus. Sn are given SYSTEM matrices. ..
ltisys 994 . E(p) = (1+p1 + p2)I. Typing s0 s1 s2 ps = = = = ltisys(a0) ltisys(a1) ltisys(a2) psys(pv. psinfo.[s0 s1 s2]) instead would specify the system · E(p) x = A(p)x.1” on page 221. aff2pol.psys Do not forget the 0 in the second and third commands. A(p) = A0 +p1A1 +p2A2 Example See Also See “Example 2. p2 . pvec. This 0 marks the independence of the E matrix on p1.
....range.vn. . pn) of uncertain or timevarying real parameters pi. vn]) where the second argument is the concatenation of the vectors v1.vertices) pvec is used in conjunction with psys to specify parameterdependent systems. The output argument pv is a structured matrix storing the parameter vector description.rates) pv = pvec('pol'. . Example Consider a problem with two timeinvariant parameters p1 ∈ [–1. .[v1. If the third argument rates is omitted.. .v2. p2 ∈ [20. The type 'box' corresponds to independent parameters ranging in intervals pj ≤ pj ≤ pj The parameter vector p then takes values in a hyperrectangle of Rn called the parameter box.pvec Purpose Syntax Description 9pvec Specify the range and rate of variation of uncertain or timevarying parameters pv = pvec('box'.. . The type 'pol' corresponds to parameter vectors p ranging in a polytope of the parameter space Rn. . . Otherwise. 50] 995 . Use pvinfo to read the contents of pv.≤ ν j dt Set ν j = Inf and ν j = Inf if pj(t) can vary arbitrarily fast or discontinuously. all parameters are assumed timeinvariant. Such systems are parametrized by a vector p = (p1. The function pvec defines the range of values and the rates of variation of these parameters. . This polytope is defined by a set of vertices V1. Vn corresponding to “extremal” values of the vector p. The second argument range is an nby2 matrix that stacks up the extremal values p j and p j of each pj. rates is also an nby2 matrix and its jth row specifies lower and upper bounds ν j and ν j dp on j : dt dp j ν j ≤ . 2]. Such parameter vectors are declared by the command pv = pvec('pol'.. .
psys 996 .20 50]) Alternatively. v2 = –1 20 50 .v3. The four corners of this rectangle are the four vectors v1 = –1 . this vector can be regarded as taking values in the rectangle drawn in Figure 9.v2.[1 2. p2) is specified by pv = pvec('box'.2. v3 = 2 .[v1.pvec The corresponding parameter vector p = (p1. 20 v4 = 2 50 Hence.v4]) p2 50 20 1 2 p1 Figure 92: Parameter box See Also pvinfo. you could also specify p by pv = pvec('pol'.
pvinfo
Purpose Syntax
9pvinfo
Describe a parameter vector specified with pvec
[typ,k,nv] = pvinfo(pv) [pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j) vj = pvinfo(pv,'par',j) p = pvinfo(pv,'eval',c) pvec retrieves information about a vector p = (p1, . . ., pn) of real parameters declared with pvec and stored in pv. The command pvinfo(pv) displays the type of parameter vector ('box' or 'pol'), the number n of scalar parameters, and for the type 'pol', the number of vertices used to specify the parameter
Description
range. For the type 'box':
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)
returns the bounds on the value and rate of variations of the jth real parameter pj. Specifically, dp j pmin ≤ p ( t ) ≤ pmax , dpmin ≤  ≤ dpmax j dt For the type 'pol':
pvinfo(pv,'par',j)
returns the jth vertex of the polytope of Rn in which p ranges, while
pvinfo(pv,'eval',c)
returns the value of the parameter vector p given its barycentric coordinates c with respect to the polytope vertices (V1, . . .,Vk). The vector c must be of length k and have nonnegative entries. The corresponding value of p is then given by p =
∑i = 1 ci Vi k ∑i = 1 c i
k
See Also
pvec, psys
997
quadperf
Purpose Syntax Description
9quadperf
Compute the quadratic H∞ performance of a polytopic or parameterdependent system
[perf,P] = quadperf(ps,g,options)
The RMS gain of the timevarying system · E ( t )x = A ( t )x + B ( t )u, is the smallest γ > 0 such that y L2 ≤ γ u L2 y = C ( t )x + D ( t )u
(922)
(923)
for all input u(t) with bounded energy. A sufficient condition for (923) is the existence of a quadratic Lyapunov function V(x) = xTPx, such that
T 2 T dV ∀u ∈ L 2 ,  + y y – γ u u < 0 dt
P>0
Minimizing γ over such quadratic Lyapunov functions yields the quadratic H∞ performance, an upper bound on the true RMS gain. The command
[perf,P] = quadperf(ps)
computes the quadratic H∞ performance perf when (922) is a polytopic or affine parameterdependent system ps (see psys). The Lyapunov matrix P yielding the performance perf is returned in P. The optional input options gives access to the following task and control parameters: • If options(1)=1, perf is the largest portion of the parameter box where the quadratic RMS gain remains smaller than the positive value g (for affine parameterdependent systems only). The default value is 0
998
quadperf
• If options(2)=1, quadperf uses the least conservative quadratic performance test. The default is options(2)=0 (fast mode) • options(3) is a userspecified upper bound on the condition number of P (the default is 109).
Example See Also
See “Example 3.4” on page 310.
muperf, quadstab, psys
999
quadstab
Purpose Syntax Description
9quadstab
Quadratic stability of polytopic or affine parameterdependent systems
[tau,P] = quadstab(ps,options)
For affine parameterdependent systems · E(p) x = A(p)x, or polytopic systems · E(t) x = A(t)x, (A, E) ∈ Co{(A1, E1), . . ., (An, En)}, p(t) = (p1(t), . . ., pn(t))
quadstab seeks a fixed Lyapunov function V(x) = xTPx with P > 0 that
establishes quadratic stability (see “Quadratic Lyapunov Functions” on page 33 for details). The affine or polytopic model is described by ps (see psys). The task performed by quadstab is selected by options(1): • if options(1)=0 (default), quadstab assesses quadratic stability by solving the LMI problem Minimize τ over Q = QT such that ATQE + EQAT < τI for all admissible values of (A, E) Q>I The global minimum of this problem is returned in tau and the system is quadratically stable if tau < 0 • if options(1)=1, quadstab computes the largest portion of the specified parameter range where quadratic stability holds (only available for affine models). Specifically, if each parameter pi varies in the interval pi ∈ [pi0 – δi, pi0 + δi],
quadstab computes the largest θ > 0 such that quadratic stability holds over
the parameter box pi ∈ [pi0 – θδi, pi0 + θδi] This “quadratic stability margin” is returned in tau and ps is quadratically stable if tau Š 1. Given the solution Qopt of the LMI optimization, the Lyapunov matrix P is –1 given by P = Q opt . This matrix is returned in P.
9100
quadstab
Other control parameters can be accessed through options(2) and options(3): • if options(2)=0 (default), quadstab runs in fast mode, using the least expensive sufficient conditions. Set options(2)=1 to use the least conservative conditions • options(3) is a bound on the condition number of the Lyapunov matrix P. The default is 109.
Example See Also
See “Example 3.1” on page 37 and “Example 3.2” on page 38.
pdlstab, mustab, decay, quadperf, psys
9101
ricpen
Purpose Syntax Description
9ricpen
Solve continuoustime Riccati equations (CARE)
X = ricpen(H,L) [X1,X2] = ricpen(H,L)
This function computes the stabilizing solution X of the Riccati equation A XE + E XA + E XFXE – ( E XB + S )R ( B XE + S ) + Q = 0
T T T T –1 T T
(924)
where E is an invertible matrix. The solution is computed by unitary deflation of the associated Hamiltonian pencil
H – λL
E 0 0 = –Q –A –S – λ 0 ET 0 T T 0 0 0 S B R
T
A
F
B
The matrices H and L are specified as input arguments. For solvability, the pencil H – λL must have no finite generalized eigenvalue on the imaginary axis. When called with two output arguments, ricpen returns two matrices X1, X2 such that X1 X2 has or thonormal columns, and X = X2 X 1 is the stabilizing
–1
solution of (924). If X1 is invertible, the solution X is returned directly by the syntax
X = ricpen(H,L). ricpen is adapted from care and implements the generalized Schur method for
solving CAREs.
Reference
Laub, A. J., “A Schur Method for Solving Algebraic Riccati Equations,” IEEE Trans. Aut. Contr., AC–24 (1979), pp. 913–921. Arnold, W.F., and A.J. Laub, “Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations,” Proc. IEEE, 72 (1984), pp. 1746– 1754.
9102
ricpen
See Also
care, dricpen
9103
g2. sdiag. in which case the resulting system sys is of the same nature.. The arguments g1. one (and at most one) of these systems can be polytopic or parameterdependent.. In addition.+ Gn(s) + See Also smult. + Gn G of the linear systems G1. ltisys 9104 .. .) sadd forms the parallel interconnection G1 + G2 . . are either SYSTEM matrices (dynamical systems) or constant matrices (static gains). . . . Gn.sadd Purpose Syntax Description 9sadd Parallel interconnection of linear systems sys = sadd(g1.. . g2... sadd performs the sum G(s) = G1(s) + G2(s) + . In terms of transfer functions. .. sloop.
b.condnbr) sys = sbalanc(sys.b.d]=tf2ss([1 1 1].c.sbalanc Purpose Syntax Description 9sbalanc Numerically balance the statespace realization of a linear system [a.condnbr) sbalanc computes a diagonal invertible matrix T that balances the statespace realization (A.c) a = 1000.c. Example Computing a realization of s +s+1 W ( s ) = 2 ( s + 500 ) with the function tf2ss yields [a.b.[1 1000 500^2]) a = 1000 250000 1 0 b = 1 0 c = 999 249999 d = 1 2 To reduce the numerical range of this data and improve numerical stability of subsequent computations. The resulting realization is of the form (T –1AT. The default value is 108. B.b. CT) The optional argument condnbr specifies an upper bound on the condition number of T. C) by reducing the magnitude of the offdiagonal entries of A and by scaling the norms of B and C.b. you can balance this realization by [a. T–1B.c]=sbalanc(a.00 64.25 0 9105 .c] = sbalanc(a.00 3906.
sbalanc b = 64.00 0 c = 15.03 See Also ltisys 9106 .61 61.
. The outputs of a system of name G are collectively denoted by G. • The third argument Kin names the controller and specifies its inputs. More generally. sconnect returns: • The SYSTEM matrix P of the standard H∞ plant P(s) • The vector r = [p2 m2] where p2 and m2 are the number of input and outputs of the controller.. inputs='r(2). sconnect is useful to specify complex interconnections of LTI systems (see Example 1 below). outputs='e=rS.outputs. i. S+d' specifies two outputs e = r – y and y + d where y is the output of the system of name S. use the syntax G(2:3). what signals enter the loop or interconnection • The ouput signals. a vector r of size 2 and a scalar input d • The second argument outputs is a string listing the outputs generated by the control loop. the signals generated by the loop or interconnection • How the inputs of each dynamical system relate to the exogenous inputs and to the outputs of other systems. 9107 .. Specifically.r] = sconnect(inputs.sconnect Purpose Syntax Description 9sconnect Specify general control structures or system interconnections [P.G2in. respectively.g1. Outputs are defined as combinations of the exogenous inputs and the outputs of the dynamical systems. i.) sconnect is useful in loopshaping problems to turn general control structures into the standard linearfractional interconnection used in H∞ synthesis... If no name is specified as in Kin='e'.g. the default name K is given to the controller. To refer to particular outputs. For instance. e. The arguments of sconnect are as follows: • The first argument inputs is a string listing the exogenous inputs. you must specify • The exogenous inputs.e.g2.G1in. the second and third. General control loops or system interconnections are described in terms of signal flows.e.. Kin='K:e' inserts a controller of name K and input e.Kin. For instance. For instance. In this context. d' specifies two inputs.
sys1.[1 0]. G1in='S: K(2). for each known LTI system in the loop.1. Note that the names given to the various systems are immaterial provided that they are used consistently throughout the definitions.sconnect • The remaining arguments come in pairs and specify.'S2'.'S2:S1'. .sys2) 9108 .[1 2]) S = sconnect('r'. input2 . Remark Example 1 One of the dynamical systems can be polytopic or affine parameterdependent. r 1 s+1 1 s+2 y This simple example illustrates the use of sconnect to compute interconnections of systems without controller.. The input list is a string of the form system name : input1 . In this case.'S1:r'.[].d' inserts a system called S whose inputs consist of the second output of the controller K and of the input signal d. its input list Gkin and its SYSTEM matrix gk.sys2) Note that inputK is set to [] to mark the absence of controller. The same result would be obtained with S = smult(sys1.. input n For instance. .[1 1]) sys2 = ltisys('tf'. sconnect always returns a polytopic model of the interconnection. The SYSTEM matrix of this series interconnection is returned by sys1 = ltisys('tf'.
'y=G'. wn.sconnect Example 2 r + − e C(s) u 1 s+1 y The standard plant associated with this simple tracking loop is given by g = ltisys('tf'.1. W3.n.'C:ry'. w2. W2.'G:C'. the corresponding standard plant P(s) is formed by inputs = 'r.W3' Kin = 'K: e=ryWn' W3in = 'W3: y=G+Wd' 9109 . w1.W2.d' outputs = 'W1. w3. respectively. Wn are stored in the variables g. Wd.g) Example 3 W1 + − e K G z1 W2 z2 + + W3 z3 Wd d r + + Wn n If the SYSTEM matrices of the system G and filters W1. wd.[1 1]) P = sconnect('r'.
sconnect
[P,r] = sconnect(inputs,outputs,Kin,'G:K',g,'W1:e',w1,... 'W2:K',w2,W3in,w3,'Wd:d',wd,'Wn:n',wn)
See Also
ltisys, slft, ssub, sadd, smult
9110
sderiv
Purpose Syntax Description
9sderiv
Apply proportionalderivative action to some inputs/outputs of an LTI system
dsys = sderiv(sys,chan,pd) sderiv multiplies selected inputs and/or outputs of the LTI system sys by the
proportionalderivator ns + d. The coefficients n and d are specified by setting
pd = [n , d]
The second argument chan lists the input and output channels to be filtered by ns + d. Input channels are denoted by their ranking preceded by a minus sign. On output, dsys is the SYSTEM matrix of the corresponding interconnection. An error is issued if the resulting system is not proper.
Example
Consider a SISO loopshaping problem where the plant P(s) has three outputs corresponding to the transfer functions S, KS, and T . Given the shaping filters 1 w1(s) =  , w2(s) = 100, w3(s) = 1 + 0.01s, s the augmented plant associated with the criterion w S 1 w KS 2 w T 3 ∞ is formed by
w1 = ltisys('tf',1,[1 0]) w2 = 100 paug = smult(p,sdiag(w1,w2,1)) paug = sderiv(paug,3,[0.01 1])
This last command multiplies the third output of P by the filter w3.
See Also
ltisys, magshape
9111
sdiag
Purpose Syntax Description
9sdiag
Append (concatenate) linear systems
g = sdiag(g1,g2,...) sdiag returns the system obtained by stacking up the inputs and outputs of the systems g1,g2,... as in the diagram.
u1
G1
y1
u un
y yn
Gn
G
If G1(s),G2 (s), . . . are the transfer functions of g1,g2,..., the transfer function of g is G (s) 0 0 1 G(s) = 0 G2 ( s ) 0 0 0 The function sdiag takes up to 10 input arguments. One (and at most one) of the systems g1,g2,... can be polytopic or parameterdependent, in which case g is of the same nature.
...
...
9112
sdiag
Example
Let p be a system with two inputs u1, u2 and three outputs y1 , y2 , y3. The augmented system
y1 u1 u2 P y2 y3
w1 w2 w3
y˜1 y˜2 y˜3
Paug with weighting filters 1 w1(s) =  , w2(s) = 10, s is created by the commands
100s w3(s) = s + 100
w1 = ltisys('tf',1,[1 0]) w3 = ltisys('tf',[100 0],[1 100]) paug = smult(p,sdiag(w1,10,w3))
See Also
ltisys, ssub, sadd, smult, psys
9113
setlmis
Purpose Syntax Description
9setlmis
Initialize the description of an LMI system
setlmis(lmi0)
Before starting the description of a new LMI system with lmivar and lmiterm, type
setlmis([])
to initialize its internal representation. To add on to an existing LMI system, use the syntax
setlmis(lmi0)
where lmi0 is the internal representation of this LMI system. Subsequent lmivar and lmiterm commands will then add new variables and terms to the initial LMI system lmi0.
See Also
getlmis, lmivar, lmiterm, newlmi
9114
setmvar
Purpose Syntax Description
Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
newsys = setmvar(lmisys,X,Xval) setmvar sets the matrix variable X with identifier X to the value Xval. All terms
9setmvar
involving X are evaluated, the constant terms are updated accordingly, and X is removed from the list of matrix variables. A description of the resulting LMI system is returned in newsys. The integer X is the identifier returned by lmivar when X is declared. Instantiating X with setmvar does not alter the identifiers of the remaining matrix variables. The function setmvar is useful to freeze certain matrix variables and optimize with respect to the remaining ones. It saves time by avoiding partial or complete redefinition of the set of LMI constraints.
Example
Consider the system · x = Ax + Bu and the problem of finding a stabilizing statefeedback law u = Kx where K is an unknown matrix. By the Lyapunov Theorem, this is equivalent to finding P > 0 and K such that (A + BK)P + P(A + BK)T + I < 0. With the change of variable Y := KP, this condition reduces to the LMI AP + PAT + BY + YTBT + I < 0. This LMI is entered by the commands
n = size(A,1) % number of states ncon = size(B,2) % number of inputs setlmis([]) P = lmivar(1,[n 1]) % P full symmetric Y = lmivar(2,[ncon n]) % Y rectangular lmiterm([1 1 1 P],A,1,'s') % AP+PA'
9115
setmvar
lmiterm([1 1 1 Y],B,1,'s') % BY+Y'B' lmiterm([1 1 1 0],1) % I lmis = getlmis
To find out whether this problem has a solution K for the particular Lyapunov matrix P = I, set P to I by typing
news = setmvar(lmis,P,1)
The resulting LMI system news has only one variable Y = K. Its feasibility is assessed by calling feasp:
[tmin,xfeas] = feasp(news) Y = dec2mat(news,xfeas,Y)
The computed Y is feasible whenever tmin < 0.
See Also
evallmi, delmvar
9116
setmvar 9117 . Example See Also See the description of evallmi. evallmi.and righthand sides of an LMI after evaluation of all variable terms [lhs.rhs] = showlmi(evalsys.showlmi Purpose Syntax Description 9showlmi Return the left. the function evallmi evaluates all variable terms in a system of LMIs.and righthand sides of the nth LMI are then constant matrices that can be displayed with showlmi. If evalsys is the output of evallmi. the values lhs and rhs of these left.rhs] = showlmi(evalsys. The left.n) An error is issued if evalsys still contains variable terms.n) For given values of the decision variables.and righthand sides are given by [lhs.
sinfo Purpose Syntax Description 9sinfo Return the number of states. psinfo 9118 . Without output arguments. and outputs of an LTI system sinfo(sys) [ns. Note that ns.ni. and no are the lengths of the vectors x.no] = sinfo(sys) For a linear system · E x = Ax + Bu y = Cx + Du specified in the SYSTEM matrix sys. sinfo displays this information on the screen and also indicates whether the system is in descriptor form (E ¦ I) or not. sinfo returns the order ns of the system (number of states) and the numbers ni and no of its inputs and outputs. u. ltiss. See Also ltisys. ni. y. respectively. inputs. respectively.
sinv returns the inverse system with transfer function H(s) = G(s)–1.sinv Purpose Syntax Description Compute the inverse G(s)–1. of a system G(s) 9sinv h = sinv(g) Given a system g with transfer function G(s) = D + C(sE – A)–1B with D square and invertible. sadd. This corresponds to exchanging inputs and outputs: y = G(s)u ⇔ u = H(s)y See Also ltisys. sdiag 9119 . if it exists. smult.
The optional arguments w 2 z2 udim and ydim specify the lengths of the vectors u and y.ydim) slft forms the linearfractional feedback interconnection w1 sys1 u z1 y w2 sys2 z2 of the two systems sys1 and sys2 and returns a realization sys of the w z closedloop transfer function from 1 to 1 .udim. slft forms one of the two interconnections: 9120 .sys2.When udim and ydim are omitted.slft Purpose Syntax Description 9slft Form the linearfractional interconnection of two timeinvariant systems (Redheffer’s star product) sys = slft(sys1. Note that u enters sys1 while y is an output of this system.
slft also performs interconnections of polytopic or affine parameterdependent systems. An error is issued if neither of these interconnections can be performed. 9121 .slft w1 sys1 z1 u sys2 y or u sys1 y z2 sys2 w2 depending on which system has the larger number of inputs/outputs. An error is issued if the result is neither polytopic nor affine parameterdependent.
sconnect 9122 . P(s).slft(p.k)) or equivalently as clsys = slft(slft(delta.slft Example Consider the interconnection ∆(s) w P(s) z u K(s) y where ∆(s). sloop. K(s) are LTI systems.k) See Also ltisys. A realization of the closedloop transfer function from w to z is obtained as clsys = slft(delta.p).
sloop
Purpose Syntax Description
9sloop
Form the feedback interconnection of two systems
sys = sloop(g1,g2,sgn) sloop forms the interconnection
r
+ G1
sgn
y
G2
The output sys is a realization of the closedloop transfer function from r to y. The third argument sgn specifies either negative feedback (sgn = 1) or positive feedback (sgn = +1). The default value is 1. In terms of the transfer functions G1(s) and G2(s), sys corresponds to the transfer function (I – εG1G2)–1G1 where ε =sgn.
Example
The closedloop transfer function from r to y in r + − 1/(s+1) y
1/s
is obtained by
sys = sloop( ltisys('tf',1,[1 1]) , ltisys('tf',1,[1 0])) [num,den] = ltitf(sys) num =
9123
sloop
0 den = 1.00
1.00
0.00
1.00
1.00
See Also
ltisys, slft, sconnect
9124
smult
Purpose Syntax Description
9smult
Series interconnection of linear systems
sys = smult(g1,g2,...) smult forms the series interconnection
u G1 G2
........
Gn
y
G The arguments g1, g2,... are SYSTEM matrices containing the statespace data for the systems G1, . . ., Gn. Constant matrices are also allowed as a representation of static gains. In addition, one (and at most one) of the input arguments can be a polytopic or affine parameterdependent model, in which case the output sys is of the same nature. In terms of transfer functions, this interconnection corresponds to the product G(s) = Gn(s) × . . . × G1(s) Note the reverse order of the terms.
Example
Consider the interconnection
Wi(s)
G
Wo(s)
where Wi(s), W0(s) are given LTI filters and G is the affine parameterdependent system · x = θ1x + u y = x – θ 2u parametrized by θ1, θ2. The parameterdependent system defined by this interconnection is returned by
9125
smult
sys = smult(wi,g,w0)
While the SYSTEM matrices wi and w0 are created with ltisys, the parameterdependent system g should be specified with psys.
See Also
sadd, sdiag, sloop, slft, ltisys
9126
splot
Purpose Syntax Description
9splot
Plot the various frequency and time responses of LTI systems
splot(sys,type,xrange) splot(sys,T,type,xrange)
The first syntax is for continuoustime systems · E x = Ax + Bu y = Cx + Du The first argument sys is the SYSTEM matrix representation of a system. The optional argument xrange is a vector of frequencies or times used to control the xaxis and the number of points in the plot. Finally, the string type consists of the first two characters of the diagram to be plotted. Available choices include:
Frequency Response type 'bo' 'sv' 'ny' 'li' 'ni' plot
Bode plot Singular value plot Nyquist plot linlog Nyquist plot Black/Nichols chart
Time Response type 'st' 'im' 'sq' 'si' plot
Step response Impulse response Response to a square signal Response to a sinusoidal signal
9127
splot
The syntax splot(sys,T,type,xrange) is used for discretetime systems, in which case T is the sampling period in seconds.
Remark Example
The Control System Toolbox is needed to run splot. Consider the thirdorder SISO system with transfer function 1 G ( s ) = 2 s ( s + 0.001s + 1 ) To plot the Nyquist diagram of the frequency response G(jω), type
g = ltisys('tf',1,[1 0.001 1 0]) splot(g,'ny')
The resulting plot appears in Figure 93. Due to the integrator and poor damping of the natural frequency at 1 rd/s, this plot is not very informative. In such cases, the linlog Nyquist plot is more appropriate. Given the gain/phase decomposition G(jω) = γ(ω)ejφ(ω) of the frequency response, the linlog Nyquist plot is the curve described by x = ρ(ω) cos ϕ(ω), where if γω ≥ 1 γω p( ω) = 1 + log 10 γω if γω > 1 This plot is drawn by the command
splot(g,'li')
y = ρ(ω) sin ϕ(ω)
and appears in Figure 94. The resulting aspect ratio is more satisfactory thanks to the logarithmic scale for gains larger than one. A more complete contour is drawn in Figure 95 by
splot(g,'li',logspace(3,2))
9128
splot
Figure 93: splot(g,'ny')
Figure 94: splot(g,'li')
9129
bode. sigma.'li'.logspace(3.splot Figure 95: splot(g. nyquist.2)) See Also ltisys. impulse 9130 . step.
ltisys. D. C. If (A. ltiss See Also 9131 . the poles are the generalized eigenvalues of the pencil (A. E) in general. E) denotes the statespace realization of this system.spol Purpose Syntax Description 9spol Return the poles of an LTI system poles = spol(sys) The function spol computes the poles of the LTI system of SYSTEM matrix sys. and the eigenvalues of A when E = I. B.
C. spol 9132 . E).sresp Purpose Syntax Description 9sresp Frequency response of a continuoustime system resp = sresp(sys. The first input sys is the SYSTEM matrix of G.f) Given a continuoustime system G(s) of realization (A. See Also ltisys. sresp computes its frequency response G(jω) = D + C(jωE – A)–1B at the frequency ω = f. splot. B. D.
.2..5. psys 9133 . If p is a system with 3 inputs and 6 outputs.ssub Purpose Syntax Description 9ssub Select particular inputs and outputs in a MIMO system subsys = ssub(sys. ssub extracts the subsystem subsys mapping the inputs selected by inputs to the outputs selected by outputs.outputs) Given an LTI or polytopic/parameterdependent system sys.6 is obtained as: ssub(p. Inputs and outputs are labeled 1.inputs.4:6) Example The second and third arguments are the vectors of indices of the selected inputs/outputs.3.. a realization of the transfer function from the input channels 1.3 to the output channels 4. starting from the top of the block diagram: u1 u2 P u3 y1 y2 y3 y4 y5 y6 See Also ltisys.3]. The resulting subsystem has the same number of states as sys even though some of these states may no longer be controllable/observable from the remaining inputs and outputs.[1.
The string type consists of one.ublock Purpose Syntax Description 9ublock Specify the characteristics of an uncertainty block block = ublock(dims. They are also referred to as repeated scalars. . Finally.. 9134 . The default is 'f' • The phase information: the block can be complexvalued ('c') or realvalued ('r'). the quantitative information can be of two types: • Norm bound: here the second argument bound is either a scalar β for uniform bounds. Set dim = [m. Characteristic properties include: • The dynamical nature: linear time invariant ('lti'). or arbitrary nonlinear ('nl'). Finally. i. nonlinear memoryless ('nlm').bound.n] if ∆i is mbyn. linear time varying ('ltv'). The second argument bound contains the quantitative information about the uncertainty (norm bound or sector bounds). ∆n) Such operators arise in the linearfractional representation of uncertain dynamical systems (see “How to Derive Such Models” on page 223). type is a string specifying the dynamical and structural characteristics of ∆i. The default is 'lti' • The structure: 'f' for full (unstructured) blocks and 's' for scalar blocks of the form ∆i = δi x Iri Scalar blocks often arise when representing parameter uncertainty. Complexvalued means arbitrary phase. For instance. .or twoletter identifiers marking the uncertainty block properties.type) ublock is used to describe individual uncertainty blocks ∆i in a blockdiagonal uncertainty operator ∆ = diag(∆1.e. The order of the property identifiers is immaterial. The default is 'c'. a realvalued timevarying scalar block is specified by setting type = 'tvsr'. The first argument dims specifies the dimensions of ∆i. has m outputs and n inputs.. .
Example See Also See the example on p. that is. uinfo. b}.) lies in the sector {a. or a SISO shaping filter W(s) for frequencyweighted bounds W –1∆∞ < 1. aff2lft 9135 . Valid values for a and b include Inf and Inf. slft.ublock ∆∞ < β. udiag. 225 of this manual. σmax(∆(jω)) < W(jω) for all ω • Sector bounds: set bound = [a b] to indicate that the response of ∆(.
. . are the descriptions of ∆1. the kth column being the description of ∆k. slft... . See Also ublock.. udiag.. .) The input arguments delta1. If udiag is called with k arguments. . . the output delta is a matrix with k columns. aff2lft 9136 ..) The function udiag appends individual uncertainty blocks specified with ublock and returns a complete description of the blockdiagonal uncertainty operator ∆ = diag(∆1..delta2. ∆2. .udiag Purpose Syntax Description 9udiag Form blockdiagonal uncertainty structures delta = udiag(delta1.. delta2. ∆2.
See Also ublock. . the function uinfo lists the characteristics of each uncertainty block ∆i. .uinfo Purpose Syntax Description 9uinfo Display the characteristics of a blockdiagonal uncertainty structure uinfo(delta) Given the description delta of a blockdiagonal uncertainty operator ∆ = diag(∆1. as returned by ublock and udiag. ∆n). udiag 9137 . ..
uinfo 9138 .