Robust Control Toolbox
For Use with MATLAB®

Gary Balas, Richard Chiang, Andy Packard, Michael Safonov

User's Guide
Version 3
How to Contact The MathWorks:

Web: www.mathworks.com
Newsgroup: comp.soft-sys.matlab
Technical support: support@mathworks.com
Product enhancement suggestions: suggest@mathworks.com
Bug reports: bugs@mathworks.com
Documentation error reports: doc@mathworks.com
Order status, license renewals, passcodes: service@mathworks.com
Sales, pricing, and general information: info@mathworks.com
Phone: 508-647-7000
Fax: 508-647-7001
Mail: The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098
For contact information about worldwide offices, see the MathWorks Web site.

Robust Control Toolbox User's Guide
© COPYRIGHT 1992–2005 by The MathWorks, Inc.

The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.

FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.

MATLAB, Simulink, Stateflow, Handle Graphics, Real-Time Workshop, and xPC TargetBox are registered trademarks of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders.
Revision History:

August 1992    First printing   New for Version 1
January 1998   Online only      Revised for Version 2
June 2001      Online only      Revised for Version 2.0.8 (Release 12.1)
June 2004      Online only      Revised for Version 2.0.9 (Release 14)
October 2004   Online only      Revised for Version 3 (Release 14SP1)
March 2005     Online only      Revised for Version 3.0.1 (Release 14SP2)
Contents

1. Introduction
   What Is the Robust Control Toolbox?
   Required Software
   Modeling Uncertainty
   Example: ACC Benchmark Problem
   Worst Case Performance
   Example: ACC Two Cart Benchmark Problem
   Synthesis of Robust MIMO Controllers
   Example: Designing a Controller with LOOPSYN
   Model Reduction and Approximation
   Example: NASA HIMAT Controller Order Reduction
   LMI Solvers
   Extends Control System Toolbox
   About the Authors
   Bibliography

2. Multivariable Loop Shaping
   Tradeoff Between Performance and Robustness
   Norms and Singular Values
   Typical Loop Shapes, S and T Design
   Singular Values
   Guaranteed Gain/Phase Margins in MIMO Systems
   Using LOOPSYN to Do H-Infinity Loop Shaping
   Example: NASA HiMAT Loop Shaping
   Design Specifications
   MATLAB Commands for a LOOPSYN Design
   Using MIXSYN for H-Infinity Loop Shaping
   Example: NASA HiMAT Design Using MIXSYN
   Loop Shaping Commands

3. Model Reduction for Robust Control
   Introduction
   Hankel Singular Values
   Overview of Model Reduction Techniques
   Approximating Plant Models - Additive Error Methods
   Approximating Plant Models - Multiplicative Error Method
   Using Modal Algorithms
   Rigid Body Dynamics
   Reducing Large Scale Models
   Using Normalized Coprime Factor Methods
   References

4. Robustness Analysis
   Uncertainty Modeling
   Creating Uncertain Models of Dynamic Systems
   Creating Uncertain Parameters
   Quantifying Unmodeled Dynamics
   Robustness Analysis
   Multi-Input, Multi-Output Robustness Analysis
   Adding Independent Input Uncertainty to Each Channel
   Closed-Loop Robustness Analysis
   Nominal Stability Margins
   Robustness of Stability to Model Uncertainty
   Worst-Case Gain Analysis
   Summary of Robustness Analysis Tools

5. H-Infinity and Mu Synthesis
   H-Infinity Performance
   Performance as Generalized Disturbance Rejection
   Robustness in the H∞ Framework
   Application of H∞ and µ to Active Suspension Control
   Quarter Car Suspension Model
   Linear H∞ Controller Design
   H∞ Control Design 1
   H∞ Control Design 2
   Control Design via µ-Synthesis
   Functions for Control Design
   Appendix: Interpretation of the H-Infinity Norm
   Norms of Signals and Systems
   Using Weighted Norms to Characterize Performance
   References

6. Building Uncertain Models
   Introduction to Uncertain Atoms
   Uncertain Real Parameters
   Uncertain LTI Dynamics Atoms
   Complex Parameter Atoms
   Complex Matrix Atoms
   Unstructured Uncertain Dynamic Systems
   Uncertain Matrices
   Creating Uncertain Matrices from Uncertain Atoms
   Accessing Properties of a umat
   Row and Column Referencing
   Matrix Operations on umat Objects
   Substituting for Uncertain Atoms
   Uncertain State-Space Systems (uss)
   Creating Uncertain Systems
   Properties of uss Objects
   Sampling Uncertain Systems
   Feedback Around an Uncertain Plant
   Interpreting Uncertainty in Discrete Time
   Lifting an ss to a uss
   Handling Delays in uss
   Uncertain frd
   Creating Uncertain Frequency Response Objects
   Properties of ufrd Objects
   Interpreting Uncertainty in Discrete Time
   Lifting an frd to a ufrd
   Handling Delays in ufrd
   Basic Control System Toolbox Interconnections
   Simplifying the Representation of Uncertain Objects
   Effect of the AutoSimplify Property
   Direct Use of simplify
   Sampling Uncertain Objects
   Generating One Sample
   Generating Many Samples
   Sampling ultidyn Atoms
   Substitution by usubs
   Specifying the Substitution with Structures
   Nominal and Random Values
   Array Management for Uncertain Objects
   Referencing Arrays
   Creating Arrays with stack and cat
   Creating Arrays by Assignment
   Binary Operations with Arrays
   Creating Arrays with usample
   Creating Arrays with usubs
   Creating Arrays with gridureal
   Creating Arrays with repmat
   Creating Arrays with repsys
   Using permute and ipermute
   Decomposing Uncertain Objects (for Advanced Users)
   Normalizing Functions for Uncertain Atoms
   Properties of the Decomposition
   Syntax of lftdata
   Advanced Syntax of lftdata

7. Generalized Robustness Analysis
   Introduction to Generalized Robustness Analysis
   Robust Stability Margin
   Robust Performance Margin
   Worst-Case Gain Measure

8. Introduction to Linear Matrix Inequalities
   Linear Matrix Inequalities
   LMI Features
   LMIs and LMI Problems
   The Three Generic LMI Problems
   Further Mathematical Background
   References

9. The LMI Lab
   Introduction
   Some Terminology
   Overview of the LMI Lab
   Specifying a System of LMIs
   A Simple Example
   Initializing the LMI System
   Specifying the LMI Variables
   Specifying Individual LMIs
   Specifying LMIs with the LMI Editor
   How It All Works
   Querying the LMI System Description
   lmiinfo
   lminbr and matnbr
   LMI Solvers
   From Decision to Matrix Variables and Vice Versa
   Validating Results
   Modifying a System of LMIs
   Deleting an LMI
   Deleting a Matrix Variable
   Instantiating a Matrix Variable
   Advanced Topics
   Structured Matrix Variables
   Complex-Valued LMIs
   Specifying c^T x Objectives for mincx
   Feasibility Radius
   Well-Posedness Issues
   Semi-Definite B(x) in gevp Problems
   Efficiency and Complexity Issues
   Solving M + P'XQ + Q'X'P < 0
   References

10. Function Reference
   Functions - Categorical List
   Uncertain Elements
   Uncertain Matrices and Systems
   Manipulation of Uncertain Models
   Interconnection of Uncertain Models
   Model Order Reduction
   Robustness and Worst-Case Analysis
   Robustness Analysis for Parameter-Dependent Systems (P-Systems)
   Controller Synthesis
   µ-Synthesis
   Sampled-Data Systems
   Gain Scheduling
   Supporting Utilities
   Specification of Systems of LMIs
   LMI Characteristics
   LMI Solvers
   Validation of Results
   Modification of Systems of LMIs
   Functions - Alphabetical List

Index
1
Introduction
This chapter covers the following topics:

What Is the Robust Control Toolbox? - Tools for analysis and design of uncertain control systems
Modeling Uncertainty - How to model uncertain parameters, matrices, and systems
Worst Case Performance - Computing worst-case values for uncertain systems
Synthesis of Robust MIMO Controllers - Designing controllers that optimize worst-case performance and maximize stability margins (H∞, H2, loop shaping, mixed sensitivity, µ-synthesis, and LMI optimization)
Model Reduction and Approximation - Simplifying high-order LTI plant and controller models
LMI Solvers - Tools for solving linear matrix inequalities
Extends Control System Toolbox - How the Robust Control Toolbox uses LTI objects and commands from the Control System Toolbox
About the Authors - Brief biographies of the toolbox authors
Bibliography - A list of sources on robust control theory
What Is the Robust Control Toolbox?
The Robust Control Toolbox (RCT) is a collection of functions and tools that help you analyze and design MIMO control systems with uncertain elements. You can build uncertain LTI system models containing uncertain parameters and uncertain dynamics. You get tools to analyze MIMO system stability margins and worst-case performance. The Robust Control Toolbox includes a selection of control synthesis tools that compute controllers that optimize worst-case performance and identify worst-case parameter values. The toolbox lets you simplify and reduce the order of complex models with model reduction tools that minimize additive and multiplicative error bounds. And it provides tools for implementing advanced robust control methods like H∞, H2, Linear Matrix Inequalities (LMI), and µ-synthesis robust control. You can shape MIMO system frequency responses and design uncertainty-tolerant controllers.
Required Software
The Robust Control Toolbox requires that you have installed the Control System Toolbox.
Modeling Uncertainty

At the heart of robust control is the concept of an uncertain LTI system. Model uncertainty arises when system gains or other parameters are not precisely known, or may vary over a given range. Examples of real parameter uncertainties include uncertain pole and zero locations and uncertain gains. One may also have unstructured uncertainties, by which one means complex parameter variations satisfying given magnitude bounds.

With the Robust Control Toolbox you can create uncertain LTI models as MATLAB® objects specifically designed for robust control applications. You can build models of complex systems by combining models of subsystems using addition, multiplication, and division, as well as with Control System Toolbox commands like feedback and lft.

Example: ACC Benchmark Problem

For instance, consider the two-cart "ACC Benchmark" system [13] consisting of two frictionless carts connected by a spring, as shown in Figure 1-1, where the individual carts have the respective transfer functions

   G1(s) = 1/(m1*s^2)
   G2(s) = 1/(m2*s^2)

Figure 1-1: ACC benchmark problem.
[Diagram: control force u1 acts on cart 1 (mass m1, position x1); a spring with constant k connects cart 1 to cart 2 (mass m2), whose position x2 = y1 is the measurement.]

The parameters m1, m2, and k are uncertain, equal to one plus or minus 20%:

   m1 = 1 ± 0.2
   m2 = 1 ± 0.2
   k  = 1 ± 0.2

The system has the block diagram model shown in Figure 1-2.
Figure 1-2: "ACC benchmark" two-cart system block diagram, y1 = P(s) u1.
[Block diagram: inside the plant P(s), inputs u1 and u2 enter a summing junction feeding G1(s) (cart 1) and G2(s) (cart 2); the dashed block F(s) is closed through the spring constant k; m1 = 1±0.2, m2 = 1±0.2, k = 1±0.2.]
The upper dashed-line block has the transfer function matrix

   F(s) = [0; G1(s)] * [1 -1]  +  [1; -1] * [0 G2(s)]

This code builds the uncertain system model P shown in Figure 1-2:
% Create the uncertain real parameters m1, m2 & k
m1 = ureal('m1',1,'percent',20);
m2 = ureal('m2',1,'percent',20);
k = ureal('k',1,'percent',20);
s = zpk('s');        % create the Laplace variable s
G1 = ss(1/s^2)/m1;   % Cart 1
G2 = ss(1/s^2)/m2;   % Cart 2
% Now build F and P
F = [0;G1]*[1 -1]+[1;-1]*[0,G2];
P = lft(F,k)         % close the loop with the spring k
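As a cross-check on the interconnection, working out the LFT by hand (a standard lower-LFT calculation, not shown in the text) gives P(s) = k·G1(s)·G2(s) / (1 + k·(G1(s) + G2(s))), which at the nominal values m1 = m2 = k = 1 reduces to 1/(s^2(s^2 + 2)), the nominal plant reported below. A minimal sketch in Python (illustrative only; the helper names P_lft and P_zpk are made up for this check) evaluates both forms at a few complex frequencies:

```python
# Cross-check of the nominal two-cart plant:
# P(s) = k*G1*G2 / (1 + k*(G1 + G2)), with G1 = 1/(m1*s^2), G2 = 1/(m2*s^2),
# should equal 1 / (s^2*(s^2 + 2)) at the nominal values m1 = m2 = k = 1.

def P_lft(s, m1=1.0, m2=1.0, k=1.0):
    """Closed-form lower LFT of F(s) closed through the spring constant k."""
    G1 = 1.0 / (m1 * s**2)
    G2 = 1.0 / (m2 * s**2)
    return k * G1 * G2 / (1.0 + k * (G1 + G2))

def P_zpk(s):
    """Nominal plant 1/(s^2 (s^2 + 2))."""
    return 1.0 / (s**2 * (s**2 + 2.0))

# Both forms agree at arbitrary complex frequencies
for s in (1 + 1j, 0.5j, 3.0, -2 + 0.1j):
    assert abs(P_lft(s) - P_zpk(s)) < 1e-12
```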
The variable P is a SISO uncertain state-space (USS) object with 4 states and 3 uncertain parameters m1, m2, and k. You can recover the nominal plant with the command
zpk(P.nominal)
which returns
Zero/pole/gain:
      1
-------------
s^2 (s^2 + 2)
If the uncertain model P(s) has the LTI negative feedback controller

   C(s) = 100 (s + 1)^3 / (0.001 s + 1)^3

[Block diagram: reference r enters a summing junction; its output passes through C to produce u1, then through P to produce y1, which feeds back negatively to the summing junction.]
then you can form the controller and the closed-loop system y1 = T(s) u1 and view the closed-loop system's step response on the time interval from t = 0 to t = 0.1 for a "Monte Carlo" random sample of 5 combinations of the 3 uncertain parameters k, m1, and m2 using this code:

C=100*ss((s+1)/(.001*s+1))^3; % LTI controller
T=feedback(P*C,1);            % closed-loop uncertain system
step(usample(T,5),.1);
The resulting plot is shown in Figure 1-3 below.
Figure 1-3: "Monte Carlo" sampling of the uncertain system's step response.
[Plot: step responses (amplitude 0 to 1.4) over 0 to 0.1 seconds for the sampled parameter combinations.]
Worst Case Performance
To be robust, your control system should meet your stability and performance requirements for all possible values of uncertain parameters. Monte Carlo parameter sampling via usample can be used for this purpose as shown in Figure 1-4, but Monte Carlo methods are inherently hit or miss. With Monte Carlo methods, you might need to take an impossibly large number of samples before you hit upon or near a worst-case parameter combination. The Robust Control Toolbox gives you a powerful assortment of robustness analysis commands that let you directly calculate upper and lower bounds on worst-case performance without random sampling:
Worst-Case Robustness Analysis Commands

loopmargin    Comprehensive analysis of feedback loop
loopsens      Sensitivity functions of feedback loop
ncfmargin     Normalized coprime stability margin of feedback loop
robustperf    Robust performance of uncertain systems
robuststab    Stability margins of uncertain systems
wcgain        Worst-case gain of an uncertain system
wcmargin      Worst-case gain/phase margins for feedback loop
wcsens        Worst-case sensitivity functions of feedback loop
Example: ACC Two Cart Benchmark Problem
Returning to the "Example: ACC Benchmark Problem", the closed-loop system is

T=feedback(P*C,1); % closed-loop uncertain system

This uncertain state-space model T has three uncertain parameters, k, m1, and m2, each equal to 1 ± 20%. To analyze whether the closed-loop system T is robustly stable for all combinations of values for these three parameters, you can execute the commands
[StabilityMargin,Udestab,REPORT] = robuststab(T); REPORT
This displays the REPORT
Uncertain System is robustly stable to modeled uncertainty.
 -- It can tolerate up to 311% of modeled uncertainty.
 -- A destabilizing combination of 500% of the modeled uncertainty exists, causing an instability at 44.3 rad/s.
The report tells you that the control system is robust for all parameter variations in the ±20% range, and that the smallest destabilizing combination of real variations in the values k, m1, and m2 has size somewhere between 311% and 500% of the modeled ±20% uncertainty, i.e., between ±62.2% and ±100%. The value Udestab returns an estimate of the 500% destabilizing parameter variation combination:
Udestab = 
     k: 1.2174e-005
    m1: 1.2174e-005
    m2: 2.0000
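As an arithmetic check on the report (illustrative Python; pct_of_modeled is a made-up helper), each destabilizing value sits at roughly 500% of the modeled ±20% band: m2 = 2.0 deviates from its nominal value of 1 by 1.0, five times the modeled 0.2, and k and m1 near 0 likewise deviate by about 1.0:

```python
def pct_of_modeled(value, nominal=1.0, band=0.20):
    """Express a parameter deviation as a percentage of the modeled
    uncertainty band (here +/-20% about a nominal of 1)."""
    return abs(value - nominal) / (band * nominal) * 100.0

assert round(pct_of_modeled(2.0)) == 500        # m2 in Udestab
assert round(pct_of_modeled(1.2174e-5)) == 500  # k and m1 in Udestab
```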
Figure 1-4: Uncertain system closed-loop Bode plots.
[Plot: Bode magnitude (dB) vs. frequency (rad/sec) comparing Twc (worst case) with Trand (random samples).]
You have a comfortable safety margin of between 311% and 500% of the anticipated ±20% parameter variations before the closed loop goes unstable. But how much can closed-loop performance deteriorate for parameter variations constrained to lie strictly within the anticipated ±20% range? The following code computes the worst-case peak gain of T and estimates the frequency and parameter values at which the peak gain occurs.
[PeakGain,Uwc] = wcgain(T);
Twc=usubs(T,Uwc);   % worst-case closed-loop system T
Trand=usample(T,4); % 4 random samples of uncertain system T
bodemag(Twc,'r',Trand,'b-.',{.5,50}); % Do bode plot
legend('T_{wc} - worst-case',...
       'T_{rand} - random samples',3);

The resulting plot is shown in Figure 1-4.
Synthesis of Robust MIMO Controllers
You can design controllers for multi-input, multi-output (MIMO) LTI models with the Robust Control Toolbox using the following commands:
Robust Control Synthesis Commands

h2hinfsyn    Mixed H2/H∞ controller synthesis
h2syn        H2 controller synthesis
hinfsyn      H∞ controller synthesis
loopsyn      H∞ loop-shaping controller synthesis
ltrsyn       Loop transfer recovery controller synthesis
mixsyn       H∞ mixed-sensitivity controller synthesis
ncfsyn       H∞ normalized coprime factor controller synthesis
sdhinfsyn    Sampled-data H∞ controller synthesis
Example: Designing a Controller with LOOPSYN
One of the most powerful, yet simple, controller synthesis tools is loopsyn. Given an LTI plant, you specify the shape of the open-loop system's frequency response plot that you want; loopsyn then computes a stabilizing controller that best approximates your specified loop shape. For example, consider the 2×2 NASA HiMAT aircraft model (Safonov, Laub, and Hartmann [8]) depicted in Figure 1-5. The control variables are the elevon and canard actuators (δe and δc). The output variables are the angle of attack (α) and attitude angle (θ). The model has six states:
   x = [x1; x2; x3; x4; x5; x6] = [α̇; α; θ̇; θ; x_e; x_δ]

where x_e and x_δ are elevator and canard actuator states.
Figure 1-5: Aircraft configuration and vertical plane geometry.
[Diagram: aircraft control surfaces (aileron, canard, flap, rudder, elevator, elevon) and the vertical-plane geometry with velocity along the x-axis relative to horizontal.]
You can enter the statespace matrices for this model with the following code:
% NASA HiMAT model G(s)
ag = [
 -2.2567e-02  -3.6617e+01  -1.8897e+01  -3.2090e+01   3.2509e+00  -7.6257e-01;
  9.2572e-05  -1.8997e+00   9.8312e-01  -7.2562e-04  -1.7080e-01  -4.9652e-03;
  1.2338e-02   1.1720e+01  -2.6316e+00   8.7582e-04  -3.1604e+01   2.2396e+01;
  0            0            1.0000e+00   0            0            0;
  0            0            0            0           -3.0000e+01   0;
  0            0            0            0            0           -3.0000e+01];
bg = [ 0   0;
       0   0;
       0   0;
       0   0;
      30   0;
       0  30];
cg = [ 0  1  0  0  0  0;
       0  0  0  1  0  0];
dg = [ 0  0;
       0  0];
G=ss(ag,bg,cg,dg);
To design a controller that shapes the frequency response (sigma) plot so that the system has a bandwidth of approximately 10 rad/s, you can set your target desired loop shape to Gd(s) = 10/s, then use loopsyn(G,Gd) to find a loop-shaping controller for G that optimally matches the desired loop shape Gd by typing:
s=zpk('s'); w0=10; Gd=w0/(s+.001);
[K,CL,GAM]=loopsyn(G,Gd); % design a loop-shaping controller K
% Plot the results
sigma(G*K,'r',Gd,'k-.',Gd/GAM,'k:',Gd*GAM,'k:',{.1,30})
figure; T=feedback(G*K,eye(2));
sigma(T,ss(GAM),'k:',{.1,30}); grid
The value γ = GAM returned is an indicator of the accuracy with which the optimal loop shape matches your desired loop shape, and is an upper bound on the resonant peak magnitude of the closed-loop transfer function T=feedback(G*K,eye(2)). In this case, γ = 1.6024, about 4 dB (see Figure 1-6).
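The dB figure follows from the usual magnitude-to-decibel conversion, 20·log10(·) (a quick illustrative check in Python, not toolbox code):

```python
import math

def to_db(gain):
    """Convert a magnitude to decibels: 20*log10(|gain|)."""
    return 20.0 * math.log10(abs(gain))

# gamma = 1.6024 reported by loopsyn corresponds to roughly 4 dB
assert abs(to_db(1.6024) - 4.10) < 0.01
assert abs(to_db(10.0) - 20.0) < 1e-12
```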
Figure 1-6: MIMO robust loop shaping with loopsyn(G,Gd).
[Sigma plot: the desired loop shape Gd, the achieved loop shape GK, and the bounds γ·Gd and Gd/γ (magnitude in dB vs. frequency in rad/s).]
The achieved loop shape matches the desired target Gd to within about γ (about 4 dB).
Model Reduction and Approximation
Complex models are not always required for good control. Yet it is an unfortunate fact that optimization methods (including methods based on H∞, H2, and µ-synthesis optimal control theory) generally tend to produce controllers with at least as many states as the plant model. For this reason, the Robust Control Toolbox offers you an assortment of model-order reduction commands that help you find less complex, low-order approximations to plant and controller models.
Model Reduction Commands

reduce       Main interface to model approximation algorithms
balancmr     Balanced truncation model reduction
bstmr        Balanced stochastic truncation model reduction
hankelmr     Optimal Hankel norm model approximation
modreal      State-space modal truncation/realization
ncfmr        Balanced normalized coprime factor model reduction
schurmr      Schur balanced truncation model reduction
slowfast     State-space slow-fast decomposition
stabproj     State-space stable/antistable decomposition
imp2ss       Impulse response to state-space approximation
Among the most important types of model reduction methods are those that minimize bounds on additive, multiplicative, and normalized coprime factor (NCF) model error. You can access all three of these methods using the command reduce.
Example: NASA HIMAT Controller Order Reduction
For instance, the NASA HiMAT model considered in the last section has 8 states, and the optimal loop-shaping controller turns out to have 16 states. Using model reduction, you can remove at least some of these states without
appreciably affecting stability or closed-loop performance. For controller order reduction, NCF model reduction is particularly useful, and it works equally well with controllers that have poles anywhere in the complex plane. For the NASA HiMAT design in the last section, you can type
hankelsv(K,'ncf','log');
which displays a logarithmic plot of the NCF Hankel singular values (see Figure 1-7).
Figure 1-7: Hankel singular values of the coprime factorization of K.
[Plot: the 16 NCF Hankel singular values on a log scale vs. state order.]
Theory says that, without danger of inducing instability, you can confidently discard at least those controller states whose NCF Hankel singular values are much smaller than ncfmargin(G,K). Compute ncfmargin(G,K) and add it to your Hankel singular value plot.
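For background, the Hankel singular values that hankelsv plots for a stable state-space model (A, B, C) are σi = sqrt(λi(PQ)), where P and Q are the controllability and observability gramians solving AP + PA' + BB' = 0 and A'Q + QA + C'C = 0. A small NumPy sketch of this computation on a toy 2-state system (illustrative Python, not the toolbox implementation; the Kronecker-based Lyapunov solve shown here is only practical for small models):

```python
import numpy as np

def lyap(A, Q):
    """Solve A*X + X*A' + Q = 0 via the Kronecker-product linear system
    (fine for toy examples; real tools use specialized Lyapunov solvers)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A) + np.kron(A, I)
    x = np.linalg.solve(M, -Q.reshape(-1, order='F'))
    return x.reshape((n, n), order='F')

def hankel_sv(A, B, C):
    """Hankel singular values: square roots of the eigenvalues of the
    product of the controllability and observability gramians."""
    P = lyap(A, B @ B.T)      # controllability gramian
    Q = lyap(A.T, C.T @ C)    # observability gramian
    ev = np.linalg.eigvals(P @ Q).real
    return np.sort(np.sqrt(ev))[::-1]

# Toy stable 2-state example
A = np.array([[-1.0, 0.0], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
sv = hankel_sv(A, B, C)
assert sv[0] > sv[1] > 0.0
```

On this toy system the two values come out around 0.56 and 0.04; states whose Hankel singular values sit far below the rest, as in Figure 1-7, are the candidates for truncation.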
hankelsv(K,'ncf','log');
hold on; v=axis;
plot(v(1:2),ncfmargin(G,K)*[1 1],'--')
hold off

Figure 1-8: Five of the 16 NCF Hankel singular values of the HiMAT controller K are small compared to ncfmargin(G,K).
[Plot: the Hankel singular value plot of Figure 1-7 with a dashed line at ncfmargin(G,K).]

In this case, you can safely discard 5 of the 16 states of K and compute an 11-state reduced controller by typing

K1=reduce(K,11,'errortype','ncf');
sigma(G*K1,'r',G*K,'b',{.1,30});

The result is plotted in Figure 1-9.
[Figure: open-loop singular value plots of G*K1 (11-state reduced K1) and G*K (16-state original K) from 0.1 to 10 rad/sec]

Figure 1-9: HiMAT with 11-state controller K1 vs. original 16-state controller K

The picture above shows that low-frequency gain is decreased considerably for inputs in one vector direction. Though this does not affect stability, it affects performance. If you wanted to better preserve low-frequency performance, you would discard fewer than 5 of the 16 states of K.
LMI Solvers

At the core of many emergent robust control analysis and synthesis routines are powerful general-purpose functions for solving a class of convex nonlinear programming problems known as Linear Matrix Inequalities (LMIs). The LMI capabilities are invoked by Robust Control Toolbox functions that evaluate worst-case performance, as well as by functions like hinfsyn and h2hinfsyn. Some of the main functions that help you access the LMI capabilities of the Robust Control Toolbox are shown in the table below.

Specification of LMIs
lmiedit    GUI for LMI specification
setlmis    Initialize the LMI description
lmivar     Define a new matrix variable
lmiterm    Specify the term content of an LMI
newlmi     Attach an identifying tag to new LMIs
getlmis    Get the internal description of the LMI system

LMI Solvers
feasp      Test feasibility of a system of LMIs
gevp       Minimize a generalized eigenvalue with LMI constraints
mincx      Minimize a linear objective with LMI constraints
dec2mat    Convert output of the solvers to values of matrix variables

Evaluation of LMIs/Validation of Results
evallmi    Evaluate for given values of the decision variables
showlmi    Return the left- and right-hand sides of an evaluated LMI

Complete documentation is available in the online LMI Lab tutorial.
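The workflow behind the table above can be illustrated with a minimal LMI Lab sketch (the matrix A below is an arbitrary stable example, not from the toolbox demos): it searches for P > I satisfying the Lyapunov inequality A'P + PA < 0.

```matlab
% Minimal LMI Lab sketch: find P > I with A'*P + P*A < 0 for a stable A.
A = [-1 2; 0 -3];                % example stable matrix (illustrative)
setlmis([]);                     % initialize a new LMI system
P = lmivar(1,[2 1]);             % P: 2x2 symmetric matrix variable
lmiterm([1 1 1 P],A',1,'s');     % LMI #1: A'*P + P*A < 0
lmiterm([-2 1 1 P],1,1);         % LMI #2, right side: P
lmiterm([2 1 1 0],1);            % LMI #2, left side: identity (I < P)
lmis = getlmis;                  % internal description of the LMI system
[tmin,xfeas] = feasp(lmis);      % tmin < 0 indicates feasibility
Pval = dec2mat(lmis,xfeas,P)     % recover the matrix value of P
```

The same lmis description could equally be passed to mincx or gevp for the optimization-based problems listed in the table.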
Extends Control System Toolbox

The Robust Control Toolbox (RCT) is designed to work with the Control System Toolbox (CST). The RCT extends the capabilities of the CST and leverages the LTI and plotting capabilities of the CST. The major analysis and synthesis commands in the RCT accept LTI object inputs, e.g., LTI state-space systems produced by commands such as

G = tf(1,[1 2 3])
G = ss([1 0; 0 1],[1;1],[1 1],3)

The uncertain system (USS) objects in the RCT generalize the CST LTI SS objects and help ease the task of analyzing and plotting uncertain systems. You can do many of the same algebraic operations on uncertain systems that are possible for LTI objects (multiply, add, invert), and the RCT provides USS uncertain system extensions of CST interconnection and plotting functions like feedback, lft, and bode.
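As a small sketch of how USS objects interoperate with CST-style commands (the numeric values here are illustrative, not from the manual):

```matlab
% Sketch: an uncertain gain propagating through ordinary LTI operations.
k = ureal('k',2,'Percentage',20);   % uncertain real gain, 2 +/- 20%
Gnom = tf(1,[1 1]);                 % nominal first-order plant
Gu = k*Gnom;                        % multiplying by a ureal gives a USS
CL = feedback(Gu,1);                % CST-style interconnection still works
bode(usample(CL,5))                 % Bode plots of 5 random samples of CL
```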
About the Authors

Prof. Gary Balas is with the Faculty of Aerospace Engineering & Mechanics at the University of Minnesota and is president of MUSYN Inc. His research interests include aerospace control systems, both experimental and theoretical.

Dr. Richard Chiang is employed by Boeing Satellite Systems, El Segundo, CA. He is a Boeing Technical Fellow and has been working in the aerospace industry for over 25 years. In his career, Richard has designed three flight control laws, 12 spacecraft attitude control laws, and three large space structure vibration controllers using modern robust control theory and the tools he built in this toolbox. Working in industry instead of academia, Richard serves a unique role in our team, bridging the gap between theory and reality. His research interests include robust control theory, flight control, model reduction, and in-flight system identification.

Prof. Andy Packard is with the Faculty of Mechanical Engineering at the University of California, Berkeley. His research interests include robustness issues in control analysis and design, linear algebra and numerical algorithms in control problems, applications of system theory to aerospace problems, and control of fluid flow.

Prof. Michael Safonov is with the Faculty of Electrical Engineering at the University of Southern California. His research interests include control and decision theory.

The Linear Matrix Inequality (LMI) portion of the Robust Control Toolbox was developed by these two authors:

Dr. Pascal Gahinet is employed by The MathWorks. His research interests include robust control theory, linear matrix inequalities, numerical linear algebra, and numerical software for control.

Prof. Arkadi Nemirovski is with the Faculty of Industrial Engineering and Management at Technion, Haifa, Israel. His research interests include convex optimization, complexity theory, and nonparametric statistics.
Bibliography

[1] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory, Philadelphia, PA: SIAM, 1994.

[2] P. Dorato (editor), Robust Control, New York: IEEE Press, 1987.

[3] P. Dorato and R. K. Yedavalli (editors), Recent Advances in Robust Control, New York: IEEE Press, 1990.

[4] J. Doyle and G. Stein, "Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis," IEEE Trans. on Automat. Contr., AC-26(1):4-16, 1981.

[5] L. El Ghaoui and S. Niculescu, Recent Advances in LMI Theory for Control, Philadelphia, PA: SIAM, 2000.

[6] N. Lehtomaki, N. Sandell, Jr., and M. Athans, "Robustness Results in Linear-Quadratic Gaussian Based Multivariable Control Designs," IEEE Trans. on Automat. Contr., vol. AC-26, No. 1, pp. 75-92, 1981.

[7] M. G. Safonov, Stability and Robustness of Multivariable Feedback Systems, Cambridge, MA: MIT Press, 1980.

[8] M. G. Safonov, A. J. Laub, and G. Hartmann, "Feedback Properties of Multivariable Systems: The Role and Use of Return Difference Matrix," IEEE Trans. of Automat. Contr., AC-26(1):47-65, 1981.

[9] M. G. Safonov and R. Y. Chiang, "CACSD Using the State-Space L∞ Theory—A Design Example," IEEE Trans. on Automatic Control, AC-33(5):477-479, 1988.

[10] M. G. Safonov, R. Y. Chiang, and H. Flashner, "H∞ Control Synthesis for a Large Space Structure," Proc. of American Contr. Conf., Atlanta, GA, June 15-17, 1988.

[11] R. Sanchez-Pena and M. Sznaier, Robust Systems Theory and Applications, New York: Wiley, 1998.

[12] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control, New York: Wiley, 1996.

[13] B. Wie and D. Bernstein, "A Benchmark Problem for Robust Controller Design," Proc. American Control Conf., San Diego, CA, May 23-25, 1990; also Boston, MA, June 26-28, 1991.

[14] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control, Englewood Cliffs, NJ: Prentice Hall, 1996.
2 Multivariable Loop Shaping

Tradeoff between Performance and Robustness - Typical loop shapes, disturbance attenuation and robustness
Typical Loop Shapes, S and T Design - Singular values, sensitivity, and complementary sensitivity; sigma plot design specifications (loop shape, S and T)
Using LOOPSYN to do H-Infinity Loop Shaping - Synthesis of MIMO H∞ optimal loop-shaping controllers: LOOPSYN method and HiMAT aircraft example
Using MIXSYN for H-Infinity Loop Shaping - Synthesis of MIMO H∞ optimal loop-shaping controllers: MIXSYN method and HiMAT aircraft example
Loop Shaping Commands - Summary of loop shaping commands and utilities
Tradeoff between Performance and Robustness

When the plant modeling uncertainty is not too big, you can design high-gain, high-performance feedback controllers. High loop gains significantly larger than one in magnitude can attenuate the effects of plant model uncertainty and reduce the overall sensitivity of the system to plant noise. But if your plant model uncertainty is so large that you do not even know the sign of your plant gain, then you cannot use large feedback gains without the risk that the system will become unstable. Thus, plant model uncertainty can be a fundamental limiting factor in determining what can be achieved with feedback.

Multiplicative Uncertainty: Given an approximate model G0 of a plant G, the multiplicative uncertainty ∆M of the model G0 is defined as

   ∆M = G0^-1 (G − G0)

or, equivalently,

   G = (I + ∆M) G0

Plant model uncertainty arises from many sources. There may be small unmodelled time-delays or stray electrical capacitance. Imprecisely understood actuator time constants or, in mechanical systems, high-frequency torsional bending modes and similar effects can be responsible for plant model uncertainty. These types of uncertainty are relatively small at lower frequencies and typically increase at higher frequencies.

In the case of single-input/single-output (SISO) plants, the frequency at which there are uncertain variations in your plant of size |∆M| = 2 marks a critical threshold beyond which there is insufficient information about the plant to reliably design a feedback controller. With such a 200% model uncertainty, the model provides no indication of the phase angle of the true plant, which means that the only way you can reliably stabilize your plant is to ensure that the loop gain is less than one. Allowing for an additional factor of 2 margin for error, your control system bandwidth is essentially limited to the frequency range over which your multiplicative plant uncertainty ∆M has gain magnitude |∆M| < 1.

Norms and Singular Values

For MIMO systems the transfer functions are matrices, and relevant measures of gain are determined by singular values and by the H∞ and H2 norms, which are defined as follows:
H2 and H∞ Norms. The H2-norm is the energy of the impulse response of plant G. The H∞-norm is the peak gain of G across all frequencies and all input directions. Another important concept is the notion of singular values.

Singular Values: The singular values of a rank r matrix A ∈ C^(m×n), denoted σi, are the nonnegative square-roots of the eigenvalues of A*A, ordered such that

   σ1 ≥ σ2 ≥ … ≥ σp > 0,   p ≤ min{m, n}

If r < p, then there are p − r zero singular values, i.e., σ(r+1) = σ(r+2) = … = σp = 0. The greatest singular value σ1 is sometimes denoted σmax(A) = σ1. When A is a square n-by-n matrix, the nth singular value (i.e., the least singular value) is denoted σmin(A) = σn.

Properties of Singular Values

Some useful properties of singular values are:

1  σmax(A) = max over x ∈ C^n of ||Ax|| / ||x||
2  σmin(A) = min over x ∈ C^n of ||Ax|| / ||x||

Properties 1 and 2 are especially important because they establish that the greatest and least singular values of a matrix A are the maximal and minimal "gains" of the matrix as the input vector x varies over all possible directions.

For stable continuous-time LTI systems G(s), the H2-norm and the H∞-norm are defined in terms of the frequency-dependent singular values of G(jω):
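Properties 1 and 2 can be checked numerically; a small sketch (the matrix and input directions below are arbitrary):

```matlab
% Numerical check of Properties 1 and 2: the largest and smallest
% singular values bound the gain ||A*x||/||x|| over all directions x.
A = [3 1; 0 2];                       % arbitrary example matrix
s = svd(A);                           % s(1)=sigma_max, s(end)=sigma_min
x = randn(2,200);                     % 200 random input directions
gains = sqrt(sum((A*x).^2)) ./ sqrt(sum(x.^2));
[max(gains) s(1)]                     % observed max gain vs. sigma_max
[min(gains) s(end)]                   % observed min gain vs. sigma_min
```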
H2-norm:

   ||G||2 = ( (1/(2π)) ∫ from −∞ to ∞ of  Σ (i = 1 to p) [σi(G(jω))]^2 dω )^(1/2)

H∞-norm:

   ||G||∞ = sup over ω of σmax(G(jω))      (sup: the least upper bound)
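Both norms can be computed directly with the norm command; a sketch on an arbitrary example system:

```matlab
% Sketch: computing the H2 and H-infinity norms of an LTI model.
G = tf(1,[1 2 5]);                 % arbitrary stable example system
n2 = norm(G,2)                     % H2 norm: impulse-response energy
[ninf,wpeak] = norm(G,inf)         % H-infinity norm and peak frequency
sigma(G); grid                     % the sigma plot peaks at ninf (in db)
```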
Typical Loop Shapes, S and T Design

Consider the multivariable feedback control system shown in Figure 2-1. In order to quantify the multivariable stability margins and performance of such systems, you can use the singular values of the closed-loop transfer function matrices from r to each of the three outputs e, u and y, viz.

   S(s) = (I + L(s))^-1
   R(s) = K(s) (I + L(s))^-1                       (2-1)
   T(s) = L(s) (I + L(s))^-1 = I − S(s)

where L(s) is the loop transfer function matrix L(s) = G(s)K(s).

[Figure: block diagram of the loop: the command r and the fed-back output y form the error e, which drives the controller K(s); the control u drives the plant G(s), whose output is summed with the disturbance d to give the output y]

Figure 2-1: Block diagram of the multivariable feedback control system

The two matrices S(s) and T(s) are known as the sensitivity function and complementary sensitivity function, respectively. The matrix R(s) has no common name. The singular values of the loop transfer function matrix L(s) are important because L(s) determines the matrices S(s) and T(s).
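The three closed-loop matrices can be formed with a few commands; a sketch with an arbitrary example plant and controller:

```matlab
% Sketch: forming S, R and T for an example loop (G and K illustrative).
G = tf(1,[1 1 0]);                 % example plant
K = tf(10*[1 1],[1 10]);           % example controller
L = G*K;                           % loop transfer function
I = eye(size(L));
S = feedback(I,L);                 % S = inv(I+L), sensitivity
R = K*S;                           % R = K*inv(I+L)
T = I - S;                         % complementary sensitivity
sigma(S,'b',T,'r',{1e-2,1e2}); grid   % singular value Bode plots
```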
Singular Values

The singular values of S(jω) determine the disturbance attenuation, since S(s) is in fact the closed-loop transfer function from disturbance d to plant output y (see Figure 2-1). Thus a disturbance attenuation performance specification may be written as

   σmax(S(jω)) ≤ |W1^-1(jω)|                       (2-2)

where |W1^-1(jω)| is the desired disturbance attenuation factor. Allowing W1(jω) to depend on frequency ω enables you to specify a different attenuation factor for each frequency ω.

The singular value Bode plots of R(s) and of T(s) are used to measure the stability margins of multivariable feedback designs in the face of additive plant perturbations ∆A and multiplicative plant perturbations ∆M, respectively. See Figure 2-2.

[Figure: feedback loop with the perturbed plant: the controller K(s) in feedback with the plant G(s), with an additive perturbation ∆A(s) in parallel with G and the multiplicative form (I + ∆M(s))G(s)]

Figure 2-2: Additive/multiplicative uncertainty

Let us consider how the singular value Bode plot of the complementary sensitivity T(s) determines the stability margin for multiplicative perturbations ∆M. The multiplicative stability margin is, by definition, the "size" of the smallest stable ∆M(s) which destabilizes the system in Figure 2-2 when ∆A = 0.
Taking σmax(∆M(jω)) to be the definition of the "size" of ∆M(jω), you have the following useful characterization of "multiplicative" stability robustness:

Multiplicative Robustness: The size of the smallest destabilizing multiplicative uncertainty ∆M(s) is

   σmax(∆M(jω)) = 1 / σmax(T(jω))

The smaller σmax(T(jω)) is, the greater will be the size of the smallest destabilizing multiplicative perturbation, and hence the greater will be the stability margin.

A similar result is available for relating the stability margin in the face of additive plant perturbations ∆A(s) to R(s), if we take σmax(∆A(jω)) to be our definition of the "size" of ∆A(jω) at frequency ω:

Additive Robustness: The size of the smallest destabilizing additive uncertainty ∆A is

   σmax(∆A(jω)) = 1 / σmax(R(jω))

As a consequence of Robustness Theorems 1 and 2, it is common to specify the stability margins of control systems via singular value inequalities such as

   σmax(R(jω)) ≤ |W2^-1(jω)|                       (2-3)
   σmax(T(jω)) ≤ |W3^-1(jω)|                       (2-4)

where |W2(jω)| and |W3(jω)| are the respective sizes of the largest anticipated additive and multiplicative plant perturbations.
It is common practice to lump the effects of all plant uncertainty into a single fictitious multiplicative perturbation ∆M, so that the control design requirements may be written as

   1/σi(S(jω)) ≥ |W1(jω)|,    σi(T(jω)) ≤ |W3^-1(jω)|

as shown in Figure 2-3. It is interesting to note that in the upper half of Figure 2-3 (above the 0 db line)

   σ(L(jω)) ≈ 1/σ(S(jω))

while in the lower half of Figure 2-3 (below the 0 db line)

   σ(L(jω)) ≈ σ(T(jω))

This results from the fact that

   S(s) = (I + L(s))^-1 ≈ L(s)^-1     if σmin(L(s)) ≫ 1
   T(s) = L(s)(I + L(s))^-1 ≈ L(s)    if σmax(L(s)) ≪ 1
Thus, it is not uncommon to see specifications on disturbance attenuation and multiplicative stability margin expressed directly in terms of forbidden regions for the Bode plots of σi(L(jω)) as "singular value loop shaping" requirements, either as specified upper/lower bounds or as a target desired loop shape (see Figure 2-3).

[Figure: singular value plot sketch showing the performance bound |W1| and the desired loop shape Gd above the 0 db line, the robustness bound |1/W3| below it, and the desired crossover frequency ωc where σ(L) crosses 0 db]

Figure 2-3: Singular value specifications on L, S and T

Guaranteed Gain/Phase Margins in MIMO Systems

For those who are more comfortable with classical single-loop concepts, there are important connections between the multiplicative stability margins predicted by σmax(T) and those predicted by classical M-circles, as found on the Nichols chart. Indeed, in the single-input/single-output case,

   σmax(T(jω)) = |L(jω)| / |1 + L(jω)|

which is precisely the quantity you obtain from Nichols chart M-circles. Thus, ||T||∞ is a multiloop generalization of the closed-loop resonant peak magnitude
which, as classical control experts will recognize, is closely related to the damping ratio of the dominant closed-loop poles. Also, it turns out that you may relate ||T||∞ and ||S||∞ to the classical gain margin GM and phase margin θM in each feedback loop of the multivariable feedback system of Figure 2-1, via the formulae [6]:

   GM ≥ 1 + 1/||T||∞
   GM ≥ 1/(1 − 1/||S||∞)
   θM ≥ 2 sin^-1( 1/(2||T||∞) )
   θM ≥ 2 sin^-1( 1/(2||S||∞) )

These formulae are valid provided ||S||∞ and ||T||∞ are larger than one, as is normally the case. The margins apply even when the gain perturbations or phase perturbations occur simultaneously in several feedback channels.

The infinity norms of S and T also yield gain reduction tolerances. The gain reduction tolerance gm is defined to be the minimal amount by which the gains in each loop would have to be decreased in order to destabilize the system. Upper bounds on gm are:

   gm ≤ 1 − 1/||T||∞
   gm ≤ 1/(1 + 1/||S||∞)
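A short sketch evaluating these guaranteed margins for an arbitrary example loop (the formulae assume ||S||∞ and ||T||∞ exceed one):

```matlab
% Sketch: guaranteed multiloop gain/phase margins from ||S|| and ||T||.
L  = tf(10,[1 2 0]);               % arbitrary example loop transfer fcn
S  = feedback(1,L);                % sensitivity
T  = feedback(L,1);                % complementary sensitivity
nS = norm(S,inf);  nT = norm(T,inf);
GM_T = 1 + 1/nT                    % guaranteed gain margin from T
GM_S = 1/(1 - 1/nS)                % guaranteed gain margin from S
PM_T = 2*asin(1/(2*nT))*180/pi     % guaranteed phase margin (deg) from T
PM_S = 2*asin(1/(2*nS))*180/pi     % guaranteed phase margin (deg) from S
```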
Using LOOPSYN to do H-Infinity Loop Shaping

The command loopsyn lets you design a stabilizing feedback controller to optimally shape the open-loop frequency response of a MIMO feedback control system to match as closely as possible a desired loop shape Gd (see Figure 2-3). The basic syntax of the loopsyn loop-shaping controller synthesis command is

K = loopsyn(G,Gd)

Here G is the LTI transfer function matrix of a MIMO plant model, Gd is the target desired loop shape for the loop transfer function L=G*K, and K is the optimal loop-shaping controller. The LTI controller K has the property that it shapes the loop L=G*K so that it matches the frequency response of Gd as closely as possible, subject to the constraint that the controller must stabilize the plant model G.

Example: NASA HiMAT loop shaping

To see how the loopsyn command works in practice to address robustness and performance tradeoffs, let's consider again the NASA HiMAT aircraft model taken from the paper of Safonov, Laub and Hartmann [8]. The longitudinal dynamics of the HiMAT aircraft trimmed at 25000 ft and 0.9 Mach are unstable and have two right-half-plane phugoid modes. The linear model has state-space realization G(s) = C(Is − A)^-1 B with 6 states, with the first four states representing angle of attack (α) and attitude angle (θ) and their rates of change, and the last two representing elevon and canard control actuator dynamics (see Figure 2-4).

ag = [-2.2567e-02 -3.6617e+01 -1.8897e+01 -3.2090e+01  3.2509e+00 -7.6257e-01;
       9.2572e-05 -1.8997e+00  9.8312e-01 -7.2562e-04 -1.7080e-01 -4.9652e-03;
       1.2338e-02  1.1720e+01 -2.6316e+00  8.7582e-04 -3.1604e+01  2.2396e+01;
       0           0           1.0000e+00  0           0           0;
       0           0           0           0          -3.0000e+01  0;
       0           0           0           0           0          -3.0000e+01];
bg = [0 0; 0 0; 0 0; 0 0; 30 0; 0 30];
cg = [0 1 0 0 0 0; 0 0 0 1 0 0];
dg = [0 0; 0 0];
G = ss(ag,bg,cg,dg);
The control variables are elevon and canard actuators (δe and δc). The output variables are angle of attack (α) and attitude angle (θ).

[Figure: aircraft configuration and vertical plane geometry, showing the aileron, canard, flap, rudder, elevator and elevon surfaces relative to the x-axis and horizontal velocity]

Figure 2-4: Aircraft configuration and vertical plane geometry

This model is good at frequencies below 100 rad/s, with less than 30% variation between the true aircraft and the model in this frequency range. However, as noted in [8], the model does not reliably capture very high frequency behaviors, since it was derived by treating the aircraft as a rigid body and neglecting lightly damped fuselage bending modes that occur at somewhere between 100 and 300 rad/sec. These unmodelled bending modes might cause as much as 20 db deviation (i.e., 1000%) between the frequency response of the model and the actual aircraft for frequency ω > 100 rad/sec. Other effects like control actuator time-delays and fuel sloshing also contribute to model inaccuracy at even higher frequencies, but the dominant unmodelled effects are the fuselage bending modes. You can think of these unmodelled bending modes as multiplicative uncertainty of size 20 db at frequency ω > 100 rad/sec and beyond, and design your controller using loopsyn, making sure that the loop has gain less than −20 db at, and beyond, the frequency ω = 100 rad/sec.
Design Specifications

The singular value design specifications are:

1 Robustness Spec.: −20 db/decade roll-off slope and −20 db loop gain at 100 rad/sec.

2 Performance Spec.: Maximize the sensitivity function as much as possible.

Both specs can be accommodated by taking as the desired loop shape Gd(s) = 8/s.

MATLAB Commands for a LOOPSYN design:

%% Enter the desired loop shape Gd
s = zpk('s');    % Laplace variable s
Gd = 8/s;        % desired loop shape

%% Compute the optimal loop shaping controller K
[K,CL,GAM] = loopsyn(G,Gd);

%% Compute the loop L, sensitivity S and
%% complementary sensitivity T:
L = G*K;
I = eye(size(L));
S = feedback(I,L);   % S = inv(I+L)
T = I - S;

%% Plot the results:
% step response plots
step(T);
title('\alpha and \theta command step responses');

% frequency response plots
figure;
sigma(L,'r',Gd,'k-.',Gd*GAM,'k:',Gd/GAM,'k:',{.1,100});
grid
legend('\sigma(L) loopshape','\sigma(Gd) desired loop',...
   '\sigma(Gd) \pm GAM, db');

figure;
sigma(T,'b',I+L,'r--',{.1,100});
grid
legend('\sigma(T) robustness','1/\sigma(S) performance');
The plots of the resulting step and frequency responses for the NASA HiMAT loopsyn loop-shaping controller design are shown below in Figure 2-5 and Figure 2-6. The number GAM (i.e., 20*log10(GAM) in db) tells you the accuracy with which your loopsyn control design matches the target desired loop:

   σ(GK), db ≥ Gd, db − GAM, db    for ω < ωc
   σ(GK), db ≤ Gd, db + GAM, db    for ω > ωc

[Figure: closed-loop step responses of outputs Out(1) and Out(2) to commands on inputs In(1) and In(2), each settling near 1 within a few seconds]

Figure 2-5: HiMAT closed-loop step responses
[Figure: two singular value plots from 0.1 to 100 rad/sec; the upper plot shows σ(L) against the desired loop σ(Gd) and the ±GAM db band; the lower plot shows σ(T) robustness and 1/σ(S) performance]

Figure 2-6: LOOPSYN design results for NASA HiMAT
Fine-Tuning the LOOPSYN Target Loop Shape Gd to Meet Design Goals

If your first attempt at a loopsyn design does not achieve everything you wanted, you will need to readjust your target desired loop shape Gd. Here are some basic design tradeoffs to consider:

1 Stability Robustness. Your target loop Gd should have low gain (as small as possible) at high frequencies where typically your plant model is so poor that its phase angle is completely inaccurate, with errors approaching ±180° or more.

2 Performance. Your loop Gd should have high gain (as great as possible) at frequencies where your model is good, in order to assure good control accuracy and good disturbance attenuation.

3 Crossover and Roll-Off. Your desired loop shape Gd should have its 0 db crossover frequency (denoted ωc) between the above two frequency ranges and, below the crossover frequency ωc, it should roll off with a negative slope of between −20 and −40 db/decade, which helps to keep phase lag to less than 180° inside the control loop bandwidth (0 < ω < ωc).

Other considerations that might affect your choice of Gd are the right-half-plane poles and zeros of the plant G, which impose fundamental limits on your 0 db crossover frequency ωc [12]. For instance, your 0 db crossover ωc must be greater than the magnitude of any plant right-half-plane poles and less than the magnitude of any right-half-plane zeros:

   max over Re(pi) > 0 of |pi|  <  ωc  <  min over Re(zi) > 0 of |zi|

If you do not take care to choose a target loop shape Gd that conforms to these fundamental constraints, then loopsyn will still compute the optimal loop-shaping controller K for your Gd, but you should expect that the optimal loop L=G*K will have a poor fit to the target loop shape Gd, and consequently it may be impossible to meet your performance goals.
Using MIXSYN for H-Infinity Loop Shaping

A popular alternative approach to loopsyn loop shaping is H∞ mixed-sensitivity loop shaping, which is implemented by the Robust Control Toolbox command

K = mixsyn(G,W1,[],W3)

With mixsyn controller synthesis, your performance and stability robustness specifications, equations (2-2) and (2-4), are combined into a single infinity norm specification of the form

   ||Ty1u1||∞ ≤ 1

where (see Figure 2-7)

   Ty1u1 = [ W1*S ;
             W3*T ]

The term ||Ty1u1||∞ is called a mixed-sensitivity cost function because it penalizes both the sensitivity S(s) and the complementary sensitivity T(s). Loop shaping is achieved when you choose W1 to have the target loop shape for frequencies ω < ωc, and you choose 1/W3 to be the target for ω > ωc. In choosing design specifications W1 and W3 for a mixsyn controller design, you need to take care to be sure that your 0 db crossover frequency for the Bode plot of W1 is below the 0 db crossover frequency of 1/W3, as shown in Figure 2-3, so that there is a gap for the desired loop shape Gd to pass between the performance bound
W1 and your robustness bound W3. Otherwise, your performance and robustness requirements will not be achievable.

[Figure: block diagram of the augmented plant P(s): the exogenous input u1 and the controller output u2 enter the plant G; the weight W1 acts on the error to give output y1a, the weight W3 acts on the plant output to give y1b, and the controller K closes the loop from the measured output y2 to u2]

Figure 2-7: MIXSYN H∞ mixed-sensitivity loop shaping Ty1u1

Example: NASA HiMAT design using MIXSYN

To do a mixsyn H∞ mixed-sensitivity synthesis design on the HiMAT model, start with the plant model G created in "Example: NASA HiMAT loop shaping" above and type the following commands:

% Set up the performance and robustness bounds W1 & W3
s = zpk('s');                  % Laplace variable s
MS = 2; AS = .03; WS = 5;
W1 = (s/MS+WS)/(s+AS*WS);
MT = 2; AT = .05; WT = 20;
W3 = (s+WT/MT)/(AT*s+WT);

% Compute the H-infinity mixed-sensitivity optimal controller K1
[K1,CL1,GAM1] = mixsyn(G,W1,[],W3);

% Next compute and plot the closed-loop system.
% Compute the loop L1, sensitivity S1, and comp sensitivity T1:
L1 = G*K1;
I = eye(size(L1));
S1 = feedback(I,L1);           % S1 = inv(I+L1)
T1 = I - S1;

% Plot the results:
% step response plots
step(T1,1.5);
title('\alpha and \theta command step responses');

% frequency response plots
figure;
sigma(I+L1,'r--',T1,'b',L1,'k',W1/GAM1,'k:',GAM1/W3,'k:',{.1,100});
grid
legend('1/\sigma(S) performance','\sigma(T) robustness',...
   '\sigma(L) loopshape','\sigma(W1) performance bound',...
   '\sigma(1/W3) robustness bound');

The resulting mixsyn singular value plots are shown below.

[Figure: singular value plot from 0.1 to 100 rad/sec showing 1/σ(S), σ(T), σ(L), and the performance and robustness bounds]

Figure 2-8: MIXSYN design results for NASA HiMAT
Loop Shaping Commands

The Robust Control Toolbox gives you several choices for shaping the frequency response properties of multi-input/multi-output (MIMO) feedback control loops. Some of the main commands that you are likely to use for loop shaping design, and associated utility functions, are listed below:

MIMO loop shaping commands
loopsyn    H∞ loop shaping controller synthesis
ltrsyn     LQG loop transfer recovery
mixsyn     H∞ mixed-sensitivity controller synthesis
ncfsyn     Glover-McFarlane H∞ normalized coprime factor loop shaping controller synthesis

MIMO loop shaping utility functions
augw       Augmented plant for weighted H2 and H∞ mixed-sensitivity control synthesis
makeweight Weights for H-infinity mixed sensitivity (mixsyn, augw)
sigma      Singular value plots of LTI feedback loops
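As a sketch of the weight-building utility (the numeric values below are illustrative): makeweight returns a first-order weight from its DC gain, its 0 db crossover frequency, and its high-frequency gain.

```matlab
% Sketch: a first-order performance weight built with makeweight.
W1 = makeweight(100, 10, 0.1);   % DC gain 100, 0 db crossover at
                                 % 10 rad/sec, high-frequency gain 0.1
bodemag(W1); grid                % inspect the resulting weight shape
```

A weight of this shape is a typical W1 input for mixsyn or augw.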
3 Model Reduction for Robust Control

Introduction - Motivations; Hankel singular values
Overview of Model Reduction Techniques - Hankel singular value based model reduction routines and their error bounds
Approximating Plant Models - Additive Error Methods - Model order reduction examples
Approximating Plant Models - Multiplicative Error Method - The multiplicative error method is presented, followed by a comparison between additive and multiplicative methods
Using Modal Algorithms - Rigid body dynamics; how to keep the jω-axis poles of a plant during model reduction
Reducing Large Scale Models - Large size plants; strategies for reducing very large plant models
Using Normalized Coprime Factor Methods - Model order reduction examples
References - Sources for model reduction theory
Introduction

In the design of robust controllers for complicated systems, model reduction arises in several places:

1 It is desirable to simplify the best available model in light of the purpose for which the model is to be used, namely, to design a control system to meet certain specifications.

2 To speed up the simulation process in the design validation stage, using a smaller size model with most of the important system dynamics preserved is highly desirable.

3 Finally, if a modern control method such as LQG or H∞ is employed, for which the complexity of the control law is not explicitly constrained, the order of the resultant controller is likely to be considerably greater than is truly needed. A good model reduction algorithm applied to the control law can sometimes significantly reduce control law complexity with little change in control system performance.

Model reduction routines in this toolbox can be put into two categories:

1 Additive error method: the reduced order model has an additive error bounded by an error criterion.

2 Multiplicative error method: the reduced order model has a multiplicative or relative error bounded by an error criterion.

The error is measured in terms of peak gain across frequency (H∞ norm), and the error bounds are a function of the neglected Hankel singular values.

Hankel Singular Values

In control theory, eigenvalues define system stability, whereas Hankel singular values define the "energy" of each state in the system. Keeping larger "energy" states of a system preserves most of its characteristics in terms of stability, frequency and time responses. The model reduction techniques presented here are all based on the Hankel singular values of a system. They can achieve a reduced order model that preserves the majority of the system characteristics.
Mathematically, given a stable state-space system (A,B,C,D), its Hankel singular values are defined as [1]

   σH,i = sqrt( λi(PQ) )

where P and Q are the controllability and observability grammians satisfying

   A P + P A' = −B B'
   A' Q + Q A = −C' C

For example,

rand('state',1234);
randn('state',5678);
G = rss(30,4,3);
hankelsv(G)

returns a Hankel singular value plot as follows:

[Figure: bar plot of the 30 Hankel singular values of G, decaying from about 10 toward zero beyond state 15]

which shows that system G has most of its "energy" stored in states 1 through 15 or so. We will show later on that using the model reduction routines to keep a 15-state reduced model will preserve most of its dynamic response.
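The definition can be verified directly from the grammians; a small sketch on an arbitrary random model:

```matlab
% Check of the definition: Hankel singular values from the grammians.
G = rss(8,2,2);                     % small random stable example
P = gram(G,'c');                    % controllability grammian
Q = gram(G,'o');                    % observability grammian
sv1 = sort(sqrt(real(eig(P*Q))),'descend');  % sqrt of eig(P*Q)
sv2 = hankelsv(G);                  % toolbox computation
[sv1 sv2]                           % the two columns should agree
```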
Overview of Model Reduction Techniques

The Robust Control Toolbox offers several algorithms for model approximation and order reduction. These algorithms let you control the absolute or relative approximation error, and are all based on the Hankel singular values of the system.

As discussed in previous sections, robust control theory quantifies a system uncertainty as either Additive or Multiplicative types. These model reduction routines are also categorized into two groups: Additive Error and Multiplicative Error types. In other words, some model reduction routines produce a reduced order model Gred of the original model G with a bound on the error ||G − Gred||∞, i.e., the peak gain across frequency. Others produce a reduced order model with a bound on the relative error ||G^-1 (G − Gred)||∞. These theoretical bounds are based on the "tails" of the Hankel singular values of the model:

Additive Error Bound [1]:

   ||G − Gred||∞ ≤ 2 Σ (i = k+1 to n) σi

where σi denotes the ith Hankel singular value of the original system G.

Multiplicative (Relative) Error Bound [2]:

   ||G^-1 (G − Gred)||∞ ≤ Π (i = k+1 to n) ( 1 + 2σi( sqrt(1 + σi^2) + σi ) ) − 1

where σi denotes the ith Hankel singular value of the phase matrix of the model G (see the reference page on bstmr).

Table 3-1: Top-Level Model Reduction Command

reduce     Main interface to model approximation algorithms
Overview of Model Reduction Techniques Table 32: Normalized Coprime Balanced Model Reduction command Method ncfmr Description Normalized coprime balanced truncation Table 33: Additive Error Model Reduction commands Method balancmr schurmr hankelmr Description Squareroot balanced model truncation Schur balanced model truncation Hankel minimum degree approximation Table 34: Multiplicative Error Model Reduction command Method bstmr Description Balanced stochastic truncation Table 35: Additional Model Reduction Tools Method modreal slowfast stabproj Description Modal realization and truncation Slow and fast state decomposition Stable and antistable state projection 35 .
'g. G = rss(30. % balanced truncation to models with sizes 12:16 [g1. % or use reduce % Schur balanced truncation by specifying `MaxError' [g2.5.Additive Error Methods Given a system in LTI form.0123 info1.8529 or plot the model error vs.info1] = balancmr(G. sigma(G. the following commands reduce the system to any desired order as one specifies.2]). rand('state'.0.ErrorBound(1) % 2. reduced models g1 and g2.1234). The judgement call has based on its Hankel Singular Values as shown in previous paragraph.g2.:.[1.g1.12:16). 30 Singular Values G gr g2 25 20 15 Singular Values (dB) 10 5 0 −5 −10 −4 10 10 −3 10 −2 10 10 Frequency (rad/sec) −1 0 10 1 10 2 10 3 To check whether the theoretical error bound is satisfied: norm(Gg1(:.'r'.') shows a comparison plot of the original model G.3 Model Reduction for Robust Control Approximating Plant Models .'MaxError'.0.0.8.3).'b'. randn('state'. error bound via the following commands 36 .1).4.'inf') % 2.5678).info2] = schurmr(G.
Approximating Plant Models .Additive Error Methods [sv.w.ylabel('SV').:.info1.w] = sigma(Gg1(:. title('Error Bound and Model Error') 10 1 Error Bound and Model Error 10 0 10 −1 SV 10 −2 10 −3 10 −4 10 −3 10 −2 10 −1 10 rad/sec 0 10 1 10 2 10 3 37 .ErrorBound(1)*ones(size(w))) xlabel('rad/sec').sv. loglog(w.1)).
bode(G.'order'.'order'.infos] = reduce(G.gr. Clearly the phase matching algorithm using bstmr provides a better fit in the Bode plot.bode(G. [gs.'b'.Multiplicative Error Method In most cases.'r'). title('Additive Error Method') figure(2). hence producing more accurate reduced order model than the Additive Error methods. figure(1).'r'). randn('state'.'bst'. The following commands illustrate the significance of a multiplicative error model reduction method as compared to any additive error type.1.3 Model Reduction for Robust Control Approximating Plant Models . [gr. G = rss(30.gs.'algo'.7). title('Relative Error Method') 10 5 0 −5 Additive Error Method Magnitude (dB) Phase (deg) −10 −15 −20 −25 −30 −35 −40 135 90 45 0 −45 −90 −135 10 −2 10 −1 10 Frequency (rad/sec) 0 10 1 10 2 38 . This characteristics is obviously shown in system models with low damped poles.1234).1).'b'.'algo'.'balance'.infor] = reduce(G. Multiplicative Error model reduction method bstmr tends to bound the relative error between the original and reduced order models across the frequency range of interests.5678). rand('state'.7).
may produce a reduced order model missing those low damped poles/zeros frequency regions. Balanced Stochastic Method (bstmr) produces a better reduced order model fit in those frequency range to make multiplicative error small. for some systems with low damped poles/zeros. or hankelmr only care about minimizing the overall “absolute” peak error. 39 . schurmr.Multiplicative Error Method 20 Relative Error Method 10 Magnitude (dB) Phase (deg) 0 −10 −20 −30 −40 180 135 90 45 0 −45 −90 −135 10 −2 10 −1 10 Frequency (rad/sec) 0 10 1 10 2 Therefore.Approximating Plant Models . Whereas additive error methods such as balancmr.
e. Gjw.d = 0. modreal puts a system into its Modal Form with eigenvalues appearing on the diagonal of its Amatrix. All the blocks will be ordered in ascending order based on their eigenvalues magnitudes by default or descending order based on their real parts. specifying the number of jωaxis poles splits the model into two systems with one containing only jωaxis dynamics. Therefore. the final approximation of the model is simply Gjw+Gred. [Gjw. the other containing the nonjω axis dynamics.3 Model Reduction for Robust Control Using Modal Algorithms Rigid Body Dynamics In many cases. % only one rigid body dynamics G2. After G2 is further reduced to Gred. rigid body dynamics of a flexible structure plant or integrators of a controller.d = Gjw.ylabel('NonRigid Body') Further model reduction can be done on G2 without any numerical difficulty. % put DC gain of G into G2 subplot(211). a model’s jωaxis poles are important to keep after model reduction.5678)..1).sigma(G2).G2] = modreal(G.1. G = rss(30.d.g. 310 . then a unique routine modreal serves the purpose nicely. randn('state'.1234).1). and complex eigenvalues will appear in 2x2 real blocks. Real eigenvalues will appear in 1x1 blocks.sigma(Gjw).ylabel('Rigid Body') subplot(212). rand('state'.
hankelmr.1).1234).5 3 2.info] = reduce(G). bstmr.5 abs 2 1.'b'. not shown) 4 3. G = rss(30.5 0 0 5 10 15 Order 20 25 30 35 311 .1. schurmr. a Hankel singular value plot will be shown as follows Hankel Singular Values (1 of jw−axis pole(s) = inf.5 1 0. The following single command creates a size 8 reduced order model from its original 30state model: rand('state'. [gr.Using Modal Algorithms This process of splitting jωaxis poles has been builtin and automated in all the model reduction routines (balancmr.gr.'r') Without specifying the size of the reduced order model. randn('state'. % choose a size of 8 at prompt bode(G. hankelsv) so that users need not to worry about splitting the model to begin with.5678).
Again. 312 . the rigid body dynamics has been preserved for further controller design.3 Model Reduction for Robust Control 30 25 20 15 Bode Diagram Original G (30−state) Gred (8−state) Magnitude (dB) Phase (deg) 10 5 0 −5 −10 −15 −20 720 630 540 450 360 270 180 10 −2 10 −1 10 0 10 Frequency (rad/sec) 1 10 2 10 3 The default algorithm balancmr of reduce has done a great job of approximating a 30state model with just 8state.
For a typical 240state flexible spacecraft model in spacecraft industry. most Hankel based routines can fail to produce a good reduced order model. Any modern robust control design technique mentioned in this toolbox can then be easily applied to this smaller size plant for a controller design. 313 . then truncates the dynamic model to a intermediate stage model with a comfortable size of 50state or so. From this point on. and low damped dynamics. those more sophisticate Hankel singular value based routines can further reduce this intermediate stage model in a much more accurate fashion to a smaller size for final controller design.Reducing Large Scale Models Reducing Large Scale Models For some really large size problems (states > 200). modreal puts the large size dynamics into the modal form. applying modreal and bstmr (or any other additive routines) in sequence can reduce the original 240state plant dynamics to a 7state 3axis model including rigid body dynamics. modreal turns out to be the only way to start the model reduction process. Because of the size and numerical properties associated with those large size.
info2] = ncfmr(K). without specifying the size of the reduced order model. % The same model G used in the 1st example [Kred.Kred) Again. However.3). rand('state'. K= rss(30.5678).3 Model Reduction for Robust Control Using Normalized Coprime Factor Methods A special model reduction routine ncfmr produces a reduced order model by truncating a balanced coprime set of a given model. previously mentioned methods (except ncfmr) can nicely preserve the original integrator(s) in the model. It can directly simplify a modern controller with integrators to a smaller size by balanced truncation of the normalized coprime factors. 120 Singular Values Original K (30−state) Kred (15−state) 100 80 Singular Values (dB) 60 40 20 0 −20 10 −4 10 −3 10 −2 10 Frequency (rad/sec) −1 10 0 10 1 10 2 10 3 If integral control is important. In this case.1234). the integrators will not be preserved afterwards. any model reduction routine presented here will plot a Hankel singular value bar chart and prompt for user decision for a reduced model size. enter “15”. sigma(K. randn('state'.4. It does not need modreal for pre/post processing like the other routines do. 314 .
on Automat. Safonov and R. and D. no. J. 35. vol.error Bounds. 39. [16] K. 315 .” IEEE Trans. [17] M. no. Glover. April 1990. J. vol. NJ: Prentice Hall. “Optimal Hankel Model Reduction for Nonminimal Systems. J. Contr. and Their L∝ . of Adaptive Control and Signal Processing. Safonov and R. 34. 729733. Safonov and R. vol. on Automat. “A Schur Method for Balanced Model Reduction. Obinata and B. Limebeer. Glover. 1996. July 1989. Model Reduction for Control System Design. Y. Y. pp. 7. 4. pp. 259272 (1988). 6. Control.” International J. Anderson.. [18] M.” Int.. London: SpringerVerlag. 1984. C. “All Optimal Hankel Norm Approximation of Linear Multivariable Systems. D. Doyle and K. “Model Reduction for Robust Control: A Schur Relative Error Method.” IEEE Trans. 2. G.References References [15] K. pp. 11451193. Chiang. [20] G. Chiang. Chiang. N. 496502. Zhou. EnglewoodCliffs. O. Y. [19] M. 2001. Contr. Robust and Optimal Control. G. vol. G. no.
3 Model Reduction for Robust Control 316 .
42) Robustness Analysis (p. 48) MultiInput. MultiOutput Robustness Analysis (p. 412) WorstCase Gain Analysis (p. 420) What is uncertainty modeling? Why is it important? Designing and comparing various controllers Creating and analyzing uncertain MIMO models How to perform worstcase gain analysis Summary of Robustness Analysis Tools Tables of functions available for robust control (p.4 Robustness Analysis Uncertainty Modeling (p. 423) .
which often quantifies model uncertainty by describing absolute or relative uncertainty in the process’s frequency response. timeinvariant objects. and 2 Frequencydomain uncertainty. These can be used to create coarse and simple or detailed and complex descriptions of the model uncertainty present within your process models. The primary building blocks. notions such as gain and phase margins (and their generalizations) help us quantify the sensitivity of the above ideas (stability and performance) in the face of model uncertainty. Creating Uncertain Models of Dynamic Systems The two dominant forms of model uncertainty are: 1 Uncertainty in parameters of the underlying differential equation models. Finally. model uncertainty) is the primary job of the feedback control system. are uncertain real parameters and uncertain linear. or arbitrarily small disturbances. and understanding the effects of uncertainty are important tasks of the control engineer. which is the imprecise knowledge of how the control input directly affects the feedback variables. Once formulated. low frequency disturbances) without catastrophically increasing the effect of other dominant forms (sensor noise. The Robust Control Toolbox has builtin features allowing you to specify model uncertainty in a simple and natural manner. Reducing the effect of some forms of uncertainty (initial conditions. highlevel system robustness tools can help you analyze the potential degradation of stability and performance of the closedloop system brought on by the system model uncertainty.4 Robustness Analysis Uncertainty Modeling Dealing with. Rolloff filters in highfrequency ranges is how we are then forced to deal with highfrequency sensor noise in a feedback system. 42 . Closedloop stability is the manner in which we deal with the (always present) uncertainty in initial conditions. 
Highgain feedback in low frequency ranges is a typical manner in which we deal with the effect of unknown biases and disturbances acting on the process output. called uncertain elements or atoms.
it also has variability. As an example. nominal value 5.5000 5. Being uncertain. H = tf(1.'Percentage'. along with conventional system creation commands (such as ss and tf). use the uncertain real parameter bw to model a first order system whose bandwidth is between 4.5. Continuous System 43 . or uss. 1 Output.5 and 5.[1/bw 1]) USS: 1 State. variability = [10 10]% get(bw) Name: 'bw' NominalValue: 5 Mode: 'Percentage' Range: [4.5000] PlusMinus: [0.5 rad/s. and a percentage uncertainty of 10%. • a range about the nominal.5000] Percentage: [10 10] AutoSimplify: 'basic' Note that the range of variation (Range property) and the additive deviation from nominal (the PlusMinus property) are consistent with the Percentage property value.Uncertainty Modeling Using these two basic building blocks.10) This creates a ureal object. you can easily create uncertain system models.5000 0. NominalValue 5. Creating Uncertain Parameters An uncertain parameter has a name (used to identify it within an uncertain system with many uncertain parameters) and a nominal value. bw = ureal('bw'. with name 'bw'. 1 Input. described in either one of the following manners • an additive deviation from the nominal. The result is a uncertain statespace object. View its properties using the get command Uncertain Real Parameter: Name bw. or • a percentage deviation from the nominal. Create a real parameter. You can create statespace and transfer function models with uncertain real coefficients using ureal objects.
called a uss object.4 Robustness Analysis bw: real. bode(H.NominalValue) ans = 5 Next.{1e1 1e2}). The nominal value of H is a statespace object. nominal = 5. 1 occurrence Note that the result H is an uncertain system. pole(H. variability = [10 10]%. Verify that the pole is at 5. use bode and step to examine the behavior of H. 0 Bode Diagram −5 Magnitude (dB) Phase (deg) −10 −15 −20 −25 −30 0 −45 −90 10 −1 10 0 10 Frequency (rad/sec) 1 10 2 44 .
” The precise meaning is not clear.5 Time (sec) 1 1.9 0. say 30 rad/s. It is common to hear “The model is good out to 8 radians/second.3 0. and for frequencies beyond. You can capture the more complicated uncertain behavior that typically occurs at high frequencies using the ultidyn uncertain element which is described next. the highfrequency rolls off at 20 dB/decade regardless of the value of bw. the model is not necessarily representative of the process behavior. Quantifying Unmodeled Dynamics An informal manner to describe the difference between the model of a process and the actual process behavior is in terms of bandwidth.5 While there are variations in the bandwidth and time constant of H.Uncertainty Modeling step(H) 1 Step Response 0. When coupled with a nominal model and a frequency 45 . The uncertain linear. but it is reasonable to believe that for frequencies lower than (say) 5 rad/s.5 0. A ultidyn object represents an unknown linear system whose only known attribute is a uniform magnitude bound on its frequency response. ultidyn can be used to model this type of knowledge.4 0. the model is accurate. the guaranteed accuracy of the model degrades.1 0 0 0.7 0. In the frequency range between 5 and 30.6 Amplitude 0.2 0.8 0. timeinvariant dynamics object.
3 Create a ultidyn object.[1 1]). using tf. Delta. with dependence on both Delta and bw.05. the firstorder system with an uncertain timeconstant. W. Continuous System Delta: 1x1 LTI. ultidyn objects can be used to capture uncertainty associated with the model dynamics. called the “weight. The utility makeweight is useful for creating 1st order weights with specific low and high frequency gains.” whose magnitude represents the relative uncertainty at each frequency. max. variability = [10 10]%. follow the steps below. nominal = 5. 46 . Gnom = H. The uncertain model G is formed by G = Gnom*(1+W*Delta). gain = 1. G = Gnom*(1+W*Delta) USS: 2 States. 1 occurrence bw: real. 2 Create a filter.4 Robustness Analysis shaping filter.9. Suppose that the behavior of the system modeled by H significantly deviates from its 1storder behavior beyond 9 rad/s.1 100] rad/s. Delta = ultidyn('Delta'. If the magnitude of W represents an absolute (rather than relative) uncertainty. 1 occurrence Note that the result G is also an uncertain system. W = makeweight(. The command below carry out the steps described above. ss or zpk. Gnom itself may already have parameter uncertainty. 1 Input. over the frequency range [0. Gnom. In order to model frequency domain uncertainty as described above using ultidyn objects. and specified gain crossover frequency.10). with magnitude bound equal to 1. for example you believe about 5% potential relative error at low frequency and increasing to 1000% at high frequency where H rolls off. You can use bode to make a Bode plot of 20 random samples of G's behavior. use the formula G = Gnom + W*Delta instead. 1 Create the nominal system. In this case Gnom is H. 1 Output.
Uncertainty Modeling bode(G. we design and compare two feedback controllers for G.{1e1 1e2}. 47 .25) 10 5 0 Magnitude (dB) Bode Diagram −5 −10 −15 −20 −25 −30 360 180 Phase (deg) 0 −180 10 −1 10 0 10 Frequency (rad/sec) 1 10 2 In the next section.
xi wn K1 wn K2 = = = = = 0. design two controllers.4 Robustness Analysis Robustness Analysis Next. with different ωn.[1 0]).) ωn 2ξω 2 K I = .1). Form the closedloop systems using feedback. T2 = feedback(G*K2. 48 . 3 and 7. 7.5 respectively.. It will not be surprising if the model variations lead to significant degradations in the closedloop performance. 3. tfinal = 3.1). tf([(2*xi*wn/51) wn*wn/5]. the design equations for KI and KP are (based on the nominl openloop time constant of 0.good steadystate tracking and disturbance rejection properties.[1 0]). Plot the step responses of 20 samples of each closedloop system. both achieving ξ=0. we design a feedback controller for G. Given desired closedloop damping ratio.2. K P = .5. 2 Note that the nominal closedloop bandwidth achieved by K2 is in a region where G has significant model uncertainty. T1 = feedback(G*K1. Since the plant model is nominally a firstorder lag. we chose a PI control architecture. tf([(2*xi*wn/51) wn*wn/5].– 1 5 5 In order to study how the uncertain behavior of G impacts the achievable closedloop bandwidth. ξ and natural frequency ωn. The goals of this design are the usual ones .707.707.
stabmarg1 stabmarg1 = ubound: 4.4959 [stabmarg2.2544 destabfreq: 10. stabmarg2 stabmarg2 = ubound: 1.destabu1.20) Step Response 1. the model variations have a greater effect.T2.0241 lbound: 4.report1] = robuststab(T1).5 −1 0 0. However.5 Amplitude 0 −0.0241 destabfreq: 3. [stabmarg1.5 1 1.2545 lbound: 1.'r'.'b'.destabu2.Robustness Analysis step(T1.5 Time (sec) 2 2.tfinal.5 3 The step responses for T2 exhibit a faster rise time since K2 sets a higher closed loop bandwidth.report2] = robuststab(T2).5249 49 . You can use robuststab to check the robustness of stability to the model variations.5 1 0.
.wcu2] = wcgain(S2). causing an instability at 3.It can tolerate up to 125% of modeled uncertainty.A destabilizing combination of 125% the modeled uncertainty exists. their performance is clearly affected to different degrees.8684 ubound: 1.5 rad/s.It can tolerate up to 402% of modeled uncertainty. The report variable briefly summarizes the analysis. report2 report2 = Uncertain System is robustly stable to modeled uncertainty.5152 [maxgain2.9025 critfreq: 3. . you can use wcgain. To determine how the uncertainty affects closedloop performance.4 Robustness Analysis The stabmarg variable gives lower and upper bounds on the stability margin.G*K1).A destabilizing combination of 402% the modeled uncertainty exists. To do this. and call wcgain. A stability margin greater than 1 means the system is stable for all values of the modeled uncertainty. report1 report1 = Uncertain System is robustly stable to modeled uncertainty. you can compute the worstcase effect of the uncertainty on the peak magnitude of the closedloop sensitivity (S=1/(1+GK)) function. S1 = feedback(1. This peak gain is typically correlated with the amount of overshoot in a step response.6671 critfreq: 11. maxgain2 maxgain2 = lbound: 4. A stability margin less than 1 means there are allowable values of the uncertain elements that make the system unstable. [maxgain1. causing an instability at 10.G*K2).0231 410 . form the closedloop sensitivity functions. .5 rad/s. . S2 = feedback(1.wcu1] = wcgain(S1). maxgain1 maxgain1 = lbound: 1. While both systems are stable for all variations.6031 ubound: 4.
wcu2).wcu1).'b'.'r'.NominalValue.NominalValue.usubs(S2. You can use usubs to substitute these worstcase values for uncertain elements. Use bodemag and step to make the comparison. we explore these robustness analysis tools further on a multiinput. The wcu variable contains specific values of the uncertain elements that achieve this worstcase behavior. multioutput system. as well as the specific frequency where the maximum gain occurs. while K2 achieves better nominal sensitivity than K1. hold on bodemag(S2.'r'). hold off Bode Diagram 20 10 0 Magnitude (dB) −10 −20 −30 −40 −50 −1 10 10 0 10 Frequency (rad/sec) 1 10 2 10 3 Clearly. bodemag(S1.usubs(S1. 411 .Robustness Analysis The maxgain variable gives lower and upper bounds on the worstcase peakgain of the Sensitivity transfer function.'b'). In the next section. and compare the nominal and worstcase behavior. Hence the worstcase performance of K2 is inferior to K1 for this particular uncertain model. the nominal closedloop bandwidth extends too far into the frequency range where the process uncertainty is very large.
The properties InputName. for example. 2state system whose model has parametric uncertainty in the statespace matrices.10). predominantly from a transfer function perspective. make uncertain A and C matrices.p 0]. Using the parameter. InputGroup and OutputGroup behave in exactly the same manner as all of the system objects (ss. eye(2). You can also create uncertain statespace models made up of uncertain statespace matrices.B. d. The NominalValue is a ss object.4 Robustness Analysis MultiInput. get(H) a: b: c: d: StateName: Ts: InputName: OutputName: InputGroup: OutputGroup: NominalValue: Uncertainty: Notes: UserData: [2x2 [2x2 [2x2 [2x2 {2x1 0 {2x1 {2x1 [1x1 [1x1 [2x2 [1x1 {} [] umat] double] umat] double] cell} cell} cell} struct] struct] ss] atomlist] The properties a. Moreover. c. a 2input. all of the analysis tools covered thus far can be applied to these systems as well. [0 p.0 0]). we've focused on simple uncertainty models of singleinput and singleoutput systems. p A B C H = = = = = ureal('p'.[0 0. 2output. MultiOutput Robustness Analysis So far.10.p 1]. although you will add frequency domain input uncertainty to the model in the next section.C. and StateName behave in exactly the same manner as ss objects. [1 p. and frd).'Percentage'. First create an uncertain parameter p. ss(A. You can view the properties of the uncertain system H using the get command. 412 . Consider. tf. OutputName. b. The B matrix happens to be notuncertain. zpk.
20. Similar statements hold for actuator in channel 2.45. Said differently. Use ultidyn objects Delta1 and Delta2. the actuator models are unitygain for all frequencies. 2 Outputs. variability = [10 10]%. 2output uncertain system. max. W1 = makeweight(. nominal = 10. 2 Inputs. 2 occurrences Note that G is a 2input. 413 . Delta2 and p.2. It has 4 states. Nevertheless it is known that the behavior of the actuator for channel 1 is both modestly uncertain (say 10%) at low frequencies and the high frequency behavior beyond 20 rad/s is not accurately modeled. with dependence on 3 uncertain elements. along with shaping filters W1 and W2 to add this form of frequency domain uncertainty into the model. with larger modest uncertainty at low frequency (say 20%) but accuracy out to 45 rad/s. Delta2 = ultidyn('Delta2'. gain = 1.1. W2 = makeweight(.1+W2*Delta2) USS: 4 States. Delta1. You can plot a 2second step response of several samples of G. 1 occurrence Delta2: 1x1 LTI. max.[1 1]).50). Continuous System Delta1: 1x1 LTI. MultiOutput Robustness Analysis Adding Independent Input Uncertainty to Each Channel The model for H did not include actuator dynamics. The 10% uncertainty in the natural frequency is obvious.MultiInput. and one each from the shaping filters W1 and W2 which are embedded in G. 1 occurrence p: real.[1 1]). gain = 1. Delta1 = ultidyn('Delta1'.50). 2 from H. G = H*blkdiag(1+W1*Delta1.
5 1 0.5 2 0 Time (sec) 0.5 0 −0.5 1 0.5 −2 Step Response From: In(2) Amplitude To: Out(1) To: Out(2) −2. start the Bode plot beyond the resonance.5 −1 −1. For clarity.5 2 You can plot Bode plot of 50 samples of G.5 1 1.5 0 0.5 −2 −2. 414 .5 −1 −1.2) From: In(1) 1.5 1. The highfrequency uncertainty in the model is also obvious.4 Robustness Analysis step(G.5 0 −0.5 1 1.
all of the closedloop systems will be uncertain as well.{13 100}. 2 inputs.50) From: In(1) 20 Bode Diagram From: In(2) To: Out(1) Magnitude (dB) . load mimoKexample size(K) Statespace model with 2 outputs. including sensitivity and complementary sensitivity at both the input and output. You can use the command loopsens to form all of the standard plant/controller feedback configurations. F = loopsens(G.K) F = Poles: [13x1 double] 415 . MultiOutput Robustness Analysis bode(G. Since G is uncertain.MultiInput. Phase (deg) To: Out(1) To: Out(2) To: Out(2) 0 −20 −40 360 0 −360 20 0 −20 −40 360 0 −360 10 Frequency (rad/sec) 2 10 2 ClosedLoop Robustness Analysis You need to load the controller and verify that it is 2input and 2output. and 9 states.
Stable is 1 if the nominal closedloop system is stable. Graph 50 samples along with the nominal. The suffix i and o refer to the input and output of the plant (G). 416 .” Hence Ti is mathematically the same as K ( I + GK ) G while Lo is G*K.4 Robustness Analysis Stable: Si: Ti: Li: So: To: Lo: PSi: CSo: 1 [2x2 [2x2 [2x2 [2x2 [2x2 [2x2 [2x2 [2x2 uss] uss] uss] uss] uss] uss] uss] uss] F is a structure with many fields. and CSo is mathematically the same as K ( I + GK ) –1 –1 You can examine the transmission of disturbances at the plant input to the plant output using bodemag on F.PSi. Finally P and C refer to the “plant” and “controller. T for complementary sensitivity. and F. In the remaining 10 fields.poles. The poles of the nominal closedloop system are in F. S stands for sensitivity. and L for openloop gain.
loopatatime disk margins and simultaneous multivariable margins.PSi.Sim] = loopmargin(G.':/'. and simultaneously at both input and output. individually at the plant output.MultiInput. [I.DI.SimO. The third output argument are the simultaneous gain and phase variations allowed in all input channels to the plant. They are computed for the nominal system and do not reflect the uncertainty models within G. SimI 417 .DO.K).{1e1 100}. MultiOutput Robustness Analysis bodemag(F.O. Explore the simultaneous margins individually at the plant input.SimI.50) From: In(1) 20 10 0 −10 −20 −30 −40 Magnitude (dB) Bode Diagram From: In(2) To: Out(1) To: Out(2) −50 −60 20 10 0 −10 −20 −30 −40 −50 −60 −1 10 10 0 10 1 10 Frequency (rad/sec) −1 2 10 0 10 1 10 2 Nominal Stability Margins You can use loopmargin to investigate loopatatime gain and phase margins.
This is not always the case in multiloop feedback systems.4769] PhaseMargin: [76.3957 76. Sim Sim = GainMargin: [0.3522 Nevertheless. the 418 . able to tolerate significant gain (more than +/50% in each channel) and phase 30 degrees variations simultaneously in all input and output channels of the plant. The 6th output argument are the simultaneous gain and phase variations allowed in all output channels to the plant.8882 30.5671 1. Robustness of Stability Model Uncertainty With loopmargin. complex uncertain system models.1180 8. when you consider all such variations simultaneously. multiloop system. When working with detailed. As expected.5441] Frequency: 6.3836] PhaseMargin: [76.1193 8. as well as phase variations up to 76 degrees. SimO SimO = GainMargin: [0. these numbers indicate a generally robust closedloop system. the margins are somewhat smaller than those at the input or output alone.7635] PhaseMargin: [30. These margins are computed only for the nominal system. The last output argument are the simultaneous gain and phase variations allowed in all input and output channels to the plant.3957] Frequency: 18.2287 This information implies that the gain at the plant input can vary in both channels independently by factors between (approximately) 1/8 and 8. and do not reflect the uncertainty explicitly modeled by the ureal and ultidyn objects.4 Robustness Analysis SimI = GainMargin: [0.3522 Note that the simultaneous margins at the plant output are similar to those at the input. you determined various margins of the nominal.8882] Frequency: 18.5441 76.
in terms of stability. Use any of the closedloop systems within F = loopsens(G. In fact.. and hence the stability properties are the same. In this example. to the variations modeled by the uncertain parameters Delta1. In the next section.To.Si. This analysis confirms what the loopmargin analysis suggested. we study the effect of these variations on the closedloop output sensitivity function. [stabmarg.7576 report report = Uncertain System is robustly stable to modeled uncertainty. You can use robuststab to check the stability margin of the system to these specific modeled variations. Delta2 and p. etc. use robuststab to compute the stability margin of the closedloop system represented by Delta1. have the same internal dynamics. . F. .2175 destabfreq: 13. Delta2 and p. stabmarg stabmarg = ubound: 2. F. All of them.So).desgtabu. 419 .K).MultiInput.It can tolerate up to 222% of modeled uncertainty. causing an instability at 13.8 rad/s.2175 lbound: 2. MultiOutput Robustness Analysis conventional margins computed by loopmargin may not always be indicative of the actual stability margins associated with the uncertain elements.A destabilizing combination of 222% the modeled uncertainty exists. The closedloop system is quite robust.report] = robuststab(F. the system can tolerate more than twice the modeled uncertainty without losing closedloop stability.
So.So. bodemag(F.NominalValue.freq] = norm(F. occurring at a frequency of 36 rad/s.'inf') PeakNom = 1. It clearly shows decent disturbance rejection in all channels at low frequency.0483 The peak is about 1.4 Robustness Analysis WorstCase Gain Analysis You can plot the Bode magnitude of the nominal output sensitivity function. 420 .1317 freq = 7.NominalValue. [PeakNom.13.{1e1 100}) From: In(1) 10 0 −10 To: Out(1) Bode Diagram From: In(2) −20 −30 −40 Magnitude (dB) −50 −60 10 0 −10 −20 −30 To: Out(2) −40 −50 −60 −70 −80 −90 −100 −1 10 10 0 10 1 10 Frequency (rad/sec) −1 2 10 0 10 1 10 2 You can compute the peak value of the maximum singular value of the frequency response matrix using norm.
2.wcu).1 using usubs. Delta2 and p vary over their ranges? You can use wcgain to answer this.6 0.5) From: In(1) 1. when the uncertain elements Delta1. You can substitute the values for Delta1. Make the substitution in the output complementary sensitivity.4 1. step(F. The frequency where the peak is achieved is about 8.4 0. and do a step response. [maxgain.5.1017 2. which is the worst combination of uncertain values in terms of output sensitivity amplification does not show significant degradation of the command response significantly.1 and 2.usubs(F.1835 8.2 1 0.wcu] maxgain maxgain = lbound: ubound: critfreq: = wcgain(F.5546 The analysis indicates that the worstcase gain is somewhere between 2.To.So).WorstCase Gain Analysis What is the maximum output sensitivity gain that is achieved. Delta2 and p that achieve the gain of 2.5 Step Response From: In(2) 1 To: Out(1) Amplitude To: Out(2) 0. 2.8 0.5 0 1. The settling time is increased by about 421 .2 0 −0.To.NominalValue.2 0 1 2 3 4 5 0 Time (sec) 1 2 3 4 5 The perturbed response.
from 2 to 4. 422 . but is still quite small.4 Robustness Analysis 50%. and the offdiagonal coupling is increased by about a factor of about 2.
Summary of Robustness Analysis Tools

Function     Description
ureal        Creates an uncertain real parameter
ultidyn      Creates uncertain, linear, time-invariant dynamics
uss          Creates an uncertain state-space object from uncertain state-space matrices
ufrd         Creates an uncertain frequency response object
loopsens     Computes all relevant open- and closed-loop quantities for a MIMO feedback connection
loopmargin   Computes loop-at-a-time, as well as MIMO, gain and phase margins for a multiloop system, including the simultaneous gain/phase margins
robustperf   Computes the robustness performance of uncertain systems
robuststab   Computes the robust stability margin of a nominally stable uncertain system
wcgain       Computes the worst-case gain of a nominally stable uncertain system
wcmargin     Computes worst-case (over uncertainty) loop-at-a-time disk-based gain and phase margins
wcsens       Computes worst-case (over uncertainty) sensitivity of a plant-controller feedback loop
5 H-Infinity and Mu Synthesis

This chapter covers an introduction to H∞ and structured singular value (µ) control design.

H-Infinity Performance: Discusses a modern approach to characterizing closed-loop performance
Application of H∞ and mu to Active Suspension Control: A fully worked example of designing a compensator for active suspension control
Functions for Control Design: A discussion of the functions you can use for robust control design
Appendix: Interpretation of H-Infinity Norm: Details about norms and their properties
References: A list of relevant papers and books about H-infinity and structured singular value design
H-Infinity Performance

Performance as Generalized Disturbance Rejection

The modern approach to characterizing closed-loop performance objectives is to measure the size of certain closed-loop transfer function matrices using various matrix norms, such as L1, H2, and H∞. Matrix norms provide a measure of how large output signals can get for certain classes of input signals. Optimizing these types of performance objectives over the set of stabilizing controllers is the main thrust of recent optimal control theory. Hence, it is important to develop a clear understanding of how many types of control objectives can be posed as a minimization of closed-loop transfer functions.

Consider a tracking problem, with disturbance rejection, measurement noise, and control input signal limitations. K is some controller to be designed and G is the system we want to control.

[Figure: Typical Closed-Loop Performance Objective; the reference, an external force disturbance, and sensor noise enter a feedback loop around K and G, producing a tracking error and a control input]

A reasonable, though not precise, design objective would be: design K to keep tracking errors and control input signals small for all reasonable reference commands, sensor noises, and external force disturbances. Hence, a natural performance objective is the closed-loop gain from exogenous influences (reference commands, sensor noise, and external force disturbances) to regulated variables (tracking errors and control input signal).
Specifically, let T denote the closed-loop mapping from the outside influences to the regulated variables:

   [tracking error; control input] = T [reference; external force; noise]

We can assess performance by measuring the gain from outside influences to regulated variables; in other words, good performance is associated with T being small. Since the closed-loop system is a multi-input, multi-output (MIMO) dynamic system, there are two different aspects to the gain of T:

• Spatial (vector disturbances and vector errors)
• Temporal (dynamical relationship between input/output signals)

Hence the performance criterion must account for:

• Relative magnitude of outside influences
• Frequency dependence of signals
• Relative importance of the magnitudes of regulated variables

So, if the performance objective is in the form of a matrix norm, it should actually be a weighted norm ||WL T WR||, where the weighting function matrices WL and WR are frequency dependent, to account for bandwidth constraints and spectral content of exogenous signals. A natural (mathematical) manner to characterize acceptable performance is in terms of the MIMO ||·||∞ (H∞) norm. See the Appendix at the end of this chapter for an interpretation of the H∞ norm and signals.

Interconnection with Typical MIMO Performance Objectives

The closed-loop performance objectives are formulated as weighted closed-loop transfer functions that are to be made small through feedback. A generic example, which includes many relevant terms, is shown in block diagram form in Figure 5-1. In the diagram, G denotes the plant model and K is the feedback controller.
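To make the weighted norm concrete, here is a minimal numerical sketch in plain Python (not MATLAB; the first-order T, WL, and WR below are hypothetical, chosen only for illustration). It approximates the weighted H∞ norm ||WL T WR||∞ of a SISO map by gridding the frequency axis and taking the largest weighted magnitude.

```python
# Hypothetical SISO closed-loop map and weights (illustrative only):
# T(s) = 1/(s+1); WL(s) = 10/(s+10) emphasizes low frequency;
# WR(s) = 1 applies no input shaping.
def T(s):  return 1.0 / (s + 1.0)
def WL(s): return 10.0 / (s + 10.0)
def WR(s): return 1.0

def weighted_hinf(grid):
    # crude approximation of sup over w of |WL(jw) T(jw) WR(jw)| on a grid
    return max(abs(WL(1j*w) * T(1j*w) * WR(1j*w)) for w in grid)

grid = [0.0] + [10.0**(k / 100.0) for k in range(-300, 301)]  # 0, 1e-3..1e3
print(round(weighted_hinf(grid), 3))  # 1.0, attained at w = 0
```

A low-pass WL de-emphasizes the high-frequency behavior of T, which is exactly the role the weighting matrices play: they decide which frequency ranges count toward the norm.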
Figure 5-1: Generalized and Weighted Performance Block Diagram

The blocks in Figure 5-1 might be scalar (SISO) and/or multivariable (MIMO), depending on the specific example. The mathematical objective of H∞ control is to make the closed-loop MIMO transfer function Ted satisfy ||Ted||∞ < 1. The weighting functions are used to scale the input/output transfer functions such that, when ||Ted||∞ < 1, the relationship between d~ and e~ is suitable.

Performance requirements on the closed-loop system are transformed into the H∞ framework with the help of weighting or scaling functions. Weights are selected to account for the relative magnitude of signals, their frequency dependence, and their relative importance. This is captured in Figure 5-1, where the weights or scalings [Wcmd, Wdist, Wsnois] are used to transform and scale the normalized input signals [d1, d2, d3] into physical units defined as [d~1, d~2, d~3]. Similarly, the weights or scalings [Wact, Wperf1, Wperf2] transform and scale
physical units into the normalized output signals [e1, e2, e3]. An interpretation of the signals, weighting functions, and models follows.

Signal   Meaning
d1       Normalized reference command
d~1      Typical reference command in physical units
d2       Normalized exogenous disturbances
d~2      Typical exogenous disturbances in physical units
d3       Normalized sensor noise
d~3      Typical sensor noise in physical units
e1       Weighted control signals
e~1      Actual control signals in physical units
e2       Weighted tracking errors
e~2      Actual tracking errors in physical units
e3       Weighted plant errors
e~3      Actual plant errors in physical units

Wcmd

Wcmd is included in H∞ control problems that require tracking of a reference command. It shapes (in magnitude and frequency) the normalized reference command signals into the actual (or typical) reference signals that you expect to occur; that is, it describes the magnitude and the frequency dependence of the reference commands generated by the normalized reference signal. Normally Wcmd is flat at low frequency and rolls off at high frequency. For example, in a flight control problem, fighter pilots can (and will) generate stick input reference commands up to a bandwidth of about 2 Hz. Suppose that the stick has a maximum travel of three inches. Pilot commands could be modeled as normalized signals passed through a first-order filter:
   Wcmd = 3 / ( (1/(2·2π)) s + 1 )

Wmodel

Wmodel represents a desired ideal model for the closed-loop system and is often included in problem formulations with tracking requirements. Inclusion of an ideal model for tracking is often called a model matching problem: the objective of the closed-loop system is to match the defined model. For good command tracking response, you might desire the closed-loop system to respond like a well-damped second-order system. In the fighter pilot example, suppose that roll rate is being commanded and a 10°/second response is desired for each inch of stick motion. The ideal model would then be

   Wmodel = 10 ω² / ( s² + 2ζωs + ω² )

for a specific desired natural frequency ω and desired damping ratio ζ. Unit conversions might be necessary to ensure exact correlation between the ideal model and the closed-loop system.

Wdist

Wdist shapes the frequency content and magnitude of the exogenous disturbances affecting the plant. For example, consider an electron microscope as the plant. The dominant performance objective is to mechanically isolate the microscope from outside mechanical disturbances, such as ground excitations, sound (pressure) waves, and air currents. You would capture the spectrum and relative magnitudes of these disturbances with the transfer function weighting matrix Wdist.
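A quick numerical check of the ideal model's steady-state gain, in plain Python (the natural frequency ωn = 5 rad/s and damping ζ = 0.7 below are assumed values for illustration; the text leaves them unspecified):

```python
def Wmodel(s, wn=5.0, zeta=0.7):
    # ideal well-damped second-order model: 10 deg/s of roll rate per inch
    # of stick in steady state (wn and zeta are assumed values)
    return 10.0 * wn**2 / (s**2 + 2.0*zeta*wn*s + wn**2)

dc_gain = abs(Wmodel(0j))
print(dc_gain)  # 10.0 -> 10 deg/s of roll rate per inch of stick at DC
```

Evaluating the model at s = 0 confirms the unit bookkeeping: the DC gain equals the commanded 10°/second-per-inch response, independent of the chosen ωn and ζ.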
Wperf1

Wperf1 weights the difference between the response of the closed-loop system and the ideal model Wmodel. Often you may desire accurate matching of the ideal model at low frequency and require less accurate matching at higher frequency, in which case Wperf1 is flat at low frequency, rolls off at first or second order, and flattens out at a small, nonzero value at high frequency. The inverse of the weight is related to the allowable size of tracking errors, in the face of the reference commands and disturbances described by Wref and Wdist.

Wperf2

Wperf2 penalizes variables internal to the process G, such as actuator states that are internal to G, or other variables that are not part of the tracking objective.

Wact

Wact is used to shape the penalty on control signal use. Wact is a frequency-varying weighting function used to penalize limits on the deflection/position, deflection rate/velocity, etc., response of the control signals, in the face of the tracking and disturbance rejection objectives defined above. Each control signal is usually penalized independently.

Wsnois

Wsnois represents frequency-domain models of sensor noise. Each sensor measurement fed back to the controller has some noise, which is often higher in one frequency range than another. For example, medium-grade accelerometers have substantial noise at low frequency and at high frequency, so the corresponding Wsnois weight would be larger at low and high frequency and have a smaller magnitude in the mid-frequency range. Displacement or rotation measurement is often quite accurate at low frequency and in steady state, but responds poorly as frequency increases; the weighting function for this sensor would be small at low frequency, gradually increase in magnitude as a first- or second-order system, and level out at high frequency. The Wsnois weight tries to capture this information.

Hsens

Hsens represents a model of the sensor dynamics or an external anti-aliasing filter. The transfer functions used to describe Hsens are based on physical
characteristics of the individual components, derived from laboratory experiments or based on manufacturer measurements. These models might also be lumped into the plant model G.

This generic block diagram has tremendous flexibility, and many control performance objectives can be formulated in the H∞ framework using this block diagram description.

Robustness in the H∞ Framework

Performance and robustness tradeoffs in control design were discussed in the context of multivariable loop shaping in Chapter ***. In the H∞ control design framework, you can include robustness objectives as additional disturbance-to-error transfer functions to be kept small. Consider the following figure of a closed-loop feedback system with multiplicative uncertainty, ∆M(s), at the plant input and additive uncertainty, ∆A(s), around the plant G(s). Theorems 1 and 2 in Chapter *** give bounds on the size of the transfer function matrices from z1 to w1 and from z2 to w2 that ensure the closed-loop system is robust to the multiplicative and additive uncertainty. In the H∞ control problem formulation, the robustness objectives enter the synthesis procedure as additional input/output signals to be kept small.

[Figure: closed-loop feedback system with additive and multiplicative uncertainty blocks around G(s) and controller K(s)]

The transfer function matrices are defined as

   Tz1w1(s) = TI(s)  = KG(I + KG)^-1
   Tz2w2(s) = KSO(s) = K(I + GK)^-1

where TI(s) denotes the input complementary sensitivity function and SO(s) denotes the output sensitivity function.
[Figure: the same interconnection with the uncertainty blocks removed, exposing the inputs w1, w2 and outputs z1, z2]

The H∞ control robustness objective is now in the same format as the performance objectives: minimize the H∞ norm of the transfer matrix from z = [z1; z2] to w = [w1; w2]. Weighting or scaling matrices are often introduced to shape the frequency and magnitude content of the sensitivity and complementary sensitivity transfer function matrices. Let WM correspond to the multiplicative uncertainty model and WA to the additive uncertainty model. ∆M(s) and ∆A(s) are assumed to be norm bounded by 1, i.e., ||∆M(s)|| < 1 and ||∆A(s)|| < 1. Hence, as a function of frequency, |WM(jω)| and |WA(jω)| are the respective sizes of the largest anticipated multiplicative and additive plant perturbations.

The multiplicative weighting or scaling, WM, represents a percentage error in the model. It is often small in magnitude at low frequency, between 0.05 and 0.20 (5% to 20% modeling error), and grows larger in magnitude at high frequency, 2 to 5 (200% to 500% modeling error). The weight transitions by crossing a magnitude value of 1, which corresponds to 100% uncertainty in the model, at a frequency at least twice the bandwidth of the closed-loop system. A typical multiplicative weight is

   WM = 0.10 · ( (1/5)s + 1 ) / ( (1/200)s + 1 )

The additive weight or scaling, WA, represents an absolute error that is often small at low frequency and large in magnitude at high frequency. The magnitude of this weight depends directly on the magnitude of the plant model G(s). You can initially select WA to be a constant whose magnitude is 10% of ||G(s)||∞.
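You can sanity-check a weight like this numerically. The sketch below, in plain Python rather than MATLAB, evaluates |WM(jω)| for the typical multiplicative weight above at low and high frequency, then bisects for the frequency where the weight crosses magnitude 1 (100% modeling error):

```python
import math

def WM_mag(w):
    # |WM(jw)| for WM(s) = 0.10*(s/5 + 1)/(s/200 + 1)
    return 0.10 * math.hypot(w / 5.0, 1.0) / math.hypot(w / 200.0, 1.0)

print(WM_mag(0.0))            # 0.1 -> 10% model error at low frequency
print(round(WM_mag(1e6), 2))  # 4.0 -> 400% model error at high frequency

# bisect for the 100% uncertainty crossover, |WM(jw)| = 1
lo, hi = 1.0, 1000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if WM_mag(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(round(hi, 1))           # ~51.4 rad/s
```

The crossover near 51 rad/s marks the frequency beyond which the model is considered completely unreliable; as noted above, the design should place this above (roughly twice) the intended closed-loop bandwidth.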
Application of H∞ and mu to Active Suspension Control

Conventional passive suspensions employ a spring and damper between the car body and wheel assembly, and represent a tradeoff between conflicting performance metrics such as passenger comfort, road holding, and suspension deflection. Active suspensions allow the designer to balance these objectives using a hydraulic actuator, controlled by feedback, between the chassis and wheel assembly.

In this section, you will design an active suspension system for a quarter-car body and wheel assembly model using the H∞ control design technique, with passenger comfort (minimizing car body travel) versus suspension travel as the performance objective. You will see the tradeoff between passenger comfort, i.e., car body travel, and suspension deflection.

Quarter Car Suspension Model

The quarter-car model shown will be used to design active suspension control laws.

[Figure: quarter-car model; sprung mass ms with travel xs, suspension spring ks, damper bs, and actuator force fs, unsprung mass mus with travel xus, tyre stiffness kt, road disturbance r]

The sprung mass, ms, represents the car chassis, while the unsprung mass, mus, represents the wheel assembly. The spring, ks, and damper, bs, represent a passive spring and shock absorber placed between the car body and the wheel assembly, while the spring kt serves to model the compressibility of the pneumatic tyre. The variables xs, xus, and r are the car body travel, the wheel travel, and the road disturbance, respectively. The force fs, applied between the sprung and unsprung masses, is controlled by feedback and
represents the active component of the suspension system. Defining x1 := xs, x2 := d/dt xs, x3 := xus, and x4 := d/dt xus, the equations of motion are

   d/dt x1 = x2
   d/dt x2 = -(1/ms) [ ks(x1 - x3) + bs(x2 - x4) - fs ]
   d/dt x3 = x4
   d/dt x4 = (1/mus) [ ks(x1 - x3) + bs(x2 - x4) - kt(x3 - r) - fs ]

The dynamics of the actuator are ignored in this example; we assume that the control signal is the force fs. The following component values are taken from reference [Lin97].

   ms = 290;    % kg
   mus = 59;    % kg
   bs = 1000;   % N/m/s
   ks = 16182;  % N/m
   kt = 190000; % N/m

A linear, time-invariant model of the quarter car, qcar, is constructed from the equations of motion and the parameter values:

   A12 = [ 0 1 0 0; [-ks -bs ks bs]/ms ];
   A34 = [ 0 0 0 1; [ks bs -ks-kt -bs]/mus ];
   B12 = [ 0 0; 0 10000/ms ];
   B34 = [ 0 0; [kt -10000]/mus ];
   C = [ 1 0 0 0; 1 0 -1 0; A12(2,:) ];
   D = [ 0 0; 0 0; B12(2,:) ];
   qcar = ss([A12; A34],[B12; B34],C,D);

The inputs to the model are the road disturbance and the actuator force, respectively, and the outputs are the car body travel, the suspension deflection, and the car body acceleration. It is well known [Hedrick90] that the acceleration transfer function has an invariant point at the tyre-hop frequency, 56.7 rad/s. Similarly, the suspension deflection transfer function has an invariant point at the rattlespace frequency, 23.3 rad/s. The tradeoff between passenger comfort and suspension deflection is due to the fact that it is not possible to
simultaneously keep both of the above transfer functions small around the tyre-hop frequency and in the low-frequency range.

Linear H∞ Controller Design

This section describes the design of linear suspension controllers that emphasize either passenger comfort or suspension deflection. The controllers are designed using linear H∞ synthesis [FialBal]. As is standard in the H∞ framework, the performance objectives are achieved by minimizing weighted transfer function norms. Weighting functions serve two purposes in the H∞ framework: they allow the direct comparison of different performance objectives with the same norm, and they allow frequency information to be incorporated into the analysis. For more details on H∞ control design, the reader is referred to [DGKF, Fran1, GloD, SkoP, Zame] and the references therein.

A block diagram of the H∞ control design interconnection for the active suspension problem is shown in the following figure.

[Figure: H∞ interconnection; road disturbance d1 enters through Wref, the outputs x1 and x1-x3 are weighted by Wx1 and Wx1x3, the control force fs is weighted by Wact, and the measurement is y = x1-x3 plus sensor noise Wn·d2]

The measured output, or feedback signal, y is the suspension deflection x1-x3. The controller acts on this signal to produce the control input, the hydraulic actuator force fs. The block Wn serves to model sensor noise in the measurement channel; Wn is set to a sensor noise value of 0.01 m.

   Wn = 0.01;

In a more realistic design, Wn would be frequency dependent and would serve to model the noise associated with the displacement sensor. The weight Wref is used to scale the magnitude of the road disturbances. We assume that the maximum road disturbance is 7 cm and hence choose Wref = 0.07.
H∞ Control Design 1

The purpose of the weighting functions Wx1 and Wx1x3 is to keep the car body deflection and the suspension deflection small over the desired frequency ranges. In the first design, you are designing the controller for passenger comfort, and hence the car body deflection x1 is penalized:

   Wx1 = 8*tf(2*pi*5,[1 2*pi*5]);

The weight magnitude rolls off above 5×2π rad/s to respect a well-known H∞ design rule of thumb that requires the performance weights to roll off before an open-loop zero (56.7 rad/s in this case). The suspension deflection weight, Wx1x3, and the car body acceleration are not included in this control problem formulation. The magnitude and frequency content of the control force fs are limited by the weighting function Wact. We choose

   Wact = (100/13)*tf([1 50],[1 500]);

The magnitude of the weight increases above 50 rad/s in order to limit the closed-loop bandwidth.

You can construct the weighted H∞ plant model for control design, denoted qcaric1, using the sysic command. There is one control input, the hydraulic actuator force fs, and one measurement signal, the suspension deflection, which is noisy. The measured signal corresponds to the last output of qcaric1, and the control signal corresponds to the last input of qcaric1.

   systemnames = 'qcar Wn Wref Wact Wx1';
   inputvar = '[ d1; d2; fs ]';
   outputvar = '[ Wact; Wx1; qcar(2)+Wn ]';
   input_to_qcar = '[ Wref; fs ]';
   input_to_Wn = '[ d2 ]';
   input_to_Wref = '[ d1 ]';
   input_to_Wact = '[ fs ]';
   input_to_Wx1 = '[ qcar(1) ]';
   qcaric1 = sysic;
   nmeas = 1;
   ncont = 1;
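As a numerical sanity check on the shapes of the two weights just defined, the sketch below (plain Python, evaluating the same transfer function magnitudes by hand) confirms that Wx1 penalizes low-frequency body travel with gain 8, while Wact penalizes high-frequency control action roughly ten times more than low-frequency control action:

```python
import math

def Wx1_mag(w):
    # |Wx1(jw)| for Wx1 = 8 * (2*pi*5) / (s + 2*pi*5)
    c = 2.0 * math.pi * 5.0
    return 8.0 * c / math.hypot(w, c)

def Wact_mag(w):
    # |Wact(jw)| for Wact = (100/13) * (s + 50) / (s + 500)
    return (100.0 / 13.0) * math.hypot(w, 50.0) / math.hypot(w, 500.0)

print(Wx1_mag(0.0))              # 8.0: full penalty on body travel at DC
print(round(Wact_mag(0.0), 2))   # 0.77: control is cheap at low frequency
print(round(Wact_mag(1e5), 2))   # 7.69: control is expensive at high frequency
```

The tenfold rise in |Wact| between its corner frequencies (50 and 500 rad/s) is what limits the closed-loop bandwidth, as described above.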
An H∞ controller is synthesized with the hinfsyn command.

   [K1,Scl1,gam1] = hinfsyn(qcaric1,nmeas,ncont);
   sprintf('Hinfinity controller K1 achieved a norm of %2.5g',gam1)

   ans =
   Hinfinity controller K1 achieved a norm of 0.56698

You can analyze the H∞ controller by constructing the closed-loop feedback system:

   CL1 = lft(qcar([1:3 2],1:2),K1);

Bode magnitude plots of the passive suspension and of the active suspension with controller K1 are shown in the following figure.

[Figure: Bode magnitude from road disturbance to suspension deflection, passive suspension vs. controller K1, 1 to 100 rad/s]

H∞ Control Design 2

In the second design, you are designing the controller to keep the suspension deflection transfer function small. Hence, the transfer function from road disturbance to suspension deflection x1-x3 is penalized via the weighting function Wx1x3:

   Wx1x3 = 25*tf(1,[1/10 1]);

The Wx1x3 weight magnitude rolls off above 10 rad/s, in order to roll off before an open-loop zero (23.3 rad/s) in the design. The car deflection weight, Wx1, is not included in this control problem formulation. You can construct the weighted H∞ plant model for control design,
denoted qcaric2, this time using the iconnect command. The same control and measurement signals are used as in the first design.

   M = iconnect;
   d = icsignal(2);
   fs = icsignal(1);
   ycar = icsignal(size(qcar,1));
   M.Input = [d;fs];
   M.Output = [Wact*fs;Wx1x3*ycar(2);ycar(2)+Wn*d(2)];
   M.Equation{1} = equate(ycar,qcar*[Wref*d(1);fs]);
   qcaric2 = M.System;

The second H∞ controller is synthesized with the hinfsyn command.

   [K2,Scl2,gam2] = hinfsyn(qcaric2,nmeas,ncont);
   sprintf('Hinfinity controller K2 achieved a norm of %2.5g',gam2)

   ans =
   Hinfinity controller K2 achieved a norm of 0.89949

Recall that this H∞ control design emphasizes minimization of suspension deflection over passenger comfort, whereas the first H∞ design focused on passenger comfort. You can analyze the second controller by constructing the closed-loop feedback system:

   CL2 = lft(qcar([1:3 2],1:2),K2);

Bode magnitude plots of the transfer function from road
disturbance to suspension deflection for both controllers and for the passive suspension system are shown in the following figure.

[Figure: Bode magnitude from road disturbance to suspension deflection, passive suspension vs. controllers K1 and K2]

The dotted and solid lines in the figure are the closed-loop frequency responses that result from the different performance weighting functions selected. Compared to Design 1, observe the reduction in suspension deflection in the vicinity of the tyre-hop frequency, ω1 = 56.7 rad/s, and the corresponding increase in the acceleration frequency response in this vicinity. Also, a reduction in suspension deflection has been achieved for frequencies below the rattlespace frequency, ω2 = 23.3 rad/s.
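The two invariant frequencies ω1 and ω2 can be recovered from the quarter-car parameter values. Using the standard approximations (an assumption here, not stated in the text) that the tyre-hop frequency is roughly sqrt(kt/mus) and the rattlespace frequency roughly sqrt(kt/(ms+mus)), a quick check in plain Python gives:

```python
import math

ms, mus = 290.0, 59.0   # sprung and unsprung masses, kg
kt = 190000.0           # tyre stiffness, N/m

tyrehop = math.sqrt(kt / mus)           # invariant point of the acceleration TF
rattlespace = math.sqrt(kt / (ms + mus))  # invariant point of the deflection TF
print(round(tyrehop, 1))      # 56.7 rad/s
print(round(rattlespace, 1))  # 23.3 rad/s
```

Both values match the invariant frequencies quoted in this example, which is a useful consistency check on the model parameters.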
The second H∞ control design attenuates both resonance modes, whereas the first controller focused its efforts on the first mode.

[Figure: Bode magnitude from road disturbance to car body acceleration, passive suspension vs. controllers K1 and K2]

All of the analysis so far has been in the frequency domain. Time-domain performance characteristics are critical to the success of the active suspension system on the car. Time response plots of the two H∞ controllers are shown in the following figures. All responses correspond to the road disturbance

   r(t) = a (1 - cos 8πt)   for 0 ≤ t ≤ 0.25
   r(t) = 0                 otherwise

where a = 0.025 corresponds to a road bump of peak magnitude 5 cm. The dashed, solid, and dotted lines correspond to the passive suspension, H∞ controller 1, and H∞ controller 2, respectively. Observe that the acceleration response of Design 1 to the 5 cm bump is very good; however, the suspension deflection is larger than for Design 2. This is because suspension deflection was not penalized in the first design. The suspension deflection response of Design 2 to the 5 cm bump is good; however, the acceleration response to the bump is much inferior to Design 1 (see the
figure). Once again, this is because car body displacement and acceleration were not penalized in Design 2.

[Figure: time responses to the 5 cm road bump; body travel x1, body acceleration, suspension deflection x1-x3, and control force fs for the passive suspension and controllers K1 and K2]

Designs 1 and 2 represent extreme ends of the performance tradeoff spectrum. This section described H∞ synthesis to achieve the performance objectives on the active suspension system. Equally, if not more, important is the design of controllers that are robust to model error or uncertainty. The goal of every control design is to achieve the desired performance specifications on the nominal model as well as on other plants that are close to the nominal model; in other words, you want to achieve the performance objectives in the presence of model error or uncertainty. This is called robust performance. In the next section, you will design a controller that achieves robust performance using the µ-synthesis control design methodology. The
active suspension system again serves as the example.

Control Design via µ-Synthesis

The active suspension H∞ controllers designed in the previous section ignored the hydraulic actuator dynamics. In this section, you will include a first-order model of the hydraulic actuator dynamics, as well as an uncertainty model to account for differences between the actuator model and the actual actuator dynamics. Instead of assuming a perfect actuator, a nominal actuator model with modeling error is introduced into the control problem. The hydraulic actuator is modeled as

   act(s) = 1 / ( (1/60)s + 1 )

   act = tf(1,[1/60 1]);

The actuator model itself is uncertain; you can describe the actuator model error as a set of possible models using a weighting function. At low frequency, below 4 rad/s, the model can vary up to 10% from its nominal value. Around 4 rad/s the percentage variation starts to increase, reaching 400% at approximately 800 rad/s. The model uncertainty is represented by the weight Wunc, which corresponds to the frequency variation of the model uncertainty, and the uncertain LTI dynamic object unc:

   Wunc = 0.10*tf([1/4 1],[1/800 1]);
   unc = ultidyn('unc',[1 1]);
   actmod = act*(1 + Wunc*unc)

   USS: 2 States, 1 Output, 1 Input, Continuous System
     unc: 1x1 LTI, max. gain = 1, 1 occurrence

The actuator model actmod is an uncertain state-space system. The following Bode plot shows the nominal actuator model and 50 random actuator models sampled from actmod, denoted with a '+' symbol.

   bode(act,'b',actmod,'r+',logspace(-1,3,120))
[Figure: Bode plot of the nominal and 50 random actuator models, 0.1 to 1000 rad/s]

The uncertain actuator model actmod represents the model of the hydraulic actuator used for control. The revised control design interconnection diagram is shown below.

[Figure: revised H∞ control design interconnection, with the uncertain actuator model actmod inserted between the controller output fs and the quarter-car force input]
You are designing the controller for passenger comfort, as in the first H∞ control design, hence the car body deflection x1 is penalized with Wx1; the car body acceleration is not included in the problem formulation. As previously described, there is one control input, the hydraulic actuator force fs, and one measurement signal, the suspension deflection, which is noisy. The control signal corresponds to the last input, and the measured signal to the last output, of the interconnection. The uncertain weighted H∞ plant model for control design, denoted qcaricunc, is constructed using the sysic command:

   systemnames = 'qcar Wn Wref Wact Wx1 actmod';
   inputvar = '[ d1; d2; fs ]';
   outputvar = '[ Wact; Wx1; qcar(2)+Wn ]';
   input_to_qcar = '[ Wref; actmod ]';
   input_to_actmod = '[ fs ]';
   input_to_Wn = '[ d2 ]';
   input_to_Wref = '[ d1 ]';
   input_to_Wact = '[ fs ]';
   input_to_Wx1 = '[ qcar(1) ]';
   qcaricunc = sysic;

A µ-synthesis controller is synthesized using D-K iteration with the dksyn command. The D-K iteration procedure is an approximation to µ-synthesis that attempts to synthesize a controller achieving robust performance [SteD, PacDB, BalPac, SkoP].

   [Kdk,CLdk,gdk] = dksyn(qcaricunc,nmeas,ncont);
   sprintf('mu-synthesis controller Kdk achieved a norm of %2.5g',gdk)

   ans =
   musynthesis controller Kdk achieved a norm of 0.53946

You can analyze the µ-synthesis controller by constructing the closed-loop feedback system:

   CLdkunc = lft(qcar([1:3 2],1:2)*blkdiag(1,actmod),Kdk);

Bode magnitude plots of the passive suspension and of the active suspension systems, with the nominal actuator model, for H∞ design 1 and the µ-synthesis controller are shown in the following figure. Note that the
µ-synthesis controller better attenuates the first resonant mode, at the expense of decreased performance below 3 rad/s.

[Figure: Bode magnitude from road disturbance to suspension deflection, passive suspension vs. controllers K1 and Kdk]

It is important to understand how robust both controllers are in the presence of model error. The uncertain closed-loop systems, CL1unc and CLdkunc, are formed with K1 and Kdk, respectively. You can simulate the active suspension system with the H∞ design 1 controller and the µ-synthesis controller; for each uncertain system, 40 random plant models in the model set are simulated.

   CL1unc = lft(qcar([1:3 2],1:2)*blkdiag(1,actmod),K1);
   [CLdkunc40,dksamples] = usample(CLdkunc,40);
   CL1unc40 = usubs(CL1unc,dksamples);

As you can see in the following figures, both controllers are robust and perform well in the presence of actuator model error. The µ-synthesis controller, Kdk, achieves slightly better performance than H∞ design 1.
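The disturbance used in these bump-response simulations is the road profile r(t) defined earlier. It is easy to tabulate directly; a small sketch in plain Python (the 0.25 s bump duration, one period of cos 8πt, is an assumed value):

```python
import math

def road_bump(t, a=0.025):
    # r(t) = a*(1 - cos(8*pi*t)) over one period of the cosine, 0 elsewhere
    if 0.0 <= t <= 0.25:
        return a * (1.0 - math.cos(8.0 * math.pi * t))
    return 0.0

# sample t in [0, 1] s and locate the peak bump height
peak = max(road_bump(k / 1000.0) for k in range(0, 1001))
print(round(peak, 3))  # 0.05 -> 5 cm peak bump height, reached at t = 0.125 s
```

With a = 0.025, the peak value 2a = 5 cm matches the bump magnitude quoted in the time-response discussion.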
[Figure: body travel responses to the road bump for a random sample of 40 plant models; D-K controller (top) and H∞ Design 1 (bottom)]
Functions for Control Design

The term control system design refers to the process of synthesizing a feedback control law that meets design specifications in a closed-loop control system. The design methods are iterative, combining parameter selection with analysis, simulation, and insight into the dynamics of the plant. The Robust Control Toolbox provides a set of commands that you can use for a broad range of multivariable control applications, including:

• H2 control design
• H∞ standard and loop-shaping control design
• H∞ normalized coprime factor control design
• Mixed H2/H∞ control design
• µ-synthesis via D-K iteration
• Sampled-data H∞ control design

These functions cover both continuous- and discrete-time problems. The following table summarizes the H2 and H∞ control design commands.

Function     Description
augw         Augments plant weights for mixed-sensitivity control design
h2hinfsyn    Mixed H2/H∞ controller synthesis
h2syn        H2 controller synthesis
hinfsyn      H∞ controller synthesis
loopsyn      H∞ loop-shaping controller synthesis
ltrsyn       Loop transfer recovery controller synthesis
mixsyn       H∞ mixed-sensitivity controller synthesis
ncfsyn       H∞ normalized coprime factor controller synthesis
sdhinfsyn    Sampled-data H∞ controller synthesis
The following table summarizes the µ-synthesis via D-K iteration control design commands.

Function      Description
dksyn         Synthesis of a robust controller via µ-synthesis
dkitopt       Create a dksyn options object
drawmag       Interactive mouse-based sketching and fitting tool
fitfrd        Fit scaling frequency response data with LTI model
fitmagfrd     Fit scaling magnitude data with stable, minimum-phase model
Appendix: Interpretation of the H-Infinity Norm

Norms of Signals and Systems

There are several ways of defining norms of a scalar signal e(t) in the time domain. We will often use the 2-norm (L2-norm), which is defined, for mathematical convenience, as

    ‖e‖_2 := ( ∫_{-∞}^{∞} e(t)^2 dt )^{1/2}

If this integral is finite, then the signal e is square integrable, denoted as e ∈ L2. For vector-valued signals

    e(t) = [ e_1(t) ; e_2(t) ; ... ; e_n(t) ]

the 2-norm is defined as

    ‖e‖_2 := ( ∫_{-∞}^{∞} ‖e(t)‖_2^2 dt )^{1/2} = ( ∫_{-∞}^{∞} e^T(t) e(t) dt )^{1/2}

In µ-Tools the dynamic systems we deal with are exclusively linear.
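The signal 2-norm definition can be sanity-checked numerically. The sketch below uses Python/NumPy rather than MATLAB, purely for illustration; for e(t) = exp(-t), t ≥ 0, the analytic value is ‖e‖_2 = sqrt(1/2).

```python
import numpy as np

# Check ||e||_2 = ( integral e(t)^2 dt )^(1/2) for e(t) = exp(-t), t >= 0.
# Analytically: integral_0^inf exp(-2t) dt = 1/2, so ||e||_2 = sqrt(1/2).
t = np.linspace(0.0, 40.0, 200001)           # fine grid; tail beyond t=40 is negligible
e2 = np.exp(-t) ** 2
# trapezoidal rule, written out explicitly
integral = np.sum((e2[:-1] + e2[1:]) / 2 * np.diff(t))
two_norm = np.sqrt(integral)

print(round(two_norm, 6))                     # approx 0.707107
```

The quadrature error here is far below the displayed precision, so the grid-based estimate matches sqrt(1/2) to six decimal places.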
These systems have state-space model

    x' = A x + B d
    e  = C x + D d

or, in transfer function form, e(s) = T(s)d(s), with

    T(s) := C (sI - A)^{-1} B + D

Two mathematically convenient measures of the transfer matrix T(s) in the frequency domain are the matrix H2 and H∞ norms,

    ‖T‖_2 := ( (1/2π) ∫_{-∞}^{∞} ‖T(jω)‖_F^2 dω )^{1/2}

    ‖T‖_∞ := max_{ω∈R} σ̄[T(jω)]

where the Frobenius norm (see the MATLAB norm command) of a complex matrix M is

    ‖M‖_F := sqrt( trace(M*M) )

Both of these transfer function norms have input/output time-domain interpretations.
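Both norms can be estimated directly from the frequency-domain formulas above by gridding ω. This is a Python/NumPy sketch (not toolbox code); the first-order system T(s) = 1/(s+1) is a made-up example with ‖T‖_∞ = 1 and ‖T‖_2 = sqrt(1/2).

```python
import numpy as np

# Frequency-gridded estimates of the H2 and Hinf norms of T(s) = 1/(s+1).
w = np.linspace(-500.0, 500.0, 1000001)   # symmetric grid containing w = 0
Tjw = 1.0 / (1.0 + 1j * w)

# Hinf norm: peak of |T(jw)| over the grid (sigma_bar of a scalar is |.|)
hinf = np.max(np.abs(Tjw))

# H2 norm: (1/2pi) integral of |T(jw)|^2, trapezoidal rule
mag2 = np.abs(Tjw) ** 2
integral = np.sum((mag2[:-1] + mag2[1:]) / 2 * np.diff(w))
h2 = np.sqrt(integral / (2 * np.pi))

print(round(hinf, 6), round(h2, 3))       # approx 1.0 and 0.707
```

The truncation of the integral to |ω| ≤ 500 costs only about 4e-4 in the H2 estimate, consistent with the slowly decaying 1/(1+ω²) tail.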
If, starting from initial condition x(0) = 0, two signals d and e are related by the state-space model above, then:

• for d, a unit intensity, white noise process, the steady-state variance of e is ‖T‖_2.
• the L2 (or RMS) gain from d → e,

      max_{d≠0} ‖e‖_2 / ‖d‖_2

  is equal to ‖T‖_∞.

For this reason, within the structured singular value setting considered in chapter ****, the most natural (mathematical) manner to characterize acceptable performance is in terms of the MIMO ‖·‖_∞ (H∞) norm. Here we discuss some interpretations of the H∞ norm; the use of weighted norms is discussed in greater detail in the next section.

Using Weighted Norms to Characterize Performance

In any performance criterion, we must also account for:

• Relative magnitude of outside influences
• Frequency dependence of signals
• Relative importance of the magnitudes of regulated variables

So, if the performance objective is in the form of a matrix norm, it should actually be a weighted norm ‖W_L T W_R‖ where the weighting function matrices W_L and W_R are frequency dependent, to account for bandwidth constraints and spectral content of exogenous signals.
[Figure 5-2: Unweighted MIMO System: ẽ ← T ← d̃]

Suppose T is a MIMO stable linear system, with transfer function matrix T(s). For a given driving signal d̃(t), define ẽ as the output, as shown in Figure 5-2, starting from initial condition equal to 0. Note that it is more traditional to write the diagram in Figure 5-2 with the arrows going from left to right as in Figure 5-3. We prefer to write these block diagrams with the arrows going right to left to be consistent with matrix and operator composition.

[Figure 5-3: Unweighted MIMO System: Vectors from Left to Right: d̃ → T → ẽ]

Assume that the dimensions of T are ne × nd. Let β > 0 be defined as

    β := ‖T‖_∞ := max_{ω∈R} σ̄[T(jω)]                                (5-1)

In that case, Parseval's theorem gives that

    ‖ẽ‖_2 / ‖d̃‖_2 = [ ∫_0^∞ ẽ^T(t)ẽ(t) dt ]^{1/2} / [ ∫_0^∞ d̃^T(t)d̃(t) dt ]^{1/2} ≤ β
Moreover, there are specific disturbances d̃ that result in the ratio ‖ẽ‖_2 / ‖d̃‖_2 arbitrarily close to β. Because of this, ‖T‖_∞ is referred to as the L2 (or RMS) gain of the system.

As you would expect, a sinusoidal, steady-state interpretation of ‖T‖_∞ is also possible: For any frequency ω ∈ R, any vector of amplitudes a ∈ R^{nd}, with ‖a‖_2 ≤ 1, and any vector of phases φ ∈ R^{nd}, define a time signal

    d̃(t) = [ a_1 sin(ωt + φ_1) ; ... ; a_nd sin(ωt + φ_nd) ]

Applying this input to the system T results in a steady-state response ẽ_ss of the form

    ẽ_ss(t) = [ b_1 sin(ωt + ψ_1) ; ... ; b_ne sin(ωt + ψ_ne) ]

The vector b ∈ R^{ne} will satisfy ‖b‖_2 ≤ β. Moreover, β, as defined in equation (5-1), is the smallest number such that this fact is true for every ‖a‖_2 ≤ 1, ω, and φ.

Note that in this interpretation, the vectors of the sinusoidal magnitude responses are unweighted, and measured in Euclidean norm. If realistic multivariable performance objectives are to be represented by a single, MIMO ‖·‖_∞ objective on a closed-loop transfer function, additional scalings are necessary. Since many different objectives are being lumped into one matrix and the associated cost is the norm of the matrix, it is important to use frequency-dependent weighting functions, so that different requirements can be meaningfully combined into a single cost function. Diagonal weights are most easily interpreted. Consider the diagram of Figure 5-4, along with Figure 5-3.
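The sinusoidal steady-state bound ‖b‖_2 ≤ β can be checked numerically at random frequencies, amplitudes, and phases. This Python/NumPy sketch (not toolbox code) uses a made-up 2-by-2 frequency response.

```python
import numpy as np

# For amplitude vector a (||a||_2 <= 1) and any phases, the steady-state
# output amplitudes b = |T(jw) @ d_hat| satisfy ||b||_2 <= beta = ||T||_inf.
rng = np.random.default_rng(0)

def T(w):
    # an arbitrary stable 2x2 example response, invented for illustration
    return np.array([[1.0 / (1 + 1j * w), 0.5],
                     [0.2,                1.0 / (1 + 0.5j * w)]])

grid = np.linspace(0.0, 50.0, 5001)
beta = max(np.linalg.norm(T(w), 2) for w in grid)   # max of sigma_bar over grid

ok = True
for _ in range(200):
    w = rng.uniform(0.0, 50.0)
    a = rng.normal(size=2)
    a /= np.linalg.norm(a)                          # ||a||_2 = 1
    phi = rng.uniform(0.0, 2 * np.pi, size=2)
    d_hat = a * np.exp(1j * phi)                    # input phasor
    b = np.abs(T(w) @ d_hat)                        # steady-state amplitudes
    ok = ok and (np.linalg.norm(b) <= beta + 1e-9)

print(ok)   # True
```

For this example the peak gain occurs at ω = 0, so the grid maximum is the true β and the check holds exactly, not just approximately.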
Assume that W_L and W_R are diagonal, stable transfer function matrices, with diagonal entries denoted L_i and R_i:

    W_L = diag( L_1, L_2, ..., L_ne ),   W_R = diag( R_1, R_2, ..., R_nd )

[Figure 5-4: Weighted MIMO System: e ← W_L ← ẽ ← T ← d̃ ← W_R ← d, so that e = W_L ẽ = W_L T d̃ = W_L T W_R d]

Bounds on the quantity ‖W_L T W_R‖_∞ will imply bounds about the sinusoidal steady-state behavior of the signals d̃ and ẽ (= T d̃) in Figure 5-3. Specifically, for sinusoidal signal d̃, the steady-state relationship between ẽ (= T d̃), d̃ and ‖W_L T W_R‖_∞ is as follows: The steady-state solution ẽ_ss, denoted as

    ẽ_ss(t) = [ ẽ_1 sin(ωt + ϕ_1) ; ... ; ẽ_ne sin(ωt + ϕ_ne) ]        (5-2)

satisfies

    Σ_{i=1}^{ne} | W_Li(jω) ẽ_i |^2 ≤ 1

for all sinusoidal input signals d̃ of the form

    d̃(t) = [ d̃_1 sin(ωt + φ_1) ; ... ; d̃_nd sin(ωt + φ_nd) ]          (5-3)

satisfying
    Σ_{i=1}^{nd} | d̃_i |^2 / | W_R(jω)_i |^2 ≤ 1

if and only if ‖W_L T W_R‖_∞ ≤ 1.

This approximately (very approximately; the next statement is not actually correct) implies that ‖W_L T W_R‖_∞ ≤ 1 if and only if, for every fixed frequency ω, and all sinusoidal disturbances d̃ of the form (5-3) satisfying

    | d̃_i | ≤ | W_R(jω)_i |

the steady-state error components will satisfy

    | ẽ_i | ≤ 1 / | W_L(jω)_i |

This shows how one could pick performance weights to reflect the desired frequency-dependent performance objective. Use W_R to represent the relative magnitude of sinusoidal disturbances that might be present, and use 1/W_L to represent the desired upper bound on the subsequent errors that are produced.

Remember, though, the weighted H∞ norm does not actually give element-by-element bounds on the components of ẽ based on element-by-element bounds on the components of d̃. The precise bound it gives is in terms of Euclidean norms of the components of ẽ and d̃ (weighted appropriately by W_L(jω) and W_R(jω)).
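The exact (Euclidean-norm) statement can be verified at a single frequency with a small numeric experiment. This Python/NumPy sketch uses arbitrary invented test data for W_L, W_R, and T(jω); it is not toolbox code.

```python
import numpy as np

# At one frequency: if sigma_bar(WL @ T0 @ WR) <= 1, then every input phasor d
# with sum |d_i / WR_i|^2 <= 1 yields e = T0 @ d with sum |WL_i * e_i|^2 <= 1.
rng = np.random.default_rng(1)

WL = np.diag([2.0, 0.5])
WR = np.diag([0.1, 3.0])
T0 = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
T0 /= 1.05 * np.linalg.norm(WL @ T0 @ WR, 2)    # force sigma_bar(WL*T0*WR) < 1

ok = True
for _ in range(200):
    u = rng.normal(size=2) + 1j * rng.normal(size=2)
    u /= max(np.linalg.norm(u), 1.0)            # ||u||_2 <= 1
    d = WR @ u                                  # then sum |d_i/WR_i|^2 <= 1
    e = T0 @ d
    ok = ok and (np.linalg.norm(WL @ e) <= 1.0 + 1e-9)

print(ok)   # True
```

The check works because WL @ e = (WL @ T0 @ WR) @ u, so its norm is bounded by σ̄(WL T0 WR)·‖u‖ ≤ 1, which is exactly the weighted-norm argument in the text.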
Introduction to Uncertain Atoms
    How to build uncertain real, complex, and LTI dynamic uncertain elements, used in uncertain matrices and systems

Uncertain Matrices
    How to manipulate matrices used in systems with structured uncertainty

Uncertain State-Space Systems (uss)
    Building systems with uncertain state-space matrices and/or uncertain linear dynamics

Uncertain frd
    Discusses uncertain frequency response data (frd) objects

Basic Control System Toolbox Interconnections
    A list of Control System Toolbox interconnection commands that work with uncertain objects

Simplifying Representation of Uncertain Objects
    How to simplify representations of uncertain objects in your models

Sampling Uncertain Objects
    How to randomly sample uncertain objects

Substitution by usubs
    How to fix a subset of uncertain objects in your model while leaving the rest uncertain

Array Management for Uncertain Objects
    Working with multidimensional arrays containing uncertain objects

Decomposing Uncertain Objects (for Advanced Users)
    Discusses advanced decomposition techniques
Introduction to Uncertain Atoms

Uncertain atoms are the building blocks used to form uncertain matrix objects and uncertain system objects. There are 5 classes of uncertain atoms:

Function      Description
ureal         Uncertain real parameter
ultidyn       Uncertain, linear, time-invariant dynamics
ucomplex      Uncertain complex parameter
ucomplexm     Uncertain complex matrix
udyn          Uncertain dynamic system

All of the atoms have properties, which are accessed through get and set methods. For ureal, ucomplex and ucomplexm atoms, the syntax is

    p1 = ureal(name,NominalValue,Prop1,val1,Prop2,val2,...);
    p2 = ucomplex(name,NominalValue,Prop1,val1,Prop2,val2,...);
    p3 = ucomplexm(name,NominalValue,Prop1,val1,Prop2,val2,...);

For ultidyn and udyn, the NominalValue is fixed, so the syntax is

    p4 = ultidyn(name,ioSize,Prop1,val1,Prop2,val2,...);
    p5 = udyn(name,ioSize,Prop1,val1,Prop2,val2,...);

This get and set interface mimics the Control System Toolbox and MATLAB Handle Graphics® behavior. For instance, get(a,'PropertyName') is the same as a.PropertyName, and set(b,'PropertyName',Value) is the same as b.PropertyName = value. Functionality also includes tab-completion and case-insensitive, partial name property matching.

For each atom, the command usample will generate a random instance (i.e., not uncertain) of the atom, within its modeled range.
For example, usample(p1) creates a random instance of the uncertain real parameter p1. With an integer argument, whole arrays of instances can be created. For instance usample(p4,100) generates an array of 100 instances of the ultidyn object p4. See the section "Sampling Uncertain Objects" on page 6-42 to learn more about usample.

Uncertain Real Parameters

An uncertain real parameter is used to represent a real number whose value is uncertain. Uncertain real parameters have a name (the Name property), and a nominal value (the NominalValue property). Several other properties (PlusMinus, Range, Percentage) describe the uncertainty in the parameter's value. The properties are:

Properties     Meaning                                               Class
Name           Internal name                                         char
NominalValue   Nominal value of atom                                 double
Mode           Signifies which description (from 'PlusMinus',
               'Range', 'Percentage') of uncertainty is invariant
               when NominalValue is changed                          char
PlusMinus      Additive variation                                    scalar or 1x2 double
Range          Numerical range                                       1x2 double
Percentage     Additive variation (% of absolute value of nominal)   scalar or 1x2 double
AutoSimplify   'off' | {'basic'} | 'full'                            char
The properties Range, Percentage and PlusMinus are all automatically synchronized. The Mode property controls what aspect of the uncertainty remains unchanged when NominalValue is changed. Assigning to any of Range/Percentage/PlusMinus changes the value, but does not change the mode. The default Mode is PlusMinus, and the default value of PlusMinus is [-1 1]. If the nominal value is 0, then the Mode cannot be Percentage.

The AutoSimplify property controls how expressions involving the real parameter are simplified. Its default value is 'basic', which means elementary methods of simplification are applied as operations are completed. Other values for AutoSimplify are 'off' (no simplification performed) and 'full' (model-reduction-like techniques are applied). See the section "Simplifying Representation of Uncertain Objects" on page 6-38 to learn more about the AutoSimplify property and the command simplify.

Some examples are shown below. In many cases, the full property name is not specified, taking advantage of the case-insensitive, partial name property matching. If no property/value pairs are specified, default values are used (including plus/minus variability of 1).

Create an uncertain real parameter, nominal value 3. View the properties and their values, and note that the Range and Percentage descriptions of variability are automatically maintained.

    a = ureal('a',3)
    Uncertain Real Parameter: Name a, NominalValue 3, variability = [-1 1]
    get(a)
                Name: 'a'
        NominalValue: 3
                Mode: 'PlusMinus'
               Range: [2 4]
           PlusMinus: [-1 1]
          Percentage: [-33.3333 33.3333]
        AutoSimplify: 'basic'

Create an uncertain real parameter, nominal value 2, with 20% variability.
    b = ureal('b',2,'percentage',20)
    Uncertain Real Parameter: Name b, NominalValue 2, variability = [-20 20]%
    get(b)
                Name: 'b'
        NominalValue: 2
                Mode: 'Percentage'
               Range: [1.6000 2.4000]
           PlusMinus: [-0.4000 0.4000]
          Percentage: [-20.0000 20.0000]
        AutoSimplify: 'basic'

Change the range of the parameter. All descriptions of variability are automatically updated, while the nominal value remains fixed. Although the change in variability was accomplished by specifying the Range, the Mode is unaffected, and remains Percentage.

    b.Range = [1.9 2.3];
    get(b)
                Name: 'b'
        NominalValue: 2
                Mode: 'Percentage'
               Range: [1.9000 2.3000]
           PlusMinus: [-0.1000 0.3000]
          Percentage: [-5.0000 15.0000]
        AutoSimplify: 'basic'

As mentioned, the Mode property signifies what aspect of the uncertainty remains unchanged when NominalValue is modified. Hence, if a real parameter is in Percentage mode, then the Range and PlusMinus properties are determined from the Percentage property and NominalValue. Changing NominalValue preserves the Percentage property, and automatically updates the Range and PlusMinus properties.

    b.NominalValue = 2.2;
    get(b)
                Name: 'b'
        NominalValue: 2.2000
                Mode: 'Percentage'
               Range: [2.0900 2.5300]
           PlusMinus: [-0.1100 0.3300]
          Percentage: [-5.0000 15.0000]
        AutoSimplify: 'basic'

Create an uncertain parameter with an unsymmetric variation about its nominal value, specifying the variability with Percentage.

    c = ureal('c',-5,'percentage',[-20 30])
    get(c)
                Name: 'c'
        NominalValue: -5
                Mode: 'Percentage'
               Range: [-6 -3.5000]
           PlusMinus: [-1 1.5000]
          Percentage: [-20 30]
        AutoSimplify: 'basic'

Create an uncertain parameter, specifying variability with Percentage, but force the Mode to be Range.

    d = ureal('d',1,'percentage',[-40 60],'mode','range')
    get(d)
                Name: 'd'
        NominalValue: 1
                Mode: 'Range'
               Range: [0.6000 1.6000]
           PlusMinus: [-0.4000 0.6000]
          Percentage: [-40.0000 60.0000]
        AutoSimplify: 'basic'

Finally, create an uncertain real parameter, and set the AutoSimplify property to 'full'.

    e = ureal('e',10,'percentage',[-20 30],'autosimplify','full')
    Uncertain Real Parameter: Name e, NominalValue 10, variability = [-20 30]%
    get(e)
                Name: 'e'
        NominalValue: 10
                Mode: 'Percentage'
               Range: [8 13]
           PlusMinus: [-2 3]
          Percentage: [-20 30]
        AutoSimplify: 'full'
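The bookkeeping that keeps Range, PlusMinus, and Percentage synchronized can be mimicked in a few lines of ordinary code. This Python sketch is not toolbox code; the class name and fields are invented, and only the PlusMinus-as-stored-value case is shown.

```python
# Sketch of ureal-style bookkeeping: PlusMinus is stored, Range and Percentage
# are derived, and assigning a Range updates PlusMinus.
class URealSketch:
    def __init__(self, name, nominal, plusminus=(-1.0, 1.0)):
        self.name = name
        self.nominal = nominal
        self.plusminus = tuple(plusminus)

    @property
    def range(self):
        return (self.nominal + self.plusminus[0], self.nominal + self.plusminus[1])

    @range.setter
    def range(self, rng):
        # changing the range rewrites the stored additive variation
        self.plusminus = (rng[0] - self.nominal, rng[1] - self.nominal)

    @property
    def percentage(self):
        scale = 100.0 / abs(self.nominal)           # undefined at nominal == 0
        return (self.plusminus[0] * scale, self.plusminus[1] * scale)

a = URealSketch('a', 3.0)
print(a.range)          # (2.0, 4.0)
a.range = (2.5, 5.0)
print(a.plusminus)      # (-0.5, 2.0)
```

This also makes clear why Percentage cannot be used when the nominal value is 0: the derived percentage would divide by zero.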
2 2.3.40) Uncertain Real Parameter: Name g.'plusminus'. This last occurrence also determines the Mode. and plot a histogram.40) Uncertain Real Parameter: Name f. regardless of the property/value pairs ordering. variability = [40 40]% g = ureal('g'.'perce'. hist(reshape(hsample.'range'.[2 1]. unless Mode is explicitly specified. h = ureal('h'. reshape the array. NominalValue 2.Mode ans = Range Create an uncertain real parameter. with 20 bins (within the range of 2to4).'plusminus'.[1000 1]).2.8] g. hsample = usample(h.20). use usample to generate 1000 instances (resulting in a 1by1by1000 array).'perce'. NominalValue 3.'mode'. 67 .Introduction to Uncertain Atoms Percentage: [20 30] AutoSimplify: 'full' Specifying conflicting values for Range/Percentage/PlusMinus in a multiple property/value set is not an error.[2 1]. Range [1. f = ureal('f'.1000). in which case that is used. the last (in list) specified property is used.3). In this case.
Make the range unsymmetric about the nominal value, and repeat the sampling and histogram plot (with 40 bins over the range of 2-to-6).

    h.Range = [2 6];
    hsample = usample(h,1000);
    hist(reshape(hsample,[1000 1]),40);
However. the number of samples less than the nominal value and the number of samples greater than the nominal value is equal (on average). creates an unnamed atom. The given name is 'UNNAMED'.5 6 Note that the distribution is skewed. See the section “Decomposing Uncertain Objects (for Advanced Users)” on page 660 to learn more about the normalized description. Verify this. with default property values. length(find(hsample(:)<h. This can be observed with get and set.NominalValue)) ans = 491 The distribution used in usample is uniform in the normalized description of the uncertain real parameter. ureal.5 5 5. for that matter).5 3 3.NominalValue)) ans = 509 length(find(hsample(:)>h.Introduction to Uncertain Atoms 80 70 60 50 40 30 20 10 0 2 2.5 4 4. by itself. get(ureal) Name: 'UNNAMED' NominalValue: 0 Mode: 'PlusMinus' 69 . There is no notion of an empty ureal (or any other atom.
               Range: [-1 1]
           PlusMinus: [-1 1]
          Percentage: [-Inf Inf]
        AutoSimplify: 'basic'
    set(ureal)
                Name: 'String'
        NominalValue: '1x1 real DOUBLE'
                Mode: 'Range | PlusMinus'
               Range: '1x2 DOUBLE'
           PlusMinus: '1x2 or scalar DOUBLE'
          Percentage: 'Not settable since Nominal==0'
        AutoSimplify: '['off' | 'basic' | 'full']'

Uncertain LTI Dynamics Atoms

Uncertain linear, time-invariant objects, ultidyn, are used to represent unknown linear, time-invariant dynamic objects, whose only known attributes are bounds on their frequency response. Uncertain linear, time-invariant objects have an internal name (the Name property), and are created by specifying their size (number of outputs and number of inputs).

The property Type specifies whether the known attributes about the frequency response are related to gain or phase. The property Type may be 'GainBounded' or 'PositiveReal'. The default value is 'GainBounded'. The property Bound is a single number, which, along with Type, completely specifies what is known about the uncertain frequency response. Specifically, if ∆ is an ultidyn atom, and if γ denotes the value of the Bound property:

• If Type is 'GainBounded', then the atom represents the set of all stable, linear, time-invariant systems whose frequency response satisfies σ̄[∆(ω)] ≤ γ for all frequencies. The NominalValue of ∆ is always the 0-matrix. When Type is 'GainBounded', the default value for Bound (i.e., γ) is 1.

• If Type is 'PositiveReal', then the atom represents the set of all stable, linear, time-invariant systems whose frequency response satisfies ∆(ω)+∆*(ω) ≥ 2γ at all frequencies. The NominalValue is always (γ+1)I. When Type is 'PositiveReal', the default value for Bound (i.e., γ) is 0.
Set the 611 . The default value is 1.Introduction to Uncertain Atoms All properties of a ultidyn are can be accessed with get and set (although the NominalValue is determined from Type and Bound. The properties are Properties Name NominalValue Type Bound SampleStateDim Meaning Class char See above char Internal Name Nominal value of atom 'GainBounded' 'PositiveReal' Norm bound or minimum real Statespace dimension of random samples of this uncertain element 'off'  {'basic'} 'full' scalar double scalar double char AutoSimplify The SampleStateDim property specifies the state dimension of random samples of the atom when using usample. You can create a 2by3 gainbounded uncertain linear dynamics atom. and not accessible with set).[2 3]). The AutoSimplify property serves the same function as in the uncertain real parameter. and check the properties. whose frequency response always has real part greater than 0. f = ultidyn('f'.5. Verify its size. size(f) ans = 2 3 get(f) Name: 'f' NominalValue: [2x3 double] Type: 'GainBounded' Bound: 1 SampleStateDim: 1 AutoSimplify: 'basic' You can create a 1by1 (scalar) positivereal uncertain linear dynamics atom.
Set the SampleStateDim property to 5. View the properties, and plot a Nyquist plot of 30 instances of the atom.

    g = ultidyn('g',[1 1],'type','positivereal','bound',0.5);
    g.SampleStateDim = 5;
    get(g)
                  Name: 'g'
          NominalValue: 1.5000
                  Type: 'PositiveReal'
                 Bound: 0.5000
        SampleStateDim: 5
          AutoSimplify: 'basic'
    nyquist(usample(g,30))
    xlim([-2 10])
    ylim([-6 6])

[Nyquist Diagram of the 30 random samples of g]

Time-domain of ultidyn atoms

On its own, every ultidyn atom is interpreted as a continuous-time system with uncertain behavior, quantified by bounds (gain or real part) on its frequency response.
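The 'GainBounded' set can be illustrated with a toy numeric experiment: generate a random stable system sample and scale it so that its peak gain meets the bound. This Python/NumPy sketch (not toolbox code) uses an FIR impulse response only because FIR systems are stable by construction.

```python
import numpy as np

# A 'GainBounded' sample must satisfy sigma_bar(Delta(w)) <= gamma at all
# frequencies; here we enforce that for a scalar discrete-time FIR sample.
rng = np.random.default_rng(0)
gamma = 1.0

h = rng.normal(size=16)              # random FIR impulse response
H = np.fft.fft(h, 4096)              # dense frequency-response samples
h *= gamma / np.max(np.abs(H))       # scale the peak gain down to gamma
H = np.fft.fft(h, 4096)

peak = np.max(np.abs(H))
print(peak <= gamma + 1e-9)          # True
```

The scaling step is the key point: any stable response can be normalized into the gain-bounded set, which is why gain-bounded uncertainty descriptions lose no generality up to scaling.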
The bounds (gain-bounded or positivity) apply to the frequency response of the atom. To see this, create a ultidyn atom, and view the sample time of several random samples of the atom.

    h = ultidyn('h',[1 1]);
    get(usample(h),'Ts')
    ans =
         0
    get(usample(h),'Ts')
    ans =
         0
    get(usample(h),'Ts')
    ans =
         0

However, when a ultidyn atom is an uncertain element of an uncertain state-space model (uss), then the time-domain characteristic of the atom is determined from the time-domain characteristic of the system. See the section "Interpreting Uncertainty in Discrete Time" on page 6-30 for more information.

Complex Parameter Atoms

The ucomplex atom represents an uncertain complex number, whose value lies in a disc, centered at NominalValue, with radius specified by the Radius property. The size of the disc can also be specified by Percentage, in which case the radius is derived from the absolute value of the NominalValue. The properties of ucomplex objects are:

Properties     Meaning                  Class
Name           Internal name            char
NominalValue   Nominal value of atom    double
Mode           'Radius' | 'Percentage'  char
Radius         Radius of disk           double
Properties     Meaning                                  Class
Percentage     Additive variation (percent of Radius)   double
AutoSimplify   'off' | {'basic'} | 'full'               char

The simplest construction requires only a name and nominal value. The default Mode is Radius, and the default radius is 1.

    a = ucomplex('a',2-1j)
    Uncertain Complex Parameter: Name a, NominalValue 2-1i, Radius 1
    get(a)
                Name: 'a'
        NominalValue: 2.0000 - 1.0000i
                Mode: 'Radius'
              Radius: 1
          Percentage: 44.7214
        AutoSimplify: 'basic'
    set(a)
                Name: 'String'
        NominalValue: '1x1 DOUBLE'
                Mode: 'Radius | Percentage'
              Radius: 'scalar DOUBLE'
          Percentage: 'scalar DOUBLE'
        AutoSimplify: '['off' | 'basic' | 'full']'

Sample the uncertain complex parameter at 400 values, and plot in the complex plane. Clearly, the samples appear to be from a disc of radius 1, centered in the complex plane at the value 2-1i.

    asample = usample(a,400);
    plot(asample(:),'o');
    xlim([0 4]); ylim([-3 1]);
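The disc-shaped sample set can be reproduced numerically. This Python/NumPy sketch (not toolbox code) draws 400 points uniformly over the disc of radius 1 centered at 2-1i and confirms they all stay inside it.

```python
import numpy as np

# Uniform samples over a disc: uniform angle, sqrt-distributed radius
# (the sqrt makes the density uniform over the disc's area).
rng = np.random.default_rng(0)
center, radius = 2 - 1j, 1.0

r = radius * np.sqrt(rng.uniform(size=400))
theta = rng.uniform(0.0, 2 * np.pi, size=400)
z = center + r * np.exp(1j * theta)

print(bool(np.all(np.abs(z - center) <= radius + 1e-12)))   # True
```

Whether the toolbox's usample uses exactly this distribution over the disc is not stated here; the sketch only illustrates the geometric constraint |z - NominalValue| ≤ Radius.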
[Plot of the 400 samples of a in the complex plane]

Complex Matrix Atoms

The uncertain complex matrix class, ucomplexm, represents the set of matrices given by the formula

    N + W_L ∆ W_R

where N, W_L, W_R are known matrices, and ∆ is any complex matrix with σ̄(∆) ≤ 1. All properties of a ucomplexm can be accessed with get and set. The properties are:

Properties     Meaning                  Class
Name           Internal name            char
NominalValue   Nominal value of atom    double
WL             Left weight              double
Properties     Meaning                      Class
WR             Right weight                 double
AutoSimplify   'off' | {'basic'} | 'full'   char

The simplest construction requires only a name and nominal value. The default left and right weight matrices are identity. You can create a 4-by-3 ucomplexm element, and view its properties.

    m = ucomplexm('m',[1 2 3;4 5 6;7 8 9;10 11 12])
    Uncertain Complex Matrix: Name m, 4x3
    get(m)
                Name: 'm'
        NominalValue: [4x3 double]
                  WL: [4x4 double]
                  WR: [3x3 double]
        AutoSimplify: 'basic'
    m.NominalValue
    ans =
         1     2     3
         4     5     6
         7     8     9
        10    11    12
    m.WL
    ans =
         1     0     0     0
         0     1     0     0
         0     0     1     0
         0     0     0     1

Sample the uncertain matrix, and compare to the nominal value. Note the element-by-element sizes of the difference are generally equal, indicative of the default (identity) weighting matrices that are in place.

    abs(usample(m)-m.NominalValue)
    ans =
        0.3376    0.2948    0.2508
        0.2384    0.3028    0.1001
        0.1260    0.2867    0.2506
        0.6413    0.2756    0.1717

Change the left and right weighting matrices, making the uncertainty larger as you move down the rows, and across the columns. Sample the uncertain matrix, and compare to the nominal value. Note the element-by-element sizes of the difference, and the general trend that the smallest differences are near the (1,1) element, and the largest differences are near the (4,3) element, which is completely expected by choice of the diagonal weighting matrices.

    m.WL = diag([0.2 0.4 0.6 0.8]);
    m.WR = diag([0.1 1 4]);
    abs(usample(m)-m.NominalValue)
    ans =
        0.0304    0.0527    0.1657
        0.4012    0.0057    0.2200
        0.2753    0.0091    0.4099
        0.0860    0.3472    1.8335

Unstructured Uncertain Dynamic Systems

The unstructured uncertain dynamic system class, udyn, represents completely unknown multivariable, time-varying nonlinear systems. For practical purposes, these uncertain elements represent noncommuting symbolic variables (placeholders). All algebraic operations, such as addition, subtraction, and multiplication (i.e., cascade) operate properly, and substitution (with usubs) is allowed. However, all of the analysis tools (e.g., robuststab) do not handle these types of uncertain elements. As such, these elements do not provide a significant amount of usability, and their role in the user's guide is small.

You can create a 2-by-3 udyn element. Check its size, and properties.

    m = udyn('m',[2 3])
    Uncertain Dynamic System: Name m, size 2x3
    size(m)
    ans =
         2     3
    get(m)
                Name: 'm'
        NominalValue: [2x3 double]
        AutoSimplify: 'basic'
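The ucomplexm set N + W_L ∆ W_R has a simple consequence that can be checked numerically: every sample deviates from N by at most σ̄(W_L)·σ̄(W_R) in spectral norm. This Python/NumPy sketch is not toolbox code; the weight values are arbitrary illustration choices.

```python
import numpy as np

# A random contraction Delta (sigma_bar(Delta) = 1) gives one sample of the
# ucomplexm-style set N + WL @ Delta @ WR; its deviation from N is bounded.
rng = np.random.default_rng(0)

N = np.arange(1.0, 13.0).reshape(4, 3)          # the 4x3 nominal matrix
WL = np.diag([0.2, 0.4, 0.6, 0.8])              # arbitrary diagonal weights
WR = np.diag([0.1, 1.0, 4.0])

Delta = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))
Delta /= np.linalg.norm(Delta, 2)               # scale to sigma_bar(Delta) = 1
sample = N + WL @ Delta @ WR

dev = np.linalg.norm(sample - N, 2)
bound = np.linalg.norm(WL, 2) * np.linalg.norm(WR, 2)
print(dev <= bound + 1e-12)                     # True
```

Because the weights are diagonal, the row weights of WL and column weights of WR also shape the element-by-element deviation pattern, which is what the transcripts above display.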
Uncertain Matrices

Uncertain matrices (class umat) are built from doubles, and uncertain atoms, using traditional MATLAB matrix building syntax. Uncertain matrices can be added, subtracted, multiplied, inverted, transposed, etc., resulting in uncertain matrices. The rows and columns of an uncertain matrix are referenced in the same manner that MATLAB references rows and columns of an array, using parenthesis, and integer indices. The NominalValue of an uncertain matrix is the result obtained when all uncertain atoms are replaced with their own NominalValue. The uncertain atoms making up a umat are accessible through the Uncertainty gateway, and the properties of each atom within a umat can be changed directly. Standard MATLAB numerical matrices (i.e., double) naturally can be viewed as uncertain matrices without any uncertainty. The command usample generates a random sample of the uncertain matrix, substituting random samples (within their ranges) for each of the uncertain atoms. Using usubs, specific values may be substituted for any of the uncertain atoms within a umat. The command wcnorm computes tight bounds on the worst-case (maximum over the uncertain elements' ranges) norm of the uncertain matrix.

Creating Uncertain Matrices from Uncertain Atoms

You can create 2 uncertain real parameters, and then a 3-by-2 uncertain matrix using these uncertain atoms.

    a = ureal('a',3);
    b = ureal('b',10,'pe',20);
    M = [a 1/b; b a+1/b; 1 3]
    UMAT: 3 Rows, 2 Columns
      a: real, nominal = 3, variability = [-1 1], 2 occurrences
      b: real, nominal = 10, variability = [-20 20]%, 3 occurrences

The size and class of M are as expected.

    size(M)
    ans =
         3     2
    class(M)
    ans =
    umat
Accessing Properties of a umat

Use get to view the accessible properties of a umat.

    get(M)
        NominalValue: [3x2 double]
         Uncertainty: [1x1 atomlist]

The NominalValue is a double, obtained by replacing all uncertain elements with their nominal values.

    M.NominalValue
    ans =
        3.0000    0.1000
       10.0000    3.1000
        1.0000    3.0000

The Uncertainty property is an atomlist object, which is simply a gateway from the umat to the uncertain atoms.

    class(M.Uncertainty)
    ans =
    atomlist
    M.Uncertainty
        a: [1x1 ureal]
        b: [1x1 ureal]

Direct access to the atoms is facilitated through Uncertainty. Check the Range of the uncertain element named 'a' within M, then change it.

    M.Uncertainty.a.Range
    ans =
         2     4
    M.Uncertainty.a.Range = [2.5 5];
    M
    UMAT: 3 Rows, 2 Columns
      a: real, nominal = 3, variability = [-0.5 2], 2 occurrences
      b: real, nominal = 10, variability = [-20 20]%, 3 occurrences
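The NominalValue rule, replace every atom with its nominal value, is easy to mimic outside the toolbox. A plain-Python sketch of the M = [a 1/b; b a+1/b; 1 3] example (the function name is invented):

```python
import numpy as np

# NominalValue of M = [a 1/b; b a+1/b; 1 3] is obtained by plugging in the
# nominal values a = 3, b = 10.
def M_of(a, b):
    return np.array([[a,   1.0 / b],
                     [b,   a + 1.0 / b],
                     [1.0, 3.0]])

nominal = M_of(3.0, 10.0)
print(nominal.tolist())
# [[3.0, 0.1], [10.0, 3.1], [1.0, 3.0]]
```

This reproduces the M.NominalValue display shown above; the umat object simply carries this evaluation rule along with the atoms' variability descriptions.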
1 occurrence b: real.10. variability = [20 20]%. variability = [20 20]%. M = [a 1/b.5 2]. subtracting the two atoms gives an error.a.b a+1/b. Note.20).3).a.Uncertainty. M. however. 1 occurrence 621 . Verify that the variable a in the workspace is no longer the same as the variable a within M. variability = [20 20]%. nominal = 3.lftmask Atoms named 'a' have different properties.:) UMAT: 2 Rows. that singleindexing is only allowed if the umat is a column or a row.Uncertain Matrices The change to the uncertain real parameter a only took place within M. b = ureal('b'.5 2].Range = [2. M. nominal = 10. 1 occurrence b: real. variability = [0.5 5]. 1 occurrence h(2) UMAT: 1 Rows.a) ans = 0 Note that combining atoms which have a common internal name. 2 Columns a: real. variability = [0.a ??? Error using ==> ndlft.Uncertainty. nominal = 10. Row and Column Referencing Standard Row/Column referencing is allowed.Uncertainty. and use singleindex references to access elements of it. not 0. 1 Columns b: real. For instance.2) UMAT: 4 Rows. isequal(M. and make a 2by2 selection from M a = ureal('a'.a .1 3]. h = M([2 1 2 3].'pe'. Reconstruct M (if need be). nominal = 3. nominal = 10. 2 occurrences Make a single column selection from M. but different properties leads to an error. M(2:3. 1 Columns a: real.
H = M. M(3. 1 Columns a: real. Premultiply M by a 1by3 constant matrix.a + 3*M.Uncertainty.5 2]. usample(K*H. variability = [40 40]%. 1 occurrence b: real.40) UMAT: 3 Rows.b + 1) UMAT: 1 Rows.'perc'. d = M1(1) . As expected. nominal = 10.Uncertainty. variability = [20 20]%. make the (3. nominal = 10. 2 occurrences b: real. 2 occurrences c: real. variability = [0. variability = [20 20]%. 1 Columns simplify(d.2) = ureal('c'. 2 occurrences c: real. nominal = 3. 1 occurrence Finally.6 Building Uncertain Models h(3) UMAT: 1 Rows.e.'*M. 1 occurrence Verify that the 1st entry of M1 is 2*a + 3*b + 1. nominal = 10. such as matrixmultiply.'class') ans = 0 Transpose M.3) 622 .2) entry of M uncertain. variability = [0. the result is the 2by2 identity matrix. 1 occurrence b: real. nominal = 3. K = inv(H). 2 Columns a: real. nominal = 3. Simplifying the class shows that the result is zero as expected.(2*M. transpose. resulting in a 1by2 umat. an inverse.5 2]. nominal = 3. 2 Columns a: real. M1 = [2 3 1]*M UMAT: 1 Rows.3. and inverse.. and sample the uncertain result. Direct subtraction yields a umat without any dependence on uncertain elements.5 2]. variability = [0. variability = [40 40]%. form a product. Combinations of certain (i. not uncertain) matrices and uncertain matrices are allowed. variability = [20 20]%. 1 occurrence Matrix Operation on umat Objects Many matrix operations are allowed. nominal = 3.
2 occurrences c: real.'a'.2) = 1.'a'. For example. as described in the section “Substitution by usubs” on page 646. This results in a umat.'b'.a) UMAT: 3 Rows.0000 1. 2 Columns a: real.'b'. M4 = usubs(M.'nominal'.Uncertain Matrices ans(:. 2 Columns b: real. nominal = 3.0000 1.3) = 1.Uncertainty.'random') UMAT: 3 Rows. we can substitute all instances of the uncertain real parameter named b with M. variability = [20 20]%. M5 = usubs(M.0000 0.0000 Substituting for Uncertain Atoms You can substitute for uncertain atoms by using usubs. 4 occurrences c: real. nominal = 10.5 2].0000 0.0000 1.4) UMAT: 3 Rows.:.0000 0.{'a' 'b'}.0000 0. 2 Columns c: real. nominal = 3. M. nominal = 3. substitute a and b with the number 4. and c with the number 5.:. nominal = 3. variability = [40 40]%.'c'.0000 0. resulting in a umat with dependence on the uncertain real parameters a and c.0000 0. M3 = usubs(M. with dependence on the uncertain real parameters b and c. and then the value given. variability = [40 40]%.0000 ans(:. variability = [40 40]%. 1 occurrence Similarly. variability = [0. Substitute all instances of the uncertain real parameter named a with the number 4.4.:. 1 occurrence Nominal and/or random instances can easily be specified.Uncertainty.0000 ans(:.5) 623 .1) = 1. the atom names can be listed in a cell array. 1 occurrence If one value is being substituted for many different atoms.a. This section describes some special cases. M2 = usubs(M.
0000 0.8). See the section “Array Management for Uncertain Objects” for more information about how multidimensional arrays of uncertain objects are handled. Md = [1 2 3.6 Building Uncertain Models M5 = 4.6). M = umat(Md) UMAT: 4 Rows.2500 4.5.5. M = umat(Md) UMAT: 4 Rows.7). See the section “Sampling Uncertain Objects” on page 642 for more information. Lifting a double matrix to a umat A notuncertain matrix may be interpreted as an uncertain matrix that has no dependence on uncertain atoms.0000 4. 5 Columns [array. 5 Columns [array. M = umat(Md) UMAT: 4 Rows.5. 6 x 1] Md = randn(4. the 3rd dimension and beyond are interpreted as array dimensions. Note from the display that once the matrix is interpreted as a umat.6. 5 Columns [array.2500 5. 6 x 7] Md = randn(4. Use the umat command to lift a double to the umat class. Md = randn(4.7. 6 x 7 x 8] 624 .6. 3 Columns High dimensional double matrices can also be lifted. M = umat(Md) UMAT: 2 Rows.0000 1.4 5 6].0000 The command usample also generates multiple random instances of a umat (and uss and ufrd).
Uncertain State-Space Systems (uss)
Uncertain systems (uss) are linear systems with uncertain state-space matrices and/or uncertain linear dynamics. Usually, they are built from state-space matrices using the ss command. In the case where some of the state-space matrices are uncertain, the result is an uncertain state-space (uss) object. Combining uncertain systems with uncertain systems (with the feedback command, for example) usually leads to an uncertain system. Not-uncertain systems can be combined with uncertain systems; usually the result is an uncertain system. The nominal value of an uncertain system is an ss object, which is familiar to users of the Control System Toolbox.

Creating Uncertain Systems
Uncertain systems (class uss) are built from certain and/or uncertain state-space matrices, usually using the ss command. In the example below, the A, B and C matrices are made up of uncertain real parameters. Packing them together with the ss command results in a continuous-time uncertain system.

You can create three uncertain real parameters. Then create 3 uncertain matrices A, B and C, and one double matrix D. Pack the 4 matrices together using the ss command. This results in a continuous-time 2-output, 1-input, 2-state uncertain system.

    p1 = ureal('p1',10,'pe',50);
    p2 = ureal('p2',3,'plusm',[-.5 1.2]);
    p3 = ureal('p3',0);
    A = [-p1 p2; 0 -p1];
    B = [p2; p2+p3];
    C = [1 0; 1 1-p3];
    D = [0; 0];
    sys = ss(A,B,C,D)
    USS: 2 States, 2 Outputs, 1 Input, Continuous System
      p1: real, nominal = 10, variability = [-50 50]%, 2 occurrences
      p2: real, nominal = 3, variability = [-0.5 1.2], 2 occurrences
      p3: real, nominal = 0, variability = [-1 1], 2 occurrences

Properties of uss Objects
View the properties with the get command.

    get(sys)
               a: [2x2 umat]
               b: [2x1 umat]
               c: [2x2 umat]
               d: [2x1 double]
       StateName: {2x1 cell}
              Ts: 0
       InputName: {''}
      OutputName: {2x1 cell}
      InputGroup: [1x1 struct]
     OutputGroup: [1x1 struct]
    NominalValue: [2x1 ss]
     Uncertainty: [1x1 atomlist]
           Notes: {}
        UserData: []

The properties a, b, c, d, and StateName behave in exactly the same manner as in Control System Toolbox ss objects. The properties InputName, OutputName, InputGroup and OutputGroup behave in exactly the same manner as in all of the Control System Toolbox system objects (ss, tf, zpk, and frd).

The NominalValue is an ss object of the Control System Toolbox, and therefore all methods for ss objects are available. For instance, compute the poles and step response of the nominal system.

    pole(sys.NominalValue)
    ans =
       -10
       -10
    step(sys.NominalValue)
[Step response plot of the nominal system omitted.]

Just as with the umat class, the Uncertainty property is an atomlist object, acting as a gateway to the uncertain atoms. Direct access to the atoms is facilitated through Uncertainty. Check the Range of the uncertain element named 'p2' within sys, then change its left endpoint.

    sys.Uncertainty.p2.range
    ans =
        2.5000    4.2000
    sys.Uncertainty.p2.range(1) = 2.2;

Sampling Uncertain Systems
The command usample randomly samples the uncertain system at a specified number of points. Randomly sample the uncertain system at 20 points in its modeled uncertainty range. This gives a 20-by-1 ss array. Consequently, all analysis tools from the Control System Toolbox are available.

    manysys = usample(sys,20);
    size(manysys)
    20x1 array of state-space models
    Each model has 2 outputs, 1 input, and 2 states.
    step(manysys)

See the section "Sampling Uncertain Objects" on page 6-42 for more information.
[Step response plots of the 20 sampled models omitted.]

The command step can be called directly on a uss object. The default behavior samples the uss object at 20 instances, and plots the step responses and the nominal value for these 20 models. The same features are available for bode, bodemag and nyquist.

Feedback Around an Uncertain Plant
It is possible to form interconnections of uss objects. A common example is to form the feedback interconnection of a given controller with an uncertain plant. Start with two uncertain real parameters.

    gamma = ureal('gamma',4);
    tau = ureal('tau',.5,'Percentage',30);

Next, create an unmodeled dynamics atom, delta, and a 1st order weighting function W, whose DC value is 0.25, high-frequency gain is 10, and whose crossover frequency is 8 rad/sec.

    delta = ultidyn('delta',[1 1],'SampleStateDim',5);
    W = makeweight(0.25,8,10);

Finally, create the uncertain plant consisting of the uncertain parameters and the unmodeled dynamics.

    P = tf(gamma,[tau 1])*(1+W*delta);

You can create an integral controller based on nominal plant parameters. Nominally the closed-loop system will have a damping ratio of 0.707 and a time constant of 2*tau.

    KI = 1/(2*tau.Nominal*gamma.Nominal);
    C = tf(KI,[1 0]);

Create the uncertain closed-loop system using the feedback command.

    CLP = feedback(P*C,1);

Using usample and step, plot samples of the open-loop and closed-loop step responses. As expected, the integral controller reduces the variability in the low-frequency response.

    subplot(2,1,1); step(P,5,20)
    subplot(2,1,2); step(CLP,5,20)

[Open-loop and closed-loop step response plots omitted.]
Interpreting Uncertainty in Discrete Time
The interpretation of a ultidyn atom as a continuous-time or discrete-time system depends on the nature of the uncertain system (uss) within which it is an uncertain element. For example, create a scalar ultidyn object. Then create two 1-input, 1-output uss objects using the ultidyn object as their "D" matrix. In the first case, create without specifying sample time, which indicates continuous time. In the second case, force discrete-time, with a sample time of 0.42.

    delta = ultidyn('delta',[1 1]);
    sys1 = uss([],[],[],delta)
    USS: 0 States, 1 Output, 1 Input, Continuous System
      delta: 1x1 LTI, max. gain = 1, 1 occurrence
    Ts = 0.42;
    sys2 = uss([],[],[],delta,Ts)
    USS: 0 States, 1 Output, 1 Input, Discrete System, Ts = 0.42
      delta: 1x1 LTI, max. gain = 1, 1 occurrence

Next, get a random sample of each system. When obtaining random samples using usample, the values of the atoms used in the sample are returned in the 2nd argument from usample as a structure.

    [sys1s,d1v] = usample(sys1);
    [sys2s,d2v] = usample(sys2);

Look at d1v.delta.Ts and d2v.delta.Ts. In the first case, since sys1 is continuous-time, the system d1v.delta is continuous-time. In the second case, since sys2 is discrete-time, the system d2v.delta is discrete-time, with sample time 0.42.

    d1v.delta.Ts
    ans =
         0
    d2v.delta.Ts
    ans =
        0.4200

Finally, note that it is not the case that ultidyn objects are interpreted as continuous-time uncertainty in feedback with sampled-data systems. This very interesting hybrid theory has been studied by many authors, see [DullerudGlover], but it is beyond the scope of the toolbox.
Lifting an ss to a uss
A not-uncertain state-space object may be interpreted as an uncertain state-space object that has no dependence on uncertain atoms. Use the uss command to "lift" the ss to the uss class.

    sys = rss(3,2,1);
    usys = uss(sys)
    USS: 3 States, 2 Outputs, 1 Input, Continuous System

Arrays of ss objects can also be lifted. See the section "Array Management for Uncertain Objects" on page 6-49 for more information about handling arrays of uncertain objects. This lifting process happens in the background whenever ss objects are combined with any uncertain object.

Handling Delays in uss
In the current implementation, delays are not allowed. Delays are omitted and a warning is displayed when ss objects are lifted to uss objects.

    sys = rss(3,2,1);
    sys.inputdelay = 1.5;
    usys = uss(sys)
    Warning: Omitting DELAYs in conversion to USS
    > In uss.uss at 103
    USS: 3 States, 2 Outputs, 1 Input, Continuous System

Consequently, all delays will be lost in such operations. Before operations involving ss objects containing delays and uncertain objects, use the pade command to convert the ss object to a delay-free object, approximately preserving the effect of the time delay. For example, consider an uncertain system with a time constant approximately equal to 1, second-order rolloff beyond 20 rad/s, an extra input delay of 0.3 seconds, and an uncertain steady-state gain ranging from 4 to 6. This can be approximated using the pade command as follows.

    gain = ureal('gain',5);
    sys = tf(1,[1 1])*tf(1,[0.05 1]);
    sys.inputdelay = 0.3;
    usys = gain*pade(sys,4)
    USS: 6 States, 1 Output, 1 Input, Continuous System
      gain: real, nominal = 5, variability = [-1 1], 1 occurrence

If gain is multiplied by sys directly, the time delay is unfortunately omitted, since this operation involves lifting sys to a uss as described above. The difference is obvious from the step responses.

    step(usys,gain*sys,5)
    Warning: Omitting DELAYs in conversion to USS
    > In uss.uss at 103
      In umat.umat at 98
      In atom.mtimes at 7

[Step response plot comparing the two systems omitted.]
logspace(2.2]).'plusm'. nominal = 3. 2 occurrences p3: real. variability = [0.0). nominal = 10.10. variability = [0. sysg = frd(sys. 1 Input. etc. 2 occurrences p3: real. nominal = 0. The natural command that would do this is frd (an overloaded version in the @uss directory). Continuous System. variability = [1 1]. variability = [1 1]. 2 occurrences p2: real.2]. Reconstruct sys.2].[. 100 Frequency points p1: real. Continuous System p1: real.C. concatenated. p3 = ureal('p3'.1 1p3].B.50).p2+p3]. nominal = 3. 2 occurrences Properties of ufrd objects View the properties with the get command. Use the frd command. 633 . nominal = 10. 2 occurrences p2: real.0 p1]. p2 = ureal('p2'.5 1. 1 Input. They also arise when frequency response data (in an frd object) is combined (added. The result is an uncertain frequency response data object. p1 = ureal('p1'. A = [p1 p2.'pe'. variability = [50 50]%. nominal = 0. if necessary. variability = [50 50]%.5 1.0]. D = [0. multiplied.2.5 1.) to an uncertain matrix (umat). B = [p2. Creating Uncertain Frequency Response Objects The most common manner in which a ufrd arises is taking the frequency response of a uss.3. sys = ss(A. C = [1 0.Uncertain frd Uncertain frd Uncertain frequency responses (ufrd) arise naturally when computing the frequency response of an uncertain statespace (uss). 2 occurrences Compute the uncertain frequency response of the uncertain system.100)) UFRD: 2 Outputs. referred to as a ufrd. along with a frequency grid containing 100 points.D) USS: 2 States. 2 Outputs.
The properties InputName. and hence all methods for frd objects are available.nom) 634 . and frd). bode(sysg.6 Building Uncertain Models get(sysg) Frequency: ResponseData: Units: Ts: InputName: OutputName: InputGroup: OutputGroup: NominalValue: Uncertainty: Notes: UserData: Version: [100x1 double] [2x1x100 umat] 'rad/s' 0 {''} {2x1 cell} [1x1 struct] [1x1 struct] [2x1 frd] [1x1 atomlist] {} [] 4 The properties ResponseData and Frequency behave in exactly the same manner as Control System Toolbox frd objects. InputGroup and OutputGroup behave in exactly the same manner as all of the Control System Toolbox system objects (ss. except that ResponseData is a umat. zpk. OutputName. The NominalValue is a Control System Toolbox frd object. For instance. plot the Bode response of the nominal system. tf.
2 occurrences 635 . variability = [1 1]. nominal = 14. the Uncertainty property is an atomlist object. Change the nominal value of the uncertain element named 'p1' within sysg to 14. and replot the Bode plot of the (new) nominal system. variability = [50 50]%. 1 Input. nominal = 0.p1.Uncertain frd 0 −10 Bode Diagram To: Out(1) Magnitude (dB) . Continuous System. 100 Frequency points p1: real. variability = [0.5 1.nom = 14 UFRD: 2 Outputs. acting as a gateway to the uncertain atoms. Direct access to the atoms is facilitated through Uncertainty.unc. Phase (deg) To: Out(2) To: Out(1) To: Out(2) −20 −30 −40 180 90 0 −20 −40 −60 −80 −100 0 −90 −180 −3 10 10 −2 10 −1 10 Frequency (Hz) 0 10 1 10 2 Just as with the umat and uss classes. 2 occurrences p3: real.2]. 2 occurrences p2: real. sysg. nominal = 3.
Interpreting Uncertainty in Discrete Time
This issue is described in the section "Interpreting Uncertainty in Discrete Time" on page 6-30.

Lifting an frd to a ufrd
A not-uncertain frequency response object may be interpreted as an uncertain frequency response object that has no dependence on uncertain atoms. Use the ufrd command to "lift" an frd object to the ufrd class.

    sys = rss(3,2,1);
    sysg = frd(sys,logspace(-2,2,100));
    usysg = ufrd(sysg)
    UFRD: 2 Outputs, 1 Input, Continuous System, 100 Frequency points

Arrays of frd objects can also be lifted. See the section "Array Management for Uncertain Objects" on page 6-49 for more information.

Handling Delays in ufrd
This is identical to handling delays in uss, as described in the section "Handling Delays in uss" on page 6-31.
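As noted at the start of this section, a ufrd also arises when frequency response data is combined with an uncertain matrix. The sketch below illustrates this; the names a and sysg and the frequency grid are chosen for illustration and are not part of the original example.

```matlab
% Multiplying an frd response by an uncertain real parameter lifts
% the result to a ufrd, with dependence on that parameter.
a = ureal('a',2);
sysg = frd(tf(1,[1 1]),logspace(-1,1,10));   % certain frequency response, 10 points
usysg = a*sysg                               % ufrd depending on the atom a
```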
Basic Control System Toolbox Interconnections
All of the basic system interconnections defined in the Control System Toolbox
• append
• blkdiag
• series
• parallel
• feedback
• lft
• starp
work with uncertain objects as well. Uncertain objects may be combined with certain objects, resulting in an uncertain object.
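As a hedged sketch of one such interconnection (the plant and controller here are made up for illustration, not taken from the guide's examples):

```matlab
% Feedback of a certain controller around an uncertain plant
% yields an uncertain closed-loop system (uss).
k = ureal('k',2);            % uncertain gain
P = tf(k,[1 1]);             % uncertain first-order plant
C = tf(10,[1 0]);            % certain (not uncertain) integral controller
T = feedback(P*C,1)          % uncertain closed-loop system, depends on k
```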
Simplifying Representation of Uncertain Objects
A minimal realization of the transfer function matrix

    H(s) = [ 2/(s+1)   4/(s+1)
             3/(s+1)   6/(s+1) ]

has only 1 state, obvious from the decomposition

    H(s) = [2; 3] * (1/(s+1)) * [1 2]

However, a "natural" construction, formed by

    sys11 = ss(tf(2,[1 1]));
    sys12 = ss(tf(4,[1 1]));
    sys21 = ss(tf(3,[1 1]));
    sys22 = ss(tf(6,[1 1]));
    sys = [sys11 sys12; sys21 sys22]
    a =
             x1    x2    x3    x4
       x1    -1     0     0     0
       x2     0    -1     0     0
       x3     0     0    -1     0
       x4     0     0     0    -1
    b =
             u1    u2
       x1     2     0
       x2     0     2
       x3     2     0
       x4     0     2
    c =
             x1    x2    x3    x4
       y1     1     2     0     0
       y2     0     0   1.5     3
    d =
             u1    u2
       y1     0     0
       y2     0     0
    Continuous-time model.

has 4 states, and is nonminimal. In the same manner, depending on the sequence of operations in their construction, the internal representation of uncertain objects built up from uncertain atoms can become nonminimal. The command simplify employs ad hoc simplification and reduction schemes to reduce the complexity of the representation of uncertain objects.

There are three levels of simplification: off, basic and full. Each uncertain atom has an AutoSimplify property whose value is one of the strings 'off', 'basic' or 'full'. The default value is 'basic'. In the 'off' case, no simplification is even attempted. In 'basic', fairly simple schemes to detect and eliminate nonminimal representations are used. Finally, in 'full', numerical methods similar to truncated balanced realizations are used, with a very tight tolerance to minimize error. After (nearly) every operation, the command simplify is automatically run on the uncertain object, cycling through all of the uncertain atoms, and attempting to simplify (without error) the representation of the effect of each uncertain atom. The AutoSimplify property of each atom dictates the types of computations that are performed.

Effect of AutoSimplify Property
Create an uncertain real parameter, view the AutoSimplify property of the parameter, and then create a 1-by-2 umat, both of whose entries involve the uncertain parameter.

    a = ureal('a',4);
    a.AutoSimplify
    ans =
    basic
    m1 = [a+4 6*a]
    UMAT: 1 Rows, 2 Columns
      a: real, nominal = 4, variability = [-1 1], 1 occurrence
variability = [1 1]. Recreate the 1by2 umat. 2 Columns a: real.'a'. both of whose entries involve the square of the uncertain parameter. 2 occurrences Although m4 has a less complex representation (2 occurrences of a rather than 4 as in m3). reset the AutoSimplify property of a to 'basic' (from 'off').AutoSimplify = 'off'. Set the AutoSimplify property of a to 'full' (from 'basic'). Higher order (quadratic. variability = [1 1]. 4 occurrences Note that the resulting uncertain matrix m3 depends on “4 occurrences” of a. m2 = [a+4 6*a] UMAT: 1 Rows. etc. Create an uncertain real parameter.AutoSimplify = 'basic'. nominal = 4. some numerical variations are seen when both uncertain objects are evaluated at (say) 0. usubs(m3. and a 1by2 umat. a. variability = [1 1]. the resulting uncertain matrix m1 only depends on “1 occurrence” of a. a. nominal = 4. m4 = [a*(a+4) 6*a*a] UMAT: 1 Rows. For example.0) ans = 640 . Now note that the resulting uncertain matrix m2 depends on “2 occurrences” of a. m3 = [a*(a+4) 6*a*a] UMAT: 1 Rows. Set the AutoSimplify property of a to 'off' (from 'basic'). a. bilinear. Now note that the resulting uncertain matrix m4 depends on “2 occurrences” of a. 2 Columns a: real. nominal = 4.) duplication is often not detected by the 'basic' autosimplify level. Recreate the 1by2 umat. 2 Columns a: real. 2 occurrences The 'basic' level of autosimplification often detects (and simplifies) duplication created by linear terms in the various entries.AutoSimplify = 'full'.6 Building Uncertain Models Note that although the uncertain real parameter a appears in both (two) entries of the matrix.
0000 6. usubs(m3. 4 occurrences Note that the resulting uncertain matrix m3 depends on “4 occurrences” of a. and a 1by2 umat. nominal = 4. nominal = 4.1) ans = 5 6 usubs(m4. 641 . The simplify command can be used to perform a 'full' reduction on the resulting umat.Simplifying Representation of Uncertain Objects 0 0 usubs(m4.'a'. Again create an uncertain real parameter. m4 = simplify(m3.'full') UMAT: 1 Rows. which can either 'basic' or 'full'. The second input is the desired reduction technique. a.0e015 * 0. 2 Columns a: real.0000 Direct Use of simplify The simplify command can be used to override the AutoSimplify property of all uncertain element. 2 Columns a: real. both of whose entries involve the square of the uncertain parameter. Set the AutoSimplify property of a to 'basic'.AutoSimplify = 'basic'.0) ans = 1. variability = [1 1].1) ans = 5.'a'. 2 occurrences The resulting uncertain matrix m4 depends on only “2 occurrences” of a after the reduction. m3 = [a*(a+4) 6*a*a] UMAT: 1 Rows. The first input to the simplify command is an uncertain object. The example below shows the differences encountered evaluating at a equal to 1.4441 0 Small numerical differences are also noted at other evaluation points. variability = [1 1].'a'.
Sampling Uncertain Objects
The command usample is used to randomly sample an uncertain object, giving a not-uncertain instance of the uncertain object.

Generating One Sample
If A is an uncertain object, then usample(A) generates a single sample of A. For example, a sample of a ureal is a scalar double.

    A = ureal('A',6);
    B = usample(A)
    B =
        5.7298

Generating Many Samples
If A is an uncertain object, then usample(A,N) generates N samples of A. For example, 20 samples of a ureal gives a 1-by-1-by-20 double array.

    B = usample(A,20);
    size(B)
    ans =
         1     1    20

Similarly, 30 samples of the 1-by-3 umat M yield a 1-by-3-by-30 array. Create a 1-by-3 umat with A and an uncertain complex parameter C. A single sample of this umat is a 1-by-3 double.

    C = ucomplex('C',2+6j);
    M = [A C A*A];
    usample(M)
    ans =
       5.9785             1.4375 + 6.0290i  35.7428
    size(usample(M,30))
    ans =
         1     3    30

See the section "Creating Arrays With usample" on page 6-53 for more information about sampling uncertain objects.
Sampling ultidyn Atoms
When sampling a ultidyn atom (or an uncertain object that contains a ultidyn atom in its Uncertainty gateway), the result is always a state-space (ss) object. The property SampleStateDim of the ultidyn class determines the state dimension of the samples.

Create a 1-by-1, gain bounded ultidyn object, with gain bound 3. Verify that the default state dimension for samples is 1.

    del = ultidyn('del',[1 1],'Bound',3);
    del.SampleStateDim
    ans =
         1

Sample the uncertain atom at 30 points. Verify that this creates a 30-by-1 ss array of 1-input, 1-output, 1-state systems.

    delS = usample(del,30);
    size(delS)
    30x1 array of state-space models
    Each model has 1 output, 1 input, and 1 state.

Plot the Nyquist plot of these samples and add a disk of radius 3. Note that the gain bound is satisfied and that the Nyquist plots are all circles, indicative of 1st-order systems.

    nyquist(delS)
    hold on;
    theta = linspace(-pi,pi);
    plot(del.Bound*exp(sqrt(-1)*theta),'r');
    hold off;
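The gain bound on the samples can also be checked numerically; the sketch below is illustrative and not part of the original example.

```matlab
% Each sample of del is an ss object whose peak gain should not
% exceed the Bound property (3 here).
del = ultidyn('del',[1 1],'Bound',3);
delS = usample(del,30);
norm(delS(:,:,1),inf)    % H-infinity norm of the first sample; at most 3
```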
[Nyquist plots of the 30 first-order samples, all circles inside the disk of radius 3, omitted.]

Change the SampleStateDim to 4, and repeat the entire procedure. The Nyquist plots satisfy the gain bound and, as expected, are more complex than the circles found in the 1st-order sampling.

    del.SampleStateDim = 4;
    delS = usample(del,30);
    nyquist(delS)
    hold on;
    theta = linspace(-pi,pi);
    plot(del.Bound*exp(sqrt(-1)*theta),'r');
    hold off;

[Nyquist plots of the 4-state samples, inside the disk of radius 3, omitted.]
Substitution by usubs
If an uncertain object (umat, uss, ufrd) has many uncertain parameters, it is often necessary to freeze some, but not all, of the uncertain parameters to specific values. The usubs command accomplishes this, and also allows more complicated substitutions for an atom. usubs accepts a list of atom names, and respective values to substitute for them.

You can create 3 uncertain real parameters and use them to create a 2-by-2 uncertain matrix A.

    delta = ureal('delta',2);
    eta = ureal('eta',6);
    rho = ureal('rho',1);
    A = [3+delta+eta delta/eta; 7+rho rho+delta*eta]
    UMAT: 2 Rows, 2 Columns
      delta: real, nominal = 2, variability = [-1 1], 3 occurrences
      eta: real, nominal = 6, variability = [-1 1], 3 occurrences
      rho: real, nominal = 1, variability = [-1 1], 2 occurrences

Use usubs to substitute the uncertain element named delta in A with the value 2.3, leaving all other uncertain atoms intact. Note that the result, B, is an uncertain matrix with dependence only on eta and rho.

    B = usubs(A,'delta',2.3)
    UMAT: 2 Rows, 2 Columns
      eta: real, nominal = 6, variability = [-1 1], 3 occurrences
      rho: real, nominal = 1, variability = [-1 1], 2 occurrences

To set multiple atoms, list individually, or in cells. The following are the same:

    B1 = usubs(A,'delta',2.3,'eta',A.Uncertainty.rho);
    B2 = usubs(A,{'delta';'eta'},{2.3;A.Uncertainty.rho});

In each case, delta is replaced by 2.3, and eta is replaced by A.Uncertainty.rho. If it makes sense, a single replacement value can be used to replace multiple atoms. So

    B3 = usubs(A,{'delta';'eta'},2.3)

replaces both the atoms delta and eta with the real number 2.3. Any superfluous substitution requests are ignored. Hence
whose fieldnames are the names of the atoms being substituted with values. B6 is the same as B1 and B2 above. delta and eta. Nominal and Random Values If the replacement value is the (partial and caseindependent) string 'Nominal'. Set the values of these fields to be the desired values. and substituting does not alter the result.Substitution by usubs B4 = usubs(A. B6 = usubs(A.{'delta'.0000 0.delta = 2.'fred'.Uncertainty.NV). is the same as A. is the same as B3. any superfluous fields are ignored. robuststab.NV).3.Uncertainty). NV. and usample all return substitutable values in this structure format.rho. Therefore. Create a structure NV with 2 fields. See the section “Creating Arrays With usubs” on page 655 for more information. Specifying the Substitution with Structures An alternative syntax for usubs is to specify the substituted values in a structure. Therefore B8 = usubs(A. B7 is the same as B6. then the listed atom are replaced with their nominal values.gamma = 0. and B5 = usubs(A. Then perform the substitution with usubs. NV.3.{'fred' 'gamma'}. B7 = usubs(A.0000 11.0000 647 .5). Again. NV.'nom') B8 = 11.2. Here.0).fieldnames(A.3333 6. adding an additional field gamma to NV. Here. The commands wcgain.eta = A.'eta'}.
say 6.5000 15. and achieves the same effect.'rho'. 648 . and would be the typical use of usubs with the 'nominal' argument.5183 0.0000 are the same. and then following (or preceeding) the call to usubs with a call to usample (to generate the random samples) is acceptable.6.'nom'.6100 Unfortunately. delta to a random value (within its range) and rho to a specific value.'rand'.6 Building Uncertain Models B9 = A. set eta to its nominal value.2531 13. However.NominalValue B9 = 11.5) B10 = 10.'delta'.0000 0.'eta'. the 'Nominal' and 'Random' specifiers may not be used in the structure format. explicitly setting a field of the structure to an atom's nominal value.3333 6. Within A.0000 11. It is possible to only set some of the atoms to NominalValues.5 B10 = usubs(A.
Array Management for Uncertain Objects
All of the uncertain system classes (uss, ufrd) may be multidimensional arrays. This is intended to provide the same functionality as the LTI arrays of the Control System Toolbox. For these types of objects, the first two dimensions (system output and input) are interpreted differently from the 3rd, 4th, 5th and higher dimensions (which often model parametrized variability in the system input/output behavior). The first two dimensions correspond to the outputs and inputs of the system; any dimensions beyond are the array dimensions. umat objects are treated in the same manner: the first two dimensions are the rows and columns of the uncertain matrix, and any dimensions beyond are referred to as the array dimensions.

The command size returns a row vector with the sizes of all dimensions. Hence, if szM = size(M), then szM(3:end) are the sizes of the array dimensions of M.

Referencing Arrays
Suppose M is a umat, uss or ufrd with K array dimensions, and that Yidx and Uidx are vectors of integers. Then G = M(Yidx,Uidx) selects the outputs (rows) referred to by Yidx and the inputs (columns) referred to by Uidx, preserving all of the array dimensions. For example, if size(M) equals [4 5 3 6 7], then the size of M([4 2],[1 2 4]) is [2 3 3 6 7]. All array dimensions are preserved.

If idx1, idx2, ..., idxK are also vectors of integers, then M(Yidx,Uidx,idx1,...,idxK) selects the outputs and inputs referred to by Yidx and Uidx, and selects from each array dimension the "slices" referred to by the index vectors idx1, ..., idxK.

If size(M,1)==1 or size(M,2)==1, then single indexing on the inputs or outputs (rows or columns) is allowed: if Sidx is a vector of integers, then M(Sidx) selects the corresponding elements.
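The referencing rules above can be illustrated with a small sketch; the particular entries and sizes here are made up for illustration.

```matlab
% Build a 1-by-3 umat with one array dimension of size 3,
% then select outputs/inputs and array slices.
a = ureal('a',4);
M = stack(1,[a 2 3],[4 5 a],[a a a]);  % size(M) is [1 3 3]
size(M)                 % rows, columns, then the array dimensions
G = M(1,[1 3],[2 3]);   % row 1, columns 1 and 3, array slices 2 and 3
size(G)                 % a [1 2 2] selection, array dimensions preserved
```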
[a b 1]..[8 10 12 2 4 20 18]) is valid. hence the result is a 1by3by4by1 umat.4) equals length(idx2).1) equals length(Yidx). The expression G = M([1 3]. preserving the order dictated by MATLAB single indexing (e. size(G. For instance. b = ureal('b'. nominal = 2. a = ureal('a'. referred to as a 4by1 umat array.6 Building Uncertain Models idx2.[4 5 6].[a 0 0]) [array.2). or does it correspond to the first array dimension?”). the 10th element of a 7by4 array is the element in the (3. M = stack(1.2) position in the array).[5 3 1]. and less than K index vectors are used in doing the array referencing. then it is not allowable to combine single indexing in the output/input dimensions along with indexing in the array dimensions. nominal = 4. The result has size(G) equals [2 2 3 3 7]. size(G.K+2) equals length(idxK).3) equals length(idx1).. 1 occurrence variability = [1 1]. This will result in an ambiguity in how to interpret the second index vector in the expression (i. idxK index vectors.g. If M has K array dimensions. Creating Arrays with stack. The first argument of stack specifies in which array dimension the stacking occurs. b: real. then the MATLAB convention for single indexing is followed. size(M) ans = 1 3 4 arraysize(M) b 4+a].2) equals length(Uidx)..4). the stacking is done is the 1st array dimension. size(G.e. In the example below.[a UMAT: 1 Rows. size(G.. Create a [4by1] umat array by stacking four 1by3 umat objects with the stack command. suppose size(M) equals [3 4 6 5 7 4]. and M has array dimensions.. and size(G.. 3 Columns a: real. Note that if M has either one output (row) or one input (column).[2 3 4]. “does it correspond to the input/output reference. 1 occurrence 650 . 4 x 1] variability = [1 1].[1 4]. The last index vector [8 10 12 2 4 20 18] is used to reference into the 7by4 array. cat An easy manner to create an array is with stack. Consequently.
[4 5 6]) ans = 0 0 0 simplify(M(:.:. Use referencing to access parts of the [4by1] umat array and compare to the expected values.3) . The last two should be the value 5.2. both M and N can be recovered from M2. respectively.:.:. simplify(M(:. NominalValue 4.3. size(M2) ans = 1 3 4 arraysize(M2) ans = 4 2 2 As expected.:.Array Management for Uncertain Objects ans = 4 1 Check that result is valid. 651 .1) .3.[a b 1]) ans = 0 0 0 simplify(M(:.2) .M. The first 4 examples should all be arrays full of 0 (zeros).N). N = randn(1.2)4) Uncertain Real Parameter: Name a.4).1)M).[a b 4+a]) ans = 0 0 0 simplify(M(:. M2 = stack(2. and the uncertain real parameter a.:.:.3)) % should be 5 ans = 5 simplify(M(1. variability = [1 1] You can create a random 1by3by4 double matrix and stack this with M along the second array dimension. creating a 1by3by4by2 umat.[a 0 0]) ans = 0 0 0 simplify(M(1. d1 = simplify(M2(:.4) .
   d2 = simplify(M2(:,:,:,2)-N);
   [max(abs(d1(:))) max(abs(d2(:)))]
   ans =
        0     0

It is also possible to stack M and N along the 1st array dimension, creating a 1-by-3-by-8-by-1 umat.

   M3 = stack(1,M,N);
   size(M3)
   ans =
        1     3     8
   arraysize(M3)
   ans =
        8     1

As expected, both M and N can be recovered from M3.

   d3 = simplify(M3(:,:,1:4)-M);
   d4 = simplify(M3(:,:,5:8)-N);
   [max(abs(d3(:))) max(abs(d4(:)))]
   ans =
        0     0

Creating Arrays by Assignment

Arrays can be created by direct assignment. As with other MATLAB classes, there is no need to preallocate the variable first. Simply assign elements; all resizing is performed automatically. For instance, an equivalent construction to

   a = ureal('a',4);
   b = ureal('b',2);
   M = stack(1,[a b 1],[4 5 6],[a b 4+a],[a 0 0]);

is

   Mequiv(1,1,1) = a;
   Mequiv(1,2,1) = b;
   Mequiv(1,3,1) = 1;
   Mequiv(1,:,4) = [a 0 0];
   Mequiv(1,:,2:3) = stack(1,[4 5 6],[a b 4+a]);
The easiest manner for you to verify that the results are the same is to subtract and simplify.

   d5 = simplify(M-Mequiv);
   max(abs(d5(:)))
   ans =
        0

Binary Operations with Arrays

Most operations simply cycle through the array dimensions, doing pointwise operations. Assume A and B are umat (or uss, or ufrd) arrays with identical array dimensions (slot 3 and beyond). The operation C = fcn(A,B) is equivalent to looping on k1, k2, ..., setting

   C(:,:,k1,k2,...) = fcn(A(:,:,k1,k2,...),B(:,:,k1,k2,...))

The result C has the same array dimensions as A and B. The user is required to manage the extra dimensions (i.e., keep track of what they mean). Methods such as permute, squeeze and reshape are included to facilitate this management.

In general, any binary operation requires that the extra dimensions are compatible. The umat, uss and ufrd objects allow for a slightly more flexible interpretation than exact matching. Suppose the array dimensions of A are n1-by-...-by-nlA, and that the array dimensions of B are m1-by-...-by-mlB. By MATLAB convention, the infinite number of singleton (i.e., 1) trailing dimensions are not listed. The compatibility of the extra dimensions is determined by the following rule: if lA and lB differ, pad the shorter dimension list with trailing 1's. Now compare the extra dimensions: in the kth dimension, one of 3 conditions must hold: nk=mk, nk=1, or mk=1. In other words, nonsingleton dimensions must exactly match (so that the pointwise operation can be executed), and singleton dimensions match with anything, implicitly through a repmat.

Creating Arrays With usample

You can generate an array by sampling an uncertain object in some of the uncertain elements. For illustrative purposes, use the ureal objects a and b from above to create a 3-by-2 umat.
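The dimension-compatibility rule described under "Binary Operations with Arrays" above can be sketched as a small MATLAB check. The helper name below is hypothetical; it is not a toolbox function.

```matlab
% Sketch of the array-dimension compatibility rule: szA and szB hold the
% array dimensions (slot 3 and beyond) of two objects.
function ok = arraydims_compatible(szA,szB)
l = max(length(szA),length(szB));
szA = [szA ones(1,l-length(szA))];   % pad the shorter list with trailing 1s
szB = [szB ones(1,l-length(szB))];
% in each slot: sizes match exactly, or one of them is a singleton
ok = all(szA==szB | szA==1 | szB==1);
end
```

For example, array dimensions [20 1] and [20 15] would be compatible (the singleton expands), while [20 1] and [15 20] would not.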
   M = [a b;b*b a/b;1-b 1+a*b]
   UMAT: 3 Rows, 2 Columns
     a: real, nominal = 4, variability = [-1 1], 3 occurrences
     b: real, nominal = 2, variability = [-1 1], 6 occurrences

Sample (at 20 random points within its range) the uncertain real parameter b in the matrix M. This results in a 3-by-2-by-20 umat, with only one uncertain element, a. The uncertain element b of M has been "sampled out", leaving a new array dimension in its place.

   [Ms,bvalues] = usample(M,'b',20);
   size(M)
   ans =
        3     2
   size(Ms)
   ans =
        3     2    20
   Ms
   UMAT: 3 Rows, 2 Columns [array, 20 x 1]
     a: real, nominal = 4, variability = [-1 1], 3 occurrences

Continue sampling (at 15 random points within its range) the uncertain real parameter a in the matrix Ms. This results in a 3-by-2-by-20-by-15 double.

   [Mss,avalues] = usample(Ms,'a',15);
   size(Mss)
   ans =
        3     2    20    15
   class(Mss)
   ans =
   double

The above 2-step sequence can be performed in 1 step.

   [Mss,values] = usample(M,'b',20,'a',15);
   class(Mss)
   ans =
   double

In this case, values is a 20-by-15 struct array with 2 fields, b and a, whose values are the values used in the random sampling. It follows that usubs(M,values) is the same as Mss.
Rather than sampling each variable (a and b) independently, generating a 20-by-15 grid in a 2-dimensional space, the two-dimensional space can be sampled directly. Sample the 2-dimensional space with 800 points.

   [Ms,values] = usample(M,{'a' 'b'},800);
   size(Ms)
   ans =
        3     2   800
   size(values)
   ans =
      800     1

Creating Arrays With usubs

Suppose that Values is a struct array with the following properties: the field names of Values are some (or all) of the names of the uncertain elements of M; the dimensions of the contents of the fields within Values match the sizes of the uncertain elements within M; and the dimensions of Values "match" the array dimensions of M. Then usubs(M,Values) will substitute the uncertain elements in M with the contents found in the respective fields of Values, as described in the section "Creating Arrays With usample".

You can create a 3-by-2 uncertain matrix using 2 uncertain real parameters.

   a = ureal('a',4);
   b = ureal('b',2);
   M = [a b;b*b a/b;1-b 1+a*b];

Create a 5-by-1 struct array with field name a. Make its values random scalars. Substitute the uncertain real parameter a in M with the values in Avalue, yielding ma.

   Avalue = struct('a',num2cell(rand(5,1)));
   ma = usubs(M,Avalue)
   UMAT: 3 Rows, 2 Columns [array, 5 x 1]
     b: real, nominal = 2, variability = [-1 1], 6 occurrences

Create a 1-by-4 struct array with field name b. Similarly substitute the uncertain real parameter b in M with the values in Bvalue, yielding mb.

   Bvalue = struct('b',num2cell(rand(1,4)));
   mb = usubs(M,Bvalue)
   UMAT: 3 Rows, 2 Columns [array, 1 x 4]
     a: real, nominal = 4, variability = [-1 1], 3 occurrences

Continue, substituting the uncertain real parameter b in ma with the values in Bvalue, yielding mab. Do the analogous operation for mb, substituting the uncertain real parameter a with the values in Avalue, yielding mba. Subtract, and note that the difference is 0, as expected.

   mab = usubs(ma,Bvalue);
   mba = usubs(mb,Avalue);
   thediff = mab-mba;
   max(abs(thediff(:)))
   ans =
     4.4409e-016

Creating Arrays with gridureal

The command gridureal enables uniform sampling of specified uncertain real parameters within an uncertain object. It is a specialized case of usubs. gridureal removes a specified uncertain real parameter and adds an array dimension (to the end of the existing array dimensions). The new array dimension represents the uniform samples of the uncertain object along the specified uncertain real parameter's range.

Create a 2-by-2 uncertain matrix with 3 uncertain real parameters.

   a = ureal('a',3,'Range',[2.5 4]);
   b = ureal('b',2,'Percentage',15);
   c = ureal('c',4,'Plusminus',[-1 .3]);
   M = [a b;b c]
   UMAT: 2 Rows, 2 Columns
     a: real, nominal = 3, range = [2.5 4], 1 occurrence
     b: real, nominal = 2, variability = [-15 15]%, 2 occurrences
     c: real, nominal = 4, variability = [-1 0.3], 1 occurrence

Grid the uncertain real parameter b in M with 100 points. The result is a umat array, with dependence on the uncertain real parameters a and c.

   Mgrid1 = gridureal(M,'b',100)
   UMAT: 2 Rows, 2 Columns [array, 100 x 1]
     a: real, nominal = 3, range = [2.5 4], 1 occurrence
     c: real, nominal = 4, variability = [-1 0.3], 1 occurrence
Operating on the uncertain matrix M, grid the uncertain real parameter a with 20 points, the uncertain real parameter b with 12 points, and the uncertain real parameter c with 7 points. The result is a 2-by-2-by-20-by-12-by-7 double array.

   Mgrid3 = gridureal(M,'a',20,'b',12,'c',7);
   size(Mgrid3)
   ans =
        2     2    20    12     7

Creating Arrays with repmat

The MATLAB command repmat is used to replicate and tile arrays. It works on the built-in objects of MATLAB, namely double, char, etc., as well as the generalized container objects cell and struct. The identical functionality is provided for replicating and tiling uncertain elements (ureal, ultidyn, etc.) and umat objects. The syntax and behavior are the same as the manner in which repmat is used to replicate and tile matrices.

You can create an uncertain real parameter, and replicate it in a 2-by-3 uncertain matrix.

   a = ureal('a',5);
   Amat = repmat(a,[2 3])
   UMAT: 2 Rows, 3 Columns
     a: real, nominal = 5, variability = [-1 1], 1 occurrence

Compare to generating the same uncertain matrix through multiplication.

   Amat2 = a*ones(2,3);
   simplify(Amat-Amat2)
   ans =
        0     0     0
        0     0     0

Create (as in the section "Creating Arrays with stack, cat") a 4-by-1 umat array by stacking four 1-by-3 umat objects with the stack command. Then use repmat to tile this 1-by-3-by-4-by-1 umat into a 2-by-3-by-8-by-5 umat.

   a = ureal('a',4);
   b = ureal('b',2);
   M = stack(1,[a b 1],[4 5 6],[a b 4+a],[a 0 0]);
   size(M)
   ans =
        1     3     4
   Mtiled = repmat(M,[2 1 2 5])
   UMAT: 2 Rows, 3 Columns [array, 8 x 5]
     a: real, nominal = 4, variability = [-1 1], 1 occurrence
     b: real, nominal = 2, variability = [-1 1], 1 occurrence

Verify the equality of M and a few of the tiles of Mtiled.

   d1 = simplify(M-Mtiled(2,:,1:4,3));
   d2 = simplify(M-Mtiled(1,:,1:4,2));
   d3 = simplify(M-Mtiled(2,:,5:8,5));
   [max(abs(d1(:))) max(abs(d2(:))) max(abs(d3(:)))]
   ans =
        0     0     0

Note that repmat never increases the complexity of the representation of an uncertain object. The number of occurrences of each uncertain element remains the same, regardless of the extent of the replication and tiling.

Creating Arrays with repsys

Replicating and tiling uncertain state-space systems (uss) and uncertain frequency response data (ufrd) is done with repsys. The syntax and behavior of repsys for uss and ufrd objects are the same as those of the traditional repsys, which operates on ss and frd objects. Just as in those cases, the uncertain version of repsys also allows for diagonal tiling.

Using permute and ipermute

The commands permute and ipermute are generalizations of transpose, which exchanges the rows and columns of a two-dimensional matrix. permute(A,ORDER) rearranges the dimensions of A so that they are in the order specified by the vector ORDER. The array produced has the same values as A, but the order of the subscripts needed to access any particular element is rearranged as specified by ORDER. The elements of ORDER must be a rearrangement of the numbers from 1 to N.

All of the uncertain objects are essentially 2-dimensional (output and input) operators with array dependence. This means that the first 2 dimensions are treated differently from dimensions 3 and beyond. It is not permissible to permute across these groups.
For uss and ufrd objects, the elements of the ORDER vector refer only to the array dimensions. Hence, there is no possibility of permuting across the output/input and array dimension groups; the restriction is built into the syntax. If you need to permute the first two (output and input) dimensions, use the command transpose instead.

For umat arrays, the elements of the ORDER vector refer to all dimensions. The first two elements of ORDER must be a rearrangement of the numbers 1 and 2, and the remaining elements of ORDER must be a rearrangement of the numbers 3 through N. If either of those conditions fails, an error is generated. Hence, for umat arrays, the restriction is enforced in the software. For umat arrays, either permute or transpose can be used to effect the transpose operation.
Decomposing Uncertain Objects (for Advanced Users)

Each uncertain object (umat, uss, ufrd) is a generalized feedback connection (lft) of a not-uncertain object (e.g., double, ss, frd) with a diagonal augmentation of uncertain atoms (ureal, ultidyn, ucomplex, ucomplexm, udyn). In robust control jargon, this decomposition is often called "the M/Δ form." The purpose of the uncertain objects (ureal, ultidyn, umat, uss, ufrd, etc.) is to hide this underlying decomposition, and allow the user to focus on modeling and analyzing uncertain systems, rather than on the details of correctly propagating the M/Δ representation in manipulations. Nevertheless, advanced users may want access to the familiar M/Δ form. The command lftdata accomplishes this decomposition.

Since ureal, ucomplex and ucomplexm objects do not necessarily have their NominalValue at zero, and, in the case of ureal objects, moreover are not symmetric about the NominalValue, some details are required in describing the decomposition.

Normalizing Functions for Uncertain Atoms

Associated with each uncertain element is a normalizing function. The normalizing function maps the uncertain element into a normalized uncertain element. If ρ is an uncertain real parameter, with range [L R] and nominal value N, then the normalizing function F is

   F(ρ) = (A + Bρ)/(C + Dρ)

with the property that for all ρ satisfying L ≤ ρ ≤ R, it follows that −1 ≤ F(ρ) ≤ 1; moreover, F(L) = −1, F(N) = 0, and F(R) = 1. It is left as an exercise for the user to work out the values of A, B, C and D as functions of the nominal value and range.

If E is an uncertain gain-bounded, linear, time-invariant dynamic uncertainty, with gain-bound β, then the normalizing function F is

   F(E) = (1/β) E

If E is an uncertain positive-real, linear, time-invariant dynamic uncertainty, with positivity bound β, then the normalizing function F is

   F(E) = [I − α(E − (β/2)I)]^-1 [I + α(E − (β/2)I)]
where α = 2β + 1.

The normalizing function for an uncertain complex parameter ξ, with nominal value C and radius γ, is

   F(ξ) = (ξ − C)/γ

The normalizing function for uncertain complex matrices H, with nominal value N and weights WL and WR, is

   F(H) = WL^-1 (H − N) WR^-1

In each case, as the uncertain atom varies over its range, the absolute value of the normalizing function (or norm, in the matrix case) varies from 0 to 1.

Properties of the Decomposition

Take an uncertain object A, dependent on uncertain real parameters ρ1,...,ρN, uncertain complex parameters ξ1,...,ξK, uncertain complex matrices H1,...,HB, uncertain gain-bounded linear, time-invariant dynamics E1,...,ED, and uncertain positive-real linear, time-invariant dynamics P1,...,PQ. Write A(ρ,ξ,H,E,P) to indicate this dependence. Using lftdata, A can be decomposed into two separate pieces, M and Δ(ρ,ξ,H,E,P), with the following properties: M is certain (i.e., if A is umat, then M is double; if A is uss, then M is ss; if A is ufrd, then M is frd); Δ is always a umat, depending on the same uncertain elements as A, with ranges, bounds, weights, etc., unaltered. The form of Δ is block diagonal, with elements made up of the normalizing functions acting on the individual uncertain elements:

   Δ(ρ,ξ,H,E,P) = blkdiag( F(ρ), F(ξ), F(H), F(E), F(P) )

A(ρ,ξ,H,E,P) is given by a linear fractional transformation of M and Δ(ρ,ξ,H,E,P):

   A(ρ,ξ,H,E,P) = M22 + M21 Δ(ρ,ξ,H,E,P) [I − M11 Δ(ρ,ξ,H,E,P)]^-1 M12
The order of the normalized atoms making up Δ is not the simple order shown above. It is actually the same order as given by the command fieldnames(A.Uncertainty). See the section "Advanced Syntax of lftdata" for more information.

Syntax of lftdata

The decomposition is carried out by the command lftdata. You can create a 2-by-2 umat named A using 3 uncertain real parameters.

   delta = ureal('delta',2);
   eta = ureal('eta',6);
   rho = ureal('rho',-1);
   A = [3+delta+eta delta/eta; 7+rho rho+delta*eta]
   UMAT: 2 Rows, 2 Columns
     delta: real, nominal = 2, variability = [-1 1], 2 occurrences
     eta: real, nominal = 6, variability = [-1 1], 3 occurrences
     rho: real, nominal = -1, variability = [-1 1], 1 occurrence

Note that A depends on 2 occurrences of delta, 3 occurrences of eta and 1 occurrence of rho. Decompose A into M and Delta. Note that M is a double, and Delta has the same uncertainty dependence as A.

   [M,Delta] = lftdata(A);
   M
   (an 8-by-8 double matrix is displayed)
   Delta
   UMAT: 6 Rows, 6 Columns
     delta: real, nominal = 2, variability = [-1 1], 2 occurrences
     eta: real, nominal = 6, variability = [-1 1], 3 occurrences
     rho: real, nominal = -1, variability = [-1 1], 1 occurrence

Sample Delta at 5 points. Things to note are: it is diagonal; the values range between -1 and 1; and in each sample there are only 3 independent values, the duplication of entries being consistent with the dependence of Delta and A on the 3 uncertain real parameters.

   usample(Delta,5)
   (five 6-by-6 diagonal matrices are displayed; in each, the first two
   diagonal entries are equal and the next three are equal)
In fact, verify that the maximum gain of Delta is indeed 1.

   maxnorm = wcnorm(Delta)
   maxnorm =
       lbound: 1.0000
       ubound: 1.0004

Finally, verify that lft(Delta,M) is the same as A. Subtract (and use the 'full' option in simplify).

   simplify(lft(Delta,M)-A,'full')
   ans =
        0     0
        0     0

Advanced Syntax of lftdata

Even for the advanced user, the variable Delta will actually not be that useful, as it is still a complex object. On the other hand, its internal structure is described completely using a 3rd output argument.

   [M,Delta,BlkStruct] = lftdata(A);

The rows of BlkStruct correspond to the uncertain atoms named in fieldnames(A.Uncertainty). Note that the range/bound information about each uncertain atom is not included in BlkStruct.

   BlkStruct
   BlkStruct =
        1     1     2     1
        1     1     3     1
        1     1     1     1
   fieldnames(A.Uncertainty)
   ans =
       'delta'
       'eta'
       'rho'

Together, these mean that Delta is a block diagonal augmentation of the normalized versions of 3 uncertain atoms. The first atom is named 'delta'. It is 1-by-1 (first two columns of BlkStruct); there are 2 copies diagonally augmented (3rd column of BlkStruct); and it is of type 1 (4th column of BlkStruct), which means ureal. The second atom is named 'eta'. It is 1-by-1; there are 3 copies diagonally augmented; and it is of type 1. The third atom is named 'rho'. It is 1-by-1; there is 1 copy; and it is of type 1. Other types include: ultidyn is type 2, ucomplex is type 3, ucomplexm is type 4, and udyn is type 5. Hence, by manipulating M and BlkStruct, a power user has direct access to all of the linear fractional transformation details, and can easily work at the level of the theorems and algorithms that underlie the methods.
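Returning to the exercise posed in "Normalizing Functions for Uncertain Atoms" (finding A, B, C, D for the real-parameter normalizing function): one consistent solution, derived here from the stated boundary conditions rather than quoted from the manual, is the following.

```latex
% Require F(L) = -1, F(N) = 0, F(R) = +1 for F(\rho) = (A + B\rho)/(C + D\rho).
% F(N) = 0 forces A = -BN, so the numerator is proportional to (\rho - N).
% Imposing F(R) = 1 and F(L) = -1, and fixing the free scaling as B = R - L:
F(\rho) \;=\; \frac{(R-L)\,(\rho - N)}{(R+L-2N)\,\rho \;+\; \bigl(NR + NL - 2LR\bigr)},
\qquad
\begin{aligned}
A &= -N(R-L), & B &= R-L,\\
C &= NR+NL-2LR, & D &= R+L-2N.
\end{aligned}
% Checks: F(L) = -1, F(N) = 0, F(R) = +1; when N = (L+R)/2 the denominator's
% \rho-coefficient vanishes and this reduces to the familiar affine
% normalization F(\rho) = 2(\rho - N)/(R-L).
```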
7 Generalized Robustness Analysis

Introduction to Generalized Robustness Analysis: What is generalized robustness analysis?
Robust Stability Margin: A brief discussion of robust stability margins
Robust Performance Margin: The definition of robust performance margins
Worst-Case Gain Measure: The maximum achievable gain over all uncertain system objects
Introduction to Generalized Robustness Analysis

The performance of a nominally stable uncertain system model will generally degrade for specific values of its uncertain elements. Here, system performance is characterized by system gain (e.g., peak magnitude on a Bode plot). Interpreting the system as the relationship mapping disturbances/commands to errors, small system gains are desirable, and large gains are undesirable. Moreover, the maximum possible degradation increases as the uncertain elements are allowed to deviate further and further from their nominal values.

The graph below shows the typical tradeoff curve between allowable deviation of uncertain elements from their nominal values and the worst-case degradation in system performance.

[Figure: Maximum System Gain over Uncertainty. The system performance degradation curve plots the maximum system gain (vertical axis, "Maximum System Gain due to varying amounts of uncertainty") against the bound on normalized uncertainty (horizontal axis). Annotations mark the nominal system gain at zero uncertainty and note that the system gain can be as large as 1.72 if the uncertain elements can deviate from their nominal values by 1.5 units.]

When all uncertain elements are set to their nominal values (i.e., zero deviation from their nominal values), the input/output gain of the system is its
nominal value. In the figure, the nominal system gain is about 0.8. As the uncertainties are allowed to deviate from nominal, the maximum (over the uncertain elements) system gain increases. The heavy blue line represents the maximum system gain due to uncertainty of various sizes (the horizontal axis), and is called the system performance degradation curve. It is monotonically increasing.

Generally, "robustness computations" refer to determining specific attributes of the system performance degradation curve. The commands robuststab, robustperf and wcgain all compute single scalar attributes of the system performance degradation curve.

Redraw the system performance degradation curve with 3 additional curves: a hyperbola defined by xy = 1; a vertical line drawn at the uncertainty bound = 1; and a vertical line tangent to the asymptotic behavior of the performance
7 Generalized Robustness Analysis degradation curve at large uncertainty bounds.5 2 Uncertainty level at which system can become unstable 1.5 WCGain=1.5 2 2.5 1 1. 3 Maximum System Gain over Uncertainty WCGain uses bound of 1 on normalized uncertainty 2.5 Bound on Normalized Uncertainty 3 74 . These are used to define three robustness measures. performance tradeoff space 1 PerfMarg=0.22 y=1/x curve in uncertainty size .88 0.vs.9 0 0 0.5 StabMarg=1. explained next.
Robust Stability Margin

The robust stability margin, StabMarg, is the size of the smallest deviation from nominal of the uncertain elements that leads to system instability. System instability is equivalent to the system gain becoming arbitrarily large, and hence is characterized by the vertical line tangent to the asymptotic behavior of the performance degradation curve.

[Figure: Robust Stability Margin. The system performance degradation curve with its vertical asymptote at StabMarg = 1.5, the uncertainty level at which the system can become unstable.]
Robust Performance Margin

The hyperbola is used to define the performance margin, PerfMarg. The point where the system performance degradation curve crosses the hyperbola is used as a scalar measure of the robustness of a system to uncertainty. The horizontal coordinate of the crossing point is the robust performance margin. Systems whose performance degradation curve intersects high on the hyperbola represent "non-robustly performing systems," in that very small deviations of the uncertain elements from their nominal values can result in very large system gains. Conversely, an intersection low on the hyperbola represents "robustly performing systems."

[Figure: Robust Performance Margin. The system performance degradation curve and the y = 1/x hyperbola in the uncertainty-size versus performance tradeoff space; the crossing point has horizontal coordinate PerfMarg = 0.88.]
Worst-Case Gain Measure

The worst-case gain measure is the maximum achievable system gain over all uncertain elements whose normalized size is bounded by 1. On the graph, this is the vertical coordinate of the performance degradation curve as it crosses the vertical line drawn at the uncertainty bound = 1.

[Figure: Worst-Case Gain Measure. The system performance degradation curve crossing the vertical line at normalized uncertainty bound = 1; the crossing point has vertical coordinate WCGain = 1.22.]

Each measure captures a single scalar attribute of the system performance degradation curve. Consequently, they are independent quantities, answering subtly different questions. It is possible, for two uncertain systems sysA and sysB, that the StabMarg of sysA is larger than the StabMarg of sysB, though the PerfMarg of sysA is smaller than the PerfMarg of sysB. Nevertheless, they are useful metrics for a concise description of the robustness of a system (uss or ufrd) due to various uncertain elements.
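A minimal sketch of computing the three measures with the commands named above. The uncertain plant P below is a made-up example, not one taken from this chapter.

```matlab
% Sketch: the three robustness measures for a simple uncertain system.
k = ureal('k',1,'Percentage',20);          % uncertain gain
tau = ureal('tau',0.5,'Range',[0.3 0.8]);  % uncertain time constant
P = tf(k,[tau 1]);                         % uncertain first-order uss

[stabmarg,destabunc] = robuststab(P);   % robust stability margin (StabMarg)
[perfmarg,perfunc] = robustperf(P);     % robust performance margin (PerfMarg)
[maxgain,wcunc] = wcgain(P);            % worst-case gain measure (WCGain)
```

Each command also returns the element values achieving the reported margin (destabunc, perfunc, wcunc), which can be substituted back with usubs to verify the result.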
8 Introduction to Linear Matrix Inequalities

Linear Matrix Inequalities: An introduction to the concept of linear matrix inequalities and what the LMI functionality can do for you
LMIs and LMI Problems: The basic properties of LMIs
Further Mathematical Background: Detailed mathematical development of LMI theory
References: Relevant papers on linear matrix inequalities
Linear Matrix Inequalities

Linear Matrix Inequalities (LMIs) and LMI techniques have emerged as powerful design tools in areas ranging from control engineering to system identification and structural design. Three factors make LMI techniques appealing:

• A variety of design specifications and constraints can be expressed as LMIs.
• Once formulated in terms of LMIs, a problem can be solved exactly by efficient convex optimization algorithms (the "LMI solvers").
• While most problems with multiple constraints or objectives lack analytical solutions in terms of matrix equations, they often remain tractable in the LMI framework. This makes LMI-based design a valuable alternative to classical "analytical" methods.

See [9] for a good introduction to LMI concepts.

LMI Features

The Robust Control Toolbox LMI functionality serves two purposes:

• Provide state-of-the-art tools for the LMI-based analysis and design of robust control systems
• Offer a flexible and user-friendly environment to specify and solve general LMI problems (the LMI Lab)

The LMI Control Toolbox is designed as an easy and progressive gateway to the new and fast-growing field of LMIs:

• For users who occasionally need to solve LMI problems, the "LMI Editor" and the tutorial introduction to LMI concepts and LMI solvers provide for quick and easy problem solving.
• For more experienced LMI users, the "LMI Lab" offers a rich, flexible, and fully programmable environment to develop customized LMI-based tools.

Examples of LMI-based analysis and design tools include:

• Functions to analyze the robust stability and performance of uncertain systems with varying parameters (popov, quadstab, quadperf, ...)
• Functions to design robust control with a mix of H2, H∞, and pole placement objectives (h2hinfsyn)
• Functions for synthesizing robust gain-scheduled H∞ controllers (hinfgs)

For users interested in developing their own applications, the LMI Lab provides a general-purpose and fully programmable environment to specify and solve virtually any LMI problem. Note that the scope of this facility is by no means restricted to control-oriented applications.

Note  The LMI Control Toolbox implements state-of-the-art interior-point LMI solvers. While these solvers are significantly faster than classical convex optimization algorithms, you should keep in mind that the complexity of LMI computations can grow quickly with the problem order (number of states). For example, the number of operations required to solve a Riccati equation is o(n^3), where n is the state dimension, while the cost of solving an equivalent "Riccati inequality" LMI is o(n^6).
LMIs and LMI Problems

A linear matrix inequality (LMI) is any constraint of the form

   A(x) := A0 + x1 A1 + ... + xN AN < 0        (8-1)

where

• x = (x1, ..., xN) is a vector of unknown scalars (the decision or optimization variables)
• A0, ..., AN are given symmetric matrices
• < 0 stands for "negative definite," i.e., the largest eigenvalue of A(x) is negative

Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of (8-1) since they can be rewritten as -A(x) < 0 and A(x) - B(x) < 0, respectively.

The LMI (8-1) is a convex constraint on x since A(y) < 0 and A(z) < 0 imply that A((y+z)/2) < 0. As a result:

• Its solution set, called the feasible set, is a convex subset of R^N
• Finding a solution x to (8-1), if any, is a convex optimization problem

Convexity has an important consequence: even though (8-1) has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

   A1(x) < 0, ..., AK(x) < 0    is equivalent to    A(x) := diag(A1(x), ..., AK(x)) < 0

where diag(A1(x), ..., AK(x)) denotes the block-diagonal matrix with A1(x), ..., AK(x) on its diagonal. Hence multiple LMI constraints can be imposed on the vector of decision variables x without destroying convexity.
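The convexity claim above follows in one line from the affineness of A(x); spelled out (a derivation consistent with the text, not quoted from it):

```latex
% A is affine in x: A(x) = A_0 + \sum_i x_i A_i, so for any y, z and
% \theta \in [0,1],
A\bigl(\theta y + (1-\theta) z\bigr)
  = A_0 + \sum_{i=1}^{N} \bigl(\theta y_i + (1-\theta) z_i\bigr) A_i
  = \theta A(y) + (1-\theta) A(z).
% A convex combination of negative definite matrices is negative definite
% (for any nonzero v, v^{T}\bigl[\theta A(y) + (1-\theta)A(z)\bigr]v < 0),
% so the feasible set \{x : A(x) < 0\} is convex; \theta = 1/2 gives the
% midpoint case quoted in the text.
```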
In most control applications, LMIs do not naturally arise in the canonical form (8-1), but rather in the form

   L(X1, ..., Xn) < R(X1, ..., Xn)

where L(.) and R(.) are affine functions of some structured matrix variables X1, ..., Xn. A simple example is the Lyapunov inequality

   A^T X + XA < 0        (8-2)

where the unknown X is a symmetric matrix. Defining x1, ..., xN as the independent scalar entries of X, this LMI could be rewritten in the form (8-1). Yet it is more convenient and efficient to describe it in its natural form (8-2), which is the approach taken in the LMI Lab.

The Three Generic LMI Problems

Finding a solution x to the LMI system

   A(x) < 0        (8-3)

is called the feasibility problem. Minimizing a convex objective under LMI constraints is also a convex problem. In particular, the linear objective minimization problem

   Minimize c^T x subject to A(x) < 0        (8-4)

plays an important role in LMI-based design. Finally, the generalized eigenvalue minimization problem

   Minimize λ subject to A(x) < λB(x), B(x) > 0, C(x) < 0        (8-5)

is quasi-convex and can be solved by similar techniques. It owes its name to the fact that λ is related to the largest generalized eigenvalue of the pencil (A(x),B(x)).

Many control problems and design specifications have LMI formulations [9]. This is especially true for Lyapunov-based analysis and design, but also for
optimal LQG control, H∞ control, covariance control, and so on. Further applications of LMIs arise in estimation, identification, optimal design, structural design [6, 7], matrix scaling problems, and more. The main strength of LMI formulations is the ability to combine various design constraints or objectives in a numerically tractable manner.

A nonexhaustive list of problems addressed by LMI techniques includes the following:

• Robust stability of systems with LTI uncertainty (µ-analysis) ([24], [21], [27])
• Robust stability in the face of sector-bounded nonlinearities (Popov criterion) ([22], [28], [13], [16])
• Quadratic stability of differential inclusions ([15], [9])
• Lyapunov stability of parameter-dependent systems [12]
• Input/state/output properties of LTI systems (invariant ellipsoids, decay rate, etc.) [9]
• Multi-model/multi-objective state feedback design ([4], [17], [3], [10])
• Robust pole placement
• Optimal LQG control [9]
• Robust H∞ control ([11], [14])
• Multi-objective H∞ synthesis ([18], [23], [8])
• Design of robust gain-scheduled controllers ([5], [2])
• Control of stochastic systems [9]
• Weighted interpolation problems [9]

To hint at the principles underlying LMI design, let's review the LMI formulations of a few typical design objectives.

Stability

The stability of the dynamic system

   dx/dt = Ax

is equivalent to the feasibility of

   Find P = P^T such that A^T P + PA < 0, P > I.
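As an illustration of how this feasibility problem is entered in the LMI Lab, here is a sketch using the LMI Lab commands setlmis, lmivar, lmiterm, getlmis, feasp and dec2mat; the matrix A below is an arbitrary stable example of my own choosing.

```matlab
% Sketch: quadratic stability test  A'P + PA < 0,  P > I  via the LMI Lab.
A = [-1 2; 0 -3];           % an arbitrary stable test matrix (assumption)

setlmis([]);                % start describing a new LMI system
P = lmivar(1,[2 1]);        % P: 2-by-2 symmetric matrix variable

lmiterm([1 1 1 P],1,A,'s'); % LMI #1, left side:  P*A + A'*P  (< 0)
lmiterm([-2 1 1 P],1,1);    % LMI #2, right side: P
lmiterm([2 1 1 0],1);       % LMI #2, left side:  I   (so I < P, i.e. P > I)
lmis = getlmis;

[tmin,xfeas] = feasp(lmis); % tmin < 0 indicates the LMIs are feasible
Pfeas = dec2mat(lmis,xfeas,P)
```

A negative sign on the first index in lmiterm places a term on the right side of that LMI; the 's' flag symmetrizes the term, producing P*A + A'*P in one call.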
This can be generalized to linear differential inclusions (LDI)

    ẋ = A(t)x

where A(t) varies in the convex envelope of a set of LTI models:

    A(t) ∈ Co{A1, . . ., An} = { Σ_{i=1..n} ai Ai : ai ≥ 0, Σ_{i=1..n} ai = 1 }

A sufficient condition for the asymptotic stability of this LDI is the feasibility of

    Find P = P^T such that Ai^T P + P Ai < 0, P > I.

RMS Gain

The root-mean-square (RMS) gain of a stable LTI system

    ẋ = Ax + Bu
    y = Cx + Du

is the largest input/output gain over all bounded inputs u(t). This gain is the global minimum of the following linear objective minimization problem [1], [25], [26]:

    Minimize γ over X = X^T and γ such that

    [ A^T X + XA   XB    C^T ]
    [ B^T X        -γI   D^T ]  < 0
    [ C            D     -γI ]

    and X > 0.

LQG Performance

For a stable LTI system

    G:  ẋ = Ax + Bw
        y = Cx
where w is a white noise disturbance with unit covariance, the LQG or H2 performance ||G||_2 is defined by

    ||G||_2^2 := lim_{T→∞} E{ (1/T) ∫_0^T y^T(t) y(t) dt }
              = (1/2π) ∫_{-∞}^{+∞} Trace( G^H(jω) G(jω) ) dω

It can be shown that

    ||G||_2^2 = inf { Trace(C P C^T) : A P + P A^T + B B^T < 0 }

Hence ||G||_2^2 is the global minimum of the LMI problem

    Minimize Trace(Q) over the symmetric matrices P, Q such that

    A P + P A^T + B B^T < 0

    [ Q       C P ]
    [ P C^T   P   ]  > 0

Again, this is a linear objective minimization problem since the objective Trace(Q) is linear in the decision variables (the free entries of P and Q).
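The Trace(C P C^T) characterization can be checked numerically: the infimum above is attained at equality, so solving the Lyapunov equation A P + P A^T + B B^T = 0 for a stable A and evaluating the trace yields ||G||_2^2. A NumPy sketch, illustrative only (not the toolbox implementation):

```python
import numpy as np

def h2_norm_squared(A, B, C):
    """||G||_2^2 = Trace(C P C^T), where P solves the Lyapunov equation
    A P + P A^T + B B^T = 0 (A stable). Solved via the vectorized form."""
    n = A.shape[0]
    I = np.eye(n)
    # Row-major vec: (I kron A) + (A kron I) maps vec(P) to vec(AP + PA^T)
    K = np.kron(I, A) + np.kron(A, I)
    P = np.linalg.solve(K, -(B @ B.T).flatten()).reshape(n, n)
    return np.trace(C @ P @ C.T)

# First-order lag G(s) = 1/(s+1): here P = 1/2, so ||G||_2 = 1/sqrt(2).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
h2sq = h2_norm_squared(A, B, C)
assert abs(h2sq - 0.5) < 1e-12
```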
Further Mathematical Background

Efficient interior-point algorithms are now available to solve the three generic LMI problems (8-2)–(8-4) defined in "The Three Generic LMI Problems" on page 8-5. These algorithms have a polynomial-time complexity. That is, the number N(ε) of flops needed to compute an ε-accurate solution is bounded by

    M N^3 log(V/ε)

where M is the total row size of the LMI system, N is the total number of scalar decision variables, and V is a data-dependent scaling factor. The Robust Control Toolbox implements the Projective Algorithm of Nesterov and Nemirovski [20], [19]. In addition to its polynomial-time complexity, this algorithm does not require an initial feasible point for the linear objective minimization problem (8-3) or the generalized eigenvalue minimization problem (8-4).

Some LMI problems are formulated in terms of inequalities rather than strict inequalities. For instance, a variant of (8-3) is

    Minimize c^T x subject to A(x) ≤ 0.

While this distinction is immaterial in general, it matters when A(x) can be made negative semidefinite but not negative definite. A simple example is

    Minimize c^T x subject to  [ x  x ]
                               [ x  x ]  ≥ 0.                   (8-5)

Such problems cannot be handled directly by interior-point methods, which require strict feasibility of the LMI constraints. A well-posed reformulation of (8-5) would be

    Minimize c^T x subject to x ≥ 0.

Keeping this subtlety in mind, we always use strict inequalities in this manual.
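The degeneracy of the example above is easy to see numerically: the constraint matrix has a zero eigenvalue for every value of x, so no strictly feasible point exists, which is exactly what trips up an interior-point method. A small NumPy check, illustrative only:

```python
import numpy as np

# [x x; x x] >= 0 holds for any x >= 0, but the matrix is singular for
# every x (eigenvalues 0 and 2x), so strict feasibility never holds.
for x in (0.5, 1.0, 10.0):
    eigs = np.linalg.eigvalsh(np.array([[x, x], [x, x]]))
    assert abs(eigs[0]) < 1e-12   # smallest eigenvalue is always 0
    assert eigs[1] > 0            # largest eigenvalue is 2x
```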
References

[1] Anderson, B.D.O., and S. Vongpanitlerd, Network Analysis, Prentice-Hall, Englewood Cliffs, 1973.

[2] Apkarian, P., P. Gahinet, and G. Becker, "Self-Scheduled H∞ Control of Linear Parameter-Varying Systems," Proc. Amer. Contr. Conf., 1994, pp. 856–860.

[3] Bambang, R., E. Shimemura, and K. Uchida, "Mixed H2/H∞ Control with Pole Placement: State-Feedback Case," Proc. Amer. Contr. Conf., 1993, pp. 2777–2779.

[4] Barmish, B.R., "Stabilization of Uncertain Systems via Linear Control," IEEE Trans. Aut. Contr., AC-28 (1983), pp. 848–850.

[5] Becker, G., and A. Packard, "Robust Performance of Linear Parametrically Varying Systems Using Parametrically-Dependent Linear Feedback," Systems and Control Letters, 23 (1994), pp. 205–215.

[6] Bendsoe, M.P., A. Ben-Tal, and J. Zowe, "Optimization Methods for Truss Geometry and Topology Design," to appear in Structural Optimization.

[7] Ben-Tal, A., and A. Nemirovski, "Potential Reduction Polynomial-Time Method for Truss Topology Design," to appear in SIAM J. Contr. Opt.

[8] Boyd, S., and Q. Yang, "Structured and Simultaneous Lyapunov Functions for System Stability Problems," Int. J. Contr., 49 (1989), pp. 2215–2240.

[9] Boyd, S., L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory, SIAM Books, Philadelphia, 1994.

[10] Chilali, M., and P. Gahinet, "H∞ Design with Pole Placement Constraints: an LMI Approach," to appear in IEEE Trans. Aut. Contr. Also in Proc. Conf. Dec. Contr., 1994, pp. 553–558.

[11] Gahinet, P., and P. Apkarian, "A Linear Matrix Inequality Approach to H∞ Control," Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.

[12] Gahinet, P., P. Apkarian, and M. Chilali, "Affine Parameter-Dependent Lyapunov Functions for Real Parametric Uncertainty," Proc. Conf. Dec. Contr., 1994, pp. 2026–2031.

[13] Haddad, W.M., and D.S. Berstein, "Parameter-Dependent Lyapunov Functions, Constant Real Parameter Uncertainty, and the Popov Criterion in Robust Analysis and Synthesis: Part 1 and 2," Proc. Conf. Dec. Contr., 1991, pp. 2274–2279 and 2632–2633.

[14] Iwasaki, T., and R.E. Skelton, "All Controllers for the General H∞ Control Problem: LMI Existence Conditions and State-Space Formulas," Automatica, 30 (1994), pp. 1307–1317.

[15] Horisberger, H.P., and P.R. Belanger, "Regulators for Linear Time-Varying Plants with Uncertain Parameters," IEEE Trans. Aut. Contr., AC-21 (1976), pp. 705–708.

[16] How, J.P., and S.R. Hall, "Connection between the Popov Stability Criterion and Bounds for Real Parameter Uncertainty," Proc. Amer. Contr. Conf., 1993, pp. 1084–1089.

[17] Khargonekar, P.P., and M.A. Rotea, "Mixed H2/H∞ Control: a Convex Optimization Approach," IEEE Trans. Aut. Contr., 36 (1991), pp. 824–837.

[18] Masubuchi, I., A. Ohara, and N. Suda, "LMI-Based Controller Synthesis: A Unified Formulation and Solution," submitted to Int. J. Robust and Nonlinear Contr., 1994.

[19] Nemirovski, A., and P. Gahinet, "The Projective Method for Solving Linear Matrix Inequalities," Proc. Amer. Contr. Conf., 1994, pp. 840–844.

[20] Nesterov, Yu., and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM Books, Philadelphia, 1994.

[21] Packard, A., and J.C. Doyle, "The Complex Structured Singular Value," Automatica, 29 (1993), pp. 71–109.

[22] Popov, V.M., "Absolute Stability of Nonlinear Systems of Automatic Control," Automation and Remote Control, 22 (1962), pp. 857–875.

[23] Scherer, C., "Mixed H2/H∞ Control," to appear in Trends in Control: A European Perspective, volume of the special contributions to the ECC 1995.

[24] Stein, G., and J.C. Doyle, "Beyond Singular Values and Loop Shapes," J. Guidance, 14 (1991), pp. 5–16.

[25] Vidyasagar, M., Nonlinear System Analysis, Prentice-Hall, Englewood Cliffs, 1992.

[26] Willems, J.C., "Least-Squares Stationary Optimal Control and the Algebraic Riccati Equation," IEEE Trans. Aut. Contr., AC-16 (1971), pp. 621–634.

[27] Young, P.M., M.P. Newlin, and J.C. Doyle, "Let's Get Real," in Robust Control Theory, Springer Verlag, 1994, pp. 143–174.

[28] Zames, G., "On the Input-Output Stability of Time-Varying Nonlinear Feedback Systems, Part I and II," IEEE Trans. Aut. Contr., AC-11 (1966), pp. 228–238 and 465–476.
9 The LMI Lab

"Introduction" on page 9-2 — A quick look at the LMI Lab: canonical forms of the LMI problem, and the tools available in the LMI Lab
Specifying a System of LMIs (p. 9-7) — How to set up a system of LMIs
Querying the LMI System Description (p. 9-19) — How to find out characteristics of your LMI system
LMI Solvers (p. 9-20) — How the LMI solvers work
From Decision to Matrix Variables and Vice Versa (p. 9-26) — The relationships between decision variables and matrix variables
Validating Results (p. 9-27) — Verifying your solutions
Modifying a System of LMIs (p. 9-28) — Adapting an existing system to a new problem
Advanced Topics (p. 9-31) — Topics for the experienced user
References (p. 9-42) — A list of relevant papers
Introduction

The LMI Lab is a high-performance package for solving general LMI problems. It blends simple tools for the specification and manipulation of LMIs with powerful LMI solvers for three generic LMI problems. Thanks to a structure-oriented representation of LMIs, the various LMI constraints can be described in their natural block-matrix form. Similarly, the optimization variables are specified directly as matrix variables with some given structure. Once an LMI problem is specified, it can be solved numerically by calling the appropriate LMI solver. The three solvers feasp, mincx, and gevp constitute the computational engine of the LMI Control Toolbox. Their high performance is achieved through C-MEX implementation and by taking advantage of the particular structure of each LMI.

The LMI Lab offers tools to

• Specify LMI systems either symbolically with the LMI Editor or incrementally with the lmivar and lmiterm commands
• Retrieve information about existing systems of LMIs
• Modify existing systems of LMIs
• Solve the three generic LMI problems (feasibility problem, linear objective minimization, and generalized eigenvalue minimization)
• Validate results

This chapter gives a tutorial introduction to the LMI Lab as well as more advanced tips for making the most out of its potential. The tutorial material is also covered by the demo lmidem.

Some Terminology

Any linear matrix inequality can be expressed in the canonical form

    L(x) = L0 + x1 L1 + . . . + xN LN < 0

where

• L0, L1, . . ., LN are given symmetric matrices
• x = (x1, . . ., xN)^T ∈ R^N is the vector of scalar variables to be determined. We refer to x1, . . ., xN as the decision variables. The names "design variables" and "optimization variables" are also found in the literature.
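As a concrete illustration of this canonical form, the Lyapunov inequality A^T X + XA < 0 with A = [-1 2; 0 -2] (used as an example later in this chapter) expands into x1 L1 + x2 L2 + x3 L3, where the Li are fixed symmetric matrices. The following NumPy check is illustrative only:

```python
import numpy as np

# Expansion of A^T X + XA into the canonical form x1*L1 + x2*L2 + x3*L3
A  = np.array([[-1.0, 2.0], [0.0, -2.0]])
L1 = np.array([[-2.0, 2.0], [2.0, 0.0]])    # coefficient of x1
L2 = np.array([[0.0, -3.0], [-3.0, 4.0]])   # coefficient of x2
L3 = np.array([[0.0, 0.0], [0.0, -4.0]])    # coefficient of x3

x1, x2, x3 = 0.7, -1.3, 2.1                 # arbitrary decision variables
X = np.array([[x1, x2], [x2, x3]])          # symmetric matrix variable
lhs = A.T @ X + X @ A                       # structured form
rhs = x1 * L1 + x2 * L2 + x3 * L3           # canonical form
assert np.allclose(lhs, rhs)
```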
Even though this canonical expression is generic, LMIs rarely arise in this form in control applications. Consider for instance the Lyapunov inequality

    A^T X + XA < 0                                              (9-1)

where

    A = [ -1  2 ]     and the variable     X = [ x1  x2 ]
        [  0 -2 ]                              [ x2  x3 ]

is a symmetric matrix. Here the decision variables are the free entries x1, x2, x3 of X, and the canonical form of this LMI reads

    x1 [ -2  2 ] + x2 [  0 -3 ] + x3 [ 0  0 ]  < 0              (9-2)
       [  2  0 ]      [ -3  4 ]      [ 0 -4 ]

Clearly this expression is less intuitive and transparent than (9-1). Moreover, the number of matrices involved in (9-2) grows roughly as n^2/2 if n is the size of the A matrix. Hence, the canonical form is very inefficient from a storage viewpoint since it requires storing o(n^2/2) matrices of size n when the single n-by-n matrix A would be sufficient. Finally, working with the canonical form is also detrimental to the efficiency of the LMI solvers. For these various reasons, the LMI Lab uses a structured representation of LMIs. For instance, the expression A^T X + XA in the Lyapunov inequality (9-1) is explicitly described as a function of the matrix variable X, and only the A matrix is stored.

In general, LMIs assume a block matrix form where each block is an affine combination of the matrix variables. As a fairly typical illustration, consider the following LMI drawn from H∞ theory

    N^T [ A^T X + XA   X C^T   B   ]
        [ C X          -γI     D   ] N  < 0                     (9-3)
        [ B^T          D^T     -γI ]

where A, B, C, D, N are given matrices and the problem variables are X = X^T ∈ R^{n×n} and γ ∈ R. We use the following terminology to describe such LMIs:
• N is called the outer factor, and the block matrix

    L(X, γ) = [ A^T X + XA   X C^T   B   ]
              [ C X          -γI     D   ]
              [ B^T          D^T     -γI ]

is called the inner factor. The outer factor need not be square and is often absent.

• X and γ are the matrix variables of the problem. Note that scalars are considered as 1-by-1 matrices.

• The inner factor L(X, γ) is a symmetric block matrix, its block structure being characterized by the sizes of its diagonal blocks. By symmetry, L(X, γ) is entirely specified by the blocks on or above the diagonal.

• Each block of L(X, γ) is an affine expression in the matrix variables X and γ. This expression can be broken down into a sum of elementary terms. For instance, the block (1,1) contains two elementary terms: A^T X and XA.

• Terms are either constant or variable. Constant terms are fixed matrices like B and D above. Variable terms involve one of the matrix variables, like XA, XC^T, and -γI above.

The LMI (9-3) is specified by the list of terms in each block, as is any LMI regardless of its complexity.

As for the matrix variables X and γ, they are characterized by their dimensions and structure. Common structures include rectangular unstructured, symmetric, skew-symmetric, and scalar. More sophisticated structures are sometimes encountered in control problems. For instance, the matrix variable X could be constrained to the block-diagonal structure

    X = [ x1  0   0  ]
        [ 0   x2  x3 ]
        [ 0   x3  x4 ]
Another possibility is the symmetric Toeplitz structure

    X = [ x1  x2  x3 ]
        [ x2  x1  x2 ]
        [ x3  x2  x1 ]

Summing up, structured LMI problems are specified by declaring the matrix variables and describing the term content of each LMI. This term-oriented description is systematic and accurately reflects the specific structure of the LMI constraints. There is no built-in limitation on the number of LMIs that you can specify or on the number of blocks and terms in any given LMI. LMI systems of arbitrary complexity can therefore be defined in the LMI Lab.

Overview of the LMI Lab

The LMI Lab offers tools to specify, manipulate, and numerically solve LMIs. Its main purpose is to

• Allow for straightforward description of LMIs in their natural block-matrix form
• Provide easy access to the LMI solvers (optimization codes)
• Facilitate result validation and problem modification

The structure-oriented description of a given LMI system is stored as a single vector called the internal representation and generically denoted by LMISYS in the sequel. This vector encodes the structure and dimensions of the LMIs and matrix variables, a description of all LMI terms, and the related numerical data. It must be stressed that you need not attempt to read or understand the content of LMISYS since all manipulations involving this internal representation can be performed in a transparent manner with LMI-Lab tools.

The LMI Lab supports the following functionalities:

Specification of a System of LMIs

LMI systems can be either specified as symbolic matrix expressions with the interactive graphical user interface lmiedit, or assembled incrementally with the two commands lmivar and lmiterm. The first option is more intuitive and transparent while the second option is more powerful and flexible.
Information Retrieval

The interactive function lmiinfo answers qualitative queries about LMI systems created with lmiedit or lmivar and lmiterm. You can also use lmiedit to visualize the LMI system produced by a particular sequence of lmivar/lmiterm commands.

Solvers for LMI Optimization Problems

General-purpose LMI solvers are provided for the three generic LMI problems defined on page 8-5. These solvers can handle very general LMI systems and matrix variable structures. They return a feasible or optimal vector of decision variables x*. The corresponding values X1*, . . ., XK* of the matrix variables are given by the function dec2mat.

Result Validation

The solution x* produced by the LMI solvers is easily validated with the functions evallmi and showlmi. With evallmi, all variable terms in the LMI system are evaluated for the value x* of the decision variables. The left- and right-hand sides of each LMI then become constant matrices that can be displayed with showlmi. This allows a fast check and/or analysis of the results.

Modification of a System of LMIs

An existing system of LMIs can be modified in two ways:

• An LMI can be removed from the system with dellmi.

• A matrix variable X can be deleted using delmvar. It can also be instantiated, that is, set to some given matrix value. This operation is performed by setmvar and makes it possible, for example, to fix some variables and solve the LMI problem with respect to the remaining ones.
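The validation step described above takes only a few commands. A sketch (assuming xfeas is a vector of decision variables returned by one of the solvers for the system LMISYS; see the evallmi and showlmi reference pages for the exact calling syntax):

```matlab
evlmi = evallmi(LMISYS,xfeas);   % evaluate all variable terms at xfeas
[lhs,rhs] = showlmi(evlmi,1);    % constant left/right sides of LMI #1
eig(rhs-lhs)                     % positive eigenvalues mean LMI #1 holds
```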
Specifying a System of LMIs

The LMI Lab can handle any system of LMIs of the form

    N^T L(X1, . . ., XK) N  <  M^T R(X1, . . ., XK) M

where

• X1, . . ., XK are matrix variables with some prescribed structure

• The left and right outer factors N and M are given matrices with identical dimensions

• The left and right inner factors L(.) and R(.) are symmetric block matrices with identical block structures, each block being an affine combination of X1, . . ., XK and their transposes

Important: Throughout this chapter, "left-hand side" refers to what is on the "smaller" side of the inequality, and "right-hand side" to what is on the "larger" side. Accordingly, X is called the right-hand side and 0 the left-hand side of the LMI 0 < X even when this LMI is written as X > 0.

The specification of an LMI system involves two steps:

1 Declare the dimensions and structure of each matrix variable X1, . . ., XK.

2 Describe the term content of each LMI.

This process creates the so-called internal representation of the LMI system. It is stored as a single vector called LMISYS. This computer description of the problem is used by the LMI solvers and in all subsequent manipulations of the LMI system.

There are two ways of generating the internal description of a given LMI system: (1) by a sequence of lmivar/lmiterm commands that build it incrementally, or (2) via the LMI Editor lmiedit where LMIs can be specified directly as symbolic matrix expressions. Though somewhat less flexible and powerful than the command-based description, the LMI Editor is more straightforward to use, hence particularly well-suited for beginners. Thanks to
its coding and decoding capabilities, lmiedit also constitutes a good tutorial introduction to lmivar and lmiterm. Accordingly, beginners may elect to skip the subsections on lmivar and lmiterm and concentrate on the GUI-based specification of LMIs with lmiedit.

A Simple Example

The following tutorial example is used to illustrate the specification of LMI systems with the LMI Lab tools. Run the demo lmidem to see a complete treatment of this example.

Example 8.1. Consider a stable transfer function

    G(s) = C (sI - A)^{-1} B                                    (9-4)

with four inputs, four outputs, and six states, and consider the set of input/output scaling matrices D with block-diagonal structure

    D = [ d1  0   0   0  ]
        [ 0   d1  0   0  ]                                      (9-5)
        [ 0   0   d2  d3 ]
        [ 0   0   d4  d5 ]

The following problem arises in the robust stability analysis of systems with time-varying uncertainty [4]:

Find, if any, a scaling D of structure (9-5) such that the largest gain across frequency of D G(s) D^{-1} is less than one.

This problem has a simple LMI formulation: there exists an adequate scaling D if the following feasibility problem has solutions:

Find two symmetric matrices X ∈ R^{6×6} and S = D^T D ∈ R^{4×4} such that
    [ A^T X + XA + C^T S C   XB ]
    [ B^T X                  -S ]  < 0                          (9-6)

    X > 0                                                       (9-7)

    S > I                                                       (9-8)

The LMI system (9-6)–(9-8) can be described with the LMI Editor as outlined below. Alternatively, its internal description can be generated with lmivar and lmiterm commands as follows:

    setlmis([])
    X = lmivar(1,[6 1])
    S = lmivar(1,[2 0;2 1])

    % 1st LMI
    lmiterm([1 1 1 X],1,A,'s')
    lmiterm([1 1 1 S],C',C)
    lmiterm([1 1 2 X],1,B)
    lmiterm([1 2 2 S],-1,1)

    % 2nd LMI
    lmiterm([-2 1 1 X],1,1)

    % 3rd LMI
    lmiterm([-3 1 1 S],1,1)
    lmiterm([3 1 1 0],1)

    LMISYS = getlmis

Here the lmivar commands define the two matrix variables X and S while the lmiterm commands describe the various terms in each LMI. Upon completion, getlmis returns the internal representation LMISYS of this LMI system. The next three subsections give more details on the syntax and usage of these various commands. More information on how the internal representation is updated by lmivar/lmiterm can also be found in "How It All Works" on page 9-18.
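With the internal representation in hand, the feasibility problem can be solved and the matrix variables recovered. The following sketch uses feasp and dec2mat, which are covered later in this guide; it assumes the session above has just produced LMISYS, X, and S:

```matlab
[tmin,xfeas] = feasp(LMISYS);   % tmin < 0 indicates a feasible system
Xf = dec2mat(LMISYS,xfeas,X);   % value of the matrix variable X
Sf = dec2mat(LMISYS,xfeas,S);   % value of the matrix variable S
```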
Initializing the LMI System

The description of an LMI system should begin with setlmis and end with getlmis. The function setlmis initializes the LMI system description. When specifying a new system, type

    setlmis([])

To add on to an existing LMI system with internal representation LMIS0, type

    setlmis(LMIS0)

Specifying the LMI Variables

The matrix variables are declared one at a time with lmivar and are characterized by their structure. To facilitate the specification of this structure, the LMI Lab offers two predefined structure types along with the means to describe more general structures:

Type 1: Symmetric block diagonal structure. This corresponds to matrix variables of the form

    X = [ D1  0   . . .  0  ]
        [ 0   D2            ]
        [ .          .      ]
        [ 0   . . .      Dr ]

where each diagonal block Dj is square and is either zero, a full symmetric matrix, or a scalar matrix Dj = d × I, d ∈ R. This type encompasses ordinary symmetric matrices (single block) and scalar variables (one block of size one).

Type 2: Rectangular structure. This corresponds to arbitrary rectangular matrices without any particular structure.

Type 3: General structures. This third type is used to describe more sophisticated structures and/or correlations between the matrix variables. The principle is as follows: each entry of X is specified independently as either 0, xn, or -xn, where xn denotes the n-th decision variable in the problem. For details on how to use Type 3,
see "Structured Matrix Variables" on page 9-31 below as well as the lmivar entry in the reference pages.

In "Example 8.1" on page 9-8, the matrix variables X and S are of Type 1. Indeed, both are symmetric and S inherits the block-diagonal structure (9-5) of D. Specifically, S is of the form

    S = [ s1  0   0   0  ]
        [ 0   s1  0   0  ]
        [ 0   0   s2  s3 ]
        [ 0   0   s3  s4 ]

After initializing the description with the command setlmis([]), these two matrix variables are declared by

    lmivar(1,[6 1])      % X
    lmivar(1,[2 0;2 1])  % S

In both commands, the first input specifies the structure type and the second input contains additional information about the structure of the variable:

• For a matrix variable X of Type 1, this second input is a matrix with two columns and as many rows as diagonal blocks in X. The first column lists the sizes of the diagonal blocks and the second column specifies their nature with the following convention:

    1  → full symmetric block
    0  → scalar block
    -1 → zero block

In the second command, [2 0;2 1] means that S has two diagonal blocks, the first one being a 2-by-2 scalar block and the second one a 2-by-2 full block.

• For matrix variables of Type 2, the second input of lmivar is a two-entry vector listing the row and column dimensions of the variable. For instance, a 3-by-5 rectangular matrix variable would be defined by

    lmivar(2,[3 5])
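For reference, here is one declaration of each predefined structure following the conventions above (a sketch; the variable names and dimensions are arbitrary):

```matlab
setlmis([])
X = lmivar(1,[3 1]);   % Type 1: 3-by-3 full symmetric block
Y = lmivar(2,[2 4]);   % Type 2: 2-by-4 rectangular, unstructured
Z = lmivar(1,[5 0]);   % Type 1: 5-by-5 scalar block z*eye(5)
```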
For convenience, lmivar also returns a "tag" that identifies the matrix variable for subsequent reference. For instance, X and S in "Example 8.1" could be defined by

    X = lmivar(1,[6 1])
    S = lmivar(1,[2 0;2 1])

The identifiers X and S are integers corresponding to the ranking of X and S in the list of matrix variables (in the order of declaration). Here their values would be X=1 and S=2. Note that these identifiers still point to X and S after deletion or instantiation of some of the matrix variables. Finally, lmivar can also return the total number of decision variables allocated so far as well as the entrywise dependence of the matrix variable on these decision variables (see the lmivar entry in the reference pages for more details).

Specifying Individual LMIs

After declaring the matrix variables with lmivar, we are left with specifying the term content of each LMI. Recall that LMI terms fall into three categories:

• The constant terms, i.e., fixed matrices like I in the left-hand side of the LMI S > I

• The variable terms, i.e., terms involving a matrix variable. Variable terms are of the form PXQ, where X is a variable and P, Q are given matrices called the left and right coefficients, respectively. For instance, A^T X and C^T S C in (9-6).

• The outer factors

The following rule should be kept in mind when describing the term content of an LMI:

Important: Specify only the terms in the blocks on or above the diagonal. The inner factors being symmetric, this is sufficient to specify the entire LMI. Specifying all blocks results in the duplication of off-diagonal terms, hence in the creation of a different LMI. Alternatively, you can describe the blocks on or below the diagonal.
LMI terms are specified one at a time with lmiterm. In each command, the first argument is a four-entry vector listing the term characteristics as follows:

• The first entry indicates to which LMI the term belongs. The value m means "left-hand side of the m-th LMI," and -m means "right-hand side of the m-th LMI."

• The second and third entries identify the block to which the term belongs. For instance, the vector [1 1 2 1] indicates that the term is attached to the (1,2) block.

• The last entry indicates which matrix variable is involved in the term. This entry is 0 for constant terms, k for terms involving the k-th matrix variable Xk, and -k for terms involving Xk^T (here X and S are the first and second variables in the order of declaration).

The second and third arguments of lmiterm contain the numerical data (values of the constant term, outer factor, or matrix coefficients P and Q for variable terms PXQ or PX^TQ). These arguments must refer to existing MATLAB variables and be real-valued. See "Complex-Valued LMIs" on page 9-33 for the specification of LMIs with complex-valued coefficients.

For instance, the LMI

    [ A^T X + XA + C^T S C   XB ]
    [ B^T X                  -S ]  < 0

is described by

    lmiterm([1 1 1 1],1,A,'s')   % A^T X + XA
    lmiterm([1 1 1 2],C',C)      % C^T S C
    lmiterm([1 1 2 1],1,B)       % XB
    lmiterm([1 2 2 2],-1,1)      % -S

These commands successively declare the terms A^T X + XA, C^T SC, XB, and -S.

Some shorthand is provided to simplify term specification. First, blocks are zero by default. Second, in diagonal blocks the extra argument 's' allows you to specify the conjugated expression AXB + B^T X^T A^T with a single lmiterm command. For instance, the first command above specifies A^T X + XA as the "symmetrization" of XA. Finally, scalar values are allowed as shorthand for scalar matrices, i.e., matrices of the form αI with α scalar. Thus, a constant
term of the form αI can be specified as the "scalar" α. This also applies to the coefficients P and Q of variable terms. The dimensions of scalar matrices are inferred from the context and set to 1 by default. For instance, the third LMI S > I in "Example 8.3" on page 9-31 is described by

    lmiterm([-3 1 1 2],1,1)   % 1*S*1 = S
    lmiterm([3 1 1 0],1)      % 1*I = I

Recall that by convention S is considered as the right-hand side of the inequality, which justifies the -3 in the first command.

Finally, to improve readability it is often convenient to attach an identifier (tag) to each LMI and matrix variable. The variable identifiers are returned by lmivar and the LMI identifiers are set by the function newlmi. These identifiers can be used in lmiterm commands to refer to a given LMI or matrix variable. For the LMI system of "Example 8.1", this would look like:

    setlmis([])
    X = lmivar(1,[6 1])
    S = lmivar(1,[2 0;2 1])

    BRL = newlmi
    lmiterm([BRL 1 1 X],1,A,'s')
    lmiterm([BRL 1 1 S],C',C)
    lmiterm([BRL 1 2 X],1,B)
    lmiterm([BRL 2 2 S],-1,1)

    Xpos = newlmi
    lmiterm([-Xpos 1 1 X],1,1)

    Slmi = newlmi
    lmiterm([-Slmi 1 1 S],1,1)
    lmiterm([Slmi 1 1 0],1)

When the LMI system is completely specified, type

    LMISYS = getlmis

This returns the internal representation LMISYS of this LMI system. This MATLAB description of the problem can be forwarded to other LMI-Lab functions for subsequent processing. The command getlmis must be used only once and after declaring all matrix variables and LMI terms.
Here the identifiers X and S point to the variables X and S while the tags BRL, Xpos, and Slmi point to the first, second, and third LMI, respectively. Note that -Xpos refers to the right-hand side of the second LMI. Similarly, -X would indicate transposition of the variable X.

Specifying LMIs With the LMI Editor

The LMI Editor lmiedit is a graphical user interface (GUI) to specify LMI systems in a straightforward symbolic manner. Typing lmiedit calls up a window with several editable text areas and various buttons.

[Figure: the lmiedit worksheet, annotated as follows]

• Declare each matrix variable (name and structure) here.
• Specify the LMIs as MATLAB expressions here.
• Generate the internal representation of the LMI system by clicking Create.
• Use the view commands buttons to visualize the sequence of lmivar/lmiterm commands needed to describe this LMI system.
• Read a sequence of lmivar/lmiterm commands from a file by clicking Read.
• Save/Load the symbolic description of the LMI system as a MATLAB string.
In more detail, to specify your LMI system:

1 Declare each matrix variable (name and structure) in the upper half of the worksheet. The structure is characterized by its type (S for symmetric block diagonal, R for unstructured, and G for other structures) and by an additional "structure" matrix. This matrix contains specific information about the structure and corresponds to the second argument of lmivar (see "Specifying the LMI Variables" on page 9-10 for details). Please use one line per matrix variable in the text editing areas.

2 Specify the LMIs as MATLAB expressions in the lower half of the worksheet. For instance, the LMI

    [ A^T X + XA   XB ]
    [ B^T X        -I ]  < 0

is entered by typing

    [a'*x+x*a x*b; b'*x -1] < 0

if x is the name given to the matrix variable X in the upper half of the worksheet. The left- and right-hand sides of the LMIs should be valid MATLAB expressions.

Once the LMI system is fully specified, the following tasks can be performed by clicking the corresponding button:

• Visualize the sequence of lmivar/lmiterm commands needed to describe this LMI system (view commands button). Conversely, the LMI system defined by a particular sequence of lmivar/lmiterm commands can be displayed as a MATLAB expression by clicking on the describe... buttons. Beginners can use this facility as a tutorial introduction to the lmivar and lmiterm commands.

• Save the symbolic description of the LMI system as a MATLAB string (save button). This description can be reloaded later on by clicking the load button.

• Read a sequence of lmivar/lmiterm commands from a file (read button). You can then click on describe the matrix variables or describe the LMIs to visualize the symbolic expression of the LMI system specified by these
commands. This feature is useful for code validation and debugging.

• Write in a file the sequence of lmivar/lmiterm commands needed to describe a particular LMI system (write button). This is helpful to develop code and prototype MATLAB functions based on the LMI Lab. The file should describe a single LMI system but may otherwise contain any sequence of MATLAB commands.

• Generate the internal representation of the LMI system by clicking create. The result is written in a MATLAB variable named after the LMI system (if the name of the LMI system is set to mylmi, the internal representation is written in the MATLAB variable mylmi). Note that all LMI-related data should be defined in the MATLAB workspace at this stage. The internal representation can be passed directly to the LMI solvers or any other LMI Lab function.

Keyboard Shortcuts

As with lmiterm, you can use various shortcuts when entering LMI expressions at the keyboard. For instance, zero blocks can be entered simply as 0 and need not be dimensioned. Similarly, the identity matrix can be entered as 1 without dimensioning. Finally, upper diagonal LMI blocks need not be fully specified. Rather, you can just type (*) in place of each such block.

Limitations

Though fairly general, lmiedit is not as flexible as lmiterm and the following limitations should be kept in mind:

• Parentheses cannot be used around matrix variables. For instance, the expression (a*x+b)'*c + c'*(a*x+b) is invalid when x is a variable name. By contrast, (a+b)'*x + x'*(a+b) is perfectly valid.

• Loops and if statements are ignored.

• When turning lmiterm commands into a symbolic description of the LMI system, an error is issued if the first argument of lmiterm cannot be
evaluated. Use the LMI and variable identifiers supplied by newlmi and lmivar to avoid such difficulties.

Figure 8-1 shows how to specify the feasibility problem of "Example 8.1" on page 9-8 with lmiedit.

How It All Works

Users familiar with MATLAB may wonder how lmivar and lmiterm physically update the internal representation LMISYS since LMISYS is not an argument to these functions. In fact, all updating is performed through global variables for maximum speed. These global variables are initialized by setlmis, cleared by getlmis, and are not visible in the workspace. Even though this artifact is transparent from the user's viewpoint, be sure to

• Invoke getlmis only once and after completely specifying the LMI system

• Refrain from using the command clear global before the LMI system description is ended with getlmis
Querying the LMI System Description

Recall that the full description of an LMI system is stored as a single vector called the internal representation. The user should not attempt to read or retrieve information directly from this vector. The Robust Control Toolbox provides three functions called lmiinfo, lminbr, and matnbr to extract and display all relevant information in a user-readable format.

lmiinfo

lmiinfo is an interactive facility to retrieve qualitative information about LMI systems. This includes the number of LMIs, the number of matrix variables and their structure, the term content of each LMI block, etc. To invoke lmiinfo, enter

   lmiinfo(LMISYS)

where LMISYS is the internal representation of the LMI system produced by getlmis.

lminbr and matnbr

These two functions return the number of LMIs and the number of matrix variables in the system, respectively. To get the number of matrix variables, for instance, enter

   matnbr(LMISYS)
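As a quick illustration, the query functions can be chained on any internal representation. This is a minimal sketch assuming LMISYS was produced by getlmis (decnbr, covered later in this chapter, is included for completeness):

```matlab
% Query a previously built LMI system (LMISYS from getlmis)
nlmi = lminbr(LMISYS)    % number of LMIs in the system
nmat = matnbr(LMISYS)    % number of matrix variables
ndec = decnbr(LMISYS)    % number of scalar decision variables
lmiinfo(LMISYS)          % interactive, qualitative description
```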
. i. mincx. . of the free entries of the matrix variables X1. .e. . 2].. . These solvers are CMEX implementations of the polynomialtime Projective Algorithm Projective Algorithm of Nesterov and Nemirovski [3. and gevp take as input the internal representation LMISYS of an LMI system and return a feasible or optimizing value x* of the decision variables. it is necessary to distinguish between the standard LMI constraints C(x) < D(x) and the linearfractional LMIs A(x) < λB(x) 920 . . . .9 The LMI Lab LMI Solvers LMI solvers are provided for the following three generic optimization problems (here x denotes the vector of decision variables. The corresponding values of the matrix variables X1. . For generalized eigenvalue minimization problems. Note that A(x) < B(x) above is a shorthand notation for general structured LMI systems with decision variables x = (x1. xN). The corresponding solver is called gevp. XK with prescribed structure) that satisfies the LMI system A(x) < B(x) The corresponding solver is called feasp. XK are derived from x* with the function dec2mat. . . The three LMI solvers feasp. . XK): • Feasibility problem Find x ∈ RN (or equivalently matrices X1. . . . . • Generalized eigenvalue minimization problem Minimize λ over x ∈ RN subject to C(x) < D(x) 0 < B(x) A(x) < λB(x). • Minimization of a linear objective under LMI constraints Minimize cTx over x ∈ RN subject to A(x) < B(x) The corresponding solver is called mincx.
attached to the minimization of the generalized eigenvalue λ. When using gevp, you should follow these three rules to ensure proper specification of the problem:

• Specify the LMIs involving λ as A(x) < B(x) (without the λ)

• Specify them last in the LMI system. gevp systematically assumes that the last L LMIs are linear-fractional if L is the number of LMIs involving λ

• Add the constraint 0 < B(x) or any other constraint that enforces it. This positivity constraint is required for well-posedness of the problem and is not automatically added by gevp (see the reference pages for details).

An initial guess xinit for x can be supplied to mincx or gevp. Use mat2dec to derive xinit from given values of the matrix variables X1, . . ., XK. Finally, various options are available to control the optimization process and the solver behavior. These options are described in detail in the reference pages.

The following example illustrates the use of the mincx solver.

Example 8.2

Consider the optimization problem

   Minimize Trace(X) subject to A'X + XA + XBB'X + Q < 0        (9-9)

with data

   A = [ -1  -2   1        B = [ 1        Q = [  1   -1    0
          3   2   1              0              -1   -3  -12
          1  -2  -1 ]            1 ]             0  -12  -36 ]

It can be shown that the minimizer X* is simply the stabilizing solution of the algebraic Riccati equation

   A'X + XA + XBB'X + Q = 0

This solution can be computed directly with the Riccati solver care and compared to the minimizer returned by mincx.

From an LMI optimization standpoint, problem (9-9) is equivalent to the following linear objective minimization problem:
   Minimize Trace(X) subject to  [ A'X + XA + Q    XB ]
                                 [ B'X             -I ]  <  0        (9-10)

Since Trace(X) is a linear function of the entries of X, this problem falls within the scope of the mincx solver and can be numerically solved as follows:

1 Define the LMI constraint (9-10) by the sequence of commands

      setlmis([])
      X = lmivar(1,[3 1])        % variable X, full symmetric

      lmiterm([1 1 1 X],1,a,'s')
      lmiterm([1 1 1 0],q)
      lmiterm([1 2 2 0],-1)
      lmiterm([1 2 1 X],b',1)
      LMIs = getlmis

2 Write the objective Trace(X) as cTx where x is the vector of free entries of X. Since c should select the diagonal entries of X, it is obtained as the decision vector corresponding to X = I, that is,

      c = mat2dec(LMIs,eye(3))

   Note that the function defcx provides a more systematic way of specifying such objectives (see "Specifying cTx Objectives for mincx" on page 9-36 for details).

3 Call mincx to compute the minimizer xopt and the global minimum copt = c'*xopt of the objective:

      options = [1e-5,0,0,0,0]
      [copt,xopt] = mincx(LMIs,c,options)

   Here 1e-5 specifies the desired relative accuracy on copt.

   The following trace of the iterative optimization performed by mincx appears on the screen:

      Solver for linear objective minimization under LMI constraints
      Iterations   :    Best objective value so far

           1
           2                  -8.723491
           .                      .
           .                      .
           .                      .
          16                 -18.716695
      ***                 new lower bound:    -18.716873
               Result:  feasible solution of required accuracy
                best objective value:           -18.716695
                guaranteed relative accuracy:    9.50e-06
                f-radius saturation:  0.000% of R = 1.00e+09

   The iteration number and the best value of cTx at the current iteration appear in the left and right columns, respectively. Note that no value is displayed at the first iteration, which means that a feasible x satisfying the constraint (9-10) was found only at the second iteration. Lower bounds on the global minimum of cTx are sometimes detected as the optimization progresses. These lower bounds are reported by the message

      *** new lower bound: xxx

   Upon termination, mincx reports that the global minimum for the objective Trace(X) is -18.716695 with relative accuracy of at least 9.5×10^-6. This is the value copt returned by mincx.

4 mincx also returns the optimizing vector of decision variables xopt. The corresponding optimal value of the matrix variable X is given by

      Xopt = dec2mat(LMIs,xopt,X)

   which returns

      Xopt = [ -6.3542   -5.8895    2.2046
               -5.8895   -6.2855    2.2201
                2.2046    2.2201   -6.0771 ]

   This result can be compared with the stabilizing Riccati solution computed by care:

      Xst = care(a,b,q,-1)
      norm(Xopt-Xst)
      ans =
          6.5390e-05
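The same machinery carries over to pure feasibility problems solved with feasp. The following sketch is not one of the manual's examples; the matrix a is a hypothetical stable matrix chosen only for illustration:

```matlab
% Find P = P' > I such that a'P + Pa < 0 (Lyapunov feasibility sketch)
a = [-1 2; 0 -3];                % hypothetical stable matrix
setlmis([])
P = lmivar(1,[2 1])              % 2-by-2 symmetric variable P
lmiterm([1 1 1 P],1,a,'s')       % LMI #1: a'P + Pa < 0
lmiterm([2 1 1 0],1)             % LMI #2, left side:  I
lmiterm([-2 1 1 P],1,1)          % LMI #2, right side: P  (I < P)
lmis = getlmis
[tmin,xfeas] = feasp(lmis)       % tmin < 0 indicates strict feasibility
Pfeas = dec2mat(lmis,xfeas,P)    % recover the Lyapunov matrix
```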
From Decision to Matrix Variables and Vice Versa

While LMIs are specified in terms of their matrix variables X1, . . ., XK, the LMI solvers optimize the vector x of free scalar entries of these matrices, called the decision variables. The two functions mat2dec and dec2mat perform the conversion between these two descriptions of the problem variables.

Consider an LMI system with three matrix variables X1, X2, X3. Given particular values X1, X2, X3 of these variables, the corresponding value xdec of the vector of decision variables is returned by mat2dec:

   xdec = mat2dec(LMISYS,X1,X2,X3)

An error is issued if the number of arguments following LMISYS differs from the number of matrix variables in the problem (see matnbr).

Conversely, given a value xdec of the vector of decision variables, the corresponding value of the kth matrix variable is given by dec2mat. For instance, the value X2 of the second matrix variable is extracted from xdec by

   X2 = dec2mat(LMISYS,xdec,2)

The last argument indicates that the second matrix variable is requested. It could be set to the matrix variable identifier returned by lmivar.

The total numbers of matrix variables and decision variables are returned by matnbr and decnbr, respectively. In addition, the function decinfo provides precise information about the mapping between decision variables and matrix variable entries (see the reference pages).
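As a sketch of the round trip between the two descriptions (assuming LMISYS has exactly two matrix variables, and that X1v and X2v are numeric values with the declared structures; these variable names are illustrative only):

```matlab
% Convert matrix variable values to a decision vector and back
xdec   = mat2dec(LMISYS,X1v,X2v)   % pack matrix values into x
X2back = dec2mat(LMISYS,xdec,2)    % recover the second matrix variable
ndec   = decnbr(LMISYS)            % length of the decision vector xdec
```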
Validating Results

The LMI Lab offers two functions to analyze and validate the results of an LMI optimization. The function evallmi evaluates all variable terms in an LMI system for a given value of the vector of decision variables, for instance, the feasible or optimal vector returned by the LMI solvers. Once this evaluation is performed, the left- and right-hand sides of a particular LMI are returned by showlmi.

In the LMI problem considered in "Example 8.2" on page 9-21, you can verify that the minimizer xopt returned by mincx satisfies the LMI constraint (9-10) as follows:

   evlmi = evallmi(LMIs,xopt)
   [lhs,rhs] = showlmi(evlmi,1)

The first command evaluates the system for the value xopt of the decision variables, and the second command returns the left- and right-hand sides of the first (and only) LMI. The negative definiteness of this LMI is checked by

   eig(lhs-rhs)

   ans =
      -2.8917e-07
      -4.9333e-05
      -1.0387e-04
      -3.6680e+01
Modifying a System of LMIs

Once specified, a system of LMIs can be modified in several ways with the functions dellmi, delmvar, and setmvar.

Deleting an LMI

The first possibility is to remove an entire LMI from the system with dellmi. For instance, suppose that the LMI system of "Example 8.1" on page 9-8 is described in LMISYS and that we want to remove the positivity constraint on X. This is done by

   NEWSYS = dellmi(LMISYS,2)

where the second argument specifies deletion of the second LMI. The resulting system of two LMIs is returned in NEWSYS.

The LMI identifiers (initial ranking of the LMI in the LMI system) are not altered by deletions. As a result, the last LMI S > I remains known as the third LMI even though it now ranks second in the modified system. To avoid confusion, it is safer to refer to LMIs via the identifiers returned by newlmi. If BRL, Xpos, and Slmi are the identifiers attached to the three LMIs (8-6)–(8-8), Slmi keeps pointing to S > I even after deleting the second LMI by

   NEWSYS = dellmi(LMISYS,Xpos)

Deleting a Matrix Variable

Another way of modifying an LMI system is to delete a matrix variable, that is, to remove all variable terms involving this matrix variable. This operation is performed by delmvar. For instance, consider the LMI

   A'X + XA + BW + W'B' + I < 0

with variables X = X' ∈ R4×4 and W ∈ R2×4. This LMI is defined by

   setlmis([])
   X = lmivar(1,[4 1])    % X
   W = lmivar(2,[2 4])    % W
   lmiterm([1 1 1 X],1,A,'s')
   lmiterm([1 1 1 W],B,1,'s')
   lmiterm([1 1 1 0],1)
   LMISYS = getlmis

To delete the variable W, type the command

   NEWSYS = delmvar(LMISYS,W)

The resulting NEWSYS now describes the Lyapunov inequality

   A'X + XA + I < 0

Note that delmvar automatically removes all LMIs that depend only on the deleted matrix variable. The matrix variable identifiers are not affected by deletions and continue to point to the same matrix variable. For subsequent manipulations, it is therefore advisable to refer to the remaining variables through their identifiers. Finally, note that deleting a matrix variable is equivalent to setting it to the zero matrix of the same dimensions with setmvar.

Instantiating a Matrix Variable

The function setmvar is used to set a matrix variable to some given value. As a result, this variable is removed from the problem and all terms involving it become constant terms. This is useful, for instance, to fix some variables and optimize with respect to the remaining ones.

Consider again "Example 8.1" on page 9-8 and suppose we want to know if the peak gain of G itself is less than one, that is, if

   ||G||∞ < 1

This amounts to setting the scaling matrix D (or equivalently, S = D'D) to a multiple of the identity matrix. Keeping in mind the constraint S > I, a legitimate choice is S = 2I. To set S to this value, enter

   NEWSYS = setmvar(LMISYS,S,2)
Here the value 2 is shorthand for 2I. The second argument is the variable identifier S, and the third argument is the value to which S should be set. The resulting system NEWSYS reads

   [ A'X + XA + 2C'C    XB ]
   [ B'X               -2I ]  <  0

   X > 0

   2I > I

Note that the last LMI is now free of variables and trivially satisfied. It could, therefore, be deleted by

   NEWSYS = dellmi(NEWSYS,3)

or

   NEWSYS = dellmi(NEWSYS,Slmi)

if Slmi is the identifier returned by newlmi.
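The editing functions of this section compose naturally. A hypothetical sketch, assuming the identifiers Xpos, S, and Slmi from "Example 8.1" are still in the workspace:

```matlab
% Chain of edits on the three-LMI system of Example 8.1 (sketch)
NEWSYS = dellmi(LMISYS,Xpos)   % drop the positivity constraint on X
NEWSYS = setmvar(NEWSYS,S,2)   % freeze the scaling variable S at 2I
NEWSYS = dellmi(NEWSYS,Slmi)   % the LMI 2I > I is now constant; drop it
```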
Advanced Topics

This last section gives a few hints for making the most out of the LMI Lab. It is directed toward users who are comfortable with the basics described above.

Structured Matrix Variables

Fairly complex matrix variable structures and interdependencies can be specified with lmivar. Recall that the symmetric block-diagonal or rectangular structures are covered by Types 1 and 2 of lmivar provided that the matrix variables are independent. To describe more complex structures or correlations between variables, you must use Type 3 and specify each entry of the matrix variables directly in terms of the free scalar variables of the problem (the so-called decision variables). With Type 3, each entry is specified as either 0 or ±xn where xn is the nth decision variable.

The following examples illustrate how to specify nontrivial matrix variable structures with lmivar. We first consider the case of uncorrelated matrix variables.

Example 8.3

Suppose that the problem variables include a 3-by-3 symmetric matrix X and a 3-by-3 symmetric Toeplitz matrix

   Y = [ y1  y2  y3
         y2  y1  y2
         y3  y2  y1 ]

The variable Y has three independent entries, hence involves three decision variables. Since Y is independent of X, these decision variables should be labeled n + 1, n + 2, n + 3 where n is the number of decision variables involved in X. To retrieve this number, define the variable X (Type 1) by

   setlmis([])
   [X,n] = lmivar(1,[3 1])

The second output argument n gives the total number of decision variables used so far (here n = 6). Given this number, Y can be defined by

   Y = lmivar(3,n+[1 2 3;2 1 2;3 2 1])
Example 8. y.Y) ans = 7 8 9 8 7 8 9 8 7 The next example is a problem with interdependent matrix variables.4 Consider three matrix variables X.1 0]) The third output of lmivar gives the entrywise dependence of X and Y on the decision variables (x1.[1 0.sX] = lmivar(1. we can visualize the decision variable distributions in X and Y with decinfo: lmis = getlmis decinfo(lmis. x4) := (x. first define the two independent variables X and Y (both of Type 1) as follows: setlmis([]) [X.toeplitz(n+[1 2 3])) where toeplitz is a standard MATLAB function. For verification purposes. Y. x3. Z with structure ⎛ ⎞ X = ⎜ x 0 ⎟. ⎝ 0 y⎠ ⎛ ⎞ Y = ⎜ z 0 ⎟.[1 0. t): sX = 1 0 0 2 932 . t are independent scalar variables.n.sY] = lmivar(1. y.9 The LMI Lab or equivalently by Y = lmivar(3. z. x2. z.1 0]) [Y.X) ans = 1 2 4 2 3 5 4 5 6 decinfo(lmis. ⎝ 0 t ⎠ ⎛ ⎞ Z = ⎜ 0 –x ⎟ ⎝ –t 0 ⎠ where x. To specify such a triple.n.
   sY =
        3     0
        0     4

Using Type 3 of lmivar, you can now specify the structure of Z in terms of the decision variables x1 = x and x4 = t:

   [Z,n,sZ] = lmivar(3,[0 -sX(1,1);-sY(2,2) 0])

Since sX(1,1) refers to x1 while sY(2,2) refers to x4, this defines the variable

   Z = [  0  -x1     = [  0  -x
         -x4   0 ]       -t   0 ]

as confirmed by checking its entrywise dependence on the decision variables:

   sZ =
        0    -1
       -4     0

Complex-Valued LMIs

The LMI solvers are written for real-valued matrices and cannot directly handle LMI problems involving complex-valued matrices. However, complex-valued LMIs can be turned into real-valued LMIs by observing that a complex Hermitian matrix L(x) satisfies L(x) < 0 if and only if

   [  Re(L(x))   Im(L(x)) ]
   [ -Im(L(x))   Re(L(x)) ]  <  0

This suggests the following systematic procedure for turning complex LMIs into real ones:

• Decompose every complex matrix variable X as X = X1 + jX2 where X1 and X2 are real
• Decompose every complex matrix coefficient A as A = A1 + jA2 where A1 and A2 are real

• Carry out all complex matrix products. This yields affine expressions in X1, X2 for the real and imaginary parts of each LMI, and an equivalent real-valued LMI is readily derived from the above observation.

For LMIs without outer factor, a streamlined version of this procedure consists of replacing any occurrence of the matrix variable X = X1 + jX2 by

   [  X1  X2 ]
   [ -X2  X1 ]

and any fixed matrix A = A1 + jA2, including real ones, by

   [  A1  A2 ]
   [ -A2  A1 ]

For instance, the real counterpart of the LMI system

   M^H X M < X,    X = X^H > I        (9-11)

reads, given the decompositions M = M1 + jM2 and X = X1 + jX2 with Mj, Xj real:

   [  M1  M2 ]T [  X1  X2 ] [  M1  M2 ]     [  X1  X2 ]
   [ -M2  M1 ]  [ -X2  X1 ] [ -M2  M1 ]  <  [ -X2  X1 ]

   [  X1  X2 ]
   [ -X2  X1 ]  >  I

Note that X = X^H in turn requires that X1 = X1' and X2 + X2' = 0. Consequently, X1 and X2 should be declared as symmetric and skew-symmetric matrix variables, respectively.

Assuming, for instance, that M ∈ C5×5, the LMI system (9-11) would be specified as follows:
   M1=real(M), M2=imag(M)
   bigM=[M1 M2;-M2 M1]

   setlmis([])

   % declare bigX=[X1 X2;-X2 X1] with X1=X1' and X2+X2'=0:

   [X1,n1,sX1] = lmivar(1,[5 1])
   [X2,n2,sX2] = lmivar(3,skewdec(5,n1))
   bigX = lmivar(3,[sX1 sX2;-sX2 sX1])

   % describe the real counterpart of (9-11):

   lmiterm([1 1 1 0],1)
   lmiterm([-1 1 1 bigX],1,1)
   lmiterm([2 1 1 bigX],bigM',bigM)
   lmiterm([-2 1 1 bigX],1,1)

   lmis = getlmis

Note the three-step declaration of the structured matrix variable bigX = [X1 X2;-X2 X1]:

1 Specify X1 as a (real) symmetric matrix variable and save its structure description sX1 as well as the number n1 of decision variables used in X1.

2 Specify X2 as a skew-symmetric matrix variable using Type 3 of lmivar and the utility skewdec. The command skewdec(5,n1) creates a 5-by-5 skew-symmetric structure depending on the decision variables n1 + 1, n1 + 2, . . .

3 Define the structure of bigX in terms of the structures sX1 and sX2 of X1 and X2.

See the previous subsection for more details on such structure manipulations.
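The Hermitian-to-real embedding underlying this section can be checked numerically. In this sketch, L is a hypothetical Hermitian matrix (not from the manual); each eigenvalue of L reappears twice in the real embedding:

```matlab
% Numerical check of the Hermitian-to-real embedding
L    = [ -2    1+1j
         1-1j  -3   ];               % Hermitian test matrix
bigL = [ real(L)  imag(L)
        -imag(L)  real(L) ];         % real counterpart
eig(L)     % eigenvalues of the complex Hermitian matrix: -4 and -1
eig(bigL)  % same eigenvalues, each with multiplicity two
```

In particular, bigL < 0 exactly when L < 0, which is why the real-valued solvers can be applied to the embedded problem.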
Specifying cTx Objectives for mincx

The LMI solver mincx minimizes linear objectives of the form cTx where x is the vector of decision variables. In most control problems, however, such objectives are expressed in terms of the matrix variables rather than of x. Examples include Trace(X) where X is a symmetric matrix variable, or u'Xu where u is a given vector.

The function defcx facilitates the derivation of the c vector when the objective is an affine function of the matrix variables. For the sake of illustration, consider the linear objective

   Trace(X) + x0'Px0

where X and P are two symmetric variables and x0 is a given vector. If lmisys is the internal representation of the LMI system and if x0, X, P have been declared by

   x0 = [1;1]
   setlmis([])
   X = lmivar(1,[3 0])
   P = lmivar(1,[2 1])
   :
   lmisys = getlmis

the c vector such that cTx = Trace(X) + x0'Px0 can be computed as follows:

   n = decnbr(lmisys)
   c = zeros(n,1)

   for j=1:n,
       [Xj,Pj] = defcx(lmisys,j,X,P)
       c(j) = trace(Xj) + x0'*Pj*x0
   end

The first command returns the number of decision variables in the problem and the second command dimensions c accordingly. Then the for loop performs the following operations:

1 Evaluate the matrix variables X and P when all entries of the decision vector x are set to zero except xj := 1. This is done with the function defcx. Apart from lmisys and j, the inputs of defcx are the identifiers X and
P of the variables involved in the objective, and the outputs Xj and Pj are the corresponding values.

2 Evaluate the objective expression for X := Xj and P := Pj. This yields the jth entry of c by definition.

In our example the result is

   c =
        3
        1
        2
        1

Other objectives are handled similarly by editing the following generic skeleton:

   n = decnbr( LMI system )
   c = zeros(n,1)
   for j=1:n,
       [ matrix values ] = defcx( LMI system,j,matrix identifiers)
       c(j) = objective(matrix values)
   end

Feasibility Radius

When solving LMI problems with feasp, mincx, or gevp, it is possible to constrain the solution x to lie in the ball

   x'x < R^2

where R > 0 is called the feasibility radius. This specifies a maximum (Euclidean norm) magnitude for x and avoids getting solutions of very large norm. This may also speed up computations and improve numerical stability. Finally, the feasibility radius bound regularizes problems with redundant variable sets. In rough terms, the set of scalar variables is redundant when an equivalent problem could be formulated with a smaller number of variables.

The feasibility radius R is set by the third entry of the options vector of the LMI solvers. Its default value is R = 10^9. Setting R to a negative value means "no rigid bound," in which case the feasibility radius is increased during the
optimization if necessary. This "flexible bound" mode may yield solutions of large norms.

Well-Posedness Issues

The LMI solvers used in the LMI Lab are based on interior-point optimization techniques. To compute feasible solutions, such techniques require that the system of LMI constraints be strictly feasible, that is, that the feasible set has a nonempty interior. As a result, these solvers may encounter difficulty when the LMI constraints are feasible but not strictly feasible, that is, when the LMI

   L(x) ≤ 0        (9-12)

has solutions while L(x) < 0 has no solution.

For feasibility problems, this difficulty is automatically circumvented by feasp, which reformulates the problem (9-12) as

   Minimize t subject to L(x) < t × I        (9-13)

In this modified problem, the LMI constraint is always strictly feasible in x, t, and the original LMI (9-12) is feasible if and only if the global minimum tmin of (9-13) satisfies

   tmin ≤ 0

For feasible but not strictly feasible problems, however, the computational effort is typically higher as feasp strives to approach the global optimum tmin = 0 to a high accuracy.

For the LMI problems addressed by mincx and gevp, nonstrict feasibility generally causes the solvers to fail and to return an "infeasibility" diagnosis. Although there is no universal remedy for this difficulty, it is sometimes possible to eliminate underlying algebraic constraints to obtain a strictly feasible problem with fewer variables.
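In practice, both the feasibility radius and the tmin diagnosis are exercised through a single feasp call. A sketch, assuming lmis is any LMI system returned by getlmis:

```matlab
% Bound the feasibility radius via the third options entry
options    = zeros(1,5);
options(3) = 1e3;                  % feasibility radius R, i.e. x'*x < 1e6
[tmin,xfeas] = feasp(lmis,options)
% tmin <= 0 indicates that the (nonstrict) LMI system is feasible;
% options(3) = -1 would request the flexible-bound mode instead.
```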
Another issue has to do with homogeneous feasibility problems such as

   A'P + PA < 0,    P > 0

While this problem is technically well-posed, the LMI optimization is likely to produce solutions close to zero (the trivial solution of the nonstrict problem). To compute a nontrivial Lyapunov matrix and easily differentiate between feasibility and infeasibility, replace the constraint P > 0 by P > αI with α > 0. Note that this does not alter the problem due to its homogeneous nature.

Semi-Definite B(x) in gevp Problems

Consider the generalized eigenvalue minimization problem

   Minimize λ subject to A(x) < λB(x),  B(x) > 0,  C(x) < 0        (9-14)

Technically, the positivity of B(x) for some x ∈ Rn is required for the well-posedness of the problem and the applicability of polynomial-time interior-point methods. Hence problems where

   B(x) = [ B1(x)  0
            0      0 ],   with B1(x) > 0 strictly feasible

cannot be directly solved with gevp. A simple remedy consists of replacing the constraints

   A(x) < λB(x),    B(x) > 0

by

   A(x) < [ Y  0
            0  0 ],    Y < λB1(x),    B1(x) > 0

where Y is an additional symmetric variable of proper dimensions. The resulting problem is equivalent to (9-14) and can be solved directly with gevp.

Efficiency and Complexity Issues

As explained in the beginning of the chapter, the term-oriented description of LMIs used in the LMI Lab typically leads to higher efficiency than the canonical representation
   A0 + x1A1 + . . . + xNAN < 0        (9-15)

This is no longer true, however, when the number of variable terms is nearly equal to or greater than the number N of decision variables in the problem. If your LMI problem has few free scalar variables but many terms in each LMI, it is therefore preferable to rewrite it as (9-15) and to specify it in this form. Each scalar variable xj is then declared independently and the LMI terms are of the form xjAj.

If M denotes the total row size of the LMI system and N the total number of scalar decision variables, the flop count per iteration for the feasp and mincx solvers is proportional to

• N^3 when the least-squares problem is solved via Cholesky factorization of the Hessian matrix (default) [2]

• M-by-N^2 when numerical instabilities warrant the use of QR factorization instead

While the theory guarantees a worst-case iteration count proportional to M, the number of iterations actually performed grows slowly with M in most problems. Finally, while feasp and mincx are comparable in complexity, gevp typically demands more computational effort. Make sure that your LMI problem cannot be solved with mincx before using gevp.

Solving M + P'XQ + Q'X'P < 0

In many output-feedback synthesis problems, the design can be performed in two steps:

1 Compute a closed-loop Lyapunov function via LMI optimization.

2 Given this Lyapunov function, derive the controller state-space matrices by solving an LMI of the form

      M + P'XQ + Q'X'P < 0        (9-16)

   where M, P, Q are given matrices and X is an unstructured m-by-n matrix variable.
941 . Typically.Advanced Topics It turns out that a particular solution Xc of (816) can be computed via simple linear algebra manipulations [1]. The function basiclmi returns the “explicit” solution Xc: Xc = basiclmi(M.Q. Xc corresponds to the center of the ellipsoid of matrices defined by (816). basiclmi also offers the option of computing an approximate leastnorm solution of (816).Q) Since this central solution sometimes has large norm.P.'Xmin') and involves LMI optimization to minimize X. This is done by X = basiclmi(M.P.
References

[1] Gahinet, P., and P. Apkarian, "A Linear Matrix Inequality Approach to H∞ Control," Int. J. Robust and Nonlinear Contr., 4 (1994), pp. 421–448.

[2] Nemirovski, A., and P. Gahinet, "The Projective Method for Solving Linear Matrix Inequalities," Proc. Amer. Contr. Conf., 1994, pp. 840–844.

[3] Nesterov, Yu., and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM Books, Philadelphia, 1994.

[4] Shamma, J.S., "Robustness Analysis for Time-Varying Systems," Proc. Conf. Dec. Contr., 1992, pp. 3163–3168.
10 Function Reference

This chapter gives a detailed description of all Robust Control Toolbox functions. Functions are grouped by application in tables at the beginning of this chapter; then they are listed alphabetically. In addition, information on each function is available through the online Help facility.

Functions — Categorical List (p. 10-2)    Lists the Robust Control Toolbox functions according to their purpose.

Functions — Alphabetical List (p. 10-10)  Lists the Robust Control Toolbox functions alphabetically.
Functions — Categorical List

Uncertain Elements

ucomplex     Uncertain complex parameter
ucomplexm    Uncertain complex matrix
udyn         Uncertain dynamics
ultidyn      Uncertain linear time-invariant object
ureal        Uncertain real parameter

Uncertain Matrices and Systems

umat         Uncertain matrix
uss          Uncertain state-space model
ufrd         Uncertain frequency response data model
randatom     Create a random uncertain element
randumat     Create a random uncertain matrix
randuss      Create a random uncertain state-space model

Manipulation of Uncertain Models

isuncertain  True for uncertain systems
simplify     Simplify representation of an uncertain object
usample      Generate random samples of an uncertain object
usubs        Substitute values for uncertain elements
gridureal    Grid real parameters over their range
lftdata      Extract LFT data from uncertain objects
uss/ssbal    Diagonal state/uncertainty scaling for an uncertain system
Interconnection of Uncertain Models

imp2exp      Converts implicit LFT relationship to explicit I/O
sysic        Form interconnections of LTI and uncertain objects
iconnect     Creates an interconnection object (alternative to sysic)
icsignal     Creates an ICsignal object for equations used with iconnect
imp2exp      Creates an identically-zero ICsignal used with iconnect

Model Order Reduction

reduce       Main interface to model approximation algorithms
balancmr     Balanced truncation model reduction
bstmr        Balanced stochastic truncation model reduction
hankelmr     Optimal Hankel norm model approximation
modreal      Modal form realization of state-space models
ncfmr        Balanced normalized coprime factor model reduction
schurmr      Schur balanced truncation model reduction
slowfast     State-space slow-fast decomposition
stabproj     State-space stable/antistable decomposition
imp2ss       Impulse response to state-space approximation
Robustness and Worst-Case Analysis

cpmargin     Coprime stability margin of plant-controller feedback loop
gapmetric    Compute the gap and the Vinnicombe gap metric
loopmargin   Comprehensive analysis of feedback loops
loopsens     Sensitivity functions of feedback loops
mussv        Bounds on the structured singular value (µ)
mussvextract Extract data from mussv output structure
ncfmargin    Normalized coprime stability margin of feedback loop
popov        Test for robust stability with Popov criterion
robustperf   Robust performance of uncertain systems
robuststab   Stability margins of uncertain systems
robopt       Create a robuststab/robustperf options object
wcnorm       Worst-case norm of an uncertain matrix
wcgain       Worst-case gain of an uncertain system
wcgopt       Create a wcnorm/wcgain options object
wcmargin     Worst-case gain/phase margins for feedback loops
wcsens       Worst-case sensitivity functions of feedback loops
Robustness Analysis for Parameter-Dependent Systems (P-Systems)

psys         Specify a parameter-dependent system (P-system)
psinfo       Query characteristics of a P-system
ispsys       True for parameter-dependent systems
pvec         Specify a vector of uncertain or time-varying parameters
pvinfo       Query characteristics of a parameter vector
polydec      Compute polytopic coordinates with respect to box corners
uss          Convert affine P-systems to uncertain state-space models
aff2pol      Convert affine P-systems to polytopic representation
quadstab     Assess quadratic stability of parameter-dependent systems
quadperf     Assess quadratic H∞ performance of P-systems
pdlstab      Test robust stability using parametric Lyapunov functions
decay        Compute quadratic decay rate
pdsimul      Simulate P-systems along parameter trajectories
Controller Synthesis

augw        Augments plant weights for mixed-sensitivity control design
h2hinfsyn   Mixed H2/H∞ controller synthesis
h2syn       H2 controller synthesis
hinfsyn     H∞ controller synthesis
sdhinfsyn   H∞ controller synthesis for continuous sampled-data systems
loopsyn     H∞ loop-shaping controller synthesis
ltrsyn      Loop-transfer recovery controller synthesis
mixsyn      H∞ mixed-sensitivity controller synthesis
ncfsyn      H∞ normalized coprime factor controller synthesis
mkfilter    Construct a shaping filter

µ-Synthesis

cmsclsyn    Constant matrix µ-synthesis
dksyn       Synthesis of a robust controller by µ-synthesis
dkitopt     Create a dksyn options object
drawmag     Interactive mouse-based sketching and fitting tool
fitfrd      Fit frequency-dependent scaling with LTI models
fitmagfrd   Fit scaling magnitude data with a stable, minimum-phase model
Sampled-Data Systems

sdhinfnorm   Induced L2 norm of a sampled-data system
sdlsim       Time response of sampled-data feedback systems
sdhinfsyn    Sampled-data H∞ controller synthesis

Gain Scheduling

hinfgs   Design gain-scheduled H∞ controllers

Supporting Utilities

bilin     State-space bilinear transform
mktito    Partition LTI systems by input and output groups
sectf     Sector transformation for LTI systems
skewdec   Create a skew-symmetric matrix (LMI)

Specification of Systems of LMIs

lmiedit   Open the LMI editor GUI
setlmis   Initialize the creation of LMIs
lmivar    Create a new matrix-valued variable in LMI systems
lmiterm   Add a new term to a given LMI
newlmi    Add a new LMI to an LMI system
getlmis   Get the internal description of LMI systems
LMI Characteristics

lmiinfo   Get information about an existing system of LMIs
lminbr    Get the number of LMIs in an LMI system
matnbr    Get the number of matrix variables in an LMI system
decnbr    Get the number of decision variables in an LMI system
dec2mat   Extract matrix variable values from a vector of decision variables
mat2dec   Construct the decision variable vector from matrix variable values
decinfo   Show how matrix variables depend on decision variables

LMI Solvers

feasp   Compute a solution to a given LMI system
mincx   Minimize a linear objective under LMI constraints
gevp    Solve generalized eigenvalue minimization problems
defcx   Specify c'x objectives for mincx

Validation of Results

evallmi   Evaluate the LMIs for given values of the decision variables
showlmi   Return the left- and right-hand sides of evaluated LMIs
Modification of Systems of LMIs

dellmi    Remove an LMI from a system of LMIs
delmvar   Remove a matrix variable from a system of LMIs
setmvar   Set a matrix variable to a given value
Functions — Alphabetical List

actual2normalized, aff2pol, augw, balancmr, bilin, bstmr, cmsclsyn, decay, dec2mat,
decinfo, decnbr, defcx, dellmi, delmvar, diag, dkitopt, dksyn, dmplot, drawmag,
evallmi, feasp, fitfrd, fitmagfrd, gapmetric, genphase, getlmis, gevp, gridureal,
h2hinfsyn, h2syn, hankelmr, hankelsv, hinfgs, hinfsyn, iconnect, icsignal, imp2exp,
imp2ss, ispsys, isuncertain, lftdata, lmiedit, lmiinfo, lminbr, lmireg, lmiterm,
lmivar, frd/loglog, loopmargin, loopsens, loopsyn, ltrsyn, mat2dec, matnbr, mincx,
mixsyn, mkfilter, mktito, modreal, msfsyn, mussv, mussvextract, ncfmargin, ncfmr,
ncfsyn, newlmi, normalized2actual, pdlstab, pdsimul, polydec, popov, psinfo, psys,
pvec, pvinfo, quadperf, quadstab, randatom, randumat, randuss, frd/rcond, reduce,
repmat, robopt, robustperf, robuststab, frd/schur, schurmr, sdhinfnorm, sdhinfsyn,
sdlsim, sectf, frd/semilog, setlmis, setmvar, showlmi, simplify, skewdec, slowfast,
squeeze, uss/ssbal, stabproj, stack, frd/svd, symdec, sysic, ucomplex, ucomplexm,
udyn, ufrd, ultidyn, umat, uplot, ureal, usample, uss, usubs, wcgain, wcgopt,
wcmargin, wcnorm, wcsens
actual2normalized

Purpose
Calculate the normalized distance between the nominal value of an uncertain atom and a given value

Syntax
ndist = actual2normalized(A,V)

Description
ndist = actual2normalized(A,V) is the normalized distance between the nominal value of the uncertain atom A and the given value V. If V is an array of values, then ndist is an array of normalized distances. The robustness margins computed by robuststab and robustperf serve as bounds for the normalized distances in ndist. For example, if an uncertain system has a stability margin of 1.4, the system remains stable as long as the normalized distance of the uncertain element values from their nominals is less than 1.4.

Example
For uncertain real parameters whose range is symmetric about the nominal value, the normalized distance is intuitive: each end point of the range is 1 unit from the nominal, points that lie inside the range are less than 1 unit from the nominal, and points that lie outside the range are more than 1 unit from the nominal. In the symmetric case, the relationship between normalized distance and numerical difference is linear.

Create an uncertain real parameter whose range is symmetric about its nominal value:

   a = ureal('a',3,'range',[1 5]);
   actual2normalized(a,[1 3 5])
   ans =
       1.0000         0    1.0000
   actual2normalized(a,[2 4])
   ans =
       0.5000    0.5000
   actual2normalized(a,[0 6])
   ans =
       1.5000    1.5000

Plot the normalized distance for several values:

   values = linspace(-4,10,250);
   ndist = actual2normalized(a,values);
   plot(values,ndist)

Next, create an asymmetric parameter. Each end point of the range is still 1 unit from the nominal, but the relationship between normalized distance and numerical difference is now nonlinear:

   au = ureal('a',4,'range',[1 5]);
   actual2normalized(au,[1 4 5])
   ans =
       1.0000         0    1.0000

Plot the normalized distance for several values:

   ndistu = actual2normalized(au,values);
   plot(values,ndistu)
   xlim([-4 10])
   ylim([0 4])

See Also
robuststab    Calculate robust stability margin
robustperf    Calculate robust performance margin
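For a symmetric range, the normalized distance is just the absolute difference from the nominal divided by the half-width of the range, so end points map to exactly 1 unit. The following is a minimal sketch of that arithmetic in Python, outside the toolbox; the function name is hypothetical, and asymmetric ranges use a nonlinear (bilinear) map instead:

```python
def normalized_distance_symmetric(nominal, range_lo, range_hi, v):
    """Normalized distance for an uncertain real parameter whose range
    [range_lo, range_hi] is symmetric about the nominal value: the
    distance is |v - nominal| scaled so each end point is 1 unit away."""
    half_width = (range_hi - range_lo) / 2.0
    return abs(v - nominal) / half_width

# Mirrors ureal('a',3,'range',[1 5]): end points -> 1, nominal -> 0
print([normalized_distance_symmetric(3, 1, 5, v) for v in (1, 3, 5)])
# Inside the range -> less than 1; outside -> greater than 1
print(normalized_distance_symmetric(3, 1, 5, 2))   # 0.5
print(normalized_distance_symmetric(3, 1, 5, 6))   # 1.5
```

This linear scaling is what makes the symmetric case intuitive; the asymmetric case cannot be captured by a single half-width.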
aff2pol

Purpose
Convert affine parameter-dependent models to polytopic models

Syntax
polsys = aff2pol(affsys)

Description
aff2pol derives a polytopic representation polsys of the affine parameter-dependent system

   E(p) dx/dt = A(p) x + B(p) u        (10-1)
            y = C(p) x + D(p) u        (10-2)

where p = (p1, . . ., pn) is a vector of uncertain or time-varying real parameters taking values in a box or a polytope. The description affsys of this system should be specified with psys.

The vertex systems of polsys are the instances of (10-1)-(10-2) at the vertices pex of the parameter range, i.e., the SYSTEM matrices

   [ A(pex) + jE(pex)    B(pex)
     C(pex)              D(pex) ]

for all corners pex of the parameter box or all vertices pex of the polytope of parameter values.

See Also
psys    Specification of uncertain state-space models
pvec    Quantification of uncertainty on physical parameters
uss     Create an uncertain state-space model
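The conversion above amounts to evaluating the affine matrices at every corner of the parameter box. A hedged numpy sketch of that enumeration (illustrative only, outside the toolbox; the function and variable names are hypothetical):

```python
import itertools
import numpy as np

def affine_to_polytopic(A0, Ai, p_ranges):
    """Evaluate an affine matrix A(p) = A0 + sum_i p_i * Ai[i] at every
    corner of the parameter box, returning the list of vertex matrices."""
    corners = itertools.product(*p_ranges)  # all 2^n corner combinations
    vertices = []
    for p in corners:
        Ap = A0.copy()
        for pi, Aip in zip(p, Ai):
            Ap = Ap + pi * Aip
        vertices.append(Ap)
    return vertices

# Two parameters, each ranging over an interval -> 4 vertex systems
A0 = np.array([[0.0, 1.0], [-1.0, -1.0]])
A1 = np.array([[0.0, 0.0], [-1.0, 0.0]])   # multiplies p1
A2 = np.array([[0.0, 0.0], [0.0, -1.0]])   # multiplies p2
verts = affine_to_polytopic(A0, [A1, A2], [(0.5, 2.0), (1.0, 3.0)])
print(len(verts))  # 4
```

The same enumeration would be applied to B(p), C(p), D(p) and E(p) to build the full vertex SYSTEM matrices.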
W1. R and T are given by S = ( I + GK ) –1 –1 –1 R = K ( I + GK ) . control signal and output signal respectively (see block diagram) so that the closedloop transfer function matrix is the weighted mixed sensitivity ∆ W 1S Ty u = W2 R 1 1 W3 T where S.W2. respectively. AUGMENTED PLANT P(s) y W1 W2 + u 1 u 2 Σ e u G y W3 } y 1 y y 2 CONTROLLER F(s) K(s) Figure 101: Plant Augmentation.W1. and W3(s) penalizing the error signal. T = GK ( I + GK ) The LTI systems S and T are called the sensitivity and complementary sensitivity.augw Purpose Syntax Description 10augw Statespace or transfer function plant augmentation for use in weighted mixedsensitivity H∞ and H2 loopshaping design. W2(s).W2. P = AUGW(G.W3) computes a statespace model of an augmented LTI plant P(s) with weighting functions W1(s). 1019 .W3) P = AUGW(G.
For dimensional compatibility, each of the three weights W1, W2 and W3 must be either empty, a scalar (SISO), or have respective input dimensions NY, NU, and NY, where G is NY-by-NU. If one of the weights is not needed, you may simply assign an empty matrix [ ]; e.g., P = AUGW(G,W1,[],W3) is P(s) as in the "Algorithm" section below, but without the second row (the row containing W2).

Algorithm
The augmented plant P(s) produced by augw is

   P(s) = [ W1    -W1*G
            0      W2
            0      W3*G
            I     -G    ]

with state-space realization

   P(s) := [ A    B1   B2
             C1   D11  D12
             C2   D21  D22 ]

where

   A  = [ AG        0     0     0
         -BW1*CG   AW1    0     0
          0         0    AW2    0
          BW3*CG    0     0    AW3 ]

   B1 = [ 0; BW1; 0; 0 ]
   B2 = [ BG; -BW1*DG; BW2; BW3*DG ]

   C1 = [ -DW1*CG   CW1    0     0
           0         0    CW2    0
           DW3*CG    0     0    CW3 ]

   D11 = [ DW1; 0; 0 ]
   D12 = [ -DW1*DG; DW2; DW3*DG ]

   C2  = [ -CG  0  0  0 ]
   D21 = I
   D22 = -DG

Partitioning is embedded via P = mktito(P,NY,NU), which sets the InputGroup and OutputGroup properties of P as follows:

   [r,c] = size(P);
   P.InputGroup  = struct('U1',1:c-NU,'U2',c-NU+1:c);
   P.OutputGroup = struct('Y1',1:r-NY,'Y2',r-NY+1:r);
Example
   s = zpk('s');
   G = (s-1)/(s+1);
   W1 = 0.1*(s+100)/(100*s+1);
   W2 = 0.1;
   W3 = [];
   P = augw(G,W1,W2,W3);
   [K,CL,GAM] = hinfsyn(P);
   [K2,CL2,GAM2] = h2syn(P);
   L = G*K;
   S = inv(1+L);
   T = 1-S;
   sigma(S,'k',GAM/W1,'k-.',T,'r',GAM*G/W2,'r-.')
   legend('S = 1/(1+L)','GAM/W1','T = L/(1+L)','GAM*G/W2',2)

(The resulting singular value plot compares S and T with the robustness bounds GAM/W1 and GAM*G/W2.)

Limitations
The transfer functions G, W1, W2 and W3 must be proper, i.e., bounded as s → ∞ or, in the discrete-time case, as z → ∞. Additionally, W1, W2 and W3 should be stable. The plant G should be stabilizable and detectable; else, P will not be stabilizable by any K.

See Also
h2syn     H2 controller synthesis
hinfsyn   H∞ controller synthesis
mixsyn    H∞ mixed-sensitivity controller synthesis
mktito    Partition LTI system via Input and Output Groups
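The closed-loop maps S, R and T satisfy S + T = I by construction, which is why penalizing S and T simultaneously trades off tracking against robustness. For static (matrix) plant and controller the algebra is easy to verify numerically; a hedged numpy sketch, not the toolbox implementation:

```python
import numpy as np

def mixed_sensitivity(G, K):
    """Sensitivity S, control sensitivity R, and complementary
    sensitivity T for a static square plant G and controller K."""
    I = np.eye(G.shape[0])
    S = np.linalg.inv(I + G @ K)
    R = K @ S
    T = G @ K @ S
    return S, R, T

G = np.array([[2.0]])
K = np.array([[3.0]])
S, R, T = mixed_sensitivity(G, K)
print(S + T)  # identity: S + T = I by construction
```

The augmented plant P(s) is exactly the interconnection whose lower LFT with K produces the stacked [W1*S; W2*R; W3*T] channels.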
key1.redinfo] = balancmr(G.redinfo] = balancmr(G. The error bound is computed based on Hankel singular values of G.. or optionally a vector packed with desired orders for batch runs ORDER A batch run of a serial of different reduced order models can be generated by specifying order = x:y..value1. Hence. Argument G Description LTI model to be reduced (without any other inputs will plot its Hankel singular values and prompt for reduced order) (Optional) an integer for the desired order of the reduced model.order) [GRED. σι.key1... reduced order can be directly determined by examining the system Hankel SV’s.balancmr Purpose Syntax 10balancmr Balanced model truncation via square root method GRED = balancmr(G) GRED = balancmr(G. all the 1022 . the function will show a Hankel singular value plot of the original model and prompt for model order number to reduce.) balancmr returns a reduced order model GRED of G and a struct array redinfo Description containing the error bound of the reduced model and Hankel singular values of the original system. With only one input argument G. This method guarantees an error bound on the infinity norm of the additive error  GGRED ∞ for wellconditioned model reduced problems [1]: n G – Gred ∞ ≤ 2 ∑ σi k+1 This table describes input arguments for balancmr... By default. or a vector of positive integers.) [GRED.order. For a stable system these values indicate the respective state energy of the system.value1.
'MaxError' can be specified in the same fashion as an alternative to 'Order'. When present, 'MaxError' overrides the ORDER input, and the reduced order is determined when the sum of the tail of the Hankel singular values reaches 'MaxError'.

This table lists the input arguments 'key' and their 'value'.

Argument     Value                           Description
'MaxError'   Real number or vector of        Reduce to achieve H∞ error.
             different errors
'Weights'    {Wout,Win} cell array           Optional 1-by-2 cell array of LTI weights Wout (output) and Win (input). Weights on the original model input and/or output can make the model reduction algorithm focus on a frequency range of interest, but the weights have to be stable, minimum phase and invertible. Defaults are both identity.
'Display'    'on' or 'off'                   Display Hankel singular value plots (default 'off').
'Order'      Integer, vector or cell array   Order of the reduced model. Use only if not specified as the 2nd argument.
This table describes the output arguments.

Argument   Description
GRED       LTI reduced-order model. Becomes a multidimensional array when a vector of model orders is specified.
REDINFO    A struct array with three fields:
           - REDINFO.ErrorBound (bound on ||G - GRED||∞)
           - REDINFO.StabSV (Hankel singular values of the stable part of G)
           - REDINFO.UnstabSV (Hankel singular values of the unstable part of G)

G can be stable or unstable, continuous or discrete.

Algorithm
Given a state-space (A,B,C,D) of a system and k, the desired reduced order, the following steps produce a similarity transformation that truncates the original state-space system to the kth-order reduced model.

1  Find the SVD of the controllability and observability grammians:

      P = Up Σp Vp'
      Q = Uq Σq Vq'

2  Find the square roots of the grammians (left/right eigenvectors):

      Lp = Up Σp^(1/2)
      Lo = Uq Σq^(1/2)

3  Find the SVD of (Lo' Lp):

      Lo' Lp = U Σ V'

4  Then the left and right transformations for the final kth-order reduced model are

      SL,BIG = Lo U(:,1:k) Σ(1:k,1:k)^(-1/2)
      SR,BIG = Lp V(:,1:k) Σ(1:k,1:k)^(-1/2)
5  Finally,

      [ Â  B̂      [ SL,BIG' A SR,BIG    SL,BIG' B
        Ĉ  D̂ ]  =   C SR,BIG            D         ]

The proof of the square root balanced truncation algorithm can be found in [2].

Example
Given a continuous or discrete, stable or unstable system, the following commands generate a set of reduced-order models based on your selections:

   rand('state',1234); randn('state',5678);
   G = rss(30,5,4);
   [g1, redinfo1] = balancmr(G);  % displays Hankel SV plot
                                  % and prompts for order (try 15:20)
   [g2, redinfo2] = balancmr(G,20);
   [g3, redinfo3] = balancmr(G,[10:2:18]);
   [g4, redinfo4] = balancmr(G,'MaxError',[0.01, 0.05]);
   rand('state',12345); randn('state',6789);
   wt1 = rss(6,5,5); wt1.d = eye(5)*2;
   wt2 = rss(6,4,4); wt2.d = 2*eye(4);
   [g5, redinfo5] = balancmr(G,[10:2:18],'weight',{wt1,wt2});
   for i = 1:5
     figure(i); eval(['sigma(G,g' num2str(i) ');']);
   end

Reference
[1] Glover, K., "All Optimal Hankel Norm Approximation of Linear Multivariable Systems, and Their L∞-Error Bounds," Int. J. Control, vol. 39, no. 6, 1984, pp. 1145-1193.
[2] Safonov, M.G., and R.Y. Chiang, "A Schur Method for Balanced Model Reduction," IEEE Trans. on Automat. Contr., vol. 34, no. 7, July 1989, pp. 729-733.

See Also
reduce     Top-level model reduction function
schurmr    Balanced model truncation via Schur method
hankelmr   Hankel minimum degree approximation
bstmr      Balanced stochastic model truncation via Schur method
ncfmr      Balanced model truncation for normalized coprime factors
hankelsv   Hankel singular values
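The five algorithm steps above can be sketched numerically with plain numpy: solve the two Lyapunov equations for the grammians, take their square roots, and project. This is a hedged illustration for small, dense, continuous-time systems only, not the toolbox code:

```python
import numpy as np

def lyap_cont(A, Q):
    """Solve A P + P A' + Q = 0 via Kronecker products (small systems)."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(I, A) + np.kron(A, I)
    P = np.linalg.solve(K, -Q.reshape(-1, order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

def sqrt_balanced_truncation(A, B, C, D, k):
    """Square-root balanced truncation to order k (steps 1-5 above)."""
    P = lyap_cont(A, B @ B.T)            # controllability grammian
    Q = lyap_cont(A.T, C.T @ C)          # observability grammian
    Up, sp, _ = np.linalg.svd(P)         # step 1: SVDs of the grammians
    Uq, sq, _ = np.linalg.svd(Q)
    Lp = Up * np.sqrt(sp)                # step 2: grammian square roots
    Lo = Uq * np.sqrt(sq)
    U, s, Vt = np.linalg.svd(Lo.T @ Lp)  # step 3: s holds the Hankel SVs
    sl = Lo @ U[:, :k] / np.sqrt(s[:k])  # step 4: left/right projections
    sr = Lp @ Vt.T[:, :k] / np.sqrt(s[:k])
    # step 5: truncated realization
    return sl.T @ A @ sr, sl.T @ B, C @ sr, D, s

A = np.diag([-1.0, -2.0, -50.0])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])
D = np.zeros((1, 1))
Ar, Br, Cr, Dr, hsv = sqrt_balanced_truncation(A, B, C, D, 2)
print(Ar.shape)  # (2, 2)
```

The fast, weakly coupled third state carries a tiny Hankel singular value, so truncating it barely perturbs the response, consistent with the 2*(tail of σ) error bound.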
bilin

Purpose
Multivariable bilinear transform of frequency (s or z)

Syntax
GT = bilin(G,VERS,METHOD,AUG)

Description
bilin computes the effect on a system of the frequency-variable substitution

   s = (αz + δ)/(γz + β)

This transformation maps lines and circles to circles and lines in the complex plane. People often use this transformation to do sampled-data control system design [1] or, in general, to do shifting of jω-axis modes [2], [3], [4].

The variable VERS denotes the transformation direction:
   VERS =  1, forward transform (s → z) or (s → s~)
   VERS = -1, reverse transform (z → s) or (s~ → s)

bilin computes several state-space bilinear transformations, such as backward rectangular, forward rectangular, shifted Tustin, etc., based on the METHOD you select.

Table 10-1: Bilinear transform types

Method      Type of bilinear transform
'BwdRec'    Backward rectangular:  s = (z-1)/(Tz),  AUG = T, the sampling period
'FwdRec'    Forward rectangular:   s = (z-1)/T,     AUG = T, the sampling period
'S_Tust'    Shifted Tustin:        s = (2/T)(z-1)/(z/h + 1),  AUG = [T h], where h is the "shift" coefficient
'S_ftjw'    Shifted jω-axis, bilinear pole-shifting, continuous-time to continuous-time:
            s = (s~ + p1)/(1 + s~/p2),  AUG = [p2 p1]
'G_Bilin'   General bilinear, continuous-time to continuous-time:
            s = (αs~ + δ)/(γs~ + β),  AUG = [α β γ δ]

Example 1. Tustin continuous s-plane to discrete z-plane transforms.

Consider the following continuous-time plant (sampled at 20 Hz):

   A = [ -1  1      B = [ 1  0      C = [ 1  0      D = [ 0  0
          0 -2 ]          1  1 ]          0  1 ]          0  0 ]

Following is an example of four common "continuous to discrete" bilin transformations for the sampled plant:

   A = [-1 1; 0 -2]; B = [1 0; 1 1];
   C = [1 0; 0 1];   D = [0 0; 0 0];
   sys = ss(A,B,C,D);                   % analog plant
   Ts = 0.05;                           % sampling time
   [syst] = c2d(sys,Ts,'tustin');       % Tustin
40).1. % Forward Rectangular w = logspace(2.1. % frequencies to plot sigma(sys.'Sft_jw'. s=zpk('s').sysp.Ts.1). % Prewarped Tustin [sysb] = bilin(sys. Bilinear continuous to continuous poleshifting 'S_ftjw' Design an H mixedsensitivity controller for the ACC Benchmark plant 1 G ( s ) = 2 2 s (s + 2) such that all closedloop poles lie inside a circle in the left half of the splane whose diameter lies on between points [p1.'Sft_jw'. % bilinear pole shifted plant Gt Kt=mixsyn(Gt. Example 2.sysf.3.Ts).[p1 p2]).syst. Figure 102: . % bilinear pole shifted controller K =bilin(Kt. Comparison of 4 Bilinear Transforms from Example 1.sysb.1.bilin [sysp] = c2d(sys.1.p2]=[12.2]: p1=12.50).[p1 p2]). G=ss(1/(s^2*(s^2+2))).'FwdRec'. . % Backward Rectangular [sysf] = bilin(sys.Ts). % final controller K 1029 . p2=2.'BwdRec'.1.[].w).'prewarp'. % original unshifted plant Gt=bilin(G.
The shifted plant, which has its nonstable poles shifted to the inside of the right circle, is

   Gt(s) = 4.765e-5 (s - 12)^4 / ((s - 2)^2 (s^2 - 4.274s + 5.918))

As shown in Figure 10-3, the final closed-loop poles are placed inside the left [p1,p2] circle.

Figure 10-3: Example of the bilinear mapping s~ = (-s + p1)/(s/p2 - 1). (Plot shows the original plant poles (s-plane), shifted plant poles (s~-plane), shifted H∞ closed-loop poles, and final H∞ closed-loop poles.)

Algorithm
bilin employs the state-space formulae in [3]:

   [ Ab  Bb      [ (βA - δI)(αI - γA)^-1    (αβ - γδ)(αI - γA)^-1 B
     Cb  Db ]  =   C(αI - γA)^-1            D + γC(αI - γA)^-1 B    ]
References
[1] Franklin, G.F., and J.D. Powell, Digital Control of Dynamic Systems, Addison-Wesley, 1980.
[2] Safonov, M.G., R.Y. Chiang, and H. Flashner, "H∞ Control Synthesis for a Large Space Structure," AIAA J. Guidance, Control and Dynamics, vol. 14, no. 3, May/June 1991, pp. 513-520.
[3] Safonov, M.G., "Imaginary-Axis Zeros in Multivariable H∞ Optimal Control," in R.F. Curtain (editor), Modelling, Robustness and Sensitivity Reduction in Control Systems, Springer-Verlag, Berlin, 1987, pp. 71-81.
[4] Chiang, R.Y., and M.G. Safonov, "H∞ Synthesis using a Bilinear Pole Shifting Transform," AIAA J. Guidance, Control and Dynamics, vol. 15, no. 5, September-October 1992, pp. 1111-1117.

See Also
c2d     Convert from continuous- to discrete-time
d2c     Convert from discrete- to continuous-time
sectf   Sector transformation
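The general bilinear state-space formula can be checked numerically: the transformed realization should satisfy Gb(z) = G(s) at s = (αz + δ)/(γz + β) for any test point z. A hedged numpy sketch of that check (illustrative only, not the toolbox implementation):

```python
import numpy as np

def bilin_ss(A, B, C, D, alpha, beta, gamma, delta):
    """General bilinear transform of a state-space realization, using
    Ab = (beta*A - delta*I)(alpha*I - gamma*A)^-1 and companion formulas."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.linalg.inv(alpha * I - gamma * A)
    Ab = (beta * A - delta * I) @ M
    Bb = (alpha * beta - gamma * delta) * (M @ B)
    Cb = C @ M
    Db = D + gamma * (C @ M @ B)
    return Ab, Bb, Cb, Db

def tf_eval(A, B, C, D, s):
    """Transfer function G(s) = C (sI - A)^-1 B + D at a complex point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Small SISO plant and an arbitrary bilinear map
rng = np.random.default_rng(0)
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = rng.standard_normal((2, 1))
C = rng.standard_normal((1, 2))
D = np.zeros((1, 1))
alpha, beta, gamma, delta = 2.0, 1.0, 0.3, -0.5

Ab, Bb, Cb, Db = bilin_ss(A, B, C, D, alpha, beta, gamma, delta)
z = 1.7j
s = (alpha * z + delta) / (gamma * z + beta)  # frequency substitution
err = abs((tf_eval(Ab, Bb, Cb, Db, z) - tf_eval(A, B, C, D, s))[0, 0])
# err is numerically zero: the two evaluations agree
```

The formula requires (αI - γA) to be invertible, which corresponds to the original dynamics having no pole at the point the map sends to infinity.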
bstmr

Purpose  Balanced stochastic model truncation (BST) via Schur method

Syntax
GRED = bstmr(G)
GRED = bstmr(G,order)
[GRED,redinfo] = bstmr(G,key1,value1,...)
[GRED,redinfo] = bstmr(G,order,key1,value1,...)

Description
bstmr returns a reduced order model GRED of G and a struct array redinfo containing the error bound of the reduced model and the Hankel singular values of the phase matrix of the original system [2].

The error bound is computed based on Hankel singular values of the phase matrix of G. For a stable system these values indicate the respective state energy of the system. Hence, the reduced order can be directly determined by examining these values. This method guarantees an error bound on the infinity norm of the multiplicative error ||GRED^-1(G - GRED)||∞ or relative error ||G^-1(G - GRED)||∞ for well-conditioned model reduction problems [1]:

   ||G^-1(G - Gred)||∞  ≤  ∏ (i = k+1 to n) [1 + 2σi(√(1 + σi^2) + σi)] - 1

This table describes input arguments for bstmr.

Argument  Description
G         LTI model to be reduced (without any other inputs, will plot its Hankel singular values and prompt for reduced order)
ORDER     (Optional) an integer for the desired order of the reduced model, or a vector of desired orders for batch runs

A batch run of a series of different reduced order models can be generated by specifying order = x:y, or a vector of integers. With only one input argument G, the function will show a Hankel singular value plot of the phase matrix of G and prompt for the model order number to reduce.

By default, all the antistable
part of a system is kept, because from a control stability point of view, getting rid of unstable state(s) is dangerous when modeling a system.

'MaxError' can be specified in the same fashion as an alternative for 'ORDER'. In this case, the reduced order will be determined when the accumulated product of Hankel singular values shown in the above equation reaches the 'MaxError'.

Argument     Value                          Description
'MaxError'   Real number or vector of       Reduce to achieve H∞ error. When present,
             different errors               'MaxError' overrides the ORDER input.
'Display'    'on' or 'off'                  Display Hankel singular value plots
                                            (default 'off').
'Order'      Integer, vector or cell array  Order of reduced model. Use only if not
                                            specified as 2nd argument.

This table describes output arguments.

Argument  Description
GRED      LTI reduced order model. Becomes a multidimensional array when the input is a series of different model orders.
REDINFO   A STRUCT array with three fields:
          • REDINFO.ErrorBound (bound on ||G^-1(G-GRED)||∞)
          • REDINFO.StabSV (Hankel SV of stable part of G)
          • REDINFO.UnstabSV (Hankel SV of unstable part of G)

G can be stable or unstable, continuous or discrete.

Algorithm
Given a state space (A,B,C,D) of a system and k, the desired reduced order, the following steps produce a similarity transformation to truncate the original state-space system to the kth order reduced model.
1 Find the controllability grammian P and observability grammian Q of the left spectral factor Φ = Γ(s)Γ*(-s) = Ω*(-s)Ω(s) by solving the following Lyapunov and Riccati equations:

   A P + P A' + B B' = 0
   Bw = P C' + B D'
   Q A + A' Q + (Q Bw - C')(D D')^-1 (Q Bw - C')' = 0

2 Find the Schur decomposition for PQ in both ascending and descending order:

   Va' PQ Va = upper triangular with eigenvalues λ1, ..., λn in ascending order on the diagonal
   Vd' PQ Vd = upper triangular with eigenvalues λn, ..., λ1 in descending order on the diagonal

3 Find the left/right orthonormal eigenbases of PQ associated with the k big Hankel singular values of the all-pass phase matrix (W*(s))^-1 G(s):

   Va = [V_{R,SMALL}, V_{L,BIG}]
   Vd = [V_{R,BIG}, V_{L,SMALL}]

4 Find the SVD of (V_{L,BIG}' V_{R,BIG}) = U Σ V'

5 Form the left/right transformation for the final kth order reduced model:

   S_{L,BIG} = V_{L,BIG} U Σ(1:k,1:k)^{-1/2}
   S_{R,BIG} = V_{R,BIG} V Σ(1:k,1:k)^{-1/2}
6 Finally,

   [Â  B̂]   [ S_{L,BIG}' A S_{R,BIG}   S_{L,BIG}' B ]
   [Ĉ  D̂] = [ C S_{R,BIG}              D            ]

The proof of the Schur BST algorithm can be found in [2].

Note  The BST model reduction theory requires that the original model D matrix be full rank, for otherwise the Riccati solver fails. For any problem with a strictly proper model, the bstmr program will assign a full rank D matrix scaled by 0.001 of the minimum eigenvalue of the original model, if its D matrix is not full rank to begin with. This serves the purpose for most problems if you do not want to go through the trouble of model pretransformation. Alternatively, you can attach a small but full rank D matrix to the original problem and remove the D matrix of the reduced order model afterwards. As long as the size of the D matrix is insignificant inside the control bandwidth, the reduced order model should be fairly close to the true model. You can also shift the jω-axis via bilin such that BST/REM approximation can be achieved up to a particular frequency range of interest.

Example
Given a continuous or discrete, stable or unstable system, the following commands can generate a set of reduced order models based on your selections:

   rand('state',1234);
   randn('state',5678);
   G = rss(30,5,4);
   G.d = zeros(5,4);
   [g1, redinfo1] = bstmr(G);               % display Hankel SV plot
                                            % and prompt for order (try 15:20)
   [g2, redinfo2] = bstmr(G,20);
   [g3, redinfo3] = bstmr(G,[10:2:18]);
   [g4, redinfo4] = bstmr(G,'MaxError',[0.01, 0.05]);
   for i = 1:4
       figure(i);
       eval(['sigma(G,g' num2str(i) ');']);
   end

Reference
[1] Zhou, K., "Frequency weighted L∞ error bounds," Syst. Contr. Lett., Vol. 21, 1993, pp. 115-125.
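The guaranteed bound returned in redinfo can be compared directly with the achieved relative error. A minimal sketch, with arbitrary model size, reduced order, and seeds; the model is made square with a full-rank D so that it can be inverted:

```matlab
rand('state',0); randn('state',0);
G = rss(20,2,2);                    % random 20-state stable model
G.d = eye(2);                       % full rank D, as the Note above recommends
[Gred,redinfo] = bstmr(G,8);        % 8th-order BST approximation
relerr = norm(inv(G)*(G-Gred),inf)  % achieved relative error ||G^-1(G-Gred)||inf
bound  = redinfo.ErrorBound         % guaranteed bound; relerr <= bound is expected
```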
[2] Safonov, M.G., and R.Y. Chiang, "Model Reduction for Robust Control: A Schur Relative Error Method," International J. of Adaptive Control and Signal Processing, Vol. 2, pp. 259-272, 1988.

See Also
reduce     Top level model reduction function
balancmr   Balanced truncation via square-root method
hankelmr   Hankel minimum degree approximation
schurmr    Balanced truncation via Schur method
ncfmr      Balanced truncation for normalized coprime factors
hankelsv   Hankel singular values
cmsclsyn

Purpose  Approximately solve the constant-matrix, upper bound µ-synthesis problem

Syntax
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,QINIT)
[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,'random',N)

Description
cmsclsyn approximately solves the constant-matrix, upper bound µ-synthesis problem by minimization

   min over Q ∈ C^{r×t} of  µ_Δ(R + UQV)

for given matrices R ∈ C^{n×m}, U ∈ C^{n×r}, V ∈ C^{t×m}, and a set Δ ⊂ C^{m×n}.

[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure) minimizes, by choice of Q, the upper bound of mussv(R+U*Q*V,BLK). QOPT is the optimum value of Q and BND is the upper bound. This applies to constant matrix data in R, U, and V. BlockStructure is a matrix specifying the perturbation block structure as defined for mussv. Q has as many rows as U has columns (NCU) and as many columns as V has rows (NRV). The approximation to solving the constant-matrix µ-synthesis problem is twofold: only the upper bound for µ is minimized, and the minimization is not convex, hence the optimum is generally not found. Due to the nonconvexity of the overall problem, different starting points often yield different final answers.

[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT) uses the options specified by OPT in the calls to mussv. See mussv for more information. The default value for OPT is 'cUsw'.

[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,QINIT) initializes the iterative computation from Q = QINIT. If QINIT is an N-D array, then the iterative computation is performed multiple times; the i'th optimization is initialized at Q = QINIT(:,:,i). The output arguments are associated with the best solution obtained in this brute force approach.

[QOPT,BND] = cmsclsyn(R,U,V,BlockStructure,OPT,'random',N) initializes the iterative computation from N random instances of QINIT.

If U is full column rank, or V is full
row rank, then the problem can be (and is) cast as a convex problem, and solved directly, without resorting to the iteration.

Algorithm
The cmsclsyn algorithm is iterative, alternatively holding Q fixed and computing the mussv upper bound, followed by holding the upper bound multipliers fixed and minimizing the bound implied by the choice of Q. If U or V is square and invertible, then the optimization is reformulated (exactly) as a linear matrix inequality, and the global optimizer (for the upper bound for µ) is calculated, [PacZPB].

Reference
[PacZPB] Packard, A., K. Zhou, P. Pandey, and G. Becker, "A collection of robust control problems leading to LMI's," 30th IEEE Conference on Decision and Control, Brighton, UK, 1991, pp. 1245-1250.

See Also
dksyn       Synthesize a robust controller via D-K iteration
hinfsyn     Synthesize an H∞ controller
mussv       Calculate bounds on the structured singular value (µ)
robuststab  Calculate stability margins of uncertain systems
robustperf  Calculate performance margins of uncertain systems
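A minimal usage sketch. The data below is random and purely illustrative; the block structure declares four 1-by-1 complex uncertainty blocks, matching the 4-by-4 matrix data:

```matlab
randn('state',0);
R = randn(4,4) + sqrt(-1)*randn(4,4);   % constant complex matrix data
U = randn(4,2);
V = randn(2,4);
BlockStructure = [1 1; 1 1; 1 1; 1 1];  % four 1-by-1 complex blocks
[Qopt,bnd] = cmsclsyn(R,U,V,BlockStructure);
bnds0 = mussv(R,BlockStructure);        % mu upper bound with Q = 0
% Since Q = 0 is a feasible choice, bnd is typically no larger than bnds0(1).
```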
decay

Purpose  Quadratic decay rate of polytopic or affine P-systems

Syntax
[drate,P] = decay(ps,options)

Description
For affine parameter-dependent systems

   E(p) dx/dt = A(p)x,   p(t) = (p1(t), ..., pn(t))

or polytopic systems

   E(t) dx/dt = A(t)x,   (A, E) ∈ Co{(A1, E1), ..., (An, En)},

decay returns the quadratic decay rate drate, i.e., the smallest α ∈ R such that

   A'QE + EQA' < αQ

holds for some Lyapunov matrix Q > 0 and all possible values of (A, E). Two control parameters can be reset via options(1) and options(2):

• If options(1)=0 (default), decay runs in fast mode, using the least expensive sufficient conditions. Set options(1)=1 to use the least conservative conditions.
• options(2) is a bound on the condition number of the Lyapunov matrix P. The default is 10^9.

See Also
quadstab  Quadratic stability of polytopic or parameter-dependent systems
pdlstab   Robust stability of polytopic or affine parameter-dependent systems (P-systems)
psys      Specification of uncertain state-space models
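A minimal calling sketch for a two-vertex polytopic system. The vertex matrices are illustrative, and the sketch assumes ltisys accepts a single A matrix for an autonomous system, as in the LMI Lab examples:

```matlab
% Polytopic system dx/dt = A(t)x with A(t) in Co{A1,A2}
A1 = [-1  1; 0 -2];
A2 = [-2  0; 1 -3];
s1 = ltisys(A1);            % vertex systems (autonomous, no B, C, D)
s2 = ltisys(A2);
ps = psys([s1 s2]);         % polytopic model
[drate,P] = decay(ps)       % quadratic decay rate and Lyapunov matrix P
```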
decinfo

Purpose  Describe how the entries of a matrix variable X relate to the decision variables

Syntax
decinfo(lmisys)
decX = decinfo(lmisys,X)

Description
The function decinfo expresses the entries of a matrix variable X in terms of the decision variables x1, ..., xN. Recall that the decision variables are the free scalar variables of the problem, or equivalently, the free entries of all matrix variables described in lmisys. Each entry of X is either a hard zero, some decision variable xn, or its opposite -xn.

If X is the identifier of X supplied by lmivar, the command

   decX = decinfo(lmisys,X)

returns an integer matrix decX of the same dimensions as X whose (i, j) entry is
• 0 if X(i, j) is a hard zero
• n if X(i, j) = xn (the nth decision variable)
• -n if X(i, j) = -xn

decX clarifies the structure of X as well as its entrywise dependence on x1, ..., xN. This is useful to specify matrix variables with atypical structures (see lmivar).

decinfo can also be used in interactive mode by invoking it with a single argument. It then prompts the user for a matrix variable and displays in return the decision variable content of this variable.

Example 1
Consider an LMI with two matrix variables X and Y with structure:
• X = xI3 with x scalar
• Y rectangular of size 2-by-1

If these variables are defined by

   setlmis([])
   X = lmivar(1,[3 0])
   Y = lmivar(2,[2 1])
   :
   :
   lmis = getlmis
the decision variables in X and Y are given by

   dX = decinfo(lmis,X)

   dX =
        1    0    0
        0    1    0
        0    0    1

   dY = decinfo(lmis,Y)

   dY =
        2
        3

This indicates a total of three decision variables x1, x2, x3 that are related to the entries of X and Y by

   X = [ x1   0   0          Y = [ x2
         0   x1   0                x3 ]
         0    0  x1 ],

Note that the number of decision variables corresponds to the number of free entries in X and Y when taking structure into account.

Example 2
Suppose that the matrix variable X is symmetric block diagonal with one 2-by-2 full block and one 2-by-2 scalar block, and is declared by

   setlmis([])
   X = lmivar(1,[2 1;2 0])
   :
   lmis = getlmis

The decision variable distribution in X can be visualized interactively as follows:

   decinfo(lmis)

   There are 4 decision variables labeled x1 to x4 in this problem.
   Matrix variable Xk of interest (enter k between 1 and 1, or 0 to quit):

   ?> 1

   The decision variables involved in X1 are among {x1,...,x4}.
   Their entrywise distribution in X1 is as follows
   (0,j>0,-j<0 stand for 0,xj,-xj, respectively):

   X1 :
        1    2    0    0
        2    3    0    0
        0    0    4    0
        0    0    0    4

   *********

   Matrix variable Xk of interest (enter k between 1 and 1, or 0 to quit):

   ?> 0

See Also
lmivar   Specify the matrix variables in an LMI problem
mat2dec  Return the vector of decision variables corresponding to particular values of the matrix variables
dec2mat  Given values of the decision variables, derive the corresponding values of the matrix variables
decnbr   Give the total number of decision variables in a system of LMIs
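Example 2 can also be reproduced non-interactively with the two-argument form. A sketch; the dummy LMI term below exists only so that getlmis can be called on a nonempty system:

```matlab
setlmis([])
X = lmivar(1,[2 1; 2 0]);   % one 2x2 full block + one 2x2 scalar block
lmiterm([1 1 1 X],1,1)      % dummy LMI term involving X (X is 4-by-4)
lmis = getlmis;
decX = decinfo(lmis,X)
% decX = [1 2 0 0; 2 3 0 0; 0 0 4 0; 0 0 0 4]  (as in the display above)
```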
decnbr

Purpose  Give the total number of decision variables in a system of LMIs

Syntax
ndec = decnbr(lmisys)

Description
The function decnbr returns the number ndec of decision variables (free scalar variables) in the LMI problem described in lmisys. In other words, ndec is the length of the vector of decision variables.

Example
For an LMI system lmis with two matrix variables X and Y such that
• X is symmetric block diagonal with one 2-by-2 full block and one 2-by-2 scalar block
• Y is 2-by-3 rectangular,
the number of decision variables is

   ndec = decnbr(LMIs)

   ndec =
       10

This is exactly the number of free entries in X and Y when taking structure into account (see decinfo for more details).

See Also
dec2mat  Given values of the decision variables, derive the corresponding values of the matrix variables
decinfo  Describe how the entries of a matrix variable X relate to the decision variables
mat2dec  Return the vector of decision variables corresponding to particular values of the matrix variables
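The count in this example can be reproduced with a sketch like the following; the LMI term is a dummy, dimensioned only so that the system is well formed:

```matlab
setlmis([])
X = lmivar(1,[2 1; 2 0]);   % 2x2 full block (3 free) + 2x2 scalar block (1 free)
Y = lmivar(2,[2 3]);        % 2-by-3 rectangular: 6 free entries
lmiterm([1 1 1 X],1,1)                      % X term (X is 4-by-4)
lmiterm([1 1 1 Y],ones(4,2),ones(3,4),'s')  % A*Y*B + (A*Y*B)' term
lmis = getlmis;
ndec = decnbr(lmis)         % 3 + 1 + 6 = 10
```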
dec2mat

Purpose  Given values of the decision variables, derive the corresponding values of the matrix variables

Syntax
valX = dec2mat(lmisys,decvars,X)

Description
Given a value decvars of the vector of decision variables, dec2mat computes the corresponding value valX of the matrix variable with identifier X. This identifier is returned by lmivar when declaring the matrix variable.

Recall that the decision variables are all free scalar variables in the LMI problem and correspond to the free entries of the matrix variables X1, ..., XK. Since LMI solvers return a feasible or optimal value of the vector of decision variables, dec2mat is useful to derive the corresponding feasible or optimal values of the matrix variables.

Example
See the description of feasp.

See Also
mat2dec  Return the vector of decision variables corresponding to particular values of the matrix variables
decnbr   Give the total number of decision variables in a system of LMIs
decinfo  Describe how the entries of a matrix variable X relate to the decision variables
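A typical usage sketch in the spirit of the feasp example: solve a Lyapunov-type feasibility problem, then recover the matrix variable. The matrix A below is an arbitrary stable example:

```matlab
A = [-1 2; 0 -3];                 % illustrative stable matrix
setlmis([])
X = lmivar(1,[2 1]);              % 2x2 symmetric matrix variable
lmiterm([1 1 1 X],A',1,'s')       % LMI #1: A'*X + X*A < 0
lmiterm([2 1 1 0],1)              % LMI #2, left side: I
lmiterm([-2 1 1 X],1,1)           % LMI #2, right side: X  =>  I < X
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);       % xfeas = feasible vector of decision variables
Xf = dec2mat(lmis,xfeas,X)        % corresponding value of the matrix variable X
```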
defcx

Purpose  Help specify c'x objectives for the mincx solver

Syntax
[V1,...,Vk] = defcx(lmisys,n,X1,...,Xk)

Description
defcx is useful to derive the c vector needed by mincx when the objective is expressed in terms of the matrix variables. Given the identifiers X1,...,Xk of the matrix variables involved in this objective, defcx returns the values V1,...,Vk of these variables when the nth decision variable is set to one and all others to zero.

See Also
mincx    Minimize a linear objective under LMI constraints
decinfo  Describe how the entries of a matrix variable X relate to the decision variables
dellmi

Purpose  Remove an LMI from a given system of LMIs

Syntax
newsys = dellmi(lmisys,n)

Description
dellmi deletes the nth LMI from the system of LMIs described in lmisys. The updated system is returned in newsys. The ranking n is relative to the order in which the LMIs were declared and corresponds to the identifier returned by newlmi. Since this ranking is not modified by deletions, it is safer to refer to the remaining LMIs by their identifiers. Finally, matrix variables that only appeared in the deleted LMI are removed from the problem.

Example
Suppose that the three LMIs

   A1' X1 + X1 A1 + Q1 < 0
   A2' X2 + X2 A2 + Q2 < 0
   A3' X3 + X3 A3 + Q3 < 0

have been declared in this order, labeled LMI1, LMI2, LMI3 with newlmi, and stored in lmisys. To delete the second LMI, type

   lmis = dellmi(lmisys,LMI2)

lmis now describes the system of LMIs

   A1' X1 + X1 A1 + Q1 < 0        (10-3)
   A3' X3 + X3 A3 + Q3 < 0        (10-4)

and the second variable X2 has been removed from the problem since it no longer appears in the system (10-3)-(10-4). To further delete (10-4), type

   lmis = dellmi(lmis,LMI3)

or equivalently

   lmis = dellmi(lmis,3)
Note that (10-4) has retained its original ranking after the first deletion.

See Also
newlmi   Attach an identifying tag to LMIs
lmiedit  Specify or display systems of LMIs as MATLAB expressions
lmiinfo  Interactively retrieve information about the variables and term content of LMIs
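The deletion sequence above can be sketched end to end as follows; the matrices A1..A3 and Q1..Q3 are illustrative data:

```matlab
A1 = -eye(2); A2 = -2*eye(2); A3 = -3*eye(2);
Q1 = eye(2);  Q2 = eye(2);    Q3 = eye(2);
setlmis([])
X1 = lmivar(1,[2 1]); X2 = lmivar(1,[2 1]); X3 = lmivar(1,[2 1]);
LMI1 = newlmi; lmiterm([LMI1 1 1 X1],A1',1,'s'); lmiterm([LMI1 1 1 0],Q1);
LMI2 = newlmi; lmiterm([LMI2 1 1 X2],A2',1,'s'); lmiterm([LMI2 1 1 0],Q2);
LMI3 = newlmi; lmiterm([LMI3 1 1 X3],A3',1,'s'); lmiterm([LMI3 1 1 0],Q3);
lmisys = getlmis;
lmis = dellmi(lmisys,LMI2);   % removes the second LMI (and X2 along with it)
```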
delmvar

Purpose  Delete one of the matrix variables of an LMI problem

Syntax
newsys = delmvar(lmisys,X)

Description
delmvar removes the matrix variable X with identifier X from the list of variables defined in lmisys. All terms involving X are automatically removed from the list of LMI terms. The description of the resulting system of LMIs is returned in newsys. The identifier X should be the second argument returned by lmivar when declaring X.

Example
Consider the LMI

   0 < [ A'YB + B'Y'A + Q    CX + D
         X'C' + D'           -(X + X') ]

involving two variables X and Y with identifiers X and Y. To delete the variable X, type

   lmisys = delmvar(lmisys,X)

Now lmisys describes the LMI

   0 < [ A'YB + B'Y'A + Q    D
         D'                  0 ]

with only one variable Y. Note that Y is still identified by the label Y.

See Also
lmivar   Specify the matrix variables in an LMI problem
setmvar  Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
lmiinfo  Interactively retrieve information about the variables and term content of LMIs
diag

Purpose  Diagonalize and extract diagonals of uncertain matrices and systems

Syntax
v = diag(x)

Description
If x is a vector of uncertain system models or matrices, diag(x) puts x on the main diagonal. If x is a matrix of uncertain system models or matrices, diag(x) is a vector of the main diagonal of x. diag(diag(x)) is a diagonal matrix of uncertain system models or matrices.

Example
The statement

   x = rss(3,4,1);
   xg = frd(x,logspace(-2,2,80));
   size(xg)
   FRD model with 4 output(s) and 1 input(s), at 80 frequency point(s).

   mxg = diag(xg);
   size(mxg)
   FRD model with 4 output(s) and 4 input(s), at 80 frequency point(s).

produces a diagonal system mxg of size 4-by-4. Given the multivariable system xxg, a vector of the diagonal elements of xxg is found using diag:

   xxg = [xg(1:2,1) xg(3:4,1)];
   m = diag(xxg);
   size(m)
   FRD model with 2 output(s) and 1 input(s), at 80 frequency point(s).

See Also
append  Group models by appending their inputs and outputs
dkitopt

Purpose  Create an options object for use with dksyn

Syntax
options = dkitopt
options = dkitopt('name1',value1,'name2',value2,...)

Description
options = dkitopt creates an options object with all the properties set to their default values.

options = dkitopt('name1',value1,'name2',value2,...) creates an options object called options with specific values assigned to certain properties. Any unspecified property is set to its default value. Property names are not case-sensitive. When entering property names, it is sufficient to type only the leading characters that identify the name uniquely.

dkitopt with no input or output arguments displays a complete list of option properties and their default values.

If the AutoIter property is set to 'off', the D-K iteration procedure is interactive. You are prompted to fit the D-scale data and provide input on the control design process.

This table lists the dkitopt object properties.

Object Property           Description
FrequencyVector           Frequency vector used for analysis. Default is an empty matrix ([]), which results in the frequency range and number of points chosen automatically.
InitialController         Controller used to initiate first iteration. Default is an empty SS object.
AutoIter                  Automated µ-synthesis mode. Default is 'on'.
DisplayWhileAutoIter      Displays iteration progress in AutoIter mode. Default is 'off'.
StartingIterationNumber   Starting iteration number. Default is 1.
Object Property             Description
NumberOfAutoIterations      Number of D-K iterations to perform. Default is 10.
AutoScalingOrder            Maximum state order for fitting D-scaling data. Default is 5.
AutoIterSmartTerminate      Automatic termination of iteration procedure based on progress of design iteration. Default is 'on'.
AutoIterSmartTerminateTol   Tolerance used by AutoIterSmartTerminate. Default is 0.005.
Default                     A 1-by-1 structure of dkitopt property default values.
Meaning                     A 1-by-1 structure text description of each property.

Example
This statement creates a dkitopt options object called opt with all default values.

   opt = dkitopt

   Property Object Values:
                FrequencyVector: []
              InitialController: [0x0 ss]
                       AutoIter: 'on'
           DisplayWhileAutoIter: 'off'
        StartingIterationNumber: 1
         NumberOfAutoIterations: 10
               AutoScalingOrder: 5
         AutoIterSmartTerminate: 'on'
      AutoIterSmartTerminateTol: 0.0050
                        Default: [1x1 struct]
                        Meaning: [1x1 struct]

The dksyn options object opt is updated with the following statements. The frequency vector is set to logspace(-2,3,80), the number of D-K iterations to 16, and the maximum state order of the fitted D-scale data to 9.
   opt.FrequencyVector = logspace(-2,3,80);
   opt.NumberOfAutoIterations = 16;
   opt.AutoScalingOrder = 9;
   opt

   Property Object Values:
                FrequencyVector: [1x80 double]
              InitialController: [0x0 ss]
                       AutoIter: 'on'
           DisplayWhileAutoIter: 'off'
        StartingIterationNumber: 1
         NumberOfAutoIterations: 16
               AutoScalingOrder: 9
         AutoIterSmartTerminate: 'on'
      AutoIterSmartTerminateTol: 0.0050
                        Default: [1x1 struct]
                        Meaning: [1x1 struct]

In this statement, the same properties are set with a single call to dkitopt.

   opt = dkitopt('FrequencyVector',logspace(-2,3,80),...
                 'NumberOfAutoIterations',16,'AutoScalingOrder',9)

   Property Object Values:
                FrequencyVector: [1x80 double]
              InitialController: [0x0 ss]
                       AutoIter: 'on'
           DisplayWhileAutoIter: 'off'
        StartingIterationNumber: 1
         NumberOfAutoIterations: 16
               AutoScalingOrder: 9
         AutoIterSmartTerminate: 'on'
      AutoIterSmartTerminateTol: 0.0050
                        Default: [1x1 struct]
                        Meaning: [1x1 struct]

Algorithm
The dksyn command stops iterating before the total number of automated iterations ('NumberOfAutoIterations') if 'AutoIterSmartTerminate' is set to 'on' and a stopping criterion is satisfied. The stopping criterion involves the µ value of the current (ith) iteration, µ(i), the µ values of the previous two iterations, µ(i-1) and µ(i-2), and the options property 'AutoIterSmartTerminateTol'. The D-K iteration procedure automatically terminates if the difference between each of
the three µ values is less than the relative tolerance AutoIterSmartTerminateTol×µ(i), or if the current µ value, µ(i), has increased relative to the µ value of the previous iteration, µ(i-1), by 20×AutoIterSmartTerminateTol.

See Also
dksyn       Synthesize a robust controller via D-K iteration
h2syn       Synthesize an H2 optimal controller
hinfsyn     Synthesize an H∞ optimal controller
mussv       Calculate bounds on the structured singular value (µ)
robopt      Create a robustperf/robuststab options object
robuststab  Calculate stability margins of uncertain systems
robustperf  Calculate performance margins of uncertain systems
wcgopt      Create a wcgain options object
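The termination test described in the Algorithm section can be sketched as follows. This illustrates the criterion only, not the dksyn implementation, and the µ values are hypothetical:

```matlab
mu  = [1.20 0.80 0.79 0.788];   % hypothetical mu values from successive iterations
tol = 0.005;                    % AutoIterSmartTerminateTol
i   = numel(mu);
threeMu = mu(i-2:i);            % mu(i-2), mu(i-1), mu(i)
flat  = (max(threeMu) - min(threeMu)) < tol*mu(i);  % three mu values nearly equal
worse = mu(i) > (1 + 20*tol)*mu(i-1);               % mu increased vs last iteration
stop  = flat || worse;          % terminate the D-K iteration early
```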
dksyn

Purpose  Synthesis of a robust controller via µ-synthesis D-K iteration

Syntax
k = dksyn(p,nmeas,ncont)
[k,clp,bnd] = dksyn(p,nmeas,ncont)
[k,clp,bnd] = dksyn(p,nmeas,ncont,opt)
[k,clp,bnd,dkinfo] = dksyn(p,nmeas,ncont,prevdkinfo,opt)
[k,clp,bnd,dkinfo] = dksyn(p)

Description
[k,clp,bnd] = dksyn(p,nmeas,ncont) synthesizes a robust controller k, via D-K iteration, for the uncertain open-loop plant model p. p is a uss object and has nmeas measurement outputs and ncont control inputs. It is assumed that the measurement outputs and control inputs correspond to the last nmeas outputs and ncont inputs of p. The uncertain system p is an open-loop interconnection containing known components including the nominal plant model, uncertain parameters (ucomplex) and unmodeled LTI dynamics (ultidyn), and performance and uncertainty weighting functions. You use weighting functions to include magnitude and frequency shaping information in the optimization. clp is the closed-loop interconnection and bnd is the robust performance bound for the uncertain closed-loop system clp. k, clp, and bnd are related as follows:

   clp = lft(p,k);
   bnd = robustperf(clp);

The objective of µ-synthesis is to minimize the structured singular value µ of the corresponding robust performance problem associated with the uncertain system p, which corresponds to bnd. The D-K iteration procedure involves a sequence of minimizations, first over the controller variable K (holding the D variable associated with the scaled µ upper bound fixed), and then over the D variable (holding the controller K variable fixed). The D-K iteration procedure is not guaranteed to converge to the minimum µ value, but often works well in practice. dksyn automates the D-K iteration procedure and the option object dkitopt allows you to customize its behavior.
The control objective is to synthesize a stabilizing controller k that minimizes the robust performance µ value. The D-K iteration procedure is an approximation to µ-synthesis control design.
[k,clp,bnd] = dksyn(p,nmeas,ncont,opt) specifies options described in dkitopt. opt is created via the command opt = dkitopt. See dkitopt for more details.

[k,clp,bnd,dkinfo] = dksyn(p,nmeas,ncont,prevdkinfo,opt) uses information from a previous dksyn iteration. prevdkinfo is a structure from a previous attempt at designing a robust controller using dksyn. prevdkinfo is used when the dksyn starting iteration is not 1 (opt.StartingIterationNumber ~= 1) to determine the correct D-scalings to initiate the iteration procedure.

dkinfo is an N-by-1 cell array where N is the total number of iterations performed. The i'th cell contains a structure with the following fields:

Field       Description
K           Controller at i'th iteration, an ss object.
Bnds        Robust performance bound on the closed-loop system (double).
DL          Left D-scale, an ss object.
DR          Right D-scale, an ss object.
MussvBnds   Upper and lower µ bounds, an frd object.
MussvInfo   Structure returned from mussv at each iteration.

k = dksyn(p) is a valid calling argument provided p is a uss object and has a two-input/two-output partitioning as defined by mktito.

Example
The following statements create a robust performance control design for an unstable, uncertain single-input/single-output plant model. The nominal plant model, G, is an unstable first order system, 1/(s-1).

   G = tf(1,
   [1 -1]);

The model itself is uncertain. At low frequency, below 2 rad/s, it can vary up to 25% from its nominal value. Around 2 rad/s the percentage variation starts to increase and reaches 400% at approximately 32 rad/s. The percentage model uncertainty is represented by the weight Wu, which corresponds to the frequency variation of the model uncertainty, and the uncertain LTI dynamics object InputUnc.

   Wu = 0.25*tf([1/2 1],[1/32 1]);
   InputUnc = ultidyn('InputUnc',[1 1]);

The uncertain plant model Gpert represents the model of the physical system to be controlled.

   Gpert = G*(1+InputUnc*Wu);

The goal is to synthesize a controller that stabilizes and achieves the closed-loop performance objectives for all possible plant models in Gpert. The control interconnection structure is shown in the following figure.

Figure: control interconnection structure. Plant model set Gpert: the input uncertainty weight Wu enters at the plant input, the controller K drives the plant G through control input u, the disturbance d enters at the output, and the error e is weighted by Wp.

The robust stability objective is to synthesize a stabilizing LTI controller for all the plant models parameterized by the uncertain plant model Gpert. The performance objective is defined as a weighted sensitivity minimization problem. The sensitivity function, S, is defined as S = 1/(1 + PK), where P is the plant model and K is the controller. A weighted sensitivity minimization problem selects a weight Wp, which corresponds to the inverse of the desired sensitivity function of the closed-loop system as a function of frequency. Hence the product of the sensitivity weight Wp and the actual closed-loop sensitivity function is less
than 1 across all frequencies. The sensitivity weight Wp has a gain of 100 at low frequency, begins to decrease at 0.006 rad/s, and reaches a minimum magnitude of 0.25 after 2.4 rad/s. The defined sensitivity weight Wp implies that the desired disturbance rejection should be at least 100:1 at DC, rise slowly between 0.006 and 2.4 rad/s, and allow the disturbance rejection to increase above the open-loop level, 0.25, at high frequency.

   Wp = tf([1/4 0.6],[1 0.006]);

For the case when the plant model is uncertain, the closed-loop performance objective is to achieve the desired sensitivity function for all plant models defined by the uncertain plant model Gpert. The performance objective for an uncertain system is a robust performance objective. A block diagram of this uncertain closed-loop system illustrating the performance objective (closed-loop transfer function from d→e) is shown in the following figure.

Figure: uncertain closed-loop system. The disturbance d enters at the output of the uncertain plant P, the controller K feeds the measurement y back to the plant input u, and the weighted error e is the performance output.

From the definition of the robust performance control objective, you can form the uncertain transfer matrix P, which includes the robustness and performance objectives, from [d, u] to [e, y] using the following commands.

   P = [Wp; 1]*[1 Gpert];

The robustness and performance weights are selected such that if the robust performance structured singular value, bnd, of the closed-loop uncertain system, clp, is less than 1, then the performance objectives have been achieved for all the plant models in the model set.

   [K,clp,bnd] = dksyn(P,1,1);
   bnd

   bnd =
       0.6860

The controller K achieves a robust performance µ value, bnd, of 0.686 at 0.569 rad/s. Therefore you have achieved the robust performance objectives for the given problem.

You can use the robustperf command to analyze the closed-loop robust performance of clp.

   [rpnorm,wcu,wcf,report] = robustperf(clp);
   disp(report{1})

   Uncertain system, clp, achieves robust performance.
   A model uncertainty exists of size 146% that results in a peak gain
   performance of 0.685.

The analysis shows that clp can tolerate 146% of the model uncertainty and achieve the performance and stability objectives.

Algorithm
The D-K iteration procedure is an approximation to µ-synthesis control design. It involves a sequence of minimizations, first over the controller variable K (holding the D variable associated with the scaled µ upper bound fixed), and then over the D variable (holding the controller K variable fixed). The D-K iteration procedure is not guaranteed to converge to the minimum µ value, but often works well in practice. dksyn automates the D-K iteration procedure. Internally, the algorithm works with the generalized, scaled plant model P, which is extracted from a uss object using the command lftdata.

The following is a list of what occurs during a single, complete step of the D-K iteration.

7 (In the 1st iteration, this step is skipped.) The µ calculation (from the previous step) provides frequency-dependent scaling matrices, Df. The
fitting procedure fits these scalings with rational, stable transfer function matrices. After fitting, plots of

   σ̄(Df(jω) FL(P,K)(jω) Df(jω)^-1)   and   σ̄(D̂f(jω) FL(P,K)(jω) D̂f(jω)^-1)

are shown for comparison. The rational D̂ is absorbed into the open-loop interconnection for the next controller synthesis.

8 (The 1st iteration begins at this point.) A controller is designed using H∞ synthesis on the scaled open-loop interconnection. Using either the previous frequency-dependent D's or the just-fit rational D̂, an estimate of an appropriate value for the H∞ norm is made, with either a frequency sweep (using the frequency-dependent D's) or a state-space calculation (with the rational D's). This is simply a conservative value of the scaled closed-loop H∞ norm, using the most recent controller. If you set the DisplayWhileAutoIter field in dkitopt to 'on', the following information is displayed:

  a The progress of the γ-iteration is displayed.
  b The singular values of the closed-loop frequency response are plotted.
  c You are given the option to change the frequency range. If you change it, all relevant frequency responses are automatically recomputed.
  d You are given the option to rerun the H∞ synthesis with a set of modified parameters if you set the AutoIter field in dkitopt to 'off'. This is convenient if, for instance, the bisection tolerance was too large, or if the maximum gamma value was too small.

9 The structured singular value of the closed-loop system is calculated and plotted.

10 An iteration summary is displayed, showing all of the controller orders, as well as the peak value of µ of the closed-loop frequency responses.

11 The choice of stopping, or performing another iteration, is given.
Subsequent iterations proceed along the same lines without the need to reenter the iteration number. A summary at the end of each iteration is updated to reflect data from all previous iterations. This often provides valuable information about the progress of the robust controller synthesis procedure.

Interactive Fitting of D-Scalings
Setting the AutoIter field in dkitopt to 'off' requires you to interactively fit the D-scales in each iteration. During step 2 of the D–K iteration procedure, you are prompted to enter your choice of options for fitting the D-scaling data. After pressing return, the following is a list of your options.

Enter Choice (return for list):
Choices:
nd       Move to Next D-Scaling
nb       Move to Next D-Block
i        Increment Fit Order
d        Decrement Fit Order
apf      Auto-PreFit
mx 3     Change MaxOrder to 3
at 1.01  Change AutoPreFit tol to 1.01
0        Fit with zeroth order
2        Fit with second order
n        Fit with n'th order
e        Exit with Current Fittings
s        See Status

• nd and nb allow you to move from one set of D-scale data to another: nd moves to the next scaling, whereas nb moves to the next scaling block. For scalar D-scalings (perturbations of the form δI), these are identical operations, but for problems with full D-scalings, they are different. In the (1,2) subplot window, the title displays the D-scaling block number, the row/column of the scaling that is currently being fit, and the order of the current fit (with d for data, when no fit exists).
• The order of the current fit can be incremented or decremented (by 1) using i and d.
• apf automatically fits each set of D-scaling data. The mx variable allows you to change the maximum D-scaling state order used in the automatic prefitting routine; mx must be a positive, nonzero integer. The default maximum state order of an individual D-scaling is 5. The at variable allows you to define how closely the rational, scaled µ upper bound approximates the actual µ upper bound, in a norm sense. Allowable values for at are greater than 1; setting at to 1 would require an exact fit of the D-scale data, and is not allowed. This setting plays a role (mildly unpredictable, unfortunately) in determining where in the (D,K) space the D–K iteration converges.
• Entering a positive integer at the prompt fits the current D-scale data with a rational transfer function of that state order.
• The variable s displays a status of the current data and fits.
• e exits the D-scale fitting and continues the D–K iteration.

Limitations
There are two shortcomings of the D–K iteration control design procedure:
• Calculation of the structured singular value µ∆(⋅) is intractable in general, so it has been approximated by its upper bound. This is not a serious problem since the value of µ and its upper bound are often close.
• The D–K iteration is not guaranteed to converge to a global, or even local, minimum [SteD]. This is a serious problem, and represents the biggest limitation of the design procedure.

In spite of these drawbacks, the D–K iteration control design technique appears to work well on many engineering problems. It has been applied to a number of real-world applications with success. These applications include vibration suppression for flexible structures, flight control, chemical process control problems, and acoustic reverberation suppression in enclosures.

Reference
• Balas, G.J., and J.C. Doyle, "Robust control of flexible modes in the controller crossover region," AIAA Journal of Guidance, Dynamics and Control, Vol. 17, No. 2, March–April 1994, pp. 370–377.
• Balas, G.J., A.K. Packard, and J.T. Harduvel, "Application of µ-synthesis techniques to momentum management and attitude control of the space station," AIAA Guidance, Navigation and Control Conference, New Orleans, August 1991.
• Doyle, J., K. Lenz, and A. Packard, "Design examples using µ-synthesis: Space shuttle lateral axis FCS during reentry," NATO ASI Series, Modelling, Robustness, and Sensitivity Reduction in Control Systems, Vol. 34, Springer-Verlag, 1987.
• Packard, A., and J. Doyle, "Linear, multivariable robust control with a µ perspective," ASME Journal of Dynamic Systems, Measurement and Control, 50th Anniversary Issue, Vol. 115, No. 2b, June 1993, pp. 310–319.
• [SteD] Stein, G., and J. Doyle, "Beyond singular values and loopshapes," AIAA Journal of Guidance and Control, Vol. 14, No. 1, January 1991, pp. 5–16.

See Also
dkitopt      Create a dksyn options object
h2syn        Synthesize an H2 optimal controller
hinfsyn      Synthesize an H∞ optimal controller
loopmargin   Comprehensive analysis of feedback loop
mktito       Create a two-input/two-output LTI partition
mussv        Calculate bounds on the structured singular value (µ)
robuststab   Calculate stability margins of uncertain systems
robustperf   Calculate performance margins of uncertain systems
wcgain       Calculate worst-case gain of a system
wcsens       Calculate worst-case sensitivities for feedback loop
wcmargin     Calculate worst-case margins for feedback loop
dmplot

Purpose
Interpret disk gain and phase margins

Syntax
dmplot
dmplot(diskgm)
[dgm,dpm] = dmplot

Description
dmplot plots disk gain margin (dgm) and disk phase margin (dpm). dmplot(diskgm) plots the maximum allowable phase variation as a function of the actual gain variation for a given disk gain margin diskgm (the maximum gain variation being diskgm). The closed-loop system is guaranteed to remain stable for all combined gain/phase variations inside the plotted ellipse. [dgm,dpm] = dmplot returns the data used to plot the gain/phase variation ellipse.

Example
When you call dmplot without an argument, the resulting plot shows a comparison of a disk margin analysis with the classical notions of gain and phase margins. The Nyquist plot is of the loop transfer function

L(s) = (s/30 + 1) / ((s + 1)(s^2 + 1.6s + 16))

• The Nyquist plot of L corresponds to the blue line.
• The unit disk corresponds to the dotted red line.
• GM and PM indicate the location of the classical gain and phase margins for the system L.
• DGM and DPM correspond to the disk gain and phase margins, respectively. The disk margins provide a lower bound on classical gain and phase margins.

Both margins are derived from the largest disk that
• contains the critical point (-1,0)
• does not intersect the Nyquist plot of the open-loop response L

diskgm is the radius of this disk and a lower bound on the classical gain margin.
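The example loop can be rebuilt directly for cross-checking; this is a sketch using the coefficients quoted above, with allmargin (a standard Control System Toolbox command) used only to confirm the classical margins.

```matlab
% Sketch: the example loop L(s) = (s/30 + 1)/((s+1)(s^2 + 1.6s + 16))
L = tf([1/30 1], conv([1 1],[1 1.6 16]));
dmplot          % built-in comparison plot described above
allmargin(L)    % classical gain/phase margins of L, for comparison with GM, PM
```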
Figure: Disk gain margin (DGM) and disk phase margin (DPM) in the Nyquist plot (Real Axis vs. Imaginary Axis, showing the unit disk, the disk margin, the critical point, and the DGM, GM, DPM and PM locations)

• The disk margin circle, represented by the dashed black line, corresponds to the largest disk centered at -(DGM + 1/DGM)/2 that just touches the loop transfer function L. This location is indicated by the red dot.

In the gain/phase variation plot, the x-axis corresponds to the gain variation, in dB, and the y-axis corresponds to the allowable phase variation, in degrees. For a disk gain margin corresponding to 3 dB (1.414), the closed-loop system is stable for all phase and gain variations inside the blue ellipse. For example, the closed-loop system can simultaneously tolerate +/- 2 dB gain variation and +/- 14 deg phase variations.
dmplot(1.414)

Figure: Maximum allowable phase variation (deg) versus gain variation (dB) for a 1.414 disk gain margin; stability is guaranteed for all gain/phase variations inside the ellipse

Reference
• Barrett, M.F., Conservatism with robustness tests for linear feedback control systems, Ph.D. Thesis, Control Science and Dynamical Systems, University of Minnesota, 1980.
• Blight, J.D., R.L. Dailey, and D. Gangsass, "Practical control law design for aircraft using multivariable techniques," International Journal of Control, Vol. 59, No. 1, 1994, pp. 93–137.
• Bates, D., and I. Postlethwaite, Robust Multivariable Control of Aerospace Systems, Delft University Press, Delft, The Netherlands, ISBN: 90-407-2317-6, 2002.

See Also
loopmargin   Comprehensive stability analysis of feedback loops
wcmargin     Calculate worst-case margins for feedback loop
drawmag

Purpose
Mouse-based tool for sketching and fitting magnitude data

Syntax
[sysout,pts] = drawmag(data)
[sysout,pts] = drawmag(data,init_pts)

Description
drawmag interactively uses the mouse in the plot window to create pts (an frd object) and sysout (a stable, minimum-phase ss object), which approximately fits the frequency response (magnitude) in pts.

Input arguments:
data       Either a frequency response object that is plotted as a reference, or a constant matrix of the form [xmin xmax ymin ymax] specifying the plot window on the data
init_pts   Optional frd object giving an initial set of points

Output arguments:
sysout     Stable, minimum-phase ss object which approximately fits, in magnitude, the pts data
pts        Frequency response of points

While drawmag is running, all interaction with the program is through the mouse and/or the keyboard. The program recognizes several commands:
• Clicking the mouse button adds a point at the crosshairs. If the crosshairs are outside the plotting window, the points are plotted when the fitting, windowing, or replotting modes are invoked. Typing a is the same as clicking the mouse button.
• Typing r removes the point with frequency nearest that of the crosshairs.
• Typing any integer between 0 and 9 fits the existing points with a transfer function of that order. The fitting routine approximately minimizes the maximum error in a log sense. The new fit is displayed along with the points and the most recent previous fit, if there is one.
• Typing w uses the crosshair location as the initial point in creating a window. Moving the crosshairs and clicking the mouse or pressing any key then gives
a second point at the new crosshair location. These two points define a new window on the data, which is immediately replotted. Windowing may be called repeatedly, and is useful in fine-tuning parts of the data.
• Typing k invokes the keyboard using the keyboard command. Caution should be exercised when using this option, as it can wreak havoc on the program if variables are changed.
• Typing p simply replots the data using a window that covers all the current data points as well as whatever was specified in data. It is typically used after windowing to view all the data.

See Also
ginput   Graphically input information from mouse
loglog   Plot frequency responses on log-log scale
evallmi

Purpose
Given a particular instance of the decision variables, evaluate all variable terms in the system of LMIs

Syntax
evalsys = evallmi(lmisys,decvars)

Description
evallmi evaluates all LMI constraints for a particular instance decvars of the vector of decision variables. Recall that decvars fully determines the values of the matrix variables X1, ..., XK. The "evaluation" consists of replacing all terms involving X1, ..., XK by their matrix value. The output evalsys is an LMI system containing only constant terms. The matrix values of the left- and right-hand sides of each LMI are then returned by showlmi.

Observation
evallmi is meant to operate on the output of the LMI solvers. The vector returned by these solvers can be fed directly to evallmi to evaluate all variable terms, which makes evallmi useful for validation of the LMI solvers' output. To evaluate all LMIs for particular instances of the matrix variables X1, ..., XK, first form the corresponding decision vector x with mat2dec and then call evallmi with x as input.

Example
Consider the feasibility problem of finding X > 0 such that

A'XA - X + I < 0

where A = [0.5 -0.2; 0.1 -0.7]. This LMI system is defined by:

setlmis([])
X = lmivar(1,[2 1])        % full symmetric X
lmiterm([1 1 1 X],A',A)    % LMI #1: A'*X*A
lmiterm([1 1 1 X],-1,1)    % LMI #1: -X
lmiterm([1 1 1 0],1)       % LMI #1: I
lmiterm([-2 1 1 X],1,1)    % LMI #2: X
lmis = getlmis

To compute a solution xfeas, call feasp by

[tmin,xfeas] = feasp(lmis)
The result is

tmin =
   -4.7117e+00
xfeas' =
    1.1029e+02    1.1519e+01    1.1942e+02

The LMI constraints are therefore feasible since tmin < 0. The solution X corresponding to the feasible decision vector xfeas is given by

X = dec2mat(lmis,xfeas,X)

To check that xfeas is indeed feasible, evaluate all LMI constraints by typing

evals = evallmi(lmis,xfeas)

The left- and right-hand sides of the first and second LMIs are then given by

[lhs1,rhs1] = showlmi(evals,1)
[lhs2,rhs2] = showlmi(evals,2)

and the test

eig(lhs1-rhs1)
ans =
   -8.2229e+01
   -5.8163e+01

confirms that the first LMI constraint is satisfied by xfeas.

See Also
showlmi   Return the left- and right-hand sides of an LMI after evaluation of all variable terms
setmvar   Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
dec2mat   Given values of the decision variables, derive the corresponding values of the matrix variables
mat2dec   Return the vector of decision variables corresponding to particular values of the matrix variables
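The mat2dec route mentioned under Observation can be sketched as follows; the candidate value Xval is an assumption for illustration, not part of the original example.

```matlab
% Sketch: evaluate the LMIs at a hand-picked matrix value (lmis from the example)
Xval = [2 0; 0 2];            % hypothetical candidate for the variable X
x = mat2dec(lmis,Xval);       % decision vector corresponding to Xval
evals = evallmi(lmis,x);
[lhs1,rhs1] = showlmi(evals,1);
max(eig(lhs1-rhs1))           % negative if LMI #1 holds at Xval
```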
feasp

Purpose
Find a solution to a given system of LMIs

Syntax
[tmin,xfeas] = feasp(lmisys,options,target)

Description
The function feasp computes a solution xfeas (if any) of the system of LMIs described by lmisys. The vector xfeas is a particular value of the decision variables for which all LMIs are satisfied. Note that xfeas is a solution in terms of the decision variables and not in terms of the matrix variables of the problem. Use dec2mat to derive feasible values of the matrix variables from xfeas.

Given the LMI system

N' L(x) N ≤ M' R(x) M,

xfeas is computed by solving the auxiliary convex program:

Minimize t subject to  N' L(x) N - M' R(x) M ≤ tI.    (10-5)

The global minimum of this program is the scalar value tmin returned as first output argument by feasp. The LMI constraints are feasible if tmin ≤ 0 and strictly feasible if tmin < 0. If the problem is feasible but not strictly feasible, tmin is positive and very small. Some post-analysis may then be required to decide whether xfeas is close enough to feasible.

The optional argument target sets a target value for tmin. The optimization code terminates as soon as a value of t below this target is reached. The default value is target = 0.

Control Parameters
The optional argument options gives access to certain control parameters for the optimization algorithm. This five-entry vector is organized as follows:
• options(1) is not used.
• options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default).
• options(3) resets the feasibility radius. Setting options(3) to a value R > 0 further constrains the decision vector x = (x1, ..., xN) to lie within the ball
x1^2 + x2^2 + ... + xN^2 < R^2

In other words, the Euclidean norm of xfeas should not exceed R. The feasibility radius is a simple means of controlling the magnitude of solutions. The default value is R = 10^9. Setting options(3) to a negative value activates the "flexible bound" mode: the feasibility radius is initially set to 10^8, and increased if necessary during the course of optimization.
• options(4) helps speed up termination. When set to an integer value J > 0, the code terminates if t did not decrease by more than one percent in relative terms during the last J iterations. The default value is 10. This parameter trades off speed vs. accuracy: if set to a small value (< 10), the code terminates quickly but without guarantee of accuracy; on the contrary, a large value results in natural convergence at the expense of a possibly large number of iterations.
• options(5) = 1 turns off the trace of execution of the optimization procedure. Resetting options(5) to zero (default value) turns it back on.

Setting options(i) to zero is equivalent to setting the corresponding control parameter to its default value. Consequently, there is no need to redefine the entire vector when changing just one control parameter. To set the maximum number of iterations to 10, for instance, it suffices to type

options = zeros(1,5)   % default value for all parameters
options(2) = 10

Upon termination, feasp displays the f-radius saturation, that is, the norm of the solution as a percentage of the feasibility radius R.

Memory Problems
When the least-squares problem solved at each iteration becomes ill-conditioned, the feasp solver switches from Cholesky-based to QR-based linear algebra (see "Memory Problems" on page 10-191 for details). Since the QR mode typically requires much more memory, MATLAB may run out of memory and display the message

??? Error using ==> feaslv
Out of memory. Type HELP MEMORY for your options.
7 – 2. first enter the LMIs (98)–(910) by: setlmis([]) p = lmivar(1.0 ⎠ This problem arises when studying the quadratic stability of the polytope of matrices Co{A1. if no additional swap space is available. A 3 = ⎜ – 1. set options(4) = 1.4 0. A 2 = ⎜ – 0.xfeas] = feasp(lmis) 1073 . A2.1.1.1. A3}.1.5 ⎟ . To assess feasibility with feasp.1) % LMI #4: lmiterm([4 1 1 0].'s') % LMI lmiterm([2 1 1 p].1) % LMI #4: I lmis = getlmis #1 #2 #3 P T T T (106) (107) (108) Then call feasp to find a feasible decision vector: [tmin.3 – 2.[2 1]) lmiterm([1 1 1 p].'s') % LMI lmiterm([3 1 1 p]. This will prevent switching to QR and feasp will terminate when Cholesky fails due to numerical instabilities.8 1.feasp You should then ask your system manager to increase your swap space or.9 ⎟ ⎝ 1 –3 ⎠ ⎝ 1.7 ⎠ ⎝ 0.a2.a1.'s') % LMI lmiterm([4 1 1 p].a3. Example Consider the problem of finding P > I such that A 1 P + PA 1 < 0 A 2 P + PA 2 < 0 A 3 P + PA 3 < 0 with data ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ A 1 = ⎜ – 1 2 ⎟ .
This returns tmin = -3.1363. Hence (10-6)–(10-8) is feasible and the dynamical system ẋ = A(t)x is quadratically stable for A(t) ∈ Co{A1, A2, A3}. To obtain a Lyapunov matrix P proving the quadratic stability, type

P = dec2mat(lmis,xfeas,p)

This returns

P = [270.8  126.4
     126.4  155.1]

It is possible to add further constraints on this feasibility problem. For instance, you can bound the Frobenius norm of P by 10 while asking tmin to be less than or equal to -1. This is done by

[tmin,xfeas] = feasp(lmis,[0,0,10,0,0],-1)

The third entry 10 of options sets the feasibility radius to 10, while the third argument -1 sets the target value for tmin. This yields tmin = -1.1745 and a matrix P with largest eigenvalue λmax(P) = 9.6912.

Reference
The feasibility solver feasp is based on Nesterov and Nemirovski's Projective Method described in:
• Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994.
• Nemirovski, A., and P. Gahinet, "The Projective Method for Solving Linear Matrix Inequalities," Proc. Amer. Contr. Conf., Baltimore, Maryland, 1994, pp. 840–844.
The optimization is performed by the C-MEX file feaslv.mex.

See Also
mincx     Minimize a linear objective under LMI constraints
gevp      Generalized eigenvalue minimization under LMI constraints
dec2mat   Given values of the decision variables, derive the corresponding values of the matrix variables
fitfrd

Purpose
Fit D-scaling frequency response data with a state-space model

Syntax
B = fitfrd(A,N)
B = fitfrd(A,N,RD)
B = fitfrd(A,N,RD,WT)

Description
B = fitfrd(A,N) is a state-space object with state dimension N, where A is an frd object and N is a nonnegative integer. The frequency response of B closely matches the D-scale frequency response data in A. A must have either 1 row or 1 column, though it need not be 1-by-1. B will be the same size as A.

B = fitfrd(A,N,RD) forces the relative degree of B to be RD. RD must be a nonnegative integer, and its default value is 0. If A is a row (or column), then RD can be a vector of the same size as well, specifying the relative degree of each entry of B. If RD is a scalar, it specifies the relative degree for all entries of B. The default for RD can also be selected by setting RD to an empty matrix.

B = fitfrd(A,N,RD,WT) uses the magnitude of WT to weight the optimization fit criteria. WT may be a double, ss or frd. If WT is a scalar, it is used to weight all entries of the error criteria (A-B). If WT is a vector, it must be the same size as A, and each individual entry of WT acts as a weighting function on the corresponding entry of (A-B).

Example
You can use the fitfrd command to fit D-scale data. For example, create D-scale frequency response data from a fifth-order system:

sys = tf([1 2 2],[1 2.5 1.5]);
sys = sys*tf([1 3.75 3.5],[1 2.5 13]);
sys = sys*tf(1,[1 0.1]);
omeg = logspace(-1,1);
sysg = frd(sys,omeg);
bode(sysg,'r');

Figure: Original data from 5th order system

You can fit the frequency response D-scale data with a first-order system:

b1 = fitfrd(sysg,1);
b1g = frd(b1,omeg);

Similarly, you can fit the D-scale data with a third-order system:

b3 = fitfrd(sysg,3);
b3g = frd(b3,omeg);

Compare the original D-scale data, sysg, with the frequency responses of the first- and third-order models calculated by fitfrd:
bode(sysg,'r',b1g,'b-.',b3g,'k:')

Figure: First- and third-order fits to the 5th order system

Limitations
Numerical conditioning problems arise if the state order of the fit, N, is selected to be higher than required by the dynamics of A.

See Also
fitmagfrd   Fit magnitude data with stable LTI model
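As a further sketch of the WT weighting argument described above, a frequency-dependent weight can be used to emphasize accuracy in a chosen band. The weight here is an assumption, not part of the original example, and sysg is the D-scale data built earlier.

```matlab
% Sketch: emphasize low-frequency accuracy when fitting (assumes sysg exists)
WT = tf(1,[1 1]);            % weight is large below 1 rad/s, rolls off above
b2 = fitfrd(sysg,2,[],WT);   % 2nd-order fit, default relative degree
bode(sysg,'r',frd(b2,sysg.Frequency),'b-.')
```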
fitmagfrd

Purpose
Fit frequency response magnitude data with a stable, minimum-phase state-space model

Syntax
B = fitmagfrd(A,N)
B = fitmagfrd(A,N,RD)
B = fitmagfrd(A,N,RD,WT)
B = fitmagfrd(A,N,RD,WT,C)

Description
B = fitmagfrd(A,N) is a stable, minimum-phase ss object, with state dimension N, whose frequency response magnitude closely matches the magnitude data in A. A is a 1-by-1 frd object, and N is a nonnegative integer.

B = fitmagfrd(A,N,RD) forces the relative degree of B to be RD. RD must be a nonnegative integer whose default value is 0. The default for RD can also be specified by setting RD to an empty matrix.

B = fitmagfrd(A,N,RD,WT) uses the magnitude of WT to weight the optimization fit criteria. WT may be a double, ss or frd. If WT is a scalar, it is used to weight all entries of the error criteria (A-B). If WT is a vector, it must be the same size as A, and each individual entry of WT acts as a weighting function on the corresponding entry of (A-B). The default value for WT is 1, and can also be specified by setting WT to an empty matrix.

B = fitmagfrd(A,N,RD,WT,C) enforces additional magnitude constraints on B, relative to A. C should be a scalar, double, or frd, with C.Frequency equal to A.Frequency. Where w denotes the frequency:
• if C(w) == -1, then enforce abs(B(w)) <= abs(A(w))
• if C(w) == 1, then enforce abs(B(w)) >= abs(A(w))
• if C(w) == 0, then no additional constraint

Example
You can use the fitmagfrd command to fit frequency response magnitude data. Create magnitude data from a fifth-order system:

sys = tf([1 2 2],[1 2.5 1.5]);
sys = sys*tf([1 3.75 3.5],[1 2.5 13]);
sys = sys*tf(1,[1 0.1]);
omeg = logspace(-1,1);
sysg = abs(frd(sys,omeg));
bodemag(sysg,'r');

Figure: Original data from 5th order system

Fit the magnitude data with a minimum-phase, stable third-order system:

ord = 3;
b1 = fitmagfrd(sysg,ord);
b1g = frd(b1,omeg);
'k:').[]. b2g = frd(b2.[].ord.fitmagfrd bodemag(sysg. b3 = fitmagfrd(sysg. 10 Third order fit to 5th order system 5 0 Magnitude (dB) −5 −10 −15 5th order system 3rd order fit −20 −1 10 10 Frequency (rad/sec) 0 10 1 Fit the magnitude data with a third order system constrained to lie below and above the given data.ord. b2 = fitmagfrd(sysg.b1g. 1080 .[].1).'r'.[].1).omeg). b3g = frd(b3.omeg).
bodemag(sysg,'r',b1g,'k:',b2g,'b-.',b3g,'m')

Figure: Third-order fits to 5th order system (unconstrained fit, bounded-below fit, bounded-above fit)

Algorithm
fitmagfrd uses a version of log-Chebychev magnitude design, solving

min f  subject to (at every frequency point in A):
|d|^2/(1 + f/WT) < |n|^2/A^2 < |d|^2*(1 + f/WT)

plus the additional constraints imposed with C. Here n and d denote the numerator and denominator, respectively, and B = n/d; n and d have orders (N-RD) and N, respectively. The problem is solved using bisection on f and linear programming for fixed f. An alternate, approximate method, which cannot enforce the constraints defined by C, is

B = fitfrd(genphase(A),N,RD,WT)

Limitations
The input FRD object must be either a scalar 1-by-1 object, or a row or column vector.

Reference
Oppenheim, A.V., and R.W. Schaffer, Digital Signal Processing, Prentice Hall, New Jersey, 1975, p. 513.
See Also
fitfrd   Fit frequency response data with state-space model
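The alternate approximate route mentioned in the Algorithm section can be sketched end-to-end; it assumes the magnitude data sysg from the example above.

```matlab
% Sketch: magnitude fit via genphase + fitfrd (C-constraints not available here)
Balt = fitfrd(genphase(sysg),3);   % 3rd-order fit to the min-phase completion
bodemag(sysg,'r',frd(Balt,sysg.Frequency),'b-.')
```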
gapmetric

Purpose
Compute upper bounds on the gap and nugap distances (metrics) between two systems

Syntax
[gap,nugap] = gapmetric(p0,p1)
[gap,nugap] = gapmetric(p0,p1,tol)

Description
[gap,nugap] = gapmetric(p0,p1) calculates upper bounds on the gap and nugap metrics between systems p0 and p1. The gap and nugap values lie between 0 and 1. A gap or nugap of 0 implies that p0 equals p1, and a value of 1 implies that the plants are far apart. A small value (relative to 1) implies that any controller that stabilizes p0 will likely stabilize p1, and moreover, that the closed-loop gains of the two closed-loop systems will be similar. The input and output dimensions of p0 and p1 must be the same.

[gap,nugap] = gapmetric(p0,p1,tol) specifies a relative accuracy for calculating the gap metric and nugap metric. The default value for tol is 0.001. The computed answers are guaranteed to satisfy

gap - tol < gapexact(p0,p1) <= gap

Example
Consider two plant models. One plant is unstable and first order, with transfer function 1/(s-0.001); the other plant is stable and first order, with transfer function 1/(s+0.001).

p1 = tf(1,[1 -0.001]);
p2 = tf(1,[1 0.001]);
[g,ng] = gapmetric(p1,p2)

g =
    0.0029
ng =
    0.0020

Despite the fact that one plant is unstable and the other is stable, these plants are close in the gap and nugap metrics. Intuitively this is obvious since, for instance, the feedback controller K = 1 stabilizes both plants and renders the closed-loop systems nearly identical.

K = 1;
H1 = loopsens(p1,K);
H2 = loopsens(p2,K);
subplot(2,2,1)
bode(H1.Si,'-',H2.Si,'--')
subplot(2,2,2)
bode(H1.Ti,'-',H2.Ti,'--')
subplot(2,2,3)
bode(H1.CSo,'-',H2.CSo,'--')
subplot(2,2,4)
bode(H1.PSi,'-',H2.PSi,'--')

Figure: Bode plots of the four closed-loop transfer functions for p1 and p2

Next, consider two stable plant models that differ by a first-order system. One plant is the transfer function 50/(s+50), and the other plant is the transfer function 50/(s+50) * 8/(s+8).

p3 = tf([50],[1 50]);
p4 = tf([8],[1 8])*p3;
[g,ng] = gapmetric(p3,p4)

g =
    0.6156
ng =
    0.6147

Although the two systems have the same unity gain at low frequency and similar high-frequency dynamics, the plants are modestly far apart in the gap and nugap metrics.
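The robustness interpretation of these numbers can be sketched with ncfmargin, which computes the stability margin b(G,K) appearing in the Algorithm section. This is a sketch only: ncfmargin's feedback sign convention must be checked against the formula, and the bound is informative only when positive.

```matlab
% Sketch: guaranteed margin degradation when p3 is replaced by p4
[g,ng] = gapmetric(p3,p4);
K = 1;                              % a stabilizing controller for p3
b3 = ncfmargin(p3,K);               % stability margin b(p3,K)
bLower = sin(asin(b3) - asin(ng));  % lower bound on b(p4,K) with the same K
```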
Algorithm
gap and nugap compute the gap and ν-gap metrics between two LTI objects. Both quantities give a numerical value δ(p0,p1) between 0 and 1 for the distance between a nominal system p0 (G0) and a perturbed system p1 (G1). The gap metric was introduced into the control literature by Zames and El-Sakkary 1980, and exploited by Georgiou and Smith 1990. The ν-gap metric was derived by Vinnicombe 1993. For both of these metrics, the following robust performance result holds from Qiu and Davison 1992 and Vinnicombe 1993:

arcsin b(G1,K1) >= arcsin b(G0,K0) - arcsin δ(G0,G1) - arcsin δ(K0,K1)

where

b(G,K) = || [I; K] (I - GK)^-1 [I  G] ||∞^-1

The interpretation of this result is that if a nominal plant G0 is stabilized by controller K0 with "stability margin" b(G0,K0), then the stability margin when G0 is perturbed to G1 and K0 is perturbed to K1 is degraded by no more than the above formula. Note that 1/b(G,K) is also the signal gain from disturbances on the plant input and output to the input and output of the controller. The ν-gap is always less than or equal to the gap, so its predictions using the above robustness result are tighter.

To make use of the gap metrics in robust design, weighting functions need to be introduced. In the above robustness result, G needs to be replaced by W2*G*W1 and K by W1^-1*K*W2^-1 (similarly for G0, G1, K0 and K1). This makes the weighting functions compatible with the weighting structure in the H∞ loop shaping control design procedure (see loopsyn and ncfsyn for more details).

The computation of the gap amounts to solving 2-block H∞ problems (Georgiou, Smith 1988). The particular method used here for solving the H∞ problems is based on Green et al., 1990. The computation of the nugap uses the method of Vinnicombe, 1993.

Reference
Georgiou, T.T., "On the computation of the gap metric," Systems Control Letters, vol. 11, 1988, pp. 253–257.
Georgiou, T.T., and M. Smith, "Optimal robustness in the gap metric," IEEE Transactions on Automatic Control, vol. 35, 1990, pp. 673–686.

Green, M., K. Glover, D. Limebeer, and J. Doyle, "A J-spectral factorization approach to H∞ control," SIAM J. of Control and Opt., 28(6), 1990, pp. 1350–1371.

Qiu, L., and E.J. Davison, "Feedback stability under simultaneous gap metric uncertainties in plant and controller," Systems Control Letters, vol. 18-1, 1992, pp. 9–22.

Vinnicombe, G., "Measuring Robustness of Feedback Systems," PhD dissertation, Department of Engineering, University of Cambridge, 1993.

Zames, G., and A. El-Sakkary, "Unstable systems and feedback: The gap metric," Proceedings of the Allerton Conference, Oct. 1980, pp. 380–385.

See Also
loopmargin   Comprehensive analysis of feedback loop
loopsyn      H∞ loop shaping controller synthesis
ncfsyn       H∞ normalized coprime factor controller synthesis
robuststab   Calculate stability margins of uncertain systems
wcsens       Calculate worst-case sensitivities for feedback loop
wcmargin     Calculate worst-case margins for feedback loop
genphase

Purpose
Fit single-input/single-output magnitude data with a real, rational, minimum-phase transfer function

Syntax
resp = genphase(d)

Description
genphase uses the complex-cepstrum algorithm to generate a complex frequency response, resp, whose magnitude is equal to the real, positive response d, but whose phase corresponds to a stable, minimum-phase function. The input, d, and output, resp, are frd objects.

Reference
Oppenheim, A.V., and R.W. Schaffer, Digital Signal Processing, Prentice Hall, New Jersey, 1975, p. 513.

See Also
fitmagfrd   Fit magnitude data with stable LTI model
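A minimal usage sketch follows; the magnitude data built here is an assumption for illustration only.

```matlab
% Sketch: give magnitude-only data a minimum-phase phase
omeg = logspace(-1,1);
d = abs(frd(tf(1,[1 1]),omeg));   % magnitude-only frequency response data
resp = genphase(d);               % same magnitude, minimum-phase phase
bode(resp)
```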
getlmis

Purpose
Get the internal description of an LMI system

Syntax
lmisys = getlmis

Description
After completing the description of a given LMI system with lmivar and lmiterm, its internal representation lmisys is obtained with the command lmisys = getlmis. This MATLAB representation of the LMI system can be forwarded to the LMI solvers or any other LMI-Lab function for subsequent processing.

See Also
setlmis   Initialize the description of an LMI system
lmivar    Specify the matrix variables in an LMI problem
lmiterm   Specify the term content of LMIs
newlmi    Attach an identifying tag to LMIs
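The workflow around getlmis can be sketched as follows; the variable names and the particular LMI are illustrative assumptions.

```matlab
% Sketch: define a tiny LMI system and retrieve its internal description
setlmis([]);
X = lmivar(1,[2 1]);       % 2x2 symmetric matrix variable
lmiterm([-1 1 1 X],1,1);   % LMI #1: 0 < X
lmisys = getlmis;          % ready to pass to feasp, mincx, gevp, ...
```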
gevp

Purpose
Generalized eigenvalue minimization under LMI constraints

Syntax
[lopt,xopt] = gevp(lmisys,nlfc,options,linit,xinit,target)

Description
gevp solves the generalized eigenvalue minimization problem

Minimize λ subject to:
C(x) < D(x)      (10-9)
0 < B(x)         (10-10)
A(x) < λB(x)     (10-11)

where C(x) < D(x) and A(x) < λB(x) denote systems of LMIs. Provided that (10-9)–(10-11) are jointly feasible, gevp returns the global minimum lopt and the minimizing value xopt of the vector of decision variables x. The corresponding optimal values of the matrix variables are obtained with dec2mat.

The argument lmisys describes the system of LMIs (10-9)–(10-11) for λ = 1. The LMIs involving λ are called the linear-fractional constraints, while (10-9)–(10-10) are regular LMI constraints. The number of linear-fractional constraints (10-11) is specified by nlfc. All other input arguments are optional. If an initial feasible pair (λ0, x0) is available, it can be passed to gevp by setting linit to λ0 and xinit to x0. Note that xinit should be of length decnbr(lmisys) (the number of decision variables). The initial point is ignored when infeasible. Finally, the last argument target sets a target value for λ. The code terminates as soon as it has found a feasible pair (λ, x) with λ <= target.

Caution
When setting up your gevp problem, be cautious to:
• Always specify the linear-fractional constraints (10-11) last in the LMI system. gevp systematically assumes that the last nlfc LMI constraints are linear-fractional.
• Add the constraint B(x) > 0, or any other LMI constraint that enforces it (see Remark below). This positivity constraint is required for regularity and good formulation of the optimization problem.
Control Parameters

The optional argument options gives access to certain control parameters of the optimization code. In gevp, this is a five-entry vector organized as follows:

• options(1) sets the desired relative accuracy on the optimal value lopt (default = 10^-2).

• options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default).

• options(3) sets the feasibility radius. Its purpose and usage are as for feasp.

• options(4) helps speed up termination. If set to an integer value J > 0, the code terminates when the progress in λ over the last J iterations falls below the desired relative accuracy. By progress, we mean the amount by which λ decreases. The default value is 5 iterations.

• options(5) = 1 turns off the trace of execution of the optimization procedure. Resetting options(5) to zero (default value) turns it back on.

Setting options(i) to zero is equivalent to setting the corresponding control parameter to its default value.

Example Given

    A1 = [ -1    2        A2 = [ -0.8   1.5        A3 = [ -1.4   0.9
            1   -3 ]              1.3  -2.7 ]              0.7  -2.0 ]

consider the problem of finding a single Lyapunov function V(x) = x'Px that proves stability of

    dx/dt = Ai x    (i = 1, 2, 3)

and maximizes the decay rate -dV(x)/dt. This is equivalent to minimizing α subject to

    I < P                    (10-12)
    A1'P + PA1 < αP          (10-13)
    A2'P + PA2 < αP          (10-14)
    A3'P + PA3 < αP          (10-15)

To set up this problem for gevp, first specify the LMIs (10-13)-(10-15) with α = 1:

    setlmis([])
    p = lmivar(1,[2 1])

    lmiterm([1 1 1 0],1)              % P > I : I
    lmiterm([-1 1 1 p],1,1)           % P > I : P
    lmiterm([2 1 1 p],1,a1,'s')       % LFC # 1 (lhs)
    lmiterm([-2 1 1 p],1,1)           % LFC # 1 (rhs)
    lmiterm([3 1 1 p],1,a2,'s')       % LFC # 2 (lhs)
    lmiterm([-3 1 1 p],1,1)           % LFC # 2 (rhs)
    lmiterm([4 1 1 p],1,a3,'s')       % LFC # 3 (lhs)
    lmiterm([-4 1 1 p],1,1)           % LFC # 3 (rhs)
    lmis = getlmis

Note that the linear-fractional constraints are defined last as required. To minimize α subject to (10-13)-(10-15), call gevp by

    [alpha,popt] = gevp(lmis,3)

This returns alpha = -0.122 as the optimal value (the largest decay rate is therefore 0.122). This value is achieved for

    P = [  5.58   -8.35
          -8.35   18.64 ]

Remark Generalized eigenvalue minimization problems involve standard LMI constraints (10-9) and linear-fractional constraints (10-11). For well-posedness, the positive definiteness of B(x) must be enforced by adding the constraint B(x) > 0 to the problem. Although this could be done automatically from inside the code, this is not desirable for efficiency reasons. For instance, the set of constraints (10-10) may reduce to a single constraint as in the example above. In this case, the single extra LMI "P > I" is enough to enforce positivity of all linear-fractional right-hand sides. It is therefore left to the user to devise the least costly way of enforcing this positivity requirement.

Reference The solver gevp is based on Nesterov and Nemirovski's Projective Method described in:

Nesterov, Y., and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994.

The optimization is performed by the C-MEX file fpds.mex.

See Also

dec2mat     Given values of the decision variables, derive the corresponding values of the matrix variables
decnbr      Give the total number of decision variables in a system of LMIs
feasp       Find a solution to a given system of LMIs
mincx       Minimize a linear objective under LMI constraints
gridureal

Purpose     Grid ureal parameters uniformly over their range

Syntax      B = gridureal(A,N)
            [B,SampleValues] = gridureal(A,N)
            [B,SampleValues] = gridureal(A,NAMES,N)
            [B,SampleValues] = gridureal(A,NAMES1,N1,NAMES2,N2,...)

Description B = gridureal(A,N) substitutes N uniformly spaced samples of the uncertain real parameters in A. The N samples are generated by uniformly gridding each ureal in A across its range. If A includes uncertain objects other than ureal, then B will be an uncertain object. The size of array B is equal to [size(A) N].

[B,SampleValues] = gridureal(A,N) additionally returns the specific sampled values (as a STRUCT whose field names are the names of A's uncertain elements) of the uncertain reals. Hence, B is the same as usubs(A,SampleValues).

[B,SampleValues] = gridureal(A,NAMES,N) samples only the uncertain reals listed in the NAMES variable (CELL, or CHAR array). Any entries of NAMES that are not elements of A are ignored. Note that gridureal(A,fieldnames(A.Uncertainty),N) is the same as gridureal(A,N).

[B,SampleValues] = gridureal(A,NAMES1,N1,NAMES2,N2,...) takes N1 samples of the uncertain real parameters listed in NAMES1, N2 samples of the uncertain real parameters listed in NAMES2, and so on. Hence, size(B) will equal [size(A) N1 N2 ...].

Example Create two uncertain real parameters gamma and tau. The nominal value of gamma is 4 and its range is 3 to 5. The nominal value of tau is 0.5 and its value can change by +/- 30 percent. These uncertain parameters are used to construct an uncertain transfer function p. An integral controller, c, is synthesized for the plant p based on the nominal values of gamma and tau. The uncertain closed-loop system, clp, is formed.

    gamma = ureal('gamma',4);
    tau = ureal('tau',.5,'Percentage',30);
    p = tf(gamma,[tau 1]);
    KI = 1/(2*tau.Nominal*gamma.Nominal);
    c = tf(KI,[1 0]);
    clp = feedback(p*c,1);
The figure below shows the open-loop unit step response (top plot) and the closed-loop response (bottom plot) for a grid of 20 values of gamma and tau.
    subplot(2,1,1); step(gridureal(p,20),6)
    title('Open-loop plant step responses')
    subplot(2,1,2); step(gridureal(clp,20),6)
    title('Closed-loop plant step responses')
[Figure: open-loop plant step responses (top) and closed-loop plant step responses (bottom); amplitude versus time (sec)]
This clearly illustrates the low-frequency closed-loop insensitivity achieved by the integral control system.
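The second output can be captured and reused; a short sketch (repeating the gamma/tau setup above) of round-tripping the sampled values through usubs:

```matlab
% Sketch: recover the sampled parameter values from gridureal and
% re-substitute them with usubs (the two arrays coincide).
gamma = ureal('gamma',4);
tau = ureal('tau',.5,'Percentage',30);
p = tf(gamma,[tau 1]);
[pgrid,SampleValues] = gridureal(p,20);  % 20 gridded samples of p
% SampleValues is a struct array with fields 'gamma' and 'tau';
pcheck = usubs(p,SampleValues);          % same sampled array as pgrid
```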
More detailed example
The next example illustrates the different options in gridding high-dimensional (e.g., n greater than 2) parameter spaces. An uncertain matrix, m, is constructed from four uncertain real parameters, a, b, c, and d, each making up the individual entries.
a=ureal('a',1); b=ureal('b',2); c=ureal('c',3); d=ureal('d',4);
m = [a b;c d];
In the first case, the (a,b) space is gridded at 5 places, and the (c,d) space at 3 places. The uncertain matrix m is evaluated at these 15 grid points, resulting in the matrix m1.
m1 = gridureal(m,{'a';'b'},5,{'c';'d'},3);
In the second case, the (a,b,c,d) space is gridded at 15 places, and the uncertain matrix m is sampled at these 15 points. The resulting matrix is m2.
m2 = gridureal(m,{'a';'b';'c';'d'},15);
The (2,1) entry of m is just the uncertain real parameter c. Below, you see the histogram plots of the (2,1) entry of both m1 and m2. The (2,1) entry of m1 only takes on 3 distinct values, while the (2,1) entry of m2 (which is also c) takes on 15 distinct values uniformly through its range.
subplot(2,1,1) hist(m1(2,1,:)) title('2,1 entry of m1') subplot(2,1,2) hist(m2(2,1,:))
title('2,1 entry of m2')
[Figure: histograms of the (2,1) entry of m1 (top, 3 distinct values between 2 and 4) and of m2 (bottom, 15 distinct values spread uniformly between 2 and 4)]
See Also

usample     Generates random samples of an atom
usubs       Substitutes values for atoms
h2hinfsyn
Purpose     Mixed H2/H∞ synthesis with pole placement constraints

Syntax      [gopt,h2opt,K,R,S] = hinfmix(P,r,obj,region,dkbnd,tol)

Description hinfmix performs multi-objective output-feedback synthesis. The control problem is sketched in this figure.
Figure 10-4: Mixed H2/H∞ synthesis
If T∞(s) and T2(s) denote the closed-loop transfer functions from w to z∞ and z2, respectively, hinfmix computes a suboptimal solution of the following synthesis problem:

Design an LTI controller K(s) that minimizes the mixed H2/H∞ criterion

    α ||T∞||∞² + β ||T2||2²

subject to

• ||T∞||∞ < γ0
• ||T2||2 < ν0
• The closed-loop poles lie in some prescribed LMI region D.

Recall that ||.||∞ and ||.||2 denote the H∞ norm (RMS gain) and H2 norm of transfer functions. More details on the motivations and statement of this problem can be found in "Multi-Objective H∞ Synthesis" on page 5-15.

P is any SS, TF, or ZPK LTI representation of the plant P(s), and r is a three-entry vector listing the lengths of z2, y, and u. Note that z∞ and/or z2 can be empty. The four-entry vector obj = [γ0, ν0, α, β] specifies the H2/H∞ constraints and trade-off criterion, and the remaining input arguments are optional:

• region specifies the LMI region for pole placement (the default region = [] is the open left-half plane). Use lmireg to interactively build the LMI region description region.

• dkbnd is a user-specified bound on the norm of the controller feedthrough matrix DK. The default value is 100. To make the controller K(s) strictly proper, set dkbnd = 0.

• tol is the required relative accuracy on the optimal value of the trade-off criterion (the default is 10^-2).

The function h2hinfsyn returns the guaranteed H∞ and H2 performances gopt and h2opt as well as the SYSTEM matrix K of the LMI-optimal controller. You can also access the optimal values of the LMI variables R, S via the extra output arguments R and S.

A variety of mixed and unmixed problems can be solved with hinfmix. In particular, you can use hinfmix to perform pure pole placement by setting obj = [0 0 0 0]. Note that both z∞ and z2 can be empty in such a case.
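As a hedged sketch (the plant and channel sizes are hypothetical, not from this page), a pure H∞ design with the default pole-placement region might look like:

```matlab
% Sketch: weight only the H-infinity channel (alpha = 1, beta = 0),
% with no hard constraints (gamma0 = nu0 = 0) and the default
% left-half-plane LMI region.
P = rss(4,3,2);           % hypothetical plant; last output = y, last input = u
r = [0 1 1];              % lengths of z2, y, and u (z2 empty here)
obj = [0 0 1 0];          % obj = [gamma0, nu0, alpha, beta]
[gopt,h2opt,K] = hinfmix(P,r,obj);
```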
Reference
Chilali, M., and P. Gahinet, "H∞ Design with Pole Placement Constraints: An LMI Approach," to appear in IEEE Trans. Aut. Contr., 1995.

Scherer, C., "Mixed H2/H∞ Control," to appear in Trends in Control: A European Perspective, volume of the special contributions to the ECC 1995.
See Also
lmireg      Specify LMI regions for pole placement purposes
msfsyn      Multi-model/multi-objective state-feedback synthesis
h2syn
Purpose     H2 control synthesis for an LTI plant

Syntax      [K,CL,GAM,INFO] = h2syn(P,NMEAS,NCON)

Description h2syn computes a stabilizing H2-optimal LTI/SS controller K for a partitioned LTI plant P:

        [ A   B1   B2  ]
    P = [ C1  D11  D12 ]
        [ C2  D21  D22 ]

The controller K stabilizes the plant P and has the same number of states as P. Inputs to B1 are the disturbances, inputs to B2 are the control inputs, outputs of C1 are the errors to be kept small, and outputs of C2 are the output measurements provided to the controller. B2 has column size NCON and C2 has row size NMEAS. If P is constructed with mktito, you can omit NMEAS and NCON from the arguments.

The closed-loop system is returned in CL and the achieved H2 cost γ in GAM (see Figure 10-5). INFO is a STRUCT array that returns additional information about the design.
Figure 10-5: H2 control system CL = lft(P,K) = Ty1u1
Output arguments:

K                LTI controller
CL = lft(P,K)    LTI closed-loop system Ty1u1
GAM = norm(CL)   the H2-optimal cost γ = ||Ty1u1||2
INFO             additional output information

Additional output: structure array INFO containing possible additional information (depending on METHOD):

INFO.NORMS   norms of 4 different quantities: the full-information control cost (FI), the output-estimation cost (OEF), the direct-feedback cost (DFL), and the full-control cost (FC); NORMS = [FI OEF DFL FC]
INFO.KFI     full-information gain matrix (constant feedback u2(t) = KFI x(t))
INFO.GFI     full-information closed-loop system GFI = ss(A-B2*KFI,B1,C1-D12*KFI,D11)
INFO.HAMX    X Hamiltonian matrix (state feedback)
INFO.HAMY    Y Hamiltonian matrix (Kalman filter)
Examples
Example 1: Stabilize a 4-by-5 unstable plant with three states, NMEAS = 2, NCON = 1.

    rand('seed',0); randn('seed',0);
    P = rss(3,4,5)';
    [K,CL,GAM] = h2syn(P,2,1);
    open_loop_poles = pole(P)
    closed_loop_poles = pole(CL)
Example 2: Mixed-sensitivity H2 loop-shaping. Here the goal is to shape the sigma plots of the sensitivity S := (I+GK)^-1 and the complementary sensitivity T := GK(I+GK)^-1 by choosing a stabilizing K that minimizes the H2 norm of

    Ty1u1 = [ W1*S ; (W2/G)*T ; W3*T ]

where G(s) = 10(s-1)/(s+1)^2, W1 = 0.1(s+100)/(100s+1), W2 = 0.1, and there is no W3.

    s = zpk('s');
    G = 10*(s-1)/(s+1)^2;
    W1 = 0.1*(s+100)/(100*s+1);
    W2 = 0.1;
    W3 = [];
    P = augw(G,W1,W2,W3);
    [K,CL,GAM] = h2syn(P);
    L = G*K;
    S = inv(1+L);
    T = 1-S;
    sigma(L,'k.',S,'r',T,'g')
Algorithm

H2-optimal control theory has its roots in the frequency-domain interpretation of the cost function associated with time-domain state-space LQG control theory [1]. The equations and corresponding nomenclature used here are taken from Doyle et al., 1989 [2], [3].

h2syn solves the H2-optimal control problem by observing that it is equivalent to a conventional linear-quadratic Gaussian (LQG) optimal control problem. For simplicity, we describe the details of the algorithm only for the continuous-time case, in which the cost function JLQG satisfies
$$
J_{LQG} = \lim_{T\to\infty} E\left\{ \frac{1}{T}\int_0^T y_1^T y_1 \, dt \right\}
        = \lim_{T\to\infty} E\left\{ \frac{1}{T}\int_0^T [\,x^T\ u_2^T\,]
          \begin{bmatrix} Q & N_c \\ N_c^T & R \end{bmatrix}
          \begin{bmatrix} x \\ u_2 \end{bmatrix} dt \right\}
        = \lim_{T\to\infty} E\left\{ \frac{1}{T}\int_0^T [\,x^T\ u_2^T\,]
          \begin{bmatrix} C_1^T \\ D_{12}^T \end{bmatrix} [\,C_1\ D_{12}\,]
          \begin{bmatrix} x \\ u_2 \end{bmatrix} dt \right\}
$$

with plant noise u1 of intensity I, passing through the matrix [B1; 0; D21] to produce equivalent white plant noise ξ correlated with white measurement noise θ, having joint correlation function

$$
E\left\{ \begin{bmatrix} \xi(t) \\ \theta(t) \end{bmatrix}
         [\,\xi(\tau)^T\ \theta(\tau)^T\,] \right\}
= \begin{bmatrix} \Xi & N_f \\ N_f^T & \Theta \end{bmatrix} \delta(t-\tau)
= \begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} [\,B_1^T\ D_{21}^T\,]\, \delta(t-\tau)
$$

The H2-optimal controller K(s) is thus realizable in the usual LQG manner as a full-state feedback KFI and a Kalman filter with residual gain matrix KFC.
1 Kalman Filter

$$
\dot{\hat{x}} = A\hat{x} + B_2 u_2 + K_{FC}\,( y_2 - C_2\hat{x} - D_{22} u_2 )
$$
$$
K_{FC} = ( Y C_2^T + N_f )\,\Theta^{-1}
       = ( Y C_2^T + B_1 D_{21}^T )\,( D_{21} D_{21}^T )^{-1}
$$

where Y = Y^T ≥ 0 solves the Kalman filter Riccati equation

$$
YA^T + AY - ( Y C_2^T + N_f )\,\Theta^{-1}( C_2 Y + N_f^T ) + \Xi = 0
$$
2 Full-State Feedback

$$
u_2 = K_{FI}\,\hat{x}
$$
$$
K_{FI} = R^{-1}( B_2^T X + N_c^T )
       = ( D_{12}^T D_{12} )^{-1}( B_2^T X + D_{12}^T C_1 )
$$

where X = X^T ≥ 0 solves the state-feedback Riccati equation

$$
A^T X + XA - ( X B_2 + N_c )\,R^{-1}( B_2^T X + N_c^T ) + Q = 0
$$

The final positive-feedback H2-optimal controller u2 = K(s) y2 has the familiar closed form

$$
K(s) := \left[ \begin{array}{c|c}
A - K_{FC} C_2 - B_2 K_{FI} + K_{FC} D_{22} K_{FI} & K_{FC} \\ \hline
-K_{FI} & 0
\end{array} \right]
$$

h2syn implements the continuous-time optimal H2 control design computations using the formulae described in Doyle et al. [2]; for discrete-time plants, h2syn uses the same controller formula, except that the corresponding discrete-time Riccati solutions (dare) are substituted for X and Y. A Hamiltonian is formed and solved via a Riccati equation. In the continuous-time case, the optimal H2 norm is infinite when the plant D11 matrix associated with the input disturbances and output errors is nonzero; in this case, the optimal H2 controller returned by h2syn is computed by first setting D11 to zero.
3 Optimal Cost GAM

The full-information (FI) cost is given by ( trace( B1' X2 B1 ) )^(1/2). The output-estimation cost (OEF) is given by ( trace( F2 Y2 F2' ) )^(1/2), where F2 := -( B2' X2 + D12' C1 ). The disturbance-feedforward cost (DFL) is ( trace( L2' X2 L2 ) )^(1/2), where L2 is defined by -( Y2 C2' + B1 D21' ), and the full-control cost (FC) is given by ( trace( C1 Y2 C1' ) )^(1/2). X2 and Y2 are the solutions to the X and Y Riccati equations, respectively. For continuous-time plants with zero feedthrough term (D11 = 0), and for all discrete-time plants, the optimal H2 cost γ = ||Ty1u1||2 is
GAM =sqrt(FI^2 + OEF^2+ trace(D11*D11'));
otherwise, GAM = Inf.
Limitations
1 (A,B2) must be stabilizable and (C2,A) must be detectable.
2 D12 must have full column rank and D21 must have full row rank.
References
[1] Safonov, M.G., A.J. Laub, and G. Hartmann, "Feedback Properties of Multivariable Systems: The Role and Use of the Return Difference Matrix," IEEE Trans. of Automat. Contr., AC-26, pp. 47-65, 1981.

[2] Doyle, J.C., K. Glover, P. Khargonekar, and B. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Transactions on Automatic Control, vol. 34, no. 8, pp. 831-847, August 1989.

[3] Glover, K., and J.C. Doyle, "State-space formulae for all stabilizing controllers that satisfy an H∞ norm bound and relations to risk sensitivity," Systems and Control Letters, vol. 11, pp. 167-172, 1988.
See Also
augw        Augment plant weights for control design
hinfsyn     H∞ controller synthesis
hankelmr
Purpose     Hankel minimum degree approximation (MDA) without balancing

Syntax      GRED = hankelmr(G)
            GRED = hankelmr(G,order)
            [GRED,redinfo] = hankelmr(G,key1,value1,...)
            [GRED,redinfo] = hankelmr(G,order,key1,value1,...)

Description hankelmr returns a reduced-order model GRED of G and a struct array redinfo containing the error bound of the reduced model and the Hankel singular values of the original system.

The error bound is computed based on the Hankel singular values of G. For a stable system, the Hankel singular values indicate the respective state energy of the system. Hence, the reduced order can be directly determined by examining the system Hankel singular values σi. With only one input argument G, the function shows a Hankel singular value plot of the original model and prompts for the model order to which to reduce.

This method guarantees an error bound on the infinity norm of the additive error ||G-GRED||∞ for well-conditioned model reduction problems [1]:
    ||G - GRED||∞ ≤ 2 Σ_{i=k+1}^{n} σi
Note This method is similar to the additive model reduction routines balancmr and schurmr, but it can produce a more reliable reduced-order model when the desired reduced model has nearly uncontrollable and/or unobservable states (i.e., Hankel singular values close to machine accuracy). hankelmr then selects an optimal reduced system to satisfy the error bound criterion regardless of the order one might naively select at the beginning.
This table describes input arguments for hankelmr.

Argument    Description
G           LTI model to be reduced (with no other inputs, hankelmr plots the Hankel singular values of G and prompts for the reduced order)
ORDER       (Optional) an integer for the desired order of the reduced model, or a vector packed with desired orders for batch runs
A batch run of a series of different reduced-order models can be generated by specifying order = x:y, or a vector of integers. By default, all of the antistable part of a system is kept, because, from the standpoint of control stability, discarding unstable states is a dangerous way to model a system.

'MaxError' can be specified in the same fashion as an alternative for 'ORDER'. In this case, the reduced order is determined when the sum of the tails of the Hankel singular values reaches the 'MaxError'.
Argument     Value                           Description
'MaxError'   Real number or vector of        Reduce to achieve H∞ error. When present,
             different errors                'MaxError' overrides the ORDER input.
'Weights'    {Wout,Win} cell array           Optimal 1-by-2 cell array of LTI weights Wout
                                             (output) and Win (input). Default for both is
                                             identity. Weights must be invertible.
'Display'    'on' or 'off'                   Display Hankel singular value plots
                                             (default 'off').
'Order'      Integer, vector or cell array   Order of reduced model. Use only if not
                                             specified as 2nd argument.
Weights on the original model input and/or output can make the model reduction algorithm focus on some frequency range of interest. But the weights have to be stable, minimum-phase, and invertible.

This table describes output arguments.

Argument    Description
GRED        LTI reduced-order model. Becomes a multidimensional array when the input is a series of different model orders.
REDINFO     A STRUCT array with 4 fields:
            • REDINFO.ErrorBound (bound on ||G-GRED||∞)
            • REDINFO.StabSV (Hankel SV of stable part of G)
            • REDINFO.UnstabSV (Hankel SV of unstable part of G)
            • REDINFO.Ganticausal (anticausal part of Hankel MDA)
G can be stable or unstable, continuous or discrete.
Note If size(GRED) is not equal to the order you specified, the optimal Hankel MDA algorithm has selected the best minimum degree approximation it can find within the allowable machine accuracy.
Algorithm

Given a state-space (A,B,C,D) of a system and k, the desired reduced order, the following steps produce a similarity transformation to truncate the original state-space system to the kth-order reduced model.

1 Find the controllability and observability grammians P and Q.

2 Form the descriptor

    E = QP - ρ²I,   where σk > ρ ≥ σk+1,

and the descriptor state-space

    [ Es - Ā   B̄ ]   [ s(QP - ρ²I) - (ρ²Aᵀ + QAP)   QB ]
    [ C̄        D̄ ] = [ CP                             D  ]

Take the SVD of the descriptor E and partition the result into kth-order truncation form

    E = [ U_E1  U_E2 ] [ Σ_E  0 ]  [ V_E1ᵀ ]
                       [ 0    0 ]  [ V_E2ᵀ ]

3 Apply the transformation to the descriptor state-space system above to obtain

    [ A11  A12 ]   [ U_E1ᵀ ]
    [ A21  A22 ] = [ U_E2ᵀ ] (ρ²Aᵀ + QAP) [ V_E1  V_E2 ]

    [ B1 ]   [ U_E1ᵀ ]
    [ B2 ] = [ U_E2ᵀ ] [ QB  -Cᵀ ]

    [ C1  C2 ] = [ CP ; -ρBᵀ ] [ V_E1  V_E2 ]

    D1 = D

4 Form the equivalent state-space model

    [ Ã  B̃ ]   [ Σ_E⁻¹(A11 - A12 A22† A21)   Σ_E⁻¹(B1 - A12 A22† B2) ]
    [ C̃  D̃ ] = [ C1 - C2 A22† A21            D1 - C2 A22† B2          ]

5 The final kth-order Hankel MDA is the stable part of the above state-space realization. Its anticausal part is stored in redinfo.Ganticausal.

The proof of the Hankel MDA algorithm can be found in [2]. The error system between the original system G and the zeroth-order Hankel MDA G0 is an all-pass function [1].
Example
Given a continuous or discrete, stable or unstable system G, the following commands generate a set of reduced-order models based on your selections:
    rand('state',1234); randn('state',5678);
    G = rss(30,5,4);
    [g1, redinfo1] = hankelmr(G);     % display Hankel SV plot
                                      % and prompt for order (try 15:20)
    [g2, redinfo2] = hankelmr(G,20);
    [g3, redinfo3] = hankelmr(G,[10:2:18]);
    [g4, redinfo4] = hankelmr(G,'MaxError',[0.01, 0.05]);

    rand('state',12345); randn('state',6789);
    wt1 = rss(6,5,5); wt1.d = eye(5)*2;
    wt2 = rss(6,4,4); wt2.d = 2*eye(4);
    [g5, redinfo5] = hankelmr(G, [10:2:18], 'weight',{wt1,wt2});

    for i = 1:5
        figure(i); eval(['sigma(G,g' num2str(i) ');']);
    end
Figure 10-6, Singular Value Bode Plot of G (30 states, 5 outputs, 4 inputs), shows a singular value Bode plot of the random system G with 30 states, 5 outputs, and 4 inputs. The error system between G and its zeroth-order Hankel MDA is an all-pass function, as shown in Figure 10-7, All-Pass Error System Between G and Zeroth-Order G Anticausal (see [1]). The zeroth-order Hankel MDA and its error system sigma plot are obtained via the commands
    [g0,redinfo0] = hankelmr(G,0);
    sigma(G-redinfo0.Ganticausal)
This interesting all-pass property is unique to Hankel MDA model reduction.
Figure 10-6: Singular Value Bode Plot of G (30 states, 5 outputs, 4 inputs)
Figure 10-7: All-Pass Error System Between G and Zeroth-Order G Anticausal
Reference
[1] Glover, K., "All Optimal Hankel Norm Approximations of Linear Multivariable Systems, and Their L∞-Error Bounds," Int. J. Control, vol. 39, no. 6, pp. 1145-1193, 1984.

[2] Safonov, M.G., R.Y. Chiang, and D.J.N. Limebeer, "Optimal Hankel Model Reduction for Nonminimal Systems," IEEE Trans. on Automat. Contr., vol. 35, no. 4, April 1990, pp. 496-502.
See Also
reduce      Top-level model reduction routines
balancmr    Balanced truncation via square-root method
schurmr     Balanced truncation via Schur method
bstmr       Balanced stochastic truncation via Schur method
ncfmr       Balanced truncation for normalized coprime factors
hankelsv    Hankel singular values
hankelsv
Purpose     Compute Hankel singular values for a stable/unstable or continuous/discrete system

Syntax      hankelsv(G)
            hankelsv(G,ErrorType,style)
            [sv_stab,sv_unstab] = hankelsv(G,ErrorType,style)

Description [sv_stab,sv_unstab] = hankelsv(G,ErrorType,style) returns a column vector sv_stab containing the Hankel singular values of the stable part of G and sv_unstab of the antistable part (if it exists). The Hankel SVs of the antistable part ss(a,b,c,d) are computed internally via ss(-a,-b,c,d). A discrete model is converted to a continuous one via the bilinear transform.

hankelsv(G) with no output arguments draws a bar graph of the Hankel singular values such as the following:
This table describes optional input arguments for hankelsv.

Argument    Value     Description
ERRORTYPE   'add'     Regular Hankel SVs of G
            'mult'    Hankel SVs of the phase matrix
            'ncf'     Hankel SVs of the coprime factors
STYLE       'abs'     Absolute value
            'log'     Logarithmic scale

Algorithm

For ErrorType = 'add', hankelsv implements the numerically robust square-root method to compute the Hankel singular values [1]. Its algorithm goes as follows. Given a stable model G, with controllability and observability grammians P and Q, compute the SVD of P and Q:

    [Up,Sp,Vp] = svd(P);
    [Uq,Sq,Vq] = svd(Q);

Then form the square roots of the grammians:

    Lr = Up*diag(sqrt(diag(Sp)));
    Lo = Uq*diag(sqrt(diag(Sq)));

The Hankel singular values are simply

    σH = svd(Lo'*Lr);

This method not only takes advantage of the robust SVD algorithm, but also ensures that the computations stay well within the "square root" of the machine accuracy.

For ErrorType = 'mult', hankelsv computes the Hankel singular values of the phase matrix of G [2].

For ErrorType = 'ncf', hankelsv computes the Hankel singular values of the normalized coprime factor pair of the model [3].
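As a sanity check (a sketch, not part of this reference page), the square-root computation above can be reproduced directly from the grammians of a stable model and compared with hankelsv:

```matlab
% Sketch: Hankel singular values of a stable model via the
% square-root method, using the grammians returned by gram.
G = rss(8,2,2);                        % random stable model
P = gram(ss(G),'c');                   % controllability grammian
Q = gram(ss(G),'o');                   % observability grammian
[Up,Sp,Vp] = svd(P);
[Uq,Sq,Vq] = svd(Q);
Lr = Up*diag(sqrt(diag(Sp)));          % square roots of the grammians
Lo = Uq*diag(sqrt(diag(Sq)));
sigmaH = svd(Lo'*Lr);                  % Hankel singular values
sv = hankelsv(G);                      % should match sigmaH
```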
Reference

[1] Safonov, M.G., and R.Y. Chiang, "A Schur Method for Balanced Model Reduction," IEEE Trans. on Automat. Contr., vol. 34, no. 7, July 1989, pp. 729-733.

[2] Safonov, M.G., and R.Y. Chiang, "Model Reduction for Robust Control: A Schur Relative Error Method," International J. of Adaptive Control and Signal Processing, vol. 2, pp. 259-272, 1988.

[3] Vidyasagar, M., Control System Synthesis: A Factorization Approach, London: The MIT Press, 1985.

See Also

reduce      Top-level model reduction routines
balancmr    Balanced truncation via square-root method
schurmr     Balanced truncation via Schur method
bstmr       Balanced stochastic truncation via Schur method
ncfmr       Balanced truncation for normalized coprime factors
hankelmr    Hankel minimum degree approximation
hinfgs

Purpose     Synthesis of gain-scheduled H∞ controllers

Syntax      [gopt,pdK,R,S] = hinfgs(pdP,r,gmin,tol,tolred)

Description Given an affine parameter-dependent plant

        dx/dt = A(p)x + B1(p)w + B2u
    P:  z     = C1(p)x + D11(p)w + D12u
        y     = C2x + D21w + D22u

where the time-varying parameter vector p(t) ranges in a box and is measured in real time, hinfgs seeks an affine parameter-dependent controller

        dζ/dt = AK(p)ζ + BK(p)y
    K:  u     = CK(p)ζ + DK(p)y

scheduled by the measurements of p(t) and such that

• K stabilizes the closed-loop system for all admissible parameter trajectories p(t)

• K minimizes the closed-loop quadratic H∞ performance from w to z.

The description pdP of the parameter-dependent plant P is specified with psys, and the vector r gives the number of controller inputs and outputs (set r = [p2 m2] if y ∈ R^p2 and u ∈ R^m2). Note that hinfgs also accepts the polytopic model of P returned, e.g., by aff2pol.

hinfgs returns the optimal closed-loop quadratic performance gopt and a polytopic description of the gain-scheduled controller pdK. To test if a closed-loop quadratic performance γ is achievable, set the third input gmin to γ. The arguments tol and tolred control the required relative accuracy on gopt and the threshold for order reduction. Finally, hinfgs also returns solutions R, S of the characteristic LMI system.

Controller Implementation

The gain-scheduled controller pdK is parametrized by p(t) and characterized by the values KΠj of

    [ AK(p)  BK(p) ]
    [ CK(p)  DK(p) ]

at the corners Πj of the parameter box. The command Kj = psinfo(pdK,'sys',j) returns the j-th vertex controller KΠj, while

    pv = psinfo(pdP,'par')
    vertx = polydec(pv)
    Pj = vertx(:,j)

gives the corresponding corner Πj of the parameter box (pv is the parameter vector description).

The controller scheduling should be performed as follows. Given the measurements p(t) of the parameters at time t:

1 Express p(t) as a convex combination of the Πj:

    p(t) = α1Π1 + ... + αNΠN,    αj ≥ 0,    Σ_{j=1}^N αj = 1

This convex decomposition is computed by polydec.

2 Compute the controller state-space matrices at time t as the convex combination of the vertex controllers KΠj:

    [ AK(t)  BK(t) ]
    [ CK(t)  DK(t) ] = Σ_{j=1}^N αj KΠj

3 Use AK(t), BK(t), CK(t), DK(t) to update the controller state-space equations.
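The three scheduling steps above can be sketched as follows (pdP and pdK are assumed to come from a previous hinfgs design, and pt is the measured parameter vector at time t; the loop body is illustrative):

```matlab
% Sketch: rebuild AK(t), BK(t), CK(t), DK(t) from the vertex
% controllers and the convex decomposition of p(t).
pv = psinfo(pdP,'par');                % parameter vector description
vertx = polydec(pv);                   % corners of the parameter box
N = size(vertx,2);                     % number of vertices
alpha = polydec(pv,pt);                % convex coordinates of p(t)
KABCD = 0;
for j = 1:N
    [aj,bj,cj,dj] = ltiss(psinfo(pdK,'sys',j));  % j-th vertex controller
    KABCD = KABCD + alpha(j)*[aj bj; cj dj];     % convex combination
end
% Partition KABCD into AK(t), BK(t), CK(t), DK(t) according to the
% controller order and r, then update the controller equations.
```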
Reference

Apkarian, P., P. Gahinet, and G. Becker, "Self-Scheduled H∞ Control of Linear Parameter-Varying Systems," submitted to Automatica, October 1995.

Becker, G., and A. Packard, "Robust Performance of Linear Parametrically Varying Systems Using Parametrically-Dependent Linear Feedback," Systems and Control Letters, 23 (1994), pp. 205-215.

Packard, A., "Gain Scheduling via Linear Fractional Transformations," Syst. Contr. Letters, 22 (1994), pp. 79-92.

See Also

psys        Specification of uncertain state-space models
pvec        Quantification of uncertainty on physical parameters
pdsimul     Time response of a parameter-dependent system along a given parameter trajectory
polydec     Compute polytopic coordinates with respect to box corners
hinfsyn

Purpose     Compute an H∞ optimal controller for an LTI plant

Syntax      [K,CL,GAM,INFO] = hinfsyn(P)
            [K,CL,GAM,INFO] = hinfsyn(P,NMEAS,NCON)
            [K,CL,GAM,INFO] = hinfsyn(P,NMEAS,NCON,KEY1,VALUE1,KEY2,VALUE2,...)

Description hinfsyn computes a stabilizing H∞-optimal LTI/SS controller K for a partitioned LTI plant P:

        [ A   B1   B2  ]
    P = [ C1  D11  D12 ]
        [ C2  D21  D22 ]

Inputs to B1 are the disturbances, inputs to B2 are the control inputs, outputs of C1 are the errors to be kept small, and outputs of C2 are the output measurements provided to the controller. B2 has column size NCON and C2 has row size NMEAS. The controller K stabilizes P and has the same number of states as P.

The closed-loop system is returned in CL and the achieved H∞ cost γ in GAM. INFO is a STRUCT array that returns additional information about the design. The optional KEY and VALUE inputs determine the tolerance, solution method, and so forth (see Figure 10-9 for details).

Figure 10-8: H∞ control system CL = lft(P,K) = Ty1u1
Key        Value    Meaning
'GMAX'     real     initial upper bound on GAM (default = Inf)
'GMIN'     real     initial lower bound on GAM (default = 0)
'TOLGAM'   real     relative error tolerance for GAM (default = .01)
'S0'       real     frequency S0 at which the entropy is evaluated; only applies to METHOD 'maxe' (default = Inf)
'METHOD'   'ric'    standard 2-Riccati solution (default)
           'lmi'    LMI solution
           'maxe'   maximum entropy solution
'DISPLAY'  'off'    no command window display (default)
           'on'     command window displays synthesis progress information

Figure 10-9: Optional input arguments (KEY, VALUE pairs)

When DISPLAY = 'on', the hinfsyn program displays several variables indicating the progress of the algorithm. For each γ value being tested, the minimum magnitude real part of the eigenvalues of the X and Y Hamiltonian matrices is displayed along with the minimum eigenvalue of X∞ and Y∞, which are the solutions to the X and Y Riccati equations, respectively. The maximum eigenvalue of X∞Y∞, scaled by γ^-2, is also displayed. A # sign is placed to the right of the condition that failed in the printout.

Output arguments:

K                LTI controller
CL = lft(P,K)    LTI closed-loop system Ty1u1
GAM = norm(CL,Inf)   the H∞ cost γ = ||Ty1u1||∞
INFO                 additional output information

Additional output: structure array INFO containing possible additional information (depending on METHOD):

INFO.AS      all-solutions controller, LTI two-port LFT
INFO.KFI     full-information gain matrix (constant feedback u2(t) = KFI [x(t); u1(t)])
INFO.KFC     full-control gain matrix (constant output injection; KFC is the dual of KFI)
INFO.GAMFI   H∞ cost for full-information KFI
INFO.GAMFC   H∞ cost for full-control KFC

Algorithm

The default 'ric' method uses the two-Riccati formulae ([4],[5]) with loop-shifting [6]. In the case of the 'lmi' method, hinfsyn employs the LMI technique ([7],[8],[9]). With 'METHOD' 'maxe', K returns the maximum entropy H∞ controller that minimizes an entropy integral relating to the point s0, i.e.,

    Entropy = -(γ²/2π) ∫_{-∞}^{∞} ln | det( I - γ⁻² Ty1u1(jω)' Ty1u1(jω) ) | · s0²/(s0² + ω²) dω

where Ty1u1 is the closed-loop transfer function CL. With all methods, hinfsyn uses a standard γ-iteration technique to determine the optimal value of γ. Starting with high and low estimates of γ, the γ-iteration is a bisection algorithm that iterates on the value of γ in an effort to approach the optimal H∞ control design. The stopping criterion for the bisection algorithm requires that the relative difference between the last γ value that failed and the last γ value that passed be less than TOLGAM (default = .01).
At each value of γ, the algorithm employed requires tests to determine whether a solution exists for a given γ value. In the case of the default 'ric' method, the conditions checked for the existence of a solution are:
• The H and J Hamiltonian matrices (which are formed from the state-space data of P and the γ level) must have no imaginary-axis eigenvalues.
• The stabilizing Riccati solutions X∞ and Y∞ associated with the Hamiltonian matrices must exist and be positive semidefinite.
• The spectral radius of the product X∞Y∞ must be less than or equal to γ².

When DISPLAY is 'on', the hinfsyn program displays several variables, which indicate which of the above conditions are satisfied for each γ value being tested. In the case of the 'ric' method, the display includes the current value of γ being tested, the real part of the eigenvalues of the X and Y Hamiltonian matrices along with the minimum eigenvalue of X∞ and Y∞, which are the solutions to the X and Y Riccati equations, respectively. The maximum eigenvalue of X∞Y∞, scaled by γ–2, is also displayed. A # sign is placed to the right of the condition that failed in the printout. A similar DISPLAY is produced with method 'lmi'.

The algorithm works best when the following conditions are satisfied by the plant:
• D12 and D21 have full rank.
• [A – jωI  B2; C1  D12] has full column rank for all ω ∈ R.
• [A – jωI  B1; C2  D21] has full row rank for all ω ∈ R.

When the above rank conditions do not hold, the controller may have undesirable properties: If D12 and D21 are not full rank, the H∞ controller K may have large high-frequency gain. If either of the latter two rank conditions does not hold at some frequency ω, the controller may have very lightly damped poles near that frequency ω.
In general, the solution to the infinity-norm optimal control problem is nonunique. Whereas the K returned by hinfsyn is only a particular F(s), when the 'ric' method is selected, the INFO.AS field of INFO gives you in addition the all-solution controller parameterization KAS(s) such that all solutions to the infinity-norm control problem are parameterized by a free stable contraction map U(s) constrained by ||U(s)||∞ < 1 (see Figure 10-10). That is, every stabilizing controller K(s) that makes

    ||Ty1u1||∞ = sup_ω σmax( Ty1u1(jω) ) < γ

is given by K = lft(INFO.AS,U), where U is a stable LTI system satisfying norm(U,Inf) < 1.

Figure 10-10: All-solution KAS(s) returned by INFO.AS
[Block diagram: plant P(s) with ports u1, y1, y2, u2 in feedback with the controller K(s); K(s) is itself the LFT of the all-solutions parameterization KAS(s) and the free stable contraction U(s), with F(s) the particular controller.]

An important use of the infinity-norm control theory is for direct shaping of closed-loop singular value Bode plots of control systems. In such cases, the system P(s) will typically be the plant augmented with suitable loop-shaping filters — see mixsyn.
Examples
Following are three simple problems solved via hinfsyn.

Example 1: A random 4-by-5 plant with 3 states, NMEAS=2, NCON=2.

rand('seed',0);randn('seed',0);
P=rss(3,4,5);
[K,CL,GAM]=hinfsyn(P,2,2);

The optimal H∞ cost in this case is GAM=0.2641. You verify that

    ||Ty1u1||∞ = sup_ω σmax( Ty1u1(jω) ) < γ

with a sigma plot

sigma(CL,ss(GAM));

Example 2: Mixed-sensitivity G(s) = (s-1)/(s+1), W1 = 0.1(s+100)/(100s+1), W2 = 0.1, no W3.

s=zpk('s');
G=(s-1)/(s+1);
W1=0.1*(s+100)/(100*s+1); W2=0.1; W3=[];
P=augw(G,W1,W2,W3);
[K,CL,GAM]=hinfsyn(P);
sigma(CL,ss(GAM));

In this case, GAM = 0.1854 = -14.6386 db.

Example 3: Mixed sensitivity as in Example 2, but with W1 removed.

s=zpk('s');
G=(s-1)/(s+1);
W1=[]; W2=0.1; W3=[];
P=augw(G,W1,W2,W3);
[K,CL,GAM]=hinfsyn(P);

In this case, the optimal controller is K=0, and CL=K*(1+G*K)^-1=0. The optimal H∞ cost in this case is GAM=0.

Limitation
The plant must be stabilizable from the control inputs u2 and detectable from the measurement output y2: (A,B2) must be stabilizable and (C2,A) must be detectable. Otherwise, hinfsyn returns an error.
References
[4] Glover, K., and J.C. Doyle, "State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity," Systems and Control Letters, vol. 11, pp. 167–172, 1988.
[5] Doyle, J.C., K. Glover, P. Khargonekar, and B. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Transactions on Automatic Control, vol. 34, no. 8, pp. 831–847, August 1989.
[6] Safonov, M.G., D.J.N. Limebeer, and R.Y. Chiang, "Simplifying the H∞ Theory via Loop Shifting, Matrix Pencil and Descriptor Concepts," Int. J. Contr., vol. 50, no. 6, pp. 2467–2488, 1989.
[7] Packard, A., K. Zhou, P. Pandey, J. Leonhardson, and G. Balas, "Optimal, constant I/O similarity scaling for full-information and state-feedback problems," Systems and Control Letters, 19:271–280, 1992.
[8] Gahinet, P., and P. Apkarian, "A linear matrix inequality approach to H∞ control," Int. J. Robust and Nonlinear Control, 4(4):421–448, July–August 1994.
[9] Iwasaki, T., and R.E. Skelton, "All controllers for the general H∞ control problem: LMI existence conditions and state space formulas," Automatica, 30(8):1307–1317, August 1994.

See Also
augw      Augments plant with weights for control design
h2syn     H2 controller synthesis
loopsyn   H∞ loop-shaping controller synthesis
mktito    2-input 2-output partition of a USS object
ncfsyn    H∞ normalized coprime factor controller synthesis
iconnect

Purpose
Creates empty iconnect (interconnection) objects

Syntax
H = iconnect

Description
Interconnection objects (class iconnect) are an alternative to sysic, and are used to build complex interconnections of uncertain matrices and systems.

An iconnect object has 3 fields to be set by the user: 'Input', 'Output' and 'Equation'. 'Input' and 'Output' are icsignal objects, while 'Equation' is a cell array of equality constraints (using equate) on icsignal objects. Once these are specified, the 'System' property is the input/output model, implied by the constraints in 'Equation', relating the variables defined in 'Input' and 'Output'.

Example
iconnect can be used to create the transfer matrix M as described in the following figure.
[Block diagram: input r, summing junction forming e = r - y, integrator block 2/s producing y, with e and y as outputs]

Create three scalar icsignal's, r, e and y. Create an empty iconnect object, M. Define the output of the interconnection to be [e; y], and the input to be r. Define two constraints among the variables: e = r-y, and y = (2/s)e. Get the transfer function representation of the relationship between the input (r) and the output [e; y].

r = icsignal(1);
e = icsignal(1);
y = icsignal(1);
M = iconnect;
M.Input = r;
M.Output = [e;y];
M.Equation{1} = equate(e,r-y);
M.Equation{2} = equate(y,tf(2,[1 0])*e);
tf(M.System)

The transfer functions from input to outputs are
#1:
       s
     -----
     s + 2

#2:
       2
     -----
     s + 2

By not explicitly introducing e, this can be done more concisely with only one equality constraint.

r = icsignal(1);
y = icsignal(1);
N = iconnect;
N.Input = r;
N.Output = [r-y;y];
N.Equation{1} = equate(y,tf(2,[1 0])*(r-y));
tf(N.System)

You have created the same transfer functions from input to outputs.

#1:
       s
     -----
     s + 2

#2:
       2
     -----
     s + 2

You can also specify uncertain, multivariable interconnections using iconnect. Consider two uncertain motor/generator constraints among the 4 variables [V;I;T;W]: V-R*I-K*W=0, and T=K*I. Find the uncertain 2x2 matrix B so that [V;T] = B*[W;I].

R = ureal('R',1,'Percentage',[30 30]);
K = ureal('K',2e-3,'Percentage',[10 40]);
V = icsignal(1);
I = icsignal(1);
T = icsignal(1);
W = icsignal(1);
M = iconnect;
M.Input = [W;I];
M.Output = [V;T];
M.Equation{1} = equate(V-R*I-K*W,iczero(1));
M.Equation{2} = equate(T,K*I);
B = M.System

UMAT: 2 Rows, 2 Columns
  K: real, nominal = 0.002, variability = [10 40]%, 2 occurrences
  R: real, nominal = 1, variability = [30 30]%, 1 occurrence

B.NominalValue

ans =
    0.0020    1.0000
         0    0.0020

A simple system interconnection, identical to the system illustrated in the sysic reference pages. Consider a three-input, two-output SYSTEM matrix T,

    [y1; y2] = T [noise; deltemp; setpoint]

which has internal structure
[Block diagram: weight wt on the deltemp disturbance entering plant p; controller k driven by setpoint and the noise-corrupted measurement; actuator act; gain block g = 57.3 producing output y1, with y2 the tracking error]

P = rss(3,2,2);
K = rss(1,1,2);
A = rss(1,1,1);
W = rss(1,1,1);
M = iconnect;
noise = icsignal(1);
deltemp = icsignal(1);
setpoint = icsignal(1);
yp = icsignal(2);
rad2deg = 57.3

rad2deg =
   57.3000

M.Input = [noise;deltemp;setpoint];
M.Output = [rad2deg*yp(1);setpoint-yp(2)];
M.Equation{1} = equate(yp,P*[W*deltemp;A*K*[noise+yp(2);setpoint]]);
T = M.System;
size(T)
State-space model with 2 outputs, 3 inputs, and 6 states.

Algorithm
Each equation represents an equality constraint among the variables. You choose the input and output variables, and the imp2exp function makes the implicit relationship between them explicit.

Limitations
The syntax for iconnect objects and icsignal's is very flexible. Without care, you can build inefficient (i.e., nonminimal) representations where the state dimension of the interconnection is greater than the sum of the state dimensions of the components. This is in contrast to sysic. In sysic, the syntax used to specify inputs to systems (the input_to_ListedSubSystemName variable) forces you to include each subsystem of the interconnection only once in the equations. Hence, interconnections formed with sysic are componentwise minimal. That is, the state dimension of the interconnection equals the sum of the state dimensions of the components.

See Also
equate     Equates expressions for icsignal objects
icsignal   Constructs an icsignal object
sysic      Constructs system interconnection
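As a minimal illustration of what the Algorithm section describes — turning implicit equality constraints into an explicit input/output map — the following sketch (Python/NumPy, not toolbox code) solves the constraints of the first iconnect example, e = r - y and y = (2/s)e, at a single complex frequency and recovers the closed-form transfer functions s/(s+2) and 2/(s+2):

```python
import numpy as np

def loop_signals(s):
    """Solve the iconnect example's equality constraints
        e = r - y,   y = (2/s) e
    for unit input r = 1 at the complex frequency s. Writing the
    constraints as a square linear system in the unknowns [e, y]
    plays the role of making the implicit relationship explicit."""
    A = np.array([[1.0, 1.0],        # e + y = r
                  [2.0 / s, -1.0]],  # (2/s) e - y = 0
                 dtype=complex)
    b = np.array([1.0, 0.0], dtype=complex)
    e, y = np.linalg.solve(A, b)
    return e, y

s = 3.0j
e, y = loop_signals(s)
# Closed-form answers for comparison: e/r = s/(s+2), y/r = 2/(s+2)
```

The same idea, applied symbolically over all frequencies and with state-space bookkeeping, is what produces the transfer matrix M.System in the example above.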
icsignal

Purpose
Create an icsignal object

Syntax
v = icsignal(n)
v = icsignal(n,'name')

Description
icsignal creates an icsignal object, which is a symbolic column vector. icsignal objects are used in conjunction with iconnect (interconnection) objects to specify the signal constraints described by an interconnection of components.

v = icsignal(n) creates an icsignal object of vector length n. The value of n must be a nonnegative integer.

v = icsignal(n,'name') creates an icsignal object of dimension n, with internal name identifier given by the character string argument 'name'.

See Also
iconnect   Creates empty iconnect (interconnection) objects
sysic      Constructs system interconnections
imp2exp

Purpose
Convert an implicit linear relationship to an explicit input/output relation

Syntax
B = imp2exp(A,yidx,uidx)

Description
B = imp2exp(A,yidx,uidx) transforms a linear constraint between variables Y and U of the form A(:,[yidx,uidx])*[Y;U] = 0 into an explicit input/output relationship Y = B*U. The constraint matrix A may be a DOUBLE, TF, ZPK, SS and FRD object as well as an uncertain object, including UMAT, USS and UFRD. The result B will be of the same class. The vectors yidx and uidx refer to the columns (inputs) of A as referenced by the explicit relationship for B.

Example

Scalar algebraic constraint
Consider the constraint 4y + 7u = 0. Solving for y gives y = -1.75u. You form the equation using imp2exp:

A = [4 7];
Yidx = 1;
Uidx = 2;
B = imp2exp(A,Yidx,Uidx)

B =
   -1.7500

which yields B equal to -1.75.

Matrix algebraic constraint
Consider two motor/generator constraints among the 4 variables [V;I;T;W], namely

    [1 -1 0 -2e-3; 0 2e-3 -1 0]*[V;I;T;W] = 0.

You can find the 2x2 matrix B so that [V;T] = B*[W;I] using imp2exp:

A = [1 -1 0 -2e-3;0 2e-3 -1 0];
Yidx = [1 3];
Uidx = [4 2];
B = imp2exp(A,Yidx,Uidx)

B =
    0.0020    1.0000
         0    0.0020
You can find the 2x2 matrix C so that [I;W] = C*[T;V]:

Yidx = [2 4];
Uidx = [3 1];
C = imp2exp(A,Yidx,Uidx)

C =
      500          0
  -250000        500

Uncertain matrix algebraic constraint
Consider the same two motor/generator constraints among the 4 variables [V;I;T;W], where now R and K are uncertain real parameters. You can find the uncertain 2x2 matrix B so that [V;T] = B*[W;I]:

R = ureal('R',1,'Percentage',[30 30]);
K = ureal('K',2e-3,'Percentage',[10 40]);
A = [1 -R 0 -K;0 K -1 0];
Yidx = [1 3];
Uidx = [4 2];
B = imp2exp(A,Yidx,Uidx)

UMAT: 2 Rows, 2 Columns
  K: real, nominal = 0.002, variability = [10 40]%, 2 occurrences
  R: real, nominal = 1, variability = [30 30]%, 1 occurrence

Scalar dynamic system constraint
Consider a standard single-loop feedback connection of controller C and plant P, described by the equations e=r-y, u=Ce, f=d+u, y=Pf.
[Block diagram: reference r and disturbance d entering the loop of controller C and plant P, with internal signals e, u, f and output y]

P = tf([1],[1 0]);
C = tf([2*.707*1 1^2],[1 0]);
A = [1 -1 0 0 0 -1;0 C -1 0 0 0;0 0 1 1 -1 0;0 0 0 0 P -1];
OutputIndex = [6;3;2;5];  % [y;u;e;f]
InputIndex = [1;4];       % [r;d]
Sys = imp2exp(A,OutputIndex,InputIndex);
Sys.InputName = {'r';'d'};
Sys.OutputName = {'y';'u';'e';'f'};
pole(Sys)

ans =
  -0.7070 - 0.7072i
  -0.7070 + 0.7072i

step(Sys)

[Figure: step responses from the inputs r and d to the outputs y, u, e and f]

Algorithm
The number of rows of A must equal the length of yidx.

See Also
iconnect   Creates empty iconnect (interconnection) objects
inv        Forms the system inverse
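In the constant-matrix case, the conversion that imp2exp performs amounts to a single linear solve, B = -A(:,yidx)^-1 * A(:,uidx). A hedged NumPy sketch of that reduction (zero-based indices, illustrative only — not the toolbox implementation), checked against the two algebraic examples above:

```python
import numpy as np

def imp2exp_sketch(A, yidx, uidx):
    """Turn the implicit constraint A[:, yidx + uidx] @ [Y; U] = 0
    into the explicit map Y = B @ U, i.e.
    B = -inv(A[:, yidx]) @ A[:, uidx].
    Indices are zero-based here; MATLAB's imp2exp is one-based."""
    A = np.asarray(A, dtype=float)
    return -np.linalg.solve(A[:, yidx], A[:, uidx])

# Scalar constraint 4y + 7u = 0 from the Examples above:
B1 = imp2exp_sketch([[4.0, 7.0]], [0], [1])   # y = -1.75 u

# Motor/generator constraints on [V;I;T;W], solved for [V;T] = B [W;I]:
A = [[1.0, -1.0, 0.0, -2e-3],
     [0.0, 2e-3, -1.0, 0.0]]
B2 = imp2exp_sketch(A, [0, 2], [3, 1])
```

For the constraint to be solvable, A(:,yidx) must be square and invertible — the counterpart of the reference page's requirement that the number of rows of A equal the length of yidx.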
imp2ss

Purpose
System realization via Hankel singular value decomposition

Syntax
[a,b,c,d,totbnd,hsv] = imp2ss(y)
[a,b,c,d,totbnd,hsv] = imp2ss(y,ts,nu,ny,tol)
[ss,totbnd,hsv] = imp2ss(imp)
[ss,totbnd,hsv] = imp2ss(imp,tol)

Description
The function imp2ss produces an approximate state-space realization of a given impulse response

    imp = mksys(y,t,nu,ny,'imp');

using the Hankel SVD method proposed by S. Kung [2]. A continuous-time realization is computed via the inverse Tustin transform (using bilin) if t is positive; otherwise a discrete-time realization is returned. In the SISO case the variable y is the impulse response vector; in the MIMO case y is an N+1-column matrix containing N+1 time samples of the matrix-valued impulse response H0, ..., HN of an nu-input, ny-output system stored row-wise:

    y = [ H0(:)'; H1(:)'; H2(:)'; ... ; HN(:)' ]

The inputs ts, nu, ny, tol are optional; if not present they default to the values ts = 0, nu = 1, ny = (no. of rows of y)/nu, tol = 0.01σ1. The output hsv = [σ1, σ2, ...]' returns the singular values (arranged in descending order of magnitude) of the Hankel matrix:

        [ H1  H2  H3  ...  HN ]
        [ H2  H3  H4  ...  0  ]
    Γ = [ H3  H4  H5  ...  0  ]
        [ ...                 ]
        [ HN  0   0   ...  0  ]

The variable tol bounds the H∞ norm of the error between the approximate realization (a, b, c, d) and an exact realization of y; the order, say n, of the realization (a, b, c, d) is determined by the infinity norm error bound specified by the input variable tol. Denoting by GN a high-order exact realization of y, the low-order approximate model G enjoys the H∞ norm bound

    || G - GN ||∞ ≤ totbnd

where

    totbnd = 2 Σ_{i=n+1}^{N} σi
Algorithm
The realization (a, b, c, d) is computed using the Hankel SVD procedure proposed by Kung [2] as a method for approximately implementing the classical Hankel factorization realization algorithm. Kung's SVD realization procedure was subsequently shown to be equivalent to doing balanced truncation (balmr) on an exact state-space realization of the finite impulse response {y(1), ..., y(N)} [3]. The infinity norm error bound for discrete balanced truncation was later derived by Al-Saggaf and Franklin [1]. The algorithm is as follows:

1 Form the Hankel matrix Γ from the data y.
2 Perform SVD on the Hankel matrix

    Γ = UΣV* = [U1 U2] [Σ1 0; 0 Σ2] [V1*; V2*] = U1 Σ1 V1*

  where Σ1 has dimension n × n and the entries of Σ2 are nearly zero. U1 and V1 have ny and nu columns, respectively.
3 Partition the matrices U1 and V1 into three matrix blocks:

    U1 = [U11; U12; U13],   V1 = [V11; V12; V13]

  where U11, U13 ∈ C^(ny×n) and V11, V13 ∈ C^(nu×n).
4 A discrete state-space realization is computed as

    A = Σ1^(-1/2) Ū Σ1^(1/2)
    B = Σ1^(1/2) V11*
    C = U11 Σ1^(1/2)
    D = H0

  where

    Ū = [U11; U12]' [U12; U13]

5 If the sampling time t is greater than zero, then the realization is converted to continuous time via the inverse of the Tustin transform

    s = (2/t) (z - 1)/(z + 1)

  Otherwise, this step is omitted and the discrete-time realization calculated in Step 4 is returned.

References
[1] U.M. Al-Saggaf and G.F. Franklin, "An Error Bound for a Discrete Reduced Order Model of a Linear Multivariable System," IEEE Trans. on Autom. Contr., AC-32, pp. 815–819, 1987.
[2] S.Y. Kung, "A New Identification and Model Reduction Algorithm via Singular Value Decompositions," Proc. Twelfth Asilomar Conf. on Circuits, Systems and Computers, CA, pp. 705–714, November 6–8, 1978.
[3] L.M. Silverman and M. Bettayeb, "Optimal Approximation of Linear Systems," Proc. American Control Conf., San Francisco, CA, 1980.
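Step 4 can be sketched numerically. The following NumPy illustration implements a Kung-style Hankel-SVD (ERA-type) realization for a SISO discrete impulse response; its normalization differs slightly from the formulas above (it extracts A from the shifted Hankel matrix directly), so treat it as a sketch of the idea rather than the toolbox's exact algorithm:

```python
import numpy as np

def kung_realization(h, n):
    """SISO discrete realization (A, B, C) of order n from Markov
    parameters h = [h1, h2, ..., hN] (the direct term D = h0 is
    handled separately). ERA-style Hankel-SVD: factor the Hankel
    matrix, then extract A from the shifted Hankel matrix."""
    m = len(h) // 2
    H = np.array([[h[i + j] for j in range(m)] for i in range(m)])
    Hs = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H)
    U1, V1t = U[:, :n], Vt[:n, :]
    Sr = np.diag(np.sqrt(s[:n]))
    Sri = np.diag(1.0 / np.sqrt(s[:n]))
    A = Sri @ U1.T @ Hs @ V1t.T @ Sri   # balanced state matrix
    B = (Sr @ V1t)[:, :1]               # first column of controllability factor
    C = (U1 @ Sr)[:1, :]                # first row of observability factor
    return A, B, C

# Markov parameters of x[k+1] = 0.5 x[k] + u[k], y = x: h_k = 0.5**(k-1)
h = [0.5 ** k for k in range(20)]
A, B, C = kung_realization(h, 1)
```

With exact rank-n data the recovered Markov parameters C A^(k-1) B match h exactly; with noisy data, truncating the SVD at order n is what produces the totbnd-style approximation error.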
ispsys

Purpose
True for parameter-dependent systems

Syntax
bool = ispsys(sys)

Description
bool = ispsys(sys) returns 1 if sys is a polytopic or parameter-dependent system.

See Also
psys     Specification of uncertain state-space models
psinfo   Inquire about polytopic or parameter-dependent systems created with psys
isuncertain

Purpose
Check if argument is an uncertain class type

Syntax
B = isuncertain(A)

Description
Returns true if the input argument is uncertain, false otherwise. Uncertain classes are umat, ufrd, uss, ureal, ultidyn, ucomplex, ucomplexm and udyn.

Example
In this example, you verify the correct operation of isuncertain on double, ureal, ss and uss objects.

isuncertain(rand(3,2))
ans =
     0
isuncertain(ureal('p',4))
ans =
     1
isuncertain(rss(4,3,2))
ans =
     0
isuncertain(rss(4,3,2)*[ureal('p1',4) 6;0 1])
ans =
     1

Create a umat by lifting a constant (i.e., not uncertain) matrix to the umat class. Note that although A is in class umat, it is not actually uncertain. Nevertheless, the result of isuncertain(A) is true, based on class.

A = umat([2 3;4 5;6 7]);
isuncertain(A)
ans =
     1

The result of simplify(A) is a double, and hence not uncertain.

isuncertain(simplify(A))
ans =
     0

Limitations
isuncertain only checks the class of the input argument, and does not actually verify that the input argument is truly uncertain.

See Also
isvalid   Checks if uncertain object is an uncertain atom
lftdata

Purpose
Decompose uncertain objects into fixed normalized and fixed uncertain parts

Syntax
[M,Delta] = lftdata(A);
[M,Delta] = lftdata(A,List);
[M,Delta,Blkstruct] = lftdata(A);
[M,Delta,Blkstruct,Normunc] = lftdata(A);

Description
lftdata decomposes an uncertain object into a fixed certain part and a normalized uncertain part. lftdata can also partially decompose an uncertain object into an uncertain part and a normalized uncertain part. Uncertain objects (umat, ufrd, uss) are represented as certain (i.e., not uncertain) objects in feedback with block-diagonal concatenations of uncertain elements.

[M,Delta] = lftdata(A) separates the uncertain object A into a certain object M and a normalized uncertain matrix Delta, such that A is equal to lft(Delta,M), as shown below.
[Figure: the normalized uncertainty Delta closed in upper-LFT feedback around the certain matrix M]

If A is a umat, then M will be double; if A is a uss, then M will be ss; if A is a ufrd, then M will be frd. In all cases, Delta is a umat.

[M,Delta] = lftdata(A,List) separates the uncertain object A into an uncertain object M, in feedback with a normalized uncertain part Delta. List is a cell array (or char) of names of uncertain elements of A that make up Delta. All other uncertainty in A remains in M. lftdata(A,fieldnames(A.Uncertainty)) is the same as lftdata(A).

[M,DELTA,BLKSTRUCT] = lftdata(A) returns an N-by-1 structure array BLKSTRUCT, where BLKSTRUCT(i) describes the i-th normalized uncertain element. This uncertainty description can be passed directly to the low-level structured singular value analysis function mussv.

[M,DELTA,BLKSTRUCT,NORMUNC] = lftdata(A) returns the cell array NORMUNC of normalized uncertain elements. Each normalized element has the string 'Normalized' appended to its original name to avoid confusion. Note that lft(blkdiag(NORMUNC{:}),M) is the same as A.
Example
Create an uncertain matrix A with 2 uncertain parameters, p1 and p2. You can decompose A into its certain and normalized uncertain parts, and inspect the difference between the original uncertain matrix A and the result formed by combining the two results from the decomposition.

p1 = ureal('p1',3,'perc',40);
p2 = ucomplex('p2',2);
A = [p1 p1+p2;1 p2];
[M,Delta] = lftdata(A);
simplify(A-lft(Delta,M))
ans =
     0     0
     0     0
M
M =
         0         0    1.0954    1.0954
         0         0         0    1.0000
    1.0954    1.0000    3.0000    5.0000
         0    1.0000    1.0000    2.0000

You can check the worst-case norm of the uncertain part using wcnorm.

wcn = wcnorm(Delta)
wcn =
    lbound: 1.0000
    ubound: 1.0001

Compare samples of the uncertain part Delta with the uncertain matrix A.

usample(Delta,5)
ans(:,:,1) =
   -0.8012             0
         0        0.2863 + 0.6946i
ans(:,:,2) =
    0.4919             0
         0        0.2499 + 0.6033i
ans(:,:,3) =
    0.1040             0
         0        0.7322 - 0.3752i
ans(:,:,4) =
    0.6886             0
         0        0.6831 + 0.1124i
ans(:,:,5) =
    0.8296             0
         0        0.0838 + 0.3562i

Uncertain Systems
Create an uncertain matrix A with 2 uncertain real parameters, p1 and p2, and create an uncertain system sys using A as the dynamic matrix and simple matrices for the input and output. You can decompose sys into a certain system Msys and a normalized uncertain matrix Delta.

A = [ureal('p1',-3,'Percentage',40) 1;1 ureal('p2',-2)];
sys = ss(A,[1;0],[0 1],0);
sys.InputGroup.ActualIn = 1;
sys.OutputGroup.ActualOut = 1;
[Msys,Delta] = lftdata(sys);

You can see from Msys that it is certain, and that the input and output groups have been adjusted.

Msys

a =
        x1   x2
   x1   -3    1
   x2    1   -2

b =
           u1      u2      u3
   x1   1.095       0       1
   x2       0       1       0

c =
           x1      x2
   y1   1.095       0
   y2       0       1
   y3       0       1
d =
         u1    u2    u3
   y1     0     0     0
   y2     0     0     0
   y3     0     0     0

Input groups:
    Name        Channels
    ActualIn       3
    p1_NC          1
    p2_NC          2

Output groups:
    Name        Channels
    ActualOut      3
    p1_NC          1
    p2_NC          2

Continuous-time model.

You can compute the norm on samples of the difference between the original uncertain system and the result formed by combining Msys and Delta.

norm(usample(sys-lft(Delta,Msys),4),'inf')
ans =
     0     0     0     0

Partial decomposition
Create an uncertain matrix A, and derive an uncertain matrix B using an implicit-to-explicit conversion, imp2exp. Note that B has 2 uncertain parameters, R and K. You can decompose B into its certain and normalized uncertain parts.

R = ureal('R',1,'Percentage',[30 30]);
K = ureal('K',2e-3,'Percentage',[10 40]);
A = [1 -R 0 -K;0 K -1 0];
Yidx = [1 3];
Uidx = [4 2];
B = imp2exp(A,Yidx,Uidx);
[M,Delta] = lftdata(B);

[MK,DeltaK] = lftdata(B,'K');
MK
UMAT: 3 Rows, 4 Columns
  R: real, nominal = 1, variability = [30 30]%, 1 occurrence

[MR,DeltaR] = lftdata(B,'R');
MR
UMAT: 4 Rows, 3 Columns
  K: real, nominal = 0.002, variability = [10 40]%, 2 occurrences

The same operation can be performed by defining the uncertain parameters, K and R, to be extracted.

[Mall,Deltaall] = lftdata(B,{'K','R'});

Sample and inspect the uncertain part as well as the difference between the original uncertain matrix and the result formed by combining the two results from the decomposition.

simplify(B-lft(Delta,M))
ans =
     0     0
     0     0
simplify(B-lft(DeltaK,MK))
ans =
     0     0
     0     0
simplify(B-lft(DeltaR,MR))
ans =
     0     0
     0     0
simplify(Mall)-M
ans =
     0     0     0     0     0
     0     0     0     0     0
     0     0     0     0     0
     0     0     0     0     0
     0     0     0     0     0

See Also
lft      Forms Redheffer star product of systems
ssdata   Returns uncertain state-space data
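The relationship A = lft(Delta,M) that lftdata establishes is just the upper linear fractional transformation. The sketch below (Python/NumPy with a hand-built 2-by-2 M as an illustration — not lftdata's actual output) shows how a ureal with nominal 3 and plus/minus 40% variability, i.e. p = 3 + 1.2*delta with the normalized |delta| <= 1, is carried by a certain matrix:

```python
import numpy as np

def lft_upper(M, Delta):
    """Upper LFT: close Delta around the first k channels of M,
    returning M22 + M21 @ Delta @ inv(I - M11 @ Delta) @ M12."""
    k = Delta.shape[0]
    M11, M12 = M[:k, :k], M[:k, k:]
    M21, M22 = M[k:, :k], M[k:, k:]
    W = np.linalg.inv(np.eye(k) - M11 @ Delta)
    return M22 + M21 @ Delta @ W @ M12

# Illustrative certain part for p = 3 + 1.2*delta (affine, so M11 = 0):
M = np.array([[0.0, 1.2],
              [1.0, 3.0]])
p = lambda d: lft_upper(M, np.array([[d]]))[0, 0]
```

Sweeping delta over [-1, 1] recovers the parameter range [1.8, 4.2]; for rational (non-affine) dependence the M11 block becomes nonzero and the same formula still applies.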
lmiedit

Purpose
Specify or display systems of LMIs as MATLAB expressions

Syntax
lmiedit

Description
lmiedit is a graphical user interface for the symbolic specification of LMI problems. Typing lmiedit calls up a window with two editable text areas and various buttons. To specify an LMI system:
1 Give it a name (top of the window).
2 Declare each matrix variable (name and structure) in the upper half of the window. The structure is characterized by its type (S for symmetric block diagonal, R for unstructured, and G for other structures) and by an additional structure matrix similar to the second input argument of lmivar. Please use one line per matrix variable in the text editing areas.
3 Specify the LMIs as MATLAB expressions in the lower half of the window. An LMI can stretch over several lines. However, do not specify more than one LMI per line.
Once the LMI system is fully specified, you can perform the following operations by pressing the corresponding button:
• Visualize the sequence of lmivar/lmiterm commands needed to describe this LMI system (view commands buttons)
• Conversely, display the symbolic expression of the LMI system produced by a particular sequence of lmivar/lmiterm commands (click the describe... buttons)
• Save the symbolic description of the LMI system as a MATLAB string (save button). This description can be reloaded later on by pressing the load button
• Read a sequence of lmivar/lmiterm commands from a file (read button). The matrix expression of the LMI system specified by these commands is then displayed by clicking on describe the LMIs
• Write in a file the sequence of lmivar/lmiterm commands needed to specify a particular LMI system (write button)
• Generate the internal representation of the LMI system by pressing create. The result is written in a MATLAB variable with the same name as the LMI system
See Also
lmivar    Specify the matrix variables in an LMI problem
lmiterm   Specify the term content of LMIs
newlmi    Attach an identifying tag to LMIs
lmiinfo   Interactively retrieve information about the variables and term content of LMIs
lmiinfo

Purpose
Interactively retrieve information about the variables and term content of LMIs

Syntax
lmiinfo(lmisys)

Description
lmiinfo provides qualitative information about the system of LMIs lmisys. This includes the type and structure of the matrix variables, the number of diagonal blocks in the inner factors, and the term content of each block.

lmiinfo is an interactive facility where the user seeks specific pieces of information. General LMIs are displayed as

    N' * L(x) * N < M' * R(x) * M

where N,M denote the outer factors and L,R the left and right inner factors. If the outer factors are missing, the LMI is simply written as

    L(x) < R(x)

If its right-hand side is zero, it is displayed as

    N' * L(x) * N < 0

Information on the block structure and term content of L(x) and R(x) is also available. The term content of a block is symbolically displayed as

    C1 + A1*X2*B1 + B1'*X2*A1' + a2*X1 + x3*Q1

with the following conventions:
• X1, X2, x3 denote the problem variables. Uppercase X indicates matrix variables while lowercase x indicates scalar variables. The labels 1,2,3 refer to the first, second, and third matrix variable in the order of declaration.
• Cj refers to constant terms. Special cases are I and -I (I = identity matrix).
• Aj, Bj denote the left and right coefficients of variable terms. The index j in Aj, Bj, Cj is a dummy label. Hence C1 may appear in several blocks or several LMIs without implying any connection between the corresponding constant terms.
• Qj is used exclusively with scalar variables as in x3*Q1.
Exceptions to this rule are the notations
A1*X2*A1' and A1*X2*B1 + B1'*X2'*A1', which indicate symmetric terms and symmetric pairs in diagonal blocks.

Example
Consider the LMI

        [ -2X + A'YB + B'Y'A + I    XC  ]
    0 < [                               ]
        [ C'X                      -zI  ]

where the matrix variables are X of Type 1, Y of Type 2, and z scalar. If this LMI is described in lmis, information about X and the LMI block structure can be obtained as follows:

lmiinfo(lmis)

LMI ORACLE
This is a system of 1 LMI with 3 variable matrices
Do you want information on
   (v) matrix variables   (l) LMIs   (q) quit
?> v
Which variable matrix (enter its index k between 1 and 3) ? 1
X1 is a 2x2 symmetric block diagonal matrix
its (1,1)-block is a full block of size 2
This is a system of 1 LMI with 3 variable matrices
Do you want information on
   (v) matrix variables   (l) LMIs   (q) quit
?> l
Which LMI (enter its number k between 1 and 1) ? 1
This LMI is of the form
    0 < R(x)

where the inner factor(s) has 2 diagonal block(s)
Do you want info on the right inner factor ?
   (w) whole factor   (b) only one block
   (o) other LMI      (t) back to top level
?> w
Info about the right inner factor
block (1,1) : I + a1*X1 + A2*X2*B2 + B2'*X2'*A2'
block (2,1) : A3*X1
block (2,2) : x3*A4

This is a system of 1 LMI with 3 variable matrices
Do you want information on
   (v) matrix variables   (l) LMIs   (q) quit
?> q

It has been a pleasure serving you!

Note that the prompt symbol is ?> and that answers are either indices or letters. All blocks can be displayed at once with option (w), or you can prompt for specific blocks with option (b).

Remark
lmiinfo does not provide access to the numerical value of LMI coefficients.
See Also
decinfo   Describe how the entries of a matrix variable X relate to the decision variables
lminbr    Return the number of LMIs in an LMI system
matnbr    Return the number of matrix variables in a system of LMIs
decnbr    Give the total number of decision variables in a system of LMIs
lminbr

Purpose
Return the number of LMIs in an LMI system

Syntax
k = lminbr(lmisys)

Description
lminbr returns the number k of linear matrix inequalities in the LMI problem described in lmisys.

See Also
lmiinfo   Interactively retrieve information about the variables and term content of LMIs
matnbr    Return the number of matrix variables in a system of LMIs
lmireg

Purpose
Specify LMI regions for pole placement purposes

Syntax
region = lmireg
region = lmireg(reg1,reg2,...)

Description
lmireg is an interactive facility to specify the LMI regions involved in multi-objective H∞ synthesis with pole placement constraints (see msfsyn). Recall that an LMI region is any convex subset D of the complex plane that can be characterized by an LMI in z and z̄, i.e.,

    D = {z ∈ C : L + Mz + M'z̄ < 0}

for some fixed real matrices M and L = L'. This class of regions encompasses half planes, strips, conic sectors, disks, ellipses, and any intersection of the above.

Calling lmireg without argument starts an interactive query/answer session where you can specify the region of your choice. The matrix region = [L, M] is returned upon termination. This matrix description of the LMI region can be passed directly to msfsyn for synthesis purposes.

The function lmireg can also be used to intersect previously defined LMI regions reg1, reg2,... The output region is then the [L, M] description of the intersection of these regions.

See Also
msfsyn   Multi-model/multi-objective state-feedback synthesis
lmiterm

Purpose
Specify the term content of LMIs

Syntax
lmiterm(termID,A,B,flag)

Description
lmiterm specifies the term content of an LMI one term at a time. Recall that LMI term refers to the elementary additive terms involved in the block-matrix expression of the LMI. LMI terms are one of the following entities:
• outer factors
• constant terms (fixed matrices)
• variable terms AXB or AX'B, where X is a matrix variable and A and B are given matrices called the term coefficients

When describing an LMI with several blocks, remember to specify only the terms in the blocks on or below the diagonal (or equivalently, only the terms in blocks on or above the diagonal). By convention, the left-hand side always refers to the smaller side of the LMI.

In the calling of lmiterm, termID is a four-entry vector of integers specifying the term location and the matrix variable involved:

    termID(1) = +p or -p

where positive p is for terms on the left-hand side of the p-th LMI and negative p is for terms on the right-hand side of the p-th LMI. The index p is relative to the order of declaration and corresponds to the identifier returned by newlmi.

    termID(2:3) = [0,0]  for outer factors
                  [i,j]  for terms in the (i,j)-th block of the left
                         or right inner factor
termID(4) = 0 for outer factors, x for variable terms AXB, and -x for variable terms AX'B, where x is the identifier of the matrix variable X as returned by lmivar.

The arguments A and B contain the numerical data and are set according to:

Type of Term                A                                      B
outer factor N              matrix value of N                      omit
constant term C             matrix value of C                      omit
variable term AXB or AX'B   matrix value of A (1 if A is absent)   matrix value of B (1 if B is absent)

Note that identity outer factors and zero constant terms need not be specified.

The extra argument flag is optional and concerns only conjugated expressions of the form

(AXB) + (AXB)' = AXB + B'X(')A'

in diagonal blocks. Setting flag = 's' allows you to specify such expressions with a single lmiterm command. For instance,

lmiterm([1 1 1 X],A,1,'s')

adds the symmetrized expression AX + X'A' to the (1,1) block of the first LMI and summarizes the two commands

lmiterm([1 1 1 X],A,1)
lmiterm([1 1 1 X],1,A')

Aside from being convenient, this shortcut also results in a more efficient representation of the LMI.
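The conventions above can be exercised end to end on a small feasibility problem. The following sketch (with an assumed test matrix A) enters the pair of LMIs A'P + PA < 0 and I < P, then solves with the LMI Lab solver feasp:

```matlab
% Minimal sketch (assumed data): Lyapunov LMIs entered one term at a time.
A = [-1 2; 0 -3];                % any stable test matrix
setlmis([]);
P = lmivar(1,[2 1]);             % P: 2x2 symmetric (Type 1)
lmiterm([1 1 1 P],A',1,'s');     % LMI #1, left side: A'*P + P*A
lmiterm([2 1 1 0],eye(2));       % LMI #2, left side: identity
lmiterm([-2 1 1 P],1,1);         % LMI #2, right side: P, i.e. I < P
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);      % tmin < 0 indicates a feasible system
P0 = dec2mat(lmis,xfeas,P);      % extract a feasible Lyapunov matrix
```

Note how the single 's'-flagged call replaces two separate lmiterm commands for the symmetrized (1,1) term, and how the negative first entry of termID places P on the right-hand side of the second LMI.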
Example
Consider the LMI

[ 2A*X2*A' - x3*E + D*D'    B'*X1 ]        [ C*X1*C' + C*X1'*C'      0     ]
[        X1'*B               -I   ]  <  M' [         0             -f*X2   ] M

where X1, X2 are matrix variables of Types 2 and 1, respectively, and x3 is a scalar variable (Type 1).

After initializing the LMI description with setlmis and declaring the matrix variables with lmivar, the terms on the left-hand side of this LMI are specified by:

lmiterm([1 1 1 X2],2*A,A')      % 2*A*X2*A'
lmiterm([1 1 1 x3],-1,E)        % -x3*E
lmiterm([1 1 1 0],D*D')         % D*D'
lmiterm([1 2 1 -X1],1,B)        % X1'*B
lmiterm([1 2 2 0],-1)           % -I

Here X1, X2, x3 should be the variable identifiers returned by lmivar.

Similarly, the term content of the right-hand side is specified by:

lmiterm([-1 0 0 0],M)           % outer factor M
lmiterm([-1 1 1 X1],C,C','s')   % C*X1*C' + C*X1'*C'
lmiterm([-1 2 2 X2],-f,1)       % -f*X2

Note that C*X1*C' + C*X1'*C' is specified by a single lmiterm command with the flag 's' to ensure proper symmetrization.

See Also
setlmis    Initialize the description of an LMI system
lmivar     Specify the matrix variables in an LMI problem
getlmis    Get the internal description of an LMI system
lmiedit    Specify or display systems of LMIs as MATLAB expressions
newlmi     Attach an identifying tag to LMIs
lmivar

Purpose    Specify the matrix variables in an LMI problem

Syntax     X = lmivar(type,struct)
           [X,n,sX] = lmivar(type,struct)

Description
lmivar defines a new matrix variable X in the LMI system currently described. The first argument type selects among available types of variables and the second argument struct gives further information on the structure of X depending on its type. Available variable types include:

type=1: Symmetric matrices with a block-diagonal structure. Each diagonal block is either full (arbitrary symmetric matrix), scalar (a multiple of the identity matrix), or identically zero. If X has R diagonal blocks, struct is an R-by-2 matrix where
• struct(r,1) is the size of the r-th block
• struct(r,2) is the type of the r-th block (1 for full, 0 for scalar, -1 for zero block)

type=2: Full m-by-n rectangular matrix. Set struct = [m,n] in this case.

type=3: Other structures. With Type 3, each entry of X is specified as zero or ±xn, where xn is the n-th decision variable. Accordingly, struct is a matrix of the same dimensions as X such that
• struct(i,j) = 0 if X(i,j) is a hard zero
• struct(i,j) = n if X(i,j) = xn
• struct(i,j) = -n if X(i,j) = -xn

Sophisticated matrix variable structures can be defined with Type 3. To specify a variable X of Type 3, first identify how many free independent entries are involved in X. These constitute the set of decision variables associated with X. If the problem already involves n decision variables, label the new free variables as xn+1, ..., xn+p. The structure of X is then defined in terms of xn+1, ..., xn+p as indicated above.

The optional output X is an identifier that can be used for subsequent reference to this new variable. To help specify matrix variables of Type 3, lmivar optionally returns two extra outputs: (1) the total number n of scalar
decision variables used so far, and (2) a matrix sX showing the entry-wise dependence of X on the decision variables x1, ..., xn.

Example 1
Consider an LMI system with three matrix variables X1, X2, X3 such that
• X1 is a 3-by-3 symmetric matrix (unstructured),
• X2 is a 2-by-4 rectangular matrix (unstructured),
• X3 is block diagonal with blocks ∆, δ1, and δ2*I2, where ∆ is an arbitrary 5-by-5 symmetric matrix, δ1 and δ2 are scalars, and I2 denotes the identity matrix of size 2.

These three variables are defined by

setlmis([])
X1 = lmivar(1,[3 1])            % Type 1
X2 = lmivar(2,[2 4])            % Type 2 of dim. 2x4
X3 = lmivar(1,[5 1;1 0;2 0])    % Type 1

The last command defines X3 as a variable of Type 1 with one full block of size 5 and two scalar blocks of sizes 1 and 2, respectively.

Example 2
Combined with the extra outputs n and sX of lmivar, Type 3 allows you to specify fairly complex matrix variable structures. For instance, consider a matrix variable X with structure

X = [ X1  0
      0   X2 ]

where X1 and X2 are 2-by-3 and 3-by-2 rectangular matrices, respectively. You can specify this structure as follows:

1 Define the rectangular variables X1 and X2 by

setlmis([])
[X1,n,sX1] = lmivar(2,[2 3])
[X2,n,sX2] = lmivar(2,[3 2])

The outputs sX1 and sX2 give the decision variable content of X1 and X2:

sX1 =
     1     2     3
     4     5     6
sX2 =
     7     8
     9    10
    11    12

For instance, sX2(1,1)=7 means that the (1,1) entry of X2 is the seventh decision variable.

2 Use Type 3 to specify the matrix variable X and define its structure in terms of those of X1 and X2:

[X,n,sX] = lmivar(3,[sX1,zeros(2);zeros(3),sX2])

The resulting variable X has the prescribed structure, as confirmed by

sX =
     1     2     3     0     0
     4     5     6     0     0
     0     0     0     7     8
     0     0     0     9    10
     0     0     0    11    12

See Also
setlmis    Initialize the description of an LMI system
lmiterm    Specify the term content of LMIs
getlmis    Get the internal description of an LMI system
lmiedit    Specify or display systems of LMIs as MATLAB expressions
skewdec    Form a skew-symmetric matrix
delmvar    Delete one of the matrix variables of an LMI problem
setmvar    Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
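Type 3 can also encode sign-linked entries, not just zero patterns. A small sketch: the 2-by-2 skew-symmetric variable X = [0 x1; -x1 0] is built from a single decision variable by referencing it with opposite signs:

```matlab
% Sketch: a skew-symmetric matrix variable via Type 3.
setlmis([]);
[X,n,sX] = lmivar(3,[0 1;-1 0]);  % entry (2,1) is minus decision variable x1
% n returns 1 and sX returns [0 1;-1 0]
```

For larger skew-symmetric structures, the skewdec helper listed in See Also generates the struct matrix automatically.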
frd/loglog

Purpose    Log-log scale plot of frd objects

Syntax     loglog(sys)
           loglog(sys,linetype)

Description
loglog is the same as plot, except that a logarithmic (base 10) scale is used for both the X- and Y-axis. The argument list consists of (many) Xdata, Ydata, and line-type triples. The Xdata and Ydata can be specified individually, as doubles, or jointly as an frd.

Example
Generate three frequency response objects, sys1g, sys2g and sys3g, and plot the magnitude of these transfer functions versus frequency on a log-log plot.

omega = logspace(-2,2,40);
sys1 = tf(0.9,[.1 1]);
sys1g = frd(sys1,omega);
sys2 = tf(0.9,[1 1]);
sys2g = frd(sys2,omega);
sys3 = tf(0.9,[10 1]);
sys3g = frd(sys3,omega);
loglog(abs(sys1g),'r+',abs(sys2g),omega,abs(sys3g.Resp(:)))
[Figure: log-log plot of the three frequency response magnitudes versus frequency]

See Also
plot        Plot on linear axes
semilogx    Semilog-scale plot (logarithmic X-axis)
semilogy    Semilog-scale plot (logarithmic Y-axis)
loopmargin

Purpose    Comprehensive analysis of feedback loops

Syntax     [smi,dmi,mmi,smo,dmo,mmo,mmio] = loopmargin(P,C)
           [sm,dm,mm] = loopmargin(L)

Description
[smi,dmi,mmi,smo,dmo,mmo,mmio] = loopmargin(P,C) analyzes the multivariable feedback loop consisting of the controller, C, in negative feedback with the plant, P. C should only be the compensator in the feedback path, without reference channels, if it is a 2-dof architecture.

[sm,dm,mm] = loopmargin(L) analyzes the multivariable feedback loop consisting of the loop transfer matrix L (size N-by-N) in negative feedback with an N-by-N identity matrix. If L is an ufrd or uss object, the nominal value, L.NominalValue, is used in the loop analysis. The wcmargin command can be used to analyze single-loop margins in the presence of the worst-case uncertainty.

sm, the single-loop margin, is an N-by-1 structure corresponding to loop-at-a-time gain and phase margins for each channel.

dm is an N-by-1 structure corresponding to loop-at-a-time disk gain and phase margins for each channel, calculated based on the balanced sensitivity function (see Algorithm). The disk margin for the i-th feedback channel defines a circular region, centered on the negative real axis at the average gain margin, (GMlow+GMhigh)/2, such that L(i,i) does not enter that region.

mm, the multiloop disk margin, is a structure corresponding to simultaneous, independent gain and phase variations in the individual channels of the loop transfer matrix L. Note that mm is a single structure, independent of the number of channels, since variations in all channels are handled simultaneously. mm calculates the largest region such that, for all gain and phase variations occurring independently in each channel
which lie inside the region, the closed-loop system is stable. As in the case of the disk margin, the guaranteed bounds are calculated based on a balanced sensitivity function. Gain and phase disk margin bounds are derived from the radius of the circle. The disk margin and multiloop disk margin calculations are performed in the frequency domain. If L is a ss/tf/zpk/uss object, the frequency range and number of points used to calculate the dm and mm margins are chosen automatically (see the bode command).
If the closed-loop system has a 2-dof architecture, the reference channel of the controller should be eliminated, resulting in a 1-dof architecture, as shown in the figure below.

[Figure: a 2-dof controller architecture reduced to the 1-dof feedback path analyzed by loopmargin]

smi, dmi and mmi correspond to the loop-at-a-time gain and phase margins, disk margins and multiinput/multioutput margins at the plant input, respectively. The structures smo, dmo and mmo have the same fields as smi, dmi and mmi, though they correspond to the plant outputs. mmio, the multiinput/multioutput margin, is a structure corresponding to simultaneous, independent variations in all the individual input and output channels of the feedback loops; mmio has the same fields as mmi and mmo. The nominal values of P and C are used in the analysis if P and/or C are uncertain system or frequency response objects. If the closed-loop system is a ss/tf/zpk/uss object, the frequency range and number of points used to calculate the sm, dm and mm margins are chosen automatically.

Basic syntax: [sm,dm,mm] = loopmargin(L). sm is calculated using the allmargin command and has the same fields as allmargin; sm is a structure with the following fields:

Field          Description
GMFrequency    All 180-degree crossover frequencies (in rad/s)
GainMargin     Corresponding gain margins (g.m. = 1/G, where G is the gain at crossover)
PMFrequency    All 0-dB crossover frequencies (in rad/s)
PhaseMargin    Corresponding phase margins (in degrees)
Field          Description
DMFrequency    Crossover frequencies associated with the delay margins (in rad/s)
DelayMargin    Delay margins (in seconds for continuous-time systems, and in multiples of the sample time for discrete-time systems)
Stable         1 if the nominal closed loop is stable, 0 otherwise. If L is a frd or ufrd object, the Stable flag is set to NaN.

dm is a structure with the following fields:

Field          Description
GainMargin     Smallest gain variation (GM) such that a disk centered at the point (GM(1) + GM(2))/2 would just touch the loop transfer function
PhaseMargin    Smallest phase variation, in degrees, corresponding to the disk described in the GainMargin field (degrees)
Frequency      Associated with the GainMargin/PhaseMargin fields (rad/s)

mm is a structure with the following fields:

Field          Description
GainMargin     Guaranteed bound on simultaneous, independent, gain variations allowed in all plant channel(s)
PhaseMargin    Guaranteed bound on simultaneous, independent, phase variations allowed in all plant channel(s) (degrees)
Frequency      Associated with the GainMargin/PhaseMargin fields (rad/s)

Example: MIMO Loop-at-a-Time Margins
This example is designed to illustrate that loop-at-a-time margins (gain, phase, and/or distance to -1) can be inaccurate measures of multivariable robustness margins. We will see that margins of the individual loops may be very sensitive to small perturbations within other loops. The nominal closed-loop system considered here is shown as follows.

[Figure: nominal closed-loop system, plant G in negative feedback with controller K]

G and K are 2-by-2 multiinput/multioutput (MIMO) systems, with G defined as

G := 1/(s^2 + α^2) [  s - α^2     α(s+1)
                     -α(s+1)      s - α^2 ]

Set α := 10, construct G in state-space form, and compute its margins:

a = [0 10;-10 0];
b = eye(2);
c = [1 8;-10 1];
d = zeros(2,2);
G = ss(a,b,c,d);
K = [1 -2;0 1];
[smi,dmi,mmi,smo,dmo,mmo,mmio] = loopmargin(G,K);

First consider the margins at the input to the plant. The first input channel has infinite gain margin and 90 degrees of phase margin based on the results from the allmargin command, smi(1). The disk margin analysis, dmi(1), of the first channel provides similar results.

smi(1)
ans = 
    GMFrequency: [1x0 double]
     GainMargin: [1x0 double]
    PMFrequency: 21
    PhaseMargin: 90
    DMFrequency: 21
    DelayMargin: 0.0748
         Stable: 1

dmi(1)
ans = 
    GainMargin: [0 Inf]
   PhaseMargin: [-90 90]
     Frequency: 1.1168

The second input channel has a gain margin of 2.105 and infinite phase margin based on the single-loop analysis, smi(2). The disk margin analysis, dmi(2), which allows for simultaneous gain and phase variations a loop at a time, results in maximum gain margin variations of 0.475 and 2.105 and phase margin variations of +/- 39.19 degs.

smi(2)
ans = 
    GMFrequency: 0
     GainMargin: 2.1053
    PMFrequency: [1x0 double]
    PhaseMargin: [1x0 double]
    DMFrequency: [1x0 double]
    DelayMargin: [1x0 double]
         Stable: 1

dmi(2)
ans = 
    GainMargin: [0.4749 2.1056]
   PhaseMargin: [-39.1912 39.1912]
     Frequency: 0.0200

The multiple margin analysis of the plant inputs, mmi, corresponds to allowing simultaneous, independent, gain and phase variations in each channel. Allowing independent variation of the input channels further reduces the tolerance of the closed-loop system to variations at the input to the plant. The multivariable margin analysis leads to a maximum allowable gain margin variation of 0.728 and 1.373 and phase margin variations of +/- 17.87 degs. Hence even though the first channel had infinite gain margin and 90 degrees of phase margin, allowing variation in both input channels leads to a factor of two reduction in the gain and phase margins.
mmi
mmi = 
    GainMargin: [0.7283 1.3730]
   PhaseMargin: [-17.8659 17.8659]
     Frequency: 9.5238e-004

The guaranteed region of phase and gain variations for the closed-loop system can be illustrated graphically. The disk margin analysis, dmi(2), indicates that the closed-loop system will be stable for independent, simultaneous, gain margin variations up to 0.475 and 2.105 (+/- 6.465 dB) and phase margin variations up to +/- 39.18 degs in the second input channel. This is denoted by the region associated with the large ellipse in the following figure. The multivariable margin analysis at the input to the plant, mmi, indicates the closed-loop system will remain stable for simultaneous gain variations of 0.728 and 1.373 (+/- 2.753 dB) and phase margin variations up to +/- 17.87 degs (the dark ellipse region) in both input channels.

[Figure: gain variation (dB) versus phase variation (degrees), showing the guaranteed disk margin region for input 2 (large ellipse) and the guaranteed multivariable input margin region (dark ellipse)]

The output channels have single-loop margins of infinite gain and 90 degs phase variation. The disk margin analysis, dmo, leads to a maximum allowable gain margin variation of 0.607 and 1.649 and phase margin variations of +/- 27.53 degs.
The output multivariable margin analysis, mmo, indicates that the closed-loop system will remain stable for simultaneous, independent, gain variations of 0.607 and 1.649 and phase margin variations of +/- 27.53 degs in both output channels.

mmo
mmo = 
    GainMargin: [0.6065 1.6489]
   PhaseMargin: [-27.5293 27.5293]
     Frequency: 0.8402

Hence even though both output channels had infinite gain margin and 90 degrees of phase margin, simultaneous variations in both channels significantly reduce the margins at the plant outputs. If all the input and output channels are allowed to vary independently, mmio, the tolerance reduces further, to gain margin variations of 0.827 and 1.210 and phase margin variations of +/- 10.84 degs.

mmio
mmio = 
    GainMargin: [0.8267 1.2097]
   PhaseMargin: [-10.8402 10.8402]
     Frequency: 0.2287

Algorithm
Two well-known loop robustness measures are based on the sensitivity function S = (I-L)^-1 and the complementary sensitivity function T = L(I-L)^-1, where L is the loop gain matrix associated with the input or output loops broken simultaneously. In the following figure, S is the transfer matrix from summing-junction input u to summing-junction output e, and T is the transfer matrix from u to y. If signals e and y are summed, the transfer matrix from u to e+y is given by (I+L)(I-L)^-1. It can be shown (Blight, Dailey, and Gangsaas 1994) that each broken-loop gain may be perturbed by the complex gain (1+∆)/(1-∆), where |∆| < 1/µ(S+T) or |∆| < 1/σmax(S+T) at each frequency, without causing instability at that frequency. The peak value of µ(S+T) or σmax(S+T) gives a robustness guarantee
for all frequencies, and for µ(S+T) the guarantee is nonconservative (Blight, Dailey, and Gangsaas 1994).

[Figure: block diagram of the loop broken at the summing junction, showing e = Su = (I-L)^-1 u, y = Tu = L(I-L)^-1 u, and e + y = (S+T)u = (I+L)(I-L)^-1 u]
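As a rough numerical sketch of this computation, using σmax in place of µ (so the resulting guarantee is conservative), with G and K taken from the MIMO example earlier on this page and the standard negative-feedback convention S = (I+L)^-1:

```matlab
% Sketch: guaranteed simultaneous gain bounds from the balanced sensitivity S+T.
a = [0 10;-10 0]; b = eye(2); c = [1 8;-10 1];
G = ss(a,b,c,zeros(2));
K = [1 -2;0 1];
Li = K*G;                              % loop broken at the plant input
S = inv(eye(2)+Li);                    % sensitivity
T = Li*S;                              % complementary sensitivity
sv = sigma(S+T,logspace(-3,2,400));    % singular values over frequency
d = 1/max(sv(1,:));                    % allowable |Delta| in (1+Delta)/(1-Delta)
GMbounds = [(1-d)/(1+d) (1+d)/(1-d)]   % guaranteed simultaneous gain bounds
```

Because σmax(S+T) ≥ µ(S+T), the bounds produced this way are never looser than the µ-based multivariable margins reported by loopmargin.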
This figure shows a comparison of a disk margin analysis with the classical notions of gain and phase margins.

[Figure: disk gain margin (DGM) and disk phase margin (DPM) in the Nyquist plot, showing the Nyquist plot of L, the unit disk, the critical point, the disk margin circle, and the classical GM and PM locations]
The Nyquist plot is of the loop transfer function

L(s) = (s/30 + 1) / ((s+1)(s^2 + 1.6s + 16))

• The Nyquist plot of L corresponds to the blue line.
• The unit disk corresponds to the dotted red line.
• GM and PM indicate the locations of the classical gain and phase margins for the system L.
• DGM and DPM correspond to the disk gain and phase margins. The disk margins provide a lower bound on the classical gain and phase margins.
• The disk margin circle corresponds to the dashed black line. The disk margin corresponds to the largest disk centered at (DGM + 1/DGM)/2 that just touches the loop transfer function L. This location is indicated by the red dot. This corresponds to the disk margin calculation used to find dmi and dmo.

The disk margin and multiple-channel margin calculations involve the balanced sensitivity function S+T. For a given disk size, any simultaneous phase and gain variations applied to each loop independently will not destabilize the system if the perturbations remain inside the corresponding circle or disk. Instead of calculating µ(S+T) a single loop at a time, the multiple-channel margin calculation formulates a µ-analysis problem in which each channel is perturbed by an independent, complex perturbation and all the channels are included in the analysis. The peak µ(S+T) value guarantees that simultaneous, independent phase and gain variations applied to all loops at once will not destabilize the system if they remain inside the corresponding circle or disk of size 1/µ(S+T).
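The disk geometry above implies simple closed-form relations between the disk size, the gain bounds, and the phase bound (this is the standard disk-margin parameterization, not a toolbox call): a disk of size alpha gives gain bounds (2-alpha)/(2+alpha) and (2+alpha)/(2-alpha) and phase bound 2*atan(alpha/2). Using the channel-2 disk margin values from the example earlier on this page:

```matlab
% Sketch: recover the lower gain bound and phase bound from the upper
% disk gain margin reported in dmi(2).
GMhigh = 2.1056;                     % upper disk gain margin from dmi(2)
alpha  = 2*(GMhigh-1)/(GMhigh+1);    % implied disk size
GMlow  = (2-alpha)/(2+alpha)         % approx 0.4749, matching dmi(2)
DPM    = 2*atan(alpha/2)*180/pi      % approx 39.19 degrees, matching dmi(2)
```

This explains why the GainMargin and PhaseMargin fields of dm always come as a linked pair: both are functions of the single disk radius.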
See Also
allmargin     Find all stability margins and crossover frequencies
bode          Plot the Bode frequency response of LTI models
loopsens      Calculate sensitivity functions of a feedback loop
mussv         Calculate bounds on the structured singular value (µ)
robuststab    Calculate stability margins of uncertain systems
wcgain        Calculate the worst-case gain of a system
wcsens        Calculate worst-case sensitivities for a feedback loop
wcmargin      Calculate worst-case margins for a feedback loop

References
Barrett, M.F., Conservatism with Robustness Tests for Linear Feedback Control Systems, Ph.D. thesis, Control Science and Dynamical Systems, University of Minnesota, 1980.

Blight, J.D., R.L. Dailey, and D. Gangsaas, "Practical control law design for aircraft using multivariable techniques," International Journal of Control, Vol. 59, No. 1, 1994, pp. 93-137.

Bates, D., and I. Postlethwaite, Robust Multivariable Control of Aerospace Systems, Delft University Press, Delft, The Netherlands, ISBN 90-407-2317-6, 2002.
loopsens

Purpose    Sensitivity functions of a plant-controller feedback loop

Syntax     loops = loopsens(P,C)

Description
loops = loopsens(P,C) creates a struct, loops, whose fields contain the multivariable sensitivity, complementary sensitivity, and open-loop transfer functions. The closed-loop system consists of the controller, C, in negative feedback with the plant, P. C should only be the compensator in the feedback path, not any reference channels, if it is a 2-dof controller as seen in the figure below. The plant and compensator, P and C, can be constant matrices (double), lti objects (frd/ss/tf/zpk), or uncertain objects (umat/ufrd/uss).

[Figure: a 2-dof controller architecture reduced to the 1-dof feedback path analyzed by loopsens]

The loops structure contains the following fields:

Field    Description
Poles    Closed-loop poles. NaN for frd/ufrd objects.
Stable   1 if the nominal closed loop is stable, 0 otherwise. NaN for frd/ufrd objects.
Si       Input-to-plant sensitivity function
Ti       Input-to-plant complementary sensitivity function
Li       Input-to-plant loop transfer function
So       Output-to-plant sensitivity function
To       Output-to-plant complementary sensitivity function
Lo       Output-to-plant loop transfer function
PSi      Plant times input-to-plant sensitivity function
CSo      Compensator times output-to-plant sensitivity function
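The fields above satisfy the standard loop identities, which gives a quick sanity check. In this sketch the plant and controller are arbitrary illustrative choices, not taken from the manual:

```matlab
% Sketch: verify To = PC(I+PC)^-1 for a simple SISO pair.
P = tf(1,[1 1]);
C = tf([2 1],[1 0]);                     % an example PI controller
loops = loopsens(P,C);
err = norm(loops.To - P*C/(1+P*C),inf)   % should be numerically near zero
```

The same pattern checks Si against (I+CP)^-1 or Lo against PC for MIMO systems, with the matrix products taken in the order given in the table below.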
The multivariable closed-loop interconnection structure, shown below, defines the input/output sensitivity, complementary sensitivity, and loop transfer functions.

[Figure: feedback interconnection with injection points d1 (plant input) and d2 (plant output) and measured signals e1-e4]

Description                                         Equation
Input sensitivity (TF e1 <- d1)                     (I+CP)^-1
Input complementary sensitivity (TF e2 <- d1)       CP(I+CP)^-1
Output sensitivity (TF e3 <- d2)                    (I+PC)^-1
Output complementary sensitivity (TF e4 <- d2)      PC(I+PC)^-1
Input loop transfer function                        CP
Output loop transfer function                       PC

Example
Consider a three-input, four-output plant with three integrators, where one of the integrators, uncint, is uncertain:

int = tf(1,[1 0]);
addunc = ultidyn('addunc',[1 1],'Bound',0.2);
uncint = int + addunc;
int3 = blkdiag(int,int,uncint);
p = [1 .1 .2;.5 0 1;1 0 0;0 1 0]*int3;

A four-input, three-output constant-gain controller, c, is used to stabilize the closed-loop system. The gain of the controller's (1,4) entry is uncertain, ranging between 0.02 and 1.5.
gain = ureal('gain',0.215,'Range',[0.02 1.5]);
c = [.2 1 0 gain;0 1 .3 0;.1 .3 0 .32];
loops = loopsens(p,c)

loops = 
    Poles: [3x1 double]
   Stable: 1
       Si: [3x3 uss]
       Ti: [3x3 uss]
       Li: [3x3 uss]
       So: [4x4 uss]
       To: [4x4 uss]
       Lo: [4x4 uss]
      PSi: [4x3 uss]
      CSo: [3x4 uss]

Note that the input sensitivity, complementary sensitivity, and loop transfer functions have three inputs and three outputs, whereas the output sensitivity, complementary sensitivity, and loop transfer functions have four inputs and four outputs. For the nominal plant model, i.e., the additive uncertainty addunc set to zero, the closed-loop system is stable only for positive values of the gain parameter. Substituting a grid of gain values with usubs confirms this:

gainval = reshape([.02 .215 .5 1.5],[1 1 4]);
nomSi = usubs(loops.Si,'gain',gainval);
pole(nomSi.NominalValue)

ans(:,:,1) =
  -0.0004 + 0.2176i
  -0.0004 - 0.2176i
  -0.8537
ans(:,:,2) =
  -0.0018 + 0.2164i
  -0.0018 - 0.2164i
  -0.8541
ans(:,:,3) =
  -0.0072 + 0.2207i
  -0.0072 - 0.2207i
  -0.8556
ans(:,:,4) =
  -0.1663
  -0.6505
  -1.7533

See Also
loopmargin    Perform a comprehensive analysis of a feedback loop
robuststab    Calculate stability margins of uncertain systems
wcsens        Calculate worst-case sensitivities for a feedback loop
wcmargin      Calculate worst-case margins for a feedback loop
loopsyn

Purpose    Compute a controller for plant G to optimally match a target loop shape Gd

Syntax     [K,CL,GAM,INFO] = loopsyn(G,Gd)
           [K,CL,GAM,INFO] = loopsyn(G,Gd,RANGE)

Description
loopsyn is an H∞-optimal method for loop-shaping control synthesis. It computes a stabilizing H∞ controller K for plant G to shape the sigma plot of the loop transfer function GK to have desired loop shape Gd with accuracy γ = GAM, in the sense that if ω0 is the 0-dB crossover frequency of the sigma plot of Gd(jω), then, roughly,

σmin(G(jω)K(jω)) ≥ (1/γ) σmin(Gd(jω))    for all ω < ω0      (10-16)
σmax(G(jω)K(jω)) ≤ γ σmax(Gd(jω))        for all ω > ω0      (10-17)

The struct array INFO returns additional design information, including a MIMO stable min-phase shaping prefilter W, the shaped plant Gs = GW, and the controller Ks for the shaped plant (with K = W*Ks), as well as the frequency range {ωmin,ωmax} over which the loop shaping is achieved.

Input arguments:
G        LTI plant
Gd       Desired loop shape (LTI model)
RANGE    (optional, default {0,Inf}) Desired frequency range for loop shaping, a 1-by-2 cell array {ωmin,ωmax}; ωmax should be at least ten times ωmin

Output arguments:
K                    LTI controller
CL = G*K/(I+G*K)     LTI closed-loop system
GAM      Loop-shaping accuracy (GAM ≥ 1, with GAM = 1 being a perfect fit)
INFO     Additional output information:
INFO.W       LTI prefilter W satisfying σ(Gd) = σ(GW) for all ω; W is always minimum-phase
INFO.Gs      LTI shaped plant: Gs = GW
INFO.Ks      LTI controller for the shaped plant Gs (so that K = W*Ks)
INFO.range   Cell array {ωmin,ωmax} containing the approximate frequency range over which loop shaping could be accurately achieved with accuracy GAM; INFO.range is either the same as or a subset of the input RANGE

Algorithm
Using the GCD formula of Le and Safonov [1], loopsyn first computes a stable-minimum-phase, squaring-down prefilter W such that the shaped plant Gs = GW is square and the desired shape Gd is achieved with good accuracy in the frequency range {ωmin,ωmax} by the shaped plant; i.e., σ(Gd) ≈ σ(Gs) for all ω ∈ {ωmin,ωmax}. Then, loopsyn uses the Glover-McFarlane [2] normalized-coprime-factor control synthesis theory to compute an optimal loop-shaping controller for the shaped plant via Ks = ncfsyn(Gs), and returns K = W*Ks.

If the plant G is a continuous-time LTI and
1  G has a full-rank D-matrix, and
2  G has no finite zeros on the jω-axis, and
3  {ωmin,ωmax} = [0,∞],
then GW theoretically achieves a perfect accuracy fit σ(Gd) = σ(GW) for all frequencies ω. Otherwise, loopsyn uses a bilinear pole-shifting transform [3] of the form

Gshifted = bilin(G,1,'S_Tust',[ωmin,ωmax])
which results in a perfect fit for the transformed Gshifted and an approximate fit over the smaller frequency range [ωmin,ωmax] for the original unshifted G, provided that ωmax >> ωmin. For best results, you should choose ωmax to be at least 100 times greater than ωmin.

In some cases the computation of the optimal W for Gshifted may be singular or ill-conditioned for the range [ωmin,ωmax], as when Gshifted has undamped zeros or, in the continuous-time case only, Gshifted has a D-matrix that is rank-deficient. In such cases, loopsyn automatically reduces the frequency range further and returns the reduced range [ωmin,ωmax] as a cell array in the output INFO.range = {ωmin,ωmax}.

Example
The following code generates the optimal loopsyn loop-shaping control for the case of a 5-state, 4-output, 5-input plant with a full-rank nonminimum-phase zero at s = +10:

s = tf('s');
w0 = 5; Gd = w0/s;                   % desired loop shape, bandwidth w0 = 5
rand('seed',0); randn('seed',0);
G = ((s-10)/(s+100))*rss(3,4,5);     % 4-by-5 nonminimum-phase plant
[K,CL,GAM,INFO] = loopsyn(G,Gd);     % compute the optimal loop-shaping controller
sigma(G*K,'r',Gd*GAM,'k-.',Gd/GAM,'k-.',{.1,100})   % plot the result

The result is shown in Figure 10-11: the loopsyn controller K fits sigma(G*K) to sigma(Gd) to within the accuracy GAM. In this example, GAM = 2.0423 = 6.2026 dB.
Figure 10-11: The loopsyn controller K optimally fits sigma(G*K).

As shown in Figure 10-11, sigma(G*K) is sandwiched between sigma(Gd/GAM) and sigma(Gd*GAM) in accordance with the inequalities in Equation 10-16 and Equation 10-17, with GAM = 2.0423 = 6.2026 dB.

Limitations
The plant G must be stabilizable and detectable, must have at least as many inputs as outputs, and must be full rank; that is,

• size(G,2) ≥ size(G,1)
• rank(freqresp(G,w)) = size(G,1) for some frequency w

The order of the controller K can be large. Generically, when Gd is given as a SISO LTI, the order NK of the controller K satisfies

NK = NGs + NW = Ny*NGd + NRHP + NW = Ny*NGd + NRHP + NG
where
• Ny denotes the number of outputs of the plant G,
• NRHP denotes the total number of nonstable poles and nonminimum-phase zeros of the plant G, including those on the stability boundary and at infinity, and
• NG, NGs, NGd and NW denote the respective orders of G, Gs, Gd and W.

Model reduction can help reduce the order of K; see reduce and ncfmr.

References
[1] Le, V.X., and M.G. Safonov. "Rational matrix GCD's and the design of squaring-down compensators: a state-space theory." IEEE Trans. Autom. Control, AC-36(3):384-392, March 1992.
[2] Glover, K., and D. McFarlane. "Robust stabilization of normalized coprime factor plant descriptions with H∞-bounded uncertainty." IEEE Trans. Autom. Control, AC-34(8):821-830, August 1989.
[3] Chiang, R.Y., and M.G. Safonov. "H∞ synthesis using a bilinear pole-shifting transform." AIAA J. Guidance, Control and Dynamics, 15(5):1111-1115, September-October 1992.

See Also
loopsyn_demo    A demo of this function
mixsyn          H∞ mixed-sensitivity controller synthesis
ncfsyn          H∞ normalized-coprime-factor controller synthesis

Algorithm
Using a real eigenstructure decomposition, reig, and ordering the eigenvectors in ascending order according to their eigenvalue magnitudes, we can form a similarity transformation out of these ordered real eigenvectors such that the resulting systems G1 and/or G2 are in block-diagonal modal form.

Note
This routine is extremely useful when the model has jω-axis singularities, e.g., rigid-body dynamics, including those on the stability boundary and at infinity. If we apply the Hankel-based model reduction routines, such as hankelmr, balancmr, schurmr, or bstmr, directly to the original model, the reduction routines can fail due to unsolvable Gramians. However, if we apply modreal to the model first, isolate the rigid-body dynamics part from the rest, apply the Hankel-based model reduction routines to
the nonrigid part, and then add back the rigid dynamics, we can perform a very successful model reduction this way.

Example
Given a continuous stable or unstable system, the following commands produce a set of modal-form realizations depending on the split index cut:

randn('state',1234);
rand('state',5678);
G = rss(50,2,2);
[G1,G2] = modreal(G,2);   % cut = 2 for two rigid-body modes
G2.d = G1.d;              % assign the DC gain to G2
G1.d = zeros(2,2);
sigma(G1,G2)

See Also
reduce      Top-level model reduction routines
balancmr    Balanced truncation via square-root method
schurmr     Balanced truncation via Schur method
bstmr       Balanced stochastic truncation via Schur method
ncfmr       Balanced truncation for normalized coprime factors
hankelmr    Hankel minimum-degree approximation
hankelsv    Hankel singular values
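A quick consistency check on the split (this sketch assumes G, G1 and G2 from the example above are still in the workspace): since the D matrix was moved from G1 to G2 rather than dropped, the two modal pieces should reassemble into the original model.

```matlab
% Sketch: the modal decomposition is additive, so G1 + G2 recovers G.
err = norm(G - (G1+G2),inf)   % should be near zero
sigma(G,'b',G1+G2,'r--')      % the two responses overlay
```

This additive structure is exactly what lets you reduce only the nonrigid part G2 and then add the rigid dynamics G1 back unchanged.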
ltrsyn

Purpose    LQG loop transfer-function recovery (LTR) control synthesis

Syntax     [K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO)
           [K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO,W)
           [K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO,OPT)
           [K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO,W,OPT)

Description
[K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO) computes a reconstructed-state output-feedback controller K for LTI plant G so that K*G asymptotically 'recovers' the plant-input full-state feedback loop transfer function L(s) = F(Is-A)^-1 B + D; that is, at any frequency w > 0,

max(sigma(K*G - L, w)) → 0 as ρ → ∞

where L = ss(A,B,F,D) is the LTI full-state feedback loop transfer function.

[K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO,'OUTPUT') computes the solution to the 'dual' problem of filter loop recovery for LTI plant G, where F is a Kalman filter gain matrix. In this case, the recovery is at the plant output, and

max(sigma(G*K - L1, w)) → 0 as ρ → ∞

where L1 denotes the LTI filter feedback loop transfer function L1 = ss(A,F,C,D).

Only the LTI controller K for the final value RHO(end) is returned.

Inputs
G        LTI plant
F        LQ full-state-feedback gain matrix (or, if OPT='OUTPUT', a Kalman filter gain matrix)
XI       Plant noise intensity or, if OPT='OUTPUT', state-cost matrix XI=Q
THETA    Sensor noise intensity or, if OPT='OUTPUT', control-cost matrix THETA=R
RHO      Vector containing a set of recovery gains (optional)
W        Vector of frequencies (to be used for plots); if input W is not supplied, then a reasonable default is used
Outputs

K     K(s), the LTI LTR (loop-transfer-recovery) output-feedback controller for the last element of RHO (i.e., RHO(end))
SVL   sigma plot data for the 'recovered' loop transfer function if G is MIMO or, for SISO G only, Nyquist loci SVL = [re(1:nr) im(1:nr)]
W1    frequencies for the SVL plots, same as W when present

Algorithm

For each value in the vector RHO, [K,SVL,W1] = ltrsyn(G,F,XI,THETA,RHO) computes the full-state-feedback (default OPT='INPUT') LTR controller

   K(s) = [ Kc (Is - A + B*Kc + Kf*C - Kf*D*Kc)^-1 Kf ]

where Kc=F and Kf=lqr(A',C',XI+RHO(i)*B*B',THETA). The 'fictitious noise' term RHO(i)*B*B' results in loop-transfer recovery as RHO(i) → ∞. The Kalman filter gain is Kf = ΣC'Θ^-1, where Σ satisfies the Kalman filter Riccati equation

   0 = ΣA' + AΣ - ΣC'Θ^-1CΣ + Ξ + ρBB'

See [1] for further details. Similarly, for the 'dual' problem of filter loop recovery, [K,SVL,W1] = ltrsyn(G,F,Q,R,RHO,'OUTPUT') computes a filter loop recovery controller of
the same form, but with Kf=F being the filter gain matrix and the control gain matrix Kc computed as Kc=lqr(A,B,Q+RHO(i)*C'*C,R).

Example

s=tf('s'); G=ss(1e4/((s+1)*(s+10)*(s+100)));
[A,B,C,D]=ssdata(G);
F=lqr(A,B,C'*C,eye(size(B,2)));
L=ss(A,B,F,0*F*B);
XI=100*C'*C; THETA=eye(size(C,1));
RHO=[1e3,1e6,1e9,1e12]; W=logspace(-2,2);
[K,SVL,W1]=ltrsyn(G,F,XI,THETA,RHO,W);
nyquist(L,'k-.'); hold

[Figure 10-12: Example of LQG/LTR at plant output — sigma plots (SV in dB vs. rad/sec) of the recovered loops for the recovery gains q = 1, 1e5, 1e10, 1e15, together with the Kalman filter gain loop.]

See also ltrdemo

Limitations

The ltrsyn procedure may fail for nonminimum-phase plants. For full-state LTR (default OPT='INPUT'), the plant should not have fewer outputs than inputs. Conversely, for filter LTR (when OPT='OUTPUT'), the plant should not have fewer inputs than outputs. The plant must be strictly proper, i.e., the D-matrix of the plant should be all zeros. ltrsyn is only for continuous-time plants (Ts==0).

References

[1] J. Doyle and G. Stein, "Multivariable Feedback Design: Concepts for a Classical/Modern Synthesis," IEEE Trans. on Automat. Contr., AC-26, pp. 4–16, 1981.
See Also

h2syn     H2 controller synthesis
hinfsyn   H∞ controller synthesis
lqg       Continuous linear-quadratic-Gaussian control synthesis
loopsyn   H∞ loop-shaping controller synthesis
ltrdemo   Demo of LQG/LTR optimal control synthesis
ncfsyn    H∞ normalized coprime controller synthesis
matnbr
Purpose Syntax Description See Also
Return the number of matrix variables in a system of LMIs
K = matnbr(lmisys) matnbr returns the number K of matrix variables in the LMI problem described by lmisys. decnbr lmiinfo decinfo
Give the total number of decision variables in a system of LMIs Interactively retrieve information about the variables and term content of LMIs Describe how the entries of a matrix variable X relate to the decision variables
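As a quick illustration, the following is a hypothetical sketch using LMI Lab syntax; the variable names are ours, not from this manual:

```matlab
% Hypothetical sketch: build a small LMI system, then count its matrix variables
setlmis([]);
X = lmivar(1,[3 1]);       % 3-by-3 symmetric matrix variable
Y = lmivar(2,[2 3]);       % 2-by-3 rectangular matrix variable
lmiterm([1 1 1 X],1,1);    % LMI #1 involves X
lmis = getlmis;
K = matnbr(lmis)           % two matrix variables were declared
```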
10187
mat2dec
Purpose Syntax Description
Return the vector of decision variables corresponding to particular values of the matrix variables
decvec = mat2dec(lmisys,X1,X2,X3,...)
Given an LMI system lmisys with matrix variables X1, . . ., XK and given values X1,...,Xk of X1, . . ., XK, mat2dec returns the corresponding value decvec of the vector of decision variables. Recall that the decision variables are the independent entries of the matrices X1, . . ., XK and constitute the free scalar variables in the LMI problem. This function is useful, for example, to initialize the LMI solvers mincx or gevp. Given an initial guess for X1, . . ., XK, mat2dec forms the corresponding vector of decision variables xinit. An error occurs if the dimensions and structure of X1,...,Xk are inconsistent with the description of X1, . . ., XK in lmisys.
Example
Consider an LMI system with two matrix variables X and Y such that

• X is a symmetric block-diagonal matrix with one 2-by-2 full block and one 2-by-2 scalar block
• Y is a 2-by-3 rectangular matrix

Particular instances of X and Y are

   X0 = [ 1  3  0  0          Y0 = [ 1  2  3
          3 -1  0  0                 4  5  6 ]
          0  0  5  0
          0  0  0  5 ],
and the corresponding vector of decision variables is given by
decv = mat2dec(lmisys,X0,Y0);
decv'

ans =
     1     3    -1     5     1     2     3     4     5     6
10188
mat2dec
Note that decv is of length 10 since Y has 6 free entries while X has 4 independent entries due to its structure. Use decinfo to obtain more information about the decision variable distribution in X and Y.
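The round trip between matrix values and the decision vector can be sketched as follows (a hypothetical example with our own variable names, assuming LMI Lab syntax):

```matlab
% Hypothetical sketch: mat2dec followed by dec2mat recovers the matrix values
setlmis([]);
X = lmivar(1,[2 1]);          % 2-by-2 symmetric: 3 decision variables
lmiterm([1 1 1 X],1,1);       % a trivial LMI so the system is nonempty
lmis = getlmis;
X0 = [1 2; 2 5];
decvec = mat2dec(lmis,X0);    % free entries of X0, in LMI Lab's ordering
X1 = dec2mat(lmis,decvec,X);  % X1 equals X0
```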
See Also
dec2mat decinfo decnbr
Given values of the decision variables, derive the corresponding values of the matrix variables Describe how the entries of a matrix variable X relate to the decision variables Give the total number of decision variables in a system of LMIs
10189
mincx
Purpose Syntax Description
Minimize a linear objective under LMI constraints
[copt,xopt] = mincx(lmisys,c,options,xinit,target)
The function mincx solves the convex program

   minimize    c'x
   subject to  N' L(x) N ≤ M' R(x) M                    (10-18)

where x denotes the vector of scalar decision variables. The system of LMIs (10-18) is described by lmisys. The vector c must be of the same length as x. This length corresponds to the number of decision variables returned by the function decnbr. For linear objectives expressed in terms of the matrix variables, the adequate c vector is easily derived with defcx. The function mincx returns the global minimum copt for the objective c'x, as well as the minimizing value xopt of the vector of decision variables. The corresponding values of the matrix variables are derived from xopt with dec2mat. The remaining arguments are optional. The vector xinit is an initial guess of the minimizer xopt. It is ignored when infeasible, but may speed up computations otherwise. Note that xinit should be of the same length as c. As for target, it sets a target for the objective value. The code terminates as soon as this target is achieved, that is, as soon as some feasible x such that c'x ≤ target is found. Set options to [] to use xinit and target with the default options.
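For instance, a linear objective such as trace(X) can be encoded into c with defcx. The following is a hypothetical sketch (the plant data and variable names are ours), minimizing trace(X) subject to A'X + XA + I < 0 and X > 0:

```matlab
% Hypothetical sketch: minimize trace(X) s.t. A'*X + X*A + I < 0, X > 0
A = [-1 2; 0 -3];
setlmis([]);
X = lmivar(1,[2 1]);           % 2-by-2 symmetric matrix variable
lmiterm([1 1 1 X],A',1,'s');   % LMI #1: A'*X + X*A ...
lmiterm([1 1 1 0],eye(2));     % ... + I < 0
lmiterm([-2 1 1 X],1,1);       % LMI #2: 0 < X
lmis = getlmis;
c = zeros(decnbr(lmis),1);     % build c so that c'*x = trace(X)
for j = 1:decnbr(lmis)
    Xj = defcx(lmis,j,X);      % value of X when x(j)=1, other entries 0
    c(j) = trace(Xj);
end
[copt,xopt] = mincx(lmis,c);
Xopt = dec2mat(lmis,xopt,X);   % optimal matrix variable
```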
Control Parameters
The optional argument options gives access to certain control parameters of the optimization code. In mincx, this is a fiveentry vector organized as follows: • options(1) sets the desired relative accuracy on the optimal value lopt (default = 10–2). • options(2) sets the maximum number of iterations allowed to be performed by the optimization procedure (100 by default). • options(3) sets the feasibility radius. Its purpose and usage are as for feasp.
• options(4) helps speed up termination. If set to an integer value J > 0, the code terminates when the objective cTx has not decreased by more than the desired relative accuracy during the last J iterations. • options(5) = 1 turns off the trace of execution of the optimization procedure. Resetting options(5) to zero (default value) turns it back on. Setting option(i) to zero is equivalent to setting the corresponding control parameter to its default value. See feasp for more detail.
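As an illustration, a hypothetical options vector (the values here are chosen for the example, not defaults):

```matlab
% Hypothetical sketch: tighter accuracy, more iterations, silent run
options = [1e-5 200 0 0 1];             % accuracy, max iterations, default
                                        % radius, J=0, trace off
[copt,xopt] = mincx(lmisys,c,options);  % lmisys and c assumed already defined
```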
Tip for SpeedUp
In LMI optimization, the computational overhead per iteration mostly comes from solving a least-squares problem of the form

   min_x || Ax - b ||
where x is the vector of decision variables. Two methods are used to solve this problem: Cholesky factorization of ATA (default), and QR factorization of A when the normal equation becomes ill conditioned (when close to the solution typically). The message
* switching to QR
is displayed when the solver has to switch to the QR mode. Since QR factorization is incrementally more expensive in most problems, it is sometimes desirable to prevent switching to QR. This is done by setting options(4) = 1. While not guaranteed to produce the optimal value, this generally achieves a good tradeoff between speed and accuracy.
Memory Problems
QRbased linear algebra (see above) is not only expensive in terms of computational overhead, but also in terms of memory requirement. As a result, the amount of memory required by QR may exceed your swap space for large problems with numerous LMI constraints. In such case, MATLAB issues the error
??? Error using ==> pds Out of memory. Type HELP MEMORY for your options.
You should then ask your system manager to increase your swap space or, if no additional swap space is available, set options(4) = 1. This will prevent
switching to QR and mincx will terminate when Cholesky fails due to numerical instabilities.
Reference
The solver mincx implements Nesterov and Nemirovski’s Projective Method as described in Nesterov, Yu, and A. Nemirovski, Interior Point Polynomial Methods in Convex Programming: Theory and Applications, SIAM, Philadelphia, 1994. Nemirovski, A., and P. Gahinet, “The Projective Method for Solving Linear Matrix Inequalities,” Proc. Amer. Contr. Conf., 1994, Baltimore, Maryland, pp. 840–844. The optimization is performed by the CMEX file pds.mex.
See Also
defcx dec2mat decnbr feasp gevp
Help specify cTx objectives for the mincx solver Given values of the decision variables, derive the corresponding values of the matrix variables Give the total number of decision variables in a system of LMIs Find a solution to a given system of LMIs Generalized eigenvalue minimization under LMI constraints
10192
mixsyn
Purpose Syntax
H∞ mixedsensitivity synthesis method for robust control loopshaping design
[K,CL,GAM,INFO]=mixsyn(G,W1,W2,W3) [K,CL,GAM,INFO]=mixsyn(G,W1,W2,W3,KEY1,VALUE1,KEY2,VALUE2,...)
Description
[K,CL,GAM,INFO]=mixsyn(G,W1,W2,W3) computes a controller K that minimizes the H∞ norm of the weighted closed-loop transfer function (the mixed sensitivity)

   Ty1u1 = [ W1*S
             W2*R
             W3*T ]

where S, R and T are given by

   S = (I + G*K)^-1
   R = K*(I + G*K)^-1
   T = G*K*(I + G*K)^-1

S and T are called the sensitivity and complementary sensitivity, respectively.
[Figure 10-13: Closed-loop transfer function Ty1u1 for mixed sensitivity mixsyn — the augmented plant P(s) consists of G with weights W1, W2 and W3 on e, u and y, in feedback with the controller K(s).]
The returned values of S, R, and T satisfy the following loop-shaping inequalities:

   σ̄(S(jω)) ≤ γ σ̲(W1⁻¹(jω))
   σ̄(R(jω)) ≤ γ σ̲(W2⁻¹(jω))
   σ̄(T(jω)) ≤ γ σ̲(W3⁻¹(jω))

where γ = GAM. Thus, W1 and W3 determine the shapes of the sensitivity S and complementary sensitivity T. Typically, you would choose W1⁻¹ to be small inside the desired control bandwidth to achieve good disturbance attenuation (i.e., performance), and choose W3⁻¹ to be small outside the control bandwidth, which helps to ensure a good stability margin (i.e., robustness). For dimensional compatibility, each of the three weights W1, W2 and W3 must be either empty, scalar (SISO) or have respective input dimensions NY, NU, and NY, where G is NY-by-NU. If one of the weights is not needed, you may simply assign an empty matrix []; e.g., P = augw(G,W1,[],W3) is the augmented plant without the second row (the row containing W2).
Algorithm
[K,CL,GAM,INFO]=mixsyn(G,W1,W2,W3,KEY1,VALUE1,KEY2,VALUE2,...)
is equivalent to
[K,CL,GAM,INFO] = hinfsyn(augw(G,W1,W2,W3),KEY1,VALUE1,KEY2,VALUE2,...)

mixsyn accepts all of the same key-value pairs as hinfsyn.
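In other words, the two-step construction below (a sketch; G and the weights are assumed already defined as for mixsyn) should produce the same controller:

```matlab
% Hypothetical sketch: mixsyn is augw followed by hinfsyn
P = augw(G,W1,W2,W3);         % build the weighted augmented plant
[K2,CL2,GAM2] = hinfsyn(P);   % same result as mixsyn(G,W1,W2,W3)
```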
Example
The following code illustrates the use of mixsyn for sensitivity and complementary sensitivity ‘loopshaping’.
s=zpk('s');
G=(s-1)/(s+1)^2;
W1=0.1*(s+100)/(100*s+1);
W2=0.1;
[K,CL,GAM]=mixsyn(G,W1,W2,[]);
L=G*K;
S=inv(1+L);
T=1-S;
sigma(S,'g',T,'r',GAM/W1,'g-.',GAM*G/ss(W2),'r-.')
Figure 10-14: mixsyn(G,W1,W2,[]) shapes sigma plots of S and T to conform to γ/W1 and γG/W2, respectively.
Limitations
The transfer functions G, W1, W2 and W3 must be proper, i.e., bounded as s → ∞ or, in the discrete-time case, as z → ∞. Additionally, W1, W2 and W3 should be stable. The plant G should be stabilizable and detectable; otherwise, P will not be stabilizable by any K.
augw hinfsyn
See Also
Augments plant weights for control design H∞ controller synthesis
10195
mkfilter
Purpose Syntax Description
Generate a Bessel, Butterworth, Chebyshev or RC filter
sys = mkfilter(fc,ord,type)
sys = mkfilter(fc,ord,type,psbndr)

sys = mkfilter(fc,ord,type) returns a single-input, single-output analog low-pass filter sys as an ss object. The cutoff frequency (Hertz) is fc and the filter order is ord, a positive integer. The string variable type specifies the type of filter and can be one of the following:

   'butterw'   Butterworth filter
   'cheby'     Chebyshev filter
   'bessel'    Bessel filter
   'rc'        Series of resistor/capacitor filters

The DC gain of each filter (except even-order Chebyshev) is set to unity.
sys = mkfilter(fc,ord,type,psbndr) contains the input argument psbndr
that specifies the Chebyshev passband ripple (in dB). At the cutoff frequency, the magnitude is psbndr dB. For even order Chebyshev filters the DC gain is also psbndr dB.
Example
butw = mkfilter(2,4,'butterw');
cheb = mkfilter(4,4,'cheby',0.5);
rc = mkfilter(1,4,'rc');
bode(butw,'-',cheb,'--',rc,'-.')
legend('Butterworth','Chebyshev','RC filter')
[Figure: Bode plots (log magnitude and phase versus frequency in rad/sec) of the Butterworth, RC and Chebyshev filters.]
Limitations

The Bessel filters are calculated using the recursive polynomial formula. This is poorly conditioned for high-order filters (order > 8).

See Also
augw
Augments plant weights for control design
10197
mktito
Purpose Syntax Description
Make a TITO (twoinputtwooutput) system out of a MIMO LTI system
SYS=mktito(SYS,NMEAS,NCONT)

SYS=mktito(SYS,NMEAS,NCONT) adds TITO (two-input-two-output) partitioning to LTI system SYS, assigning OutputGroup and InputGroup properties such that

   NMEAS = dim(y2),   NCONT = dim(u2)
[Diagram: TITO-partitioned system SYS, with plant P mapping inputs (u1, u2) to outputs (y1, y2).]
Any preexisting OutputGroup or InputGroup properties of SYS are overwritten. TITO partitioning simplifies syntax for control synthesis functions like hinfsyn and h2syn.
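For instance (a hypothetical sketch; the plant sizes are chosen arbitrarily), after partitioning, hinfsyn can be called without explicit NMEAS/NCONT arguments:

```matlab
% Hypothetical sketch: TITO partitioning feeding directly into synthesis
P = rss(4,4,5);             % 4 outputs, 5 inputs
P = mktito(P,2,2);          % last 2 outputs/inputs become y2/u2
[K,CL,GAM] = hinfsyn(P);    % NMEAS and NCONT read from the TITO partition
```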
Algorithm
[r,c]=size(SYS);
set(SYS,'InputGroup', struct('U1',1:c-NCONT,'U2',c-NCONT+1:c));
set(SYS,'OutputGroup',struct('Y1',1:r-NMEAS,'Y2',r-NMEAS+1:r));
Example
You can type
P=rss(2,4,5); P=mktito(P,2,2); disp(P.OutputGroup); disp(P.InputGroup);
to create a 4by5 LTI system P with OutputGroup and InputGroup properties
U1: [1 2 3] U2: [4 5] Y1: [1 2] Y2: [3 4]
See also
augw hinfsyn
Augments plant weights for control design H∞ synthesis controller
10198
mktito h2syn ltiprops sdhinfsyn H2 synthesis controller Help on LTI model properties H∞ discretetime controller 10199 .
modreal

Purpose

Modal form realization and projection

Syntax

[G1,G2] = modreal(G,cut)

Description

[G1,G2] = modreal(G,cut) returns a set of state-space LTI objects G1 and G2 in modal form, given a state-space model G and the model size of G1, cut. G can be stable or unstable. The modal form realization has its A matrix in block-diagonal form with either 1x1 or 2x2 blocks. The real eigenvalues are put in 1x1 blocks and complex eigenvalues are put in 2x2 blocks; a complex eigenvalue a+bj appears as the 2x2 block

   [ a  b
    -b  a ]

These diagonal blocks are ordered in ascending order based on eigenvalue magnitudes. G1 = (A1,B1,C1,D1) and G2 = (A2,B2,C2,D2), where D1 = D + C2(-A2)^-1 B2 is calculated such that the system DC gain is preserved.

This table describes the input arguments for modreal.

   G     LTI model to be reduced; can be stable or unstable
   cut   (optional) an integer to split the realization; without it, a complete modal form realization is returned

This table lists the output arguments.

   G1,G2   LTI models in modal form

Algorithm

Using a real eigenstructure decomposition (reig) and ordering the eigenvectors in ascending order according to their eigenvalue magnitudes, we can form a similarity transformation out of these ordered real eigenvectors such that the resulting systems G1 and/or G2 are in block-diagonal modal form.

Note  This routine is extremely useful when the model has jω-axis singularities, e.g., rigid-body dynamics. It has been incorporated inside the Hankel-based model reduction routines (hankelmr, balancmr, bstmr, and schurmr) to isolate the jω-axis poles from the actual model reduction process.

Example

Given a continuous stable or unstable system, the following commands can get a set of modal form realizations depending on the split index cut:

randn('state',1234); rand('state',5678);
G = rss(50,2,2);
[G1,G2] = modreal(G,2);   % cut = 2 for two rigid body modes
G1.d = zeros(2,2);        % remove the DC gain of the system from G1
sigma(G,G1,G2)

See Also

reduce     Top-level model reduction routines
balancmr   Balanced truncation via square-root method
schurmr    Balanced truncation via Schur method
bstmr      Balanced stochastic truncation via Schur method
ncfmr      Balanced truncation for normalized coprime factors
hankelmr   Hankel minimum degree approximation
hankelsv   Hankel singular value
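A common workflow (a hypothetical sketch; the reduction order 10 is arbitrary) is to protect the rigid-body modes in G1, reduce only the stable nonrigid part G2, and then recombine:

```matlab
% Hypothetical sketch: reduce only the nonrigid part, keep rigid-body modes
[G1,G2] = modreal(G,2);   % G as above; G1 holds the two rigid-body modes
G2r = balancmr(G2,10);    % balanced truncation of the stable part
Gr = G1 + G2r;            % reduced model with rigid dynamics added back
```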
msfsyn

Purpose

Multi-model/multi-objective state-feedback synthesis

Syntax

[gopt,h2opt,K,Pcl,X] = msfsyn(P,r,obj,region,tol)

Description

Given an LTI plant P with state-space equations

   dx/dt = A x + B1 w + B2 u
   z∞    = C1 x + D11 w + D12 u
   z2    = C2 x + D22 u

msfsyn computes a state-feedback control u = Kx that

• Maintains the RMS gain (H∞ norm) of the closed-loop transfer function T∞ from w to z∞ below some prescribed value γ0 > 0
• Maintains the H2 norm of the closed-loop transfer function T2 from w to z2 below some prescribed value ν0 > 0
• Minimizes an H2/H∞ trade-off criterion of the form

   α ||T∞||∞² + β ||T2||2²

• Places the closed-loop poles inside the LMI region specified by region (see lmireg for the specification of such regions). The default is the open left-half plane.

Set r = size(d22) and obj = [γ0, ν0, α, β] to specify the problem dimensions and the design parameters γ0, ν0, α, and β. You can perform pure pole placement by setting obj = [0 0 0 0]. Note also that z∞ or z2 can be empty. On output, gopt and h2opt are the guaranteed H∞ and H2 performances, K is the optimal state-feedback gain, Pcl the closed-loop transfer function from w to (z∞; z2), and X the corresponding Lyapunov matrix.

The function msfsyn is also applicable to multi-model problems where P is a polytopic model of the plant:
   dx/dt = A(t) x + B1(t) w + B2(t) u
   z∞    = C1(t) x + D11(t) w + D12(t) u
   z2    = C2(t) x + D22(t) u

with time-varying state-space matrices ranging in the polytope

   [ A(t)   B1(t)   B2(t)  ]          [ Ak   B1k   B2k  ]
   [ C1(t)  D11(t)  D12(t) ]  ∈  Co { [ C1k  D11k  D12k ] : k = 1, ..., K }
   [ C2(t)  0       D22(t) ]          [ C2k  0     D22k ]

In this context, msfsyn seeks a state-feedback gain that robustly enforces the specifications over the entire polytope of plants. Note that polytopic plants should be defined with psys and that the closed-loop system Pcl is itself polytopic in such problems. Affine parameter-dependent plants are also accepted and automatically converted to polytopic models.

See Also

lmireg   Specify LMI regions for pole placement purposes
psys     Specification of uncertain state-space models
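A typical call pattern can be sketched as follows (hypothetical; the plant P is assumed to be defined beforehand, and the objective values are chosen arbitrarily):

```matlab
% Hypothetical sketch: H2/Hinf trade-off state-feedback design
r = [1 1];              % dimensions of D22
obj = [0 0 1 1];        % minimize ||Tinf||oo^2 + ||T2||2^2
region = lmireg;        % interactively specify a pole-placement LMI region
[gopt,h2opt,K,Pcl,X] = msfsyn(P,r,obj,region);
```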
mussv

Purpose

Compute upper and lower bounds on the structured singular value (µ) and upper bounds on the generalized structured singular value

Syntax

bounds = mussv(M,BlockStructure)
bounds = mussv(M,BlockStructure,Options)
[bounds,muinfo] = mussv(M,BlockStructure)
[bounds,muinfo] = mussv(M,BlockStructure,Options)
ubound = mussv(M,F,BlockStructure)
[ubound,q] = mussv(M,F,BlockStructure)
[ubound,q] = mussv(M,F,BlockStructure,'s')

Description

bounds = mussv(M,BlockStructure) calculates upper and lower bounds on the structured singular value, or µ, of the matrix M for a given block structure. M is a double, or an frd object. If M is an N-D array (with N ≥ 3), then the computation is performed pointwise along the third and higher array dimensions. If M is an frd object, the computations are performed pointwise in frequency (as well as along any array dimensions).

BlockStructure is a matrix specifying the perturbation block structure. BlockStructure has 2 columns, and as many rows as uncertainty blocks in the perturbation structure. The i-th row of BlockStructure defines the dimensions of the i-th perturbation block:

• If BlockStructure(i,:) = [-r 0], then the i-th block is an r-by-r repeated, diagonal real scalar perturbation.
• If BlockStructure(i,:) = [r 0], then the i-th block is an r-by-r repeated, diagonal complex scalar perturbation.
• If BlockStructure(i,:) = [r c], then the i-th block is an r-by-c complex full-block perturbation.
• If BlockStructure is omitted, its default is ones(size(M,2),1), which implies a perturbation structure of all 1-by-1 complex blocks. In this case, if size(M,1) does not equal size(M,2), an error results.

If M is a two-dimensional matrix, then bounds is a 1-by-2 array containing an upper (first column) and lower (second column) bound on the structured singular value of M. For all matrices Delta with block-diagonal structure defined by BlockStructure and with norm less than 1/bounds(1) (the upper bound), the matrix I - M*Delta is not singular. Moreover, there is a matrix DeltaS with block-diagonal structure defined by BlockStructure and with norm equal to 1/bounds(2) (the lower bound), for which the matrix I - M*DeltaS is singular.
the computations are always performed pointwise in frequency. Options is a character string.i). The information within muinfo must be extracted using mussvextract. and bounds(1.i). See mussvextract for more details. use 19). any i between 1 and d1⋅d2…dF (the product of the dk) would be valid. containing any combination of the following characters: Option 'a' 'an' 'd' 'f' 'i' Meaning Upper bound to greatest accuracy.Options) specifies computation options.:.Frequency equals M.Frequency. for frd). Suppress progress information (silent) Decrease iterations in lower bound computation (faster but not as tight as default) 'm7' 's' 'x' [bounds. bounds(1.mussv If M is an frd. For example. Here. The output argument bounds is a 1by2 frd of upper and lower bounds at each frequency. but without automaticprescaling Display warnings Fast upper bound (typically not as tight as the default) Initialize lower bound computation using previous matrix (only relevant if M is ND array or FRD) Randomly initialize lower bound iteration multiple times (in this case 7 times.BlockStructure) returns muinfo.:. Using single index notation.i) is the upper bound for the structured singular value of M(:. bounds = mussv(M. Then size(bounds) is 1×2×d1×…×dF. a structure containing more detailed information.muinfo] = mussv(M. If M is an ND array (either double or frd). the upper and lower bounds are computed pointwise along the 3rd and higher array dimensions (as well as pointwise in frequency. Note that bounds.2.1. using LMI solver Same as 'a'. 10205 .i) is the lower bound for the structured singular value of M(:.BlockStructure. suppose that size(M) is r×c×d1×…×dF. larger number typically gives better lower bound.
Generalized Structured Singular Value

ubound = mussv(M,F,BlockStructure) calculates an upper bound on the generalized structured singular value (generalized µ) for a given block structure. M is a double or frd object; F is an additional matrix (double or frd, depending on M). Only an upper bound is calculated. Consequently, ubound is 1-by-1 (with additional array dependence, depending on M and F). [ubound,Q] = mussv(M,F,BlockStructure,'s') adds an option to run silently.

A simple example for the generalized structured singular value can be done with random complex matrices, illustrating the relationship between the upper bound for µ and generalized µ. M is a complex 5-by-5 matrix and F is a complex 2-by-5 matrix. The block structure is an uncertain real parameter δ1, an uncertain real parameter δ2, an uncertain complex parameter δ3 and a twice-repeated uncertain complex parameter δ4.

M = randn(5,5) + sqrt(-1)*randn(5,5);
F = randn(2,5) + sqrt(-1)*randn(2,5);
BlockStructure = [-1 0;-1 0;1 1;2 0];
bounds = mussv(M,BlockStructure);
[ubound,Q] = mussv(M,F,BlockStructure);
optbounds = mussv(M+Q*F,BlockStructure,'C5');

The quantities optbounds(1) and ubound should be extremely close, and significantly lower than bounds(1) and bounds(2). This is verified by the matrix Q, which satisfies mussv(M+Q*F,BlockStructure,'C') <= ubound.
Note that in generalized structured singular value computations. ubound = mussv(M.
1992. It gives the exact computation of µ for positive matrices with scalar blocks. for computing the upper bound from Fan et. and is described in detail in Young et. 1988. or a Perron approach. al. and the upper bound is computed using the balanced/AMI technique. (Safonov. Young et. 1991. concluding with general purpose LMI optimization (Boyd et. 1991. compared with less than n2 for Osborne.. al. al. A sequence of improvements to the upper bound is then made based on various equivalent forms of the upper bound. 1988. 1993. but is comparable to Osborne on general matrices.5925 [bounds(1) bounds(2)] ans = 3.. 1982). Peter Young and Matt Newlin wrote the original Mfiles.8184 3. The upperbound is an implementation of the bound from Fan et. Perron is faster for small matrices but has a growth rate of n3.5917 1. al. The optimal choice of Q (to minimize the upper bound) in the generalized mu problem is solved by reformulating the optimization into a semidefinite program (Packard. 1992. Both the Perron and Osborne methods have been modified to handle repeated scalar and full blocks. This is partly due to the MATLAB implementation. al. Young and Doyle 1990 and Packard et.7135 Algorithm The lower bound is computed using a power method. This generates the standard upper bound for the associated complex µ problem. the matrix is first balanced using either a variation of Osborne’s method (Osborne. A number of descent techniques are used which exploit the structure of the problem. The Perron eigenvector method is based on an idea of Safonov. al.mussv [optbounds(1) ubound] ans = 1. al. The lowerbound power algorithm is from Young and Doyle. In the upper bound computation.). 1990. and Packard et. 10207 . 1991). etal. to obtain the final answer.. The default is to use Perron for simple block structures and Osborne for more complicated block structures. which greatly favors Perron.. 1960) generalized to handle repeated scalar and full blocks.
and J. M. P. Newlin. pp. 7. 1960.” Proc. December 1988. El Ghaoui.. Doyle. June. pp. “A power method for the structured singular value. M.” IEEE Proc. “Stability margins for diagonally perturbed multivariable feedback systems. Tits.” Linear Algebra and Its Applications. pp. “Practical computation of the mixed problem. 1990. A. 63–111. 1982. and J. pp. • Osborne. 251–256. and J. • Packard. 2190–2194.” Proceedings of the 29th IEEE Conference on Decision and Control. “On preconditioning of matrices. M. • Fan.” IEEE Transactions on Automatic Control. • Young. vol. • Young. S. AC–36.K. 1992. A. 129. 188–189. 2132–2137. P. Doyle. pp. Fan and J. • Safonov. “Computation of with real and complex uncertainties. 338–345. pp.. Doyle. pp. and L. M. 1991. 1230–1235. 25–38. “Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics. See Also loopmargin mussvextract robuststab robustperf wcgain wcsens wcmargin Comprehensive analysis of feedback loop Extract compressed data returned from mussv Calculate stability margins of uncertain systems Calculate performance margins of uncertain systems Calculate worstcase gain of a system Calculate worstcase sensitivities for feedback loop Calculate worstcase margins for feedback loop 10208 . vol.” Proceedings of the American Control Conference. Part D. 1993. “Methods of centers for minimizing generalized eigenvalues. E..mussv Reference • Boyd. vol.” Journal of Associated Computer Machines. of 1988 IEEE Conference on Control and Decision. Doyle. vol.
mussvextract

Purpose

Extract the compressed information within the muinfo structure returned by mussv

Syntax

[VDelta,VSigma,VLmi] = mussvextract(muinfo)

Description

A structured singular value computation of the form

[bounds,muinfo] = mussv(M,BlockStructure)

returns detailed information in the structure muinfo. mussvextract is used to extract the compressed information within muinfo into a readable form. The most general call to mussvextract extracts three usable quantities: VDelta, VSigma, and VLmi. VDelta is used to verify the lower bound. VSigma is used to verify the Newlin/Young upper bound and has fields DLeft, DRight, GLeft, GMiddle, and GRight. VLmi is used to verify the LMI upper bound and has fields Dr, Dc, Grc, and Gcr. The relation/interpretation of these quantities with the numerical results in bounds is described below.

Upper Bound Information

The upper bound is based on a proof that det(I - M*Delta) is nonzero for all block-structured matrices Delta with norm smaller than 1/bounds(1). The Newlin/Young method consists of finding a scalar β and matrices D and G, consistent with BlockStructure, such that

   σ̄( (I + Gl²)^(-1/4) ((Dl M Dr⁻¹)/β - jGm) (I + Gr²)^(-1/4) ) ≤ 1

Since some uncertainty blocks and M need not be square, the matrices D and G have a few different manifestations: in the formula above there is a left and right D and G, as well as a middle G. Any such β is an upper bound of mussv(M,BlockStructure). It is true that if BlockStructure consists only of complex blocks, then all G matrices will be zero, and the expression above simplifies to σ̄(Dl M Dr⁻¹) ≤ β.

The LMI method consists of finding a scalar β and matrices D and G, consistent with BlockStructure, such that
mussvextract

The Newlin/Young upper bound is characterized by scaling matrices Dl and Dr such that

    norm(Dl*M/Dr) <= bounds(1)

The LMI upper bound is characterized by scaling matrices Dr, Dc, Grc and Gcr such that

    M'*Dr*M - beta^2*Dc + j*(Gcr*M - M'*Grc)

is negative semidefinite, where beta = bounds(1). Any such beta is an upper bound of mussv(M,BlockStructure). If BlockStructure consists only of complex blocks, then all G matrices are zero, and negative semidefiniteness of M'*Dr*M - beta^2*Dc is sufficient to derive an upper bound. In both cases, D and G have a few different manifestations to match the row and column dimensions of M.

Lower Bound Information

The lower bound of mussv(M,BlockStructure) is based on finding a "small" (hopefully the smallest) block-structured matrix VDelta that causes det(I - M*VDelta) to equal 0. Equivalently, the matrix M*VDelta has an eigenvalue equal to 1. It will always be true that the lower bound bounds(2) is the reciprocal of norm(VDelta).

Example

Suppose M is a 4-by-4 complex matrix. Take the block structure to be two 1-by-1 complex blocks and one 2-by-2 complex block. You can calculate bounds on the structured singular value using the mussv command and extract the scaling matrices using mussvextract.

M = randn(4,4) + sqrt(-1)*randn(4,4);
BlockStructure = [1 1;1 1;2 2];
[bounds,muinfo] = mussv(M,BlockStructure);
[VDelta,VSigma,VLmi] = mussvextract(muinfo);

You can first verify the Newlin/Young upper bound with the information extracted from muinfo. The corresponding scalings are Dl and Dr.

Dl = VSigma.DLeft;
Dr = VSigma.DRight;
[norm(Dl*M/Dr) bounds(1)]
ans =
    4.2013    4.2013

You can likewise verify the LMI upper bound with the information extracted from muinfo. The corresponding scalings are Dr and Dc. All the eigenvalues are nonpositive, confirming the upper bound.

Dr = VLmi.Dr;
Dc = VLmi.Dc;
eig(M'*Dr*M - bounds(1)^2*Dc)

Turning to the lower bound, VDelta matches the structure defined by BlockStructure, the matrix M*VDelta has an eigenvalue exactly at 1, and the norm of VDelta agrees with the lower bound.

eig(M*VDelta)
[norm(VDelta) 1/bounds(2)]
ans =
    0.2380    0.2380

Keep the matrix the same, but change BlockStructure to a 2-by-2 repeated, real scalar block and two complex 1-by-1 blocks. Run mussv with the 'C' option to tighten the upper bound, and extract the D, G and Delta scalings from muinfo2 using mussvextract.

BlockStructure2 = [-2 0;1 0;1 0];
[bounds2,muinfo2] = mussv(M,BlockStructure2,'C');
[VDelta2,VSigma2,VLmi2] = mussvextract(muinfo2);

You can compare the computed bounds. Note that bounds2 should be smaller than bounds, since the uncertainty set defined by BlockStructure2 is a proper subset of that defined by BlockStructure.

[bounds bounds2]
ans =
    4.2013    4.2009    4.0005    4.0005

As before, you can first verify the Newlin/Young upper bound with the information extracted from muinfo2. The corresponding scalings are Dl, Dr, Gl, Gm and Gr.

Dl = VSigma2.DLeft;
Dr = VSigma2.DRight;
Gl = VSigma2.GLeft;
Gm = VSigma2.GMiddle;
Gr = VSigma2.GRight;
dmd = Dl*M/Dr/bounds2(1) - sqrt(-1)*Gm;
SL = (eye(4)+Gl*Gl)^-0.25;
SR = (eye(4)+Gr*Gr)^-0.25;
norm(SL*dmd*SR)
ans =
    1.0000

Similarly, you can verify the LMI upper bound with the information extracted from muinfo2. The corresponding scalings are Dr, Dc, Grc and Gcr. Again, all the eigenvalues are nonpositive.

Dr = VLmi2.Dr;
Dc = VLmi2.Dc;
Grc = VLmi2.Grc;
Gcr = VLmi2.Gcr;
eig(M'*Dr*M - bounds2(1)^2*Dc + j*(Gcr*M - M'*Grc))

Finally, VDelta2 matches the structure defined by BlockStructure2, the matrix M*VDelta2 has an eigenvalue exactly at 1, and the norm of VDelta2 agrees with the lower bound.

eig(M*VDelta2)
[norm(VDelta2) 1/bounds2(2)]
ans =
    0.2500    0.2500

See Also
mussv    Calculate bounds on the Structured Singular Value (µ)
ncfmargin

Purpose
Calculate the normalized coprime stability margin of the plant-controller feedback loop

Syntax
[marg,freq] = ncfmargin(P,C)
[marg,freq] = ncfmargin(P,C,tol)

Description
[marg,freq] = ncfmargin(P,C) calculates the normalized coprime factor/gap metric robust stability margin b(P,C) of the multivariable feedback loop consisting of C in negative feedback with P. The normalized coprime factor margin b(P,C) is defined as

b(P,C) = || [I; C] * inv(I - P*C) * [I, P] ||_inf^(-1)

C should only be the compensator in the feedback path, such as the 1-dof architecture shown below (on the right). If the compensator has the 2-dof architecture shown below (on the left), you must eliminate the reference channels before calling ncfmargin.

(Block diagrams: a 2-dof architecture with a separate reference channel, and a 1-dof negative feedback loop of C and P.)

The normalized coprime factor robust stability margin lies between 0 and 1 and is used as an indication of robustness to unstructured perturbations. Values of marg greater than 0.3 generally indicate good robustness margins. freq is the frequency associated with the upper bound on marg.

[marg,freq] = ncfmargin(P,C,tol) calculates the normalized coprime factor/gap metric robust stability margin of the multivariable feedback loop consisting of C in negative feedback with P. tol specifies a relative accuracy for calculating the normalized coprime factor metric and must be between 1e-5 and 1e-2. tol = 0.001 is the default value.

Example
Consider the plant model 4/(s-0.001), an unstable first-order system, and two constant gain controllers, k1 = 1 and k2 = 10. Both controllers stabilize the closed-loop system.

x = tf(4,[1 -0.001]);
clp1 = feedback(x,1)

Transfer function:
    4
---------
s + 3.999

clp2 = feedback(x,10)

Transfer function:
     4
----------
s + 39.999

Calculate the robust stability margin of the closed-loop system with the feedback gains 1 and 10.

[marg1,freq1] = ncfmargin(x,1)
marg1 =
    0.7071
freq1 =
   Inf

[marg2,freq2] = ncfmargin(x,10)
marg2 =
    0.0995
freq2 =
   Inf

The closed-loop system with controller k1, clp1, has a normalized coprime factor robust stability margin of 0.71, which is achieved at infinite frequency. This indicates that the closed-loop system is very robust to unstructured perturbations. The closed-loop system with controller k2, clp2, has a normalized coprime factor robust stability margin of 0.0995. This indicates that the closed-loop system is not robust to unstructured perturbations.

Construct an uncertain system, xu, by adding 11% unmodeled dynamics to the nominal system x, and analyze the robust stability of both closed loops.

xu = x + ultidyn('uncstruc',[1 1],'Bound',0.11);
[stabmarg1,report1] = robuststab(feedback(xu,1));
[stabmarg10,report10] = robuststab(feedback(xu,10));
disp(report10{1})
Uncertain System is NOT robustly stable to modeled uncertainty.
 -- It can tolerate up to 90.9% of modeled uncertainty.
 -- A destabilizing combination of 90.9% the modeled uncertainty exists, causing an instability at 1.64e+003 rad/s.
disp(report1{1})
Uncertain System is robustly stable to modeled uncertainty.
 -- It can tolerate up to 909% of modeled uncertainty.
 -- A destabilizing combination of 909% the modeled uncertainty exists, causing an instability at 165 rad/s.

The closed-loop system with K=1 is robustly stable in the presence of the unmodeled dynamics based on the robust stability analysis. In fact, it can tolerate 909% (that is, 9.09*11%) of the unmodeled LTI dynamics. Whereas the closed-loop system with the constant gain of 10 controller is not robustly stable: it can only tolerate 90.9% (that is, 0.909*11%) of the unmodeled LTI dynamics.

Algorithm
The computation of the gap amounts to solving 2-block H∞ problems (Georgiou, Smith 1988). The particular method used here for solving the H∞ problems is based on Green et al., 1990. The computation of the nugap uses the method of Vinnicombe, 1993.

Reference
• McFarlane, D.C., and K. Glover, Robust Controller Design using Normalised Coprime Factor Plant Descriptions, Lecture Notes in Control and Information Sciences, vol. 138, Springer Verlag, 1989.
• McFarlane, D.C., and K. Glover, "A Loop Shaping Design Procedure using H∞ Synthesis," IEEE Transactions on Automatic Control, vol. 37, no. 6, pp. 759-769, June 1992.
• Vinnicombe, G., "Measuring Robustness of Feedback Systems," PhD dissertation, Department of Engineering, University of Cambridge, 1993.

See Also
loopmargin    Performs a comprehensive analysis of feedback loop
gapmetric     Computes the gap and the Vinnicombe gap metric
norm          Computes the norm of a system
wcmargin      Calculate worst-case margins for feedback loop
ncfmr

Purpose
Balanced model truncation for normalized coprime factors

Syntax
GRED = ncfmr(G)
GRED = ncfmr(G,order)
[GRED,redinfo] = ncfmr(G,key1,value1,...)
[GRED,redinfo] = ncfmr(G,order,key1,value1,...)

Description
ncfmr returns a reduced-order model GRED formed by a set of balanced normalized coprime factors and a struct array redinfo containing the left and right coprime factors of G and their coprime Hankel singular values.

The Hankel singular values of the coprime factors of such a stable system indicate the respective "state energy" of the system. Hence, the reduced order can be directly determined by examining the system Hankel singular values. With only one input argument G, the function will show a Hankel singular value plot of the original model and prompt for the model order number to reduce. The ncfmr method allows the original model to have jω-axis singularities.

The left and right normalized coprime factors are defined as

• Left coprime factorization: G = Ml^(-1)(s)*Nl(s)
• Right coprime factorization: G = Nr(s)*Mr^(-1)(s)

where there exist stable Ur(s), Vr(s), Ul(s), and Vl(s) such that

Ur*Nr + Vr*Mr = I
Nl*Ul + Ml*Vl = I

The left/right coprime factors are stable, which implies that Mr(s) should contain as RHP-zeros all the RHP-poles of G(s). The coprimeness also implies that there should be no common RHP-zeros in Nr(s) and Mr(s), i.e., when forming G = Nr(s)*Mr^(-1)(s), there should be no pole-zero cancellations.

This table describes the input arguments for ncfmr.

Argument    Description
G           LTI model to be reduced (with no other inputs, plots its Hankel singular values and prompts for the reduced order)
ORDER       (Optional) an integer for the desired order of the reduced model, or optionally a vector packed with desired orders for batch runs

A batch run of several different reduced-order models can be generated by specifying order = x:y. 'MaxError' can be specified in the same fashion as an alternative for 'ORDER'. In that case, the reduced order is determined when the sum of the tails of the Hankel singular values reaches the 'MaxError'.

Argument     Value                                   Description
'MaxError'   A real number or a vector of different  Reduce to achieve H∞ error. When present,
             errors                                  'MaxError' overrides the ORDER input.
'Display'    'on' or 'off'                           Display Hankel singular value plots (default 'off').
'Order'      integer, vector or cell array           Order of reduced model. Use only if not specified as 2nd argument.

This table describes the output arguments.

Argument    Description
GRED        LTI reduced-order model; becomes a multidimensional array when the input is an array of different model orders
REDINFO     A STRUCT array with 3 fields:
            • REDINFO.GL (left coprime factor)
            • REDINFO.GR (right coprime factor)
            • REDINFO.hsv (Hankel singular values)

G can be stable or unstable, continuous or discrete.

Algorithm
Given a state space (A,B,C,D) of a system and k, the desired reduced order, the following steps produce a similarity transformation to truncate the original state-space system to the kth order reduced model.

1 Find the normalized coprime factors of G by solving the Hamiltonian described in [1]:

  Gl = [Nl, Ml]
  Gr = [Nr; Mr]

2 Perform a kth order square-root balanced model truncation on Gl (or Gr) [2].

3 The reduced model GRED is [2]:

  Ahat = Ac - Bm*Cl
  Bhat = Bn - Bm*Dl
  Chat = Cl
  Dhat = Dl

where

  Nl := (Ac, Bn, Cc, Dn)
  Ml := (Ac, Bm, Cc, Dm)
  Cl = (Dm)^(-1)*Cc
  Dl = (Dm)^(-1)*Dn

Example
Given a continuous or discrete, stable or unstable system, the following commands can get a set of reduced-order models based on your selections:

rand('state',1234);
randn('state',5678);
G = rss(30,5,4);
G.d = zeros(5,4);
[g1,redinfo1] = ncfmr(G);    % display Hankel SV plot
                             % and prompt for order (try 15:20)
[g2,redinfo2] = ncfmr(G,20);
[g3,redinfo3] = ncfmr(G,[10:2:18]);
[g4,redinfo4] = ncfmr(G,'MaxError',[0.01, 0.05]);
for i = 1:4
    figure(i)
    eval(['sigma(G,g' num2str(i) ');']);
end

Reference
[1] M. Vidyasagar, Control System Synthesis - A Factorization Approach, London: The MIT Press, 1985.
[2] M. G. Safonov and R. Y. Chiang, "A Schur Method for Balanced Model Reduction," IEEE Trans. on Automat. Contr., vol. AC-34, no. 7, July 1989, pp. 729-733.

See Also
reduce      Top level model reduction routines
balancmr    Balanced truncation via square-root method
schurmr     Balanced truncation via Schur method
bstmr       Balanced stochastic truncation via Schur method
hankelmr    Hankel minimum degree approximation
hankelsv    Hankel singular value
ncfsyn

Purpose
Loop shaping design using the Glover-McFarlane method

Syntax
[K,CL,GAM,INFO] = ncfsyn(G)
[K,CL,GAM,INFO] = ncfsyn(G,W1)
[K,CL,GAM,INFO] = ncfsyn(G,W1,W2)

Description
ncfsyn is a method for designing controllers that uses a combination of loop shaping and robust stabilization as proposed in McFarlane and Glover [1], [2]. The first step is for you to select a pre- and post-compensator W1 and W2, so that the gain of the 'shaped plant' Gs := W2*G*W1 is sufficiently high at frequencies where good disturbance attenuation is required and is sufficiently low at frequencies where good robust stability is required. The second step is to use ncfsyn to compute an optimal positive feedback controller K = W1*Ks*W2, where Ks = K∞ is an optimal H∞ controller that simultaneously minimizes the two H∞ cost functions

γ := min over K of || [I; K] * inv(I - Gs*K) * [Gs, I] ||_inf
γ := min over K of || [I; Gs] * inv(I - K*Gs) * [K, I] ||_inf

The optimal Ks has the property that the sigma plot of the shaped loop Ls = W2*G*W1*Ks matches the target loop shape Gs optimally, roughly to within plus or minus 20*log10(GAM) decibels. That is, for most plants,

σ(W2*G*W1*K∞), db = σ(W2*G*W1), db ± 20*log10(GAM), db

The number GAM = 1/ncfmargin(Gs,Ks) is always greater than 1 and gives a good indication of robustness of stability to a wide class of unstructured plant variations, with values in the range 1 < GAM < 3 corresponding to satisfactory stability margins for most typical control system designs. For more precise bounds on loop-shaping accuracy, see Theorem 16.12 of Zhou and Glover [4].

The closed-loop H∞-norm objective has the standard signal gain interpretation, so you can use the weights W1 and W2 for loop shaping. Finally, it can be shown that the controller K∞ does not substantially affect the loop shape at frequencies where the gain of W2*G*W1 is either high or low, and will guarantee satisfactory stability margins in the frequency region of gain crossover. In the regulator setup, the final controller to be implemented is K = W1*K∞*W2.

Theory ensures that if Gs = N*inv(M) is a normalized coprime factorization (NCF) of the weighted plant model Gs satisfying

N(jw)'*N(jw) + M(jw)'*M(jw) = I

then the control system will remain robustly stable for any perturbation Gs~ to the weighted plant model Gs that can be written

Gs~ = (N + ∆1)*inv(M + ∆2)

for some stable pair ∆1, ∆2 satisfying || [∆1; ∆2] ||_inf < MARG := 1/GAM.

Input arguments
G        LTI plant to be controlled, either SISO or MIMO
W1,W2    stable min-phase LTI weights. Default is W1=I, W2=I

Output arguments
K      LTI controller K = W1*Ks*W2
CL     LTI H∞ optimal closed loop [I; K∞] * inv(I - W2*G*W1*K∞) * [W2*G*W1, I]
GAM    H∞ optimal cost γ = 1/b(W2*G*W1, K∞) = hinfnorm(CL) ≥ 1
INFO   structure array containing additional information

Additional output INFO fields:
INFO.emax    nugap robustness emax = 1/GAM = ncfmargin(Gs,Ks)
INFO.Gs      'shaped plant' Gs = W2*G*W1
INFO.Ks      Ks = K∞ = ncfsyn(Gs) = ncfsyn(W2*G*W1)

[MARG,FREQ] = ncfmargin(G,K,TOL) calculates the normalized coprime factor/gap metric robust stability margin assuming negative feedback:

MARG = b(G,-K) = 1 / || [I; -K] * inv(I + G*K) * [G, I] ||_inf

where G and K are the LTI plant and controller. FREQ is the peak frequency, i.e., the frequency at which the infinity norm is reached to within TOL, and TOL (default = .001) is the tolerance used to compute the H∞ norm.

Algorithm
See McFarlane and Glover [1], [2] for details.

Example
The following code shows how ncfsyn can be used for loop shaping. The achieved loop G*K has a sigma plot equal to that of the target loop G*W1 to within plus or minus 20*log10(GAM) decibels.

s = zpk('s');
G = (s-1)/(s+1)^2;
W1 = 0.5/s;
[K,CL,GAM] = ncfsyn(G,W1);
sigma(G*K,'r',G*W1,'r-.',G*W1*GAM,'k-.',G*W1/GAM,'k-.')

Figure 10-15: Achieved loop G*K and shaped loop Gs, ±20*log10(GAM) db

Reference
[1] McFarlane, D.C., and K. Glover, Robust Controller Design using Normalised Coprime Factor Plant Descriptions, Lecture Notes in Control and Information Sciences, vol. 138, Springer Verlag, 1989.
[2] McFarlane, D.C., and K. Glover, "A Loop Shaping Design Procedure using H∞ Synthesis," IEEE Transactions on Automatic Control, vol. 37, no. 6, pp. 759-769, June 1992.
[3] Vinnicombe, G., "Measuring Robustness of Feedback Systems," PhD dissertation, Department of Engineering, University of Cambridge, 1993.
[4] Zhou, K., and J.C. Doyle, Essentials of Robust Control, NY: Prentice-Hall, 1998.

See Also
gapmetric    Computes the gap and the Vinnicombe gap metric
hinfsyn      H∞ controller synthesis
loopsyn      H∞ loop shaping controller synthesis
ncfmargin    Normalized coprime stability margin of the plant-controller feedback loop
newlmi

Purpose
Attach an identifying tag to LMIs

Syntax
tag = newlmi

Description
newlmi adds a new LMI to the LMI system currently described and returns an identifier tag for this LMI. This identifier can be used in lmiterm, showlmi, or dellmi commands to refer to the newly declared LMI. Tagging LMIs is optional and only meant to facilitate code development and readability.

Identifiers can be given mnemonic names to help keep track of the various LMIs. Their value is simply the ranking of each LMI in the system (in the order of declaration). They prove useful when some LMIs are deleted from the LMI system. In such cases, the identifiers are the safest means of referring to the remaining LMIs.

See Also
setlmis    Initialize the description of an LMI system
lmivar     Specify the matrix variables in an LMI problem
lmiterm    Specify the term content of LMIs
getlmis    Get the internal description of an LMI system
lmiedit    Specify or display systems of LMIs as MATLAB expressions
dellmi     Remove an LMI from a given system of LMIs
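A brief sketch of how newlmi tags fit into an LMI description. The matrix A below is arbitrary illustrative data; the workflow follows the setlmis/lmivar/lmiterm/getlmis sequence listed above.

```matlab
% Declare the LMI system  A'*X + X*A < 0,  X > I,  with tagged LMIs.
A = [-1 2; 0 -3];                % illustrative stable matrix
setlmis([]);                     % initialize a new LMI description
X = lmivar(1,[2 1]);             % X: 2-by-2 symmetric matrix variable

lyap = newlmi;                   % tag for the Lyapunov inequality
lmiterm([lyap 1 1 X],1,A,'s');   % left-hand side: X*A + A'*X

xpos = newlmi;                   % tag for the positivity constraint X > I
lmiterm([xpos 1 1 0],1);         % left-hand side: I
lmiterm([-xpos 1 1 X],1,1);      % right-hand side: X  (so I < X)

lmis = getlmis;                  % internal description of the two LMIs
```

The tags lyap and xpos can later be passed to showlmi or, for example, dellmi(lmis,xpos), keeping valid references to the remaining LMIs.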
normalized2actual

Purpose
Convert the value for an atom in normalized coordinates to the corresponding actual value

Syntax
avalue = normalized2actual(A,NV)

Description
avalue = normalized2actual(A,NV) converts the value NV for the atom A in normalized coordinates to the corresponding actual value. If NV is an array of values, then avalue will be an array of the same dimension.

Example
Create an uncertain real parameter with a range that is symmetric about the nominal value. In normalized coordinates, each end point of the range is 1 unit from the nominal. Points that lie inside the range are less than 1 unit from the nominal, while points that lie outside the range are greater than 1 unit from the nominal.

a = ureal('a',3,'range',[1 5]);
normalized2actual(a,[-1 1])
ans =
    1.0000    5.0000
normalized2actual(a,[-1.5 1.5])
ans =
    0.0000    6.0000
actual2normalized(a,[1 3 5])
ans =
   -1.0000         0    1.0000

See Also
actual2normalized    Calculates normalized values of an uncertain atom
robuststab           Calculates robust stability margin
robustperf           Calculates robust performance margin
pdlstab

Purpose
Assess the robust stability of a polytopic or parameter-dependent system

Syntax
[tau,Q0,Q1,...] = pdlstab(pds,options)

Description
pdlstab uses parameter-dependent Lyapunov functions to establish the stability of uncertain state-space models over some parameter range or polytope of systems. Only sufficient conditions for the existence of such Lyapunov functions are available in general. Nevertheless, the resulting robust stability tests are always less conservative than quadratic stability tests when the parameters are either time-invariant or slowly varying.

For an affine parameter-dependent system

E(p)x' = A(p)x + B(p)u
y = C(p)x + D(p)u

with p = (p1, ..., pn) in R^n, pdlstab seeks a Lyapunov function of the form

V(x,p) = x'*Q(p)^(-1)*x,   Q(p) = Q0 + p1*Q1 + ... + pn*Qn

such that dV(x,p)/dt < 0 along all admissible parameter trajectories.

For a time-invariant polytopic system

E x' = Ax + Bu
y = Cx + Du

with

[A+jE, B; C, D] = sum over i of ai*[Ai+jEi, Bi; Ci, Di],   ai >= 0,  sum of ai = 1     (10-19)

pdlstab seeks a Lyapunov function of the form

V(x,a) = x'*Q(a)^(-1)*x,   Q(a) = a1*Q1 + ... + an*Qn

such that dV(x,a)/dt < 0 for all polytopic decompositions (10-19).

The system description pds is specified with psys and contains information about the range of values and rate of variation of each parameter pi.

Several options and control parameters are accessible through the optional argument options:
• Setting options(1)=0 tests robust stability (default).
• When options(2)=0, pdlstab uses simplified sufficient conditions for faster running times. Set options(2)=1 to use the least conservative conditions.

Remark
For affine parameter-dependent systems with time-invariant parameters, there is equivalence between the robust stability of

E(p)x' = A(p)x          (10-20)

and that of the dual system

E(p)'z' = A(p)'z        (10-21)

However, the second system may admit an affine parameter-dependent Lyapunov function while the first does not. In such cases, pdlstab automatically restarts and tests stability on the dual system (10-21) when it fails on (10-20).

See Also
quadstab    Quadratic stability of polytopic or affine parameter-dependent systems
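As a hedged sketch of a typical call, the commands below test an illustrative affine model with one time-invariant parameter. The data matrices are invented for illustration, and ltisys, pvec and psys are used as described on their own reference pages (the trailing zero in the second ltisys call sets E1 = 0 so that E(p) = I).

```matlab
% Robust stability of  x' = (A0 + p1*A1)*x,  p1 in [-1, 1]
A0 = [-2 1; 0 -3];  A1 = [0.1 0; 0.2 -0.1];   % illustrative data
s0 = ltisys(A0,zeros(2,1),zeros(1,2),0);      % constant term S0
s1 = ltisys(A1,zeros(2,1),zeros(1,2),0,0);    % coefficient term S1 (E1 = 0)
pv = pvec('box',[-1 1]);                      % one time-invariant parameter
pds = psys(pv,[s0 s1]);                       % affine parameter-dependent model
tau = pdlstab(pds);                           % robustly stable if tau < 0
```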
pdsimul

Purpose
Time response of a parameter-dependent system along a given parameter trajectory

Syntax
pdsimul(pds,'traj',tf,'ut',xi,options)
[t,x,y] = pdsimul(pds,'traj',tf,'ut',xi,options)

Description
pdsimul simulates the time response of an affine parameter-dependent system

E(p)x' = A(p)x + B(p)u
y = C(p)x + D(p)u

along a parameter trajectory p(t) and for an input signal u(t). The affine system pds is specified with psys. The parameter trajectory and input signals are specified by two time functions p=traj(t) and u=ut(t). If 'ut' is omitted, the response to a step input is computed by default. The final time and initial state vector can be reset through tf and xi (their respective default values are 5 seconds and 0). Finally, options gives access to the parameters controlling the ODE integration (type help gear for details).

When invoked without output arguments, pdsimul plots the output trajectories y(t). Otherwise, it returns the vector of integration time points t as well as the state and output trajectories x, y.

The function pdsimul also accepts the polytopic representation of such systems as returned by aff2pol(pds) or hinfgs.

See Also
psys    Specification of uncertain state-space models
pvec    Quantification of uncertainty on physical parameters
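For illustration, a minimal sketch of a simulation call follows. The model pds is assumed to have been created with psys, and trajfun is a hypothetical user-defined function returning the parameter value at time t.

```matlab
% --- trajfun.m (hypothetical user-defined parameter trajectory) ---
% function p = trajfun(t)
% p = 0.5*sin(2*t);    % must stay inside the range declared with pvec

[t,x,y] = pdsimul(pds,'trajfun',10);   % step response over 10 seconds
plot(t,y)                              % plot the output trajectories
```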
polydec

Purpose
Compute polytopic coordinates with respect to the box corners

Syntax
vertx = polydec(PV)
[C,vertx] = polydec(PV,P)

Description
vertx = polydec(PV) takes an uncertain parameter vector PV taking values ranging in a box, and returns the corners or vertices of the box as columns of the matrix vertx.

[C,vertx] = polydec(PV,P) takes an uncertain parameter vector PV and a value P of the parameter vector PV, and returns the convex decomposition C of P over the set VERTX of box corners:

P = c1*VERTX(:,1) + ... + cn*VERTX(:,n)
cj >= 0,  c1 + ... + cn = 1

The list vertx of corners can be obtained directly by typing

vertx = polydec(PV)

See Also
pvec       Quantification of uncertainty on physical parameters
pvinfo     Describe a parameter vector specified with pvec
aff2pol    Convert affine parameter-dependent models to polytopic ones
hinfgs     Synthesis of gain-scheduled H∞ controllers
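A short sketch, reusing the two-parameter ranges from the pvec example:

```matlab
% Corners of the box  p1 in [-1 2], p2 in [20 50],  and the convex
% decomposition of an interior point over those corners.
pv = pvec('box',[-1 2; 20 50]);
vertx = polydec(pv);                 % 2-by-4 matrix of box corners
[c,vertx] = polydec(pv,[0.5; 30]);   % c >= 0 and sum(c) = 1
% vertx*c reproduces the point [0.5; 30]
```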
popov

Purpose
Perform the Popov robust stability test

Syntax
[t,P,S,N] = popov(sys,delta,flag)

Description
popov uses the Popov criterion to test the robust stability of dynamical systems with possibly nonlinear and/or time-varying uncertainty. The uncertain system must be described as the interconnection of a nominal LTI system sys and some uncertainty delta. The command

[t,P,S,N] = popov(sys,delta)

tests the robust stability of this interconnection. Robust stability is guaranteed if t < 0. Then P determines the quadratic part x'Px of the Lyapunov function, and S and N are the Popov multipliers.

If the uncertainty delta contains real parameter blocks, the conservatism of the Popov criterion can be reduced by first performing a simple loop transformation. To use this refined test, call popov with the syntax

[t,P,S,N] = popov(sys,delta,1)

See Also
quadstab    Quadratic stability of polytopic or affine parameter-dependent systems
pdlstab     Robust stability of polytopic or affine parameter-dependent systems (P-system)
psinfo

Purpose
Inquire about polytopic or parameter-dependent systems created with psys

Syntax
psinfo(ps)
[type,k,ns,ni,no] = psinfo(ps)
pv = psinfo(ps,'par')
sk = psinfo(ps,'sys',k)
sys = psinfo(ps,'eval',p)

Description
psinfo is a multi-usage function for queries about a polytopic or parameter-dependent system ps created with psys. It performs the following operations depending on the calling sequence:

• psinfo(ps) displays the type of system (affine or polytopic); the number k of SYSTEM matrices involved in its definition; and the numbers ns, ni, no of states, inputs, and outputs of the system. This information can be optionally stored in MATLAB variables by providing output arguments.

• pv = psinfo(ps,'par') returns the parameter vector description (for parameter-dependent systems only).

• sk = psinfo(ps,'sys',k) returns the kth SYSTEM matrix involved in the definition of ps. The ranking k is relative to the list of systems syslist used in psys.

• sys = psinfo(ps,'eval',p) instantiates the system for a given vector p of parameter values or polytopic coordinates.

For affine parameter-dependent systems defined by the SYSTEM matrices S0, S1, ..., Sn, the entries of p should be real parameter values p1, ..., pn and the result is the LTI system of SYSTEM matrix

S(p) = S0 + p1*S1 + ... + pn*Sn

For polytopic systems with SYSTEM matrix ranging in Co{S1, ..., Sk}, the entries of p should be polytopic coordinates p1, ..., pn satisfying pj >= 0 and the result is the interpolated LTI system of SYSTEM matrix

S = (p1*S1 + ... + pn*Sn) / (p1 + ... + pn)

See Also
psys    Specification of uncertain state-space models
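A minimal sketch of these queries on an illustrative affine model (the matrices are invented; ltisys and pvec are described on their own reference pages):

```matlab
A0 = [-2 1; 0 -3];  A1 = [0.1 0; 0 -0.1];     % illustrative data
s0 = ltisys(A0,zeros(2,1),zeros(1,2),0);
s1 = ltisys(A1,zeros(2,1),zeros(1,2),0,0);    % coefficient term, E1 = 0
pds = psys(pvec('box',[-1 1]),[s0 s1]);

psinfo(pds)                     % display type, number of matrices, sizes
pv  = psinfo(pds,'par');        % parameter vector description
sk  = psinfo(pds,'sys',2);      % second SYSTEM matrix in the list
sys = psinfo(pds,'eval',0.5);   % LTI system of SYSTEM matrix S0 + 0.5*S1
```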
psys

Purpose
Specify polytopic or parameter-dependent linear systems

Syntax
pols = psys(syslist)
affs = psys(pv,syslist)

Description
psys specifies state-space models where the state-space matrices can be uncertain, time-varying, or parameter-dependent. Two types of uncertain state-space models can be manipulated in the LMI Control Toolbox:

• Polytopic systems

E(t)x' = A(t)x + B(t)u
y = C(t)x + D(t)u

whose SYSTEM matrix takes values in a fixed polytope:

S(t) = [A(t)+jE(t), B(t); C(t), D(t)]  in  Co{S1, ..., Sk}

where S1, ..., Sk are given "vertex" systems and

Co{S1, ..., Sk} = { sum of ai*Si : ai >= 0, sum of ai = 1 }

denotes the convex hull of S1, ..., Sk (polytope of matrices with vertices S1, ..., Sk).

• Affine parameter-dependent systems

E(p)x' = A(p)x + B(p)u
y = C(p)x + D(p)u

where A(.), B(.), ..., E(.) are fixed affine functions of some vector p = (p1, ..., pn) of real parameters, i.e.,

S(p) = [A(p)+jE(p), B(p); C(p), D(p)]
     = S0 + p1*S1 + ... + pn*Sn

where S0, S1, ..., Sn are given SYSTEM matrices. The parameters pi can be time-varying or constant but uncertain.

Both types of models are specified with the function psys. The argument syslist lists the SYSTEM matrices Si characterizing the polytopic value set or parameter dependence. In addition, the description pv of the parameter vector (range of values and rate of variation) is required for affine parameter-dependent models (see pvec for details).

Thus, a polytopic model with vertex systems S1, ..., S4 is created by

pols = psys([s1,s2,s3,s4])

while an affine parameter-dependent model with 4 real parameters is defined by

affs = psys(pv,[s0,s1,s2,s3,s4])

The output is a structured matrix storing all the relevant information.

See Also
psinfo     Inquire about polytopic or parameter-dependent systems created with psys
pvec       Quantification of uncertainty on physical parameters
aff2pol    Convert affine parameter-dependent models to polytopic ones
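A hedged end-to-end sketch with invented data. The vertex systems are built with ltisys; the trailing zero on the coefficient term of the affine model sets its E matrix to zero, as described on the ltisys reference page.

```matlab
a1 = [-1 0; 1 -2];  a2 = [-1 0.5; 1 -4];      % illustrative data
b = [1; 0];  c = [0 1];  d = 0;

% Polytopic model: SYSTEM matrix ranges in Co{S1,S2}
pols = psys([ltisys(a1,b,c,d) ltisys(a2,b,c,d)]);

% Affine model: A(p) = A0 + p1*A1 with p1 in [-1, 1]
pv = pvec('box',[-1 1]);
s0 = ltisys(a1,b,c,d);
sp = ltisys(a2,zeros(2,1),zeros(1,2),0,0);    % coefficient term, E = 0
affs = psys(pv,[s0 sp]);
```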
pvec

Purpose
Specify the range and rate of variation of uncertain or time-varying parameters

Syntax
pv = pvec('box',range,rates)
pv = pvec('pol',vertices)

Description
pvec is used in conjunction with psys to specify parameter-dependent systems. Such systems are parametrized by a vector p = (p1, ..., pn) of uncertain or time-varying real parameters pi. The function pvec defines the range of values and the rates of variation of these parameters.

The type 'box' corresponds to independent parameters ranging in intervals

pjmin <= pj <= pjmax

The parameter vector p then takes values in a hyperrectangle of R^n called the parameter box. The second argument range is an n-by-2 matrix that stacks up the extremal values pjmin and pjmax of each pj. If the third argument rates is omitted, all parameters are assumed time-invariant. Otherwise, rates is also an n-by-2 matrix and its jth row specifies lower and upper bounds vjmin and vjmax on dpj/dt:

vjmin <= dpj/dt <= vjmax

Set vjmin = -Inf and vjmax = Inf if pj(t) can vary arbitrarily fast or discontinuously.

The type 'pol' corresponds to parameter vectors p ranging in a polytope of the parameter space R^n. This polytope is defined by a set of vertices V1, ..., Vn corresponding to "extremal" values of the vector p. Such parameter vectors are declared by the command

pv = pvec('pol',[v1,v2,...,vn])

where the second argument is the concatenation of the vectors v1, ..., vn.

The output argument pv is a structured matrix storing the parameter vector description. Use pvinfo to read the contents of pv.

Example
Consider a problem with two time-invariant parameters p1 in [-1, 2] and p2 in [20, 50]. The corresponding parameter vector p = (p1, p2) is specified by

pv = pvec('box',[-1 2;20 50])

Alternatively, this vector can be regarded as taking values in the rectangle drawn in Figure 10-16. The four corners of this rectangle are the four vectors

v1 = (-1, 20),  v2 = (-1, 50),  v3 = (2, 20),  v4 = (2, 50)

Hence, you could also specify p by

pv = pvec('pol',[v1,v2,v3,v4])

Figure 10-16: Parameter box (p1 ranging from -1 to 2, p2 ranging from 20 to 50)

See Also
pvinfo    Describe a parameter vector specified with pvec
psys      Specification of uncertain state-space models
pvinfo

Purpose
Describe a parameter vector specified with pvec

Syntax
[typ,k,nv] = pvinfo(pv)
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)
vj = pvinfo(pv,'par',j)
p = pvinfo(pv,'eval',c)

Description
pvinfo retrieves information about a vector p = (p1, ..., pn) of real parameters declared with pvec and stored in pv. The command pvinfo(pv) displays the type of parameter vector ('box' or 'pol'), the number n of scalar parameters, and, for the type 'pol', the number of vertices used to specify the parameter range.

For the type 'box':

[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',j)

returns the bounds on the value and rate of variation of the jth real parameter pj. Specifically,

pmin <= pj(t) <= pmax,   dpmin <= dpj/dt <= dpmax

For the type 'pol':

pvinfo(pv,'par',j)

returns the jth vertex of the polytope of R^n in which p ranges, while

pvinfo(pv,'eval',c)

returns the value of the parameter vector p given its barycentric coordinates c with respect to the polytope vertices (V1, ..., Vk). The vector c must be of length k and have nonnegative entries. The corresponding value of p is then given by

p = (c1*V1 + ... + ck*Vk) / (c1 + ... + ck)

See Also
pvec    Quantification of uncertainty on physical parameters
psys    Specification of uncertain state-space models
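A short sketch, reusing the 'box' parameter vector from the pvec example:

```matlab
pv = pvec('box',[-1 2; 20 50]);
[typ,n] = pvinfo(pv);                            % typ = 'box', n = 2 parameters
[pmin,pmax,dpmin,dpmax] = pvinfo(pv,'par',2);    % bounds for p2 and its rate
```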
such that T 2 T dV ∀u ∈ L 2 .+ y y – γ u u < 0 dt P>0 Minimizing γ over such quadratic Lyapunov functions yields the quadratic H∞ performance. .quadperf Purpose Syntax Description Compute the quadratic H∞ performance of a polytopic or parameterdependent system [perf. The command [perf. perf is the largest portion of the parameter box where the quadratic RMS gain remains smaller than the positive value g (for affine parameterdependent systems only).P] = quadperf(ps. is the smallest γ > 0 such that y L ≤γ u L 2 2 y = C ( t )x + D ( t )u (1022) (1023) for all input u(t) with bounded energy. an upper bound on the true RMS gain. The Lyapunov matrix P yielding the performance perf is returned in P. The default value is 0 10239 . The optional input options gives access to the following task and control parameters: • If options(1)=1. A sufficient condition for (923) is the existence of a quadratic Lyapunov function V(x) = xTPx.P] = quadperf(ps) computes the quadratic H∞ performance perf when (922) is a polytopic or affine parameterdependent system ps (see psys).g.options) 10quadperf The RMS gain of the timevarying system · E ( t )x = A ( t )x + B ( t )u.
• If options(2)=1, quadperf uses the least conservative quadratic performance test. The default is options(2)=0 (fast mode).

• options(3) is a user-specified upper bound on the condition number of P (the default is 10^9).

See Also
quadstab   Quadratic stability of polytopic or affine parameter-dependent systems
psys       Specification of uncertain state-space models
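A hedged usage sketch (not part of the original page): the matrices are arbitrary, and the pvec/ltisys/psys calling syntax is assumed from those functions' own reference pages.

```matlab
% Sketch: quadratic H-infinity performance of an affine
% parameter-dependent system x' = (A0 + k*A1)x + B*u, y = C*x.
pv = pvec('box',[1 3]);                    % one parameter k in [1,3]
s0 = ltisys([0 1;-2 -1],[0;1],[1 0],0);    % system for k = 0
s1 = ltisys([0 0;-1 0],[0;0],[0 0],0,0);   % coefficient of k (e-term set to 0)
ps = psys(pv,[s0 s1]);                     % affine parameter-dependent system
[perf,P] = quadperf(ps);                   % quadratic H-infinity performance
```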
quadstab

Purpose
Quadratic stability of polytopic or affine parameter-dependent systems

Syntax
[tau,P] = quadstab(ps,options)

Description
For affine parameter-dependent systems

    E(p)ẋ = A(p)x,    p(t) = (p1(t), ..., pn(t))

or polytopic systems

    E(t)ẋ = A(t)x,    (A, E) ∈ Co{(A1, E1), ..., (An, En)},

quadstab seeks a fixed Lyapunov function V(x) = xᵀPx with P > 0 that establishes quadratic stability. The affine or polytopic model is described by ps (see psys).

The task performed by quadstab is selected by options(1):

• if options(1)=0 (default), quadstab assesses quadratic stability by solving the LMI problem

    Minimize τ over Q = Qᵀ such that
        AᵀQE + EQAᵀ < τI for all admissible values of (A, E)
        Q > I

The global minimum of this problem is returned in tau, and the system is quadratically stable if tau < 0.

• if options(1)=1, quadstab computes the largest portion of the specified parameter range where quadratic stability holds (only available for affine models). Specifically, if each parameter pi varies in the interval pi ∈ [pi0 − δi, pi0 + δi], quadstab computes the largest θ > 0 such that quadratic stability holds over the parameter box pi ∈ [pi0 − θδi, pi0 + θδi]. This "quadratic stability margin" is returned in tau, and ps is quadratically stable if tau ≥ 1.

Given the solution Qopt of the LMI optimization, the Lyapunov matrix P is given by P = Qopt⁻¹. This matrix is returned in P.
Other control parameters can be accessed through options(2) and options(3):

• if options(2)=0 (default), quadstab runs in fast mode, using the least expensive sufficient conditions. Set options(2)=1 to use the least conservative conditions.

• options(3) is a bound on the condition number of the Lyapunov matrix P. The default is 10^9.

See Also
pdlstab    Robust stability of polytopic or affine parameter-dependent systems (P-system)
decay      Quadratic decay rate of polytopic or affine P-systems
quadperf   Compute the quadratic H∞ performance of a polytopic or parameter-dependent system
psys       Specification of uncertain state-space models
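A hedged usage sketch (not part of the original page): the vertex matrices are arbitrary, and the ltisys/psys polytopic syntax is assumed from those reference pages.

```matlab
% Sketch: quadratic stability of a polytopic system
% x' = A(t)x with A(t) ranging in Co{A1, A2}.
A1 = [-1  2 ;  0 -2];
A2 = [-1  2 ; -1 -3];
ps = psys([ltisys(A1) ltisys(A2)]);  % polytopic P-system
[tau,P] = quadstab(ps);              % tau < 0 means quadratically stable
```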
randatom

Purpose
Generate random uncertain atom objects

Syntax
A = randatom
A = randatom(Type)
A = randatom(Type,sz)

Description
A = randatom(Type) generates a 1-by-1 type uncertain object. Valid values for Type include 'ureal', 'ultidyn', 'ucomplex', and 'ucomplexm'.

A = randatom(Type,sz) generates an sz(1)-by-sz(2) uncertain object. Valid values for Type include 'ultidyn' or 'ucomplexm'. If Type is set to 'ureal' or 'ucomplex', the size variable is ignored and A is a 1-by-1 uncertain object.

A = randatom, where randatom has no input arguments, results in a 1-by-1 uncertain object. The class of this object is randomly selected among 'ureal', 'ultidyn' and 'ucomplex'.

In general, both rand and randn are used internally. You can control the result of randatom by setting seeds for both random number generators before calling the function.

Example
The following statement creates the ureal uncertain object xr. You will get the results shown below if both the random number generator seeds are set to 29. Note that your display may differ because a random seed is used.

rand('seed',29);
randn('seed',29);
xr = randatom('ureal')
Uncertain Real Parameter: Name BMSJA, NominalValue 6.75, Range [1.89278  7.70893]

The following statement creates the ultidyn uncertain object xlti with three inputs and four outputs.

xlti = randatom('ultidyn',[4 3])
Uncertain GainBounded LTI Dynamics: Name OOJGS, 4x3, Gain Bound = 0.646

See Also
rand       Generates uniformly distributed random numbers
randn      Generates normally distributed random numbers
randumat   Creates a random uncertain matrix
randuss     Creates a random uncertain system
ucomplex    Creates an uncertain complex parameter
ucomplexm   Creates an uncertain complex matrix
ultidyn     Creates an uncertain linear time-invariant object
randumat

Purpose
Generate random uncertain umat objects

Syntax
um = randumat(ny,nu)
um = randumat

Description
um = randumat(ny,nu) generates an uncertain matrix of size ny-by-nu, including up to four uncertain objects. randumat randomly selects from uncertain atoms of type 'ureal', 'ultidyn', and 'ucomplex'.

um = randumat results in a 1-by-1 umat uncertain object.

Example
The following statement creates the umat uncertain object x1 of size 2-by-3. Note that your result may differ because a random seed is used.

x1 = randumat(2,3)
UMAT: 2 Rows, 3 Columns
  ROQAW: complex, nominal = 3.366+2.033i, radius = 0.568, 3 occurrences
  VDTIH: complex, nominal = 9.92+4.81i, radius = 1.81, 2 occurrences
  XOLLJ: real, nominal = 5.76, variability = [-1.98681  0.28174], 2 occurrences

The following statement creates the umat uncertain object x2 of size 4-by-2 with the seed 91.

rand('seed',91);
randn('seed',91);
x2 = randumat(4,2)
UMAT: 4 Rows, 2 Columns
  SSAFF: complex, nominal = 0.99, radius = 1.0628, 3 occurrences
  VVNHL: complex, nominal = 0.13+0.84i, radius = 1.99, 1 occurrence
  UEPDY: real, nominal = 4.133993, range = [3.73202  5.28174], 1 occurrence

See Also
rand       Generate uniformly distributed random numbers
randn      Generate normally distributed random numbers
randatom   Create a random uncertain atom
randuss    Create a random uncertain system
ucomplex   Creates an uncertain complex parameter
ultidyn   Creates an uncertain linear time-invariant object
randuss

Purpose
Generate stable, random uss objects

Syntax
usys = randuss
usys = randuss(n)
usys = randuss(n,p)
usys = randuss(n,p,m)
usys = randuss(n,p,m,Ts)

Description
usys = randuss(n) generates an nth order single-input/single-output uncertain continuous-time system.

usys = randuss(n,p) generates an nth order single-input uncertain continuous-time system with p outputs.

usys = randuss(n,p,m) generates an nth order uncertain continuous-time system with p outputs and m inputs.

usys = randuss(n,p,m,Ts) generates an nth order uncertain discrete-time system with p outputs and m inputs. The sample time is Ts.

usys = randuss (without arguments) results in a 1-by-1 uncertain continuous-time uss object with up to four uncertain objects. randuss randomly selects from uncertain atoms of type 'ureal', 'ultidyn', and 'ucomplex'.

In general, both rand and randn are used internally. You can control the result of randuss by setting seeds for both random number generators before calling the function.

Example
The following statement creates a 5th order, continuous-time uncertain system s1, of size 2-by-3. Note that your display may differ because a random seed is used.

s1 = randuss(5,2,3)
USS: 5 States, 2 Outputs, 3 Inputs, Continuous System
  CTPQV: 1x1 LTI, max. gain = 2.2, 1 occurrence
  IGDHN: real, nominal = 4.03, variability = [-3.74667  22.7816]%, 1 occurrence
  MLGCD: complex, nominal = 8.36+3.296i, radius = 0.895, 1 occurrence
  OEDJK: complex, nominal = 0.3460+0.09i, +/- 2.07%, 1 occurrence

See Also
rand   Generates uniformly distributed random numbers
randn      Generates normally distributed random numbers
randatom   Creates a random uncertain atom
randumat   Creates a random uncertain matrix
ucomplex   Creates an uncertain complex parameter
ultidyn    Creates an uncertain linear time-invariant object
frd/rcond

Purpose
LAPACK reciprocal condition estimator of an frd object

Syntax
r = rcond(x)

Description
rcond(x) is an estimate for the reciprocal of the condition number of the frd object x in the 1-norm, obtained by the LAPACK condition estimator. rcond operates on x.ResponseData of the frd x at each frequency to construct r, and r = rcond(x) returns r as an frd object. If x is well conditioned, rcond(x) is near 1.0. If x is badly conditioned, rcond(x) is near EPS.

See Also
cond      Calculates condition number with respect to inversion
norm      Calculates matrix or vector norm
condest   Calculates a 1-norm condition number estimate
normest   Calculates a matrix 2-norm estimate
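A minimal, hedged sketch of the behavior described above (the frequency vector and response data are arbitrary):

```matlab
% Sketch: rcond applied frequency-by-frequency to an frd object.
w = logspace(-1,1,4);               % four frequency points
resp = zeros(2,2,4);
for k = 1:4
    resp(:,:,k) = [1 0 ; 0 10^k];   % conditioning worsens with k
end
G = frd(resp,w);
r = rcond(G);   % frd object; its value at the k-th frequency is rcond(resp(:,:,k))
```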
reduce

Purpose
Simplified access to Hankel singular value based model reduction functions

Syntax
GRED = reduce(G)
GRED = reduce(G,order)
[GRED,redinfo] = reduce(G,'key1','value1',...)
[GRED,redinfo] = reduce(G,order,'key1','value1',...)

Description
reduce returns a reduced order model GRED of G and a struct array redinfo containing the error bound of the reduced model, Hankel singular values of the original system, and some other relevant model reduction information. An error bound is a measure of how close GRED is to G and is computed based on either additive error, ||G−GRED||∞, multiplicative error, ||G⁻¹(G−GRED)||∞, or nugap error (ref.: ncfmr) [1],[4],[5].

Hankel singular values of a stable system indicate the respective state energy of the system. Hence, the reduced order can be directly determined by examining the system Hankel SV's. In many cases, the additive error method GRED = reduce(G,ORDER) is adequate to provide a good reduced order model. But for systems with lightly damped poles and/or zeros, a multiplicative error method (namely, GRED = reduce(G,ORDER,'ErrorType','mult')) that minimizes the relative error between G and GRED tends to produce a better fit.

By default, all the anti-stable part of a physical system is kept, because from a control stability point of view, getting rid of unstable state(s) is dangerous in modeling a system.

This table describes input arguments for reduce.

Argument   Description
G          LTI model to be reduced (G without any other inputs will plot its Hankel singular values and prompt for reduced order).
ORDER      (Optional) an integer for the desired order of the reduced model, or optionally a vector packed with desired orders for batch runs. A batch run of a series of different reduced order models can be generated by specifying order = x:y, or a vector of integers.
'MaxError' can be specified in the same fashion as an alternative for 'ORDER' after an 'ErrorType' is selected. In this case, the reduced order will be determined when the sum of the tails of the Hankel SV's reaches the 'MaxError'. When present, 'MaxError' overrides the ORDER input.

Argument      Value                       Description
'ErrorType'   'add'                       Additive error (default)
              'mult'                      Multiplicative error at model output
              'ncf'                       NCF nugap error
'Algorithm'   'balance'                   Default for 'add' (balancmr)
              'schur'                     Option for 'add' (schurmr)
              'hankel'                    Option for 'add' (hankelmr)
              'bst'                       Default for 'mult' (bstmr)
              'ncf'                       Default for 'ncf' (ncfmr)
'MaxError'    A real number or a vector   Reduce to achieve H∞ error; used only with
              of different errors         'ErrorType'. When present, 'MaxError'
                                          overrides the ORDER input.
'Weights'     {Wout,Win} cell array       Optional 1x2 cell array of LTI weights Wout
                                          (output) and Win (input); default is both
                                          identity. Weights on the original model input
                                          and/or output can make the model reduction
                                          algorithm focus on a frequency range of
                                          interest. But the weights have to be stable,
                                          minimum phase and invertible.
'Order'       Integer, vector or cell     Order of reduced model. Use only if not
              array                       specified as 2nd argument.
'Display'     'on' or 'off'               Display Hankel singular value plots
                                          (default 'off').
This table describes output arguments.

Argument   Description
GRED       LTI reduced order model. Becomes a multidimensional array when the input is a series of different model orders.
REDINFO    A STRUCT array with 3 fields:
           • REDINFO.ErrorBound
           • REDINFO.StabSV
           • REDINFO.UnstabSV
           For the 'hankel' algorithm, the STRUCT array becomes:
           • REDINFO.ErrorBound
           • REDINFO.StabSV
           • REDINFO.UnstabSV
           • REDINFO.Ganticausal
           For the 'ncf' option, the STRUCT array becomes:
           • REDINFO.GL
           • REDINFO.GR
           • REDINFO.hsv

G can be stable or unstable; G and GRED can be either continuous or discrete.

A successful model reduction with a well-conditioned original model G will ensure that the reduced model GRED satisfies the infinity norm error bound.

Example
Given a continuous or discrete, stable or unstable system, the following commands can get a set of reduced order models based on your selections:

rand('state',1234);
randn('state',5678);
G = rss(30,5,4);

[g1, redinfo1] = reduce(G);            % display Hankel SV plot
                                       % and prompt for order
[g2, redinfo2] = reduce(G,20);         % default to balancmr
[g3, redinfo3] = reduce(G,[10:2:18],'algorithm','schur');   % select schurmr
[g4, redinfo4] = reduce(G,'ErrorType','mult','MaxError',[0.01, 0.05]);
rand('state',12345);
randn('state',6789);
wt1 = rss(6,5,5); wt1.d = eye(5)*2;
wt2 = rss(6,4,4); wt2.d = 2*eye(4);
[g5, redinfo5] = reduce(G,[10:2:18],'ErrorType','add','weight',{wt1,wt2});
[g6, redinfo6] = reduce(G,'ErrorType','add','algorithm','hankel','maxerror',[0.01]);
for i = 1:6
    figure(i);
    eval(['sigma(G,g' num2str(i) ');']);
end

Reference
[1] K. Glover, "All Optimal Hankel Norm Approximation of Linear Multivariable Systems, and Their L∞-error Bounds," Int. J. Control, vol. 39, no. 6, pp. 1115-1193, 1984.

[2] M. G. Safonov and R. Y. Chiang, "A Schur Method for Balanced Model Reduction," IEEE Trans. on Automat. Contr., vol. 34, no. 7, pp. 729-733, July 1989.

[3] M. G. Safonov, R. Y. Chiang and D. J. N. Limebeer, "Optimal Hankel Model Reduction for Nonminimal Systems," IEEE Trans. on Automat. Contr., vol. 35, no. 4, pp. 496-502, April 1990.

[4] K. Zhou, "Frequency weighted L∞ error bounds," Syst. Contr. Lett., vol. 21, pp. 115-125, 1993.

[5] M. G. Safonov and R. Y. Chiang, "Model Reduction for Robust Control: A Schur Relative-Error Method," International Journal of Adaptive Control and Signal Processing, vol. 2, no. 4, pp. 259-272, 1988.

See Also
balancmr   Balanced truncation via square-root method
schurmr    Balanced truncation via Schur method
bstmr      Balanced stochastic truncation via Schur method
ncfmr      Balanced truncation for normalized coprime factors
hankelmr   Hankel minimum degree approximation
hankelsv   Hankel singular values
repmat

Purpose
Replicate and tile an array

Syntax
B = repmat(A,M,N)
B = repmat(A,[M N])
B = repmat(A,[M N P ...])

Description
B = repmat(A,M,N) creates a large matrix B consisting of an M-by-N tiling of copies of A.

B = repmat(A,[M N]) accomplishes the same result as repmat(A,M,N).

B = repmat(A,[M N P ...]) tiles the array A to produce an M-by-N-by-P-by-... block array. A can be N-D.

repmat(A,M,N) for scalar A is commonly used to produce an M-by-N matrix filled with values of A.

Example
Simple examples of using repmat are:

repmat(randumat(2,2),2,3)
repmat(ureal('A',6),[4 2])
robopt

Purpose
Create an options object for use with robuststab and robustperf

Syntax
opts = robopt
opts = robopt('name1',value1,'name2',value2,...)

Description
options = robopt (with no input arguments) creates an options object with all the properties set to their default values.

options = robopt('name1',value1,'name2',value2,...) creates a robopt object in which the specified properties have the given values. Any unspecified property is set to its default value. It is sufficient to type only enough leading characters to define the property name uniquely. Case is ignored for property names.

robopt with no input or output arguments displays a complete list of option properties and their default values.

Fields
The following are the robopt object properties:

Object Property   Description
Display           Displays progress of computations. Default is 'off'.
Sensitivity       Computes margin sensitivity to individual uncertainties. Default is 'on'.
VaryUncertainty   Percentage variation of uncertainty used as a stepsize in finite-difference calculations to estimate sensitivity. Default is 25.
Mussv             Option used in internal structured singular value calculations (when calling mussv). Default is 'sm9'.
Object Property   Description
Default           Structure; fieldnames are robopt properties, and values are the default values.
Meaning           Structure; fieldnames are robopt properties, and values are text descriptions of the properties.

Example
You can create a robopt options object called opt with all default values.

opt = robopt
Property Object Values:
            Display: 'off'
        Sensitivity: 'on'
    VaryUncertainty: 25
              Mussv: 'sm9'
            Default: [1x1 struct]
            Meaning: [1x1 struct]

The robopt options properties 'Sensitivity' and 'VaryUncertainty' can be set individually. The property VaryUncertainty denotes the stepsize used in estimating the derivatives necessary in computing sensitivities; in estimating local sensitivities, an elementary finite-difference scheme is used. In the following statements, by setting VaryUncertainty to 50, you are requesting that the sensitivity of the robust stability margin calculation to a 50% variation in individual uncertainties be calculated.

opt = robopt;
opt.VaryUncertainty = 50;
opt
Property Object Values:
            Display: 'off'
        Sensitivity: 'on'
    VaryUncertainty: 50
              Mussv: 'sm9'
            Default: [1x1 struct]
            Meaning: [1x1 struct]

See Also
dkitopt      Creates an options object for dksyn
robuststab   Calculates stability margins of uncertain systems
robustperf   Calculates performance margins of uncertain systems
wcgopt       Creates a wcgain options object
wcsens       Calculates worst-case sensitivities for a feedback loop
wcmargin     Calculates worst-case margins for a feedback loop
robustperf

Purpose
Calculates the robust performance margin of an uncertain multivariable system

Syntax
perfmarg = robustperf(sys)
[perfmarg,perfmargunc,report,info] = robustperf(sys)
[perfmarg,perfmargunc,report,info] = robustperf(sys,opt)

Description
The performance of a nominally stable uncertain system model will generally degrade for specific values of its uncertain elements. robustperf computes the robust performance margin, which is one measure of the level of degradation brought on by the modeled uncertainty. The relationship between robustperf and other measures, such as robuststab and wcgain, is described in Chapter 7, "Generalized Robustness Analysis."

As with other uncertain-system analysis tools, only bounds on the performance margin are computed. The exact robust performance margin is guaranteed to lie in between these upper and lower bounds.

The computation used in robustperf is a frequency-domain calculation. If the input system sys is a ufrd, then the analysis is performed on the frequency grid within the ufrd. If the input system sys is a uss, then an appropriate frequency grid is generated (automatically), and the analysis performed on that frequency grid. In all discussion that follows below, N denotes the number of points in the frequency grid.

Basic syntax
Suppose sys is a ufrd or uss with M uncertain elements. The results of

[perfmarg,perfmargunc,Report] = robustperf(sys)

are such that perfmarg is a structure with the following fields:

Field        Description
LowerBound   Lower bound on robust performance margin, positive scalar.
UpperBound          Upper bound on robust performance margin, positive scalar.
CriticalFrequency   The value of frequency at which the performance degradation curve crosses the y = 1/x curve. See Chapter 7, "Generalized Robustness Analysis."

perfmargunc is a struct of values of uncertain elements associated with the intersection of the performance degradation curve and the y = 1/x curve. There are M fieldnames, which are the names of uncertain elements of sys. See the section "Generalized Robustness Analysis" of the online documentation.

Report is a text description of the robust performance analysis results.

Example
Create a plant with a nominal model of an integrator, and include additive unmodeled dynamics uncertainty of a level of 0.4 (this corresponds to 100% model uncertainty at 2.5 rads/s).

P = tf(1,[1 0]) + ultidyn('delta',[1 1],'bound',0.4);

Design a "proportional" controller K that puts the nominal closed-loop bandwidth at 0.8 rad/sec. Roll off K at a frequency 25 times the nominal closed-loop bandwidth. Form the closed-loop sensitivity function.

BW = 0.8;
K = tf(BW,[1/(25*BW) 1]);
S = feedback(1,P*K);

Assess the performance margin of the closed-loop sensitivity function. Since the nominal gain of the sensitivity function is 1, and the performance degradation curve is monotonically increasing (see Chapter 7, "Generalized Robustness Analysis"), the performance margin should be less than 1.

[perfmargin,punc] = robustperf(S);
perfmargin
perfmargin =
           UpperBound: 7.4305e-001
           LowerBound: 7.4305e-001
    CriticalFrequency: 5.3096e+000
You can verify that the upper bound of the performance margin corresponds to a point on or above the y = 1/x curve. First compute the normalized size of the value of the uncertain element, and check that this agrees with the upper bound.

nsize = actual2normalized(S.Uncertainty.delta,punc.delta)
nsize =
    7.4305e-001
perfmargin.UpperBound
ans =
    7.4305e-001

Compute the system gain with that value substituted, and verify that the product of the normalized size and the system gain is greater than or equal to 1.

gain = norm(usubs(S,punc),inf,0.00001);
nsize*gain
ans =
    1.0000e+000

Finally, as a sanity check, verify that the robust performance margin is less than the robust stability margin (it should always be, as described in Chapter 4, "Robustness Analysis").

[stabmargin] = robuststab(S);
stabmargin
stabmargin =
                UpperBound: 3.1251e+000
                LowerBound: 3.1251e+000
    DestabilizingFrequency: 4.0862e+000

While the robust stability margin is easy to describe (poles migrating from the stable region into the unstable region), describing the robust performance margin is less elementary. Rather than finding values for uncertain elements that lead to instability, the analysis finds values of uncertain elements "corresponding to the intersection point of the performance degradation curve with a y = 1/x hyperbola." This characterization, mentioned above in the description of perfmarg.CriticalFrequency and perfmargunc, will be used often in the descriptions below. See the diagrams and figures in Chapter 7, "Generalized Robustness Analysis."
Basic syntax with 4th output argument
A 4th output argument yields more specialized information, including sensitivities and frequency-by-frequency information.

[perfmarg,perfmargunc,Report,Info] = robustperf(sys)

In addition to the first 3 output arguments, described previously, Info is a structure with the following fields:

Field                Description
Sensitivity          A struct with M fields; fieldnames are names of uncertain elements of sys, each the local sensitivity of the overall performance margin to that element's uncertainty range. For instance, a value of 25 indicates that if the uncertainty range is enlarged by 8%, then the margin should drop by about 2% (25% of 8). If the Sensitivity property of the robopt object is 'off', the values are set to NaN.
Frequency            N×1 frequency vector associated with analysis.
BadUncertainValues   N-by-1 cell array. The k-th entry Info.BadUncertainValues{k} is a struct of values of uncertain elements resulting from a robust performance analysis at frequency Info.Frequency(k).
MussvBnds            A 1-by-2 frd, with upper and lower bounds from mussv. The (1,1) entry is the µ upper bound (corresponds to perfmarg.LowerBound) and the (1,2) entry is the µ lower bound (for perfmarg.UpperBound).
MussvInfo            Structure of compressed data from mussv, with one entry for each frequency point.
Options (e.g., controlling what is displayed during the computation, turning on/off the sensitivity computation, setting the "stepsize" in the sensitivity computation, and controlling the option argument used in the underlying call to mussv) can be specified using the robustness analysis options robopt object. For instance, you can turn on the display and turn off the sensitivity by executing

opt = robopt('Sensitivity','off','Display','on');
[PerfMarg,perfmargunc,Report,Info] = robustperf(sys,opt)

Handling array dimensions
If sys has array dimensions (for example, suppose that the size of sys is r×c×d1×d2×…×dF; refer to the d1×d2×…×dF as the array dimensions), then the margin calculation is performed pointwise (individually, at each and every array value) and the computed answers all have array dimensions as well. Details are described below. Again, assume that there are N frequency points and M uncertain elements. The results of

[perfmarg,perfmargunc,Report,Info] = robustperf(sys,opt)

are such that perfmarg is a structure with the following fields:

Field               Description
LowerBound          d1×…×dF, lower bound on performance margin across the array dimensions.
UpperBound          d1×…×dF, upper bound on performance margin across the array dimensions. Using single-indexing, for each i, perfmarg.UpperBound(i) is the upper bound on the performance margin of sys(:,:,i).
CriticalFrequency   d1×…×dF, the value of frequency at which the performance degradation curve crosses the y = 1/x curve. Using single-indexing, for each i, perfmarg.CriticalFrequency(i) is the frequency at which the performance degradation curve crosses the y = 1/x curve in robust performance analysis of sys(:,:,i). See Chapter 7, "Generalized Robustness Analysis."
perfmargunc is a d1×…×dF structure array of values of uncertain elements, associated with the intersection of the performance degradation curve and the y = 1/x curve. Using single-indexing, for each i, perfmargunc(i) is the struct of values of uncertain elements for the uncertain system sys(:,:,i). See the section "Generalized Robustness Analysis" of the online documentation.

Report is a character array containing a text description of the robustness analysis results at each grid point in the array dimensions.

In addition to the first 3 output arguments, described previously, Info is a structure with the following fields:

Field                Description
Sensitivity          A d1×…×dF struct; fieldnames are names of uncertain elements of sys. Using single-indexing, for each i, Sensitivity(i) contains the sensitivities of perfmarg.UpperBound(i) for the uncertain system sys(:,:,i).
Frequency            N×1 frequency vector associated with analysis.
BadUncertainValues   N-by-1 cell array. The k-th entry Info.BadUncertainValues{k} is a d1×…×dF struct of values of uncertain elements resulting from a d1×…×dF family of robust performance computations at frequency Info.Frequency(k).
MussvBnds            1×2×d1×…×dF frd, with upper and lower bounds from mussv. Using single-indexing for the dimensions associated with the array dimensions (dimensions 3, 4, ..., F+2 are d1×…×dF), it follows that the (1,1,i) entry is the µ upper bound (reciprocal of perfmarg.LowerBound(i)) while the (1,2,i) entry is the µ lower bound (reciprocal of perfmarg.UpperBound(i)).
MussvInfo            Structure of compressed data from mussv, with one entry for each frequency point.
The smallest performance margin over all array dimensions can be computed as

min(perfmarg.UpperBound(:))

Computing

i = find(perfmarg.UpperBound==min(perfmarg.UpperBound(:)))

and then selecting perfmargunc(i) yields values for an uncertainty corresponding to the smallest performance margin across all array dimensions.

Algorithm
A rigorous robust performance analysis consists of two steps:

1 verify that the nominal system is stable, and then
2 perform a robust performance analysis on an augmented system.

The algorithm in robustperf follows this in spirit. If sys is a uss object, then the first requirement, stability of the nominal value, is explicitly checked within robustperf. However, if sys is a ufrd, then the verification of nominal stability from the nominal frequency response data is not performed, and is instead assumed.

Limitations
Because the calculation is carried out with a frequency gridding, it is possible (likely) that the true critical frequency is missing from the frequency vector used in the analysis. This is similar to the problem in robuststab, described in the Robust Control Toolbox demo entitled Getting Reliable Estimates of Robustness Margins in the online documentation. However, in comparing to robuststab, the problem in robustperf is less acute. The robust performance margin, considered a function of problem data and frequency, is typically a continuous function (unlike the robust stability margin). Hence, in robust performance margin calculations, increasing the density of the frequency grid will always increase the accuracy of the answers, and, in the limit, answers arbitrarily close to the actual answers are obtainable with finite frequency grids, though this may require user attention.

The exact performance margin is guaranteed to be no larger than UpperBound (some uncertain elements associated with this magnitude cause instability – one instance is returned in the structure perfmargunc). Similarly, the exact performance margin is guaranteed to be no smaller than LowerBound.
The instability created by perfmargunc occurs at the frequency value given in CriticalFrequency.
See Also
loopmargin          Comprehensive analysis of a feedback loop
mussv               Calculate bounds on the Structured Singular Value (µ)
norm                Calculate LTI system norms
robopt              Create a robuststab/robustperf options object
robuststab          Calculates stability margins of uncertain systems
actual2normalized   Normalizes range of uncertain atoms
wcgain              Calculate worst-case gain of uncertain systems
wcsens              Calculate worst-case sensitivities for a feedback loop
wcmargin            Calculate worst-case margins for a feedback loop
robuststab

Purpose
Calculates robust stability margins of an uncertain multivariable system

Syntax
[stabmarg,destabunc,report,info] = robuststab(sys)
[stabmarg,destabunc,report,info] = robuststab(sys,opt)

Description
A nominally stable uncertain system will generally be unstable for specific values of its uncertain elements. Determining the values of the uncertain elements closest to their nominal values for which instability occurs is a robust stability calculation.

robuststab computes the margin of stability robustness for an uncertain system. If the uncertain system is stable for all values of uncertain elements within their allowable ranges (ranges for ureal, radius for ucomplex, norm bound or positive-real constraint for ultidyn, weighted ball for ucomplexm), the uncertain system is robustly stable. Conversely, if there is a combination of element values that cause instability, and all lie within their allowable ranges, then the uncertain system is not robustly stable.

A stability robustness margin greater than 1 means that the uncertain system is stable for all values of its modeled uncertainty. A stability robustness margin less than 1 implies that certain allowable values of the uncertain elements, within their specified ranges, lead to instability.

Numerically, a margin of 0.5 (for example) implies two things: the uncertain system remains stable for all values of uncertain elements that are less than 0.5 normalized units away from their nominal values, and there is a collection of uncertain elements that are less than or equal to 0.5 normalized units away from their nominal values that results in instability. Similarly, a margin of 1.3 implies that the uncertain system remains stable for all values of uncertain elements up to 30% outside of their modeled uncertain ranges.

As with other uncertain-system analysis tools, only bounds on the exact stability margin are computed. The exact robust stability margin is guaranteed to lie in between these upper and lower bounds.

The computation used in robuststab is a frequency-domain calculation. If the input system sys is a ufrd, then the analysis is performed on the frequency grid within the ufrd. If the input system sys is a uss, then an appropriate
frequency grid is generated (automatically), and the analysis performed on that frequency grid. In all discussion that follows below, N denotes the number of points in the frequency grid. See actual2normalized for converting between actual and normalized deviations from the nominal value of an uncertain element.

Basic syntax
Suppose sys is a ufrd or uss with M uncertain elements. The results of

[stabmarg,destabunc,Report] = robuststab(sys)

are such that stabmarg is a structure with the following fields:

Field                    Description
LowerBound               Lower bound on stability margin, positive scalar. If greater than 1, then the uncertain system is guaranteed stable for all values of the modeled uncertainty. If the nominal value of the uncertain system is unstable, then stabmarg.UpperBound and stabmarg.LowerBound will equal −∞.
UpperBound               Upper bound on stability margin, positive scalar. If less than 1, the uncertain system is not stable for all values of the modeled uncertainty.
DestabilizingFrequency   The critical value of frequency at which instability occurs. At a particular value of the uncertain elements (see destabunc below), the poles migrate across the stability boundary (imaginary axis in continuous-time systems, unit disk in discrete-time systems) at the frequency given by DestabilizingFrequency.

destabunc is a structure of values of uncertain elements, closest to nominal, which cause instability. There are M fieldnames, which are the names of uncertain elements of sys. The value of each field is the corresponding value of the uncertain element, such that, when jointly combined, they lead to instability. The command pole(usubs(sys,destabunc)) shows the instability. If A is an uncertain atom of sys, then actual2normalized(sys.Uncertainty.A,destabunc.A) will be less than or equal to UpperBound, and for at least one uncertain element of sys this normalized distance will equal UpperBound, proving that UpperBound is indeed an upper bound on the robust stability margin.

Report is a text description of the robustness analysis results.

Example
Construct a feedback loop with a second-order plant and a PID controller with approximate differentiation. The second-order plant has frequency-dependent uncertainty, in the form of additive unmodeled dynamics, introduced with a ultidyn object and a shaping filter.

P = tf(4,[1 .8 4]);
delta = ultidyn('delta',[1 1],'SampleStateDim',5);
Pu = P + 0.25*tf([1],[.15 1])*delta;
C = tf([1 1],[.1 1]) + tf(2,[1 0]);
S = feedback(1,Pu*C);

robuststab is used to compute the stability margins of the closed-loop system with respect to the plant model uncertainty.

[stabmarg,destabunc,report,info] = robuststab(S);

You can view the stabmarg variable.

stabmarg
stabmarg =
                UpperBound: 0.8181
                LowerBound: 0.8181
    DestabilizingFrequency: 9.1321
If A is an uncertain atom of sys, then

actual2normalized(destabunc.A,sys.Uncertainty.A)

will be less than or equal to UpperBound, and for at least one uncertain element of sys, this normalized distance will be equal to UpperBound, proving that UpperBound is indeed an upper bound on the robust stability margin. The command pole(usubs(sys,destabunc)) shows the instability.

Report is a text description of the robustness analysis results.

Example

Construct a feedback loop with a second-order plant and a PID controller with approximate differentiation. The second-order plant has frequency-dependent uncertainty, in the form of additive unmodeled dynamics, introduced with a ultidyn object and a shaping filter.

P = tf(4,[1 .8 4]);
delta = ultidyn('delta',[1 1],'SampleStateDim',5);
Pu = P + 0.25*tf([1],[.15 1])*delta;
C = tf([1 1],[.1 1]) + tf(2,[1 0]);
S = feedback(1,Pu*C);

robuststab is used to compute the stability margins of the closed-loop system with respect to the plant model uncertainty.

[stabmarg,destabunc,report,info] = robuststab(S);

You can view the stabmarg variable.

stabmarg
stabmarg =
                UpperBound: 0.8181
                LowerBound: 0.8181
    DestabilizingFrequency: 9.1321

As the margin is less than 1, the closed-loop system is not stable for all plant models covered by the uncertain model Pu. There is a specific plant within the uncertain behavior modeled by Pu (actually about 82% of the modeled uncertainty) that leads to closed-loop instability, with the poles migrating across the stability boundary at 9.1 rad/s. The report variable gives a plain-language version of this conclusion.

report
report =
Uncertain System is NOT robustly stable to modeled uncertainty.
 -- It can tolerate up to 81.8% of modeled uncertainty.
 -- A destabilizing combination of 81.8% of the modeled uncertainty exists,
    causing an instability at 9.13 rad/s.
 -- Sensitivity with respect to uncertain element ...
    'delta' is 100%. Increasing 'delta' by 25% leads to a 25% decrease
    in the margin.

Since the problem has only one uncertain element, the stability margin is completely determined by this element, and hence the margin exhibits 100% sensitivity to this uncertain element. You can verify that the destabilizing value of delta is indeed about 0.82 normalized units from its nominal value.

actual2normalized(S.Uncertainty.delta,destabunc.delta)
ans =
    0.8181

Use usubs to substitute the specific value into the closed-loop system. Verify that there is a closed-loop pole near j9.13, and plot the unit-step response of the nominal closed-loop system as well as the unstable closed-loop system.

Sbad = usubs(S,destabunc);
pole(Sbad)
ans =
  1.0e+002 *
  -3.2318
  -0.2539
  -0.0203 + 0.0211i
  -0.0203 - 0.0211i
  -0.0106 + 0.0116i
  -0.0106 - 0.0116i
   0.0000 + 0.0913i
   0.0000 - 0.0913i
step(S.NominalValue,'r',Sbad,'g',0.4)

Finally, as an ad hoc test, set the gain bound on the uncertain delta to 0.81 (slightly less than the stability margin). Sample the closed-loop system at 100 values, and compute the poles of all of these systems.
S.Uncertainty.delta.Bound = 0.81;
S100 = usample(S,100);
p100 = pole(S100);
max(real(p100(:)))
ans =
  -6.4647e-007

As expected, all poles have negative real parts.

Basic syntax with 4th output argument

A 4th output argument yields more specialized information, including sensitivities and frequency-by-frequency information.

[StabMarg,Destabunc,Report,Info] = robuststab(sys)

In addition to the first 3 output arguments, described previously, Info is a structure with the following fields:

Field               Description
Sensitivity         A struct with M fields; fieldnames are the names of the
                    uncertain elements of sys. Values of fields are positive,
                    each the local sensitivity of the overall stability
                    margin to that element's uncertainty range. For instance,
                    a value of 25 indicates that if the uncertainty range is
                    enlarged by 8%, then the stability margin should drop by
                    about 2% (25% of 8%). If the Sensitivity property of the
                    robopt object is 'off', the values are set to NaN.
Frequency           N-by-1 frequency vector associated with analysis.
BadUncertainValues  N-by-1 cell array, with one entry for each frequency
                    point. The k'th entry, Info.BadUncertainValues{k}, is a
                    struct of values of uncertain elements, closest to their
                    nominal values, which cause the system poles to migrate
                    across the stability boundary at frequency
                    Info.Frequency(k). The command
                    pole(usubs(sys,Info.BadUncertainValues{k})) shows the
                    migration. The command
                    usubs(sys,cat(1,Info.BadUncertainValues{:})) generates an
                    N-by-1 ss array. Each instance is unstable, with poles on
                    the stability boundary at frequencies given by the vector
                    Info.Frequency. This migration to instability has been
                    achieved with the "smallest" normalized deviations in the
                    uncertain elements from their nominal values.
MussvBnds           A 1-by-2 frd, with upper and lower bounds from mussv. The
                    (1,1) entry is the µ upper bound (corresponds to
                    stabmarg.LowerBound) and the (1,2) entry is the µ lower
                    bound (for stabmarg.UpperBound).
MussvInfo           Structure of compressed data from mussv.

Options (e.g., turning on/off the sensitivity computation, setting the "step size" in the sensitivity computation, controlling what is displayed during the computation, and controlling the option argument used in the underlying call to mussv) can be specified using the robustness analysis options robopt object. For instance, you can turn the display on, and the sensitivity calculation off, by executing

opt = robopt('Sensitivity','off','Display','on');
[StabMarg,Destabunc,Report,Info] = robuststab(sys,opt)
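For the SISO example above, the margin can be cross-checked independently. With additive uncertainty Pu = P + w*delta and a unit-norm-bounded delta, the small-gain condition says the loop remains stable as long as |delta| stays below 1/|w(jω)C(jω)S0(jω)| at every frequency, where S0 = 1/(1+PC) is the nominal sensitivity, so the robust stability margin is 1/max over ω of |w C S0|. The following frequency-gridded sketch in pure Python (the function name is illustrative; this is an independent cross-check, not the mussv-based computation robuststab performs) lands near the margin and frequency reported above:

```python
def margin_additive():
    # Nominal plant P = 4/(s^2 + 0.8 s + 4), controller C = (s+1)/(0.1 s + 1)
    # + 2/s, and uncertainty weight w = 0.25/(0.15 s + 1), evaluated on a
    # log-spaced grid of frequencies from 0.1 to 100 rad/s.
    best_gain, best_freq = 0.0, 0.0
    n = 20000
    for i in range(1, n + 1):
        freq = 0.1 * (1000.0 ** (i / n))
        s = 1j * freq
        P = 4.0 / (s * s + 0.8 * s + 4.0)
        C = (s + 1.0) / (0.1 * s + 1.0) + 2.0 / s
        w = 0.25 / (0.15 * s + 1.0)
        S0 = 1.0 / (1.0 + P * C)          # nominal sensitivity
        gain = abs(w * C * S0)
        if gain > best_gain:
            best_gain, best_freq = gain, freq
    return 1.0 / best_gain, best_freq

m, freq = margin_additive()
print(m, freq)   # margin close to 0.8, peaking in the vicinity of 9 rad/s
```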
Handling array dimensions

If sys has array dimensions (for example, suppose that the size of sys is r×c×d1×d2×…×dF; refer to the d1×d2×…×dF as the array dimensions), then the margin calculation is performed "pointwise" (individually, at each and every array value), and the computed answers all have array dimensions as well. Again, assume that there are N frequency points and M uncertain elements. Details are described below.

The results of

[stabmarg,destabunc,Report,Info] = robuststab(sys,opt)

are as follows. stabmarg is a structure with the following fields:

Field                    Description
LowerBound               d1×…×dF, lower bound on stability margin across the
                         array dimensions.
UpperBound               d1×…×dF, upper bound on stability margin across the
                         array dimensions. Using single-indexing, for each i,
                         stabmarg.UpperBound(i) is the upper bound on the
                         stability margin of sys(:,:,i).
DestabilizingFrequency   d1×…×dF, frequency at which instability occurs,
                         associated with stabmarg.UpperBound. Using
                         single-indexing, for each i,
                         stabmarg.DestabilizingFrequency(i) is the frequency
                         at which instability occurs in robust stability
                         analysis of sys(:,:,i).

destabunc is a d1×…×dF structure array of values of uncertain elements, closest to their nominal values, which cause instability. Using single-indexing, for each i, destabunc(i) contains the destabilizing values of the uncertain elements for the uncertain system sys(:,:,i).

Report is a character array, containing the text description of the robustness analysis results at each grid point in the array dimensions; dimensions 3, 4, ..., F+2 of Report are d1×…×dF.

In addition to the first 3 output arguments, Info is a structure with the following fields:

Field               Description
Sensitivity         A d1×…×dF struct; fieldnames are the names of the
                    uncertain elements of sys. Using single-indexing
                    notation, Sensitivity(i) contains the sensitivities of
                    stabmarg.UpperBound(i) for the uncertain system
                    sys(:,:,i).
Frequency           N-by-1 frequency vector associated with analysis.
BadUncertainValues  N-by-1 cell array, with one entry for each frequency
                    point. The k'th entry, Info.BadUncertainValues{k}, is a
                    d1×…×dF struct of values of uncertain elements, closest
                    to their nominal values, which cause the system poles to
                    migrate across the stability boundary at frequency
                    Info.Frequency(k). The command
                    usubs(sys,Info.BadUncertainValues{k}) produces an ss
                    array of size d1×…×dF with the substitutions made.
                    Alternatively,
                    usubs(sys,cat(F+1,Info.BadUncertainValues{:})) produces
                    an ss array of size d1×…×dF×N with the substitutions
                    made.
MussvBnds           A 1×2×d1×…×dF frd, with upper and lower bounds from
                    mussv. Using single-indexing for the array dimensions,
                    the (1,1,i) entry is the µ upper bound (corresponding to
                    stabmarg.LowerBound(i)), while the (1,2,i) entry is the µ
                    lower bound (which corresponds to
                    stabmarg.UpperBound(i)).
MussvInfo           Structure of compressed data from mussv.

You can compute the smallest stability margin over all array dimensions via
min(stabmarg.UpperBound(:))

Computing

i = find(UpperBound==min(UpperBound(:)))

and then destabunc(i) yields values for an uncertainty corresponding to the smallest stability margin across all array dimensions.

Algorithm

A rigorous robust stability analysis consists of two steps:

1 Verify that the nominal system is stable.
2 Verify that no poles cross the stability boundary as the uncertain elements vary within their ranges.

Since the stability boundary is also associated with the frequency response, the second step can be interpreted (and carried out) as a frequency-domain calculation. This amounts to a classical µ-analysis problem, and the algorithm in robuststab follows this in spirit.

If sys is a uss object, then the first requirement, stability of the nominal value, is explicitly checked within robuststab. However, if sys is a ufrd, then the verification of nominal stability from the nominal frequency response data is not performed, and is instead assumed.

In the second step (monitoring the stability boundary for the migration of poles), rather than check all points on the stability boundary, the algorithm only detects migration of poles across the stability boundary at the frequencies in info.Frequency. The exact stability margin is guaranteed to be no larger than UpperBound (some uncertain elements associated with this magnitude cause instability; one instance is returned in the structure destabunc). The instability created by destabunc occurs at the frequency value in DestabilizingFrequency. Similarly, the exact stability margin is guaranteed to be no smaller than LowerBound; in other words, for all modeled uncertainty with magnitude up to LowerBound, the system is guaranteed stable. These bounds are derived using the upper bound for the structured singular value, which is essentially an optimally scaled, small-gain theorem analysis. See the "Limitations" section below about issues related to migration detection.
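The frequency-gridding caveat discussed in the Limitations section can be made concrete with a toy computation (pure Python, hypothetical numbers): when the quantity whose peak determines the margin has a very sharp resonance, a grid that straddles the resonant frequency reports a far smaller peak, and hence a far larger, overly optimistic margin:

```python
def peak_gain(freqs, w0=10.0, zeta=1e-6):
    # Peak over the grid of a lightly damped resonance
    # |1 / (1 - (w/w0)^2 + 2j*zeta*(w/w0))|.
    best = 0.0
    for w in freqs:
        r = w / w0
        best = max(best, 1.0 / abs(complex(1.0 - r * r, 2.0 * zeta * r)))
    return best

coarse = peak_gain([0.3 * k for k in range(1, 334)])  # grid straddles w0 = 10
exact = peak_gain([10.0])                             # grid hits w0 exactly
print(coarse, exact)
# A margin estimated as 1/coarse looks about 10^4 times larger than 1/exact.
```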
Limitations

Under most conditions, the robust stability margin that occurs at each frequency is a continuous function of the problem data at that frequency. Since the problem data, in turn, is a continuous function of frequency, it follows that finite frequency grids are usually adequate in correctly assessing robust stability bounds, assuming the frequency grid is "dense" enough. Nevertheless, there are simple examples which violate this. In some problems, the migration of poles from stable to unstable only occurs at a finite collection of specific frequencies (generally unknown to you). Any frequency grid which excludes these critical frequencies (and almost every grid will exclude them) will result in undetected migration and misleading results, namely stability margins of ∞. See the Robust Control Toolbox demo titled Getting Reliable Estimates of Robustness Margins in the online documentation about circumventing the problem in an engineering-relevant fashion.

See Also

loopmargin    Comprehensive analysis of a feedback loop
mussv         Calculates bounds on the structured singular value (µ)
robopt        Creates a robuststab/robustperf options object
robustperf    Calculates performance margins of uncertain systems
wcgain        Calculates worst-case gain of uncertain systems
wcsens        Calculates worst-case sensitivities for a feedback loop
wcmargin      Calculates worst-case margins for a feedback loop
frd/schur

Purpose     Schur decomposition of an frd object

Syntax      [u,t] = schur(x)
            t = schur(x)
            [u,t] = schur(x,0)
            t = schur(x,0)
            [u,t] = schur(x,'econ')
            t = schur(x,'econ')

Description frd/schur applies the schur command to frd objects. [u,t] = schur(x) operates on the ResponseData of the frd object at each frequency point to construct u and t. u and t are frd objects. x must be square. See the built-in schur command for details.

See Also    qz       Creates a QZ factorization for generalized eigenvalues
            schur    Calculates a Schur decomposition
schurmr

Purpose     Balanced model truncation via Schur method

Syntax      GRED = schurmr(G)
            GRED = schurmr(G,order)
            [GRED,redinfo] = schurmr(G,key1,value1,...)
            [GRED,redinfo] = schurmr(G,order,key1,value1,...)

Description schurmr returns a reduced-order model GRED of G and a struct array redinfo containing the error bound of the reduced model and the Hankel singular values of the original system.

The error bound is computed based on the Hankel singular values of G. For a stable system, the Hankel singular values indicate the respective state energy of the system. Hence, the reduced order can be directly determined by examining the system Hankel singular values, σi. This method guarantees an error bound on the infinity norm of the additive error ||G - GRED||∞ for well-conditioned model reduction problems [1]:

    ||G - Gred||∞ <= 2 Σ(i = k+1 to n) σi

This table describes input arguments for schurmr.

Argument   Description
G          LTI model to be reduced. With only one input argument G, the
           function will show a Hankel singular value plot of the original
           model and prompt for the model order to which to reduce.
ORDER      (Optional) an integer for the desired order of the reduced model,
           or optionally a vector packed with desired orders for batch runs.

A batch run of a series of different reduced-order models can be generated by specifying order = x:y, or a vector of integers. By default, all the anti-stable part of a system is kept, because, from the standpoint of control stability, discarding unstable state(s) when modeling a system is dangerous.

'MaxError' can be specified in the same fashion as an alternative for 'ORDER'. In this case, the reduced order will be determined when the sum of the tails of the Hankel singular values reaches the 'MaxError'.

This table describes the optional key/value input arguments:

Argument    Value               Description
'MaxError'  A real number or a  Reduce to achieve an H∞ error bound. When
            vector of different present, 'MaxError' overrides the ORDER
            errors              input.
'Weights'   {Wout,Win} cell     Optional 1-by-2 cell array of LTI weights
            array               Wout (output) and Win (input); default is
                                both identity. Weights on the original model
                                input and/or output can make the model
                                reduction algorithm focus on some frequency
                                range of interest. The weights have to be
                                stable, minimum phase and invertible.
'Display'   'on' or 'off'       Display Hankel singular value plots
                                (default 'off').
'Order'     Integer, vector or  Order of reduced model. Use only if not
            cell array          specified as 2nd argument.
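The 'MaxError' rule can be sketched numerically (pure Python with hypothetical Hankel singular values; the function name is illustrative, and schurmr itself computes the σi from the system grammians): the reduced order is the smallest k whose tail bound 2·(σ(k+1) + … + σn) does not exceed the tolerance:

```python
def order_for_max_error(hsv, max_error):
    """Smallest k with 2*sum(hsv[k:]) <= max_error (hsv sorted descending)."""
    for k in range(len(hsv) + 1):
        bound = 2.0 * sum(hsv[k:])
        if bound <= max_error:
            return k, bound
    return len(hsv), 0.0

hsv = [5.0, 1.0, 0.2, 0.04, 0.008]   # hypothetical Hankel singular values
k, bound = order_for_max_error(hsv, 0.5)
print(k, bound)   # order 2 suffices: 2*(0.2 + 0.04 + 0.008) = 0.496 <= 0.5
```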
This table describes output arguments.

Argument  Description
GRED      LTI reduced-order model. Becomes a multidimensional array when the
          input is a series of different model orders.
REDINFO   A struct array with 3 fields:
          - REDINFO.ErrorBound
          - REDINFO.StabSV
          - REDINFO.UnstabSV

G can be stable or unstable, and G and GRED can be either continuous or discrete.

Algorithm   Given a state space (A,B,C,D) of a system and k, the desired reduced order, the following steps will produce a similarity transformation to truncate the original state-space system to the k-th order reduced model [1,2].

1 Find the controllability and observability grammians P and Q.

2 Find the Schur decomposition for PQ in both ascending and descending order of the eigenvalues:

      V_A' PQ V_A = [ λ1  ...  *  ]      V_D' PQ V_D = [ λn  ...  *  ]
                    [  0  ...  λn ]                    [  0  ...  λ1 ]

3 Find the left/right orthonormal eigenbases of PQ associated with the k big Hankel singular values:

      V_A = [V_R,SMALL, V_L,BIG]
      V_D = [V_R,BIG,   V_L,SMALL]

4 Find the SVD of (V_L,BIG' * V_R,BIG) = U Σ V'.

5 Form the left/right transformation for the final k-th order reduced model:

      S_L,BIG = V_L,BIG * U * Σ(1:k,1:k)^(-1/2)
      S_R,BIG = V_R,BIG * V * Σ(1:k,1:k)^(-1/2)

6 Finally,

      Ahat = S_L,BIG' * A * S_R,BIG      Bhat = S_L,BIG' * B
      Chat = C * S_R,BIG                 Dhat = D

The proof of the Schur balance truncation algorithm can be found in [2].

Example     Given a continuous or discrete, stable or unstable system, the following commands can get a set of reduced-order models based on your selections:

rand('state',1234);
randn('state',12345);
G = rss(30,5,4);
[g1, redinfo1] = schurmr(G);           % display Hankel SV plot
                                       % and prompt for order (try 15:20)
[g2, redinfo2] = schurmr(G,20);
[g3, redinfo3] = schurmr(G,[10:2:18]);
[g4, redinfo4] = schurmr(G,'MaxError',[0.01, 0.05]);
rand('state',5678);
randn('state',6789);
wt1 = rss(6,5,5); wt1.d = eye(5)*2;
wt2 = rss(6,4,4); wt2.d = 2*eye(4);
[g5, redinfo5] = schurmr(G,[10:2:18],'weight',{wt1,wt2});
for i = 1:5
    figure(i);
    eval(['sigma(G,g' num2str(i) ');']);
end

Reference   [1] K. Glover, "All Optimal Hankel Norm Approximation of Linear Multivariable Systems, and Their L∞-error Bounds," Int. J. Control, vol. 39, no. 6, pp. 1145-1193, 1984.
[2] M. G. Safonov and R. Y. Chiang, "A Schur Method for Balanced Model Reduction," IEEE Trans. on Automat. Contr., vol. 34, no. 7, pp. 729-733, July 1989.

See Also    reduce      Top-level model reduction routines
            balancmr    Balanced truncation via square-root method
            bstmr       Balanced stochastic truncation via Schur method
            ncfmr       Balanced truncation for normalized coprime factors
            hankelmr    Hankel minimum degree approximation
            hankelsv    Hankel singular values
sdhinfnorm

Purpose     Compute the L2 norm of a continuous-time system in feedback with a discrete-time system

Syntax      [gaml,gamu] = sdhinfnorm(sdsys,k)
            [gaml,gamu] = sdhinfnorm(sdsys,k,delay)
            [gaml,gamu] = sdhinfnorm(sdsys,k,delay,tol)

Description [gaml,gamu] = sdhinfnorm(sdsys,k) computes the L2 induced norm of a continuous-time LTI plant, sdsys, in feedback with a discrete-time controller, k, connected through an ideal sampler and a zero-order hold (see figure below). sdsys must be strictly proper; that is, the constant feedback gain must be zero. The outputs, gaml and gamu, are lower and upper bounds on the induced L2 norm of the sampled-data closed-loop system.

[gaml,gamu] = sdhinfnorm(sdsys,k,delay) includes the input argument delay, a nonnegative integer associated with the number of computational delays of the controller. The default value of the delay is 0.

[gaml,gamu] = sdhinfnorm(sdsys,k,delay,tol) includes the input argument tol, which defines the difference between the upper and lower bounds when the search terminates. The default value of tol is 0.001.

[Figure: continuous-time plant in feedback with the discrete-time controller K through an ideal sampler S and a zero-order hold H.]

Example     Consider an open-loop, continuous-time transfer function p = 30/(s(s+30)) and a continuous-time controller k = 4/(s+4).

p = ss(tf(30,[1 30])*tf([1],[1 0]));
k = ss(tf(4,[1 4]));
cl = feedback(p,k);
norm(cl,'inf')
ans =
    1

The closed-loop continuous-time system has a peak magnitude across frequency of 1. Initially the controller is to be implemented at a sample rate of 1.5 Hz.

kd = c2d(k,1/1.5,'zoh');
[gl,gu] = sdhinfnorm([1;1]*p*[1 1],kd);
[gl gu]
ans =
    3.7908    3.7929

The sampled-data norm of the closed-loop system with the discrete-time controller is about 3.79. Due to the large difference in norm between the continuous-time and sampled-data closed-loop systems, the sample rate of the controller is increased from 1.5 Hz to 5 Hz.

kd = c2d(k,0.2,'zoh');
[gl,gu] = sdhinfnorm([1;1]*p*[1 1],kd);
[gl gu]
ans =
    1.0044    1.0049

The sampled-data norm of the new closed-loop system is approximately 1, in line with the continuous-time design.

Algorithm   sdhinfnorm uses variations of the formulae described in the Bamieh and Pearson paper to obtain an equivalent discrete-time system. (These variations are done to improve the numerical conditioning of the algorithms.) A preliminary step is to determine whether the norm of the continuous-time system over one sampling period without control is less than the given value. This requires a search and is, computationally, a relatively expensive step.

Reference   Bamieh, B.A., and J.B. Pearson, "A General Framework for Linear Periodic Systems with Applications to Sampled-Data Control," IEEE Transactions on Automatic Control, vol. AC-37, 1992, pp. 418-435.

See Also    gapmetric   Computes the gap and the Vinnicombe gap metric
            hinfsyn     Synthesizes an H∞ optimal controller
            norm        Calculates the system norm of an LTI object
            sdhinfsyn   Synthesizes a sampled-data H∞ optimal controller
            sdlsim      Simulates response of a sampled-data feedback system
sdhinfsyn

Purpose     Compute an H∞ controller for a sampled-data system

Syntax      [K,GAM] = sdhinfsyn(P,NMEAS,NCON)
            [K,GAM] = sdhinfsyn(P,NMEAS,NCON,KEY1,VALUE1,KEY2,VALUE2,...)

Description sdhinfsyn is concerned with the control of a continuous-time LTI system P by a discrete-time controller K. The continuous-time LTI plant P has a state-space realization partitioned as follows:

        [ A    B1   B2 ]
    P = [ C1   0    0  ]
        [ C2   0    0  ]

where the continuous-time disturbance inputs enter through B1, the outputs from the controller are held constant between sampling instants and enter through B2, the continuous-time errors to be kept small correspond to the C1 partition, and the output measurements that are sampled by the controller correspond to the C2 partition. B2 has column size ncon and C2 has row size nmeas. Note that the D matrix must be zero.

sdhinfsyn synthesizes a discrete-time LTI controller K to achieve a given norm (if possible) or find the minimum possible norm to within tolerance TOLGAM.

[Figure: sampled-data feedback loop. The plant P has inputs u1, u2 and outputs y1, y2; y2 passes through a sampler (period Ts) and a computational delay to K, whose output passes through a hold to u2.]

Similar to hinfsyn, the function sdhinfsyn employs a γ iteration. Given a high and a low value of γ, GMAX and GMIN, the bisection method is used to iterate on the value of γ in an effort to approach the optimal H∞ control design. If GMAX = GMIN, only one γ value is tested. The stopping criterion for the bisection algorithm requires the relative difference between the last γ value that failed and the last γ value that passed to be less than TOLGAM.

Input arguments:

P        LTI plant
NMEAS    number of measurements output to controller
NCON     number of control inputs

Optional input arguments are (KEY, VALUE) pairs, similar to hinfsyn, but with the additional KEY values 'Ts' and 'DELAY':

KEY        VALUE          MEANING
'GMAX'     real           initial upper bound on GAM (default=Inf)
'GMIN'     real           initial lower bound on GAM (default=0)
'TOLGAM'   real           relative error tolerance for GAM (default=.01)
'Ts'       real           sampling period of the controller to be designed
                          (default=1)
'DELAY'    integer        a nonnegative integer giving the number of sample
                          periods of delay for the control computation
                          (default=0)
'DISPLAY'  'off' or 'on'  'off' (default): no command window display;
                          'on': command window displays synthesis progress
                          information

Output arguments:

K      H∞ controller
GAM    final γ value of the H∞ cost achieved

Algorithm   sdhinfsyn uses a variation of the formulae described in the Bamieh and Pearson paper [1] to obtain an equivalent discrete-time system. (This is done to improve the numerical conditioning of the algorithms.) A preliminary step is to determine whether the norm of the continuous-time system over one sampling period without control is less than the given γ value. This requires a search and is, computationally, a relatively expensive step.
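The γ iteration can be sketched generically in pure Python. Here feasible(g) is a stand-in for the expensive sampled-data feasibility test (the function and its toy threshold are illustrative only); bisection keeps GMAX at a value known to pass and GMIN at a value known to fail until the relative gap drops below TOLGAM:

```python
def gamma_iterate(feasible, gmin, gmax, tolgam=0.01):
    # Bisect until the relative difference between the last failed gamma
    # (gmin) and the last passed gamma (gmax) is below tolgam.
    while (gmax - gmin) / gmax > tolgam:
        g = 0.5 * (gmin + gmax)
        if feasible(g):
            gmax = g    # g passed: the achievable norm is below g
        else:
            gmin = g    # g failed: the achievable norm is above g
    return gmax

# Toy problem whose smallest achievable norm is 3.2:
gam = gamma_iterate(lambda g: g > 3.2, 0.0, 100.0)
print(gam)   # within about 1% of 3.2
```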
Reference   [1] Bamieh, B.A., and J.B. Pearson, "A General Framework for Linear Periodic Systems with Applications to Sampled-Data Control," IEEE Transactions on Automatic Control, vol. AC-37, 1992, pp. 418-435.

See Also    norm         System norm of an LTI object
            hinfsyn      Synthesize an H∞ optimal controller
            sdhinfnorm   Calculate the norm of a sampled-data feedback system
sdlsim

Purpose     Time response of a sampled-data feedback system

Syntax      sdlsim(p,k,w,t,tf)
            sdlsim(p,k,w,t,tf,x0,z0)
            sdlsim(p,k,w,t,tf,x0,z0,int)
            [vt,yt,ut,t] = sdlsim(p,k,w,t,tf)
            [vt,yt,ut,t] = sdlsim(p,k,w,t,tf,x0,z0)
            [vt,yt,ut,t] = sdlsim(p,k,w,t,tf,x0,z0,int)

Description sdlsim(p,k,w,t,tf) plots the time response of the hybrid feedback system lft(p,k), forced by the continuous input signal defined by w and t (values and times, as in lsim). p must be a continuous-time LTI system, and k must be a discrete-time LTI system with a specified sampling time (the unspecified sampling time, -1, is not allowed). The final time is specified with tf.

[vt,yt,ut,t] = sdlsim(p,k,w,t,tf) computes the continuous-time response of the hybrid feedback system lft(p,k), forced by the continuous input signal described by w and t. vt, yt and ut are 2-by-1 cell arrays: in each, the first entry is a time vector, and the 2nd entry is the signal values. Stored in this manner, the signal vt can be plotted by using one of the following commands:

plot(vt{1},vt{2})
plot(vt{:})

Signals yt and ut are respectively the input to k and the output of k.

sdlsim(p,k,w,t,tf,x0,z0) specifies the initial state vector x0 of p, and z0 of k, at time t(1). Nonzero initial conditions are allowed for p (and/or k) only if p (and/or k) is an ss object. The default value for x0 and z0 is zero.

sdlsim(p,k,w,t,tf,x0,z0,int) specifies the continuous-time integration step size int. sdlsim forces int = (k.Ts)/N, where N>4 is an integer. If any of these optional arguments are omitted, or passed as empty matrices, then default values are used.

If p and/or k are LTI arrays with consistent array dimensions, then the time simulation is performed "pointwise" across the array dimensions, and the outputs are 2-by-1-by-array-dimension cell arrays.
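The mechanics of the hybrid simulation (the controller output is held constant over each controller period while the continuous dynamics are integrated on a finer internal grid of step int = Ts/N) can be sketched in pure Python for an integrator plant with a hypothetical deadbeat control law. This illustrates only the sample/hold structure; sdlsim itself uses exact discretization of p over each integration step rather than Euler integration, and all names and numbers here are made up:

```python
def sample_data_step(T=0.05, N=8, n_periods=4):
    """Integrator plant xdot = u, discrete control u_k = (r - x_k)/T.

    The plant is integrated with N Euler substeps per controller period,
    mimicking an internal integration step int = T/N; the control input is
    held constant (zero-order hold) between samples.
    """
    r, x = 1.0, 0.0
    h = T / N
    trace = [x]
    for _ in range(n_periods):
        u = (r - x) / T          # controller sees x only at the sample instant
        for _ in range(N):       # hold u constant between samples
            x = x + h * u
            trace.append(x)      # intersample response, as sdlsim would return
    return trace

trace = sample_data_step()
print(trace[-1])   # the deadbeat law drives x to r = 1 after one period
```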
Example     To illustrate the use of sdlsim, consider the application of a discrete controller to an integrator with a near integrator. A continuous plant and a discrete controller are created. A sample-and-hold equivalent of the plant is formed and the discrete closed-loop system is calculated. lsim is used to simulate the digital step response.

P = tf(1,[1 1e-5 0]);
T = 1.0/20;
C = ss([-1.5 T/4; -2/T -.5],[.5 2; 1/T^2 1.5/T],[-1/T 1/T],[1/T^2 0],T);
Pd = c2d(P,T,'zoh');

You can use sysic to construct the interconnected feedback system.

systemnames = 'Pd C';
inputvar = '[ref]';
outputvar = '[Pd]';
input_to_C = '[ref; Pd]';
input_to_Pd = '[C]';
sysoutname = 'dclp';
cleanupsysic = 'yes';
sysic;

The closed-loop digital system is now set up. Simulating this with lsim gives the system response at the sample points.

[yd,td] = step(dclp,20*T);

sdlsim is then used to calculate the intersample behavior. The continuous interconnection is set up and the sampled-data response is calculated with sdlsim. All responses can be plotted simultaneously.

t = [0:.01:1]';
u = ones(size(t));
M = [0 1;1 0;0 1]*blkdiag(1,P);
y1 = sdlsim(M,C,u,t,1);
plot(td,yd,'r*',y1{:},'b')
axis([0 1 0 1.5])
xlabel('Time: seconds')
title('Step response: discrete (*), & continuous')

[Figure: step response; discrete samples (*) and continuous response.]

Note how examining the system at only the sample points will underestimate the amplitude of the overshoot. You can see the effect of a nonzero initial condition in the continuous-time system.

y2 = sdlsim(M,C,u,t,1,[0.25;0],0);
plot(td,yd,'r*',y2{:},'g')
axis([0 1 0 1.5])
xlabel('Time: seconds')
title('Step response: non zero initial condition')

[Figure: step response with nonzero initial condition.]

Finally, you can examine the effect of a sinusoidal disturbance at the continuous-time plant output. This controller has not been designed to reject such a disturbance, and the system does not contain antialiasing filters. Simulating the effect of antialiasing filters is easily accomplished by including them in the continuous interconnection structure.

M2 = [0 1 1;1 0 0;0 1 1]*blkdiag(1,1,P);
t = [0:.001:1]';
u = ones(size(t));
dist = 0.1*sin(41*t);
[y3,meas,act] = sdlsim(M2,C,[u dist],t,1);
plot(y3{:},'-',t,dist,'g-.')
xlabel('Time: seconds')
title('Step response: disturbance (dashed) & output (solid)')

[Figure: step response with output disturbance; disturbance (dashed) and output (solid).]

Algorithm   sdlsim oversamples the continuous-time system at N times the sample rate of the controller k.

See Also    gapmetric    Computes the gap and the Vinnicombe gap metric
            hinfsyn      Synthesizes an H∞ optimal controller
            norm         Computes the system norm of an LTI object
            sdhinfnorm   Calculates the norm of a sampled-data feedback system
            sdhinfsyn    Synthesizes a sampled-data H∞ optimal controller
K) is in sector SECF if and only if the system lft(G.NU.SECF.F.sectf Purpose Syntax Description 10sectf Statespace sector bilinear transformation [G.NY) where NU and NY are the dimensions of uT2 and yT2.SECG) computes a linear fractional transform T such that the system lft(F.SECG) [G.SECF.T] = sectf(F. 10292 .NY).K) is in sector SECG where G=lft(T. G yG1 yT1 yT2 T uT1 uT2 uG1 yF1 yG2 yF2 F uF1 uF2 uG2 K Figure 1017: Sector transform G=lft(T. respectively—see Figure 1017.T] = sectf(F.F. sectf are used to transform general conicsector control system performance specifications into equivalent H∞norm performance specifications.NU.
Input arguments:

F            LTI state-space plant
SECG, SECF   conic sector specifications, in any of the following forms:

Sector    Conic sector inequality
[-1,1]    ||y||2 <= ||u||2
[0,Inf]   0 <= Re[y* u]
[A,B]     0 >= Re[(y - Au)* (y - Bu)]
[a,b]     0 >= Re[(y - diag(a)u)* (y - diag(b)u)]
S         0 >= Re[(S11 u + S12 y)* (S21 u + S22 y)]

where A, B are scalars in [-∞, ∞] or square matrices; a, b are vectors; and S = [S11 S12; S21 S22] is a square matrix whose blocks S11, S12, S21, S22 are either scalars or square matrices. Alternatively, S may be a two-port system S = mksys(a,b1,b2,...,'tss') with transfer function

    S(s) = [S11(s)  S12(s)
            S21(s)  S22(s)]

Output arguments:

G    transformed plant G(s) = lftf(T,F)
T    LFT sector transform, mapping conic sector SECF into conic sector SECG
The linear fractional transformation T(s) is the two-port

    T(s) = [T11(s)  T12(s)
            T21(s)  T22(s)]

Examples    The statement "G(jω) inside sector[-1,1]" is equivalent to the H∞ inequality

    sup over ω of σmax(G(jω)) = ||G||∞ <= 1

Given a two-port open-loop plant P(s) := P, the command P1 = sectf(P,[0,Inf],[-1,1]) computes a transformed plant P1(s) := P1 such that lft(P,K) is inside sector[0,∞] if and only if lft(P1,K) is inside sector[-1,1]. In other words, norm(lft(P1,K),inf) < 1 if and only if lft(P,K) is strictly positive real.

[Figure 10-18: Sector transform block diagram. P(s) has inputs u1, u2 and outputs y1, y2, with K(s) closing the loop from y2 to u2.]

Here is a simple example of the sector transform:

    P(s) = 1/(s+1), in sector[-1,1]   maps to   P1(s) = (s+2)/s, in sector[0,∞]

You can compute this by simply executing the following commands:

P = ss(tf(1,[1 1]));
P1 = sectf(P,[-1,1],[0,Inf]);

The condition "P1(s) inside sector[0,∞]" implies that P1(s) is stable and P1(jω) is positive real:

    P1(jω) + P1*(jω) >= 0 for all ω

The Nyquist plots for this transformation are depicted in Figure 10-19.

[Figure 10-19: Example of the sector transform. Nyquist plots of P(s) = 1/(s+1), which lies in sector[-1,1], and P1(s) = (s+2)/s, which lies in sector[0,∞].]

Algorithm   sectf uses the generalization of the sector concept of [3] described by [1]. First, the sector input data Sf = SECF and Sg = SECG is converted to two-port state-space form; nondynamical sectors are handled with empty a, b1, b2, c1, c2 matrices. Next, the equation

    Sg(s) [ug1; yg1] = Sf(s) [uf1; yf1]

is solved for the two-port transfer function T(s) from [ug1; yf1] to [uf1; yg1]. Finally, the function lftf is used to compute G(s) as G = lftf(T,F).
Limitations
A well-posed conic sector must have det(B - A) ~= 0 or det([S11 S12; S21 S22]) ~= 0. Also, you must have dim(uF1) = dim(yF1), since sectors are only defined for square systems.

References
[1] Safonov, M.G., Stability and Robustness of Multivariable Feedback Systems, Cambridge, MA: MIT Press, 1980.
[2] Safonov, M.G., E.A. Jonckheere, M. Verma and D.J.N. Limebeer, "Synthesis of Positive Real Multivariable Feedback Systems," Int. J. Control, vol. 45, no. 3, pp. 817-842, 1987.
[3] Zames, G., "On the Input-Output Stability of Time-Varying Nonlinear Feedback Systems - Part I: Conditions Using Concepts of Loop Gain, Conicity, and Positivity," IEEE Trans. on Automat. Contr., AC-11, pp. 228-238, 1966.

See Also
lft        Forms Redheffer star product of systems
hinfsyn    H-infinity controller synthesis
frd/semilog

Purpose
Semilog scale plot of an frd object

Syntax
semilogx(...)

Description
semilogx(...) is the same as plot(...), except a logarithmic (base 10) scale is used for the X-axis.

See Also
loglog      Plot frd object on a loglog scale
plot        Plot frd object on a linear scale
semilogy    Plot frd object on a semilog scale
setlmis

Purpose
Initialize the description of an LMI system

Syntax
setlmis(lmi0)

Description
Before starting the description of a new LMI system with lmivar and lmiterm, type

    setlmis([])

to initialize its internal representation. To add on to an existing LMI system, use the syntax

    setlmis(lmi0)

where lmi0 is the internal representation of this LMI system. Subsequent lmivar and lmiterm commands will then add new variables and terms to the initial LMI system lmi0.

See Also
getlmis    Get the internal description of an LMI system
lmivar     Specify the matrix variables in an LMI problem
lmiterm    Specify the term content of LMIs
newlmi     Attach an identifying tag to LMIs
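The initialize/declare/retrieve cycle can be sketched in a few lines. The matrix A below is a stand-in chosen for illustration; any stable matrix works.

```matlab
% Sketch: start a fresh LMI system and enter the Lyapunov LMI
% A'*P + P*A < 0 for an illustrative stable matrix A.
A = [-1 2; 0 -3];
setlmis([])                   % initialize a new LMI system
P = lmivar(1,[2 1]);          % 2x2 symmetric matrix variable P
lmiterm([1 1 1 P],A',1,'s');  % LMI #1, block (1,1): A'*P + P*A
lmis = getlmis;               % retrieve the internal description
```

Passing lmis back through setlmis(lmis) would let later lmivar/lmiterm calls extend this system instead of starting over.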
setmvar

Purpose
Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable

Syntax
newsys = setmvar(lmisys,X,Xval)

Description
setmvar sets the matrix variable X with identifier X to the value Xval. All terms involving X are evaluated, the constant terms are updated accordingly, and X is removed from the list of matrix variables. A description of the resulting LMI system is returned in newsys.

The integer X is the identifier returned by lmivar when X is declared. Instantiating X with setmvar does not alter the identifiers of the remaining matrix variables.

The function setmvar is useful to freeze certain matrix variables and optimize with respect to the remaining ones. It saves time by avoiding partial or complete redefinition of the set of LMI constraints.

Example
Consider the system

    x' = Ax + Bu

and the problem of finding a stabilizing state-feedback law u = Kx where K is an unknown matrix. By the Lyapunov Theorem, this is equivalent to finding P > 0 and K such that

    (A + BK)P + P(A + BK)' + I < 0.

With the change of variable Y := KP, this condition reduces to the LMI

    AP + PA' + BY + Y'B' + I < 0.

This LMI is entered by the commands

    n = size(A,1)      % number of states
    ncon = size(B,2)   % number of inputs
    setlmis([])
    P = lmivar(1,[n 1])        % P full symmetric
    Y = lmivar(2,[ncon n])     % Y rectangular
    lmiterm([1 1 1 P],A,1,'s')   % AP+PA'
    lmiterm([1 1 1 Y],B,1,'s')   % BY+Y'B'
    lmiterm([1 1 1 0],1)         % I
    lmis = getlmis

To find out whether this problem has a solution K for the particular Lyapunov matrix P = I, set P to I by typing

    news = setmvar(lmis,P,1)

The resulting LMI system news has only one variable Y = K. Its feasibility is assessed by calling feasp:

    [tmin,xfeas] = feasp(news)
    Y = dec2mat(news,xfeas,Y)

The computed Y is feasible whenever tmin < 0.

See Also
evallmi    Given a particular instance of the decision variables, evaluate all variable terms in the system of LMIs
delmvar    Delete one of the matrix variables of an LMI problem
showlmi

Purpose
Return the left- and right-hand sides of an LMI after evaluation of all variable terms

Syntax
[lhs,rhs] = showlmi(evalsys,n)

Description
For given values of the decision variables, the function evallmi evaluates all variable terms in a system of LMIs. The left- and right-hand sides of the nth LMI are then constant matrices that can be displayed with showlmi. If evalsys is the output of evallmi, the values lhs and rhs of these left- and right-hand sides are given by

    [lhs,rhs] = showlmi(evalsys,n)

An error is issued if evalsys still contains variable terms.

Example
See the description of evallmi.

See Also
evallmi    Given a particular instance of the decision variables, evaluate all variable terms in the system of LMIs
setmvar    Instantiate a matrix variable and evaluate all LMI terms involving this matrix variable
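The evallmi/showlmi pairing can be sketched end to end. The matrix A and the candidate P = I below are illustrative stand-ins, not from the manual's example.

```matlab
% Sketch: evaluate the Lyapunov LMI A'*P + P*A < 0 at a candidate P
% and inspect both sides with showlmi.
A = [-1 2; 0 -3];
setlmis([])
P = lmivar(1,[2 1]);               % 2x2 symmetric P
lmiterm([1 1 1 P],A',1,'s');       % A'*P + P*A < 0
lmis = getlmis;
xval = mat2dec(lmis,eye(2));       % decision vector for P = I
evalsys = evallmi(lmis,xval);      % freeze all variable terms
[lhs,rhs] = showlmi(evalsys,1);    % both sides are now constant
eig(lhs - rhs)                     % all negative if the LMI holds at P = I
```

Here lhs is A' + A and rhs is 0, so negative eigenvalues of lhs - rhs confirm feasibility at this particular P.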
simplify

Purpose
Simplify representations of uncertain objects

Syntax
B = simplify(A)
B = simplify(A,'full')
B = simplify(A,'basic')
B = simplify(A,'off')

Description
B = simplify(A) performs model-reduction-like techniques to detect and eliminate redundant copies of uncertain elements. The AutoSimplify property of each uncertain element in A governs what reduction methods are used. After reduction, any uncertain element which does not actually affect the result is deleted from the representation. Depending on the result, the class of B may be lower than the class of A.

B = simplify(A,'full') overrides all uncertain elements' AutoSimplify property, and uses 'full' reduction techniques.

B = simplify(A,'basic') overrides all uncertain elements' AutoSimplify property, and uses 'basic' reduction techniques.

B = simplify(A,'off') does not perform reduction. However, any uncertain elements in A with zero occurrences are eliminated.

Example
Create a simple umat with a single uncertain real parameter. Select specific elements; note that the result remains in class umat. Simplify those same elements, and note that the class changes.

    p1 = ureal('p1',3,'Range',[2 5]);
    L = [2 p1];
    L(1)
    UMAT: 1 Rows, 1 Columns
    L(2)
    UMAT: 1 Rows, 1 Columns
      p1: real, nominal = 3, range = [2 5], 1 occurrence
    simplify(L(1))
    ans =
         2
    simplify(L(2))
    Uncertain Real Parameter: Name p1, NominalValue 3, Range [2 5]
..45*xcg*xcg*cw*cw . 1 occurrence cw = m/va. 1 Columns m: real.1*zcg . 1 Columns m: real. 8 occurrences xcg: real. nominal = 1.80.1*xcg 14.15 0.25*xcg*xcg .[100000 150000]). range = [100000 150000].105.28*xcg*xcg*cw*cw*zcg . with a default value of AutoSimplify ('basic').31]).17230*xcg*xcg*cw ..[0 .105.31]. 10303 .25e+005. range = [70 90].29*xcg*xcg*cw*zcg ..12*xcg*zcg + 24. 4 occurrences xcg: real.'full') UMAT: 1 Rows.'Range'. range = [0.21].15 . cw = simplify(m/(va*va)*va. nominal = 80. range = [0 0... Simplify the expression. nominal = 0. nominal = 80. 8 copies of va. +100. zcg = ureal('zcg'..25e+005. nominal = 80.25e+005. 18 occurrences zcg: real. 18 copies of xcg and 1 copy of zcg.. and define a high order polynomial [1]..'Range'. range = [0 0. range = [100000 150000].[. 1 occurrence The result of the highorder polynomial is an inefficient representation involving 18 copies of m.16726*xcg*cw*cw*zcg .21].1. xcg = ureal('xcg'. +1. 18 occurrences va: real.2.9*xcg*cw . fac2 = . +. 1 Columns m: real.15 0.91*cw*cw*zcg . range = [70 90]. nominal = 1.9*xcg*cw*zcg .[70 90])..23. range = [70 90].. +.'Range'.'full') UMAT: 1 Rows. ... 2 occurrences zcg: real.58*cw*cw . va = ureal('va'. nominal = 0.6*cw*zcg . 1 occurrence which results in a much more economical representation...0. nominal = 1.07*xcg*xcg*zcg + .31]. m = ureal('m'.105.46.. range = [100000 150000].'Range'.. 3.7*xcg*cw*cw . 4 occurrences va: real. using the 'full' simplification algorithm fac2s = simplify(fac2. range = [0.34*cw .125000. 1 occurrence va: real.1.23. + 4.85 UMAT: 1 Rows.21]).simplify Create four uncertain real parameters..23. nominal = 0. nominal = 0.
0474 .34*cw . da = ureal('da'..0072*dx.28*xcg*xcg*cw*cw*zcg .65 . nominal = 1. +.0.2 .. +100.031 + da*(..29*xcg*xcg*cw*zcg .07*xcg*xcg*zcg + . range = [1 1]. a12 = ..00308) + .37*da)) + ... nominal = 0.'Range'.15 + da*(4.7 + da*177)).32 + da*(.27) + dx*(2.00071 + da*(0.987 + 3. change the AutoSimplify property of each parameter to 'full' before forming the polynomial. da and dx.45*xcg*xcg*cw*cw .0011*dx.5. range = [0 0.15 0. +..1*da).9*xcg*cw*zcg . ABmat.AutoSimplify = 'full'. 2 occurrences zcg: real..da*(55.da*. nominal = 0. 19 occurrences 10304 .66 + da*(1.[1 1]).0.9*xcg*cw .1*xcg 14.. nominal = 80. 3..'Range'.561*da*da)) . 1 Columns m: real.078 + da*(. b1 = 0.. cw = m/va. .da*2. a22 = ..a21 a22 b2] UMAT: 2 Rows. 4 occurrences va: real.. nominal = 0...0. m. va.1.25e+005.85 UMAT: 1 Rows.1*zcg .6*cw*zcg .AutoSimplify = 'full'.7*xcg*cw*cw . involving polynomial expressions in the two real parameters [2]. dx = ureal('dx'.91*cw*cw*zcg .25*xcg*xcg . and a 2by3 matrix.17230*xcg*xcg*cw ..1. a21 = 1.16726*xcg*cw*cw*zcg .66 .464 + 1. + dx*(9. fac2f = .AutoSimplify = 'full'. ABmat = [a11 a12 b1. You can form the polynomial..105. 4 occurrences xcg: real. range = [0.58*cw*cw .39*da)) + .46. 3 Columns da: real. zcg. xcg.. a11 = .97 .12*xcg*zcg + 24.23. +1.8089 + da*(. range = [70 90].39 + da*(21.934 + da*(.. b2 = 0. 1 occurrence Create two real parameters. + 4.302*da).simplify Alternatively. range = [100000 150000].2. which immediately gives a low order representation.31].00175 .21].[1 1]).15*dx.AutoSimplify = 'full'.
nominal = 0.464 + 1.heuristics. range = [1 1].00071 + da*(0.. Multidimensional model reduction and realization theory are only partially complete theories.. can affect the details of the representation (i. dx. nominal = 0.da*2. range = [1 dx: real. range = [1 1].7 + da*177)). 7 occurrences 1].da*(55. It is possible that simplify’s naive methods cannot completely resolve these differences.da*.1*da).2 . nominal = 0.. b2 = 0. ABmatsimp = simplify(ABmat. + dx*(9. 2 occurrences Alternatively.66 + da*(1.'full') UMAT: 2 Rows. 7 occurrences dx: real.. da.00175 .37*da)) + . 3 Columns da: real.0072*dx. range = [1 1].0474 ..302*da). The heuristics used by simplify are that .39*da)) + .8089 + da*(. The order in which expressions involving uncertain elements are built up. Now you can rebuild the matrix a11 = . nominal = 0. you can set the parameter’s AutoSimplify property to 'full'. a22 = . range = [1 1].AutoSimplify = 'full'. a12 = .15*dx.e. ABmatFull = [a11 a12 b1. 2 occurrences Algorithm Limitations simplify uses heuristics along with onedimensional model reduction algorithms to partially reduce the dimensionality of the representation of an uncertain matrix or system.39 + da*(21.simplify dx: real.a21 a22 b2] UMAT: 2 Rows. b1 = 0.5. nominal = 0.561*da*da)) .934 + da*(.65 . 10305 .27) + dx*(2.078 + da*(. so one may be forced to work with “nonminimal” representations of uncertain systems.00308) + .AutoSimplify = 'full'. 2 occurrences Use 'full' simplification to reduce the complexity of the description. 3 Columns da: real.32 + da*(. a21 = 1.66 .031 + da*(.97 . eg. distributing across addition and multiplication.987 + 3. the number of occurences of a ureal in an uncertain matrix).0011*dx.15 + da*(4.
References
[1] Varga, A., and G. Looye, "Symbolic and numerical software tools for LFT-based low order uncertainty modeling," IEEE International Symposium on Computer Aided Control System Design, 1999.
[2] Belcastro, C.M., K.B. Lim and E.A. Morelli, "Computer aided uncertainty modeling for nonlinear parameter-dependent systems, Part II: F-16 example," IEEE International Symposium on Computer Aided Control System Design, 1999.

See Also
umat        Creates an uncertain matrix object
uss         Creates an uncertain system object
ucomplex    Creates an uncertain complex parameter
ureal       Creates an uncertain real parameter
skewdec

Purpose
Form a skew-symmetric matrix

Syntax
x = skewdec(m,n)

Description
skewdec(m,n) forms the m-by-m skew-symmetric matrix

    [    0     -(n+1)  -(n+2)  ...
       (n+1)     0     -(n+3)  ...
       (n+2)   (n+3)     0     ...
        ...     ...     ...    ... ]

This function is useful to define skew-symmetric matrix variables. In this case, set n to the number of decision variables already used.

See Also
decinfo    Describe how the entries of a matrix variable X relate to the decision variables
lmivar     Specify the matrix variables in an LMI problem
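The entry pattern above is easiest to see with small arguments:

```matlab
% With m = 3 and n = 0, entries below the diagonal count up from n+1.
X = skewdec(3,0)
% X =
%      0    -1    -2
%      1     0    -3
%      2     3     0
isequal(X,-X')   % skew-symmetry: returns logical 1
```

With n set to the number of decision variables already declared, the nonzero entries continue the decision-variable numbering, which is what makes the matrix usable as a skew-symmetric lmivar structure.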
slowfast

Purpose
Slow and fast modes decomposition

Syntax
[G1,G2] = slowfast(G,cut)

Description
slowfast computes the slow and fast modes decompositions of a system G(s) such that

    G(s) = [G(s)]s + [G(s)]f

where [G(s)]s := (A11,B1,C1,D1) denotes the slow part of G(s), and [G(s)]f := (A22,B2,C2,D2) denotes the fast part. The variable cut denotes the index where the modes will be split.

Algorithm
slowfast employs the algorithm in [1] as follows. Find a unitary matrix V via the ordered Schur decomposition routines blksch or rschur such that

    V'AV = [A11 A12; 0 A22]

Based on the style of ordered Schur form, you get eig_i(A11) < eig_i(A22). Finally, solving the matrix equation for X

    A11*X - X*A22 + A12 = 0

you get the state-space projections [G(s)]s := (A11,B1,C1,D1) and [G(s)]f := (A22,B2,C2,D2)
where

    [B1; B2] := [I -X; 0 I] V'B    and    [C1 C2] := CV [I X; 0 I]

References
[1] Safonov, M.G., E.A. Jonckheere, M. Verma and D.J.N. Limebeer, "Synthesis of Positive Real Multivariable Feedback Systems," Int. J. Control, vol. 45, no. 3, pp. 817-842, 1987.

See Also
blkrsch    Block ordered Schur realization
cschur     Complex Schur realization
rschur     Real Schur decomposition
schur      Schur decomposition
modreal    Modal form realization
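A minimal usage sketch of slowfast follows; the pole locations and cut index are illustrative choices, not from the manual.

```matlab
% Sketch: split a system with one slow pole (-0.5) and two fast poles
% (-50, -100); cut = 1 keeps the single slow mode in G1.
G = ss(zpk([],[-0.5 -50 -100],1));
[G1,G2] = slowfast(G,1);
damp(G1)   % slow part: pole near -0.5
damp(G2)   % fast part: poles near -50 and -100
```

Adding G1 and G2 back together recovers the original transfer function, which is a quick sanity check on the decomposition.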
squeeze

Purpose
Remove singleton dimensions for umat objects

Syntax
B = squeeze(A)

Description
B = squeeze(A) returns an array B with the same elements as A but with all the singleton dimensions removed. A singleton is a dimension such that size(A,dim)==1. 2-D arrays are unaffected by squeeze, so that row vectors remain rows.

See Also
permute    Permutes array dimensions
reshape    Changes size of matrix
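As a sketch of the behavior on an uncertain array (the particular umat below is an illustrative construction, and the squeezed size is the expected result, mirroring squeeze on numeric arrays):

```matlab
% Sketch: a 2-by-1 umat stacked 3 deep along its first array dimension
% gives overall size [2 1 3]; squeeze should drop the singleton
% column dimension.
p = ureal('p',4);
M = stack(1,[p;1],[2*p;0],[p+1;1]);   % 2-by-1 umat, 3-element array
size(M)            % [2 1 3]
size(squeeze(M))   % expected [2 3]
```

The 2-D rule still applies: squeezing a plain 1-by-3 umat row leaves it a row.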
uss/ssbal

Purpose
Scale state/uncertainty while preserving the uncertain input/output map of an uncertain system

Syntax
usysout = ssbal(usys)
usysout = ssbal(usys,wc)
usysout = ssbal(usys,wc,FSflag)
usysout = ssbal(usys,wc,FSflag,BLTflag)

Description
usysout = ssbal(usys) yields a system whose input/output and uncertain properties are the same as usys, an uss object. The balancing algorithm uses mussv to balance the constant uncertain state-space matrices in discrete-time. If usys is a continuous-time uncertain system, the uncertain state-space is mapped by using a bilinear transformation into discrete-time for balancing. The numerical conditioning of usysout is usually better than that of usys, improving the accuracy of additional computations performed with usysout. usysout is an uss object.

usysout = ssbal(usys,wc) defines the critical frequency, wc, for the bilinear prewarp transformation from continuous-time to discrete-time. The default value of wc is 1 when the nominal uncertain system is stable, and 1.25*mxeig when it is unstable, where mxeig is the real part of the most positive pole of usys.

usysout = ssbal(usys,wc,FSflag) sets the scaling flag, FSflag, to handle repeated uncertain parameters. Setting FSflag=1 uses full matrix scalings to balance the repeated uncertain parameter blocks. FSflag=0, the default, uses a single, positive scalar to balance the repeated uncertain parameter blocks.

usysout = ssbal(usys,wc,FSflag,BLTflag) sets the bilinear transformation flag, BLTflag. By default, BLTflag=1 and transforms the continuous-time system usys to a discrete-time system for balancing. BLTflag=0 results in balancing the continuous-time state-space data from usys. Note that if usys is a discrete-time system, no bilinear transformation is performed.

ssbal does not work on an array of uncertain systems. An error message is generated to alert you of this.

Example
Consider a two-input, two-output, two-state uncertain system with two real parameter uncertainties, p1 and p2. ssbal is used to balance the uncertain system.
range = [19 11]. range = [19 11].NominalValue a = x1 x2 x1 12 3.034 .00019 2]. nominal = 17.43]%.17. usys = ss(A.C.0076 2 u1 120 503 u2 809 24 Continuoustime model.43]%.001 p2].B.2.503 24].43 0. 2 Inputs. 1 occurrence usys.43). ssbal is used to balance the uncertain system.zeros(2.'Percentage'.'Range'.034 0.2)) USS: 2 States. Continuous System p1: real.uss/ssbal p2=ureal('p2'. variability = [0. usysout = ssbal(usys) USS: 2 States. nominal = 17. 2 Inputs. 1 occurrence p2: real. nominal = 3.2. A = [12 p1. p1=ureal('p1'. 2 Outputs.2 x2 0.0076. variability = [0. .3. 2 Outputs.00019 x2 0. Continuous System p1: real.001 17 b = x1 x2 c = y1 y2 d = y1 y2 u1 0 0 u2 0 0 x1 0. nominal = 3. usys. 1 occurrence p2: real.43 0.[19 11]). 1 occurrence 10312 . B = [120 809.2.0.. C = [.
7 u2 5.229 0.512 Continuoustime model.02922 x2 0.009692 17 b = x1 x2 c = y1 y2 d = y1 y2 u1 0 0 u2 0 0 x1 5.7802 31.NominalValue a = x1 x2 x1 12 0.26 1.uss/ssbal usysout.1206 31. See Also canon c2d d2c mussv mussvextract ss2ss Forms canonical statespace realizations Converts continuoustime models to discretetime Converts discretetime models to continuoustime Sets bounds on the Structure Singular Value (µ) Extracts compressed data returned from mussv Changes state coordinates for statespace models 10313 .3302 x2 0.74 u1 0.
stabproj

Purpose
Stable and antistable projection

Syntax
[G1,G2,m] = stabproj(G)

Description
stabproj computes the stable and antistable projections of a minimal realization G(s) such that

    G(s) = [G(s)]- + [G(s)]+

where [G(s)]- := (A11,B1,C1,D1) denotes the stable part of G(s), and [G(s)]+ := (A22,B2,C2,D2) denotes the antistable part. The variable m returns the number of stable eigenvalues of A.

Algorithm
stabproj employs the algorithm in [1] as follows. Find a unitary matrix V via the ordered Schur decomposition routines blksch or rschur such that

    V'AV = [A11 A12; 0 A22]

Based on the style of ordered Schur form, you can get a stable A11 and an antistable A22, or eig_i(A11) < eig_i(A22) for the case of slowfast. Finally, solving the matrix equation for X

    A11*X - X*A22 + A12 = 0

you get the state-space projections [G(s)]- := (A11,B1,C1,D1) and [G(s)]+ := (A22,B2,C2,D2)
where

    [B1; B2] := [I -X; 0 I] V'B    and    [C1 C2] := CV [I X; 0 I]

References
[1] Safonov, M.G., E.A. Jonckheere, M. Verma and D.J.N. Limebeer, "Synthesis of Positive Real Multivariable Feedback Systems," Int. J. Control, vol. 45, no. 3, pp. 817-842, 1987.

See Also
blkrsch    Block ordered Schur realization
cschur     Complex Schur realization
rschur     Real Schur decomposition
schur      Schur decomposition
modreal    Modal form realization
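A minimal usage sketch of stabproj; the pole locations below are illustrative choices, not from the manual.

```matlab
% Sketch: a plant with stable poles at -1 and -3 and an unstable pole
% at +2 splits into stable and antistable projections.
G = ss(zpk([],[-1 2 -3],1));
[G1,G2,m] = stabproj(G);
m          % number of stable eigenvalues, here 2
pole(G1)   % stable part: poles near -1 and -3
pole(G2)   % antistable part: pole near +2
```

Since the decomposition is additive, G1 + G2 reproduces the original response, with all right-half-plane dynamics isolated in G2.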
stack

Purpose
Construct an array by stacking uncertain matrices, models, or arrays

Syntax
umatout = stack(arraydim,umat1,umat2,...)
usysout = stack(arraydim,usys1,usys2,...)

Description
stack constructs an uncertain array by stacking uncertain matrices, models, or arrays along array dimensions of an uncertain array.

umatout = stack(arraydim,umat1,umat2,...) produces an array of uncertain matrices, umatout, by stacking (concatenating) the umat matrices (or umat arrays) umat1, umat2,... along the array dimension arraydim. All matrices must have the same number of columns and rows. The column/row dimensions are not counted in the array dimensions.

usysout = stack(arraydim,usys1,usys2,...) produces an array of uncertain models, usysout (ufrd or uss), by stacking (concatenating) the ufrd or uss models (or arrays) usys1, usys2,... along the array dimension arraydim. All models must have the same number of columns and rows (the same input/output dimensions). Note that the input/output dimensions are not counted in the array dimensions.

Example
Consider usys1 and usys2, two single-input/single-output uss models:

    zeta = ureal('zeta',0.5,'Range',[0.3 0.7]);
    wn = ureal('wn',1,'Range',[0.4 4]);
    P1 = tf(1,[1 2*zeta*wn wn^2]);
    P2 = tf(zeta,[1 10]);

You can stack along the first dimension to produce a 2-by-1 uss array:

    stack(1,P1,P2)   % [array, 2 x 1]

You can stack along the second dimension to produce a 1-by-2 uss array:

    stack(2,P1,P2)   % [array, 1 x 2]

You can stack along the third dimension to produce a 1-by-1-by-2 uss array:

    stack(3,P1,P2)   % [array, 1 x 1 x 2]

See Also
append     Groups models by appending their inputs and outputs
blkdiag    Groups models by appending their inputs and outputs
horzcat    Performs horizontal concatenation
vertcat    Performs vertical concatenation
frd/svd

Purpose
Singular value decomposition of an frd object

Syntax
S = svd(X)
[U,S,V] = svd(X)

Description
S = svd(X) operates on X.ResponseData at each frequency to construct S.

[U,S,V] = svd(X) produces a diagonal frd S that has the same dimensions as X and includes positive diagonal elements in decreasing order, such that X = U*S*V'. U and V are unitary matrices and frd objects. For more information, see the built-in svd command.

See Also
schur    Constructs a Schur decomposition
svd      Constructs a singular value decomposition
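The per-frequency factorization can be sketched as follows; the random plant and grid below are illustrative stand-ins.

```matlab
% Sketch: SVD of a 2-by-2 frequency response object. At each frequency
% point, X = U*S*V' should hold for the stored response data.
w   = logspace(-1,2,30);
sys = frd(rss(3,2,2),w);        % random 2-by-2 state-space -> frd
[U,S,V] = svd(sys);             % U, S, V are frd objects on the same grid
k   = 5;                        % check one frequency point
err = norm(sys.ResponseData(:,:,k) - ...
           U.ResponseData(:,:,k)*S.ResponseData(:,:,k)* ...
           V.ResponseData(:,:,k)');   % should be near machine precision
```

S = svd(sys) alone returns just the singular values versus frequency, which is convenient for gain-spread plots.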
n) forms an mbym symmetric matrix of the form (n + 1) (n + 2) (n + 4) (n + 2) (n + 3) (n + 5) (n + 4) (n + 5) (n + 6) … … … … … … … … … … … This function is useful to define symmetric matrix variables. See Also decinfo Show how matrix variables depend on decision variables 10319 .n) symdec(m. n is the number of decision variables.symdec Purpose Syntax Description 10symdec Form a symmetric matrix x = symdec(m.
sysic

Purpose
Build interconnections of certain and uncertain matrices and systems

Syntax
sysout = sysic

Description
sysic requires that 3 variables with fixed names are present in the calling workspace: systemnames, inputvar and outputvar.

systemnames is a char, containing the names of the subsystems (double, tf, zpk, ss, frd, uss, ufrd, etc.) that comprise the interconnection. The names must be separated by spaces with no additional punctuation. Each named variable must exist in the calling workspace.

inputvar is a char, defining the names of the external inputs to the interconnection. The names are separated by semicolons, and the entire list is enclosed in square brackets [ ]. Inputs can be scalar or multivariate. For instance, a 3-component (x,y,z) force input can be specified with 3 separate names, Fx, Fy, Fz. Alternatively, a single name with a defined integer dimension can be specified, as in F{3}. The order of names in inputvar determines the order of inputs in the interconnection.

outputvar is a char, describing the outputs of the interconnection. Outputs do not have names; they are simply linear combinations of individual subsystems' outputs and external inputs. Semicolons delineate separate components of the interconnection's outputs. Between semicolons, signals can be added and subtracted, and multiplied by scalars. For multivariable subsystems, arguments within parentheses specify which subsystem outputs are to be used and in what order. For instance, plant(2:4,1,9:11) specifies outputs 2,3,4,1,9,10,11 from the subsystem plant. If a subsystem is listed in outputvar without arguments, then all outputs from that subsystem are used.

sysic also requires that for every subsystem name listed in systemnames, a corresponding variable, input_to_ListedSubSystemName, must exist in the calling workspace. This variable is similar to outputvar; it defines the input signals to this particular subsystem as linear combinations of individual subsystems' outputs and external inputs.

sysout = sysic will perform the interconnection described by the variables above, using the subsystem data in the names found in systemnames. The resulting interconnection is returned in the output argument, listed above as sysout.
Example
A simple system interconnection, identical to the system illustrated in the iconnect description. Consider a three-input, two-output LTI system T, with external inputs noise, deltemp, setpoint and outputs y1, y2.

[Block diagram: T is built from the subsystems W, A, K and P, with a gain of 57.3 applied to the first output of P.]

    P = rss(3,2,2);
    K = rss(1,1,2);
    A = rss(1,1,1);
    W = rss(1,1,1);
    systemnames = 'W A K P';
    inputvar = '[noise;deltemp;setpoint]';
    outputvar = '[57.3*P(1);setpoint]';
    input_to_W = '[deltemp]';
    input_to_A = '[K]';
    input_to_K = '[P(2)+noise;setpoint-P(2)]';
    input_to_P = '[W;A]';
    T = sysic;
Limitations
The syntax of sysic is limited, and for the most part is restricted to what is shown here. Within sysic, error-checking routines monitor the consistency and availability of the subsystems and their inputs. These routines provide a basic level of error detection to aid the user in debugging. The iconnect interconnection object can also be used to define complex interconnections, and has a more flexible syntax.

See Also
iconnect    Equates expressions for icsignal objects
ucomplex

Purpose
Create an uncertain complex parameter

Syntax
A = ucomplex('NAME',nominalvalue)
A = ucomplex('NAME',nominalvalue,'Property1',Value1,'Property2',Value2,...)

Description
An uncertain complex parameter is used to represent a complex number whose value is uncertain. Uncertain complex parameters have a name (the Name property) and a nominal value (the NominalValue property).

The uncertainty (potential deviation from the nominal value) is described in two different manners:
  Radius (radius of disc centered at NominalValue)
  Percentage (disc size is percentage of magnitude of NominalValue)

The Mode property determines which description remains invariant if the NominalValue is changed (the other is derived). The default Mode is 'Radius' and the default radius is 1. Property/Value pairs may also be specified upon creation. For instance,

    B = ucomplex('B',6-j,'Percentage',25)

sets the nominal value to 6-j, the Mode to 'Percentage', and the percentage uncertainty to 25.

Example
Create an uncertain complex parameter with internal name A, centered at 4+3j.

    A = ucomplex('A',4+3*j)
    Uncertain Complex Parameter: Name A, NominalValue 4+3i, Radius 1

The uncertain parameter's possible values are a complex disc of radius 1, centered at 4+3j. Implicitly, the percentage is 20 (the radius is 1/5 of the magnitude of the nominal value). You can visualize the uncertain complex parameter by sampling and plotting the data.

    sa = usample(A,400);
    w = linspace(0,2*pi,200);
    circ = sin(w) + j*cos(w);
    rc = real(A.NominalValue+circ);
    ic = imag(A.NominalValue+circ);
    plot(real(sa(:)),imag(sa(:)),'o',rc,ic,'k')
    xlim([2.5 5.5])
    ylim([1.5 4.5])
    axis equal

[Figure: "Sampled complex parameter A" - 400 samples scattered inside the unit-radius disc centered at 4+3j.]

See Also
get          Gets object properties
umat         Creates an uncertain matrix object
ucomplexm    Creates an uncertain complex matrix
ultidyn      Creates an uncertain LTI dynamic object
ureal        Creates an uncertain real parameter
ucomplexm

Purpose
Create uncertain complex matrix

Syntax
M = ucomplexm('Name',NominalValue)
M = ucomplexm('Name',NominalValue,'WL',WLvalue,'WR',WRvalue)
M = ucomplexm('Name',NominalValue,'Property',Value,...)

Description
M = ucomplexm('Name',NominalValue) creates an uncertain complex matrix that represents a "ball" of complex-valued matrices, centered at NominalValue and named Name.

M = ucomplexm('Name',NominalValue,'WL',WLvalue,'WR',WRvalue) creates an uncertain complex matrix with weights WL and WR. WL and WR are square, invertible weighting matrices that quantify the size and shape of the ball of matrices represented by this object. Specifically, the values represented by M are all matrices H which satisfy

    norm(inv(M.WL)*(H - M.NominalValue)*inv(M.WR)) <= 1.

The default values for WL and WR are identity matrices of appropriate dimensions. Trailing Property/Value pairs are allowed, as in

    M = ucomplexm('NAME',NominalValue,'P1',V1,'P2',V2).

The property AutoSimplify controls how expressions involving the uncertain matrix are simplified. Its default value is 'basic', which means elementary methods of simplification are applied as operations are completed. Other values for AutoSimplify are 'off', no simplification performed, and 'full', which applies model-reduction-like techniques to the uncertain object.

Example
Create a ucomplexm with the name 'F', nominal value [1 2 3;4 5 6], and weighting matrices WL = diag([.1 .3]), WR = diag([.4 .8 1.2]).

    WL = diag([.1 .3]);
    WR = diag([.4 .8 1.2]);
    F = ucomplexm('F',[1 2 3;4 5 6],'WL',WL,'WR',WR);

Sample the difference between the uncertain matrix and its nominal value at 80 points, yielding a 2-by-3-by-80 matrix typicaldev.

    typicaldev = usample(F-F.NominalValue,80);

Plot histograms of the deviations in the (1,1) entry as well as the deviations in the (2,3) entry. The absolute value of the deviation in the (1,1) entry should be about 10 times smaller than the typical deviations in the (2,3) entry.

    subplot(2,1,1); hist(abs(typicaldev(1,1,:))); xlim([0 .25])
    title('Sampled F(1,1) - F(1,1).NominalValue')
    subplot(2,1,2); hist(abs(typicaldev(2,3,:))); xlim([0 .25])
    title('Sampled F(2,3) - F(2,3).NominalValue')

See Also
get         Gets object properties
umat        Creates an uncertain matrix object
ucomplex    Creates an uncertain complex parameter
ultidyn     Creates an uncertain LTI dynamic object
ureal       Creates an uncertain real parameter
size 2x3 size(N) ans = 2 3 get(N) Name: 'N' NominalValue: [2x3 double] AutoSimplify: 'basic' See Also ureal ultidyn ucomplex ucomplexm Creates an uncertain real parameter Creates an uncertain linear timeinvariant object Creates an uncertain complex parameter Creates an uncertain complex matrix 10327 .g. As such. This object represents the class of completely unknown multivariable. cascade) operate properly. multiplication (i. n = udyn('name'.. n = udyn('name'.e. The analysis tools (e. these uncertain elements represent noncommuting symbolic variables (placeholders). robuststab) do not currently handle these types of uncertain elements. For practical purposes. subtraction. these elements do not provide a significant amount of usability.iosize) creates an unstructured uncertain dynamic system class. and their role in the toolbox is small.. timevarying nonlinear systems. Example You can create a 2by3 udyn element and check its size and properties.[2 3]) Uncertain Dynamic System: Name N. with input/output dimension specified by iosize.udyn Purpose Syntax Description 10udyn Create an unstructured uncertain dynamic system object. All algebraic operations. and substitution (with usubs) is allowed. N = udyn('N'.iosize). such as addition.
ufrd

Purpose
Create an uncertain frequency response data (ufrd) object

Syntax
usys = ufrd(response,frequency)
usys = ufrd(response,frequency,'Units',units)
usys = ufrd(response,frequency,'Units',units,Ts)
usys = ufrd(response,frequency,'Units',units,Ts,RefSys)
usysfrd = ufrd(usys,frequency)
usysfrd = ufrd(usys,frequency,'Units',units)
usysfrd = ufrd(sysfrd)

Description
Uncertain frequency response data (ufrd) models result from the conversion of an uncertain state-space (uss) system to its uncertain frequency response. ufrd models also result when frequency response data models (frd) are combined with uncertain matrices (umat).

usysfrd = ufrd(usys,frequency,'Units',units) converts a uss model usys to a ufrd model usysfrd by frequency response. 'Units' specifies the units of the frequencies in frequency, which may be 'rad/s' or 'Hz'. If the last two arguments are omitted, the default for frequency units is 'rad/s'.

usys = ufrd(response,frequency) creates a ufrd from the response and frequency arguments. response should be a umat array, whose first array dimension (i.e., size(response,3)) aligns with the frequency. Note that you are unlikely to use this option.

usysfrd = ufrd(sysfrd) converts an frd model sysfrd to a ufrd model usysfrd with no uncertain elements.

Any of the previous syntaxes can be followed by property name/property value pairs, 'P1',V1,'P2',V2,..., which set the properties P1, P2,... to the values V1, V2,...
p1 = ureal('p1',5,'Range',[2 6]);
p2 = ureal('p2',3,'Plusminus',0.4);
p3 = ultidyn('p3',[1 1]);
Wt = makeweight(.15,30,10);
A = [-p1 0;p2 -p1];
B = [0;p2];
C = [1 1];
usys = uss(A,B,C,0)*(1+Wt*p3);
usysfrd = ufrd(usys,logspace(-2,2,60));
bode(usysfrd,'r',usysfrd.NominalValue,'b+')

Example 2
Convert a not-uncertain frd model to ufrd without uncertainties. Verify the equality of the nominal value and the simplified representation to the original system.

G = frd(tf([1 2 3],[1 2 3 4]),logspace(-2,2,40));
usys = ufrd(G)
UFRD: 1 Output, 1 Input, Continuous System, 40 Frequency points
isequal(usys.NominalValue,G)
ans =
     1
isequal(simplify(usys,'class'),G)
ans =
     1

See Also
frd    Creates or converts to frequency response data model
ss     Creates or converts to state-space model
ultidyn

Purpose
Create an uncertain linear time-invariant object.

Syntax
H = ultidyn('Name',iosize)
H = ultidyn('Name',iosize,'Property1',Value1,'Property2',Value2,...)

Description
Uncertain linear, time-invariant objects are used to represent unknown dynamic objects whose only known attributes are bounds on their frequency response. Uncertain linear, time-invariant objects have a name (the Name property) and an input/output size (the ioSize property).

H = ultidyn('Name',iosize) creates an uncertain linear, time-invariant object with input/output dimension specified by iosize. Trailing Property/Value pairs are allowed in the construction, as in H = ultidyn('Name',iosize,'Property1',Value1,'Property2',Value2,...).

The property Type is 'GainBounded' (default) or 'PositiveReal', and describes in what form the knowledge about the object's frequency response is specified.
• If Type is 'GainBounded', then the knowledge is an upper bound on the magnitude (i.e., absolute value), namely abs(H) <= Bound at all frequencies. The matrix generalization of this is a bound on the maximum singular value of H.
• If Type is 'PositiveReal', then the knowledge is a lower bound on the real part, namely Real(H) >= Bound at all frequencies. The matrix generalization of this is H+H' >= 2*Bound.

The property Bound is a real scalar, which quantifies the bound on the frequency response of the uncertain object as described above.

The property SampleStateDim is a positive integer, defining the state dimension of random samples of the uncertain object when sampled with usample. The default value is 1.

The property AutoSimplify controls how expressions involving the uncertain matrix are simplified. Its default value is 'basic', which means elementary methods of simplification are applied as operations are completed. Other values for AutoSimplify are 'off', no simplification performed, and 'full', which applies model-reduction-like techniques to the uncertain object.
Example
Example 1
Create a ultidyn object with internal name 'H', dimension 2-by-3, norm bounded by 7.

H = ultidyn('H',[2 3],'Bound',7)
Uncertain GainBounded LTI Dynamics: Name H, 2x3, Gain Bound = 7

Example 2
Create a scalar ultidyn object with an internal name 'B', whose frequency response has a real part greater than 2.5. Change the SampleStateDim to 5, and plot the Nyquist plot of 30 random samples.

B = ultidyn('B',[1 1],'Type','PositiveReal','Bound',2.5)
Uncertain PositiveReal LTI Dynamics: Name B, 1x1, M+M' >= 2*(2.5)
B.SampleStateDim = 5;
nyquist(usample(B,30))

[Figure: Nyquist diagram of the 30 random samples of B, Imaginary Axis versus Real Axis]

See Also
get      Gets object properties
ureal    Creates an uncertain real parameter
uss      Creates an uncertain LTI system object
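As a further sketch, not part of the original manual, the Bound and SampleStateDim properties interact with usample as follows; the norm computation merely illustrates the 'GainBounded' interpretation, and the specific numbers are assumptions.

```matlab
% Samples of a GainBounded ultidyn are random LTI systems whose peak gain
% respects the Bound property; SampleStateDim fixes their state dimension.
H = ultidyn('H',[1 1],'Bound',7,'SampleStateDim',3);
Hs = usample(H,5);          % 5 random 3-state state-space samples
norm(Hs(:,:,1),inf)         % peak gain of one sample; no larger than 7
```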
umat

Purpose
Uncertain matrices

Syntax
h = umat(m)

Description
Uncertain matrices are usually created by manipulation of uncertain atoms (ureal, ultidyn, ucomplex, etc.), double matrices and other uncertain matrices. Most standard matrix manipulations are valid, including addition, multiplication, inverse, and horizontal and vertical concatenation. Specific rows/columns of an uncertain matrix can be referenced and assigned.

If M is a umat, then M.NominalValue is the result obtained by replacing each uncertain atom in M with its own nominal value. If M is a umat, then M.Uncertainty is an object describing all of the uncertain atoms in M. All atoms can be referenced, and their properties modified, with this Uncertainty gateway. For instance, if B is an uncertain real parameter in M, then M.Uncertainty.B accesses the uncertain atom B in M.

The command umat is rarely used. There are two situations where it may be useful. If M is a double, then H = umat(M) recasts M as an uncertain matrix (umat object) without any uncertainties. Similarly, if M is an uncertain atom, then H = umat(M) recasts M as an uncertain matrix (umat object) whose value is merely the uncertain atom. In both cases, simplify(H,'class') is the same as M.

Example
Create 3 uncertain atoms, and then a 3-by-2 umat.

a = ureal('a',5,'Range',[2 6]);
b = ucomplex('b',1+j,'Radius',0.5);
c = ureal('c',3,'Plusminus',0.4);
M = [a b;b*a 7;-c b^2]
UMAT: 3 Rows, 2 Columns
  a: real, nominal = 5, range = [2 6], 2 occurrences
  b: complex, nominal = 1+1i, radius = 0.5, 4 occurrences
  c: real, nominal = 3, variability = [-0.4 0.4], 1 occurrence

View the properties of M with get.

get(M)
    NominalValue: [3x2 double]
     Uncertainty: [1x1 atomlist]
The nominal value of M is the result when all atoms are replaced by their nominal values.

M.NominalValue
ans =
   5.0000             1.0000 + 1.0000i
   5.0000 + 5.0000i   7.0000
  -3.0000             0 + 2.0000i

Get a random sample of M, obtained by taking random samples of the uncertain atoms within M.

usample(M)
ans =
   2.3829             1.1715 + 2.3960i
   2.7358 + 2.3854i   7.0000
  -3.0072             1.8647 + 1.7808i

Change the nominal value of a within M to 4. The nominal value of M reflects this change.

M.Uncertainty.a.NominalValue = 4;
M.NominalValue
ans =
   4.0000             1.0000 + 1.0000i
   4.0000 + 4.0000i   7.0000
  -3.0000             0 + 2.0000i

Select the 1st and 3rd rows, and 2nd column of M. The result is a 2-by-1 umat, whose dependence is only on b.

M([1 3],2)
UMAT: 2 Rows, 1 Columns
  b: complex, nominal = 1+1i, radius = 0.5, 3 occurrences

See Also
ureal        Creates an uncertain real parameter
ultidyn      Creates an uncertain linear time-invariant object
ucomplex     Creates an uncertain complex parameter
ucomplexm    Creates an uncertain complex matrix
usample      Generates random samples of an uncertain object
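A minimal sketch, not from the original manual, of the Uncertainty gateway described above; the atom name and the numeric values here are assumptions.

```matlab
% Modify an atom's uncertainty description in place through the gateway.
a = ureal('a',3);
M = [a 2*a+1];
M.Uncertainty.a.Range = [1 6];   % change a's range inside M
M.NominalValue                   % the nominal value is unaffected by the range change
```

Setting a property through M.Uncertainty changes only the description of the atom's uncertainty; the nominal value of M stays the same.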
uplot

Purpose
Plot multiple frequency response objects and doubles on the same graph

Syntax
uplot(G1)
uplot(G1,G2)
uplot(G1,linetype)
uplot(G1,linetype,G2,linetype,...)
uplot(type,G1,G2,...)
uplot(G1,Xdata,Ydata,linetype,...)
H = uplot(G1)
H = uplot(G1,G2)
H = uplot(G1,linetype)
H = uplot(G1,linetype,G2,linetype,...)
H = uplot(G1,Xdata,Ydata,linetype,...)

Description
uplot plots double and frd objects. The syntax is the same as the MATLAB plot command except that all data is contained in frd objects, and the axes are specified by type.

The (optional) type argument must be one of:

Type        Description
'iv,d'      data versus independent variable (default)
'iv,m'      magnitude versus independent variable
'iv,lm'     log(magnitude) versus independent variable
'iv,p'      phase versus independent variable
'liv,d'     data versus log(independent variable)
'liv,m'     magnitude versus log(independent variable)
'liv,lm'    log(magnitude) versus log(independent variable)
'liv,p'     phase versus log(independent variable)
'ri'        real versus imaginary (parametrized by independent variable)
'nyq'       real versus imaginary (parametrized by independent variable)
'nic'       Nichols plot
'bode'      Bode magnitude and phase plot

The remaining arguments of uplot take the same form as the MATLAB plot command. Line types (for example, 'g.', '+', or '*r') can be optionally specified after any frequency response argument.

There is a subtle distinction between constants and frd objects with only one independent variable. A constant is treated as such across all frequency, and consequently shows up as a line on any graph with the independent variable as an axis. An frd object with only one frequency point will always show up as a point. You may need to specify one of the more obvious point types in order to see it (e.g., '+', 'x', etc.).

Example
Two SISO second-order systems are created, and their frequency responses are calculated for each over different frequency ranges.

a1 = [-1 1;-1 -0.5];
b1 = [1;0];
c1 = [1 0.1];
d1 = 0;
sys1 = ss(a1,b1,c1,d1);
omega = logspace(-2,2,100);
sys1g = frd(sys1,omega);
a2 = [-0.1 1;-1 -0.05];
b2 = [1;2];
c2 = [0.5 0.1];
d2 = 0.05;
sys2 = ss(a2,b2,c2,d2);
omega2 = [[0.05:0.1:1.5] [1.6:.5:20] [0.9:0.01:1.1]];
omega2 = sort(omega2);
sys2g = frd(sys2,omega2);
An frd object with a single frequency is also created. Note the distinction between the frd object and the constant matrix in the subsequent plots.

sys3 = rss(1,1,1);
rspot = frd(sys3,2);

uplot('liv,lm',sys1g,'b.',rspot,'r*',sys2g)
xlabel('log independent variable')
ylabel('log magnitude')
title('axis specification: liv,lm')

[Figure: log magnitude versus log independent variable, titled "axis specification: liv,lm"]

See Also
bode        Plots Bode frequency response
plot        Plots on linear axis
nichols     Plots Nichols frequency response
nyquist     Plots Nyquist frequency response
semilogx    Plots semilog scale plot
semilogy    Plots semilog scale plot
sigma       Plots singular values of an LTI system
ureal

Purpose
Create an uncertain real parameter.

Syntax
p = ureal('name',nominalvalue)
p = ureal('name',nominalvalue,'Property1',Value1,'Property2',Value2,...)

Description
An uncertain real parameter is used to represent a real number whose value is uncertain. Uncertain real parameters have a name (the Name property) and a nominal value (the NominalValue property).

The uncertainty (potential deviation from NominalValue) is described (equivalently) in 3 different properties:
• PlusMinus: the additive deviation from NominalValue
• Range: the interval containing NominalValue
• Percentage: the percentage deviation from NominalValue
The range of uncertainty need not be symmetric about NominalValue.

The Mode property specifies which one of these 3 descriptions remains unchanged if the NominalValue is changed (the other two descriptions are derived). The possible values for the Mode property are 'Range', 'Percentage' and 'PlusMinus'. The default Mode is 'PlusMinus', and [-1 1] is the default value for the 'PlusMinus' property.

The property AutoSimplify controls how expressions involving the uncertain matrix are simplified. Its default value is 'basic', which means elementary methods of simplification are applied as operations are completed. Other values for AutoSimplify are 'off', no simplification performed, and 'full', which applies model-reduction-like techniques to the uncertain object.

Example
Example 1
Create an uncertain real parameter and use get to display the properties and their values. Create uncertain real parameter object a with the internal name 'a' and nominal value 5.

a = ureal('a',5)
Uncertain Real Parameter: Name a, NominalValue 5, variability = [-1 1]
get(a)
            Name: 'a'
    NominalValue: 5
            Mode: 'PlusMinus'
           Range: [4 6]
       PlusMinus: [-1 1]
      Percentage: [-20 20]
    AutoSimplify: 'basic'

Note that the Mode is 'PlusMinus', and that the value of PlusMinus is indeed [-1 1]. As expected, the range description of uncertainty is [4 6], while the percentage description of uncertainty is [-20 20].

Set the range to [3 9]. This leaves Mode and NominalValue unchanged, but all three descriptions of uncertainty have been modified.

a.Range = [3 9];
get(a)
            Name: 'a'
    NominalValue: 5
            Mode: 'PlusMinus'
           Range: [3 9]
       PlusMinus: [-2 4]
      Percentage: [-40 80]
    AutoSimplify: 'basic'

Example 2
Property/Value pairs may also be specified upon creation.

b = ureal('b',6,'Percentage',[-30 40],'AutoSimplify','full');
get(b)
            Name: 'b'
    NominalValue: 6
            Mode: 'Percentage'
           Range: [4.2000 8.4000]
       PlusMinus: [-1.8000 2.4000]
      Percentage: [-30.0000 40.0000]
    AutoSimplify: 'full'

Note that Mode is automatically set to 'Percentage'.
Example 3
Specify the uncertainty in terms of percentage, but force Mode to 'Range'.

c = ureal('c',4,'Percentage',25,'Mode','Range');
get(c)
            Name: 'c'
    NominalValue: 4
            Mode: 'Range'
           Range: [3 5]
       PlusMinus: [-1 1]
      Percentage: [-25 25]
    AutoSimplify: 'basic'

See Also
ucomplex    Creates an uncertain complex parameter
umat        Creates an uncertain matrix
uss         Creates an uncertain, linear dynamic object
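The Mode semantics above can be summarized in a short sketch; this fragment is not from the original manual, and the parameter names and numbers are illustrative only.

```matlab
% Mode fixes one uncertainty description when NominalValue changes.
p = ureal('p',5);            % default Mode 'PlusMinus', PlusMinus [-1 1]
p.NominalValue = 8;          % PlusMinus stays [-1 1]; Range is re-derived to [7 9]
q = ureal('q',4,'Percentage',25,'Mode','Range');
q.NominalValue = 4.5;        % Range stays fixed at [3 5]; the other two re-derive
```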
usample

Purpose
Generate random samples of an uncertain object.

Syntax
B = usample(A)
B = usample(A,N)
[B,SampleValues] = usample(A)
[B,SampleValues] = usample(A,N)
B = usample(A,Names)
B = usample(A,Names,N)
[B,SampleValues] = usample(A,Names,N)
[B,SampleValues] = usample(A,Names1,N1,Names2,N2,...)

Description
B = usample(A) substitutes a random sample of the uncertain objects in A, returning a certain (i.e., not uncertain) array of size [size(A)].

B = usample(A,N) substitutes N random samples of the uncertain objects in A, returning a certain (i.e., not uncertain) array of size [size(A) N].

[B,SampleValues] = usample(A,N) additionally returns the specific sampled values (as a struct whose fieldnames are the names of A's uncertain elements) of the uncertain elements. Hence, B is the same as usubs(A,SampleValues).

B = usample(A,Names,N) samples only the uncertain elements listed in the Names variable (cell, or char array). If Names does not include all of the uncertain objects in A, then B will be an uncertain object. Any entries of Names that are not elements of A are simply ignored. Note that usample(A,fieldnames(A.Uncertainty),N) is the same as usample(A,N).

[B,SampleValues] = usample(A,Names1,N1,Names2,N2,...) takes N1 samples of the uncertain elements listed in Names1, and N2 samples of the uncertain elements listed in Names2, and so on. size(B) will equal [size(A) N1 N2 ...].

Example
Sample a real parameter, and plot a histogram.

A = ureal('A',5);
Asample = usample(A,500);
size(A)
ans =
     1     1
size(Asample)
ans =
     1     1   500
class(Asample)
ans =
double
hist(Asample(:))

The second example illustrates the open- and closed-loop response of an uncertain plant model. You can create two uncertain real parameters, and an uncertain plant.

tau = ureal('tau',.5,'Percentage',30);
gamma = ureal('gamma',4);
P = tf(gamma,[tau 1]);

Create an integral controller based on nominal plant parameters.

KI = 1/(2*tau.Nominal*gamma.Nominal);
C = tf(KI,[1 0]);

Create the uncertain closed-loop system.

CLP = feedback(P*C,1);

You can sample the plant at 20 values (distributed uniformly about the tau and gamma parameter cube).

[Psample1D,Values1D] = usample(P,20);
size(Psample1D)
20x1 array of state-space models
Each model has 1 output, 1 input, and 1 state.

You can plot the 1D sampled plant step responses.

subplot(2,1,1)
step(Psample1D)

You can also evaluate the uncertain closed-loop at the same values, and plot the step response using usubs.

subplot(2,1,2)
step(usubs(CLP,Values1D))

[Figure: step responses of the sampled open-loop plant (top) and the sampled closed-loop system (bottom), Amplitude versus Time (sec)]

You can sample the plant at 10 values in the tau parameter and 15 values in the gamma parameter.

[Psample2D,Values2D] = usample(P,'tau',10,'gamma',15);
size(Psample2D)
10x15 array of state-space models
Each model has 1 output, 1 input, and 1 state.

See Also
usubs    Substitutes values for uncertain atoms
uss

Purpose
Specify uncertain state-space models, or convert an LTI model to an uncertain state-space model.

Syntax
usys = uss(a,b,c,d)
usys = uss(a,b,c,d,Ts)
usys = uss(d)
usys = uss(a,b,c,d,'Property',Value,...)
usys = uss(a,b,c,d,Ts,'Property',Value,...)
usys = uss(sys)

Description
uss is used to create uncertain state-space models (uss objects) or to convert LTI models to the uss class.

usys = uss(a,b,c,d) creates a continuous-time uncertain state-space object. The matrices a, b, c and d may be umat and/or double and/or uncertain atoms. These are the 4 matrices associated with the linear differential equation model to describe the system.

usys = uss(a,b,c,d,Ts) creates a discrete-time uncertain state-space object with sampling time Ts.

usys = uss(d) specifies a static gain matrix, and is equivalent to usys = uss([],[],[],d).

Any of these syntaxes can be followed by property name/property value pairs. usys = uss(a,b,c,d,'P1',V1,'P2',V2,...) sets the properties P1, P2, ... to the values V1, V2, ...

usys = uss(sys) converts an arbitrary ss or tf or zpk model sys to an uncertain state-space object without uncertainties. Both usys.NominalValue and simplify(usys,'class') are the same as ss(sys).

Example
You can first create 2 uncertain atoms and use them to create two uncertain matrices. These 4 matrices can be packed together to form a 1-output, 1-input, 2-state continuous-time uncertain state-space system.

p1 = ureal('p1',5,'Range',[2 6]);
p2 = ureal('p2',3,'Plusminus',0.4);
A = [-p1 0;p2 -p1];
B = [0;p2];
C = [1 1];
usys = uss(A,B,C,0);

In the second example, you can convert a not-uncertain tf model to an uncertain state-space model without uncertainties. Verify the equality of the nominal value and the simplified representation to the original system.

G = tf([1 2 3],[1 2 3 4]);
usys = uss(G)
USS: 3 States, 1 Output, 1 Input, Continuous System
isequal(usys.NominalValue,ss(G))
ans =
     1
isequal(simplify(usys,'class'),ss(G))
ans =
     1

See Also
frd    Creates or converts to frequency response data model
ss     Creates or converts to state-space model
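The discrete-time syntax uss(a,b,c,d,Ts) is described above but not exercised in the example; the following is a hedged sketch with assumed parameter values.

```matlab
% A discrete-time uncertain state-space model with sampling time 0.1 s:
%   x[k+1] = p*x[k] + u[k],   y[k] = x[k]
p = ureal('p',0.5,'Plusminus',0.2);
usysd = uss(p,1,1,0,0.1);
```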
usubs

Purpose
Substitute given values for uncertain elements of uncertain objects

Syntax
B = usubs(M,ElementName1,value1,ElementName2,value2,...)
B = usubs(M,{ElementName1;ElementName2;...},{value1;value2;...})
B = usubs(M,StrucArray)

Description
usubs is used to substitute a specific value for an uncertain element of an uncertain object. The value may itself be uncertain. It needs to be the correct size, but otherwise can be of any class, and can be an array. Hence, the result may be of any class. In this manner, uncertain elements act as symbolic placeholders, for which specific values (which may contain other placeholders too) can be substituted.

B = usubs(M,ElementName1,value1,ElementName2,value2,...) sets the elements in M, identified by ElementName1, ElementName2, etc., to the values in value1, value2, etc., respectively.

value can also be the strings 'NominalValue' or 'Random' (or specified only partially), in which case the nominal value, or a random instance of the element, is used.

The names and values can also be grouped in cell arrays, as B = usubs(M,{ElementName1;ElementName2;...},{value1;value2;...}). In this case, if the value cell is 1-by-1, then that value is substituted for all of the listed elements. For this situation, it is not required that value be in a cell array. Combinations of the above syntaxes are also allowed.

The names and values can also be grouped in a structure. In the function call B = usubs(M,StrucArray), StrucArray is a structure with fieldnames and values, with its fieldnames constituting the Names, and the field values constituting the Values.
Robustness analysis commands, such as wcnorm, wcgain and robuststab, return the offending uncertain element values in this manner. usample, which randomly samples uncertain objects, also returns the sample points in this manner.

Example
Create an uncertain matrix, and perform identical substitution in two different manners.

p = ureal('p',3);
m = [1 p;p^2 4];
size(m)
ans =
     2     2
m1 = usubs(m,'p',5)
m1 =
     1     5
    25     4
NamesValues.p = 5;
m2 = usubs(m,NamesValues)
m2 =
     1     5
    25     4
m1 - m2
ans =
     0     0
     0     0

You can make an array-valued substitution using the structure-based syntax.

NamesValues.p = rand(1,1,6);
m3 = usubs(m,NamesValues);   % 2-by-2-by-6
size(m3)
ans =
     2     2     6

You can use usubs to substitute for individual uncertainties. Create 3 uncertain real parameters, and form a simple 2-by-2 uncertain matrix with the parameters.

a = ureal('a',4);
b = ureal('b',5);
c = ureal('c',6);
m = [a b;c a*b*c];

You can perform a single parameter substitution, and check the results.

m1 = usubs(m,'a',10);
simplify(m1(1,1))
ans =
    10
simplify(10*m1(1,2)*m1(2,1) - m1(2,2))
ans =
     0

You can substitute one real parameter with a transfer function, and other parameters with doubles. You can do this in two different manners, and check that the results are identical.

m2 = usubs(m,'a',tf([5],[1 1]),'b',2,'c',1.3);
nv.a = tf([5],[1 1]);
nv.b = 2;
nv.c = 1.3;
m3 = usubs(m,nv);
norm(m2-m3,'inf')
ans =
     0

In m, substitute 'a' with 'b'. You can do this using two different forms of the syntax, obtaining 'b' directly from m.

m4 = usubs(m,'a',b);
m5 = usubs(m,'a',m.Uncertainty.b);

See Also
gridureal    Grids uncertain real parameters over their range
usample      Generates random samples of an atom
simplify     Simplifies representation of uncertain objects
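The 'NominalValue' and 'Random' string values described above can be illustrated with a minimal sketch; this fragment is not from the original manual, and the element names and nominal values are assumptions.

```matlab
% Strings select the nominal value or a random instance of an element.
a = ureal('a',3);
b = ureal('b',10);
m = [a b];
mnom = usubs(m,'a','NominalValue');   % a replaced by 3; result still uncertain in b
mrnd = usubs(m,'b','Random');         % b replaced by a random sample
```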
wcgain

Purpose
Calculates bounds on the worst-case gain of an uncertain system.

Syntax
[maxgain,wcu,info] = wcgain(sys)
[maxgain,wcu,info] = wcgain(sys,opts)

Description
The gain of an uncertain system will generally depend on the values of its uncertain elements. Here "gain" refers to the frequency response magnitude. Determining the maximum gain over all allowable values of the uncertain elements is referred to as a worst-case gain analysis. This maximum gain is called the worst-case gain.

The following figure shows the frequency response magnitude of many samples of an uncertain system model.

[Figure: Magnitude versus Frequency for various sample responses and the nominal response]

wcgain can perform two types of analysis on uncertain systems.
During such 10349 .wcgain • A pointwiseinfrequency worstgain analysis yields the frequencydependent curve of maximum gain. • A peakoverfrequency worstgain analysis only aims to compute the largest value of frequencyresponse magnitude across all frequencies. shown in the figure below. 10 1 Maximum Gain pointwise across frequency 10 0 Magnitude 10 −1 WorstCase gain degradation from nominal 10 −2 10 −1 10 0 10 1 10 2 10 3 Frequency This plot shows the maximum frequencyresponse magnitude at each frequency due to the uncertain elements within the model.
wcgain an analysis. only bounds on the worstcase gain are computed. peak−across−frequency 10 0 Magnitude 10 −1 Nominal 10 −2 10 −1 10 0 10 1 10 2 10 3 Frequency The default analysis performed by wcgain is (peakoverfrequency). Basic syntax Suppose sys is an ufrd or uss with M uncertain elements. The exact value of the worstcase gain is guaranteed to lie between these upper and lower bounds. and the analysis performed on that frequency grid. thus reducing the computation time. multioutput systems. If the input system sys is an uncertain state space object (uss). then the analysis is performed on the frequency grid within the ufrd. the gain is the maximum singular value of the frequency response matrix. large frequency ranges can be quickly eliminated from consideration. In all descriptions below. then an appropriate frequency grid is generated (automatically). The results of 10350 . As with other uncertainsystem analysis tools. 10 1 WorstCase Gain. The computation used in wcgain is a frequencydomain calculation. You can control which analysis is performed using the wcgopt options object. N denotes the number of points in the frequency grid. If the input system sys is an uncertain frequency response object (ufrd). For multiinput.
10351 . and include additive unmodeled dynamics uncertainty of a level of 0.[1 0]) + ultidyn('delta'.maxgainunc).P*K1). S1 = feedback(1. Repeat the design for a controller K2 which puts the nominal closedloop bandwidth at 2. Design a “proportional” controller K1 which puts the nominal closedloop bandwidth at 0.4 (this corresponds to 100% model uncertainty at 2.5 rad/s).'bound'.8 rad/sec.0 rad/sec. which are the names of uncertain elements of sys. positive scalar.8. such that when jointly combined.[1/(25*BW1) 1]). If the nominal value of the uncertain system is unstable. Rolloff K1 at a frequency 25 times the nominal closedloop bandwidth.LowerBound.LowerBound).'inf') shows the gain. Upper bound on worstcase gain. CriticalFrequency maxgainunc is a structure containing values of uncertain elements which maximize the system gain. form the closedloop sensitivity function. P = tf(1. then maxgain. lead to the gain value in maxgain.UpperBound equal ∞.maxgainunc] = wcgain(sys) maxgain is a structure with the following fields Field LowerBound UpperBound Description Lower bound on worstcase gain. K1 = tf(BW1. The value of each field is the corresponding value of the uncertain element. Example Create a plant with nominal model of an integrator.LowerBound and maxgain.wcgain [maxgain.4). The command norm(usubs(sys. positive scalar. There are M fieldnames. In each case.0. The critical value of frequency at which maximum gain occurs (this is associated with maxgain.[1 1]. BW1 = 0.
BW2 = 2;
K2 = tf(BW2,[1/(25*BW2) 1]);
S2 = feedback(1,P*K2);

Assess the worst-case gain of the closed-loop sensitivity functions.

[maxgain1,wcunc1] = wcgain(S1);
[maxgain2,wcunc2] = wcgain(S2);
maxgain1
maxgain1 =
           LowerBound: 1.5070e+000
           UpperBound: 1.5080e+000
    CriticalFrequency: 1.3096e+000
maxgain2
maxgain2 =
           LowerBound: 5.1024e+000
           UpperBound: 5.1034e+000
    CriticalFrequency: 1.0215e+001

The maxgain variables indicate that controller K1 achieves better worst-case performance than K2. Plot Bode magnitude plots of the nominal closed-loop sensitivity functions, as well as the "worst" instances, using usubs to replace the uncertain element with the worst value returned by wcgain.

bodemag(S1.Nom,'r',usubs(S1,wcunc1),'r',...
S2.Nom,'b',usubs(S2,wcunc2),'b')

[Figure: Bode magnitude plot showing Nominal S1, Worst-Case S1, Nominal S2, and Worst-Case S2]

Note that although the nominal closed-loop sensitivity resulting from K2 is superior to that with K1, the worst-case behavior is much worse.

Basic syntax with 3rd output argument
A 3rd output argument yields more specialized information, including sensitivities of the worst-case gain to the uncertain elements' ranges and frequency-by-frequency information.

[maxgain,maxgainunc,info] = wcgain(sys)
The 3rd output argument info is a structure with the following fields:

Field         Description
Sensitivity   A struct with M fields; fieldnames are names of uncertain elements of sys. Values of fields are positive numbers, each entry indicating the local sensitivity of the worst-case gain in maxgain.LowerBound to the individual uncertain element's uncertainty range. For instance, a value of 25 indicates that if the uncertainty range is enlarged by 8%, then the worst-case gain should increase by about 2%. If the Sensitivity property of the wcgopt object is 'off', the values are NaN.
Frequency     N×1 frequency vector associated with analysis.
ArrayIndex    1-by-1 scalar matrix, whose value is 1. In more complicated situations (described later) the value of this field will be dependent on the input data.

Options (e.g., turning on/off the sensitivity computation, setting the "step size" in the sensitivity computation, adjusting the stopping criteria, and controlling behavior across frequency and array dimensions) can be specified using the worst-case gain analysis options object wcgopt. For instance, you can turn the sensitivity calculation off by executing

opt = wcgopt('Sensitivity','off');
[maxgain,maxgainunc,info] = wcgain(sys,opt);

Advanced options: Pointwise-in-Frequency Calculations
It is also possible to perform the computation pointwise-in-frequency, determining the worst-case gain at each and every frequency point. To do this, the wcgopt options object must be used.

opt = wcgopt('FreqPtWise',1);
[maxgain,maxgainunc,info] = wcgain(sys,opt);
As the calculation is pointwise-in-frequency, many results are N×1 cell arrays.

maxgain is a structure with the following fields:

Field               Description
LowerBound          Lower bound on worst-case gain (frd with N frequency points).
UpperBound          Upper bound on worst-case gain (frd with N frequency points).
CriticalFrequency   Scalar; the critical value of frequency at which the maximum gain occurs (this is associated with norm(maxgain.LowerBound,inf)).

If the nominal value of the uncertain system is unstable, then maxgain.LowerBound and maxgain.UpperBound equal ∞.

maxgainunc is an N×1 cell array of values of uncertain elements which maximize the system gain. Each entry of the cell array is a struct, whose M fieldnames are the names of uncertain elements of sys. The maximum singular value of usubs(sys,maxgainunc{k}) at the k'th frequency (in info.Frequency(k)) is equal to maxgain.LowerBound{k}.

info is a structure with the following fields:

Field         Description
Sensitivity   N×1 cell; each entry is a struct, corresponding to the sensitivities of the worst-case gain at each individual frequency.
Frequency     N×1 frequency vector associated with analysis.
ArrayIndex    N×1 cell array, often containing scalar information relevant to each particular frequency; each value is the 1-by-1 matrix whose numerical value is 1. In more complicated situations (described later) the value of this field will be dependent on the input data.
Advanced options: Handling array dimensions
If sys has array dimensions, the default behavior is to maximize over all of these dimensions as well. This can be controlled though, and it is also possible to perform the computation pointwise-in-the-array-dimensions, determining the worst-case gain at each and every grid point. For concreteness, suppose that sys is an r×c×7×5×8 uncertain system (i.e., a 7-by-5-by-8 array of uncertain r-output, c-input systems). In order to perform the worst-case gain calculation pointwise over the 2nd and 3rd array dimensions (the slot with 5 points and the slot with 8 points), set the ArrayDimPtWise property as follows.

opt = wcgopt('ArrayDimPtWise',[2 3]);

In this case, the worst-case gain calculation is performed "pointwise" on the 5-by-8 grid, but only the "peak value" over the first array dimension (the slot with 7 points) is kept track of. In general, any combination of "peak-over" and "pointwise-over" is allowed. To specify the desired computation, the wcgopt must be used.

Assume FreqPtWise is set to 'off' (we'll return to that case later below). In general, suppose that the array dimensions of sys are d1×…×dF (7×5×8 in the above example). Furthermore, assume that the ArrayDimPtWise property of the wcgopt object has been set to some of the integers between 1 and F. Let e1,e2,…,eF denote the dimensions of the array on which the results are computed. By definition, if j is an integer listed in ArrayDimPtWise, then ej=dj (all grid points in slot j are computed), otherwise ej=1 (only the maximum in slot j is computed). In the above example, with ArrayDimPtWise set to [2 3], it follows that e1=1, e2=5, e3=8. Moreover, many of the results will be of dimension 1-by-5-by-8. In this case, the results of

[maxgain,maxgainunc,info] = wcgain(sys,opt)
are as follows. maxgain is a structure with the following fields:

Field               Description
LowerBound          1-by-1 frd, with array dimensions e1×…×eF; lower bound on worst-case gain, computed pointwise over all array dimensions listed in the ArrayDimPtWise property, and "peaked" over all others.
UpperBound          Upper bound, analogous to LowerBound.
CriticalFrequency   e1×…×eF array with the critical value of frequency at which the maximum gain occurs (this is associated with maxgain.LowerBound).

maxgainunc is an e1×…×eF struct, containing values of uncertain elements which maximize the system gain, computed pointwise over all array dimensions listed in the ArrayDimPtWise property, and "peaked" over all others. There are M fieldnames, which are the names of uncertain elements of sys. The value of each field is the corresponding value of the uncertain element, such that when jointly combined, they lead to the gain value in maxgain.LowerBound. The command norm(usubs(sys,maxgainunc),'inf') shows the gain, and should be identical to maxgain.LowerBound (to within the tolerance used in norm).

info is a structure with the following fields:

Field         Description
Sensitivity   e1×…×eF struct array; each entry is the local sensitivity of the worst-case gain in maxgain.LowerBound to the individual uncertain elements' uncertainty ranges.
Frequency     N×1 frequency vector associated with analysis.
ArrayIndex    At each value in the e1×…×eF grid, there is a corresponding value in the d1×…×dF grid where the maximum occurs. The variable info.ArrayIndex is an e1×…×eF matrix whose value is the single-index representation of the maximizing location in the d1×…×dF grid.
Advanced options: Array dimension handling with FreqPtWise set to 'on'

The final case involves array dimensions and pointwise-in-frequency calculations. Again, suppose that the array dimensions of sys are d1-by-...-by-dF, and assume that the ArrayDimPtWise property of the wcgopt object has been set to some of the integers between 1 and F. Let e1,...,eF denote the dimensions of the array on which the results are computed. Because the calculation is pointwise in frequency, many results are N-by-1 cell arrays, often containing e1-by-...-by-eF arrays in each cell.

maxgain is a structure with the following fields:

Field               Description
LowerBound          N-by-1 cell array. maxgain.LowerBound{k} is a 1-by-1 frd with array dimensions e1-by-...-by-eF, and is a lower bound on the worst-case gain at frequency info.Frequency(k), computed pointwise over all array dimensions listed in the ArrayDimPtWise property, and "peaked" over all others.
UpperBound          Upper bound on the worst-case gain, analogous to maxgain.LowerBound.
CriticalFrequency   e1-by-...-by-eF array with the critical value of frequency at which the maximum gain (pointwise over all array dimensions listed in the ArrayDimPtWise property, and "peaked" over all others) occurs.

maxgainunc is an N-by-1 cell array. The k'th entry is an e1-by-...-by-eF struct array, containing values of uncertain elements that maximize the system gain at frequency info.Frequency(k).
info is a structure with the following fields:

Field          Description
Sensitivity    N-by-1 cell array. Each entry is the e1-by-...-by-eF struct array that holds the local sensitivity of the worst-case gain at one frequency to each of the individual uncertain elements' uncertainty ranges.
Frequency      N-by-1 frequency vector associated with the analysis.
ArrayIndex     N-by-1 cell array. The k'th entry is an e1-by-...-by-eF matrix whose value is the single-index representation of the maximizing location in the d1-by-...-by-dF grid. At each value in the e1-by-...-by-eF grid, there is a corresponding value in the d1-by-...-by-dF grid where the maximum occurs.

Behavior on not-uncertain systems

wcgain can also be used on not-uncertain systems (e.g., ss and frd objects). If sys is a single ss or frd model, then the worst-case gain is simply the gain of the system (identical to norm(sys,'inf')). However, if sys has array dimensions, then the possible combinations of "peak-over" and "pointwise-over" can be used to customize the computation.

Algorithm

The worst-case gain is guaranteed to be at least as large as LowerBound: some value of the allowable uncertain elements yields this gain, and one such instance is returned in the structure maxgainunc. Similarly, the worst-case gain is guaranteed to be no larger than UpperBound. In other words, for all allowable modeled uncertainty, the gain is provably less than or equal to UpperBound.

Lower bounds for wcgain are computed using a power iteration on the ultidyn, ucomplex, and ucomplexm uncertain atoms (holding the uncertain real parameters fixed), together with a coordinate-aligned search on the uncertain real parameters (holding the complex blocks fixed). The frequency at which the gain in LowerBound occurs is returned in CriticalFrequency. Upper bounds are obtained by solving a semidefinite program. These bounds are derived using an upper bound for the structured singular value, which is essentially an optimally scaled small-gain theorem analysis. In addition, wcgain uses branch and bound on the uncertain real parameters to tighten the lower and upper bounds.

Limitations

Because the calculation is carried out with a frequency gridding, it is possible (indeed likely) that the true critical frequency is missing from the frequency vector used in the analysis. This is similar to the problem in robuststab. However, the problem is less acute in worst-case gain calculations: thought of as a function of problem data and frequency, the worst-case gain is a continuous function (unlike the robust stability margin, which in special cases is not -- see the section entitled "Regularizing Robust Stability Calculations with Only ureal Uncertain Elements" in the online documentation). Hence, in comparison with robuststab, increasing the density of the frequency grid will always increase the accuracy of the answers, and in the limit, answers arbitrarily close to the actual answers are obtainable with finite frequency grids.

See Also

loopmargin   Comprehensive analysis of feedback loops
mussv        Calculate bounds on the structured singular value (mu)
norm         System norm of an LTI object
robuststab   Calculate stability margins of uncertain systems
wcgopt       Create a wcgain options object
wcsens       Calculate worst-case sensitivities for a feedback loop
wcmargin     Calculate worst-case margins for a feedback loop
wcgopt

Purpose    Create an options object for use with wcgain, wcsens, and wcmargin

Syntax     options = wcgopt
           options = wcgopt('name1',value1,'name2',value2,...)

Description

options = wcgopt (with no input arguments) creates an options object with all the properties set to their default values.

options = wcgopt('name1',value1,'name2',value2,...) creates a wcgain, wcsens, and wcmargin options object called options in which the specified properties have the specified values. Any unspecified property is set to its default value. It is sufficient to type only enough leading characters to define the property name uniquely; case is ignored for property names.

wcgopt with no input or output arguments displays a complete list of option properties and their default values.

The following are the wcgopt object properties:

Object Property   Description
Sensitivity       Compute the margin sensitivity to the individual uncertainties. Default is 'on'.
LowerBoundOnly    If LowerBoundOnly is 'on', then only the lower bound computation is performed. The default value is 'off', which implies that both upper and lower bounds for the worst-case gain are computed.
FreqPtWise        Apply the stopping criteria based on the upper and lower bounds (described below) at every frequency point, as opposed to just the peak value. FreqPtWise=1 activates the pointwise criteria; to compute only the peak value to within tolerance, use 0. Default = 0.
ArrayDimPtWise    Relevant for uss/ufrd/ss/frd arrays. For indices specified in ArrayDimPtWise, the stopping criteria based on the upper and lower bounds (described below) are applied at every point in the array dimensions specified in ArrayDimPtWise. Default = [].
Default           Structure with the default values of all wcgopt properties.

If FreqPtWise==0, the computation terminates when any one of the following four conditions is true:
1  PeakUpperBound - PeakLowerBound <= AbsTol
2  PeakUpperBound - PeakLowerBound <= RelTol*PeakUpperBound
3  at every frequency: UpperBound <= AGoodThreshold + MGoodThreshold*Norm(NominalValue)
4  at some frequency: LowerBound >= ABadThreshold + MBadThreshold*Norm(NominalValue)

If FreqPtWise==1, the computation terminates when at least one of the following four conditions is true at every frequency:
-  UpperBound - LowerBound <= AbsTol
-  UpperBound - LowerBound <= RelTol*UpperBound
-  UpperBound <= AGoodThreshold + MGoodThreshold*Norm(NominalValue)
-  LowerBound >= ABadThreshold + MBadThreshold*Norm(NominalValue)

In both situations above, for array dimensions not listed in ArrayDimPtWise, UpperBound and LowerBound are the peak values over those array dimensions; for indices specified in ArrayDimPtWise, the stopping condition is applied at every point in those array dimensions.
Object Property   Description
Meaning           Structure; field names are the wcgopt properties, and values are text descriptions of each property.
VaryUncertainty   Percentage variation of the uncertainty used as a step size in finite-difference calculations to estimate sensitivity. Default is 25.
AbsTol            Upper and lower absolute stopping tolerance. Default = 0.02.
RelTol            Upper and lower relative stopping tolerance. Default = 0.05.
MGoodThreshold    Multiplicative (UpperBound) stopping threshold. Default = 1.04.
AGoodThreshold    Additive (UpperBound) stopping threshold. Default = 0.05.
MBadThreshold     Multiplicative (LowerBound) stopping threshold. Default = 20.
ABadThreshold     Additive (LowerBound) stopping threshold. Default = 5.
NTimes            Number of restarts in the lower bound search (positive integer). Default = 2.
MaxCnt            Number of cycles in the lower bound search (positive integer). Default = 3.
MaxTime           Maximum computation time allowed (in seconds). The computation is prematurely terminated if this much real time elapses before the computation is complete; all quantities that have been computed are returned. Default = 720.
Example

You can create a wcgopt options object called opt with all default values:

   opt = wcgopt

   Property Object Values:
           Sensitivity: 'on'
        LowerBoundOnly: 'off'
            FreqPtWise: 0
        ArrayDimPtWise: []
       VaryUncertainty: 25
               Default: [1x1 struct]
               Meaning: [1x1 struct]
                AbsTol: 0.0200
                RelTol: 0.0500
        MGoodThreshold: 1.0400
        AGoodThreshold: 0.0500
         MBadThreshold: 20
         ABadThreshold: 5
                NTimes: 2
                MaxCnt: 3
               MaxTime: 720

The following statements change the absolute-tolerance stopping criterion from 0.02 to 0.04, and change the test from the peak worst-case value to the worst-case value at every frequency:

   opt.AbsTol = 0.04;
   opt.FreqPtWise = 1;
   opt

   Property Object Values:
           Sensitivity: 'on'
        LowerBoundOnly: 'off'
            FreqPtWise: 1
        ArrayDimPtWise: []
       VaryUncertainty: 25
               Default: [1x1 struct]
               Meaning: [1x1 struct]
                AbsTol: 0.0400
                RelTol: 0.0500
        MGoodThreshold: 1.0400
        AGoodThreshold: 0.0500
         MBadThreshold: 20
         ABadThreshold: 5
                NTimes: 2
                MaxCnt: 3
               MaxTime: 720

This statement makes a single call to wcgopt to set the maximum computation time to 10000 seconds and to disable the Sensitivity calculation:

   opt = wcgopt('MaxTime',10000,'Sensitivity','off');

See Also

dkitopt      Create a dksyn options object
robopt       Create a robustperf/robuststab options object
wcgain       Calculate worst-case gain of a system
wcnorm       Calculate worst-case norm of a matrix
wcsens       Calculate worst-case sensitivities for a feedback loop
wcmargin     Calculate worst-case margins for a feedback loop
wcmargin

Purpose    Worst-case disk gain/phase margins for a plant-controller feedback loop

Syntax     wcmargi = wcmargin(L)
           [wcmargi,wcmargo] = wcmargin(P,C)
           wcmargi = wcmargin(L,opt)
           [wcmargi,wcmargo] = wcmargin(P,C,opt)

Description

Classical gain and phase margins define the loop-at-a-time allowable, independent variations in the nominal system gain and phase for which the feedback loop retains stability. An alternative to classical gain and phase margins is the disk margin. The disk margin calculates the largest region for each channel such that, for all gain and phase variations inside the region, the nominal closed-loop system is stable.

Consider a system with uncertain elements. It is of interest to determine the gain and phase margins of each individual channel in the presence of the uncertainty. These margins are called worst-case margins. The worst-case margin, wcmargin, calculates the largest disk margin such that, for all values of the uncertainty and all gain and phase variations inside the disk, the closed-loop system is stable. Hence, results from the worst-case margin calculation imply that the closed-loop system is stable for the given uncertainty set, and would remain stable in the presence of an additional gain and phase variation, within the margin, in the specified input/output channel. The worst-case gain and phase margin bounds are defined based on the balanced sensitivity function. See the dmplot and loopmargin Algorithm sections for more information.

[wcmargi,wcmargo] = wcmargin(L) calculates the combined worst-case input and output loop-at-a-time gain/phase margins of the feedback loop consisting of the loop transfer matrix L in negative feedback with an identity matrix. L must be an uncertain system, a uss or ufrd object. If L is a uss object, the frequency range and number of points used to calculate wcmargi and wcmargo are chosen automatically. Note that in this case, the worst-case margins at the input and output are equal, since an identity matrix is used in feedback.

[wcmargi,wcmargo] = wcmargin(P,C) calculates the combined worst-case input and output loop-at-a-time gain/phase margins of the feedback loop consisting of C in negative feedback with P. C should only be the compensator in the feedback path, without reference channels, if it is a 2-dof architecture. If P and C are ss/tf/zpk or uss objects, the frequency range and number of points used to calculate wcmargi and wcmargo are chosen automatically. The guaranteed bound is calculated based on the balanced sensitivity function.
That is, if the closed-loop system has a 2-dof architecture, the reference channel of the controller should be eliminated, resulting in a 1-dof architecture, as shown in the following figure.

[Figure: a 2-dof controller/plant feedback interconnection, and the equivalent 1-dof interconnection obtained by removing the reference channel. The original figure could not be reproduced here.]

Basic syntax

   [wcmargi,wcmargo] = wcmargin(L)
   [wcmargi,wcmargo] = wcmargin(P,C)

wcmargi and wcmargo are structures corresponding to the loop-at-a-time worst-case, single-loop gain and phase margin of each channel. For a single loop transfer matrix L of size N-by-N, wcmargi is an N-by-1 structure. For the case with two input arguments, the plant model, P, will have NY outputs and NU inputs, and hence the controller, C, must have NU outputs and NY inputs. Either P or C must be uncertain: a uss or ufrd system, or a umat uncertain matrix. In this case, wcmargi is an NU-by-1 structure with the following fields:

Field         Description
GainMargin    Guaranteed bound on the worst-case, single-loop gain margin at the plant input(s).
PhaseMargin   Guaranteed bound on the worst-case, single-loop phase margin at the plant input(s). Units are degrees.
Field         Description
Frequency     Frequency associated with the worst-case margin (rad/s).
Sensitivity   Struct with M fields; the field names are the names of the uncertain elements of P and C. Each entry indicates the local sensitivity of the worst-case margins to the uncertainty range of that individual uncertain element. Values of the fields are positive numbers; for instance, a value of 50 indicates that if the uncertainty range is enlarged by 8%, then the worst-case gain should increase by about 4%. If the Sensitivity property of the wcgopt object is 'off', the values are NaN.

wcmargo is an N-by-1 structure for the single loop-transfer-matrix input, and an NY-by-1 structure when the plant and controller are input. In both cases, wcmargo has the same fields as wcmargi.

[wcmargi,wcmargo] = wcmargin(L,opt) and [wcmargi,wcmargo] = wcmargin(P,C,opt) specify options described in opt. (See wcgopt for more details on the options for wcmargin.) The sensitivity of the worst-case margin calculations to the individual uncertain elements can be selected using the options object opt. To compute sensitivities, create a wcgopt options object and set the Sensitivity property to 'on':

   opt = wcgopt('Sensitivity','on');
   [wcmargi,wcmargo] = wcmargin(P,C,opt);

Example

MIMO Loop-at-a-Time Margins

This example is designed to illustrate that loop-at-a-time margins (gain, phase, and/or distance to -1) can be inaccurate measures of multivariable robustness margins. We will see that the margins of the individual loops may be very sensitive to small perturbations within other loops.
b.10 1].c. Gmod = (eye(2)+unmod)*Gunc. defined as 2 1 α(s + 1) .06. K = I G := .'Bound'.2). G.d). unmod = ultidyn('unmod'. d K 6.08).06]).6  G and K are 2 × 2 multiinput/multioutput (MIMO) systems. Gmodg = ufrd(Gmod. The nominal plant was analyzed previous using the loopmargin command.s – α 2 2 2 s + α –α ( s + 1 ) s – α2 Set α := 10. eye(2). ingain1 = ureal('ingain1'. ss(a.1.0 1].97 1.10 0]. Gunc = ss(a.3. the gain of the first input channel. Based on experimental data.97 and 1. zeros(2. The following statement generate the updated uncertain model. a b c d G K = = = = = = [0 10.d). [1 8. in statespace form and compute its frequency response.b. Due to differences between measured data and the plant model an 8% unmodeled dynamic uncertainty is added to the plant outputs. b = [ingain1 0. b(1.0.'Range'.1).wcmargin The nominal closedloop system considered here is shown as follows d G .60)). is found to vary between 0.logspace(1.c.0 1]. construct the nominal model.[2 2]. 10369 .[0. [1 2.
You can use the command wcmargin to determine the worst-case gain and phase margins in the presence of the uncertainty. The worst-case single-loop margin analysis performed using wcmargin results in a maximum allowable gain margin variation of 1.31 and phase margin variations of +/-15.6 degrees in the first input channel in the presence of the uncertainties. The worst-case analysis corresponds to the maximum allowable disk margin for all possible values in the defined uncertainty ranges.

   [wcmi,wcmo] = wcmargin(Gmodg,K);
   wcmi(1)
   ans =
        GainMargin: [0.7585 1.3185]
       PhaseMargin: [-15.6426 15.6426]
         Frequency: 0.1000
       Sensitivity: [1x1 struct]
   wcmi(2)
   ans =
        GainMargin: [0.3613 2.7681]
       PhaseMargin: [-50.2745 50.2745]
         Frequency: 0.1000
       Sensitivity: [1x1 struct]

Hence, even though the second channel nominally had infinite gain margin and 90 degrees of phase margin (see the loopmargin command page example for more details), allowing variation in both uncertainties, 'unmod' and 'ingain1', leads to a dramatic reduction in the gain and phase margin.

You can display the sensitivity of the worst-case margin in the second input channel to 'unmod' and 'ingain1' as follows:

   wcmi(2).Sensitivity
   ans =
       ingain1: 12.1865
         unmod: 290.4557

The results indicate that the worst-case margins are not very sensitive to the gain variation, 'ingain1', in the first input channel, but are very sensitive to the LTI dynamic uncertainty, 'unmod', at the output of the plant.
The worst-case single-loop margin analysis at the output results in a maximum allowable gain margin variation of 1.46 and phase margin variations of +/-21.3 degrees in the first output channel in the presence of the uncertainties.

   wcmo(1)
   ans =
        GainMargin: [0.6835 1.4632]
       PhaseMargin: [-21.2984 21.2984]
         Frequency: 0.1000
       Sensitivity: [1x1 struct]
   wcmo(2)
   ans =
        GainMargin: [0.2521 3.9664]
       PhaseMargin: [-61.6995 61.6995]
         Frequency: 0.1000
       Sensitivity: [1x1 struct]

You can display the sensitivity of the worst-case margin in the second output channel to 'unmod' and 'ingain1' as follows:

   wcmo(2).Sensitivity
   ans =
       ingain1: 16.3435
         unmod: 392.1320

The results are similar to the worst-case margins at the input, though the worst-case margins at the second output channel are even more sensitive to the LTI dynamic uncertainty than the input-channel margins.

See Also

dmplot       Interpret disk gain and phase margins
loopsens     Calculate sensitivity functions of feedback loops
loopmargin   Perform comprehensive analysis of feedback loops
robuststab   Calculate stability margins of uncertain systems
usubs        Substitute values for uncertain atoms
wcgain       Calculate worst-case gain of a system
wcgopt       Create a worst-case options object
wcsens       Calculate worst-case sensitivity functions
wcnorm

Purpose    Calculate the worst-case norm of an uncertain matrix

Syntax     maxnorm = wcnorm(m)
           [maxnorm,wcu] = wcnorm(m)
           [maxnorm,wcu] = wcnorm(m,opts)
           [maxnorm,wcu,info] = wcnorm(m)
           [maxnorm,wcu,info] = wcnorm(m,opts)

Description

The norm of an uncertain matrix generally depends on the values of its uncertain elements. Determining the maximum norm over all allowable values of the uncertain elements is referred to as a worst-case norm analysis; the maximum norm is called the worst-case norm. As with other uncertain-system analysis tools, only bounds on the worst-case norm are computed. The exact value of the worst-case norm is guaranteed to lie between these upper and lower bounds.

Basic syntax

Suppose mat is a umat or a uss with M uncertain elements. The results of

   [maxnorm,maxnormunc] = wcnorm(mat)

are described next. maxnorm is a structure with the following fields:

Field         Description
LowerBound    Lower bound on the worst-case norm; a positive scalar.
UpperBound    Upper bound on the worst-case norm; a positive scalar.

maxnormunc is a structure that includes values of the uncertain elements that maximize the matrix norm. There are M field names, which are the names of the uncertain elements of mat. The value of each field is the corresponding value of the uncertain element, such that, when jointly combined, the values lead to the norm in maxnorm.LowerBound. The following command shows the norm:

   norm(usubs(mat,maxnormunc))
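As a minimal sketch of the basic syntax above (the matrix and element names below are hypothetical, and are not the example analyzed later on this page):

```matlab
% Hedged sketch: worst-case norm of a small uncertain matrix.
x = ureal('x',3,'Range',[2 4]);     % illustrative uncertain parameter
mat = [x 1;-1 2*x];                 % hypothetical uncertain matrix (umat)
[maxnorm,maxnormunc] = wcnorm(mat);
% Substituting the worst-case element values reproduces the lower bound:
norm(usubs(mat,maxnormunc))
maxnorm.LowerBound
```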
Basic syntax with third output argument

A third output argument provides information about the sensitivities of the worst-case norm to the uncertain elements' ranges:

   [maxnorm,maxnormunc,info] = wcnorm(mat)

The third output argument info is a structure with the following fields:

Field         Description
Sensitivity   A struct with M fields; the field names are the names of the uncertain elements of mat. Each entry indicates the local sensitivity of the worst-case norm in maxnorm.LowerBound to the uncertainty range of that individual uncertain element. Field values are positive numbers; for instance, a value of 25 indicates that if the uncertainty range is increased by 8%, then the worst-case norm should increase by about 2%. If the Sensitivity property of the wcgopt object is 'off', the values are NaN.
ArrayIndex    1-by-1 scalar matrix with the value 1. In more complicated situations (described below), the value of this field depends on the input data.

Advanced options: Handling array dimensions

If mat has array dimensions, the default behavior is to maximize over all dimensions. It is also possible to perform the computation pointwise in the array dimensions, to determine the worst-case norm at each grid point. Any combination of "peak-over" and "pointwise-over" is allowed. To specify the desired computation, the wcgopt options object must be used.

For concreteness, suppose that mat is an r-by-c-by-7-by-5-by-8 uncertain matrix (i.e., a 7-by-5-by-8 array of uncertain r-by-c matrices). To perform the worst-case norm calculation pointwise over the second and third array dimensions (the slots with 5 points and 8 points, respectively), set the ArrayDimPtWise property:

   opt = wcgopt('ArrayDimPtWise',[2 3]);

In this case, the worst-case norm calculation is performed "pointwise" on the 5-by-8 grid. Only the "peak value" in the first array dimension (the slot with 7 points) is tracked. For that reason, many of the results will be of dimension 1-by-5-by-8.

In general, suppose that the array dimensions of mat are d1-by-...-by-dF (7-by-5-by-8 in the above example), and assume that the ArrayDimPtWise property of the wcgopt object has been set to some of the integers between 1 and F. Let e1,...,eF denote the dimensions of the array on which the results are computed. By definition, if j is an integer listed in ArrayDimPtWise, then ej = dj (all grid points in slot j are computed); otherwise ej = 1 (only the maximum in slot j is computed). In the above example, with ArrayDimPtWise set to [2 3], it follows that e1 = 1, e2 = 5, e3 = 8. The command

   [maxnorm,maxnormunc,info] = wcnorm(mat,opt)

produces maxnorm, a structure with the following fields:

Field         Description
LowerBound    e1-by-...-by-eF matrix of lower bounds on the worst-case norm, computed pointwise over all array dimensions listed in the ArrayDimPtWise property and "peaked" over all others.
UpperBound    Upper bound, analogous to LowerBound.

maxnormunc is an e1-by-...-by-eF struct array, containing values of uncertain elements that maximize the matrix norm. There are M field names, which are the names of the uncertain elements of mat. The value of each field is the corresponding value of the uncertain element, which, when jointly combined, lead to the norm in maxnorm.LowerBound.
info is a structure with the following fields:

Field         Description
Sensitivity   e1-by-...-by-eF struct array, where each entry is the local sensitivity of the worst-case norm in maxnorm.LowerBound to the uncertainty range of each uncertain element.
ArrayIndex    e1-by-...-by-eF matrix, where the value is the single-index representation of the maximizing location in the d1-by-...-by-dF grid. At each value in the e1-by-...-by-eF grid, there is a corresponding value in the d1-by-...-by-dF grid where the maximum occurs.

Example

You can construct an uncertain matrix and compute the worst-case norm of the matrix, as well as of its inverse:

   a = ureal('a',5,'Range',[4 6]);
   b = ureal('b',2,'Range',[1 3]);
   c = ureal('c',9,'Range',[8 10]);
   d = ureal('d',1,'Range',[0 2]);
   M = [a b;c d];
   Mi = inv(M);
   [maxnormM] = wcnorm(M)
   maxnormM =
       LowerBound: 14.7199
       UpperBound: 14.7327
   [maxnormMi] = wcnorm(Mi)
   maxnormMi =
       LowerBound: 2.5963
       UpperBound: 2.5979

Your objective is to accurately estimate the worst-case, or largest, value of the condition number of the matrix M. The condition number of M must be less than the product of the two upper bounds for all values of the uncertain elements making up M. Conversely, the largest value of the condition number of M must be at least equal to the condition number of the nominal value of M. Compute these crude bounds on the worst-case value of the condition number.
   condUpperBound = maxnormM.UpperBound*maxnormMi.UpperBound;
   condLowerBound = cond(M.NominalValue);
   [condLowerBound condUpperBound]
   ans =
       5.0757   38.2743

How can you get a more accurate estimate? Recall that the condition number of an n-by-m matrix M can be expressed as an optimization, where a free norm-bounded matrix Delta tries to align the gains of M and M^-1:

   kappa(M) = max over Delta in C^(m-by-m), sigma_max(Delta) <= 1, of sigma_max(M*Delta*M^-1)

If M is itself uncertain, then the worst-case condition number involves a further maximization over the possible values of M. Therefore, you can compute the worst-case condition number of an uncertain matrix by using a ucomplexm uncertain element, and then by using wcnorm to carry out the maximization.

Create a 2-by-2 ucomplexm object, Delta, with nominal value equal to zero. The range of values represented by Delta includes 2-by-2 matrices with maximum singular value less than or equal to 1. You can then create the expression involving M, Delta, and inv(M):

   Delta = ucomplexm('Delta',zeros(2,2));
   H = M*Delta*Mi;

Finally, consider the stopping criteria, and call wcnorm. One stopping criterion for wcnorm(H) is based on the norm of the nominal value of H and is governed by ABadThreshold: during the computation, if wcnorm determines that the worst-case norm is at least

   ABadThreshold + MBadThreshold*norm(H.NominalValue)

then the calculation is terminated. In our case, H.NominalValue equals 0, and the default value of ABadThreshold is 5. To keep wcnorm from prematurely stopping, set ABadThreshold to 38 (based on our crude upper bound above):

   opt = wcgopt('ABadThreshold',38);
wcu.LowerBound.wcnorm [maxKappa.opt).info] = wcnorm(H.9926 You can verify that wcu makes the condition number as large as maxKappa. maxKappa maxKappa = LowerBound: 26. cond(usubs(M.9629 Algorithm See Also See wcgain lti/norm svd wcgain wcgopt Calculates LTI system norms Calculates singular value decomposition Calculates worstcase gain of a system Creates a wcgain options object 10377 .9629 UpperBound: 27.wcu)) ans = 26.
wcsens

Purpose    Calculate the worst-case sensitivity and complementary sensitivity functions of a plant-controller feedback loop

Syntax     wcst = wcsens(L)
           wcst = wcsens(P,C)
           wcst = wcsens(L,type)
           wcst = wcsens(P,C,type)
           wcst = wcsens(L,type,scaling)
           wcst = wcsens(P,C,type,scaling)
           wcst = wcsens(L,opt)
           wcst = wcsens(P,C,opt)
           wcst = wcsens(L,type,scaling,opt)
           wcst = wcsens(P,C,type,scaling,opt)

Description

The sensitivity function, S = (I+L)^-1, and the complementary sensitivity function, T = L(I+L)^-1, where L is the loop-gain matrix associated with the input or output, are two transfer functions related to the robustness and performance of the closed-loop system. The multivariable closed-loop interconnection structure, shown below, defines the input/output sensitivity, complementary sensitivity, and loop transfer functions.

[Figure: standard feedback interconnection of plant P and controller C, with summing junctions at the plant input and output. The original figure could not be reproduced here.]

Description                             Equation
Si: Input Sensitivity                   (I+CP)^-1
Ti: Input Complementary Sensitivity     CP(I+CP)^-1
So: Output Sensitivity                  (I+PC)^-1
Description                             Equation
To: Output Complementary Sensitivity    PC(I+PC)^-1
Li: Input Loop Transfer Function        CP
Lo: Output Loop Transfer Function       PC

wcst = wcsens(L) calculates the worst-case sensitivity and complementary sensitivity functions for the loop transfer matrix L in negative feedback with an identity matrix. If L is a uss object, the frequency range and number of points are chosen automatically.

wcst = wcsens(P,C) calculates the worst-case sensitivity and complementary sensitivity functions for the feedback loop of C in negative feedback with P. C should only be the compensator in the feedback path, not any reference channels, if it is a 2-dof architecture (see loopsens). If P and C are ss/tf/zpk or uss objects, the frequency range and number of points are chosen automatically.

wcst is a structure with the following substructures:

Table 10-2: Fields of wcst
Field    Description
Si       Worst-case input-to-plant sensitivity function
Ti       Worst-case input-to-plant complementary sensitivity function
So       Worst-case output-to-plant sensitivity function
To       Worst-case output-to-plant complementary sensitivity function
PSi      Worst-case plant times input-to-plant sensitivity function
CSo      Worst-case compensator times output-to-plant sensitivity function
Stable   1 if the nominal closed loop is stable, 0 otherwise; NaN for frd/ufrd objects
Each sensitivity substructure is a structure with five fields, derived from the outputs of wcgain: MaximumGain, BadUncertainValues, System, BadSystem, and Sensitivity.

Table 10-3: Fields of Si, Ti, So, To, PSi, CSo
Field                Description
MaximumGain          Struct with fields LowerBound, UpperBound, and CriticalFrequency. LowerBound and UpperBound are bounds on the unweighted maximum gain of the uncertain sensitivity function. CriticalFrequency is the frequency at which the maximum gain occurs.
BadUncertainValues   Struct containing values of the uncertain elements that maximize the sensitivity gain. There are M field names, which are the names of the uncertain elements of the sensitivity function. The value of each field is the corresponding value of the uncertain element, such that, when jointly combined, the values lead to the gain in MaximumGain.LowerBound.
System               Uncertain sensitivity function (ufrd or uss).
Field         Description
BadSystem     Worst-case system, based on the uncertain-object values in BadUncertainValues. That is, BadSystem = usubs(System,BadUncertainValues).
Sensitivity   Struct with M fields; the field names are the names of the uncertain elements of the system. Each entry indicates the local sensitivity of the maximum gain to the uncertainty range of that individual uncertain element. Values of the fields are positive numbers; for instance, a value of 50 indicates that if the uncertainty range is enlarged by 8%, then the maximum gain should increase by about 4%. If the 'Sensitivity' property of the wcgopt object is 'off', the values are NaN.

wcst = wcsens(L,type) and wcst = wcsens(P,C,type) allow selection of individual sensitivity and complementary sensitivity functions. type may be any of 'Si','Ti','So','To','PSi','CSo', corresponding to the sensitivity and complementary sensitivity functions defined above; type may also be a cell array containing a collection of these strings, or a comma-separated list. Setting type to 'S' or 'T' selects all sensitivity functions ('Si','So','PSi','CSo') or all complementary sensitivity functions ('Ti','To'), respectively. Similarly, setting type to 'Input' or 'Output' selects all input sensitivity functions ('Si','Ti','PSi') or all output sensitivity functions ('So','To','CSo'). 'All' selects all six sensitivity functions for analysis (default).

wcst = wcsens(L,type,scaling) and wcst = wcsens(P,C,type,scaling) add a scaling to the worst-case sensitivity analysis. scaling is either one of the character strings 'Absolute' (default) or 'Relative', or a ss/tf/zpk/frd object. The 'Relative' scaling finds bounds on the maximum relative gain of the uncertain sensitivity function, i.e., the largest ratio of the worst-case gain and the nominal gain, evaluated at each frequency point in the analysis; the worst-case analysis then peaks over frequency. Similarly, if scaling is a ss/tf/zpk/frd object, bounds on the maximum scaled gain of the uncertain sensitivity function are found.
The default scaling 'Absolute' calculates bounds on the maximum gain of the uncertain sensitivity function. In all cases, BadSystem is defined as BadSystem = usubs(System,BadUncertainValues).
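The relationship above can be checked directly; a minimal sketch, assuming wcst was already returned by a call such as wcsens(P,C):

```matlab
S = wcst.Si;
% Substituting the worst-case element values into the uncertain model
% reproduces the worst-case system stored in BadSystem:
Sbad = usubs(S.System, S.BadUncertainValues);
% Sbad and S.BadSystem describe the same worst-case sensitivity function.
```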
wcst = wcsens(L,...,opt) and wcst = wcsens(P,C,...,opt) specify options for the worst-case gain calculation as defined by the options object opt; see wcgopt for more details on the options for wcsens. If scaling is an object, its input/output dimensions should be 1-by-1 or dimensions compatible with P and C. If scaling is 'Relative' or a ss/tf/zpk/frd object, the worst-case analysis peaks over frequency. type and scaling can also be combined in a cell array, e.g., wcst = wcsens(P,C,{'Ti','Rel','Si','Abs','PSi',wt}). The sensitivity of the worst-case sensitivity calculations to the individual uncertain components can be determined using the options object: create a wcgopt object with the Sensitivity property set to 'on' (opt = wcgopt('Sensitivity','on')) and pass it to wcsens. More information about the fields of wcst.Si can be found in the wcgain help.

Example

The following constructs a feedback loop with a first-order plant and a proportional-integral controller. The time constant is uncertain, and the model also includes a multiplicative uncertainty.

delta = ultidyn('delta',[1 1]);
tau = ureal('tau',5,'range',[4 6]);
P = tf(1,[tau 1])*(1+0.25*delta);
C = tf([4 4],[1 0]);
looptransfer = loopsens(P,C);
Snom = looptransfer.Si.NominalValue;
norm(Snom,inf)

ans =
    1.0864

Since the plant and controller are single-input/single-output, the input and output sensitivity functions are the same. The nominal (input) sensitivity function has a peak of 1.09 at omega = 1.55 rad/sec. wcsens is then used to compute the worst-case sensitivity function as the uncertainty ranges over its possible values. The BadSystem field of wcst.Si contains the worst-case sensitivity function.
wcst = wcsens(P,C)

wcst =
        Si: [1x1 struct]
        Ti: [1x1 struct]
        So: [1x1 struct]
        To: [1x1 struct]
       PSi: [1x1 struct]
       CSo: [1x1 struct]
    Stable: 1

Swc = wcst.Si.BadSystem;
omega = logspace(-1,1,50);
bodemag(Snom,'-',Swc,'-.',omega)
legend('Nominal Sensitivity','Worst-Case Sensitivity','Location','SouthEast')
norm(Swc,inf)

ans =
    1.5075

This worst-case sensitivity has a peak of 1.52 at omega = 1.02 rad/sec. The BadUncertainValues field of wcst.Si contains the perturbation that corresponds to this worst-case sensitivity function. For multi-input/multi-output systems the various input/output sensitivity functions will, in general, be different.

Reference

J.-Y. Shin, G.J. Balas, and A.K. Packard, "Worst case analysis of the X-38 crew return vehicle flight control system," AIAA Journal of Guidance, Control, and Dynamics, vol. 24, no. 2, March-April 2001, pp. 261-269.

See Also

loopsens      Calculate sensitivity functions of feedback loops
loopmargin    Comprehensive analysis of feedback loops
robuststab    Calculate stability margins of uncertain systems
usubs         Substitute values for uncertain atoms
wcgain        Calculate worst-case gain of a system
wcgopt        Create a worst-case options object
wcmargin      Calculate worst-case margins for feedback loop
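As a final sketch (hypothetical session, not part of the original page), the bounds and worst-case element values from the example above can be inspected through the fields described in Table 10-3; the exact numbers depend on the solver and release:

```matlab
wcst.Si.MaximumGain.LowerBound         % lower bound on the worst-case gain
wcst.Si.MaximumGain.UpperBound         % upper bound on the worst-case gain
wcst.Si.MaximumGain.CriticalFrequency  % frequency at which the peak occurs
wcst.Si.BadUncertainValues             % worst-case values of 'tau' and 'delta'
```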