MULTIVARIABLE FEEDBACK CONTROL: Analysis and Design

Sigurd Skogestad
Norwegian University of Science and Technology

Ian Postlethwaite
University of Leicester

Second Edition
This version: August 29, 2001

JOHN WILEY & SONS
Chichester . New York . Brisbane . Toronto . Singapore


BORGHEIM, an engineer: Herregud, en kan da ikke gjøre noe bedre enn leke i denne velsignede verden. Jeg synes hele livet er som en lek, jeg!

Good heavens, one can't do anything better than play in this blessed world. The whole of life seems like playing to me!

Act One, LITTLE EYOLF, Henrik Ibsen.



CONTENTS
CONTENTS
PREFACE

1 INTRODUCTION
  1.1 The process of control system design
  1.2 The control problem
  1.3 Transfer functions
  1.4 Scaling
  1.5 Deriving linear models
  1.6 Notation

2 CLASSICAL FEEDBACK CONTROL
  2.1 Frequency response
  2.2 Feedback control
  2.3 Closed-loop stability
  2.4 Evaluating closed-loop performance
  2.5 Controller design
  2.6 Loop shaping
  2.7 Shaping closed-loop transfer functions
  2.8 Conclusion

3 INTRODUCTION TO MULTIVARIABLE CONTROL
  3.1 Introduction
  3.2 Transfer functions for MIMO systems
  3.3 Multivariable frequency response analysis
  3.4 Control of multivariable plants
  3.5 Introduction to multivariable RHP-zeros
  3.6 Condition number and RGA

  3.7 Introduction to MIMO robustness
  3.8 General control problem formulation
  3.9 Additional exercises
  3.10 Conclusion

4 ELEMENTS OF LINEAR SYSTEM THEORY
  4.1 System descriptions
  4.2 State controllability and state observability
  4.3 Stability
  4.4 Poles
  4.5 Zeros
  4.6 Some remarks on poles and zeros
  4.7 Internal stability of feedback systems
  4.8 Stabilizing controllers
  4.9 Stability analysis in the frequency domain
  4.10 System norms
  4.11 Conclusion

5 LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS
  5.1 Input-Output Controllability
  5.2 Perfect control and plant inversion
  5.3 Constraints on S and T
  5.4 Ideal ISE optimal control
  5.5 Limitations imposed by time delays
  5.6 Limitations imposed by RHP-zeros
  5.7 RHP-zeros and non-causal controllers
  5.8 Limitations imposed by unstable (RHP) poles
  5.9 Combined unstable (RHP) poles and zeros
  5.10 Performance requirements imposed by disturbances and commands
  5.11 Limitations imposed by input constraints
  5.12 Limitations imposed by phase lag
  5.13 Limitations imposed by uncertainty
  5.14 Summary: Controllability analysis with feedback control
  5.15 Summary: Controllability analysis with feedforward control
  5.16 Applications of controllability analysis
  5.17 Conclusion

6 LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS
  6.1 Introduction
  6.2 Constraints on S and T
  6.3 Functional controllability
  6.4 Limitations imposed by time delays
  6.5 Limitations imposed by RHP-zeros

  6.6 Limitations imposed by unstable (RHP) poles
  6.7 RHP-poles combined with RHP-zeros
  6.8 Performance requirements imposed by disturbances
  6.9 Limitations imposed by input constraints
  6.10 Limitations imposed by uncertainty
  6.11 MIMO Input-output controllability
  6.12 Conclusion

7 UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS
  7.1 Introduction to robustness
  7.2 Representing uncertainty
  7.3 Parametric uncertainty
  7.4 Representing uncertainty in the frequency domain
  7.5 SISO Robust stability
  7.6 SISO Robust performance
  7.7 Examples of parametric uncertainty
  7.8 Additional exercises
  7.9 Conclusion

8 ROBUST STABILITY AND PERFORMANCE ANALYSIS
  8.1 General control configuration with uncertainty
  8.2 Representing uncertainty
  8.3 Obtaining P, N and M
  8.4 Definitions of robust stability and robust performance
  8.5 Robust stability of the MΔ-structure
  8.6 RS for complex unstructured uncertainty
  8.7 RS with structured uncertainty: Motivation
  8.8 The structured singular value
  8.9 Robust stability with structured uncertainty
  8.10 Robust performance
  8.11 Application: RP with input uncertainty
  8.12 µ-synthesis and DK-iteration
  8.13 Further remarks on µ
  8.14 Conclusion

9 CONTROLLER DESIGN
  9.1 Trade-offs in MIMO feedback design
  9.2 LQG control
  9.3 H2 and H∞ control
  9.4 H∞ loop-shaping design
  9.5 Conclusion

10 CONTROL STRUCTURE DESIGN

  10.1 Introduction
  10.2 Optimization and control
  10.3 Selection of controlled outputs
  10.4 Selection of manipulations and measurements
  10.5 RGA for non-square plant
  10.6 Control configuration elements
  10.7 Hierarchical and partial control
  10.8 Decentralized feedback control
  10.9 Conclusion

11 MODEL REDUCTION
  11.1 Introduction
  11.2 Truncation and residualization
  11.3 Balanced realizations
  11.4 Balanced truncation and balanced residualization
  11.5 Optimal Hankel norm approximation
  11.6 Two practical examples
  11.7 Reduction of unstable models
  11.8 Model reduction using MATLAB
  11.9 Conclusion

12 CASE STUDIES
  12.1 Introduction
  12.2 Helicopter control
  12.3 Aero-engine control
  12.4 Distillation process
  12.5 Conclusion

A MATRIX THEORY AND NORMS
  A.1 Basics
  A.2 Eigenvalues and eigenvectors
  A.3 Singular Value Decomposition
  A.4 Relative Gain Array
  A.5 Norms
  A.6 Factorization of the sensitivity function
  A.7 Linear fractional transformations

B PROJECT WORK and SAMPLE EXAM
  B.1 Project work
  B.2 Sample exam

BIBLIOGRAPHY

INDEX

PREFACE
This is a book on practical feedback control and not on system theory generally. Feedback is used in control systems to change the dynamics of the system (usually to make the response stable and sufficiently fast), and to reduce the sensitivity of the system to signal uncertainty (disturbances) and model uncertainty. Important topics covered in the book include:

- classical frequency-domain methods
- analysis of directions in multivariable systems using the singular value decomposition
- input-output controllability (inherent control limitations in the plant)
- model uncertainty and robustness
- performance requirements
- methods for controller design and model reduction
- control structure selection and decentralized control

The treatment is for linear systems. The theory is then much simpler and better developed, and a large amount of practical experience tells us that in many cases linear controllers designed using linear methods provide satisfactory performance when applied to real nonlinear plants. We have attempted to keep the mathematics at a reasonably simple level, and we emphasize results that enhance insight and intuition. The design methods currently available for linear systems are well developed, and with associated software it is relatively straightforward to design controllers for most multivariable plants. However, without insight and intuition it is difficult to judge a solution, and to know how to proceed (e.g. how to change weights) in order to improve a design. The book is appropriate for use as a text for an introductory graduate course in multivariable control or for an advanced undergraduate course. We also think it will be useful for engineers who want to understand multivariable control, its limitations, and how it can be applied in practice. There are numerous worked examples, exercises and case studies which make frequent use of MATLAB(TM) [1].

[1] MATLAB is a registered trademark of The MathWorks, Inc.


The prerequisites for reading this book are an introductory course in classical single-input single-output (SISO) control and some elementary knowledge of matrices and linear algebra. Parts of the book can be studied alone, and provide an appropriate background for a number of linear control courses at both undergraduate and graduate levels: classical loop-shaping control, an introduction to multivariable control, advanced multivariable control, robust control, controller design, control structure design and controllability analysis. The book is partly based on a graduate multivariable control course given by the first author in the Cybernetics Department at the Norwegian University of Science and Technology in Trondheim. About 10 students from Electrical, Chemical and Mechanical Engineering have taken the course each year since 1989. The course has usually consisted of 3 lectures a week for 12 weeks. In addition to regular assignments, the students have been required to complete a 50 hour design project using MATLAB. In Appendix B, a project outline is given together with a sample exam.

Examples and internet
Most of the numerical examples have been solved using MATLAB. Some sample files are included in the text to illustrate the steps involved. Most of these files use the µ-toolbox, and some the Robust Control toolbox, but in most cases the problems could have been solved easily using other software packages. The following are available over the internet:

- MATLAB files for examples and figures
- Solutions to selected exercises
- Linear state-space models for plants used in the case studies
- Corrections, comments to chapters, extra exercises and exam sets

This information can be accessed from the authors’ home pages: http://www.chembio.ntnu.no/users/skoge http://www.le.ac.uk/engineering/staff/Postlethwaite

Comments and questions
Please send questions, errors and any comments you may have to the authors. Their email addresses are:

- Sigurd.Skogestad@chembio.ntnu.no
- ixp@le.ac.uk


Acknowledgements
The contents of the book are strongly influenced by the ideas and courses of Professors John Doyle and Manfred Morari from the first author's time as a graduate student at Caltech during the period 1983-1986, and by the formative years, 1975-1981, the second author spent at Cambridge University with Professor Alistair MacFarlane. We thank the organizers of the 1993 European Control Conference for inviting us to present a short course on applied H∞ control, which was the starting point for our collaboration. The final manuscript began to take shape in 1994-95 during a stay the authors had at the University of California at Berkeley; thanks to Andy Packard, Kameshwar Poolla, Masayoshi Tomizuka and others at the BCCI-lab, and to the stimulating coffee at Brewed Awakening. We are grateful for the numerous technical and editorial contributions of Yi Cao, Kjetil Havre, Ghassan Murad and Ying Zhao. The computations for Example 4.5 were performed by Roy S. Smith, who shared an office with the authors at Berkeley. Helpful comments and corrections were provided by Richard Braatz, Jie Chen, Atle C. Christiansen, Wankyun Chung, Bjørn Glemmestad, John Morten Godhavn, Finn Are Michelsen and Per Johan Nicklasson. A number of people have assisted in editing and typing various versions of the manuscript, including Zi-Qin Wang, Yongjiang Yu, Greg Becker, Fen Wu, Regina Raag and Anneli Laur. We also acknowledge the contributions from our graduate students, notably Neale Foster, Morten Hovd, Elling W. Jacobsen, Petter Lundström, John Morud, Raza Samar and Erik A. Wolff. The aero-engine model (Chapters 11 and 12) and the helicopter model (Chapter 12) are provided with the kind permission of Rolls-Royce Military Aero Engines Ltd, and the UK Ministry of Defence, DRA Bedford, respectively. Finally, thanks to colleagues and former colleagues at Trondheim and Caltech from the first author, and at Leicester, Oxford and Cambridge from the second author.
We have made use of material from several books. In particular, we recommend Zhou, Doyle and Glover (1996) as an excellent reference on system theory and H∞ control. Of the others we would like to acknowledge, and recommend for further reading, the following: Rosenbrock (1970), Rosenbrock (1974), Kwakernaak and Sivan (1972), Kailath (1980), Chen (1984), Francis (1987), Anderson and Moore (1989), Maciejowski (1989), Morari and Zafiriou (1989), Boyd and Barratt (1991), Doyle et al. (1992), Green and Limebeer (1995), Levine (1995), and the MATLAB toolbox manuals of Grace et al. (1992), Balas et al. (1993) and Chiang and Safonov (1992).

Second edition
In this second edition, we have corrected a number of minor mistakes and made numerous changes and additions to the text, partly arising from the many questions and comments we have received from interested readers. All corrections to the first edition are available on the web. We have tried to minimize the changes in numbering of pages, figures, examples, exercises and equations, so there should be little problem in using the two editions in parallel.


1 INTRODUCTION

In this chapter, we begin with a brief outline of the design process for control systems. We then discuss linear models and transfer functions which are the basic building blocks for the analysis and design techniques presented in this book. The scaling of variables is critical in applications and so we provide a simple procedure for this. An example is given to show how to derive a linear model in terms of deviation variables for a practical application. Finally, we summarize the most important notation used in the book.

1.1 The process of control system design

The process of designing a control system usually makes many demands of the engineer or engineering team. These demands often emerge in a step by step design procedure as follows:

1. Study the system (plant) to be controlled and obtain initial information about the control objectives.
2. Model the system and simplify the model, if necessary.
3. Scale the variables and analyze the resulting model; determine its properties.
4. Decide which variables are to be controlled (controlled outputs).
5. Decide on the measurements and manipulated variables: what sensors and actuators will be used and where will they be placed?
6. Select the control configuration.
7. Decide on the type of controller to be used.
8. Decide on performance specifications, based on the overall control objectives.
9. Design a controller.
10. Analyze the resulting controlled system to see if the specifications are satisfied, and if they are not satisfied modify the specifications or the type of controller.
11. Simulate the resulting controlled system, either on a computer or a pilot plant.
12. Repeat from step 2, if necessary.
13. Choose hardware and software and implement the controller.
14. Test and validate the control system, and tune the controller on-line, if necessary.

Control courses and text books usually focus on steps 9 and 10 in the above procedure, that is, on methods for controller design and control system analysis. Interestingly, many real control systems are designed without any consideration of these two steps. For example, even for complex systems with many inputs and outputs, it may be possible to design workable control systems, often based on a hierarchy of cascaded control loops, using only on-line tuning (involving steps 1, 4, 5, 6, 7, 13 and 14). However, in this case a suitable control structure may not be known at the outset, and there is a need for systematic tools and insights to assist the designer with steps 4, 5, 6 and 7. A special feature of this book is the provision of tools for input-output controllability analysis (step 3) and for control structure design (steps 4, 5, 6 and 7).

Input-output controllability is the ability to achieve acceptable control performance. Simply stated, "even the best control system cannot make a Ferrari out of a Volkswagen". It is affected by the location of sensors and actuators, but otherwise it cannot be changed by the control engineer. To derive simple tools with which to quantify the inherent input-output controllability of a plant is the goal of Chapters 5 and 6.

The idea of looking at process equipment design and control system design as an integrated whole is not new, as is clear from the following quote taken from a paper by Ziegler and Nichols (1943):

  In the application of automatic controllers, it is important to realize that controller and process form a unit; credit or discredit for results obtained are attributable to one as much as the other. A poor controller is often able to perform acceptably on a process which is easily controlled. The finest controller made, when applied to a miserably designed process, may not deliver the desired performance. True, on badly designed processes, advanced controllers are able to eke out better results than older models, but on these processes there is a definite end point which can be approached by instrumentation and it falls short of perfection.

Ziegler and Nichols then proceed to observe that there is a factor in equipment design that is neglected, and state that the missing characteristic can be called the "controllability", the ability of the process to achieve and maintain the desired equilibrium value.

Therefore, the process of control system design should in some cases also include a step 0, involving the design of the process equipment itself.

1.2 The control problem

The objective of a control system is to make the output y behave in a desired way by manipulating the plant input u. The regulator problem is to manipulate u to

counteract the effect of a disturbance d. The servo problem is to manipulate u to keep the output close to a given reference input r. Thus, in both cases we want the control error e = y - r to be small. The algorithm for adjusting u based on the available information is the controller K. To arrive at a good design for K we need a priori information about the expected disturbances and reference inputs, and of the plant model (G) and disturbance model (Gd). In this book we make use of linear models of the form

  y = G u + Gd d    (1.1)

A major source of difficulty is that the models (G, Gd) may be inaccurate or may change with time. In particular, inaccuracy in G may cause problems because the plant will be part of a feedback loop. To deal with such a problem we will make use of the concept of model uncertainty. For example, instead of a single model we may study the behaviour of a class of models, Gp = G + wΔ, where the model "uncertainty" or "perturbation" wΔ is bounded, but otherwise unknown. In most cases weighting functions, w(s), are used to express the perturbation in terms of normalized perturbations, Δ, where the magnitude (norm) of Δ is less than or equal to 1. The following terms are useful:

Nominal stability (NS). The system is stable with no model uncertainty.

Nominal Performance (NP). The system satisfies the performance specifications with no model uncertainty.

Robust stability (RS). The system is stable for all perturbed plants about the nominal model, up to the worst-case model uncertainty.

Robust performance (RP). The system satisfies the performance specifications for all perturbed plants about the nominal model, up to the worst-case model uncertainty.

1.3 Transfer functions

The book makes extensive use of transfer functions, G(s), and of the frequency domain, which are very useful in applications for the following reasons:

- Invaluable insights are obtained from simple frequency-dependent plots.
- Important concepts for feedback such as bandwidth and peaks of closed-loop transfer functions may be defined.
- G(jω) gives the response to a sinusoidal input of frequency ω.
- A series interconnection of systems corresponds in the frequency domain to multiplication of the individual system transfer functions.
- Poles and zeros appear explicitly in factorized scalar transfer functions.

- Uncertainty is more easily handled in the frequency domain. This is related to the fact that two systems can be described as close (i.e. have similar behaviour) if their frequency responses are similar. On the other hand, a small change in a parameter in a state-space description can result in an entirely different system response.

We consider linear, time-invariant systems whose input-output responses are governed by linear ordinary differential equations with constant coefficients. An example of such a system is

  dx1/dt = -a1 x1(t) + x2(t) + β1 u(t)
  dx2/dt = -a0 x1(t) + β0 u(t)                       (1.2)
  y(t) = x1(t)

Here u(t) represents the input signal, x1(t) and x2(t) the states, and y(t) the output signal. The system is time-invariant since the coefficients a1, a0, β1 and β0 are independent of time. If we apply the Laplace transform to (1.2) we obtain

  s x1(s) - x1(t=0) = -a1 x1(s) + x2(s) + β1 u(s)
  s x2(s) - x2(t=0) = -a0 x1(s) + β0 u(s)            (1.3)
  y(s) = x1(s)

where y(s) denotes the Laplace transform of y(t), and so on. If u(t), x1(t), x2(t) and y(t) represent deviation variables away from a nominal operating point or trajectory, then we can assume x1(t=0) = x2(t=0) = 0. The elimination of x1(s) and x2(s) from (1.3) then yields the transfer function

  y(s)/u(s) = G(s) = (β1 s + β0) / (s^2 + a1 s + a0)    (1.4)

Importantly, for linear systems, the transfer function is independent of the input signal (forcing function). Notice that the transfer function in (1.4) may also represent the following system

  d^2y/dt^2 + a1 dy/dt + a0 y(t) = β1 du/dt + β0 u(t)    (1.5)

with input u(t) and output y(t). Transfer functions, such as G(s) in (1.4), will be used throughout the book to model systems and their components. To simplify our presentation we will make the usual abuse of notation and let y(s) denote the Laplace transform of y(t); we will also omit the independent variables s and t when the meaning is clear. More generally, we consider rational transfer functions of the form

  G(s) = (βnz s^nz + ... + β1 s + β0) / (s^n + a(n-1) s^(n-1) + ... + a1 s + a0)    (1.6)
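The equivalence between the state-space description (1.2) and the transfer function (1.4) can be checked numerically. The book's own examples use MATLAB; the following is a minimal sketch in Python, with illustrative coefficient values (not taken from the text):

```python
import numpy as np

# coefficients of the example system (1.2); the values are illustrative
a1, a0, b1, b0 = 3.0, 2.0, 1.0, 4.0

# state-space matrices read off from (1.2), with y = x1
A = np.array([[-a1, 1.0],
              [-a0, 0.0]])
B = np.array([[b1],
              [b0]])
C = np.array([[1.0, 0.0]])

def G_ss(s):
    # transfer function evaluated from the state-space form, C (sI - A)^-1 B
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

def G_tf(s):
    # transfer function (1.4), obtained by eliminating x1(s) and x2(s)
    return (b1 * s + b0) / (s**2 + a1 * s + a0)

# the two descriptions agree at any complex frequency s
for s in [0.1j, 10j, 2.0 + 3.0j]:
    assert abs(G_ss(s) - G_tf(s)) < 1e-9
```

This confirms, for this example, that the differential equation model and its transfer function describe the same input-output behaviour.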

In (1.6) n is the order of the denominator (or pole polynomial) and is also called the order of the system, and nz is the order of the numerator (or zero polynomial). Then n - nz is referred to as the pole excess or relative order.

Definition 1.1

- A system G(s) is strictly proper if G(jω) → 0 as ω → ∞.
- A system G(s) is semi-proper or bi-proper if G(jω) → D ≠ 0 as ω → ∞.
- A system G(s) which is strictly proper or semi-proper is proper.
- A system G(s) is improper if G(jω) → ∞ as ω → ∞.

All practical systems will have zero gain at a sufficiently high frequency, and are therefore strictly proper. It is often convenient, however, to model high frequency effects by a non-zero D-term, and hence semi-proper models are frequently used. Furthermore, certain derived transfer functions, such as S = (I + GK)^-1, are semi-proper.

For a proper system, with n ≥ nz, we may realize (1.6) by a state-space description, dx/dt = Ax + Bu, y = Cx + Du, similar to (1.2). The transfer function may then be written as

  G(s) = C(sI - A)^-1 B + D    (1.7)

For multivariable systems, G(s) is a matrix of transfer functions. Usually we use G(s) to represent the effect of the inputs u on the outputs y, whereas Gd(s) represents the effect on y of the disturbances d. We then have the following linear process model in terms of deviation variables

  y(s) = G(s) u(s) + Gd(s) d(s)    (1.8)

We have made use of the superposition principle for linear systems, which implies that a change in a dependent variable (here y) can simply be found by adding together the separate effects resulting from changes in the independent variables (here u and d) considered one at a time.

All the signals u(s), d(s) and y(s) are deviation variables. This is sometimes shown explicitly, for example by use of the notation δu(s), but since we always use deviation variables when we consider Laplace transforms, the δ is normally omitted.

1.4 Scaling

Scaling is very important in practical applications as it makes model analysis and controller design (weight selection) much simpler. It requires the engineer to make a judgement at the start of the design process about the required performance of the system. To do this, decisions are made on the expected magnitudes of disturbances and reference changes, on the allowed magnitude of each input signal, and on the allowed deviation of each output.

Let the unscaled (or originally scaled) linear model of the process in deviation variables be

  ŷ = Ĝ û + Ĝd d̂;   ê = ŷ - r̂    (1.9)

where a hat is used to show that the variables are in their unscaled units. A useful approach for scaling is to make the variables less than one in magnitude. This is done by dividing each variable by its maximum expected or allowed change. For disturbances and manipulated inputs, we use the scaled variables

  d = d̂ / d̂max,   u = û / ûmax    (1.10)

where:

- d̂max — largest expected change in disturbance
- ûmax — largest allowed input change

The maximum deviation from a nominal value should be chosen by thinking of the maximum value one can expect, or allow, as a function of time.

The variables ŷ, ê and r̂ are in the same units, so the same scaling factor should be applied to each. Two alternatives are possible:

- êmax — largest allowed control error
- r̂max — largest expected change in reference value

Since a major objective of control is to minimize the control error ê, we here usually choose to scale with respect to the maximum control error:

  y = ŷ / êmax,   r = r̂ / êmax,   e = ê / êmax    (1.11)

To formalize the scaling procedure, introduce the scaling factors

  De = êmax,   Du = ûmax,   Dd = d̂max,   Dr = r̂max    (1.12)

For MIMO systems each variable in the vectors d̂, r̂, û and ê may have a different maximum value, in which case De, Du, Dd and Dr become diagonal scaling matrices. This ensures, for example, that all errors (outputs) are of about equal importance in terms of their magnitude.

The corresponding scaled variables to use for control purposes are then

  d = Dd^-1 d̂,   u = Du^-1 û,   y = De^-1 ŷ,   e = De^-1 ê,   r = De^-1 r̂    (1.13)

On substituting (1.13) into (1.9) we get

  De y = Ĝ Du u + Ĝd Dd d;   De e = De y - De r

and introducing the scaled transfer functions

  G = De^-1 Ĝ Du,   Gd = De^-1 Ĝd Dd    (1.14)

then yields the following model in terms of scaled variables

  y = G u + Gd d;   e = y - r    (1.15)

Here u and d should be less than 1 in magnitude, and it is useful in some cases to introduce a scaled reference r̃, which is less than 1 in magnitude. This is done by dividing the reference by the maximum expected reference change

  r̃ = r̂ / r̂max = Dr^-1 r̂    (1.16)

We then have that

  r = R r̃   where   R = De^-1 Dr = r̂max / êmax    (1.17)

Here R is the largest expected change in reference relative to the allowed control error (typically, R ≥ 1). The block diagram for the system in scaled variables may then be written as in Figure 1.1, for which the following control objective is relevant:

- In terms of scaled variables we have that |d(t)| ≤ 1 and |r̃(t)| ≤ 1, and our control objective is to manipulate u with |u(t)| ≤ 1 such that |e(t)| = |y(t) - r(t)| ≤ 1 (at least most of the time).

[Figure 1.1: Model in terms of scaled variables]

Remark 1 A number of the interpretations used in the book depend critically on a correct scaling. In particular, this applies to the input-output controllability analysis presented in Chapters 5 and 6. Furthermore, for a MIMO system one cannot correctly make use of the sensitivity function S = (I + GK)^-1 unless the output errors are of comparable magnitude.

Remark 2 With the above scalings, the worst-case behaviour of a system is analyzed by considering disturbances d of magnitude 1, and references r̃ of magnitude 1.

Remark 3 The control error is

  e = y - r = G u + Gd d - R r̃    (1.18)

and we see that a normalized reference change r̃ may be viewed as a special case of a disturbance with Gd = R, where R is usually a constant diagonal matrix.
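The scaling procedure can be sketched numerically. The following Python fragment (the book's examples use MATLAB) applies (1.14), (1.17) and (1.18) to illustrative scalar values that are not taken from the text:

```python
# unscaled SISO model (illustrative numbers, not from the text):
#   yhat = Ghat*uhat + Gdhat*dhat
Ghat, Gdhat = 4.0, 0.8

# largest allowed/expected changes (the scaling factors of (1.12))
e_max, u_max, d_max, r_max = 0.1, 5.0, 0.5, 0.3

# scaled models (1.14): G = De^-1 Ghat Du and Gd = De^-1 Gdhat Dd
G = Ghat * u_max / e_max    # = 200: a full input swing is worth 200 allowed errors
Gd = Gdhat * d_max / e_max  # = 4: a full disturbance is worth 4 allowed errors
R = r_max / e_max           # (1.17): reference change relative to allowed error

# control error (1.18) in scaled variables, with |u|, |d|, |rtilde| <= 1
u, d, rtilde = -0.01, 0.5, 1.0
e = G * u + Gd * d - R * rtilde
```

With these scalings, a disturbance of magnitude 1 in scaled units corresponds exactly to the largest expected disturbance in original units, which is what makes the worst-case analyses of Chapters 5 and 6 meaningful.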

Remark 4 The scaling of the outputs in (1.11) in terms of the control error is used when analyzing a given plant. However, if the issue is to select which outputs to control, see Section 10.3, then one may choose to scale the outputs with respect to their expected variation (which is usually similar to r̂max).

Remark 5 If the expected or allowed variation of a variable about 0 (its nominal value) is not symmetric, then the largest variation should be used for d̂max and the smallest variation for ûmax and êmax. For example, if the disturbance is −5 ≤ d̂ ≤ 10 then d̂max = 10, and if the manipulated input is −5 ≤ û ≤ 10 then ûmax = 5. This approach may be conservative (in terms of allowing too large disturbances etc.) when the variations for several variables are not symmetric.

1.5 Deriving linear models

Linear models may be obtained from physical "first-principle" models, from analyzing input-output data, or from a combination of these two approaches. Although modelling and system identification are not covered in this book, it is always important for a control engineer to have a good understanding of a model's origin. The following steps are usually taken when deriving a linear model for controller design based on a first-principle approach:

1. Formulate a nonlinear state-space model based on physical knowledge.
2. Determine the steady-state operating point (or trajectory) about which to linearize.
3. Introduce deviation variables and linearize the model. There are essentially three parts to this step:
   (a) Linearize the equations using a Taylor expansion where second and higher order terms are omitted.
   (b) Introduce the deviation variables, e.g. δx(t) defined by δx(t) = x(t) − x*, where the superscript * denotes the steady-state operating point or trajectory along which we are linearizing.
   (c) Subtract the steady-state to eliminate the terms involving only steady-state quantities.
   These parts are usually accomplished together. For example, for a nonlinear state-space model of the form

    dx/dt = f(x, u)   (1.19)
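Step 3 is in practice often carried out numerically. The following is a minimal sketch of numerical linearization using central finite differences; the scalar model f(x, u) = −x² + u and the operating point are made up for illustration and are not from the book.

```python
# Sketch of step 3: linearize dx/dt = f(x, u) around a steady state (x*, u*)
# by estimating the Jacobians A = df/dx and B = df/du with central differences.

def derivative(f, x0, h=1e-6):
    """Central-difference estimate of df/dx at x0."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Hypothetical nonlinear model: dx/dt = f(x, u) = -x**2 + u
f = lambda x, u: -x**2 + u
x_star, u_star = 2.0, 4.0            # a steady state, since f(2, 4) = 0

A = derivative(lambda x: f(x, u_star), x_star)   # df/dx at (x*, u*)
B = derivative(lambda u: f(x_star, u), u_star)   # df/du at (x*, u*)
# Linearized model in deviation variables: d(dx)/dt = A*dx + B*du
print(A, B)  # approximately -4.0 and 1.0
```

For vector-valued x and u the same idea gives the Jacobian matrices A and B column by column.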
A further discussion on scaling and performance is given in Chapter 5 on page 161. Although modelling and system identification are not covered in this book.

the linearized model in deviation variables (δx = x − x*, δu = u − u*) is

    dδx(t)/dt = (∂f/∂x)* δx(t) + (∂f/∂u)* δu(t) = A δx(t) + B δu(t)   (1.20)

Here x and u may be vectors, in which case the Jacobians A and B are matrices. Also, since (1.20) is in terms of deviation variables, its Laplace transform becomes sδx(s) = A δx(s) + B δu(s), or

    δx(s) = (sI − A)⁻¹ B δu(s)   (1.21)

4. Scale the variables to obtain scaled models which are more suitable for control purposes.

In most cases steps 2 and 3 are performed numerically based on the model obtained in step 1.

Example 1.1 Physical model of a room heating process. The above steps for deriving a linear model will be illustrated on the simple example depicted in Figure 1.2, where the control problem is to adjust the heat input Q to maintain constant room temperature T (within ±1 K). The outdoor temperature To is the main disturbance. Units are shown in square brackets.

Figure 1.2: Room heating process

1. Physical model. An energy balance for the room requires that the change in energy in the room must equal the net inflow of energy to the room (per unit of time). This yields the following state-space model

    d(CV T)/dt = Q + α(To − T)   (1.22)

where T [K] is the room temperature, CV [J/K] is the heat capacity of the room, Q [W] is the heat input (from some heat source), and the term α(To − T) [W] represents the net heat loss due to exchange of air and heat conduction through the walls. We assume the room heat capacity is constant, CV = 100 kJ/K. (This value corresponds approximately to the heat capacity of air in a room of about 100 m³; thus we neglect heat accumulation in the walls.)

2. Operating point. Consider a case where the heat input Q* is 2000 W and the difference between indoor and outdoor temperatures T* − To* is 20 K. Then the steady-state energy balance yields α* = 2000/20 = 100 W/K.

3. Linear model in deviation variables. If we assume α is constant the model in (1.22) is already linear. Then introducing deviation variables

    δT(t) = T(t) − T*(t),   δQ(t) = Q(t) − Q*(t),   δTo(t) = To(t) − To*(t)

yields

    CV dδT(t)/dt = δQ(t) + α(δTo(t) − δT(t))   (1.23)

Remark. If α depended on the state variable (T in this example), or on one of the independent variables of interest (Q or To in this example), then one would have to include an extra term (T* − To*)δα(t) on the right hand side of Equation (1.23).

On taking Laplace transforms in (1.23), assuming δT(t) = 0 at t = 0, and rearranging we get

    δT(s) = 1/(τs + 1) [ (1/α) δQ(s) + δTo(s) ],   τ = CV/α   (1.24)

The time constant for this example is τ = 100·10³/100 = 1000 s ≈ 17 min, which is reasonable. It means that for a step increase in heat input it will take about 17 min for the temperature to reach 63% of its steady-state increase.

4. Linear model in scaled variables. Introduce the following scaled variables

    y(s) = δT(s)/δTmax,   u(s) = δQ(s)/δQmax,   d(s) = δTo(s)/δTo,max   (1.25)

In our case the acceptable variations in room temperature T are ±1 K, i.e. δTmax = êmax = 1 K. Furthermore, the heat input can vary between 0 W and 4000 W, and since its nominal value is 2000 W we have δQmax = 2000 W (see Remark 5 on page 8). Finally, the expected variations in outdoor temperature are ±10 K, i.e. δTo,max = 10 K. The model in terms of scaled variables then becomes

    G(s) = (1/(τs + 1)) · δQmax/(α δTmax) = 20/(1000s + 1)
    Gd(s) = (1/(τs + 1)) · δTo,max/δTmax = 10/(1000s + 1)   (1.26)

Note that the static gain for the input is k = 20, whereas the static gain for the disturbance is kd = 10. The fact that |kd| > 1 means that we need some control (feedback or feedforward) to keep the output within its allowed bound (|e| ≤ 1) when there is a disturbance of magnitude |d| = 1. The fact that |k| > |kd| means that we have enough "power" in the inputs to reject the disturbance at steady state; that is, we can, using an input of magnitude |u| ≤ 1, achieve perfect disturbance rejection (e = 0) for the maximum disturbance (|d| = 1). We will return with a detailed discussion of this in Section 5.16.2, where we analyze the input-output controllability of the room heating process.
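The numbers in step 4 can be checked directly from the physical data; the following short sketch reproduces the time constant and the scaled gains of (1.26):

```python
# Numerical check of the scaled room-heating model (1.26).
CV = 100e3           # room heat capacity [J/K]
alpha = 2000 / 20    # steady-state energy balance: 2000 W over 20 K -> 100 W/K
dT_max = 1.0         # allowed temperature variation [K]
dQ_max = 2000.0      # allowed heat-input variation [W]
dTo_max = 10.0       # expected outdoor-temperature variation [K]

tau = CV / alpha                 # time constant [s]
k = dQ_max / (alpha * dT_max)    # scaled input gain
kd = dTo_max / dT_max            # scaled disturbance gain
print(tau, k, kd)  # 1000.0 20.0 10.0
```

Since kd > 1, disturbances alone push the output out of its allowed band, and since k > kd the input has enough authority to reject them, exactly as argued in the text.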

1.6 Notation

There is no standard notation to cover all of the topics covered in this book. We have tried to use the most familiar notation from the literature whenever possible, but an overriding concern has been to be consistent within the book, to ensure that the reader can follow the ideas and techniques through from one chapter to another.

The most important notation is summarized in Figure 1.3, which shows a one degree-of-freedom control configuration with negative feedback, a two degrees-of-freedom control configuration, and a general control configuration. The latter can be used to represent a wide class of controllers, including the one and two degrees-of-freedom configurations, as well as feedforward and estimation schemes and many others; and, as we will see, it can also be used to formulate optimization problems for controller design. The symbols used in Figure 1.3 are defined in Table 1.1. Apart from the use of v to represent the controller inputs for the general configuration, this notation is reasonably standard. (A one degree-of-freedom controller has only the control error r − ym as its input, whereas the two degrees-of-freedom controller has two inputs, namely r and ym.)

Lower-case letters are used for vectors and signals (e.g. u, y, n), and capital letters for matrices, transfer functions and systems (e.g. G, K). Matrix elements are usually denoted by lower-case letters, so gij is the ij'th element in the matrix G. However, sometimes we use upper-case letters Gij, for example if G is partitioned so that Gij is itself a matrix, or to avoid conflicts in notation. The Laplace variable s is often omitted for simplicity, so we often write G when we mean G(s).

For state-space realizations we use the standard (A, B, C, D)-notation. That is, a system G with a state-space realization (A, B, C, D) has a transfer function G(s) = C(sI − A)⁻¹B + D. We sometimes write

    G(s) = [A B; C D]   (1.27)

to mean that the transfer function G(s) has a state-space realization given by the quadruple (A, B, C, D).

For closed-loop transfer functions we use S to denote the sensitivity function at the plant output, and T = I − S to denote the complementary sensitivity function. With negative feedback, S = (I + L)⁻¹ and T = L(I + L)⁻¹, where L is the transfer function around the loop as seen from the output. In most cases L = GK, but if we also include measurement dynamics (ym = Gm y + n) then L = GKGm. The corresponding transfer functions as seen from the input of the plant are LI = KG (or LI = KGmG), SI = (I + LI)⁻¹ and TI = LI(I + LI)⁻¹.

To represent uncertainty we use perturbations E (not normalized) or perturbations Δ (normalized such that their magnitude (norm) is less than or equal to one). The nominal plant model is G, whereas the perturbed model with uncertainty is denoted Gp (usually for a set of possible perturbed plants) or G′ (usually for a particular perturbed plant). For example, with additive uncertainty we may have Gp = G + E = G + wΔ, where w is a weight representing the magnitude of the uncertainty.

Figure 1.3: Control configurations. (a) One degree-of-freedom control configuration. (b) Two degrees-of-freedom control configuration. (c) General control configuration.

By the right-half plane (RHP) we mean the closed right half of the complex plane, including the imaginary axis (jω-axis). The left-half plane (LHP) is the open left half of the complex plane, excluding the imaginary axis. A RHP-pole (unstable pole) is a pole located in the right-half plane, and thus includes poles on the imaginary axis. Similarly, a RHP-zero ("unstable" zero) is a zero located in the right-half plane.

We use Aᵀ to denote the transpose of a matrix A, and Aᴴ to represent its complex conjugate transpose.

Mathematical terminology

The symbol ≜ is used to denote equal by definition, ⇔ is used to denote equivalent by definition, and A ≡ B means that A is identically equal to B.

Let A and B be logic statements. Then the following expressions are equivalent:

A ⇐ B
A if B, or: If B then A
A is necessary for B
B ⟹ A, or: B implies A
B is sufficient for A
B only if A
not A ⟹ not B

The remaining notation, special terminology and abbreviations will be defined in the text.

2 CLASSICAL FEEDBACK CONTROL

In this chapter, we review the classical frequency-response techniques for the analysis and design of single-loop (single-input single-output, SISO) feedback control systems. These loop-shaping techniques have been successfully used by industrial control engineers for decades, and have proved to be indispensable when it comes to providing insight into the benefits, limitations and problems of feedback control. During the 1980's the classical methods were extended to a more formal method based on shaping closed-loop transfer functions, for example, by considering the H∞ norm of the weighted sensitivity function. We introduce this method at the end of the chapter.

The same underlying ideas and techniques will recur throughout the book as we present practical procedures for the analysis and design of multivariable (multi-input multi-output, MIMO) control systems.

2.1 Frequency response

On replacing s by jω in a transfer function model G(s) we get the so-called frequency response description. Frequency responses can be used to describe:

1. A system's response to sinusoids of varying frequency.
2. The frequency content of a deterministic signal via the Fourier transform.
3. The frequency distribution of a stochastic signal via the power spectral density function.

In this book we use the first interpretation, namely that of frequency-by-frequency sinusoidal response. This interpretation has the advantage of being directly linked to the time domain, and at each frequency ω the complex number G(jω) (or complex matrix for a MIMO system) has a clear physical interpretation: it gives the response to an input sinusoid of frequency ω. This will be explained in more detail below. For the other two interpretations we cannot assign a clear physical meaning to G(jω) or y(ω) at a particular frequency; it is the distribution relative to other frequencies which matters then.

the signal’s magnitude is amplified by a factor ´ µ and its phase is shifted by ´ µ. (2. Although this insight may be obtained by viewing the frequency response in terms of its relationship between power spectral densities. we believe that the frequency-byfrequency sinusoidal response interpretation is the most transparent and useful. ´×µ  × ×·½ ¾ ½¼ (2. A physical interpretation of the frequency response for a stable linear system Ý ´×µÙ is as follows. let ´ ÁÑ ´ µ. such that This input signal is persistent.1.½ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ One important advantage of a frequency response analysis of a system is that it provides insight into the benefits and trade-offs of feedback control. this statement is illustrated for the following first-order delay system (time in seconds). it can be shown that Ý ¼ Ù¼ and can be obtained directly from the and evaluating Laplace transform ´×µ after inserting the imaginary number × the magnitude and phase of the resulting complex number ´ µ. Note that the output sinusoid has a different amplitude Ý ¼ and is also shifted in phase from the input by Importantly.1) says that after sending a sinusoidal signal through a system ´×µ. as is evident from the excellent treatment by Kwakernaak and Sivan (1972). For example.3) . that is. it has been applied since Ø  ½. We have ¸¬ « ݼ Ù¼ For example. then ´ µ · Ô ´ µ Ö Ê (2. It is important that the reader has this picture in mind when reading the rest of the book.2) ´ µ ¾· ¾ ´ µ Ö Ø Ò´ In words. Apply a sinusoidal input signal with frequency [rad/s] and magnitude Ù¼ . Frequency-by-frequency sinusoids We now want to give a physical picture of frequency response in terms of a system’s response to persistent sinusoids. Then the output signal is also a persistent sinusoid of the same frequency. In Figure 2. it is needed to understand the response of a multivariable system in terms of its singular value decomposition. with real part ´ µ and imaginary part µ (2.1) µ . 
Specifically, the input and output sinusoids are u(t) = u0 sin(ωt + α) and y(t) = y0 sin(ωt + β). Here u0 and y0 represent magnitudes and are therefore both non-negative.

Figure 2.1: Sinusoidal response for system G(s) = 5e^(−2s)/(10s + 1) at frequency ω = 0.2 rad/s

At frequency ω = 0.2 rad/s, we see that the output y lags behind the input by about a quarter of a period and that the amplitude of the output is approximately twice that of the input. More accurately, the amplification is

    |G(jω)| = k/√((τω)² + 1) = 5/√((10ω)² + 1) = 2.24

and the phase shift is

    φ = ∠G(jω) = −arctan(τω) − θω = −arctan(10ω) − 2ω = −1.51 rad = −86.5°

G(jω) is called the frequency response of the system G(s). It describes how the system responds to persistent sinusoidal inputs of frequency ω. The magnitude of the frequency response, |G(jω)|, being equal to |y0(ω)/u0(ω)|, is also referred to as the system gain. Sometimes the gain is given in units of dB (decibel) defined as

    |G| [dB] = 20 log10 |G|   (2.4)

For example, |G| = 2 corresponds to 6.02 dB, |G| = √2 corresponds to 3.01 dB, and |G| = 1 corresponds to 0 dB.

The system gain |G(jω)| is equal to k at low frequencies; this is the steady-state gain and is obtained by setting s = 0 (or ω = 0). The gain remains relatively constant up to the break frequency 1/τ where it starts falling sharply, and sinusoidal inputs with ω > 1/τ are attenuated by the system dynamics. Physically, the system responds too slowly to let high-frequency ("fast") inputs have much effect on the outputs. The delay only shifts the sinusoid in time, and thus affects the phase but not the gain.

Both |G(jω)| and ∠G(jω) depend on the frequency ω. This dependency may be plotted explicitly in Bode plots (with ω as independent variable) or somewhat implicitly in a Nyquist plot (phase plane plot). In Bode plots we usually employ a log-scale for frequency and gain, and a linear scale for the phase. In Figure 2.2, the Bode plots are shown for the system in (2.3). We note that in this case both the gain and phase fall monotonically with frequency. This is quite common for process control applications.
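The gain and phase quoted above can be verified by evaluating G(jω) numerically; a minimal sketch:

```python
# Gain and phase of G(s) = 5 e^{-2s}/(10 s + 1) at w = 0.2 rad/s,
# obtained by evaluating G(jw) directly as in (2.1)-(2.2).
import cmath

def G(s):
    return 5 * cmath.exp(-2 * s) / (10 * s + 1)

w = 0.2
g = G(1j * w)
gain = abs(g)            # amplification of a sinusoid at frequency w
phase = cmath.phase(g)   # phase shift [rad]
print(round(gain, 2), round(phase, 2))  # 2.24 -1.51
```

The amplitude of the output is therefore about twice that of the input, and the phase lag of 1.51 rad is close to a quarter of a period, matching Figure 2.1.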

Figure 2.2: Frequency response (Bode plots) of G(s) = 5e^(−2s)/(10s + 1)

The frequency response is also useful for an unstable plant G(s), which by itself has no steady-state response. Let G(s) be stabilized by feedback control, and consider applying a sinusoidal forcing signal to the stabilized system. In this case all signals within the system are persistent sinusoids with the same frequency ω, and G(jω) yields as before the sinusoidal response from the input to the output of G(s).

Phasor notation. From Euler's formula for complex numbers we have that e^(jz) = cos z + j sin z. It then follows that sin(ωt) is equal to the imaginary part of the complex function e^(jωt), and we can write the time domain sinusoidal response in complex form as follows:

    u(t) = u0 Im e^(j(ωt+α)),   y(t) = y0 Im e^(j(ωt+β))   (2.5)

where

    y0(ω) = |G(jω)| u0,   β(ω) = ∠G(jω) + α   (2.6)

and we have used ω as an argument because y0 and β depend on frequency, and in some cases so may u0 and α. Now introduce the complex numbers

    u(ω) ≜ u0 e^(jα),   y(ω) ≜ y0 e^(jβ)   (2.7)

Since G(jω) = |G(jω)| e^(j∠G(jω)), the sinusoidal response in (2.5) and (2.6) can then be written in complex form as

    y(ω) e^(jωt) = G(jω) u(ω) e^(jωt)   (2.8)

or, because the term e^(jωt) appears on both sides,

    y(ω) = G(jω) u(ω)   (2.9)

which we refer to as the phasor notation. At each frequency, u(ω), y(ω) and G(jω) are complex numbers, and the usual rules for multiplying complex numbers apply. We will use this phasor notation throughout the book. Thus whenever we use notation such as u(ω) (with ω and not jω as an argument), the reader should interpret this as a (complex) sinusoidal signal, u(ω)e^(jωt). Note that u(ω) is not equal to u(s) evaluated at s = ω, nor is it equal to u(t) evaluated at t = ω. (2.9) also applies to MIMO systems, where u(ω) and y(ω) are complex vectors representing the sinusoidal signal in each channel and G(jω) is a complex matrix.

Minimum phase systems. For stable systems which are minimum phase (no time delays or right-half plane (RHP) zeros) there is a unique relationship between the gain and phase of the frequency response. This may be quantified by the Bode gain-phase relationship, which gives the phase of G (normalized¹ such that G(0) > 0) at a given frequency ω0 as a function of |G(jω)| over the entire frequency range:

    ∠G(jω0) = (1/π) ∫ from −∞ to ∞ of (d ln|G(jω)|/d ln ω) · ln|(ω + ω0)/(ω − ω0)| · dω/ω   (2.10)

The name minimum phase refers to the fact that such a system has the minimum possible phase lag for the given magnitude response |G(jω)|. RHP-zeros and time delays contribute additional phase lag to a system when compared to that of a minimum phase system with the same gain (hence the term non-minimum phase system). For example, the system G(s) = (−s + a)/(s + a) with a RHP-zero at s = a has a constant gain of 1, but its phase is −2 arctan(ω/a) [rad] (and not 0 [rad] as it would be for the minimum phase system G(s) = 1 of the same gain). Similarly, the time delay system e^(−θs) has a constant gain of 1, but its phase is −θω [rad].

The term N(ω) ≜ d ln|G(jω)|/d ln ω in (2.10) is the slope of the magnitude in log-variables at frequency ω. The term ln|(ω + ω0)/(ω − ω0)| in (2.10) is infinite at ω = ω0, so it follows that ∠G(jω0) is primarily determined by the local slope N(ω0). Also, ∫ from −∞ to ∞ of ln|(ω + ω0)/(ω − ω0)| dω/ω = π²/2, which justifies the commonly used approximation for stable minimum phase systems

    ∠G(jω0) ≈ (π/2) N(ω0) [rad] = 90° · N(ω0)   (2.11)

The approximation is exact for the system G(s) = 1/sⁿ (where N(ω) = −n), and it is good for stable minimum phase systems except at frequencies close to those of resonance (complex) poles or zeros.

¹ The normalization of G(s) is necessary to handle systems such as 1/(s + 2) and −1/(s + 2), which have equal gain, are stable and minimum phase, but their phases differ by 180°. Systems with integrators may be treated by replacing 1/s by 1/(s + ε) where ε is a small positive number.
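The quality of the approximation (2.11) is easy to probe numerically. The sketch below uses a made-up minimum phase test system (not one of the book's examples) and compares 90°·N(ω0), with the slope N estimated by finite differences, against the exact phase:

```python
# Numerical illustration of the Bode gain-phase approximation (2.11):
# phase(G(jw0)) ~ 90 deg * N(w0), where N is the log-log slope of |G|.
import cmath, math

G = lambda s: 1 / ((s + 1) * (s + 10))   # illustrative minimum phase system

def slope(w0, h=1e-4):
    """Local slope N(w0) = d ln|G(jw)| / d ln w via central differences."""
    lo, hi = w0 * math.exp(-h), w0 * math.exp(h)
    return (math.log(abs(G(1j * hi))) - math.log(abs(G(1j * lo)))) / (2 * h)

w0 = 100.0                                # well above both break frequencies
approx = 90 * slope(w0)                   # degrees
exact = math.degrees(cmath.phase(G(1j * w0)))
print(round(approx), round(exact))  # -179 -174
```

Far from the break frequencies the slope is close to −2 and both numbers are near −180°; close to a break frequency the agreement is looser, as the text warns for resonant poles or zeros.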

Figure 2.3: Bode plots of transfer function L1(s) = 30(s + 1)/((s + 0.01)²(s + 10)). The asymptotes are given by dotted lines.

Straight-line approximations (asymptotes). For the design methods used in this book it is useful to be able to sketch Bode plots quickly, and in particular the magnitude (gain) diagram. The reader is therefore advised to become familiar with asymptotic Bode plots (straight-line approximations). For example, for a transfer function

    G(s) = k (s + z1)(s + z2)··· / ((s + p1)(s + p2)···)   (2.12)

the asymptotic Bode plots of G(jω) are obtained by using for each term (s + a) the approximation |jω + a| ≈ a for ω < a and |jω + a| ≈ ω for ω > a. These approximations yield straight lines on a log-log plot which meet at the so-called break point frequency ω = a. In (2.12) therefore, the frequencies z1, z2, ..., p1, p2, ... are the break points where the asymptotes meet. For complex poles or zeros, the term s² + 2ζω0 s + ω0² (where |ζ| < 1) is approximated by ω0² for ω < ω0 and by (jω)² = −ω² for ω > ω0. The magnitude of a transfer function is usually close to its asymptotic value, and the only case when there is significant deviation is around the resonance frequency ω0 for complex poles or zeros with a damping |ζ| of about 0.3 or less.

In Figure 2.3, the Bode plots are shown for

    L1(s) = 30 (s + 1) / ((s + 0.01)²(s + 10))   (2.13)

The asymptotes (straight-line approximations) are shown by dotted lines. The vertical dotted lines on the upper plot indicate the break frequencies ω1, ω2 and ω3. In this example the asymptotic slope of |L1| is 0 up to the first break frequency at
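The asymptotic gain rule (each factor |jω + a| replaced by a below its break frequency and by ω above it) can be sketched in a few lines for L1 in (2.13), and compared with the exact gain:

```python
# Straight-line (asymptotic) gain approximation for
# L1(s) = 30(s+1)/((s+0.01)^2 (s+10)), compared with the exact |L1(jw)|.

def asymptotic_gain(w, k=30.0, zeros=(1.0,), poles=(0.01, 0.01, 10.0)):
    g = k
    for a in zeros:
        g *= max(w, a)   # |jw + a| ~ a below the break, ~ w above it
    for a in poles:
        g /= max(w, a)
    return g

def exact_gain(w):
    s = 1j * w
    return abs(30 * (s + 1) / ((s + 0.01) ** 2 * (s + 10)))

for w in (0.001, 0.1, 100.0):
    print(w, asymptotic_gain(w), exact_gain(w))
```

Away from the break frequencies the two agree to within a few per cent, which is why the asymptotes are good enough for quick sketching.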

ω1 = 0.01 rad/s, where we have two poles and the slope changes to N = −2. Then at ω2 = 1 rad/s there is a zero and the slope changes to N = −1. Finally, there is a break frequency corresponding to a pole at ω3 = 10 rad/s, and so the slope is N = −2 at this and higher frequencies. We note that the magnitude follows the asymptotes closely, whereas the phase does not. The asymptotic phase jumps at the break frequency by −90° (LHP-pole or RHP-zero) or +90° (LHP-zero or RHP-pole).

Remark. An improved phase approximation of a term s + a is obtained by, instead of jumping, connecting the phase asymptotes by a straight line (on a Bode plot with logarithmic frequency) which starts 1 decade before the break frequency (at a/10) and ends 1 decade after the break frequency (at 10a), so that it passes through the correct phase change at ω = a. For the example in Figure 2.3, this much improved phase approximation drops from 0° to −180° between frequencies 0.001 (= ω1/10) and 0.1, increases up to −135° at frequency 1, remains at −135° up to frequency 10, and then drops down to −180° at frequency 100 (= 10ω3).

2.2 Feedback control

Figure 2.4: Block diagram of one degree-of-freedom feedback control system

2.2.1 One degree-of-freedom controller

In most of this chapter, we examine the simple one degree-of-freedom negative feedback structure shown in Figure 2.4. The input to the controller K(s) is r − ym,

  2. The corresponding plant input signal is Ù ÃËÖ   ÃË   ÃËÒ (2. First. this is not a good definition of an error variable.¾¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ where ÝÑ Ý · Ò is the measured output and Ò is the measurement noise.16) yields à ´Ö   Ý   Òµ · ÃÖ · or ´Á · à µÝ   ÃÒ   ´Á · à µ ½ Ã Ò ßÞ Ì (2. the error should involve the actual value (Ý ) and not the measured value (ÝÑ ). Thus.14) into (2.20) loop transfer function sensitivity function complementary sensitivity function The following notation and terminology are used Ì Ä Ã Ë ´Á · à µ ½ ´Á · ĵ ½ ´Á · à µ ½ à ´Á · ĵ ½ Ä . However. the input to the plant is Ù Ã ´×µ´Ö   Ý   Òµ (2.17) and hence the closed-loop response is Ý ´Á · à µ ½ Ã Ö · ´Á · à µ ½ ßÞ Ì ßÞ (2. The control error is defined as Ý Ö where Ö denotes the reference value (setpoint) for the output. the error is normally defined as the actual value (here Ý ) minus the desired value (here Ö).16) and for a one degree-of-freedom controller the substitution of (2.18) Ë The control error is Ý Ö where we have used the fact Ì   Á  ËÖ · Ë   Ì Ò (2.15) Remark.14) The objective of control is to manipulate Ù (design à ) such that the control error remains small in spite of disturbances . Second.2.2 Closed-loop transfer functions The plant model is written as Ý Ý ´×µÙ · ´× µ (2. (2. In the literature. the control error is frequently defined as Ö ÝÑ which is often the controller input.19)  Ë .

We see that S is the closed-loop transfer function from the output disturbances to the outputs, while T is the closed-loop transfer function from the reference signals to the outputs. The term complementary sensitivity for T follows from the identity

    S + T = I   (2.21)

To derive (2.21), write S + T = (I + L)⁻¹ + (I + L)⁻¹L and factor out the term (I + L)⁻¹. The term sensitivity function is natural because S gives the sensitivity reduction afforded by feedback. To see this, consider the "open-loop" case, i.e. with no feedback. Then, with the exception of noise,

    y = GK r + Gd d + 0·n   (2.22)

and a comparison with (2.18) shows that the response with feedback is obtained by premultiplying the right hand side by S.

Remark 1 Actually, the above is not the original reason for the name "sensitivity". Bode first called S sensitivity because it gives the relative sensitivity of the closed-loop transfer function T to the relative plant model error. In particular, at a given frequency ω we have for a SISO plant, by straightforward differentiation of T, that

    (dT/T) / (dG/G) = S   (2.23)

Remark 2 Equations (2.14)-(2.22) are written in matrix form because they also apply to MIMO systems. Of course, for SISO systems we may write S + T = 1, S = 1/(1 + L), T = L/(1 + L), and so on.

Remark 3 In general, closed-loop transfer functions for SISO systems with negative feedback may be obtained from the rule

    OUTPUT = (direct / (1 + loop)) · INPUT   (2.24)

where "direct" represents the transfer function for the direct effect of the input on the output (with the feedback path open) and "loop" is the transfer function around the loop (denoted L(s)). In the above case L = GK. If there is also a measurement device, Gm(s), in the loop, then L(s) = GKGm. The rule in (2.24) is easily derived by generalizing (2.17). In Section 3.2, we present a more general form of this rule which also applies to multivariable systems.

2.2.3 Why feedback?

At this point it is pertinent to ask why we should use feedback control at all, rather than simply using feedforward control. A "perfect" feedforward controller is obtained by removing the feedback signal and using the controller

    Kr(s) = G⁻¹(s)   (2.25)

(we assume for now that it is possible to obtain and physically realize such an inverse, although this may of course not be true). We assume that the plant and controller are

both stable and that all the disturbances are known; that is, we know Gd d, the effect of the disturbances on the outputs. Then with r − Gd d as the controller input, this feedforward controller would yield perfect control:

    y = Gu + Gd d = G Kr (r − Gd d) + Gd d = r

Unfortunately, G is never an exact model, and the disturbances are never known exactly. The fundamental reasons for using feedback control are therefore the presence of

1. Signal uncertainty – unknown disturbance (d)
2. Model uncertainty (Δ)
3. An unstable plant

The third reason follows because unstable plants can only be stabilized by feedback (see internal stability in Chapter 4). The ability of feedback to reduce the effect of model uncertainty is of crucial importance in controller design.

2.3 Closed-loop stability

One of the main issues in designing feedback controllers is stability. If the feedback gain is too large, then the controller may "overreact" and the closed-loop system becomes unstable. This is illustrated next by a simple example.

Figure 2.5: Effect of proportional gain Kc on the closed-loop response y(t) of the inverse response process

Example 2.1 Inverse response process. Consider the plant (time in seconds)

    G(s) = 3(−2s + 1) / ((5s + 1)(10s + 1))   (2.26)

This is one of two main example processes used in this chapter to illustrate the techniques of classical control. The model has a right-half plane (RHP) zero at s = 0.5 rad/s. This

imposes a fundamental limitation on control, and high controller gains will induce closed-loop instability.

This is illustrated for a proportional (P) controller, K(s) = Kc (a constant), where the response y = Tr = GKc(1 + GKc)⁻¹r to a step change in the reference (r(t) = 1 for t > 0) is shown in Figure 2.5 for four different values of Kc. The system is seen to be stable for Kc ≤ 2.5 and unstable for Kc = 3. The controller gain at the limit of instability, Ku = 2.5, is sometimes called the ultimate gain, and for this value the system is seen to cycle continuously with a period Pu = 15.2 s, corresponding to the frequency ωu ≜ 2π/Pu = 0.42 rad/s.

Two methods are commonly used to determine closed-loop stability:

1. The poles of the closed-loop system are evaluated. That is, the roots of 1 + L(s) = 0 are found, where L is the transfer function around the loop. The system is stable if and only if all the closed-loop poles are in the open left-half plane (LHP) (that is, poles on the imaginary axis are considered "unstable"). The poles are also equal to the eigenvalues of the state-space A-matrix, and this is usually how the poles are computed numerically.

2. The frequency response (including negative frequencies) of L(jω) is plotted in the complex plane and the number of encirclements it makes of the critical point −1 is counted. By Nyquist's stability criterion (for which a detailed statement is given in Theorem 4.7), closed-loop stability is inferred by equating the number of encirclements to the number of open-loop unstable poles (RHP-poles).

Method 1, which involves computing the poles, is best suited for numerical calculations. However, time delays must first be approximated as rational transfer functions, e.g. Padé approximations. Method 2, which is based on the frequency response, has a nice graphical interpretation, and may also be used for systems with time delays. Furthermore, it provides useful measures of relative stability and forms the basis for several of the robustness tests used later in this book. For open-loop stable systems where ∠L(jω) falls with frequency such that ∠L(jω) crosses −180° only once (from above at frequency ω180), one may equivalently use Bode's stability condition, which says that the closed-loop system is stable if and only if the loop gain |L| is less than 1 at this frequency, that is

    Stability ⇔ |L(jω180)| < 1   (2.27)

where ω180 is the phase crossover frequency defined by ∠L(jω180) = −180°.

Example 2.2 Stability of inverse response process with proportional control. Let us determine the condition for closed-loop stability of the plant G(s) in (2.26) with proportional control, that is, with K(s) = Kc (a constant) and loop transfer function L(s) = Kc G(s).

1. The poles of the closed-loop system are solutions to 1 + L(s) = 0, or equivalently the roots of

    (5s + 1)(10s + 1) + Kc · 3(−2s + 1) = 0  ⇔  50s² + (15 − 6Kc)s + (1 + 3Kc) = 0   (2.28)
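The conclusion can be checked by computing the roots of the characteristic polynomial (2.28) directly at a few gains; a minimal sketch:

```python
# Roots of 50 s^2 + (15 - 6Kc) s + (1 + 3Kc) = 0 from (2.28), confirming
# that closed-loop stability is lost at Kc = 2.5 (Method 1).
import cmath

def poles(Kc):
    a, b, c = 50.0, 15 - 6 * Kc, 1 + 3 * Kc
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

for Kc in (1.5, 2.5, 3.0):
    print(Kc, all(p.real < 1e-12 for p in poles(Kc)))
# 1.5 True   (stable: both poles in the LHP)
# 2.5 True   (marginally stable: poles on the imaginary axis)
# 3.0 False  (unstable: a pole has moved into the RHP)
```

This matches the simulations in Figure 2.5, with the Kc = 2.5 poles at ±j0.412 giving sustained oscillation.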

Thus. A graphical evaluation ½) are shown in is most enlightening. and the system will be continuously cycling with ¼ ½¾ rad/s corresponding to a period ÈÙ ¾ ½ ¾ s. The poles at the gain”) is ÃÙ ÃÙ ¾ into (2. For second order systems.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ But since we are only interested in the half plane location of the poles. and use the Routh-Hurwitz test to check for stability. Alternatively. the phase crossover frequency may be obtained analytically from:   ¸ this frequency is Ä´ ½ ¼ µ which gives ½ ¼   Ö Ø Ò´¾ ½ ¼ µ   Ö Ø Ò´ ½ ¼ µ   Ö Ø Ò´½¼ ½ ¼ µ  ½ ¼Æ Ã Ô Ô ¿ ¡ ´¾ ½ ¼ µ¾ · ½ Ô ´ ½ ¼ µ¾ · ½ ¡ ´½¼ ½ ¼ µ¾ · ½ ¼ ½¾ rad/s as found in the pole calculation above.5.28). This yields Ä´ ½ ¼ µ ½ à ¾ (as found from (2. this test says that we have stability if and only if all the coefficients have the same sign.e. From these one finds the frequency ½ ¼ where Ä is ½ ¼Æ and then reads à ´ ½ ¼ µ ¼ à .28) to get onset of instability may be found by substituting Ã Ô ¼×¾ · ¼. Ä´×µ with à Figure 2.5. This yields the following stability conditions ¡¡¡ or equivalently ½ ¿ ¾ . This agrees a frequency with the simulation results in Figure 2. and we find that the maximum allowed gain (“ultimate ¾ which agrees with the simulation in Figure 2.6.6: Bode plots for Ä´×µ  ¾× à ´½¼¿´·½µ´·½µ × ×·½µ with à ½ 2. one may consider the coefficients of the characteristic equation Ò Ò× · ½ × · ¼ ¼ in (2. and we get off the corresponding gain. Stability may also be evaluated from the frequency response of Ä´×µ.   ´½ à   à µ ¼ ´½ · ¿Ã µ ¼ ¦ ¦ Magnitude 10 0 10 −2 10 −2 10 −1 0 −45 −90 −135 −180 −225 −270 −2 10 ½¼ 10 0 10 1 Phase 10 −1 Frequency [rad/s] 10 0 10 1 Figure 2. × ¼ ¼ ½¾.e. it is not necessary to solve (2.27) that the system is stable if and only if Ä´ ½ ¼ µ above). Rather. at the onset of instability we have two poles on the imaginary axis. The Bode plots of the plant (i. With negative feedback (à ¼) only the upper bound is of practical interest. The loop gain at ¼ à Ĵ ½ ¼ µ .28). i.
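The pole test (Method 1) for this example reduces to checking a quadratic. A minimal sketch in Python (pure standard library; the polynomial is (2.28), and the function names are ours):

```python
def closed_loop_poles(Kc):
    # Roots of the closed-loop characteristic polynomial (2.28):
    #   50 s^2 + (15 - 6 Kc) s + (1 + 3 Kc) = 0
    a, b, c = 50.0, 15.0 - 6.0 * Kc, 1.0 + 3.0 * Kc
    disc = b * b - 4.0 * a * c
    r = abs(disc) ** 0.5
    if disc >= 0:
        return [(-b + r) / (2 * a), (-b - r) / (2 * a)]
    return [complex(-b, r) / (2 * a), complex(-b, -r) / (2 * a)]

def is_stable(Kc):
    # Method 1: stable iff every closed-loop pole lies strictly in the LHP
    return all(complex(p).real < 0 for p in closed_loop_poles(Kc))
```

For Kc = 2.5 the roots land on the imaginary axis at ±j0.412, matching the onset of instability found above; Kc = 2 is stable and Kc = 3 is not.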

which is the same as found from the graph in Figure 2.6. The stability condition |L(jω180)| < 1 then yields Kc < 2.5, as expected.

2.4 Evaluating closed-loop performance

Although closed-loop stability is an important issue, the real objective of control is to improve performance, that is, to make the output y(t) behave in a more desirable manner. Actually, the possibility of inducing instability is one of the disadvantages of feedback control which has to be traded off against performance improvement. The objective of this section is to discuss ways of evaluating closed-loop performance.

2.4.1 Typical closed-loop responses

The following example, which considers proportional plus integral (PI) control of the inverse response process in (2.26), illustrates what type of closed-loop performance one might expect.

Example 2.3 PI-control of the inverse response process. We have already studied the use of a proportional controller for the process in (2.26). We found that a controller gain of Kc = 1.5 gave a reasonably good response, except for a steady-state offset (see Figure 2.5). The reason for this offset is the non-zero steady-state sensitivity function, S(0) = 1/(1 + KcG(0)) = 0.18 (where G(0) = 3 is the steady-state gain of the plant). From e = −Sr it follows that for r = 1 the steady-state control error is −0.18 (as is confirmed by the simulation in Figure 2.5). To remove the steady-state offset we add integral action in the form of a PI-controller

  K(s) = Kc (1 + 1/(τI s))    (2.29)

Figure 2.7: Closed-loop response y(t) and input u(t) to a step change in reference for the inverse response process with PI-control
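The offset argument above is a one-line computation; a quick check with the values from the example (Kc = 1.5, G(0) = 3):

```python
Kc = 1.5                      # proportional gain from the example
G0 = 3.0                      # steady-state plant gain G(0)
S0 = 1.0 / (1.0 + Kc * G0)    # steady-state sensitivity S(0), about 0.18
offset = -S0                  # steady-state error e = -S(0) r for r = 1
```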

The settings for Kc and τI can be determined from the classical tuning rules of Ziegler and Nichols (1942):

  Kc = Ku/2.2,  τI = Pu/1.2    (2.30)

where Ku is the maximum (ultimate) P-controller gain and Pu is the corresponding period of oscillations. In our case Ku = 2.5 and Pu = 15.2 s (as observed from the simulation in Figure 2.5), and we get Kc = 1.14 and τI = 12.7 s. Alternatively, Ku and Pu can be obtained from the model G(s),

  Ku = 1/|G(jωu)|,  Pu = 2π/ωu    (2.31)

where ωu is defined by ∠G(jωu) = −180°.

Exercise 2.1 Use (2.31) to compute Ku and Pu for the process in (2.26).

The closed-loop response, with PI-control, to a step change in reference is shown in Figure 2.7. The output y(t) has an initial inverse response due to the RHP-zero. The response is quite oscillatory: it rises quickly, and y(t) = 0.9 at t = 8.0 s (the rise time), but it does not settle to within ±5% of the final value until after t = 65 s (the settling time). The overshoot (height of the peak relative to the final value) is about 62%, which is much larger than one would normally like for reference tracking. The decay ratio, which is the ratio between subsequent peaks, is about 0.35, which is also a bit large.

The corresponding Bode plots for L, S and T are shown in Figure 2.8. Later, in Section 2.4.3, we define stability margins; from the plot of L(jω) we find that the phase margin (PM) is 0.34 rad = 19.4° and the gain margin (GM) is 1.63. These margins are too small according to common rules of thumb. The peak value of |S| is MS = 3.92, and the peak value of |T| is MT = 3.35, which again are high according to normal design rules.

Figure 2.8: Bode magnitude and phase plots of L = GK, S and T when G(s) = 3(−2s+1)/((5s+1)(10s+1)) and K(s) = 1.14 (1 + 1/(12.7s)) (a Ziegler-Nichols PI controller)
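Equations (2.30)-(2.31) can be evaluated directly from the model G(s) = 3(−2s+1)/((5s+1)(10s+1)); a sketch in Python, finding ωu by bisection on the (monotonically decreasing) phase:

```python
import math

def gain_G(w):
    # |G(jw)| for G(s) = 3(-2s+1)/((5s+1)(10s+1))
    return 3 * math.hypot(1, 2*w) / (math.hypot(1, 5*w) * math.hypot(1, 10*w))

def phase_G(w):
    # arg G(jw) in degrees, unwrapped
    return -math.degrees(math.atan(2*w) + math.atan(5*w) + math.atan(10*w))

lo, hi = 1e-3, 10.0
for _ in range(200):                      # bisection: phase_G decreases with w
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phase_G(mid) > -180.0 else (lo, mid)
wu = (lo + hi) / 2                        # about 0.42 rad/s

Ku = 1.0 / gain_G(wu)                     # (2.31): about 2.5
Pu = 2 * math.pi / wu                     # (2.31): about 15.2 s
Kc, tauI = Ku / 2.2, Pu / 1.2             # (2.30): about 1.14 and 12.7 s
```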

Figure 2.9: Characteristics of the closed-loop response to a step change in reference (showing the rise time tr, the settling time ts, the overshoot and the decay ratio)

In summary, for this example, the Ziegler-Nichols' PI-tunings are somewhat "aggressive" and give a closed-loop system with smaller stability margins and a more oscillatory response than would normally be regarded as acceptable. For disturbance rejection the controller settings may be more reasonable, and one can add a prefilter to improve the response for reference tracking, resulting in a two degrees-of-freedom controller. However, this will not change the stability robustness of the system.

2.4.2 Time domain performance

Step response analysis. The above example illustrates the approach often taken by engineers when evaluating the performance of a control system. That is, one simulates the response to a step in the reference input, and considers the following characteristics (see Figure 2.9):

- Rise time: (tr) the time it takes for the output to first reach 90% of its final value, which is usually required to be small.
- Settling time: (ts) the time after which the output remains within ±5% of its final value, which is usually required to be small.
- Overshoot: the peak value divided by the final value, which should typically be 1.2 (20%) or less.
- Decay ratio: the ratio of the second and first peaks, which should typically be 0.3 or less.
- Steady-state offset: the difference between the final value and the desired final value, which is usually required to be small.

The rise time and settling time are measures of the speed of the response, whereas the overshoot, decay ratio and steady-state offset are related to the quality of the response. Another measure of the quality of the response is:

- Excess variation: the total variation (TV) divided by the overall change at steady state, which should be as close to 1 as possible.

Figure 2.10: Total variation is TV = Σi vi, and excess variation is TV/v0

The total variation is the total movement of the output, as illustrated in Figure 2.10. For the cases considered here the overall change is 1, so the excess variation is equal to the total variation. Some thought reveals that one can compute the total variation as the integrated absolute area (1-norm) of the corresponding impulse response (Boyd and Barratt, 1991, p. 98). That is, let y = Tr; then the total variation in y for a step change in r is

  TV = ∫0→∞ |gT(τ)| dτ ≜ ‖gT(t)‖1    (2.32)

where gT(t) is the impulse response of T.

The above measures address the output response, y(t). In addition, one should consider the magnitude of the manipulated input (control signal, u), which usually should be as small and smooth as possible. If there are important disturbances, then the response to these should also be considered. Finally, one may investigate in simulation how the controller works if the plant model parameters are different from their nominal values.

Remark 1 Another way of quantifying time domain performance is in terms of some norm of the error signal e(t) = y(t) − r(t). For example, one might use the integral squared error (ISE), or its square root, which is the 2-norm of the error signal, ‖e(t)‖2 = √(∫0→∞ |e(τ)|² dτ). Note that in this case the various objectives related to both the speed and quality of response are combined into one number. Actually, in most cases minimizing the 2-norm seems to give a reasonable trade-off between the various objectives listed above. Another advantage of the 2-norm is that the resulting optimization problems (such as minimizing ISE) are numerically easy to solve. One can also take input magnitudes into account by considering, for example, J = ∫0→∞ (Q|e(t)|² + R|u(t)|²) dt, where Q and R are positive constants. This is similar to linear quadratic (LQ) optimal control, but in LQ-control one normally considers an impulse rather than a step change in r(t).

Remark 2 The step response is equal to the integral of the corresponding impulse response, e.g. set u(τ) = 1 in (4.11).
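For a sampled response, the total variation is simply the sum of the absolute increments (this mirrors the sum(abs(diff(y1))) computation used for Table 2.1 later). A minimal sketch with made-up sample values:

```python
def total_variation(y):
    # TV = sum of |increments| of the sampled response (Figure 2.10)
    return sum(abs(b - a) for a, b in zip(y, y[1:]))

# Illustrative (made-up) samples of an oscillatory step response:
y = [0.0, 0.5, 1.2, 0.9, 1.05, 1.0]
tv = total_variation(y)     # 0.5 + 0.7 + 0.3 + 0.15 + 0.05 = 1.7
excess = tv / 1.0           # overall change is 1, so excess variation = TV
```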

2.4.3 Frequency domain performance

The frequency response of the loop transfer function, L(jω), or of various closed-loop transfer functions, may also be used to characterize closed-loop performance. Typical Bode plots of L, T and S are shown in Figure 2.8. One advantage of the frequency domain compared to a step response analysis is that it considers a broader class of signals (sinusoids of any frequency). This makes it easier to characterize feedback properties, and in particular system behaviour in the crossover (bandwidth) region. We will now describe some of the important frequency-domain measures used to assess performance, e.g. gain and phase margins, the maximum peaks of S and T, and the various definitions of crossover and bandwidth frequencies used to characterize the speed of response.

Gain and phase margins

Let L(s) denote the loop transfer function of a system which is closed-loop stable under negative feedback. A typical Bode plot and a typical Nyquist plot of L(jω) illustrating the gain margin (GM) and phase margin (PM) are given in Figures 2.11 and 2.12, respectively.

Figure 2.11: Typical Bode plot of L(jω) with PM and GM indicated

The gain margin is defined as

  GM = 1/|L(jω180)|    (2.33)

where the phase crossover frequency ω180 is the frequency where the Nyquist curve of L(jω) crosses the negative real axis between −1 and 0, that is

  ∠L(jω180) = −180°    (2.34)

If there is more than one such crossing, the largest value of |L(jω180)| is taken.

The phase margin is defined as

  PM = ∠L(jωc) + 180°    (2.36)

where the gain crossover frequency ωc is the frequency where |L(jω)| first crosses 1 from above, that is

  |L(jωc)| = 1    (2.35)

The phase margin tells us how much negative phase (phase lag) we can add to L(s) at frequency ωc before the phase at this frequency becomes −180°, which corresponds to closed-loop instability (see Figure 2.12). Typically, we require a PM larger than 30° or more.

The GM is the factor by which the loop gain |L(jω)| may be increased before the closed-loop system becomes unstable. The GM is thus a direct safeguard against steady-state gain uncertainty (error). Typically, we require GM > 2. If the Nyquist plot of L crosses the negative real axis between −1 and −∞, then a gain reduction margin can be similarly defined from the smallest value of |L(jω180)| of such crossings. On a Bode plot with a logarithmic axis for |L|, the GM (in logarithms, e.g. in dB) is the vertical distance from the unit magnitude line down to |L(jω180)|; see Figure 2.11. Closed-loop instability occurs if L(jω) encircles the critical point −1.

Figure 2.12: Typical Nyquist plot of L(jω) for a stable plant with PM and GM indicated
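Both margins are easy to read off numerically. A sketch for the Ziegler-Nichols loop of Example 2.3 (Kc = 1.14 and τI = 12.7 are taken from that example; pure standard library):

```python
import math

def L(w):
    # ZN loop from Example 2.3: K(s) = 1.14 (1 + 1/(12.7 s)), G as in (2.26)
    s = 1j * w
    return 1.14 * (1 + 1/(12.7*s)) * 3*(-2*s + 1) / ((5*s + 1) * (10*s + 1))

def phase_deg(w):
    # unwrapped phase of L(jw) in degrees (sum of the individual terms)
    return -math.degrees(math.atan(2*w) + math.atan(5*w)
                         + math.atan(10*w) + math.atan(1/(12.7*w)))

def bisect(f, lo, hi, n=200):
    # f must go from positive (at lo) to negative (at hi)
    for _ in range(n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

w180 = bisect(lambda w: phase_deg(w) + 180.0, 1e-3, 10.0)  # phase crossover
GM = 1.0 / abs(L(w180))                                    # (2.33), about 1.6
wc = bisect(lambda w: abs(L(w)) - 1.0, 1e-3, 10.0)         # gain crossover
PM = phase_deg(wc) + 180.0                                 # (2.36), about 19 deg
```

The results reproduce the margins quoted for this loop: GM ≈ 1.63 and PM ≈ 19°.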

The PM is also a direct safeguard against time delay uncertainty; the system becomes unstable if we add a time delay of

  θmax = PM/ωc    (2.37)

Note that the units must be consistent, so if ωc is in [rad/s] then PM must be in radians. It is also important to note that by decreasing the value of ωc (lowering the closed-loop bandwidth, resulting in a slower response) the system can tolerate larger time delay errors.

Example 2.4 For the PI-controlled inverse response process example we have PM = 0.34 rad and ωc = 0.236 rad/s. The allowed time delay error is then θmax = 0.34 rad / 0.236 rad/s = 1.44 s.

From the above arguments we see that gain and phase margins provide stability margins for gain and delay uncertainty. In short, the gain and phase margins are used to provide the appropriate trade-off between performance and stability. However, as we show below, the gain and phase margins are closely related to the peak values of |S(jω)| and |T(jω)| and are therefore also useful in terms of performance.

Exercise 2.2 Prove that the maximum additional delay for which closed-loop stability is maintained is given by (2.37).

Exercise 2.3 Derive the approximation for Ku = 1/|G(jωu)| given in (5.73) for a first-order delay system.

Maximum peak criteria

The maximum peaks of the sensitivity and complementary sensitivity functions are defined as

  MS = maxω |S(jω)|,  MT = maxω |T(jω)|    (2.38)

(Note that MS = ‖S‖∞ and MT = ‖T‖∞ in terms of the H∞ norm introduced later.) Typically, it is required that MS is less than about 2 (6 dB) and MT is less than about 1.25 (2 dB). A large value of MS or MT (larger than about 4) indicates poor performance as well as poor robustness. An upper bound on MT has been a common design specification in classical control, and the reader may be familiar with the use of M-circles on a Nyquist plot or a Nichols chart used to determine MT from L(jω).

Since S + T = 1, it follows that at any frequency ||S| − |T|| ≤ 1, so MS and MT differ at most by 1. A large value of MS therefore occurs if and only if MT is large. For stable plants we usually have MS > MT, but this is not a general rule.
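The peaks MS and MT for the Ziegler-Nichols loop of Example 2.3 can be found with a simple frequency sweep; the sketch below (loop parameters taken from that example) also illustrates the bound |MS − MT| ≤ 1:

```python
def L(w):
    # ZN loop from Example 2.3 (Kc = 1.14, tauI = 12.7)
    s = 1j * w
    return 1.14 * (1 + 1/(12.7*s)) * 3*(-2*s + 1) / ((5*s + 1) * (10*s + 1))

ws = [10**(-2 + 4*k/20000) for k in range(20001)]   # log grid, 0.01 .. 100 rad/s
MS = max(abs(1 / (1 + L(w))) for w in ws)           # peak of |S|, about 3.9
MT = max(abs(L(w) / (1 + L(w))) for w in ws)        # peak of |T|, about 3.4
```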

One may also view MS as a robustness measure, as is now explained. To maintain closed-loop stability, the number of encirclements of the critical point −1 by L(jω) must not change, so we want L to stay away from this point. The smallest distance between L(jω) and the −1 point is MS⁻¹, and therefore, for robustness, the smaller MS, the better.

MS is also a measure of the worst-case performance degradation. Without control (u = 0) we have e = y − r = Gd·d − r, and with feedback control e = S(Gd·d − r). Thus, feedback control improves performance in terms of reducing |e| at all frequencies where |S| < 1. Usually, |S| is small at low frequencies; for example, S(0) = 0 for systems with integral action. But because all real systems are strictly proper, we must at high frequencies have L → 0, or equivalently S → 1. At intermediate frequencies one cannot avoid in practice a peak value, MS, larger than 1 (e.g. see the remark below). Thus, there is an intermediate frequency range where feedback control degrades performance, and the value of MS is a measure of the worst-case performance degradation. In summary, both for stability and performance we want MS close to 1.

There is a close relationship between these maximum peaks and the gain and phase margins. Specifically, for a given MS we are guaranteed

  GM ≥ MS/(MS − 1),  PM ≥ 2 arcsin(1/(2MS)) ≥ 1/MS [rad]    (2.39)

For example, with MS = 2 we are guaranteed GM ≥ 2 and PM ≥ 29.0°. Similarly, for a given value of MT we are guaranteed

  GM ≥ 1 + 1/MT,  PM ≥ 2 arcsin(1/(2MT)) ≥ 1/MT [rad]    (2.40)

and therefore with MT = 2 we have GM ≥ 1.5 and PM ≥ 29.0°. In conclusion, we see that specifications on the peaks of |S(jω)| or |T(jω)| (MS or MT) can make specifications on the gain and phase margins unnecessary. For instance, requiring MS < 2 implies the common rules of thumb GM > 2 and PM > 30°.

Proof of (2.39) and (2.40): To derive the GM-inequalities, notice that L(jω180) = −1/GM (since GM = 1/|L(jω180)| and L is real and negative at ω180), from which we get

  T(jω180) = (−1/GM)/(1 − 1/GM),  S(jω180) = 1/(1 − 1/GM)    (2.41)

and the GM-results follow. To derive the PM-inequalities in (2.39) and (2.40), consider Figure 2.13, where we have |−1 − L(jωc)| = |1 + L(jωc)| = 1/|S(jωc)|, and we obtain

  |S(jωc)| = |T(jωc)| = 1/(2 sin(PM/2))    (2.42)

and the inequalities follow. (The identity |S(jωc)| = |T(jωc)| holds because |L(jωc)| = 1.) ∎

We note with interest that (2.41) requires |S| to be larger than 1 at frequency ω180. This means that, provided ω180 exists, that is, L(jω) has more than 180° phase lag at some frequency (which is the case for any real system), then the peak of |S(jω)| must exceed 1.

Remark. Alternative formulas, which are sometimes used, follow from the identity 2 sin(PM/2) = √(2(1 − cos(PM))).
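The guaranteed margins (2.39)-(2.40) are one-liners; for example, MS = 2 gives exactly the common rules of thumb (function names are ours):

```python
import math

def margins_from_MS(MS):
    # (2.39): GM >= MS/(MS-1), PM >= 2 arcsin(1/(2 MS))  [PM here in degrees]
    return MS / (MS - 1), 2 * math.degrees(math.asin(1 / (2 * MS)))

def margins_from_MT(MT):
    # (2.40): GM >= 1 + 1/MT, PM >= 2 arcsin(1/(2 MT))
    return 1 + 1 / MT, 2 * math.degrees(math.asin(1 / (2 * MT)))

GM_min, PM_min = margins_from_MS(2.0)   # 2.0 and about 29.0 deg
```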

Figure 2.13: At frequency ωc the distance from L(jωc) to the critical point is |−1 − L(jωc)| = 2 sin(PM/2)

2.4.4 Relationship between time and frequency domain peaks

For a change in reference r, the output is y(s) = T(s)r(s). Is there any relationship between the frequency domain peak of T(jω), MT, and any characteristic of the time domain step response, for example the overshoot or the total variation? To answer this, consider a prototype second-order system with complementary sensitivity function

  T(s) = 1/(τ²s² + 2τζs + 1)    (2.43)

For underdamped systems with ζ < 1 the poles are complex and yield oscillatory step responses. With r(t) = 1 (a unit step change), the values of the overshoot and total variation for y(t) are given, together with MT and MS, as a function of ζ in Table 2.1. From Table 2.1 we see that the total variation TV correlates quite well with MT. This is further confirmed by (A.136) and (2.32), which together yield the following general bounds

  MT ≤ TV ≤ (2n + 1)MT    (2.44)

Here n is the order of T(s), which is 2 for our prototype system in (2.43). Given that the response of many systems can be crudely approximated by fairly low-order systems, the bound in (2.44) suggests that MT may provide a reasonable approximation to the total variation. This provides some justification for the use of MT in classical control to evaluate the quality of the response.

Table 2.1: Peak values and total variation of the prototype second-order system (2.43)

  ζ     |  MT     MS    |  Overshoot   Total variation
  2.0   |  1      1.05  |  1           1
  1.0   |  1      1.15  |  1           1
  0.8   |  1      1.22  |  1.02        1.03
  0.6   |  1.04   1.35  |  1.09        1.21
  0.4   |  1.36   1.66  |  1.25        1.68
  0.2   |  2.55   2.73  |  1.53        3.22
  0.1   |  5.03   5.12  |  1.73        6.39
  0.01  |  50.0   50.0  |  1.97        63.7

  % MATLAB code (Mu toolbox) to generate Table 2.1:
  tau = 1; zeta = 0.1; t = 0:0.01:100;
  T = nd2sys(1,[tau*tau 2*tau*zeta 1]);   % T(s) in (2.43)
  S = msub(1,T);                          % S = 1 - T
  [A,B,C,D] = unpck(T);
  y1 = step(A,B,C,D,1,t);
  overshoot = max(y1)
  tv = sum(abs(diff(y1)))                 % total variation
  Mt = hinfnorm(T,1.e-4)
  Ms = hinfnorm(S,1.e-4)

2.4.5 Bandwidth and crossover frequency

The concept of bandwidth is very important in understanding the benefits and trade-offs involved when applying feedback control. Above we considered peaks of closed-loop transfer functions, MS and MT, which are related to the quality of the response. However, for performance we must also consider the speed of the response, and this leads to considering the bandwidth frequency of the system. In general, a large bandwidth corresponds to a faster rise time, since high frequency signals are more easily passed on to the outputs. A high bandwidth also indicates a system which is sensitive to noise and to parameter variations. Conversely, if the bandwidth is small, the time response will generally be slow, and the system will usually be more robust.

Loosely speaking, bandwidth may be defined as the frequency range [ω1, ω2] over which control is effective. In most cases we require tight control at steady-state, so ω1 = 0, and we then simply call ω2 = ωB the bandwidth.

The word "effective" may be interpreted in different ways, and this may give rise to different definitions of bandwidth. The interpretation we use is that control is effective if we obtain some benefit in terms of performance. For tracking performance the error is e = y − r = −Sr, and we get that feedback is effective (in terms of improving performance) as long as the relative error |e|/|r| = |S| is reasonably small, which we may define to be less than 0.707 in magnitude. (The reason for choosing the value 0.707 is that, for the simplest case of a first-order closed-loop response considered below, the resulting bandwidth and crossover frequencies coincide.) We then get the following definition:

Definition 2.1 The (closed-loop) bandwidth, ωB, is the frequency where |S(jω)| first crosses 1/√2 = 0.707 (≈ −3 dB) from below.
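The time-domain entries of Table 2.1 can be reproduced without the μ-toolbox, using the exact step response of (2.43) with τ = 1; a sketch for the row ζ = 0.2:

```python
import math

def step_response(zeta, t):
    # Exact unit-step response of T(s) = 1/(s^2 + 2 zeta s + 1), 0 < zeta < 1
    wd = math.sqrt(1.0 - zeta * zeta)
    phi = math.acos(zeta)
    return [1.0 - math.exp(-zeta*tk)/wd * math.sin(wd*tk + phi) for tk in t]

zeta = 0.2
t = [0.01 * k for k in range(10001)]             # 0 .. 100 s, like the MATLAB script
y = step_response(zeta, t)
overshoot = max(y)                               # about 1.53
tv = sum(abs(b - a) for a, b in zip(y, y[1:]))   # total variation, about 3.22
MT = 1.0 / (2*zeta*math.sqrt(1 - zeta*zeta))     # analytic peak of |T|, about 2.55
```

Note that MT ≤ TV ≤ 5·MT here, consistent with the bound (2.44) for n = 2.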

Remark. Another interpretation is to say that control is effective if it significantly changes the output response. For tracking performance the output is y = Tr, and since without control y = 0, we may say that control is effective as long as T is reasonably large, which we may define to be larger than 0.707. This leads to an alternative definition which has traditionally been used to define the bandwidth of a control system: the bandwidth in terms of T, ωBT, is the highest frequency at which |T(jω)| crosses 1/√2 = 0.707 (≈ −3 dB) from above. However, we would argue that this alternative definition, although being closer to how the term is used in some other fields, is less useful for feedback control.

The gain crossover frequency, ωc, defined as the frequency where |L(jω)| first crosses 1 from above, is also sometimes used to define the closed-loop bandwidth. It has the advantage of being simple to compute, and usually gives a value between ωB and ωBT. Specifically, for systems with PM < 90° (most practical systems) we have

  ωB < ωc < ωBT    (2.45)

Proof of (2.45): Note that |L(jωc)| = 1, so |S(jωc)| = |T(jωc)|. For PM = 90° we get |S(jωc)| = |T(jωc)| = 0.707 (see (2.42)). When PM < 90° we get |S(jωc)| = |T(jωc)| > 0.707; since ωB is the frequency where |S(jω)| first crosses 0.707 from below, we must have ωB < ωc, and similarly, since ωBT is the highest frequency where |T(jω)| crosses 0.707 from above, we must have ωc < ωBT. ∎

Example. Consider the simplest case, with a first-order closed-loop response,

  L(s) = ωc/s,  S(s) = s/(s + ωc),  T(s) = ωc/(s + ωc)

In this ideal case the bandwidth and crossover frequencies defined above are all identical: ωB = ωc = ωBT. Furthermore, the phase of L remains constant at −90°, so PM = 90° (or really undefined) and GM = ∞.

From this we have that the situation is generally as follows: Up to the frequency ωB, |S| is less than 0.707, and control is effective in terms of improving performance. In the frequency range [ωB, ωBT] control still affects the response, but does not improve performance — in most cases we find that in this frequency range |S| is larger than 1 and control degrades performance. Finally, at frequencies higher than ωBT we have S ≈ 1, and control has no significant effect on the response. The situation just described is illustrated in Example 2.5 below (see Figure 2.15).

Example 2.3 continued. The plant has a RHP-zero and the Ziegler-Nichols PI-tunings are quite aggressive, so GM = 1.63 and PM = 19.4°. The bandwidth and crossover frequencies are ωB ≈ 0.14, ωc = 0.236 and ωBT ≈ 0.44, confirming (2.45).
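The three frequencies can be extracted from |S|, |L| and |T| on a frequency grid; a sketch for the Ziegler-Nichols loop of Example 2.3 (loop parameters taken from that example), confirming the ordering in (2.45):

```python
def L(w):
    # ZN loop from Example 2.3 (Kc = 1.14, tauI = 12.7)
    s = 1j * w
    return 1.14 * (1 + 1/(12.7*s)) * 3*(-2*s + 1) / ((5*s + 1) * (10*s + 1))

def S(w): return 1 / (1 + L(w))
def T(w): return L(w) / (1 + L(w))

ws = [10**(-3 + 5*k/50000) for k in range(50001)]  # log grid, 0.001 .. 100
wB  = min(w for w in ws if abs(S(w)) >= 0.707)     # |S| first reaches 0.707
wc  = min(w for w in ws if abs(L(w)) <= 1.0)       # |L| falls through 1
wBT = max(w for w in ws if abs(T(w)) >= 0.707)     # last w with |T| >= 0.707
```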

Example 2.5 Comparison of ωB and ωBT as indicators of performance. An example where ωBT is a poor indicator of performance is the following (we are not suggesting this as a good controller design!):

  L(s) = (−s + z)/(s(τs + τz + 2)),  T(s) = (−s + z)/((s + z)(τs + 1)),  z = 0.1, τ = 1    (2.46)

For this system, both L and T have a RHP-zero at z = 0.1, and we have GM = 2.1, PM = 60.1°, MS = 1.93 and MT = 1. We find that ωB = 0.036 and ωc = 0.054 are both less than z = 0.1 (as one should expect, because the speed of response is limited by the presence of RHP-zeros), whereas ωBT = 1/τ = 1.0 is ten times larger than z. The closed-loop response to a unit step change in the reference is shown in Figure 2.14. The rise time is 31.0 s, which is close to 1/ωB = 28.0 s, but very different from 1/ωBT = 1.0 s, illustrating that ωB is a better indicator of closed-loop performance than ωBT.

Figure 2.14: Step response for system T = (−s + 0.1)/((s + 0.1)(s + 1))

Figure 2.15: Magnitude plots of S and T for system T = (−s + 0.1)/((s + 0.1)(s + 1))

The magnitude Bode plots of S and T are shown in Figure 2.15. We see that |T| ≈ 1 up to about ωBT. However, in the frequency range between ωB and ωBT, the phase of T (not shown) drops from about −40° to about −220°, so in practice tracking is out of phase and thus poor in this frequency range.
but most other methods involve minimizing some cost function. classical loop shaping is difficult to apply for complicated systems. Ì and ÃË . so in practice tracking is out of phase and thus poor In conclusion. such .    ¾¾¼Æ . Ä´ µ. while Ì (in terms of Ì ) may be misleading in some cases. and then finds a controller which gives the desired shape(s). for for good performance we want Ë close to 0. and it is not sufficient that Ì ½. The signal-based approach. In this approach the designer specifies the magnitude of some transfer function(s) as a function of frequency. On the other hand. Optimization is usually used. is shaped. 2. and this will be the case if Ë ¼ irrespective of the phase of Ë . Usually no optimization is involved and the designer aims to obtain Ä´ µ with desired bandwidth.5 Controller design We have considered ways of evaluating performance. but one also needs methods for controller design. such as Ë . slopes etc. However. The method consists of a second step where optimization is used to make an initial loop-shaping design more robust. more on this later. (b) Shaping of closed-loop transfer functions. This is the classical approach in which the magnitude of the open-loop transfer function. The Ziegler-Nichols’ method used earlier is well suited for on-line tuning. Here one considers a particular disturbance or reference change and then one tries to optimize the closed-loop response. by reducing à from the value obtained by the Ziegler-Nichols’ rules) or adjust some weighting factor in an objective function used to synthesize the controller.Ä ËËÁ Ä ÇÆÌÊÇÄ ¿ drops from about ¼Æ to about in this frequency range. If performance is not satisfactory then one must either adjust the controller parameters directly (for example. and one may then instead use the Glover-McFarlane À ½ loop-shaping design presented in Chapter 9. The reason is that we want Ì ½ in order to have good performance. 
In conclusion, ωB (which is defined in terms of S) and also ωc (which is defined in terms of L) are good indicators of closed-loop performance, while ωBT (in terms of T) may be misleading in some cases. The reason is that we want T ≈ 1 in order to have good performance, and it is not sufficient that |T| ≈ 1; we must also consider its phase. On the other hand, for good performance we want S close to 0, and this will be the case if |S| ≈ 0, irrespective of the phase of S.

2.5 Controller design

We have considered ways of evaluating performance, but one also needs methods for controller design. There exist a large number of methods, and some of these will be discussed in Chapter 9. The Ziegler-Nichols' method used earlier is well suited for on-line tuning, but most other methods involve minimizing some cost function. In addition to heuristic rules and on-line tuning, we can distinguish between three main approaches to controller design:

1. Shaping of transfer functions. In this approach the designer specifies the magnitude of some transfer function(s) as a function of frequency, and then finds a controller which gives the desired shape(s).

(a) Loop shaping. This is the classical approach in which the magnitude of the open-loop transfer function, L(jω), is shaped. Usually no optimization is involved, and the designer aims to obtain |L(jω)| with a desired bandwidth, slopes, etc. We will look at this approach in detail later in this chapter. However, classical loop shaping is difficult to apply for complicated systems, and one may then instead use the Glover-McFarlane H∞ loop-shaping design presented in Chapter 9, where a second step uses optimization to make an initial loop-shaping design more robust.

(b) Shaping of closed-loop transfer functions, such as S, T and KS. Optimization is usually used, resulting in various H∞ optimal control problems such as mixed weighted sensitivity; more on this later.

2. The signal-based approach. Here one considers a particular disturbance or reference change and then tries to optimize the closed-loop response. This involves time domain problem formulations resulting in the minimization of a norm of a transfer function. The "modern" state-space methods from the 1960's, such as Linear Quadratic Gaussian (LQG) control, are based on this signal-oriented approach.

In LQG the input signals are assumed to be stochastic (or alternatively impulses in a deterministic setting) and the expected value of the output variance (or the 2-norm) is minimized. These methods may be generalized to include frequency-dependent weights on the signals, leading to what is called the Wiener-Hopf (or H2-norm) design method. By considering sinusoidal signals, frequency-by-frequency, a signal-based H∞ optimal control methodology can be derived in which the H∞ norm of a combination of closed-loop transfer functions is minimized. This approach has attracted significant interest, and may be combined with model uncertainty representations to yield quite complex robust performance problems requiring μ-synthesis, an important topic which will be addressed in later chapters.

3. Numerical optimization. This often involves multi-objective optimization, where one attempts to optimize directly the true objectives, such as rise times, stability margins, etc. Computationally, such optimization problems may be difficult to solve, especially if one does not have convexity in the controller parameters. Also, by effectively including performance evaluation and controller design in a single-step procedure, the problem formulation is far more critical than in iterative two-step approaches. The numerical optimization approach may also be performed online, which might be useful when dealing with cases with constraints on the inputs and outputs. On-line optimization approaches such as model predictive control are likely to become more popular as faster computers and more efficient and reliable computational algorithms are developed.

The overall design process is iterative between controller design and performance (or cost) evaluation. If performance is not satisfactory, then one must either adjust the controller parameters directly (for example, by reducing Kc from the value obtained by the Ziegler-Nichols' rules) or adjust some weighting factor in an objective function used to synthesize the controller.

2.6 Loop shaping

In the classical loop-shaping approach to controller design, "loop shape" refers to the magnitude of the loop transfer function L = GK as a function of frequency. An understanding of how K can be selected to shape this loop gain provides invaluable insight into the multivariable techniques and concepts which will be presented later in the book, and so we will discuss loop shaping in some detail in the next two sections.

2.6.1 Trade-offs in terms of L

Recall equation (2.19), which yields the closed-loop response in terms of the control error e = y − r:

  e = −(I + L)⁻¹ r + (I + L)⁻¹ Gd d − (I + L)⁻¹ L n = −S r + S Gd d − T n    (2.47)

where S = (I + L)⁻¹ is the sensitivity function and T = (I + L)⁻¹ L the complementary sensitivity function.
The numerical optimization approach may also be performed online. “loop shape” refers to the magnitude of the loop transfer function Ä Ã as a function of frequency. 2. In LQG the input signals are assumed to be stochastic (or alternatively impulses in a deterministic setting) and the expected value of the output variance (or the 2-norm) is minimized. which might be useful when dealing with cases with constraints on the inputs and outputs.

i. which is obtained with Ä ¼.Ä ËËÁ Ä ÇÆÌÊÇÄ ½ For “perfect control” we want Ý Ö ¼. On the other hand. Ä´ µ . 6. the conflicting design objectives mentioned above are generally in different frequency ranges. we would like ¼¡ ·¼¡Ö·¼¡Ò The first two requirements in this equation. Performance. or equivalently. namely disturbance rejection and ¼. It is also important to consider the magnitude of the control action Ù (which is the input to the plant). Performance. the requirement for zero noise transmission implies that Ì ¼. Stabilization of unstable plant: Ä large. and we can meet most of the objectives by using a large ½) at low frequencies below crossover. Ë Á . the actuator and the . 7. and small loop gains to reduce the effect of noise. in this case between large loop gains for disturbance rejection and tracking. 5.g. Robust stability (stable plant): Ä small (because of uncertain or neglected dynamics). and a small gain ( Ä ½) loop gain ( Ä at high frequencies above crossover. Fortunately. and also because Ù is often a disturbance to other parts of the system (e. good command following: Ä large. This illustrates the fundamental nature of feedback design which always involves a trade-off between conflicting objectives. we usually want to avoid fast changes in Ù. 4. Nominal stability (stable plant): Ä small (because of RHP-zeros and time delays). or equivalently. good disturbance rejection: needs large controller gains. Ì Á . that is. Ä large. 3. consider opening a window in your office to adjust your comfort and the undesirable disturbance this will impose on the air conditioning system for the building). this implies that the loop transfer function Ä must be large in magnitude. are obtained with Ë Ë ´Á · ĵ ½ . We want Ù small because this causes less wear and saves input energy. Small magnitude of input signals: à small and Ä small. 8. 
Fortunately, the conflicting design objectives mentioned above are generally in different frequency ranges, and we can meet most of the objectives by using a large loop gain (|L| > 1) at low frequencies below crossover, and a small gain (|L| < 1) at high frequencies above crossover.

2.6.2 Fundamentals of loop-shaping design

By loop shaping we mean a design procedure that involves explicitly shaping the magnitude of the loop transfer function, |L(jω)|. Here L(s) = G(s)K(s), where K(s) is the feedback controller to be designed and G(s) is the product of all other transfer functions around the loop, including the plant, the actuator and the measurement device.

Essentially, to get the benefits of feedback control we want the loop gain, |L(jω)|, to be as large as possible within the bandwidth region. However, due to time delays, RHP-zeros, unmodelled high-frequency dynamics and limitations on the allowed manipulated inputs, the loop gain has to drop below one at and above some frequency which we call the crossover frequency ωc. Thus, disregarding stability for the moment, it is desirable that |L(jω)| falls sharply with frequency. To measure how |L| falls with frequency we consider the logarithmic slope N = d ln|L| / d ln ω. For example, a slope N = −1 implies that |L| drops by a factor of 10 when ω increases by a factor of 10. If the gain is measured in decibels (dB), then a slope N = −1 corresponds to −20 dB/decade. The value of −N at high frequencies is often called the roll-off rate.

The design of L(s) is most crucial and difficult in the crossover region between ωc (where |L| = 1) and ω180 (where the phase of L is −180°). For stability, we at least need the loop gain to be less than 1 at frequency ω180, that is, |L(jω180)| < 1. Thus, to get a high bandwidth (fast response) we want ωc and therefore ω180 large; that is, we want the phase lag in L to be small. Unfortunately, this is not consistent with the desire that |L(jω)| should fall sharply. For example, the loop transfer function L = 1/s^n (which has a slope N = −n on a log-log plot) has a phase of −n·90°. Thus, to have a phase margin of 45° we need the phase of L to stay above −135°, and the slope of L cannot exceed N = −1.5.

In addition, if the slope is made steeper at lower or higher frequencies, then this will add unwanted phase lag at intermediate frequencies. As an example, consider L1(s) given in (2.13), with the Bode plot shown in Figure 2.3. Here the slope of the asymptote of |L| is −1 at the gain crossover frequency (where |L1(jωc)| = 1), which by itself gives −90° phase lag. However, due to the influence of the steeper slopes of −2 at lower and higher frequencies, there is a "penalty" of about −35° at crossover, so the actual phase of L1 at ωc is approximately −125°.

The situation becomes even worse for cases with delays or RHP-zeros in L(s), which add undesirable phase lag to L without contributing to a desirable negative slope in L. At the gain crossover frequency ωc, the additional phase lag from delays and RHP-zeros may in practice be −30° or more.

In summary, a desired loop shape for |L(jω)| typically has a slope of about −1 in the crossover region, and a slope of −2 or higher beyond this frequency; that is, the roll-off is 2 or larger. Also, with a proper controller, which is required for any real system, we must have that L = GK rolls off at least as fast as G. At low frequencies, the desired shape of L depends on what disturbances and references we are designing for. For example, if we are considering step changes in the references or disturbances which affect the outputs as steps, then a slope for |L| of −1 at low frequencies is acceptable. If the references or disturbances require the outputs to change in a ramp-like fashion, then a slope of −2 is required. In practice, integrators are included in the controller to get the desired low-frequency performance, and for offset-free reference tracking the rule is that

L(s) must contain at least one integrator for each integrator in r(s).

Proof: Let L(s) = L'(s)/s^nI, where L'(0) is non-zero and finite and nI is the number of integrators in L(s); sometimes nI is called the system type, defined as the number of pure integrators in L(s). Consider a reference signal of the form r(s) = 1/s^nr. For example, if r(t) is a unit step, then r(s) = 1/s (nr = 1), and if r(t) is a ramp, then r(s) = 1/s² (nr = 2). The final value theorem for Laplace transforms is

lim_{t→∞} e(t) = lim_{s→0} s e(s)   (2.48)

In our case, the control error is

e(s) = −(1/(1 + L(s))) r(s) = −s^(nI − nr) / (s^nI + L'(s))   (2.49)

and to get zero offset (i.e. e(t → ∞) = 0) we must, from (2.48) and (2.49), require nI ≥ nr.

In conclusion, one can define the desired loop transfer function in terms of the following specifications:

1. The shape of |L(jω)|, e.g. in terms of the slope of |L(jω)| in certain frequency ranges. Typically, we desire a slope of about N = −1 around crossover, and a larger roll-off at higher frequencies. The desired slope at lower frequencies depends on the nature of the disturbance or reference signal.
2. The gain crossover frequency, ωc, where |L(jωc)| = 1.
3. The system type, defined as the number of pure integrators in L(s).

Loop-shaping design is typically an iterative procedure where the designer shapes and reshapes |L(jω)| after computing the phase and gain margins, the peaks of closed-loop frequency responses (MT and MS), selected closed-loop time responses, the magnitude of the input signal, etc. The procedure is illustrated next by an example.

Example 2.6 Loop-shaping design for the inverse response process. We will now design a loop-shaping controller for the example process in (2.26), which has a RHP-zero at s = 0.5. The RHP-zero limits the achievable bandwidth, so the crossover region (defined as the frequencies between ωc and ω180) will be at about 0.5 rad/s. We require the system to have one integrator (type 1 system), and therefore a reasonable approach is to let the loop transfer function have a slope of −1 at low frequencies, and then to roll off with a higher slope at frequencies beyond 0.5 rad/s. The plant and our choice for the loop shape are

G(s) = 3(−2s + 1) / ((5s + 1)(10s + 1)),   L(s) = 3K(−2s + 1) / (s(2s + 1)(0.33s + 1))   (2.50)

The frequency response (Bode plots) of L is shown in Figure 2.16 for K = 0.05. The controller gain K was selected to get reasonable stability margins (PM and GM).
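These margins are easy to check numerically from (2.50). A sketch in pure Python (bisection on the crossover conditions; the phase is assembled term by term so it stays unwrapped past −180°):

```python
import math

def mag_L(w, K=0.05):
    """|L(jw)| for the loop shape (2.50): L = 3K(-2s+1)/(s(2s+1)(0.33s+1))."""
    num = 3 * K * math.hypot(-2 * w, 1)
    den = w * math.hypot(2 * w, 1) * math.hypot(0.33 * w, 1)
    return num / den

def phase_L(w):
    """Unwrapped phase of L(jw) in degrees, assembled term by term."""
    return math.degrees(math.atan(-2 * w) - math.pi / 2
                        - math.atan(2 * w) - math.atan(0.33 * w))

def bisect(f, lo, hi, n=100):
    """Root of f on [lo, hi], assuming f(lo) > 0 > f(hi)."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

wc = bisect(lambda w: mag_L(w) - 1, 0.01, 2.0)        # gain crossover
w180 = bisect(lambda w: phase_L(w) + 180, 0.01, 2.0)  # phase crossover
pm = 180 + phase_L(wc)                                # phase margin, degrees
gm = 1 / mag_L(w180)                                  # gain margin
print(round(wc, 2), round(w180, 2), round(pm, 1), round(gm, 2))
```

This reproduces the crossover frequency and margins quoted in the example below.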

[Figure 2.16: Frequency response of L(s) in (2.50) for loop-shaping design with K = 0.05, with GM and PM indicated.]

[Figure 2.17: Response to step in reference for the loop-shaping design, showing y(t) and u(t).]

The asymptotic slope of |L| is −1 up to 3 rad/s, where it changes to −2. The controller corresponding to the loop shape in (2.50) is

K(s) = K (10s + 1)(5s + 1) / (s(2s + 1)(0.33s + 1)),   K = 0.05   (2.51)

The controller has zeros at the locations of the plant poles. This is desired in this case because we do not want the slope of the loop shape to drop at the break frequencies 1/10 = 0.1 rad/s and 1/5 = 0.2 rad/s just before crossover. The phase of L is −90° at low frequency, and at ω = 0.5 rad/s the additional contribution from the term (−2s+1)/(2s+1) in (2.50) is −90°, so for stability we need ωc < 0.5 rad/s. The choice K = 0.05 yields ωc ≈ 0.15 rad/s, corresponding to GM ≈ 2.9, PM ≈ 54°, MS ≈ 1.7 and MT = 1.11 (ω180 ≈ 0.43). The corresponding time response is shown in Figure 2.17. It is seen to be much better than the responses with either the simple PI-controller in Figure 2.7 or the P-controller in Figure 2.5. Figure 2.17 also shows that the magnitude of the input signal remains less than about 1.

The magnitude Bode plot for the controller (2.51) is shown in Figure 2.18. It is interesting to note that in the crossover region the controller gain is quite constant, around 1 in magnitude, which is similar to the "best" gain found using a P-controller (see Figure 2.5). This means that the controller gain is not too large at high frequencies.

[Figure 2.18: Magnitude Bode plot of controller (2.51) for loop-shaping design.]

Limitations imposed by RHP-zeros and time delays

Based on the above loop-shaping arguments we can now examine how the presence of delays and RHP-zeros limits the achievable control performance. We have already argued that if we want the loop shape to have a slope of −1 around crossover (ωc), with preferably a steeper slope before and after crossover, then the phase lag of L at ωc will necessarily be at least −90°, even when there are no RHP-zeros or delays. Therefore, if for performance and robustness we want a phase margin of about 35° or more, then the additional phase contribution from any delays and RHP-zeros at frequency ωc cannot exceed about −55°.

First consider a time delay θ. It yields an additional phase contribution of −θω, which at frequency ω = 1/θ is −1 rad = −57° (which is more than −55°). Thus, for acceptable control performance we need ωc < 1/θ, approximately.

Next consider a real RHP-zero at s = z. To avoid an increase in slope caused by this zero, we place a pole at s = −z such that the loop transfer function contains the term (−s+z)/(s+z), the form of which is referred to as all-pass since its magnitude equals 1 at all frequencies. The phase contribution from this all-pass term at ω = z/2 is −2 arctan(0.5) ≈ −53° (which is close to −55°), so for acceptable control performance we need ωc < z/2, approximately.
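Both rules of thumb can be checked directly. A sketch (pure Python; the value of z is arbitrary since the bound scales with it):

```python
import cmath
import math

def allpass_phase_deg(w, z):
    """Phase (deg) of the all-pass factor (-s+z)/(s+z) at s = jw."""
    s = 1j * w
    return math.degrees(cmath.phase((-s + z) / (s + z)))

z = 1.0
print(abs((-0.7j + z) / (0.7j + z)))  # magnitude is 1 at any frequency (all-pass)
print(allpass_phase_deg(z / 2, z))    # -2*atan(0.5) = -53 deg at wc = z/2
print(math.degrees(-1.0))             # delay: phase -1 rad = -57 deg at w = 1/theta
```

Both lag contributions come out just on either side of the −55° budget, which is why ωc < 1/θ and ωc < z/2 are the stated limits.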

2.6.3 Inverse-based controller design

In Example 2.6, we made sure that L(s) contained the RHP-zero of G(s), but otherwise the specified L(s) was independent of G(s). This suggests the following possible approach for a minimum-phase plant (i.e. one with no RHP-zeros or time delays). Select a loop shape which has a slope of −1 throughout the frequency range, namely

L(s) = ωc / s   (2.52)

where ωc is the desired gain crossover frequency. This loop shape yields a phase margin of 90° and an infinite gain margin, since the phase of L(jω) never reaches −180°. The controller corresponding to (2.52) is

K(s) = (ωc / s) G^-1(s)   (2.53)

That is, the controller inverts the plant and adds an integrator (1/s). This is an old idea, and is also the essential part of the internal model control (IMC) design procedure (Morari and Zafiriou, 1989), which has proved successful in many applications. However, there are at least two good reasons why this inverse-based controller may not be a good choice:

1. The controller will not be realizable if G(s) has a pole excess of two or larger, and may in any case yield large input signals. These problems may be partly fixed by adding high-frequency dynamics to the controller.
2. The loop shape resulting from (2.52) and (2.53) is not generally desirable, unless the references and disturbances affect the outputs as steps. This is illustrated by the following example.

Example 2.7 Disturbance process. We now introduce our second SISO example control problem, in which disturbance rejection is an important objective in addition to command tracking. We assume that the plant has been appropriately scaled as outlined in Section 1.4.

Problem formulation. Consider the disturbance process described by

G(s) = 200 / ((10s + 1)(0.05s + 1)²),   Gd(s) = 100 / (10s + 1)   (2.54)

with time in seconds (a block diagram is shown in Figure 2.20). The control objectives are:

1. Command tracking: the rise time (to reach 90% of the final value) should be less than 0.3 s, and the overshoot should be less than 5%.
2. Disturbance rejection: the output in response to a unit step disturbance should remain within the range [−1, 1] at all times, and it should return to 0 as quickly as possible (|y(t)| should at least be less than 0.1 after 3 s).
3. Input constraints: u(t) should remain within the range [−1, 1] at all times to avoid input saturation (this is easily satisfied for most designs).
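Before analysing the disturbance process, note that the 90° phase margin claimed for the inverse-based loop shape (2.52) follows immediately from the phase of an integrator; a one-line numerical check:

```python
import cmath
import math

def margins_integrator(wc):
    """For L(s) = wc/s: |L(jwc)| = 1 and the phase at crossover is
    -90 deg, so the phase margin is 180 - 90 = 90 deg."""
    L = lambda w: wc / (1j * w)
    pm = 180.0 + math.degrees(cmath.phase(L(wc)))
    return abs(L(wc)), pm

print(margins_integrator(10.0))  # (1.0, 90.0)
```

The gain margin is infinite because the phase never reaches −180°, so there is no phase crossover frequency at which to measure it.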

Analysis. Since Gd(0) = 100, without control the output response to a unit disturbance (d = 1) will be 100 times larger than what is deemed acceptable. The magnitude |Gd(jω)| is lower at higher frequencies, but it remains larger than 1 up to ωd ≈ 10 rad/s (where |Gd(jωd)| = 1). Thus, feedback control is needed up to the frequency ωd, so we need ωc to be approximately equal to 10 rad/s for disturbance rejection. On the other hand, we do not want ωc to be larger than necessary, because of sensitivity to noise and stability problems associated with high gain feedback. We will thus aim at a design with ωc ≈ 10 rad/s.

Inverse-based controller design. We will consider the inverse-based design as given by (2.52) and (2.53) with ωc = 10. Since G(s) has a pole excess of three, this yields an unrealizable controller, and therefore we approximate the plant term (0.05s+1)² by (0.1s+1) and then in the controller let this term be effective over one decade, i.e. we use (0.1s+1)/(0.01s+1) to give the realizable design

K0(s) = (ωc/s) · ((10s+1)/200) · ((0.1s+1)/(0.01s+1)),   L0(s) = (ωc/s) · (0.1s+1) / ((0.05s+1)²(0.01s+1)),   ωc = 10   (2.55)

The response to a step reference is excellent, as shown in Figure 2.19(a). The rise time is about 0.16 s and there is no overshoot, so the specifications are more than satisfied. However, the response to a step disturbance (Figure 2.19(b)) is much too sluggish. Although the output stays within the range [−1, 1], it is still about 0.75 at t = 3 s (whereas it should be less than 0.1). Because of the integral action the output does eventually return to zero, but it does not drop below 0.1 until after 23 s.

[Figure 2.19: Responses with "inverse-based" controller K0(s) for the disturbance process: (a) tracking response, (b) disturbance response.]

The above example illustrates that the simple inverse-based design method, where L has a slope of about N = −1 at all frequencies, does not always yield satisfactory designs. In the example, reference tracking was excellent, but disturbance rejection was poor. The objective of the next section is to understand why the disturbance response was so poor, and to propose a more desirable loop shape for disturbance rejection.
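The frequency ωd up to which feedback is needed can be found numerically from |Gd(jωd)| = 1. A sketch using bisection:

```python
def gd_mag(w):
    """|Gd(jw)| for Gd(s) = 100/(10s+1) from (2.54)."""
    return abs(100.0 / (10j * w + 1.0))

# |Gd| is monotonically decreasing, so bisect for the unit-gain frequency.
lo, hi = 0.1, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gd_mag(mid) > 1.0 else (lo, mid)
wd = 0.5 * (lo + hi)
print(round(wd, 2))  # about 10 rad/s, hence the target of wc = 10 rad/s
```

(The exact value is sqrt(100² − 1)/10 ≈ 10, consistent with the crossover target chosen above.)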

2.6.4 Loop shaping for disturbance rejection

At the outset we assume that the disturbance has been scaled such that at each frequency |d(ω)| ≤ 1, and the main control objective is to achieve |e(ω)| < 1. With feedback control we have e = y = S Gd d, so to achieve |e(ω)| < 1 for |d(ω)| = 1 (the worst-case disturbance) we require |S Gd(jω)| < 1 for all ω, or equivalently,

|1 + L| > |Gd|,  for all ω   (2.56)

At frequencies where |Gd| > 1, this is approximately the same as requiring |L| > |Gd|. However, in order to minimize the input signals, thereby reducing the sensitivity to noise and avoiding stability problems, we do not want to use larger loop gains than necessary (at least at frequencies around crossover). A reasonable initial loop shape Lmin(s) is then one that just satisfies the condition

|Lmin| ≈ |Gd|   (2.57)

where the subscript min signifies that Lmin is the smallest loop gain to satisfy |e(ω)| < 1. Since L = GK, the corresponding controller with the minimum gain satisfies

|Kmin| ≈ |G^-1 Gd|   (2.58)

In addition, to improve low-frequency performance (e.g. to get zero steady-state offset), we often add integral action at low frequencies, and use

K(s) = ((s + ωI)/s) G^-1 Gd   (2.59)

This can be summarized as follows:

- For disturbance rejection, a good choice for the controller is one which contains the dynamics (Gd) of the disturbance and inverts the dynamics (G) of the inputs (at least at frequencies just before crossover).
- For disturbances entering directly at the plant output, Gd = 1, and we get |Kmin| = |G^-1|, so an inverse-based design provides the best trade-off between performance (disturbance rejection) and minimum use of feedback.
- For disturbances entering directly at the plant input (which is a common situation in practice, often referred to as a load disturbance), we have Gd = G and we get |Kmin| = 1, so a simple proportional controller with unit gain yields a good trade-off between output performance and input usage.
- Notice that a reference change may be viewed as a disturbance directly affecting the output. This follows from (1.18), from which we get that a maximum reference change r = R may be viewed as a disturbance d = 1 with Gd(s) = −R, where R is usually a constant. This explains why selecting K to be like G^-1 (an inverse-based controller) yields good responses to step changes in the reference.

In addition to satisfying |L| ≈ |Gd| (eq. 2.57) at frequencies around crossover, the desired loop shape L(s) may be modified as follows:

1. Around crossover, make the slope N of |L| about −1. This is to achieve good transient behaviour with acceptable gain and phase margins.
2. Increase the loop gain at low frequencies, as illustrated in (2.59), to improve the settling time and to reduce the steady-state offset. Adding an integrator yields zero steady-state offset to a step disturbance.
3. Let L(s) roll off faster at higher frequencies (beyond the bandwidth) in order to reduce the use of manipulated inputs, to make the controller realizable, and to reduce the effects of noise.

The above requirements are concerned with the magnitude, |L(jω)|. In addition, the dynamics (phase) of L(s) must be selected such that the closed-loop system is stable. When selecting L(s) to satisfy |L| ≈ |Gd|, one should replace Gd(s) by the corresponding minimum-phase transfer function with the same magnitude; that is, time delays and RHP-zeros in Gd(s) should not be included in L(s), as this would impose undesirable limitations on feedback. On the other hand, any time delays or RHP-zeros in G(s) must be included in L = GK, because RHP pole-zero cancellations between G(s) and K(s) yield internal instability; see Chapter 4.

Remark. The idea of including a disturbance model in the controller is well known and is more rigorously presented in, for example, research on the internal model principle (Wonham, 1974), or the internal model control design for disturbances (Morari and Zafiriou, 1989). However, our development is simple, and sufficient for gaining the insight needed for later chapters.

Example 2.8 Loop-shaping design for the disturbance process. Consider again the plant described by (2.54). The plant can be represented by the block diagram in Figure 2.20, and we see that the disturbance enters at the plant input in the sense that G and Gd share the same dominating dynamics, as represented by the term 200/(10s+1).

[Figure 2.20: Block diagram representation of the disturbance process in (2.54).]

Step 1. Initial design. From (2.57) we know that a good initial loop shape looks like |Lmin| = |Gd| = |100/(10s+1)| at frequencies up to crossover.

The corresponding controller is K(s) = G^-1(s) Lmin(s), with magnitude |K| = 0.5 |(0.05s+1)²|. This controller is not proper (i.e. it has more zeros than poles), but since the term (0.05s+1)² only comes into effect at 1/0.05 = 20 rad/s, which is beyond the desired gain crossover frequency ωc ≈ 10 rad/s, we may replace it by a constant gain of 1, resulting in a proportional controller

K1(s) = 0.5   (2.60)

The magnitude of the corresponding loop transfer function, |L1(jω)|, and the response y1(t) to a step change in the disturbance are shown in Figure 2.21. This simple controller works surprisingly well, and for t < 3 s the response to a step change in the disturbance is not much different from that with the more complicated inverse-based controller K0(s) of (2.55), shown earlier in Figure 2.19. However, there is no integral action, and y1(t) → 1 as t → ∞.

Step 2. More gain at low frequency. To get integral action we multiply the controller by the term (s + ωI)/s, see (2.59), where ωI is the frequency up to which the term is effective (the asymptotic value of the term is 1 for ω > ωI). For performance we want large gains at low frequencies, so we want ωI to be large; but in order to maintain an acceptable phase margin (which is about 45° for controller K1), the term should not add too much negative phase at the frequency ωc, so ωI should not be too large. A reasonable value is ωI = 0.2ωc, for which the phase contribution from (s + ωI)/s is arctan(1/0.2) − 90° ≈ −11° at ωc. In our case ωc ≈ 10 rad/s, so we select the following controller

K2(s) = 0.5 (s + 2)/s   (2.61)

The resulting disturbance response (y2) shown in Figure 2.21(b) satisfies the requirement that |y(t)| < 0.1 at time t = 3 s, but y(t) exceeds 1 for a short time. Also, the response is slightly oscillatory, as might be expected since the phase margin is only 31° and the peak values of |S| and |T| are large.

Step 3. High-frequency correction. To increase the phase margin and improve the transient response, we supplement the controller with "derivative action" by multiplying K2(s) by a lead-lag term which is effective over one decade starting at 20 rad/s:

K3(s) = 0.5 · ((s + 2)/s) · ((0.05s + 1)/(0.005s + 1))   (2.62)

This gives a phase margin of 51° and a reduced peak MS ≈ 1.4. From the responses in Figure 2.21(b) it is seen that the controller K3(s) reacts quicker than K2(s), and the disturbance response y3(t) stays below 1.

[Figure 2.21: Loop shapes |L(jω)| and disturbance responses y1, y2 and y3 for controllers K1, K2 and K3 for the disturbance process: (a) loop gains, (b) disturbance responses.]
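The two phase rules of thumb used in Steps 2 and 3 can be verified numerically. A sketch evaluating the integral-action term and the lead-lag term at the crossover frequency:

```python
import cmath
import math

def phase_deg(tf, w):
    """Phase in degrees of a transfer function evaluated at s = jw."""
    return math.degrees(cmath.phase(tf(1j * w)))

wc, wI = 10.0, 2.0                                      # wI = 0.2*wc, as above
integral_term = lambda s: (s + wI) / s                  # the term added in Step 2
lead_lag = lambda s: (0.05 * s + 1) / (0.005 * s + 1)   # the term added in Step 3

print(round(phase_deg(integral_term, wc), 1))  # about -11.3 deg of extra lag
print(round(phase_deg(lead_lag, wc), 1))       # about +23.7 deg of added lead
```

The ~11° of lag added by the integral action is roughly recovered, and more, by the lead-lag term, which explains the improvement in phase margin from K2 to K3.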

Table 2.2 summarizes the results for the four loop-shaping designs: the inverse-based design K0 for reference tracking, and the three designs K1, K2 and K3 for disturbance rejection. For each design it lists the gain and phase margins (GM, PM), the peaks MS and MT, and, against the specifications, the rise time and peak output for a step reference, together with the peak output and the value y(t = 3) for a step disturbance.

[Table 2.2: Alternative loop-shaping designs for the disturbance process.]

Although controller K3 satisfies the requirements for disturbance rejection, it is not satisfactory for reference tracking: the overshoot is 24%, which is significantly higher than the maximum value of 5%. On the other hand, the inverse-based controller K0 inverts the term 1/(10s+1), which is also in the disturbance model, and therefore yields a very sluggish response to disturbances (the output is still about 0.75 at t = 3 s, whereas it should be less than 0.1).

In summary, for this process none of the controller designs meets all the objectives for both reference tracking and disturbance rejection. The solution is to use a two degrees-of-freedom controller, as is discussed next.

2.6.5 Two degrees-of-freedom design

For reference tracking we typically want the controller to look like (1/s) G^-1, see (2.53), whereas for disturbance rejection we want the controller to look like ((s + ωI)/s) G^-1 Gd, see (2.59). We cannot achieve both of these simultaneously with a single (feedback) controller. The solution is to use a two degrees-of-freedom controller, where the reference signal r and the output measurement ym are independently treated by the controller, rather than operating on their difference r − ym as in a one degree-of-freedom controller.

There exist several alternative implementations of a two degrees-of-freedom controller. The most general form is shown in Figure 1.3(b) on page 12, where the controller has two inputs (r and ym) and one output (u). However, the controller is often split into two separate blocks, as shown in Figure 2.22, where Ky denotes the feedback part of the controller and Kr a reference prefilter. The feedback controller Ky is used to reduce the effect of uncertainty (disturbances and model error), whereas the prefilter Kr shapes the commands r to improve tracking performance. In general, it is optimal to design the combined two degrees-of-freedom controller K in one step. However, in practice Ky is often designed first for disturbance rejection, and then Kr is designed

to improve reference tracking. This is the approach taken here.

[Figure 2.22: Two degrees-of-freedom controller, with reference prefilter Kr and feedback controller Ky acting on Kr r − ym, where ym = y + n.]

Let T = L(1 + L)^-1 (with L = G Ky) denote the complementary sensitivity function for the feedback system. Then for a one degree-of-freedom controller y = T r, whereas for a two degrees-of-freedom controller y = T Kr r. If the desired transfer function for reference tracking (often denoted the reference model) is Tref, then the corresponding ideal reference prefilter Kr satisfies T Kr = Tref, or

Kr(s) = T^-1(s) Tref(s)   (2.63)

Thus, in theory we may design Kr(s) to get any desired tracking response Tref(s). However, in practice it is not so simple, because the resulting Kr(s) may be unstable (if G(s) has RHP-zeros) or unrealizable, and also T Kr ≠ Tref if T(s) is not known exactly.

Remark. A convenient practical choice of prefilter is the lead-lag network

Kr(s) = (τlead s + 1) / (τlag s + 1)   (2.64)

Here we select τlead > τlag if we want to speed up the response, and τlead < τlag if we want to slow down the response. If one does not require fast reference tracking, a simple lag is often used (with τlead = 0), which is the case in many process control applications.

Example 2.9 Two degrees-of-freedom design for the disturbance process. In Example 2.8 we designed a loop-shaping controller K3(s) for the plant in (2.54) which gave good performance with respect to disturbances. However, the command tracking performance was not quite acceptable, as is shown by y3 in Figure 2.23. The rise time is 0.16 s, which is better than the required value of 0.3 s, but the overshoot is 24%, which is significantly higher than the maximum value of 5%. To improve upon this we can use a two degrees-of-freedom controller with Ky = K3, and we design Kr(s) based on (2.63) with the reference model Tref = 1/(0.1s+1), a first-order response with no overshoot.
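The prefilter computation (2.63) is easy to check numerically, since transfer functions can be evaluated pointwise at s = jω. A sketch using an illustrative low-order T(s) and the first-order reference model above:

```python
def T(s):
    """Illustrative low-order complementary sensitivity:
    T(s) = 1.5/(0.1s+1) - 0.5/(0.5s+1) = (0.7s+1)/((0.1s+1)(0.5s+1))."""
    return 1.5 / (0.1 * s + 1) - 0.5 / (0.5 * s + 1)

def Tref(s):
    """First-order reference model Tref = 1/(0.1s+1)."""
    return 1.0 / (0.1 * s + 1)

def Kr(s):
    """Ideal prefilter (2.63): Kr = T^-1 * Tref = (0.5s+1)/(0.7s+1)."""
    return (0.5 * s + 1) / (0.7 * s + 1)

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    # T * Kr reproduces the reference model exactly at every frequency
    assert abs(T(s) * Kr(s) - Tref(s)) < 1e-12
print("T*Kr matches Tref")
```

Note that Kr comes out as a simple lead-lag network of the form (2.64), with τlead < τlag to slow the overshooting T down to the first-order Tref.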

To get a low-order Kr(s), we may either use the actual T(s) and then find a low-order approximation of Kr(s), or we may start with a low-order approximation of T(s). We will do the latter. From the step response y3 in Figure 2.23 we approximate the response by two parts: a fast response with time constant 0.1 s and gain 1.5, and a slower response with time constant 0.5 s and gain −0.5 (the sum of the gains is 1). Thus we use

T(s) ≈ 1.5/(0.1s+1) − 0.5/(0.5s+1) = (0.7s + 1) / ((0.1s + 1)(0.5s + 1))

from which (2.63) yields Kr(s) = (0.5s+1)/(0.7s+1). Following closed-loop simulations we modified this slightly to arrive at the design

Kr3(s) = ((0.5s + 1)/(0.65s + 1)) · (1/(0.03s + 1))   (2.65)

where the term 1/(0.03s+1) was included to avoid the initial peaking of the input signal u(t) above 1. The tracking response with this two degrees-of-freedom controller is shown in Figure 2.23. The rise time is 0.25 s, which is better than the requirement of 0.3 s, and the overshoot is only 2.3%, which is better than the requirement of 5%. The disturbance response is the same as curve y3 in Figure 2.21. In conclusion, we are able to satisfy all specifications using a two degrees-of-freedom controller.

[Figure 2.23: Tracking responses with the one degree-of-freedom controller (K3) and the two degrees-of-freedom controller (K3, Kr3) for the disturbance process.]

Loop shaping applied to a flexible structure

The following example shows how the loop-shaping procedure for disturbance rejection, as outlined in this section, can be used to design a one degree-of-freedom controller for a very different kind of plant.

Example 2.10 Loop shaping for a flexible structure. Consider the following model of a flexible structure with a disturbance occurring at the plant input:

G(s) = Gd(s) = 2.5 s (s² + 1) / ((s² + 0.5²)(s² + 2²))   (2.66)

From the Bode magnitude plot in Figure 2.24(a) we see that |Gd(jω)| ≫ 1 around the resonance frequencies of 0.5 and 2 rad/s, so control is needed at these frequencies. The dashed line in Figure 2.24(b) shows the open-loop response to a unit step disturbance. The output is

seen to cycle between 2 and −2 (outside the allowed range −1 to 1), which confirms that control is needed. From (2.58), a controller which meets the specification |y(ω)| < 1 for |d(ω)| = 1 is given by |Kmin(jω)| = |G^-1 Gd| = 1. Indeed, the controller

K(s) = 1   (2.67)

turns out to be a good choice, as is verified by the closed-loop disturbance response (solid line) in Figure 2.24(b): the output goes up to about 0.5 and then returns to zero. The fact that the choice L(s) = G(s) gives closed-loop stability is not immediately obvious, since |L| has several gain crossover frequencies. However, instability cannot occur because the plant is "passive", with a phase of G above −180° at all frequencies.

[Figure 2.24: Flexible structure in (2.66): (a) magnitude plot of |G| = |Gd|, (b) open-loop and closed-loop disturbance responses with K = 1.]

2.6.6 Conclusions on loop shaping

The loop-shaping procedure outlined and illustrated by the examples above is well suited for relatively simple problems, as might arise for stable plants where L(s) crosses the negative real axis only once. Although the procedure may be extended to more complicated systems, the effort required by the engineer is considerably greater. In particular, it may be very difficult to achieve stability.

Fortunately, there exist alternative methods where the burden on the engineer is much less. One such approach is the Glover-McFarlane H∞ loop-shaping procedure, which is discussed in detail in Chapter 9. It is essentially a two-step procedure: in the first step the engineer, as outlined in this section, decides on a loop shape, |L| (denoted the "shaped plant" Gs), and in the second step an optimization provides the necessary phase corrections to get a stable and robust design. The method is applied to the disturbance process in Example 9.3 on page 387.

Another design philosophy, which deals directly with shaping both the gain and phase of L(s), is the quantitative feedback theory (QFT) of Horowitz (1991).

2.7 Shaping closed-loop transfer functions

In this section, we introduce the reader to the shaping of the magnitudes of closed-loop transfer functions, where we synthesize a controller by minimizing an H∞ performance objective. The topic is discussed further in Section 3.4.6 and in more detail in Chapter 9.

Specifications directly on the open-loop transfer function L = GK, as in the loop-shaping design procedures of the previous section, make the design process transparent, as it is clear how changes in L(s) affect the controller K(s) and vice versa. An apparent problem with this approach, however, is that it does not consider directly the closed-loop transfer functions, such as S and T, which determine the final response. The following approximations apply:

|L| ≫ 1  ⟹  S ≈ L^-1,  T ≈ 1
|L| ≪ 1  ⟹  S ≈ 1,  T ≈ L

but in the crossover region, where |L(jω)| is close to 1, one cannot infer anything about S and T from the magnitude of the loop shape, |L(jω)|. For example, S and T may experience large peaks if L(jω) is close to −1; the phase of L(jω) is crucial in this frequency range.

An alternative design strategy is to directly shape the magnitudes of closed-loop transfer functions, such as S(s) and T(s). Such a design strategy can be formulated as an H∞ optimal control problem, thus automating the actual controller design and leaving the engineer with the task of selecting reasonable bounds ("weights") on the desired closed-loop transfer functions. Before explaining how this may be done in practice, we discuss the terms H∞ and H2.

2.7.1 The terms H∞ and H2

The H∞ norm of a stable scalar transfer function f(s) is simply the peak value of |f(jω)| as a function of frequency, that is,

‖f(s)‖∞ ≜ max_ω |f(jω)|   (2.68)

Remark. Strictly speaking, we should here replace "max" (the maximum value) by "sup" (the supremum, the least upper bound). This is because the maximum may only be approached as ω → ∞ and may therefore not actually be achieved. However, for engineering purposes there is no difference between "sup" and "max".

The terms H∞ norm and H∞ control are intimidating at first, and a name conveying the engineering significance of H∞ would have been better. After all, we are simply talking about a design method which aims to press down the peak(s) of one or more selected transfer functions. However, the term H∞, which is purely mathematical, has now established itself in the control community. To make the term less forbidding, an explanation of its background may help.
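Numerically, the H∞ norm in (2.68) is just a peak search over frequency. A sketch for an illustrative lightly damped second-order f(s), whose peak is known analytically:

```python
def f_mag(w, zeta=0.1):
    """|f(jw)| for the illustrative second-order transfer function
    f(s) = 1/(s^2 + 2*zeta*s + 1) with damping zeta = 0.1."""
    s = 1j * w
    return abs(1.0 / (s * s + 2 * zeta * s + 1))

# H-infinity norm: a peak search over a fine logarithmic frequency grid.
ws = [10 ** (-2 + 4 * k / 20000) for k in range(20001)]
hinf = max(f_mag(w) for w in ws)
print(round(hinf, 2))  # analytic peak: 1/(2*zeta*sqrt(1 - zeta^2)) = 5.03
```

For this f the peak occurs at a finite resonance frequency, so "max" is actually attained; the sup/max distinction in the remark above only matters when the peak is approached as ω → ∞.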

First, the symbol H stands for "Hardy space", and H∞ in the context of this book is the set of transfer functions with bounded ∞-norm, which is simply the set of stable and proper transfer functions. Similarly, the symbol H2 stands for the Hardy space of transfer functions with bounded 2-norm, which is the set of stable and strictly proper transfer functions. The symbol ∞ comes from the fact that the maximum magnitude over frequency may be written as

max_ω |f(jω)| = lim_{p→∞} ( ∫_{−∞}^{∞} |f(jω)|^p dω )^{1/p}

Essentially, by raising |f| to an infinite power we pick out its peak value.

The H2 norm of a strictly proper stable scalar transfer function is defined as

‖f(s)‖2 ≜ ( (1/2π) ∫_{−∞}^{∞} |f(jω)|² dω )^{1/2}   (2.69)

The factor 1/√(2π) is introduced to get consistency with the 2-norm of the corresponding impulse response; see (4.117). Note that the H2 norm of a semi-proper (or bi-proper) transfer function (where lim_{s→∞} f(s) is a non-zero constant) is infinite, whereas its H∞ norm is finite. An example of a semi-proper transfer function (with an infinite H2 norm) is the sensitivity function S = (I + GK)^-1.

2.7.2 Weighted sensitivity

As already discussed, the sensitivity function S is a very good indicator of closed-loop performance, both for SISO and MIMO systems. The main advantage of considering S is that, because we ideally want S small, it is sufficient to consider just its magnitude |S|; that is, we need not worry about its phase. Typical specifications in terms of S include:

1. Minimum bandwidth frequency ωB* (defined as the frequency where |S(jω)| crosses 0.707 from below).
2. Maximum tracking error at selected frequencies.
3. System type, or alternatively the maximum steady-state tracking error, A.
4. Shape of S over selected frequency ranges.
5. Maximum peak magnitude of S, ‖S(jω)‖∞ ≤ M.

The peak specification prevents amplification of noise at high frequencies and also introduces a margin of robustness; typically we select M = 2. Mathematically, these specifications may be captured by an upper bound, 1/|wP(s)|, on the magnitude of S, where wP(s) is a weight selected by the designer. The subscript P stands for performance, since S is mainly used as a performance indicator, and the performance requirement becomes

exceeds its upper bound. such as Û È Ë .25(b). must be less than one. an example is shown where the sensitivity. at some frequencies. Ë .70) ¸ ÛÈ Ë ¸ ÛÈ Ë ½ ½ (2.25(a). ½ Û È . Note that we usually do not use a log-scale for the magnitude when plotting weighted transfer functions. and in words the performance requirement is that the À ½ norm of the weighted sensitivity. The resulting weighted sensitivity. ÛÈ Ë therefore exceeds 1 at the same frequencies as is illustrated in Figure 2.25: Case where Ë exceeds its bound ½ ÛÈ .71) The last equivalence follows from the definition of the À ½ norm. . resulting in ÛÈ Ë ½ ½ Ë´ µ ½ ÛÈ ´ µ ½ (2. Û È Ë .Ä ËËÁ Ä ÇÆÌÊÇÄ 10 1 Ë Magnitude 10 0 ½ ÛÈ 10 −1 10 −2 10 −2 10 −1 Frequency [rad/s] 10 0 10 1 (a) Sensitivity 3 Ë and performance weight ÛÈ ÛÈ Ë ½ Magnitude 2 1 0 −2 10 10 −1 Frequency [rad/s] 10 0 10 1 (b) Weighted sensitivity ÛÈ Ë Figure 2. In Figure 2.
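The equivalence in (2.70)-(2.71) is easy to check numerically. The sketch below (Python/NumPy is used here as an assumed stand-in for the book's MATLAB; the loop $L = 10/s$ and the weight parameters $M = 2$, $A = 10^{-4}$ are illustrative choices, not from this example) evaluates the peak of $|w_P S|$ on a frequency grid for two bandwidth demands:

```python
import numpy as np

# Sensitivity of the simple loop L(s) = 10/s  ->  S(s) = s/(s + 10)
def S(s):
    return s / (s + 10.0)

# Performance weight (2.72); M = 2 and A = 1e-4 are illustrative values
def wP(s, M=2.0, wb=1.0, A=1e-4):
    return (s / M + wb) / (s + wb * A)

w = np.logspace(-6, 4, 100001)   # frequency grid [rad/s]
s = 1j * w

# ||wP*S||_inf < 1 iff |S| stays below 1/|wP| at all frequencies, cf. (2.70)-(2.71)
peak_ok  = np.max(np.abs(wP(s) * S(s)))            # demand wb = 1: achieved bandwidth is 10
peak_bad = np.max(np.abs(wP(s, wb=20.0) * S(s)))   # demand wb = 20: more than the loop delivers
print(peak_ok, peak_bad)
```

With the modest bandwidth demand the weighted peak stays near 0.5 (specification met); demanding twice the achieved bandwidth pushes the peak to about 2 (specification violated).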

[Figure 2.26: Inverse of performance weight. Exact and asymptotic plot of $1/|w_P(j\omega)|$ in (2.72).]

Weight selection. An asymptotic plot of a typical upper bound, $1/|w_P|$, is shown in Figure 2.26. The weight illustrated may be represented by

$$w_P(s) = \frac{s/M + \omega_B^*}{s + \omega_B^* A} \qquad (2.72)$$

and we see that $1/|w_P(j\omega)|$ (the upper bound on $|S|$) is equal to $A$ at low frequencies, is equal to $M$ at high frequencies, and the asymptote crosses 1 at the frequency $\omega_B^*$, which is approximately the bandwidth requirement.

Remark. For this weight the loop shape $L = \omega_B^*/s$ yields an $S$ which exactly matches the bound (2.71) at frequencies below the bandwidth and easily satisfies (by a factor $M$) the bound at higher frequencies.

The insights gained in the previous section on loop-shaping design are very useful for selecting weights. For example, for disturbance rejection we must satisfy $|S(j\omega)| < 1/|G_d(j\omega)|$ at all frequencies (assuming the variables have been scaled to be less than 1 in magnitude). It then follows that a good initial choice for the performance weight is to let $|w_P(j\omega)|$ look like $|G_d(j\omega)|$ at frequencies where $|G_d| > 1$.

In some cases, in order to improve performance, we may want a steeper slope for $L$ (and $S$) below the bandwidth, and then a higher-order weight may be selected. A weight which asks for a slope of $-2$ for $L$ in a range of frequencies below crossover is

$$w_P(s) = \frac{(s/M^{1/2} + \omega_B^*)^2}{(s + \omega_B^* A^{1/2})^2} \qquad (2.73)$$

Exercise 2.4 Make an asymptotic plot of $1/|w_P|$ in (2.73) and compare with the asymptotic plot of $1/|w_P|$ in (2.72).
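The three asymptotic levels of the bound $1/|w_P|$ in (2.72) can be verified directly by evaluating the weight. A minimal sketch (Python/NumPy assumed; the numbers $M = 2$, $\omega_B^* = 1$, $A = 10^{-4}$ are example values, not prescribed by the text):

```python
import numpy as np

def wP(w, M=2.0, wb=1.0, A=1e-4):
    """Performance weight w_P(s) = (s/M + wb)/(s + wb*A) at s = jw."""
    s = 1j * w
    return (s / M + wb) / (s + wb * A)

bound = lambda w: 1.0 / abs(wP(w))   # upper bound on |S(jw)|

print(bound(1e-8))   # low frequency: approaches A = 1e-4
print(bound(1e8))    # high frequency: approaches M = 2
print(bound(1.0))    # near wb the bound passes through ~1
```

This confirms the shape sketched in Figure 2.26: tight tracking ($A$) at low frequencies, a relaxed peak bound ($M$) at high frequencies, and the crossover of the asymptote at $\omega_B^*$.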

2.7.3 Stacked requirements: mixed sensitivity

The specification $\|w_P S\|_\infty < 1$ puts a lower bound on the bandwidth, but not an upper one, and nor does it allow us to specify the roll-off of $L(s)$ above the bandwidth. To do this one can make demands on another closed-loop transfer function, for example, on the complementary sensitivity $T = I - S = GK(I+GK)^{-1}$. For instance, one might specify an upper bound $1/|w_T|$ on the magnitude of $T$ to make sure that $L$ rolls off sufficiently fast at high frequencies. Also, to achieve robustness or to restrict the magnitude of the input signals, $u = KS(r - G_d d)$, one may place an upper bound, $1/|w_u|$, on the magnitude of $KS$. To combine these "mixed sensitivity" specifications, a "stacking approach" is usually used, resulting in the following overall specification:

$$\|N\|_\infty = \max_\omega \bar{\sigma}(N(j\omega)) < 1, \qquad N = \begin{bmatrix} w_P S \\ w_T T \\ w_u KS \end{bmatrix} \qquad (2.74)$$

We here use the maximum singular value, $\bar{\sigma}(N(j\omega))$, to measure the size of the matrix $N$ at each frequency. For SISO systems, $N$ is a vector and $\bar{\sigma}(N)$ is the usual Euclidean vector norm:

$$\bar{\sigma}(N) = \sqrt{|w_P S|^2 + |w_T T|^2 + |w_u KS|^2} \qquad (2.75)$$

After selecting the form of $N$ and the weights, the $H_\infty$ optimal controller is obtained by solving the problem

$$\min_K \|N(K)\|_\infty \qquad (2.76)$$

where $K$ is a stabilizing controller. A good tutorial introduction to $H_\infty$ control is given by Kwakernaak (1993).

Remark 1 The stacking procedure is selected for mathematical convenience as it does not allow us to specify exactly the bounds on the individual transfer functions as described above. For example, assume that $\phi_1(K)$ and $\phi_2(K)$ are two functions of $K$ (which might represent $\phi_1(K) = |w_P S|$ and $\phi_2(K) = |w_T T|$) and that we want to achieve

$$\phi_1 < 1 \ \text{and} \ \phi_2 < 1 \qquad (2.77)$$

This is similar to, but not quite the same as, the stacked requirement

$$\sqrt{\phi_1^2 + \phi_2^2} < 1 \qquad (2.78)$$

Objectives (2.77) and (2.78) are very similar when either $\phi_1$ or $\phi_2$ is small, but in the "worst" case when $\phi_1 = \phi_2$ we get from (2.78) that $\phi_1 \le 0.707$ and $\phi_2 \le 0.707$; that is, there is a possible "error" in each specification equal to at most a factor $\sqrt{2} \approx 3$ dB. In general, with $n$ stacked requirements the resulting error is at most $\sqrt{n}$. This inaccuracy in the specifications is something we are probably willing to sacrifice in the interests of mathematical convenience. In any case, the specifications are in general rather rough, and are effectively knobs for the engineer to select and adjust until a satisfactory design is reached.
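For a SISO design the stack $N(j\omega)$ is a column vector, so (2.75) follows because the only non-zero singular value of a column vector is its Euclidean norm. A small check (Python/NumPy assumed; the numeric values of the stacked terms are made up for illustration):

```python
import numpy as np

# Stack N(jw) = [wP*S; wT*T; wu*KS] at one frequency (illustrative values)
N = np.array([[0.6 + 0.2j], [0.5 - 0.1j], [0.3 + 0.0j]])

sigma_max = np.linalg.svd(N, compute_uv=False)[0]
print(abs(sigma_max - np.linalg.norm(N)))   # ~0: max singular value = vector 2-norm

# Worst case in Remark 1: two equal stacked terms phi1 = phi2 = 1/sqrt(2)
# just reach the stacked bound 1, although each individual term is only 0.707
phi = 1.0 / np.sqrt(2.0)
print(np.sqrt(phi**2 + phi**2))   # 1.0
```

This is exactly the $\sqrt{n}$ "error" factor of Remark 1: satisfying the stack at 1 only guarantees each term below 1, while the stack may force each term down to $1/\sqrt{n}$.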

Remark 2 Let $\gamma_0 = \min_K \|N(K)\|_\infty$ denote the optimal $H_\infty$ norm. An important property of $H_\infty$ optimal controllers is that they yield a flat frequency response, that is, $\bar{\sigma}(N(j\omega)) = \gamma_0$ at all frequencies. The practical implication is that, except for at most a factor $\sqrt{n}$, the transfer functions resulting from a solution to (2.76) will be close to $\gamma_0$ times the bounds selected by the designer. This gives the designer a mechanism for directly shaping the magnitudes of $\bar{\sigma}(S)$, $\bar{\sigma}(T)$, $\bar{\sigma}(KS)$, and so on.

Example 2.11 $H_\infty$ mixed sensitivity design for the disturbance process. Consider again the plant in (2.54), and consider an $H_\infty$ mixed sensitivity $S/KS$ design in which

$$N = \begin{bmatrix} w_P S \\ w_u KS \end{bmatrix} \qquad (2.79)$$

Appropriate scaling of the plant has been performed so that the inputs should be about 1 or less in magnitude, and we therefore select a simple input weight $w_u = 1$. The performance weight is chosen, in the form of (2.72), as

$$w_{P1}(s) = \frac{s/M + \omega_B^*}{s + \omega_B^* A}, \quad M = 1.5, \ \omega_B^* = 10, \ A = 10^{-4} \qquad (2.80)$$

A value of $A = 0$ would ask for integral action in the controller, but to get a stable weight and to prevent numerical problems in the algorithm used to synthesize the controller, we have moved the integrator slightly by using a small non-zero value for $A$. The value $\omega_B^* = 10$ has been selected to achieve approximately the desired crossover frequency of 10 rad/s; this has no practical significance in terms of control performance. The $H_\infty$ problem is solved with the $\mu$-toolbox in MATLAB using the commands in Table 2.3.

Table 2.3: MATLAB program to synthesize an $H_\infty$ controller

% Uses the Mu-toolbox
G=nd2sys(1,conv([10 1],conv([0.05 1],[0.05 1])),200); % Plant is G.
M=1.5; wb=10; A=1.e-4;
Wp = nd2sys([1/M wb], [1 wb*A]); Wu = 1; % Weights.
%
% Generalized plant P is found with function sysic:
% (see Section 3.8 for more details)
%
systemnames = 'G Wp Wu';
inputvar = '[ r(1); u(1)]';
outputvar = '[Wp; Wu; r-G]';
input_to_G = '[u]';
input_to_Wp = '[r-G]';
input_to_Wu = '[u]';
sysoutname = 'P';
cleanupsysic = 'yes';
sysic;
%
% Find H-infinity optimal controller:
%
nmeas=1; nu=1; gmn=0.5; gmx=20; tol=0.001;
[khinf,ghinf,gopt] = hinfsyn(P,nmeas,nu,gmn,gmx,tol);

For this problem, we achieved an optimal $H_\infty$ norm of 1.37, so the weighted sensitivity requirements are not quite satisfied (see design 1 in Figure 2.27, where the curve for $|S_1|$ is slightly above that for $1/|w_{P1}|$). Nevertheless, the design seems good, with $\|S\|_\infty = M_S = 1.30$.
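The synthesis itself requires the $\mu$-toolbox (or a modern equivalent), but the cost in (2.76) is easy to evaluate for any fixed controller. The sketch below (Python/NumPy assumed as a stand-in; the proportional controller $K = 1$ is an arbitrary choice for illustration, not the book's $H_\infty$ design) grids $\bar{\sigma}(N(j\omega)) = \sqrt{|w_P S|^2 + |w_u KS|^2}$ for the disturbance process and the weight (2.80):

```python
import numpy as np

def G(s):   # disturbance process, plant (2.54): 200/((10s+1)(0.05s+1)^2)
    return 200.0 / ((10*s + 1) * (0.05*s + 1)**2)

def wP(s, M=1.5, wb=10.0, A=1e-4):   # performance weight (2.80)
    return (s/M + wb) / (s + wb*A)

K = 1.0                                # simple proportional controller (assumed)
w = np.logspace(-6, 4, 5000)
s = 1j * w
S = 1.0 / (1.0 + G(s) * K)             # sensitivity
KS = K * S                             # input usage (wu = 1)

cost = np.sqrt(np.abs(wP(s) * S)**2 + np.abs(KS)**2)
print(cost.max())   # ||N||_inf for this K
```

The peak is around 50, far above 1: a pure proportional controller leaves a large low-frequency error against the near-integral demand of $w_{P1}$, which is precisely why the optimizer ends up with (approximate) integral action.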

[Figure 2.28: Closed-loop step responses for two alternative $H_\infty$ designs (1 and 2) for the disturbance process: (a) tracking response, (b) disturbance response; curves $y_1(t)$ and $y_2(t)$ over 0-3 s.]

Further, for design 1, $\|T\|_\infty = M_T = 1.0$, $PM = 71.2^\circ$ and the gain crossover frequency is about 22 rad/s, and the tracking response is very good, as shown by curve $y_1$ in Figure 2.28(a). (The design is actually very similar to the loop-shaping design for references, which was an inverse-based controller.) However, we see from curve $y_1$ in Figure 2.28(b) that the disturbance response is very sluggish. If disturbance rejection is the main concern, then from our earlier discussion in Section 2.6.4 this motivates the need for a performance weight that specifies higher gains at low frequencies. We therefore try

$$w_{P2}(s) = \frac{(s/M^{1/2} + \omega_B^*)^2}{(s + \omega_B^* A^{1/2})^2}, \quad M = 1.5, \ \omega_B^* = 10, \ A = 10^{-4} \qquad (2.81)$$

The inverse of this weight is shown in Figure 2.27 (dashed line), and is seen to cross 1 in magnitude at about the same frequency as weight $w_{P1}$, but it specifies tighter control at lower frequencies. With the weight $w_{P2}$, we get a design with an optimal $H_\infty$ norm of 2.21.

[Figure 2.27: Inverse of performance weight (dashed line) and resulting sensitivity function (solid line) for two $H_\infty$ designs (1 and 2) for the disturbance process: curves $|S_1|$, $1/|w_{P1}|$ and $|S_2|$, $1/|w_{P2}|$.]

For design 2 the peaks are somewhat higher ($M_S \approx 1.6$, $M_T \approx 1.4$), with $PM = 43.3^\circ$ and a gain crossover frequency of about 11.3 rad/s. The disturbance response is very good, whereas the tracking response has a somewhat high overshoot; see curve $y_2$ in Figure 2.28(a). (The design is actually very similar to the loop-shaping design for disturbances, as was discussed in Example 2.9.)

In conclusion, design 1 is best for reference tracking whereas design 2 is best for disturbance rejection. To get a design with both good tracking and good disturbance rejection we need a two degrees-of-freedom controller.

2.8 Conclusion

The main purpose of this chapter has been to present the classical ideas and techniques of feedback control. We have concentrated on SISO systems so that insights into the necessary design trade-offs, and the design approaches available, can be properly developed before MIMO systems are considered. We also introduced the $H_\infty$ problem based on weighted sensitivity, for which typical performance weights are given in (2.72) and (2.73).

3 INTRODUCTION TO MULTIVARIABLE CONTROL

In this chapter, we introduce the reader to multi-input multi-output (MIMO) systems. We discuss the singular value decomposition (SVD), multivariable control, and multivariable right-half plane (RHP) zeros. The need for a careful analysis of the effect of uncertainty in MIMO systems is motivated by two examples. Finally we describe a general control configuration that can be used to formulate control problems. Many of these important topics are considered again in greater detail later in the book. The chapter should be accessible to readers who have attended a classical SISO control course.

3.1 Introduction

We consider a multi-input multi-output (MIMO) plant with $m$ inputs and $l$ outputs. Thus, the basic transfer function model is $y(s) = G(s)u(s)$, where $y$ is an $l \times 1$ vector, $u$ is an $m \times 1$ vector and $G(s)$ is an $l \times m$ transfer function matrix.

The main difference between a scalar (SISO) system and a MIMO system is the presence of directions in the latter. Directions are relevant for vectors and matrices, but not for scalars. However, despite the complicating factor of directions, most of the ideas and techniques presented in the previous chapter on SISO systems may be extended to MIMO systems. The singular value decomposition (SVD) provides a useful way of quantifying multivariable directionality, and we will see that most SISO results involving the absolute value (magnitude) may be generalized to multivariable systems by considering the maximum singular value. An exception to this is Bode's stability condition, which has no generalization in terms of singular values. This is related to the fact that it is difficult to find a good measure of phase for MIMO transfer functions.

If we make a change in the first input, $u_1$, then this will generally affect all the outputs, $y_1, y_2, \ldots, y_l$; that is, there is interaction between the inputs and outputs. A non-interacting plant would result if $u_1$ only affects $y_1$, $u_2$ only affects $y_2$, and so on.

These examples demonstrate that the simple gain and phase margins used for SISO systems do not generalize easily to MIMO systems.

3.2 Transfer functions for MIMO systems

[Figure 3.1: Block diagrams for (a) the cascade rule and (b) the feedback rule.]

The following three rules are useful when evaluating transfer functions for MIMO systems. Although most of the formulas for scalar systems apply, we must exercise some care since matrix multiplication is not commutative; that is, in general $GK \neq KG$.

1. Cascade rule. For the cascade (series) interconnection of $G_1$ and $G_2$ in Figure 3.1(a), the overall transfer function matrix is $G = G_2 G_1$.

Remark. The order of the transfer function matrices in $G = G_2 G_1$ (from left to right) is the reverse of the order in which they appear in the block diagram of Figure 3.1(a) (from left to right). This has led some authors to use block diagrams in which the inputs enter at the right hand side. However, in this case the order of the transfer function blocks in a feedback path will be reversed compared with their order in the formula, so no fundamental benefit is obtained.

2. Feedback rule. With reference to the positive feedback system in Figure 3.1(b), we have $v = (I - L)^{-1} u$ where $L = G_2 G_1$ is the transfer function around the loop.
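Both the non-commutativity warning and the feedback rule can be checked numerically. A small sketch (Python/NumPy assumed; the explicit $2\times2$ matrices stand in for transfer matrices frozen at one frequency):

```python
import numpy as np

G1 = np.array([[1.0, 1.0], [0.0, 1.0]])
G2 = np.array([[1.0, 0.0], [1.0, 1.0]])
I = np.eye(2)

# Cascade rule: the series interconnection is G2*G1, and in general G2G1 != G1G2
print(np.allclose(G2 @ G1, G1 @ G2))   # False for these matrices

# Feedback rule (positive feedback): v = (I - L)^{-1} u with L = G2*G1
L = G2 @ G1
u = np.array([1.0, 2.0])
v = np.linalg.solve(I - L, u)
print(np.allclose(v, u + L @ v))       # True: v satisfies the loop equation v = u + L v
```

Solving $(I-L)v = u$ rather than forming the inverse explicitly is the usual numerically preferred route; the recovered relation $v = u + Lv$ is exactly the signal balance around the positive-feedback loop.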
The chapter is organized as follows. We start by presenting some rules for determining multivariable transfer functions from block diagrams. Then we introduce the singular value decomposition and show how it may be used to study directions in multivariable systems. We also give a brief introduction to multivariable control and decoupling. We then consider a simple plant with a multivariable RHP-zero and show how the effect of this zero may be shifted from one output channel to another. After this we discuss robustness, and study two example plants, each $2 \times 2$, which demonstrate that the simple gain and phase margins used for SISO systems do not generalize easily to MIMO systems. Finally, we consider a general control problem formulation.

At this point, you may find it useful to browse through Appendix A where some important mathematical tools are described. Exercises to test your understanding of this mathematics are given at the end of this chapter.

3. Push-through rule. For matrices of appropriate dimensions,

$$G_1 (I - G_2 G_1)^{-1} = (I - G_1 G_2)^{-1} G_1 \qquad (3.1)$$

Proof: Equation (3.1) is verified by pre-multiplying both sides by $(I - G_1 G_2)$ and post-multiplying both sides by $(I - G_2 G_1)$.

Exercise 3.1 Derive the cascade and feedback rules.

The cascade and feedback rules can be combined into the following MIMO rule for evaluating closed-loop transfer functions from block diagrams.

MIMO Rule: Start from the output and write down the blocks as you meet them when moving backwards (against the signal flow), taking the most direct path towards the input. If you exit from a feedback loop then include a term $(I - L)^{-1}$ for positive feedback (or $(I + L)^{-1}$ for negative feedback) where $L$ is the transfer function around that loop (evaluated against the signal flow starting at the point of exit from the loop).

Care should be taken when applying this rule to systems with nested loops. For such systems it is probably safer to write down the signal equations and eliminate internal variables to get the transfer function of interest. The rule is best understood by considering an example.

[Figure 3.2: Block diagram corresponding to (3.2).]

Example 3.1 The transfer function for the block diagram in Figure 3.2 is given by

$$z = \left( P_{11} + P_{12} K (I - P_{22} K)^{-1} P_{21} \right) w \qquad (3.2)$$

To derive this from the MIMO rule above we start at the output $z$ and move backwards towards $w$. There are two branches, one of which gives the term $P_{11}$ directly. In the other branch we move backwards and meet $P_{12}$ and then $K$. We then exit from a feedback loop and get a term $(I - L)^{-1}$ (positive feedback) with $L = P_{22} K$, and finally we meet $P_{21}$.

Negative feedback control systems

[Figure 3.3: Conventional negative feedback control system.]

For the negative feedback system in Figure 3.3, we define $L$ to be the loop transfer function as seen when breaking the loop at the output of the plant. Thus, for the case where the loop consists of a plant $G$ and a feedback controller $K$ we have

$$L = GK \qquad (3.3)$$

The sensitivity and complementary sensitivity are then defined as

$$S \triangleq (I + L)^{-1}, \qquad T \triangleq I - S = L(I + L)^{-1} \qquad (3.4)$$

In Figure 3.3, $T$ is the transfer function from $r$ to $y$, and $S$ is the transfer function from $d_1$ to $y$. $S$ and $T$ are sometimes called the output sensitivity and output complementary sensitivity, respectively, and to make this explicit one may use the notation $L_O \equiv L$, $S_O \equiv S$ and $T_O \equiv T$. This is to distinguish them from the corresponding transfer functions evaluated at the input to the plant. We define $L_I$ to be the loop transfer function as seen when breaking the loop at the input to the plant with negative feedback assumed. In Figure 3.3,

$$L_I = KG \qquad (3.5)$$

The input sensitivity and input complementary sensitivity functions are then defined as

$$S_I \triangleq (I + L_I)^{-1}, \qquad T_I \triangleq I - S_I = L_I (I + L_I)^{-1} \qquad (3.6)$$

In Figure 3.3, $-T_I$ is the transfer function from $d_2$ to $u$. Of course, for SISO systems $L_I = L$, $S_I = S$ and $T_I = T$.

Exercise 3.2 Use the MIMO rule to derive the transfer functions from $u$ to $y$ and from $u$ to $z$ in Figure 3.2.

Exercise 3.3 Use the MIMO rule to show that (2.18) corresponds to the negative feedback system in Figure 2.4; also see equations (2.16) to (2.20) which apply to MIMO systems.

Exercise 3.4 In Figure 3.3, what transfer function does $S_I$ represent? Evaluate the transfer functions from $d_1$ and $d_2$ to $r - y$. Use the push-through rule to rewrite the two transfer functions.
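The definitions above can be exercised numerically at a single frequency. A minimal sketch (Python/NumPy assumed; the $2\times2$ plant and controller values are arbitrary illustrative numbers) verifies $S + T = I$ and the push-through relation between output and input sensitivities:

```python
import numpy as np

# Plant and controller frozen at one frequency (illustrative numbers)
G = np.array([[2.0, 1.0], [0.5, 3.0]])
K = np.array([[1.0, 0.2], [0.1, 0.8]])
I = np.eye(2)

L = G @ K                          # loop broken at the plant OUTPUT, (3.3)
S = np.linalg.inv(I + L)           # output sensitivity
T = L @ np.linalg.inv(I + L)       # output complementary sensitivity

LI = K @ G                         # loop broken at the plant INPUT, (3.5)
SI = np.linalg.inv(I + LI)         # input sensitivity

print(np.allclose(S + T, I))       # True: S + T = I
print(np.allclose(G @ SI, S @ G))  # True: G*S_I = S*G (push-through rule)
```

Note that $GK \neq KG$, so $S \neq S_I$ in general; the identity $G S_I = S G$ is what connects the two.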

The following relationships are useful:

$$(I + L)^{-1} + L(I + L)^{-1} = S + T = I \qquad (3.7)$$
$$G(I + KG)^{-1} = (I + GK)^{-1} G \qquad (3.8)$$
$$GK(I + GK)^{-1} = G(I + KG)^{-1} K = (I + GK)^{-1} GK \qquad (3.9)$$
$$T = L(I + L)^{-1} = (I + L^{-1})^{-1} \qquad (3.10)$$

Note that the matrices $G$ and $K$ in (3.7)-(3.10) need not be square, whereas $L = GK$ is square. (3.7) follows trivially by factorizing out the term $(I + L)^{-1}$ from the right. (3.8) says that $G S_I = S G$ and follows from the push-through rule. (3.9) also follows from the push-through rule. (3.10) can be derived from the identity $M_1^{-1} M_2^{-1} = (M_2 M_1)^{-1}$. Similar relationships, but with $G$ and $K$ interchanged, apply for the transfer functions evaluated at the plant input. To assist in remembering (3.7)-(3.10), note that $G$ comes first (because the transfer function is evaluated at the output) and then $G$ and $K$ alternate in sequence. A given transfer matrix never occurs twice in sequence; for example, the closed-loop transfer function $G(I + GK)^{-1}$ does not exist (unless $G$ is repeated in the block diagram, but then these $G$'s would actually represent two different physical entities).

Remark 1 The above identities are clearly useful when deriving transfer functions analytically, but they are also useful for numerical calculations involving state-space realizations, e.g. $L(s) = C(sI - A)^{-1}B + D$. For example, assume we have been given a state-space realization for $L = GK$ with $n$ states (so $A$ is an $n \times n$ matrix) and we want to find the state-space realization of $T$. Then we can first form $S = (I + L)^{-1}$ with $n$ states, and then multiply it by $L$ to obtain $T = SL$ with $2n$ states. However, a minimal realization of $T$ has only $n$ states. This may be obtained numerically using model reduction, but it is preferable to find it directly using $T = I - S$; see (3.7).

Remark 2 Note also that the right identity in (3.10) can only be used to compute the state-space realization of $T$ if that of $L^{-1}$ exists, so $L$ must be semi-proper with $D \neq 0$ (which is rarely the case in practice). On the other hand, since $L$ is square, we can always compute the frequency response of $L(j\omega)^{-1}$ (except at frequencies where $L(s)$ has $j\omega$-axis poles), and then obtain $T(j\omega)$ from (3.10).

Remark 3 In Appendix A.6 we present some factorizations of the sensitivity function which will be useful in later applications. For example, (A.139) relates the sensitivity of a perturbed plant, $S' = (I + G'K)^{-1}$, to that of the nominal plant, $S = (I + GK)^{-1}$. We have

$$S' = S (I + E_O T)^{-1}, \qquad E_O \triangleq (G' - G) G^{-1} \qquad (3.11)$$

where $E_O$ is an output multiplicative perturbation representing the difference between $G$ and $G'$, and $T$ is the nominal complementary sensitivity function.
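The factorization (3.11) is easy to confirm numerically at a single frequency. A sketch (Python/NumPy assumed; the nominal plant, controller and the 10% random perturbation are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
I = np.eye(2)
G  = np.array([[2.0, 1.0], [0.5, 3.0]])        # nominal plant (illustrative)
Gp = G + 0.1 * rng.standard_normal((2, 2))     # perturbed plant G'
K  = np.array([[1.0, 0.2], [0.1, 0.8]])

S  = np.linalg.inv(I + G @ K)                  # nominal sensitivity
T  = G @ K @ S                                 # nominal complementary sensitivity
EO = (Gp - G) @ np.linalg.inv(G)               # output multiplicative perturbation

Sp_direct = np.linalg.inv(I + Gp @ K)          # perturbed sensitivity, computed directly
Sp_factor = S @ np.linalg.inv(I + EO @ T)      # perturbed sensitivity via (3.11)
print(np.allclose(Sp_direct, Sp_factor))       # True
```

The agreement follows from $I + G'K = (I + E_O T)(I + GK)$, which is the one-line proof of (3.11).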

3.3 Multivariable frequency response analysis

The transfer function $G(s)$ is a function of the Laplace variable $s$ and can be used to represent a dynamic system. However, if we fix $s = s_0$ then we may view $G(s_0)$ simply as a complex matrix, which can be analyzed using standard tools in matrix algebra. In particular, the choice $s_0 = j\omega$ is of interest since $G(j\omega)$ represents the response to a sinusoidal signal of frequency $\omega$.

3.3.1 Obtaining the frequency response from $G(s)$

[Figure 3.4: System $G(s)$ with input $d$ and output $y$.]

The frequency domain is ideal for studying directions in multivariable systems at any given frequency. Consider the system $G(s)$ in Figure 3.4 with input $d(s)$ and output $y(s)$:

$$y(s) = G(s) d(s) \qquad (3.12)$$

(We here denote the input by $d$ rather than by $u$ to avoid confusion with the matrix $U$ used below in the singular value decomposition.) In Section 2.1 we considered the sinusoidal response of scalar systems. These results may be directly generalized to multivariable systems by considering the elements $g_{ij}$ of the matrix $G$. We have:

$g_{ij}(j\omega)$ represents the sinusoidal response from input $j$ to output $i$.

To be more specific, apply to input channel $j$ a scalar sinusoidal signal given by

$$d_j(t) = d_{j0} \sin(\omega t + \alpha_j) \qquad (3.13)$$

This input signal is persistent, that is, it has been applied since $t = -\infty$. Then the corresponding persistent output signal in channel $i$ is also a sinusoid with the same frequency,

$$y_i(t) = y_{i0} \sin(\omega t + \beta_i) \qquad (3.14)$$

where the amplification (gain) and phase shift may be obtained from the complex number $g_{ij}(j\omega)$ as follows:

$$y_{i0}/d_{j0} = |g_{ij}(j\omega)|, \qquad \beta_i - \alpha_j = \angle g_{ij}(j\omega) \qquad (3.15)$$

In phasor notation, see (2.7) and (2.9), we may compactly represent the sinusoidal time response described in (3.13)-(3.15) by

$$y_i(\omega) = g_{ij}(j\omega) d_j(\omega) \qquad (3.16)$$
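Relations (3.13)-(3.15) are the familiar scalar frequency-response facts. A minimal numeric check (Python/NumPy assumed; the first-order element $g(s) = 1/(s+1)$ is an illustrative choice):

```python
import numpy as np

# g(s) = 1/(s + 1) evaluated at s = j*w: a persistent input sin(w t)
# produces the output y0*sin(w t + beta) with y0 = |g(jw)|, beta = angle(g(jw))
w = 1.0
g = 1.0 / (1j * w + 1.0)

gain = abs(g)                          # amplitude ratio y0/d0
phase_deg = np.degrees(np.angle(g))    # phase shift beta - alpha
print(gain, phase_deg)                 # ~0.707 and ~-45 degrees at the corner frequency
```

At the corner frequency $\omega = 1$ the gain has dropped to $1/\sqrt{2}$ and the output lags the input by $45^\circ$, exactly as the complex number $g(j1) = (1-j)/2$ encodes.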

where

$$d_j(\omega) = d_{j0} e^{j\alpha_j}, \qquad y_i(\omega) = y_{i0} e^{j\beta_i} \qquad (3.17)$$

Here the use of $\omega$ (and not $j\omega$) as the argument of $d_j(\omega)$ and $y_i(\omega)$ implies that these are complex numbers, representing at each frequency the magnitude and phase of the sinusoidal signals in (3.13) and (3.14).

The overall response to simultaneous input signals of the same frequency in several input channels is, by the superposition principle for linear systems, equal to the sum of the individual responses, and we have from (3.16)

$$y_i(\omega) = g_{i1}(j\omega) d_1(\omega) + g_{i2}(j\omega) d_2(\omega) + \cdots = \sum_j g_{ij}(j\omega) d_j(\omega) \qquad (3.18)$$

or in matrix form

$$y(\omega) = G(j\omega) d(\omega) \qquad (3.19)$$

where

$$d(\omega) = \begin{bmatrix} d_1(\omega) \\ d_2(\omega) \\ \vdots \\ d_m(\omega) \end{bmatrix} \quad \text{and} \quad y(\omega) = \begin{bmatrix} y_1(\omega) \\ y_2(\omega) \\ \vdots \\ y_l(\omega) \end{bmatrix} \qquad (3.20)$$

represent the vectors of sinusoidal input and output signals.

Example 3.2 Consider a $2 \times 2$ multivariable system where we simultaneously apply sinusoidal signals of the same frequency $\omega$ to the two input channels:

$$d(t) = \begin{bmatrix} d_1(t) \\ d_2(t) \end{bmatrix} = \begin{bmatrix} d_{10} \sin(\omega t + \alpha_1) \\ d_{20} \sin(\omega t + \alpha_2) \end{bmatrix} \qquad (3.21)$$

The corresponding output signal is

$$y(t) = \begin{bmatrix} y_1(t) \\ y_2(t) \end{bmatrix} = \begin{bmatrix} y_{10} \sin(\omega t + \beta_1) \\ y_{20} \sin(\omega t + \beta_2) \end{bmatrix} \qquad (3.22)$$

which can be computed by multiplying the complex matrix $G(j\omega)$ by the complex vector $d(\omega)$:

$$y(\omega) = G(j\omega) d(\omega), \qquad y(\omega) = \begin{bmatrix} y_{10} e^{j\beta_1} \\ y_{20} e^{j\beta_2} \end{bmatrix}, \qquad d(\omega) = \begin{bmatrix} d_{10} e^{j\alpha_1} \\ d_{20} e^{j\alpha_2} \end{bmatrix} \qquad (3.23)$$

3.3.2 Directions in multivariable systems

For a SISO system, $y = Gd$, the gain at a given frequency is simply

$$\frac{|y(\omega)|}{|d(\omega)|} = \frac{|G(j\omega) d(\omega)|}{|d(\omega)|} = |G(j\omega)|$$

The gain depends on the frequency $\omega$, but since the system is linear it is independent of the input magnitude $|d(\omega)|$.

Things are not quite as simple for MIMO systems, where the input and output signals are both vectors, and we need to "sum up" the magnitudes of the elements in each vector by use of some norm, as discussed in Appendix A. If we select the vector 2-norm, the usual measure of length, then at a given frequency $\omega$ the magnitude of the vector input signal is

$$\|d(\omega)\|_2 = \sqrt{\sum_j |d_j(\omega)|^2} = \sqrt{d_{10}^2 + d_{20}^2 + \cdots} \qquad (3.24)$$

and the magnitude of the vector output signal is

$$\|y(\omega)\|_2 = \sqrt{\sum_i |y_i(\omega)|^2} = \sqrt{y_{10}^2 + y_{20}^2 + \cdots} \qquad (3.25)$$

The gain of the system $G(s)$ for a particular input signal $d(\omega)$ is then given by the ratio

$$\frac{\|y(\omega)\|_2}{\|d(\omega)\|_2} = \frac{\|G(j\omega) d(\omega)\|_2}{\|d(\omega)\|_2} = \frac{\sqrt{y_{10}^2 + y_{20}^2 + \cdots}}{\sqrt{d_{10}^2 + d_{20}^2 + \cdots}} \qquad (3.26)$$

Again the gain depends on the frequency $\omega$, and again it is independent of the input magnitude $\|d(\omega)\|_2$. However, for a MIMO system there are additional degrees of freedom and the gain depends also on the direction of the input $d$. The maximum gain as the direction of the input is varied is the maximum singular value of $G$,

$$\max_{d \neq 0} \frac{\|Gd\|_2}{\|d\|_2} = \max_{\|d\|_2 = 1} \|Gd\|_2 = \bar{\sigma}(G) \qquad (3.27)$$

whereas the minimum gain is the minimum singular value of $G$,

$$\min_{d \neq 0} \frac{\|Gd\|_2}{\|d\|_2} = \min_{\|d\|_2 = 1} \|Gd\|_2 = \underline{\sigma}(G) \qquad (3.28)$$

The first identities in (3.27) and (3.28) follow because the gain is independent of the input magnitude for a linear system.

Example 3.3 For a system with two inputs, $d = [d_{10}\ \ d_{20}]^T$, the gain is in general different for the following five inputs:

$$d_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\quad d_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\quad d_3 = \begin{bmatrix} 0.707 \\ 0.707 \end{bmatrix},\quad d_4 = \begin{bmatrix} 0.707 \\ -0.707 \end{bmatrix},\quad d_5 = \begin{bmatrix} 0.6 \\ -0.8 \end{bmatrix}$$

(which all have the same magnitude $\|d\|_2 = 1$ but are in different directions). For the $2 \times 2$ system

$$G_1 = \begin{bmatrix} 5 & 4 \\ 3 & 2 \end{bmatrix} \qquad (3.29)$$
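The direction dependence of the gain in Example 3.3 can be reproduced directly. A sketch (Python/NumPy assumed) computes the five gains for $G_1$ in (3.29) and checks that all of them lie between the smallest and largest singular values:

```python
import numpy as np

G1 = np.array([[5.0, 4.0], [3.0, 2.0]])   # the system in (3.29)
inputs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
          np.array([0.707, 0.707]), np.array([0.707, -0.707]),
          np.array([0.6, -0.8])]

# Gain ||G1 d||_2 / ||d||_2 for each input direction, cf. (3.26)
gains = [np.linalg.norm(G1 @ d) / np.linalg.norm(d) for d in inputs]
print([round(g, 2) for g in gains])

# Every gain is sandwiched between the minimum and maximum singular values
smin, smax = np.linalg.svd(G1, compute_uv=False)[::-1]
print(round(smin, 3), round(smax, 3))
```

The gains come out as roughly 5.83, 4.47, 7.28, 1.00 and 0.28, while the singular values are 0.272 and 7.343: the same matrix amplifies some unit-length inputs 25 times more than others.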

we compute the following output vectors:

$$y_1 = \begin{bmatrix} 5 \\ 3 \end{bmatrix},\quad y_2 = \begin{bmatrix} 4 \\ 2 \end{bmatrix},\quad y_3 = \begin{bmatrix} 6.36 \\ 3.54 \end{bmatrix},\quad y_4 = \begin{bmatrix} 0.707 \\ 0.707 \end{bmatrix},\quad y_5 = \begin{bmatrix} -0.2 \\ 0.2 \end{bmatrix}$$

and the 2-norms of these five outputs (i.e. the gains for the five inputs) are

$$\|y_1\|_2 = 5.83,\quad \|y_2\|_2 = 4.47,\quad \|y_3\|_2 = 7.30,\quad \|y_4\|_2 = 1.00,\quad \|y_5\|_2 = 0.28$$

This dependency of the gain on the input direction is illustrated graphically in Figure 3.5, where we have used the ratio $d_{20}/d_{10}$ as an independent variable to represent the input direction. We see that, depending on the ratio $d_{20}/d_{10}$, the gain varies between 0.27 and 7.34. These are the minimum and maximum singular values of $G_1$, respectively.

[Figure 3.5: Gain $\|G_1 d\|_2 / \|d\|_2$ as a function of $d_{20}/d_{10}$ for $G_1$ in (3.29); the maximum $\bar{\sigma}(G_1) = 7.34$ and the minimum $\underline{\sigma}(G_1) = 0.27$ are marked.]

3.3.3 Eigenvalues are a poor measure of gain

Before discussing the singular value decomposition in more detail, we want to demonstrate that the magnitudes of the eigenvalues of a transfer function matrix, e.g. $|\lambda_i(G(j\omega))|$, do not provide a useful means of generalizing the SISO gain, $|G(j\omega)|$. First of all, eigenvalues can only be computed for square systems, and even then they can be very misleading. To see this, consider the system $y = Gd$ with

$$G = \begin{bmatrix} 0 & 100 \\ 0 & 0 \end{bmatrix} \qquad (3.30)$$

which has both eigenvalues $\lambda_i$ equal to zero. However, with an input vector $d = [0\ \ 1]^T$ we get an output vector $y = [100\ \ 0]^T$; to conclude from the eigenvalues that the system gain is zero is clearly misleading.

The "problem" is that the eigenvalues measure the gain only for the special case when the inputs and the outputs are in the same direction, namely in the direction of the eigenvectors. To see this, let $t_i$ be an eigenvector of $G$ and consider an input $d = t_i$. Then the output is $y = G t_i = \lambda_i t_i$ where $\lambda_i$ is the corresponding eigenvalue. We get $\|y\|/\|d\| = |\lambda_i|$,
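The eigenvalue pitfall in (3.30) takes three lines to demonstrate. A sketch (Python/NumPy assumed):

```python
import numpy as np

G = np.array([[0.0, 100.0], [0.0, 0.0]])   # the matrix in (3.30)

eigvals = np.linalg.eigvals(G)                    # both eigenvalues are zero
sigma_max = np.linalg.svd(G, compute_uv=False)[0] # worst-case gain is 100

print(eigvals)
print(sigma_max)
print(G @ np.array([0.0, 1.0]))   # the input [0, 1] is amplified to [100, 0]
```

The eigenvalues see only inputs aligned with the (single) eigenvector direction $[1,\,0]^T$, where the gain really is zero; the singular value reports the gain over all input directions.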

so $|\lambda_i|$ measures the gain in the direction $t_i$. This may be useful for stability analysis, but not for performance.

To find useful generalizations of $|G|$ for the case when $G$ is a matrix, we need the concept of a matrix norm, denoted $\|G\|$. Two important properties which must be satisfied for a matrix norm are the triangle inequality

$$\|G_1 + G_2\| \le \|G_1\| + \|G_2\| \qquad (3.31)$$

and the multiplicative property

$$\|G_1 G_2\| \le \|G_1\| \cdot \|G_2\| \qquad (3.32)$$

(see Appendix A.5 for more details). As we may expect, the magnitude of the largest eigenvalue, $\rho(G) \triangleq \max_i |\lambda_i(G)|$ (the spectral radius), does not satisfy the properties of a matrix norm; also see (A.115).

In Appendix A.5.2 we introduce several matrix norms, such as the Frobenius norm $\|G\|_F$, the sum norm $\|G\|_{\mathrm{sum}}$, the maximum column sum $\|G\|_{i1}$, the maximum row sum $\|G\|_{i\infty}$, and the maximum singular value $\|G\|_{i2} = \bar{\sigma}(G)$ (the latter three norms are induced by a vector norm; this is the reason for the subscript $i$). We will use all of these norms in this book, each depending on the situation. However, in this chapter we will mainly use the induced 2-norm, $\bar{\sigma}(G)$. Notice that $\bar{\sigma}(G) = 100$ for the matrix in (3.30).

Exercise 3.5 Compute the spectral radius and the five matrix norms mentioned above for the matrices in (3.29) and (3.30).

3.3.4 Singular value decomposition

The singular value decomposition (SVD) is defined in Appendix A.3. Here we are interested in its physical interpretation when applied to the frequency response of a MIMO system $G(s)$ with $m$ inputs and $l$ outputs.

Consider a fixed frequency $\omega$ where $G(j\omega)$ is a constant $l \times m$ complex matrix, and denote $G(j\omega)$ by $G$ for simplicity. Any matrix $G$ may be decomposed into its singular value decomposition, and we write

$$G = U \Sigma V^H \qquad (3.33)$$

where $\Sigma$ is an $l \times m$ matrix with $k = \min\{l, m\}$ non-negative singular values, $\sigma_i$, arranged in descending order along its main diagonal; the other entries are zero. The singular values are the positive square roots of the eigenvalues of $G^H G$, where $G^H$ is the complex conjugate transpose of $G$:

$$\sigma_i(G) = \sqrt{\lambda_i(G^H G)} \qquad (3.34)$$

$U$ is an $l \times l$ unitary matrix of output singular vectors, $u_i$, and $V$ is an $m \times m$ unitary matrix of input singular vectors, $v_i$. From (3.35) we see that the matrices $U$ and $V$ involve rotations and that their columns are orthonormal. This is illustrated by the SVD of a real $2 \times 2$ matrix, which can always be written in the form

$$G = \underbrace{\begin{bmatrix} \cos\theta_1 & -\sin\theta_1 \\ \sin\theta_1 & \cos\theta_1 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} \cos\theta_2 & \pm\sin\theta_2 \\ -\sin\theta_2 & \pm\cos\theta_2 \end{bmatrix}^T}_{V^T} \qquad (3.35)$$

where the angles $\theta_1$ and $\theta_2$ depend on the given matrix.

Input and output directions. The column vectors of $U$, denoted $u_i$, represent the output directions of the plant. They are orthogonal and of unit length (orthonormal), that is

$$\|u_i\|_2 = \sqrt{|u_{i1}|^2 + |u_{i2}|^2 + \cdots + |u_{il}|^2} = 1 \qquad (3.36)$$

$$u_i^H u_i = 1, \qquad u_i^H u_j = 0, \ i \neq j \qquad (3.37)$$

Likewise, the column vectors of $V$, denoted $v_i$, are orthogonal and of unit length, and represent the input directions. These input and output directions are related through the singular values. To see this, note that since $V$ is unitary we have $V^H V = I$, so (3.33) may be written as $GV = U\Sigma$, which for column $i$ becomes

$$G v_i = \sigma_i u_i \qquad (3.38)$$

where $v_i$ and $u_i$ are vectors, whereas $\sigma_i$ is a scalar. That is, if we consider an input in the direction $v_i$, then the output is in the direction $u_i$. Furthermore, since $\|v_i\|_2 = 1$ and $\|u_i\|_2 = 1$, we see that the $i$'th singular value $\sigma_i$ gives directly the gain of the matrix $G$ in this direction. In other words

$$\sigma_i(G) = \|G v_i\|_2 = \frac{\|G v_i\|_2}{\|v_i\|_2} \qquad (3.39)$$

Caution. It is standard notation to use the symbol $U$ to denote the matrix of output singular vectors. This is unfortunate, as it is also standard notation to use $u$ (lower case) to represent the input signal. The reader should be careful not to confuse these two.

Some advantages of the SVD over the eigenvalue decomposition for analyzing gains and directionality of multivariable plants are:

1. The singular values give better information about the gains of the plant.
2. The plant directions obtained from the SVD are orthogonal.
3. The SVD also applies directly to non-square plants.
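The defining properties (3.36)-(3.38) can be verified on the running example. A sketch (Python/NumPy assumed; note that `numpy.linalg.svd` returns $V^H$, not $V$):

```python
import numpy as np

G1 = np.array([[5.0, 4.0], [3.0, 2.0]])   # the system in (3.29)
U, s, Vh = np.linalg.svd(G1)
V = Vh.conj().T

# U and V are unitary: their columns are orthonormal, cf. (3.36)-(3.37)
print(np.allclose(U.conj().T @ U, np.eye(2)))
print(np.allclose(V.conj().T @ V, np.eye(2)))

# G v_i = sigma_i u_i, (3.38): each input direction v_i is mapped onto the
# output direction u_i with gain sigma_i
for i in range(2):
    print(np.allclose(G1 @ V[:, i], s[i] * U[:, i]))
```

All checks hold to machine precision; the singular values `s` come back sorted in descending order, matching the convention used for $\Sigma$ in (3.33).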

Maximum and minimum singular values. As already stated, it can be shown that the largest gain for any input direction is equal to the maximum singular value

$$\bar{\sigma}(G) \equiv \sigma_1(G) = \max_{d \neq 0} \frac{\|Gd\|_2}{\|d\|_2} = \frac{\|G v_1\|_2}{\|v_1\|_2} \qquad (3.40)$$

and that the smallest gain for any input direction (excluding inputs "wasted" in the nullspace of $G$ when there are more inputs than outputs) is equal to the minimum singular value

$$\underline{\sigma}(G) \equiv \sigma_k(G) = \min_{d \neq 0} \frac{\|Gd\|_2}{\|d\|_2} = \frac{\|G v_k\|_2}{\|v_k\|_2}, \qquad k = \min\{l, m\} \qquad (3.41)$$

Thus, for any vector $d$ we have that

$$\underline{\sigma}(G) \le \frac{\|Gd\|_2}{\|d\|_2} \le \bar{\sigma}(G) \qquad (3.42)$$

Define $u_1 = \bar{u}$, $v_1 = \bar{v}$, $u_k = \underline{u}$ and $v_k = \underline{v}$. Then it follows that

$$G\bar{v} = \bar{\sigma}\,\bar{u}, \qquad G\underline{v} = \underline{\sigma}\,\underline{u} \qquad (3.43)$$

The vector $\bar{v}$ corresponds to the input direction with largest amplification, and $\bar{u}$ is the corresponding output direction in which the inputs are most effective. The directions involving $\bar{v}$ and $\bar{u}$ are sometimes referred to as the "strongest", "high-gain" or "most important" directions. The next most important directions are associated with $v_2$ and $u_2$, and so on (see Appendix A.3.5), until the "least important", "weak" or "low-gain" directions which are associated with $\underline{v}$ and $\underline{u}$. For a "fat" matrix with more inputs than outputs ($m > l$), we can always choose a non-zero input $d$ in the nullspace of $G$ such that $Gd = 0$; such wasted inputs are excluded in (3.41). Note also that the directions as given by the singular vectors are not unique, in the sense that the elements in each pair of vectors ($u_i$, $v_i$) may be multiplied by a complex scalar $c$ of magnitude 1 ($|c| = 1$).

Example 3.3 continued. Consider again the system (3.29) with

$$G_1 = \begin{bmatrix} 5 & 4 \\ 3 & 2 \end{bmatrix} \qquad (3.44)$$

The singular value decomposition of $G_1$ is

$$G_1 = \underbrace{\begin{bmatrix} 0.872 & 0.490 \\ 0.490 & -0.872 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 7.343 & 0 \\ 0 & 0.272 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.794 & -0.608 \\ 0.608 & 0.794 \end{bmatrix}^H}_{V^H}$$

The largest gain of 7.343 is for an input in the direction $\bar{v} = [0.794\ \ 0.608]^T$. The smallest gain of 0.272 is for an input in the direction $\underline{v} = [-0.608\ \ 0.794]^T$. This confirms the findings on page 70.

Example 3.4 Shopping cart. Consider a shopping cart (supermarket trolley) with fixed wheels which we may want to move in three directions: forwards, sideways and upwards. This is a simple illustrative example where we can easily figure out the principal directions from experience. The strongest direction, corresponding to the largest singular value, will clearly be in the forwards direction. The next direction, corresponding to the second singular value, will be sideways. Finally, the most "difficult" or "weak" direction, corresponding to the smallest singular value, will be upwards (lifting up the cart).

For the shopping cart the gain depends strongly on the input direction, i.e. the plant is ill-conditioned. This may be quantified by the condition number: the ratio between the gains in the strong and weak directions, which for the shopping cart is very large. The control problem associated with the shopping cart can be described as follows: Assume we want to push the shopping cart sideways (maybe we are blocking someone). This is rather difficult (the plant has low gain in this direction) so a strong force is needed. However, if there is any uncertainty in our knowledge about the direction the cart is pointing, then some of our applied force will be directed forwards (where the plant gain is large) and the cart will suddenly move forward with an undesired large speed. We thus see that the control of an ill-conditioned plant may be especially difficult if there is input uncertainty which can cause the input signal to "spread" from one input direction to another. We will discuss this in more detail later.
Example 3.5 Distillation process. Consider the following steady-state model of a distillation column

$$G = \begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix} \qquad (3.45)$$

The variables have been scaled as discussed in Section 1.4. Thus, since the elements are much larger than 1 in magnitude, this suggests that there will be no problems with input constraints. However, this is somewhat misleading, as the gain in the low-gain direction (corresponding to the smallest singular value) is actually only just above 1. To see this, consider the SVD of $G$:

$$G = \underbrace{\begin{bmatrix} 0.625 & -0.781 \\ 0.781 & 0.625 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 197.2 & 0 \\ 0 & 1.39 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.707 & -0.708 \\ -0.708 & -0.707 \end{bmatrix}^H}_{V^H} \qquad (3.46)$$

From the first input singular vector, $\bar{v} = \begin{bmatrix} 0.707 & -0.708 \end{bmatrix}^T$, we see that the gain is 197.2 when we increase one input and decrease the other input by a similar amount. From the second input singular vector, $\underline{v} = \begin{bmatrix} -0.708 & -0.707 \end{bmatrix}^T$, we see that if we change both inputs by the same amount then the gain is only 1.39.

Since in (3.45) both inputs affect both outputs, we say that the system is interactive. This follows from the relatively large off-diagonal elements in (3.45). Furthermore, the system is ill-conditioned; that is, some combinations of the inputs have a strong effect on the outputs, whereas other combinations have a weak effect on the outputs. This may be quantified by the condition number, the ratio between the gains in the strong and weak directions, which here is $\gamma = 197.2/1.39 = 141.7$. The physics of this example is discussed in more detail below, and later in this chapter we will consider a simple controller design (see Motivating robustness example No. 2 in Section 3.7.2).
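The ill-conditioning of the distillation model (3.45) is easy to verify numerically; the following sketch (numpy assumed, as elsewhere in these examples) computes the extreme gains and the condition number:

```python
import numpy as np

# Steady-state distillation model (3.45), inputs/outputs already scaled
G = np.array([[ 87.8,  -86.4],
              [108.2, -109.6]])

sv = np.linalg.svd(G, compute_uv=False)   # singular values, descending
sigma_max, sigma_min = sv[0], sv[-1]
gamma = sigma_max / sigma_min             # condition number

print(round(sigma_max, 1), round(sigma_min, 2), round(gamma, 1))
# prints: 197.2 1.39 141.7
```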

Example 3.6 Physics of the distillation process. The model in (3.45) represents two-point (dual) composition control of a distillation column, where the top composition is to be controlled at $y_D = 0.99$ (output $y_1$) and the bottom composition at $x_B = 0.01$ (output $y_2$), using reflux L (input $u_1$) and boilup V (input $u_2$) as manipulated inputs (see Figure 10.8 on page 434). Note that we have here returned to the convention of using $u_1$ and $u_2$ to denote the manipulated inputs; the output singular vectors will be denoted by $\bar{u}$ and $\underline{u}$.

The $1,1$-element of the gain matrix $G$ is 87.8. Thus an increase in $u_1$ by 1 (with $u_2$ constant) yields a large steady-state change in $y_1$ of 87.8; that is, the outputs are very sensitive to changes in $u_1$. Similarly, an increase in $u_2$ by 1 (with $u_1$ constant) yields $y_1 = -86.4$; again this is a very large change, but in the opposite direction of that for the increase in $u_1$. We therefore see that changes in $u_1$ and $u_2$ counteract each other, and if we increase $u_1$ and $u_2$ simultaneously by 1, then the overall steady-state change in $y_1$ is only $87.8 - 86.4 = 1.4$.

Physically, the reason for this small change is that the compositions in the distillation column are only weakly dependent on changes in the internal flows (i.e. simultaneous changes in the internal flows L and V). This can also be seen from the smallest singular value, $\underline{\sigma}(G) = 1.39$, which is obtained for inputs in the low-gain direction $\underline{v}$. From the corresponding output singular vector $\underline{u}$ we see that the effect of such inputs is to move the outputs in different directions, that is, to change $y_1 - y_2$. Therefore, it takes a large control action to move the compositions in different directions, that is, to make both products purer simultaneously. This makes sense from a physical point of view and is a general property of distillation columns where both products are of high purity.

On the other hand, the distillation column is very sensitive to changes in external flows (i.e. increase $u_1 - u_2 = L - V$). This can be seen from the input singular vector $\bar{v}$ associated with the largest singular value. The reason for this is that the external distillate flow (which varies as $V - L$) has to be about equal to the amount of light component in the feed, and even a small imbalance leads to large changes in the product compositions. Thus, at least at steady-state, the distillation process is ill-conditioned.

For dynamic systems the singular values and their associated directions vary with frequency, and for control purposes it is usually the frequency range corresponding to the closed-loop bandwidth which is of main interest. The singular values are usually plotted as a function of frequency in a Bode magnitude plot with a log-scale for frequency and magnitude. Typical plots are shown in Figure 3.6.

Non-square plants. The SVD is also useful for non-square plants. For example, consider a plant with 2 inputs and 3 outputs. In this case the third output singular vector, $u_3$, tells us in which output direction the plant cannot be controlled. Similarly, for a plant with more inputs than outputs, the additional input singular vectors tell us in which directions the input will have no effect.
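The internal- versus external-flow argument of Example 3.6 can be checked directly by applying the two unit-length input combinations to $G$; this numpy sketch mirrors the discussion above:

```python
import numpy as np

G = np.array([[ 87.8,  -86.4],
              [108.2, -109.6]])

# u1 and u2 changed in opposite directions (external-flow-like, near v_bar)
d_opposite = np.array([1.0, -1.0]) / np.sqrt(2)
# u1 and u2 increased together (internal-flow-like, near v_underbar)
d_same = np.array([1.0, 1.0]) / np.sqrt(2)

gain_opposite = np.linalg.norm(G @ d_opposite)  # close to sigma_max ~ 197
gain_same     = np.linalg.norm(G @ d_same)      # close to sigma_min, here 1.4
```

The gain 1.4 for equal input changes is slightly above $\underline{\sigma}(G) = 1.39$, since $[1, 1]^T/\sqrt{2}$ is only approximately the weakest input direction.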

Example 3.7 Consider a non-square system $G_2$ with 3 inputs and 2 outputs, with singular value decomposition $G_2 = U_2 \Sigma_2 V_2^H$, where $\Sigma_2$ is a $2 \times 3$ matrix with the two singular values on its main diagonal. From the definition, the minimum singular value $\underline{\sigma}(G_2)$ is the smaller of these two, but note that an input in the direction of the third input singular vector, $v_3$, is in the nullspace of $G_2$ and yields an output $y = 0$.

Exercise 3.6 For a system with $m$ inputs and 1 output, what is the interpretation of the singular values and the associated input directions ($V$)? What is $U$ in this case?

[Figure 3.6: Typical plots of singular values as functions of frequency: (a) spinning satellite in (3.76); (b) distillation process in (3.81).]

3.3.5 Singular values for performance

So far we have used the SVD primarily to gain insight into the directionality of MIMO systems. But the maximum singular value is also very useful in terms of frequency-domain performance and robustness. We here consider performance.

For SISO systems we earlier found that $|S(j\omega)|$ evaluated as a function of frequency gives useful information about the effectiveness of feedback control. For example, it is the gain from a sinusoidal reference input (or output disturbance) to the control error.

For MIMO systems a useful generalization results if we consider the ratio $\|e(\omega)\|_2 / \|r(\omega)\|_2$, where $r$ is the vector of reference inputs, $e$ is the vector of control errors, and $\|\cdot\|_2$ is the vector 2-norm. As explained above, this gain depends on the direction of $r(\omega)$, and we have from (3.42) that it is bounded by the maximum and minimum singular values of $S$:

$$\underline{\sigma}(S(j\omega)) \leq \frac{\|e(\omega)\|_2}{\|r(\omega)\|_2} \leq \bar{\sigma}(S(j\omega)) \qquad (3.47)$$

In terms of performance, it is reasonable to require that the gain $\|e(\omega)\|_2 / \|r(\omega)\|_2$ remains small for any direction of $r(\omega)$, including the "worst-case" direction which gives a gain of $\bar{\sigma}(S(j\omega))$. Let $1/|w_P(j\omega)|$ (the inverse of the performance weight) represent the maximum allowed magnitude of $\|e\|_2/\|r\|_2$ at each frequency. This results in the following performance requirement:

$$\bar{\sigma}(S(j\omega)) < \frac{1}{|w_P(j\omega)|}, \ \forall\omega \iff \bar{\sigma}(w_P S) < 1, \ \forall\omega \iff \|w_P S\|_\infty < 1 \qquad (3.48)$$

where the $\mathcal{H}_\infty$ norm (see also page 55) is defined as the peak of the maximum singular value of the frequency response,

$$\|M(s)\|_\infty \triangleq \max_\omega \bar{\sigma}(M(j\omega)) \qquad (3.49)$$

Typical performance weights $w_P(s)$ are given in Section 2.7.2, which should be studied carefully.

The singular values of $S(j\omega)$ may be plotted as functions of frequency, as illustrated later in Figure 3.10(a). Typically, they are small at low frequencies where feedback is effective, and they approach 1 at high frequencies because any real system is strictly proper:

$$\omega \to \infty: \quad L(j\omega) \to 0 \ \Rightarrow \ S(j\omega) \to I \qquad (3.50)$$

The maximum singular value, $\bar{\sigma}(S(j\omega))$, usually has a peak larger than 1 around the crossover frequencies. This peak is undesirable, but it is unavoidable for real systems.

As for SISO systems, we define the bandwidth as the frequency up to which feedback is effective. For MIMO systems the bandwidth will depend on directions, and we have a bandwidth region between a lower frequency where the maximum singular value, $\bar{\sigma}(S)$, reaches 0.7 (the low-gain or worst-case direction), and a higher frequency where the minimum singular value, $\underline{\sigma}(S)$, reaches 0.7 (the high-gain or best direction). If we want to associate a single bandwidth frequency with a multivariable system, then we consider the worst-case (low-gain) direction, and define

- Bandwidth, $\omega_B$: frequency where $\bar{\sigma}(S)$ crosses $\frac{1}{\sqrt{2}} \approx 0.7$ from below.
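The typical frequency dependence of $\bar{\sigma}(S)$ and $\underline{\sigma}(S)$ described above can be sketched numerically. The loop transfer matrix below is a hypothetical example chosen purely for illustration (it is not one of the book's plants); it has integral action, so both singular values of $S$ are small at low frequency and approach 1 well beyond the bandwidth:

```python
import numpy as np

K0 = np.array([[2.0, 1.0],    # hypothetical 2x2 loop gain: L(s) = K0 / s
               [1.0, 2.0]])
I = np.eye(2)

def singular_values_S(w):
    """Singular values [sigma_bar, sigma_underbar] of S = (I + L)^-1 at s = jw."""
    L = K0 / (1j * w)
    S = np.linalg.inv(I + L)
    return np.linalg.svd(S, compute_uv=False)

lo = singular_values_S(0.01)   # feedback effective: both values << 1
hi = singular_values_S(100.0)  # far above bandwidth: S ~ I, both values ~ 1
```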

It is then understood that the bandwidth is at least $\omega_B$ for any direction of the input (reference or disturbance) signal. Since $S = (I + L)^{-1}$, (A.52) yields

$$\underline{\sigma}(L) - 1 \leq \frac{1}{\bar{\sigma}(S)} \leq \underline{\sigma}(L) + 1 \qquad (3.51)$$

Thus at frequencies where feedback is effective (namely where $\underline{\sigma}(L) \gg 1$) we have $\bar{\sigma}(S) \approx 1/\underline{\sigma}(L)$, and at the bandwidth frequency (where $1/\bar{\sigma}(S(j\omega_B)) = \sqrt{2} \approx 1.41$) we have that $\underline{\sigma}(L(j\omega_B))$ is between 0.41 and 2.41. Finally, at higher frequencies, where for any real system $\underline{\sigma}(L)$ (and $\bar{\sigma}(L)$) is small, we have that $\bar{\sigma}(S) \approx 1$. Thus, the bandwidth is approximately where $\underline{\sigma}(L)$ crosses 1.

3.4 Control of multivariable plants

[Figure 3.7: One degree-of-freedom feedback control configuration]

Consider the simple feedback system in Figure 3.7. A conceptually simple approach to multivariable control is given by a two-step procedure in which we first design a "compensator" to deal with the interactions in $G$, and then design a diagonal controller using methods similar to those for SISO systems. This approach is discussed below.

The most common approach is to use a pre-compensator, $W_1(s)$, which counteracts the interactions in the plant and results in a "new" shaped plant:

$$G_s(s) = G(s)W_1(s) \qquad (3.52)$$

which is more diagonal and easier to control than the original plant $G(s)$. After finding a suitable $W_1(s)$ we can design a diagonal controller $K_s(s)$ for the shaped plant.

The overall controller is then

$$K(s) = W_1(s)K_s(s) \qquad (3.53)$$

In many cases effective compensators may be derived on physical grounds and may include nonlinear elements such as ratios.

Remark 1 Some design approaches in this spirit are the Nyquist Array technique of Rosenbrock (1974) and the characteristic loci technique of MacFarlane and Kouvaritakis (1977).

Remark 2 The $\mathcal{H}_\infty$ loop-shaping design procedure, described in detail in Section 9.4, is similar in that a pre-compensator is first chosen to yield a shaped plant, $G_s = GW_1$, and then a controller $K_s(s)$ is designed. The main difference is that in $\mathcal{H}_\infty$ loop shaping, $K_s(s)$ is a full multivariable controller, designed based on optimization (to optimize $\mathcal{H}_\infty$ robust stability).

3.4.1 Decoupling

Decoupling control results when the compensator $W_1$ is chosen such that $G_s = GW_1$ in (3.52) is diagonal at a selected frequency. The following different cases are possible:

1. Dynamic decoupling: $G_s(s)$ is diagonal at all frequencies. For example, with $G_s(s) = I$ and a square plant, we get $W_1 = G^{-1}(s)$ (disregarding the possible problems involved in realizing $G^{-1}(s)$). If we then select $K_s(s) = l(s)I$, the overall controller is

$$K(s) = K_{\mathrm{inv}}(s) \triangleq l(s)G^{-1}(s) \qquad (3.54)$$

We will later refer to (3.54) as an inverse-based controller. It results in a decoupled nominal system with identical loops, i.e. $L(s) = l(s)I$, $S(s) = \frac{1}{1+l(s)}I$ and $T(s) = \frac{l(s)}{1+l(s)}I$.

Remark. In some cases we may want to keep the diagonal elements in the shaped plant unchanged by selecting $W_1 = G^{-1}G_{\mathrm{diag}}$. In other cases we may want the diagonal elements in $W_1$ to be 1. This may be obtained by selecting $W_1 = G^{-1}((G^{-1})_{\mathrm{diag}})^{-1}$, and the off-diagonal elements of $W_1$ are then called "decoupling elements".

2. Steady-state decoupling: $G_s(0)$ is diagonal. This may be obtained by selecting a constant pre-compensator $W_1 = G^{-1}(0)$ (and for a non-square plant we may use the pseudo-inverse, provided $G(0)$ has full row (output) rank).

3. Approximate decoupling at frequency $\omega_o$: $G_s(j\omega_o)$ is as diagonal as possible. This is usually obtained by choosing a constant pre-compensator $W_1 = G_o^{-1}$, where $G_o$ is a real approximation of $G(j\omega_o)$. $G_o$ may be obtained, for example, using the align algorithm of Kouvaritakis (1974). The bandwidth frequency is a good selection for $\omega_o$ because the effect on performance of reducing interaction is normally greatest at this frequency.

The idea of decoupling control is appealing, but there are several difficulties:

1. As one might expect, decoupling may be very sensitive to modelling errors and uncertainties. This is illustrated below in Section 3.7.2, page 93.
2. The requirement of decoupling and the use of an inverse-based controller may not be desirable for disturbance rejection. The reasons are similar to those given for SISO systems in Chapter 2, and are discussed further below.
3. If the plant has RHP-zeros, then the requirement of decoupling generally introduces extra RHP-zeros into the closed-loop system (see Section 6.7.1, page 221).

Even though decoupling controllers may not always be desirable in practice, they are of interest from a theoretical point of view. They also yield insights into the limitations imposed by the multivariable interactions on achievable performance. One popular design method, which essentially yields a decoupling controller, is the internal model control (IMC) approach (Morari and Zafiriou, 1989).

Another common strategy, which avoids most of the problems just mentioned, is to use partial (one-way) decoupling, where $G_s(s)$ in (3.52) is upper or lower triangular.

3.4.2 Pre- and post-compensators and the SVD-controller

[Figure 3.8: Pre- and post-compensators, $W_1$ and $W_2$; $K_s$ is diagonal.]

The above pre-compensator approach may be extended by introducing a post-compensator $W_2(s)$, as shown in Figure 3.8. One then designs a diagonal controller $K_s$ for the shaped plant $W_2 G W_1$. The overall controller is then

$$K(s) = W_1 K_s W_2 \qquad (3.55)$$

The SVD-controller is a special case of a pre- and post-compensator design. Here

$$W_1 = V_o \quad \text{and} \quad W_2 = U_o^T \qquad (3.56)$$

where $V_o$ and $U_o$ are obtained from a singular value decomposition of $G_o = U_o \Sigma_o V_o^T$, where $G_o$ is a real approximation of $G(j\omega_o)$ at a given frequency $\omega_o$ (often around the bandwidth). SVD-controllers are studied by Hung and MacFarlane (1982), and by Hovd et al. (1994) who found that the SVD-controller structure is optimal in some cases, e.g. for plants consisting of symmetrically interconnected subsystems.

In summary, the SVD-controller provides a useful class of controllers. By selecting $K_s = l(s)\Sigma_o^{-1}$ a decoupling design is achieved, and selecting a diagonal $K_s$ with a low condition number ($\gamma(K_s)$ small) generally results in a robust controller (see Section 6.10).

3.4.3 Diagonal controller (decentralized control)

Another simple approach to multivariable controller design is to use a diagonal or block-diagonal controller $K(s)$. This is often referred to as decentralized control. Clearly, this works well if $G(s)$ is close to diagonal, because then the plant to be controlled is essentially a collection of independent sub-plants, and each element in $K(s)$ may be designed independently. However, if off-diagonal elements in $G(s)$ are large, then the performance with decentralized diagonal control may be poor because no attempt is made to counteract the interactions. Decentralized control is discussed in more detail in Chapter 10.

3.4.4 What is the shape of the "best" feedback controller?

Consider the problem of disturbance rejection. The closed-loop disturbance response is $y = SG_d d$. Suppose we have scaled the system (see Section 1.4) such that at each frequency the disturbances are of magnitude 1, $\|d\|_2 \leq 1$, and our performance requirement is that $\|y\|_2 \leq 1$. This is equivalent to requiring $\bar{\sigma}(SG_d) \leq 1$. In many cases there is a trade-off between input usage and performance, such that the controller that minimizes the input magnitude is one that yields all singular values of $SG_d$ equal to 1, that is, $\sigma_i(SG_d) = 1, \ \forall\omega$. This corresponds to

$$S_{\min}G_d = U_1 \qquad (3.57)$$

where $U_1(s)$ is some all-pass transfer function (which at each frequency has all its singular values equal to 1). The subscript min refers to the use of the smallest loop gain that satisfies the performance objective. For simplicity, we assume that $G_d$ is square, so $U_1(j\omega)$ is a unitary matrix. At frequencies where feedback is effective we have $S = (I+L)^{-1} \approx L^{-1}$, and (3.57) yields $L_{\min} = GK_{\min} \approx G_d U_1^{-1}$. In conclusion, the controller and loop shape with the minimum gain will often look like

$$K_{\min} \approx G^{-1}G_d U_2; \qquad L_{\min} \approx G_d U_2 \qquad (3.58)$$

where $U_2 = U_1^{-1}$ is some all-pass transfer function matrix. This provides a generalization of $|K_{\min}| \approx |G^{-1}G_d|$, which was derived in (2.58) for SISO systems, and the summary following (2.58) on page 48 therefore also applies to MIMO systems. Clearly, this controller is not unique. For example, for disturbances entering at the plant inputs, $G_d = G$, we get $K_{\min} = U_2$, so a simple constant unit gain controller yields a good trade-off between output performance and input usage.

We also note with interest that it is generally not possible to select a unitary matrix $U_2$ such that $L_{\min} = G_d U_2$ is diagonal, so a decoupling design is generally not optimal for disturbance rejection. These insights can be used as a basis for a loop-shaping design; see more on $\mathcal{H}_\infty$ loop-shaping in Chapter 9.

3.4.5 Multivariable controller synthesis

The above design methods are based on a two-step procedure in which we first design a pre-compensator (for decoupling control) or make an input-output pairing selection (for decentralized control), and then design a diagonal controller $K_s(s)$. Invariably this two-step procedure results in a suboptimal design.

The alternative is to synthesize directly a multivariable controller $K(s)$ based on minimizing some objective function (norm). We here use the word synthesize rather than design to stress that this is a more formalized approach. Optimization in controller design became prominent in the 1960's with "optimal control theory" based on minimizing the expected value of the output variance in the face of stochastic disturbances. Later, other approaches and norms were introduced, such as $\mathcal{H}_\infty$ optimal control.

3.4.6 Summary of mixed-sensitivity $\mathcal{H}_\infty$ synthesis ($S/KS$)

We here provide a brief summary of one multivariable synthesis approach, namely the mixed-sensitivity $S/KS$ $\mathcal{H}_\infty$ design method, which is used in later examples in this chapter. In the $S/KS$ problem, the objective is to minimize the $\mathcal{H}_\infty$ norm of

$$N = \begin{bmatrix} W_P S \\ W_u KS \end{bmatrix} \qquad (3.59)$$

This problem was discussed earlier for SISO systems, and a sample MATLAB file is provided on page 60 (Table 2.3). The following issues and guidelines are relevant when selecting the weights $W_P$ and $W_u$:

1. $KS$ is the transfer function from $r$ to $u$ in Figure 3.7, so for a system which has been scaled as in Section 1.4, a reasonable initial choice for the input weight is $W_u = I$.

2. $S$ is the transfer function from $r$ to $e = r - y$. A common choice for the performance weight is $W_P = \mathrm{diag}\{w_{Pi}\}$ with

$$w_{Pi} = \frac{s/M_i + \omega_{Bi}^*}{s + \omega_{Bi}^* A_i}, \qquad A_i \ll 1 \qquad (3.60)$$

Selecting $A_i \ll 1$ ensures approximate integral action with $S(0) \approx 0$. Often we select $M_i$ about 2 for all outputs, whereas $\omega_{Bi}^*$ may be different for each output. A large value of $\omega_{Bi}^*$ yields a faster response for output $i$.

3. To find a reasonable initial choice for the weight $W_P$, one can first obtain a controller with some other design method, plot the magnitude of the resulting diagonal elements of $S$ as a function of frequency, and select $w_{Pi}(s)$ as a rational approximation of $1/|S_{ii}|$.

4. For disturbance rejection, we may in some cases want a steeper slope for $w_{Pi}(s)$ at low frequencies than that given in (3.60); see the weight in (2.73). However, it may be better to consider the disturbances explicitly by considering the $\mathcal{H}_\infty$ norm of

$$N = \begin{bmatrix} W_P S & W_P S G_d \\ W_u KS & W_u KS G_d \end{bmatrix} \qquad (3.61)$$

or equivalently

$$N = \begin{bmatrix} W_P S W_d \\ W_u KS W_d \end{bmatrix} \quad \text{with} \quad W_d = \begin{bmatrix} I & G_d \end{bmatrix} \qquad (3.62)$$

where $N$ represents the transfer function from $[r \ \ d]$ to the weighted $e$ and $u$, possibly introducing a scalar parameter $\alpha$ to adjust the magnitude of $G_d$. In some situations we may want to adjust $W_P$ or $G_d$ in order to better satisfy our original objectives.

5. $T$ is the transfer function from $-n$ to $y$. To reduce sensitivity to noise and uncertainty, we want $T$ small at high frequencies, and so we may want additional roll-off in $L$. This can be achieved in several ways. One approach is to add $W_T T$ to the stack for $N$ in (3.59), where $W_T = w_T I$ and $w_T$ is smaller than 1 at low frequencies and large at high frequencies. A more direct approach is to add high-frequency dynamics, $W_1(s)$, to the plant model to ensure that the resulting shaped plant, $G_s = GW_1$, rolls off with the desired slope. We then obtain an $\mathcal{H}_\infty$ optimal controller $K_s$ for this shaped plant, and finally include $W_1(s)$ in the controller, $K = W_1 K_s$.

More details about $\mathcal{H}_\infty$ design are given in Chapter 9.

3.5 Introduction to multivariable RHP-zeros

By means of an example, we now give the reader an appreciation of the fact that MIMO systems have zeros, even though their presence may not be obvious from the elements of $G(s)$. As for SISO systems, we find that RHP-zeros impose fundamental limitations on control. (The helicopter case study in Section 12.2 illustrates this.)

The zeros $z$ of MIMO systems are defined as the values $s = z$ where $G(s)$ loses rank, and we can find the direction of a zero by looking at the direction in which the matrix $G(z)$ has zero gain. For square systems we essentially have that the poles and zeros of $G(s)$ are the poles and zeros of $\det G(s)$. However, this crude method may fail in some cases, as it may incorrectly cancel poles and zeros with the same location but different directions (see Sections 4.5 and 4.5.3 for more details).

Example 3.8 Consider the following plant

$$G(s) = \frac{1}{(0.2s+1)(s+1)} \begin{bmatrix} 1 & 1 \\ 1+2s & 2 \end{bmatrix} \qquad (3.63)$$

The responses to a step in each individual input are shown in Figure 3.9(a) and (b). We see from (3.63) that the plant is interactive, but for these two inputs there is no inverse response to indicate the presence of a RHP-zero. Nevertheless, the plant does have a multivariable RHP-zero at $z = 0.5$; that is, $G(s)$ loses rank at $s = 0.5$, and $\det G(0.5) = 0$. The singular value decomposition of $G(0.5)$ is

$$G(0.5) = \frac{1}{1.65}\begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} = \underbrace{\begin{bmatrix} 0.45 & 0.89 \\ 0.89 & -0.45 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 1.92 & 0 \\ 0 & 0 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.71 & 0.71 \\ 0.71 & -0.71 \end{bmatrix}^H}_{V^H} \qquad (3.64)$$

and we have, as expected, $\underline{\sigma}(G(0.5)) = 0$. The input and output directions corresponding to the RHP-zero are $\underline{v} = \begin{bmatrix} 0.71 \\ -0.71 \end{bmatrix}$ and $\underline{u} = \begin{bmatrix} 0.89 \\ -0.45 \end{bmatrix}$. Thus, the RHP-zero is associated with both inputs and with both outputs.

[Figure 3.9: Open-loop response for $G(s)$ in (3.63): (a) step in $u_1$, $u = [1 \ \ 0]^T$; (b) step in $u_2$, $u = [0 \ \ 1]^T$; (c) combined step in $u_1$ and $u_2$, $u = [1 \ \ {-1}]^T$.]

The presence of the multivariable RHP-zero is also observed from the time response in Figure 3.9(c), which is for a simultaneous input change in opposite directions, $u = [1 \ \ {-1}]^T$. We see that $y_2$ displays an inverse response, whereas $y_1$ happens to remain at zero for this particular input change.

To see how the RHP-zero affects the closed-loop response, we design a controller which minimizes the $\mathcal{H}_\infty$ norm of the weighted $S/KS$ matrix

$$N = \begin{bmatrix} W_P S \\ W_u KS \end{bmatrix} \qquad (3.65)$$

with weights

$$W_u = I, \qquad W_P = \begin{bmatrix} w_{P1} & 0 \\ 0 & w_{P2} \end{bmatrix}, \qquad w_{Pi} = \frac{s/M_i + \omega_{Bi}^*}{s + \omega_{Bi}^* A_i}, \quad A_i = 10^{-4} \qquad (3.66)$$

The MATLAB file for the design is the same as in Table 2.3 on page 60, except that we now have a $2 \times 2$ system. Since there is a RHP-zero at $z = 0.5$, we expect that this will somehow limit the bandwidth of the closed-loop system.

Design 1. We weight the two outputs equally and select

Design 1: $\quad M_1 = M_2 = 1.5; \quad \omega_{B1}^* = \omega_{B2}^* = z/2 = 0.25$

This yields an $\mathcal{H}_\infty$ norm for $N$ of 2.80, and the resulting singular values of $S$ are shown by the solid lines in Figure 3.10(a). The closed-loop response to a reference change $r = [1 \ \ {-1}]^T$ is shown by the solid lines in Figure 3.10(b). We note that both outputs behave rather poorly and both display an inverse response.

[Figure 3.10: Alternative designs for the $2 \times 2$ plant (3.63) with RHP-zero: (a) singular values of $S$; (b) response to change in reference, $r = [1 \ \ {-1}]^T$.]

Design 2. For MIMO plants, one can often move most of the deteriorating effect (e.g. inverse response) of a RHP-zero to a particular output channel. To illustrate this, we change the weight $w_{P2}$ so that more emphasis is placed on output 2. We do this by increasing the bandwidth requirement in output channel 2 by a factor of 100:

Design 2: $\quad M_1 = M_2 = 1.5; \quad \omega_{B1}^* = 0.25, \ \omega_{B2}^* = 25$

This yields an $\mathcal{H}_\infty$ norm for $N$ of 2.92. In this case we see from the dashed line in Figure 3.10(b) that the response for output 2 ($y_2$) is excellent, with no inverse response. However, this comes at the expense of output 1 ($y_1$), where the response is somewhat poorer than for Design 1.

Design 3. We can also interchange the weights $w_{P1}$ and $w_{P2}$ to stress output 1 rather than output 2. In this case (not shown) we get an excellent response in output 1 with no inverse response, but output 2 responds very poorly (much poorer than output 1 for Design 2). Furthermore, the $\mathcal{H}_\infty$ norm for $N$ is much larger, whereas it was only 2.92 for Design 2.

ÁÆÌÊÇ Í ÌÁÇÆ ÌÇ ÅÍÄÌÁÎ ÊÁ

Ä

ÇÆÌÊÇÄ

Thus, we see that it is easier, for this example, to get tight control of output 2 than of output 1. This may be expected from the output direction of the RHP-zero, $\underline{u} = \begin{bmatrix} 0.89 \\ -0.45 \end{bmatrix}$, which is mostly in the direction of output 1. We will discuss this in more detail in Section 6.5.1.

Remark 1 We find from this example that we can direct the effect of the RHP-zero to either of the two outputs. This is typical of multivariable RHP-zeros, but there are cases where the RHP-zero is associated with a particular output channel and it is not possible to move its effect to another channel. The zero is then called a "pinned zero" (see Section 4.6).

Remark 2 It is observed from the plot of the singular values in Figure 3.10(a) that we were able to obtain with Design 2 a very large improvement in the "good" direction (corresponding to $\underline{\sigma}(S)$) at the expense of only a minor deterioration in the "bad" direction (corresponding to $\bar{\sigma}(S)$). Thus Design 1 demonstrates a shortcoming of the $\mathcal{H}_\infty$ norm: only the worst direction (maximum singular value) contributes to the $\mathcal{H}_\infty$ norm, and it may not always be easy to get a good trade-off between the various directions.

3.6 Condition number and RGA
Two measures which are used to quantify the degree of directionality and the level of (two-way) interactions in MIMO systems are the condition number and the relative gain array (RGA), respectively. We here define the two measures and present an overview of their practical use. We do not give detailed proofs, but refer to other places in the book for further details.

3.6.1 Condition number
We define the condition number of a matrix as the ratio between the maximum and minimum singular values,

$$\gamma(G) \triangleq \bar{\sigma}(G)/\underline{\sigma}(G) \qquad (3.67)$$

A matrix with a large condition number is said to be ill-conditioned. For a non-singular (square) matrix $\underline{\sigma}(G) = 1/\bar{\sigma}(G^{-1})$, so $\gamma(G) = \bar{\sigma}(G)\,\bar{\sigma}(G^{-1})$. It then follows from (A.119) that the condition number is large if both $G$ and $G^{-1}$ have large elements.

The condition number depends strongly on the scaling of the inputs and outputs. To be more specific, if $D_1$ and $D_2$ are diagonal scaling matrices, then the condition numbers of the matrices $G$ and $D_1 G D_2$ may be arbitrarily far apart. In general, the matrix $G$ should be scaled on physical grounds, for example, by dividing each input and output by its largest expected or desired value as discussed in Section 1.4.

One might also consider minimizing the condition number over all possible scalings. This results in the minimized or optimal condition number, which is defined by

$$\gamma^*(G) = \min_{D_1, D_2} \gamma(D_1 G D_2) \qquad (3.68)$$


and can be computed using (A.73). The condition number has been used as an input-output controllability measure; in particular, it has been postulated that a large condition number indicates sensitivity to uncertainty. This is not true in general, but the reverse holds: if the condition number is small, then the multivariable effects of uncertainty are not likely to be serious (see (6.72)).

If the condition number is large (say, larger than 10), then this may indicate control problems:

1. A large condition number $\gamma(G) = \bar{\sigma}(G)/\underline{\sigma}(G)$ may be caused by a small value of $\underline{\sigma}(G)$, which is generally undesirable (on the other hand, a large value of $\bar{\sigma}(G)$ need not necessarily be a problem).
2. A large condition number may mean that the plant has a large minimized condition number, or equivalently, that it has large RGA-elements, which indicate fundamental control problems; see below.
3. A large condition number does imply that the system is sensitive to "unstructured" (full-block) input uncertainty (e.g. with an inverse-based controller; see (8.135)), but this kind of uncertainty often does not occur in practice. We therefore cannot generally conclude that a plant with a large condition number is sensitive to uncertainty; e.g. see the diagonal plant in Example 3.9.
3.6.2 Relative Gain Array (RGA)
The relative gain array (RGA) of a non-singular square matrix $G$ is a square matrix defined as

$$\mathrm{RGA}(G) = \Lambda(G) \triangleq G \times (G^{-1})^T \qquad (3.69)$$

where $\times$ denotes element-by-element multiplication (the Hadamard or Schur product). For a $2 \times 2$ matrix with elements $g_{ij}$ the RGA is

$$\Lambda(G) = \begin{bmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{bmatrix} = \begin{bmatrix} \lambda_{11} & 1-\lambda_{11} \\ 1-\lambda_{11} & \lambda_{11} \end{bmatrix}, \qquad \lambda_{11} = \frac{1}{1 - \dfrac{g_{12}g_{21}}{g_{11}g_{22}}} \qquad (3.70)$$
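Definition (3.69) translates into a one-line computation; the sketch below (numpy assumed, as in the earlier examples) also checks numerically two of the algebraic properties discussed in this section, namely that rows and columns sum to one and that the RGA is independent of scaling, plus the triangular case:

```python
import numpy as np

def rga(G):
    """Relative gain array: RGA(G) = G x (G^{-1})^T, elementwise product, (3.69)."""
    G = np.asarray(G, dtype=float)
    return G * np.linalg.inv(G).T

B = np.array([[5.0, 4.0],
              [3.0, 2.0]])
A = np.array([[1.0, 2.0],     # triangular matrix: RGA is the identity
              [0.0, 1.0]])

assert np.allclose(rga(B).sum(axis=0), 1.0)   # columns sum to one
assert np.allclose(rga(B).sum(axis=1), 1.0)   # rows sum to one
D1, D2 = np.diag([2.0, 7.0]), np.diag([0.5, 3.0])
assert np.allclose(rga(D1 @ B @ D2), rga(B))  # independent of scaling
assert np.allclose(rga(A), np.eye(2))
```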

Bristol (1966) originally introduced the RGA as a steady-state measure of interactions for decentralized control. Unfortunately, based on the original definition, many people have dismissed the RGA as being "only meaningful at $\omega = 0$". To the contrary, in most cases it is the value of the RGA at frequencies close to crossover which is most important.

The RGA has a number of interesting algebraic properties, of which the most important are (see Appendix A.4 for more details):

1. It is independent of input and output scaling.
2. Its rows and columns sum to one.
3. The sum-norm of the RGA, $\|\Lambda\|_{\mathrm{sum}}$, is very close to the minimized condition number $\gamma^*$; see (A.78). This means that plants with large RGA-elements are


always ill-conditioned (with a large value of ­ ´ µ), but the reverse may not hold (i.e. a plant with a large ­ ´ µ may have small RGA-elements). 4. A relative change in an element of equal to the negative inverse of its corresponding RGA-element yields singularity. 5. The RGA is the identity matrix if is upper or lower triangular. From the last property it follows that the RGA (or more precisely £   Á ) provides a measure of two-way interaction. Example 3.9 Consider a diagonal plant and compute the RGA and condition number,

$$G = \begin{bmatrix} 100 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \Lambda(G) = I, \qquad \gamma(G) = \frac{\bar\sigma(G)}{\underline\sigma(G)} = \frac{100}{1} = 100, \qquad \gamma^*(G) = 1 \tag{3.71}$$

Here the condition number is 100 which means that the plant gain depends strongly on the input direction. However, since the plant is diagonal there are no interactions, so $\Lambda(G) = I$ and the minimized condition number $\gamma^*(G) = 1$.
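As a numerical illustration of Example 3.9 (a Python/NumPy sketch), the condition number of the diagonal plant is large even though the RGA shows no interactions:

```python
import numpy as np

G = np.array([[100.0, 0.0],
              [0.0, 1.0]])

# Condition number gamma(G) = sigma_max / sigma_min
sv = np.linalg.svd(G, compute_uv=False)
gamma = sv[0] / sv[-1]
assert gamma == 100.0

# The RGA of a diagonal plant is the identity: no interactions
Lambda = G * np.linalg.inv(G).T
assert np.allclose(Lambda, np.eye(2))
```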

Example 3.10 Consider a triangular plant
$$G = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}, \qquad G^{-1} = \begin{bmatrix} 1 & -2 \\ 0 & 1 \end{bmatrix}$$
for which we get
$$\Lambda(G) = I, \qquad \gamma(G) = 5.83, \qquad \gamma^*(G) = 1 \tag{3.72}$$
Note that for a triangular matrix, the RGA is always the identity matrix and $\gamma^*(G)$ is always 1.

In addition to the algebraic properties listed above, the RGA has a surprising number of useful control properties:
1. The RGA is a good indicator of sensitivity to uncertainty:
(a) Uncertainty in the input channels (diagonal input uncertainty). Plants with large RGA-elements around the crossover frequency are fundamentally difficult to control because of sensitivity to input uncertainty (e.g. caused by uncertain or neglected actuator dynamics). In particular, decouplers or other inverse-based controllers should not be used for plants with large RGA-elements (see page 243).
(b) Element uncertainty. As implied by algebraic property no. 4 above, large RGA-elements imply sensitivity to element-by-element uncertainty. However, this kind of uncertainty may not occur in practice due to physical couplings between the transfer function elements. Therefore, diagonal input uncertainty (which is always present) is usually of more concern for plants with large RGA-elements.

Example 3.11 Consider again the distillation process for which we have at steady-state

$$G = \begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix}, \qquad G^{-1} = \begin{bmatrix} 0.3994 & -0.3149 \\ 0.3943 & -0.3200 \end{bmatrix}, \qquad \Lambda(G) = \begin{bmatrix} 35.1 & -34.1 \\ -34.1 & 35.1 \end{bmatrix} \tag{3.73}$$


In this case $\gamma(G) = 197.2/1.39 = 141.7$ is only slightly larger than $\gamma^*(G) = 138.27$. The magnitude sum of the elements in the RGA-matrix is $\|\Lambda\|_{\rm sum} = 138.27$. This confirms (A.79) which states that, for $2\times 2$ systems, $\|\Lambda(G)\|_{\rm sum} \approx \gamma^*(G)$ when $\gamma^*(G)$ is large. The condition number is large, but since the minimum singular value $\underline\sigma(G) = 1.39$ is larger than 1 this does not by itself imply a control problem. However, the large RGA-elements indicate control problems, and fundamental control problems are expected if an analysis shows that $G(j\omega)$ has large RGA-elements also in the crossover frequency range. (Indeed, the idealized dynamic model (3.81) used below has large RGA-elements at all frequencies, and we will confirm in simulations that there is a strong sensitivity to input channel uncertainty with an inverse-based controller.)
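The numbers quoted in Example 3.11 can be reproduced with a few lines of Python (NumPy); the tolerances simply account for the rounding used in the text:

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])

# RGA: large elements, approximately [[35.1, -34.1], [-34.1, 35.1]]
Lambda = G * np.linalg.inv(G).T
assert np.allclose(Lambda, [[35.1, -34.1], [-34.1, 35.1]], atol=0.05)

sv = np.linalg.svd(G, compute_uv=False)
gamma = sv[0] / sv[-1]           # condition number, about 141.7
sum_norm = np.abs(Lambda).sum()  # ||Lambda||_sum, about 138.3
assert abs(gamma - 141.7) < 0.5 and abs(sum_norm - 138.3) < 0.5
```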

2. RGA and RHP-zeros. If the sign of an RGA-element changes from $s = 0$ to $s = \infty$, then there is a RHP-zero in $G$ or in some subsystem of $G$ (see Theorem 10.5, page 449).
3. Non-square plants. The definition of the RGA may be generalized to non-square matrices by using the pseudo-inverse; see Appendix A.4.2. Extra inputs: If the sum of the elements in a column of RGA is small ($\ll 1$), then one may consider deleting the corresponding input. Extra outputs: If all elements in a row of RGA are small ($\ll 1$), then the corresponding output cannot be controlled (see Section 10.4).
4. Pairing and diagonal dominance. The RGA can be used as a measure of diagonal dominance (or more precisely, of whether the inputs or outputs can be scaled to obtain diagonal dominance), by the simple quantity
$$\text{RGA-number} = \|\Lambda(G) - I\|_{\rm sum} \tag{3.74}$$

For decentralized control we prefer pairings for which the RGA-number at crossover frequencies is close to 0 (see pairing rule 1 on page 445). Similarly, for certain multivariable design methods, it is simpler to choose the weights and shape the plant if we first rearrange the inputs and outputs to make the plant diagonally dominant with a small RGA-number.
5. RGA and decentralized control.
(a) Integrity: For stable plants avoid input-output pairing on negative steady-state RGA-elements. Otherwise, if the sub-controllers are designed independently each with integral action, then the interactions will cause instability either when all of the loops are closed, or when the loop corresponding to the negative relative gain becomes inactive (e.g. because of saturation) (see Theorem 10.4, page 447).
(b) Stability: Prefer pairings corresponding to an RGA-number close to 0 at crossover frequencies (see page 445).

Example 3.12 Consider a plant
$$G(s) = \frac{1}{s+1}\begin{bmatrix} s+1 & s+4 \\ 1 & 2 \end{bmatrix} \tag{3.75}$$


We find that $\lambda_{11}(\infty) = 2$ and $\lambda_{11}(0) = -1$ have different signs. Since none of the diagonal elements have RHP-zeros we conclude from Theorem 10.5 that $G(s)$ must have a RHP-zero. This is indeed true and $G(s)$ has a zero at $s = 2$.
For a detailed analysis of achievable performance of the plant (input-output controllability analysis), one must also consider the singular values, RGA and condition number as functions of frequency. In particular, the crossover frequency range is important. In addition, disturbances and the presence of unstable (RHP) plant poles and zeros must be considered. All these issues are discussed in much more detail in Chapters 5 and 6 where we discuss achievable performance and input-output controllability analysis for SISO and MIMO plants, respectively.
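The conclusion of Example 3.12 is easy to confirm numerically. The Python sketch below (NumPy) evaluates $\lambda_{11}$ at $s = 0$ and at a large $s$ approximating $s = \infty$, and locates the RHP-zero as the root of $\det G(s)$:

```python
import numpy as np

def G(s):
    """Plant of Example 3.12: G(s) = 1/(s+1) * [[s+1, s+4], [1, 2]]."""
    return np.array([[s + 1, s + 4],
                     [1.0, 2.0]]) / (s + 1)

def lam11(s):
    g = G(s)
    return 1.0 / (1.0 - g[0, 1] * g[1, 0] / (g[0, 0] * g[1, 1]))

# lambda_11 changes sign between s = 0 and s -> infinity ...
assert np.isclose(lam11(0.0), -1.0)
assert np.isclose(lam11(1e6), 2.0, atol=1e-3)

# ... which signals a RHP-zero: det G(s) = (s-2)/(s+1)^2 vanishes at s = 2
assert np.isclose(np.linalg.det(G(2.0)), 0.0)
```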

3.7 Introduction to MIMO robustness
To motivate the need for a deeper understanding of robustness, we present two examples which illustrate that MIMO systems can display a sensitivity to uncertainty not found in SISO systems. We focus our attention on diagonal input uncertainty, which is present in any real system and often limits achievable performance because it enters between the controller and the plant.

3.7.1 Motivating robustness example no. 1: Spinning Satellite
Consider the following plant (Doyle, 1986; Packard et al., 1993) which can itself be motivated by considering the angular velocity control of a satellite spinning about one of its principal axes:

$$G(s) = \frac{1}{s^2 + a^2}\begin{bmatrix} s - a^2 & a(s+1) \\ -a(s+1) & s - a^2 \end{bmatrix}, \qquad a = 10 \tag{3.76}$$

A minimal state-space realization, $G = C(sI - A)^{-1}B + D$, is
$$\begin{bmatrix} A & B \\ \hline C & D \end{bmatrix} = \begin{bmatrix} 0 & a & 1 & 0 \\ -a & 0 & 0 & 1 \\ \hline 1 & a & 0 & 0 \\ -a & 1 & 0 & 0 \end{bmatrix} \tag{3.77}$$

The plant has a pair of $j\omega$-axis poles at $s = \pm ja$ so it needs to be stabilized. Let us apply negative feedback and try the simple diagonal constant controller
$$K = I$$
The complementary sensitivity function is
$$T(s) = GK(I + GK)^{-1} = \frac{1}{s+1}\begin{bmatrix} 1 & a \\ -a & 1 \end{bmatrix} \tag{3.78}$$


Nominal stability (NS). The closed-loop system has two poles at $s = -1$ and so it is stable. This can be verified by evaluating the closed-loop state matrix
$$A_{cl} = A - BKC = \begin{bmatrix} 0 & a \\ -a & 0 \end{bmatrix} - \begin{bmatrix} 1 & a \\ -a & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}$$
(To derive $A_{cl}$ use $\dot x = Ax + Bu$, $y = Cx$ and $u = -Ky$.)

Nominal performance (NP). The singular values of $L = GK = G$ are shown in Figure 3.6(a), page 77. We see that $\underline\sigma(L) = 1$ at low frequencies and starts dropping off at about $\omega = 10$. Since $\underline\sigma(L)$ never exceeds 1, we do not have tight control in the low-gain direction for this plant (recall the discussion following (3.51)), so we expect poor closed-loop performance. This is confirmed by considering $S$ and $T$. For example, at steady-state $\bar\sigma(T) = 10.05$ and $\bar\sigma(S) = 10$. Furthermore, the large off-diagonal elements in $T(s)$ in (3.78) show that we have strong interactions in the closed-loop system. (For reference tracking, however, this may be counteracted by use of a two degrees-of-freedom controller.)

Robust stability (RS). Now let us consider stability robustness. In order to determine stability margins with respect to perturbations in each input channel, one may consider Figure 3.11 where we have broken the loop at the first input. The loop transfer function at this point (the transfer function from $w_1$ to $z_1$) is $L_1(s) = 1/s$ (which can be derived from $t_{11}(s) = \frac{1}{1+s} = \frac{L_1(s)}{1+L_1(s)}$). This corresponds to an infinite gain margin and a phase margin of $90°$. On breaking the loop at the second input we get the same result. This suggests good robustness properties irrespective of the value of $a$. However, the design is far from robust as a further analysis shows. Consider input gain uncertainty, and let $\epsilon_1$ and $\epsilon_2$ denote the relative error in the gain

[Figure 3.11: Checking stability margins "one-loop-at-a-time" — the loop is broken at the first input, between the perturbation signals $z_1$ and $w_1$.]

in each input channel. Then
$$u_1' = (1 + \epsilon_1)u_1, \qquad u_2' = (1 + \epsilon_2)u_2 \tag{3.79}$$


where $u_1'$ and $u_2'$ are the actual changes in the manipulated inputs, while $u_1$ and $u_2$ are the desired changes as computed by the controller. It is important to stress that this diagonal input uncertainty, which stems from our inability to know the exact values of the manipulated inputs, is always present. In terms of a state-space description, (3.79) may be represented by replacing $B$ by
$$B' = \begin{bmatrix} 1 + \epsilon_1 & 0 \\ 0 & 1 + \epsilon_2 \end{bmatrix}$$
The corresponding closed-loop state matrix is
$$A_{cl}' = A - B'KC = \begin{bmatrix} 0 & a \\ -a & 0 \end{bmatrix} - \begin{bmatrix} 1 + \epsilon_1 & 0 \\ 0 & 1 + \epsilon_2 \end{bmatrix}\begin{bmatrix} 1 & a \\ -a & 1 \end{bmatrix} \tag{3.80}$$
which has a characteristic polynomial given by
$$\det(sI - A_{cl}') = s^2 + \underbrace{(2 + \epsilon_1 + \epsilon_2)}_{a_1}\,s + \underbrace{1 + \epsilon_1 + \epsilon_2 + (a^2 + 1)\epsilon_1\epsilon_2}_{a_0}$$
The perturbed system is stable if and only if both the coefficients $a_0$ and $a_1$ are positive. We therefore see that the system is always stable if we consider uncertainty in only one channel at a time (at least as long as the channel gain is positive). More precisely, we have stability for $(-1 < \epsilon_1 < \infty,\ \epsilon_2 = 0)$ and $(\epsilon_1 = 0,\ -1 < \epsilon_2 < \infty)$. This confirms the infinite gain margin seen earlier. However, the system can only tolerate small simultaneous changes in the two channels. For example, let $\epsilon_1 = -\epsilon_2$; then the system is unstable ($a_0 < 0$) for
$$|\epsilon_1| > \frac{1}{\sqrt{a^2 + 1}} \approx 0.1$$
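This conclusion is easily confirmed by computing closed-loop eigenvalues directly (a Python/NumPy sketch of the perturbed state matrix in (3.80)):

```python
import numpy as np

a = 10.0
A = np.array([[0.0, a], [-a, 0.0]])
C = np.array([[1.0, a], [-a, 1.0]])

def acl(e1, e2):
    """Closed-loop state matrix A - B'KC with B' = diag(1+e1, 1+e2), K = I."""
    Bp = np.diag([1.0 + e1, 1.0 + e2])
    return A - Bp @ C

def stable(e1, e2):
    return np.max(np.linalg.eigvals(acl(e1, e2)).real) < 0

assert stable(0.0, 0.0)         # nominal closed loop
assert stable(5.0, 0.0)         # huge error in one channel only: still stable
assert stable(0.09, -0.09)      # below the bound 1/sqrt(a^2+1) ~ 0.1
assert not stable(0.15, -0.15)  # slightly above the bound: unstable
```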

In summary, we have found that checking single-loop margins is inadequate for MIMO problems. We have also observed that large values of $\bar\sigma(T)$ or $\bar\sigma(S)$ indicate robustness problems. We will return to this in Chapter 8, where we show that with input uncertainty of magnitude $|\epsilon_i| < 1/\bar\sigma(T)$, we are guaranteed robust stability (even for "full-block complex perturbations"). In the next example we find that there can be sensitivity to diagonal input uncertainty even in cases where $\bar\sigma(T)$ and $\bar\sigma(S)$ have no large peaks. This can not happen for a diagonal controller, see (6.77), but it will happen if we use an inverse-based controller for a plant with large RGA-elements, see (6.78).
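The peak values referred to here follow directly from (3.78): at steady-state $T(0) = \begin{bmatrix} 1 & a \\ -a & 1 \end{bmatrix}$ and $S(0) = I - T(0)$ (a Python/NumPy check):

```python
import numpy as np

a = 10.0
T0 = np.array([[1.0, a], [-a, 1.0]])  # T(s) = (1/(s+1)) [[1, a], [-a, 1]] at s = 0
S0 = np.eye(2) - T0

sigma_T = np.linalg.svd(T0, compute_uv=False)[0]
sigma_S = np.linalg.svd(S0, compute_uv=False)[0]

assert np.isclose(sigma_T, np.sqrt(a**2 + 1))  # ~10.05
assert np.isclose(sigma_S, a)                  # = 10
```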

3.7.2 Motivating robustness example no. 2: Distillation Process
The following is an idealized dynamic model of a distillation column,

$$G(s) = \frac{1}{75s + 1}\begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix} \tag{3.81}$$


(time is in minutes). The physics of this example was discussed in Example 3.6. The plant is ill-conditioned with condition number $\gamma(G) = 141.7$ at all frequencies. The plant is also strongly two-way interactive and the RGA-matrix at all frequencies is

$$\Lambda(G) = \begin{bmatrix} 35.1 & -34.1 \\ -34.1 & 35.1 \end{bmatrix} \tag{3.82}$$

The large elements in this matrix indicate that this process is fundamentally difficult to control.
Remark. (3.81) is admittedly a very crude model of a real distillation column; there should be a high-order lag in the transfer function from input 1 to output 2 to represent the liquid flow down to the column, and higher-order composition dynamics should also be included. Nevertheless, the model is simple and displays important features of distillation column behaviour. It should be noted that with a more detailed model, the RGA-elements would approach 1 at frequencies around 1 rad/min, indicating less of a control problem.
[Figure 3.12: Response with decoupling controller to filtered reference input $r_1 = 1/(5s+1)$, showing $y_1$ and $y_2$ for the nominal plant (solid) and the perturbed plant (dashed) over 0–60 min. The perturbed plant has 20% gain uncertainty as given by (3.85).]

We consider the following inverse-based controller, which may also be looked upon as a steady-state decoupler with a PI controller:
$$K_{inv}(s) = \frac{k_1}{s}G^{-1}(s) = \frac{k_1(1 + 75s)}{s}\begin{bmatrix} 0.3994 & -0.3149 \\ 0.3943 & -0.3200 \end{bmatrix}, \qquad k_1 = 0.7 \tag{3.83}$$

Nominal performance (NP). We have $GK_{inv} = K_{inv}G = \frac{0.7}{s}I$. With no model error this controller should counteract all the interactions in the plant and give rise to two decoupled first-order responses each with a time constant of $1/0.7 = 1.43$ min. This is confirmed by the solid line in Figure 3.12 which shows the simulated response to a reference change in $y_1$. The responses are clearly acceptable, and we conclude that nominal performance (NP) is achieved with the decoupling controller.

Robust stability (RS). The resulting sensitivity and complementary sensitivity functions with this controller are
$$S = S_I = \frac{s}{s + 0.7}I; \qquad T = T_I = \frac{1}{1.43s + 1}I \tag{3.84}$$

Thus, $\bar\sigma(S)$ and $\bar\sigma(T)$ are both less than 1 at all frequencies, so there are no peaks which would indicate robustness problems. We also find that this controller gives an infinite gain margin (GM) and a phase margin (PM) of $90°$ in each channel. Thus, use of the traditional margins and the peak values of $S$ and $T$ indicate no robustness problems. However, from the large RGA-elements there is cause for concern, and this is confirmed in the following.

We consider again the input gain uncertainty (3.79) as in the previous example, and we select $\epsilon_1 = 0.2$ and $\epsilon_2 = -0.2$. We then have

$$u_1' = 1.2\,u_1, \qquad u_2' = 0.8\,u_2 \tag{3.85}$$

Note that the uncertainty is on the change in the inputs (flow rates), and not on their absolute values. A 20% error is typical for process control applications (see Remark 2 on page 302). The uncertainty in (3.85) does not by itself yield instability. This is verified by computing the closed-loop poles, which, assuming no cancellations, are solutions to $\det(I + L(s)) = \det(I + L_I(s)) = 0$ (see (4.102) and (A.12)). In our case

$$L_I'(s) = K_{inv}G' = K_{inv}G\begin{bmatrix} 1 + \epsilon_1 & 0 \\ 0 & 1 + \epsilon_2 \end{bmatrix} = \frac{0.7}{s}\begin{bmatrix} 1 + \epsilon_1 & 0 \\ 0 & 1 + \epsilon_2 \end{bmatrix} \tag{3.86}$$

so the perturbed closed-loop poles are
$$s_1 = -0.7(1 + \epsilon_1), \qquad s_2 = -0.7(1 + \epsilon_2)$$
and we have closed-loop stability as long as the input gains $1 + \epsilon_1$ and $1 + \epsilon_2$ remain positive, so we can have up to 100% error in each input channel. We thus conclude that we have robust stability (RS) with respect to input gain errors for the decoupling controller.

Robust performance (RP). For SISO systems we generally have that nominal performance (NP) and robust stability (RS) imply robust performance (RP), but this is not the case for MIMO systems. This is clearly seen from the dotted lines in Figure 3.12 which show the closed-loop response of the perturbed system. It differs drastically from the nominal response represented by the solid line, and even though it is stable, the response is clearly not acceptable; it is no longer decoupled, and $y_1(t)$ and $y_2(t)$ reach a value of about 2.5 before settling at their desired values of 1 and 0. Thus RP is not achieved by the decoupling controller.
Remark 1 There is a simple reason for the observed poor response to the reference change in $y_1$. To accomplish this change, which occurs mostly in the direction corresponding to the low plant gain, the inverse-based controller generates relatively large inputs $u_1$ and $u_2$, while trying to keep $u_1 - u_2$ very small. However, the input uncertainty makes this impossible – the result is an undesired large change in the actual value of $u_1' - u_2'$, which subsequently results in large changes in $y_1$ and $y_2$ because of the large plant gain ($\bar\sigma(G) = 197.2$) in this direction, as seen from (3.46).

Remark 2 The system remains stable for gain uncertainty up to 100% because the uncertainty occurs only at one side of the plant (at the input). If we also consider uncertainty at the output then we find that the decoupling controller yields instability for relatively small errors in the input and output gains. This is illustrated in Exercise 3.8 below.

Remark 3 It is also difficult to get a robust controller with other standard design techniques for this model. For example, an $S/KS$-design as in (3.59) with $W_P = w_P I$ (using $M = 2$ and $\omega_B^* = 0.05$ in the performance weight (3.60)) and $W_u = I$, yields a good nominal response (although not decoupled), but the system is very sensitive to input uncertainty, and the outputs go up to about 3.4 and settle very slowly when there is 20% input gain error.

Remark 4 Attempts to make the inverse-based controller robust using the second step of the Glover-McFarlane $\mathcal{H}_\infty$ loop-shaping procedure are also unhelpful; see Exercise 3.9. This shows that robustness with respect to coprime factor uncertainty does not necessarily imply robustness with respect to input uncertainty. In any case, the solution is to avoid inverse-based controllers for a plant with large RGA-elements.

Exercise 3.7 Design a SVD-controller $K = W_1 K_s W_2$ for the distillation process in (3.81), i.e. select $W_1 = V$ and $W_2 = U^T$ where $U$ and $V$ are given in (3.46). Select $K_s$ in the form
$$K_s = \begin{bmatrix} c_1\frac{75s+1}{s} & 0 \\ 0 & c_2\frac{75s+1}{s} \end{bmatrix}$$
and try the following values:
(a) $c_1 = c_2 = 0.005$;
(b) $c_1 = 0.005$, $c_2 = 0.05$;
(c) $c_1 = 0.7/197 = 0.0036$, $c_2 = 0.7/1.39 = 0.504$.
Simulate the closed-loop reference response with and without uncertainty. Designs (a) and (b) should be robust. Which has the best performance? Design (c) should give the response in Figure 3.12. In the simulations, include high-order plant dynamics by replacing $G(s)$ by $\frac{1}{(0.02s+1)^5}G(s)$. What is the condition number of the controller in the three cases? Discuss the results. (See also the conclusion on page 243.)

Exercise 3.8 Consider again the distillation process (3.81) with the decoupling controller, but also include output gain uncertainty $\hat\epsilon_i$. That is, let the perturbed loop transfer function be
$$L'(s) = G'K_{inv} = \frac{0.7}{s}L_0, \qquad L_0 = \begin{bmatrix} 1 + \hat\epsilon_1 & 0 \\ 0 & 1 + \hat\epsilon_2 \end{bmatrix}G\begin{bmatrix} 1 + \epsilon_1 & 0 \\ 0 & 1 + \epsilon_2 \end{bmatrix}G^{-1} \tag{3.87}$$
where $L_0$ is a constant matrix for the distillation model (3.81), since all elements in $G$ share the same dynamics, $G(s) = \frac{1}{75s+1}G_0$. The closed-loop poles of the perturbed system are the solutions to $\det(I + L'(s)) = \det(I + \frac{0.7}{s}L_0) = 0$, or equivalently solutions to
$$\det\left(\frac{s}{0.7}I + L_0\right) = \left(\frac{s}{0.7}\right)^2 + \mathrm{tr}(L_0)\left(\frac{s}{0.7}\right) + \det(L_0) = 0 \tag{3.88}$$
Since $k_1 = 0.7 > 0$, we have from the Routh-Hurwitz stability condition that instability occurs if and only if the trace and/or the determinant of $L_0$ are negative.


Since $\det(L_0) > 0$ for any gain error less than 100%, instability can only occur if $\mathrm{tr}(L_0) < 0$. Evaluate $\mathrm{tr}(L_0)$ and show that, with gain errors of equal magnitude, the combination of errors which most easily yields instability is with $\hat\epsilon_1 = -\hat\epsilon_2 = -\epsilon_1 = \epsilon_2 = \epsilon$. Use this to show that the perturbed system is unstable if
$$|\epsilon| > \sqrt{\frac{1}{2\lambda_{11} - 1}} \tag{3.89}$$
where $\lambda_{11} = \frac{g_{11}g_{22}}{\det G}$ is the 1,1-element of the RGA of $G$. In our case $\lambda_{11} = 35.1$ and we get instability for $|\epsilon| > 0.120$. Check this numerically, e.g. using MATLAB.

Remark. The instability condition in (3.89) for simultaneous input and output gain uncertainty applies to the very special case of a $2\times 2$ plant, in which all elements share the same dynamics, $G(s) = g(s)G_0$, and an inverse-based controller, $K(s) = \frac{k_1}{s}G^{-1}(s)$.
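The instability condition (3.89) can be checked numerically as the exercise suggests; the following Python sketch (NumPy, in place of the MATLAB check mentioned in the exercise) forms $L_0$ for the worst-case error combination and tests the sign of its trace:

```python
import numpy as np

G0 = np.array([[87.8, -86.4],
               [108.2, -109.6]])
lam11 = (G0 * np.linalg.inv(G0).T)[0, 0]    # ~35.1
eps_crit = np.sqrt(1.0 / (2 * lam11 - 1))   # ~0.120, the bound in (3.89)

def trace_L0(eps):
    """tr(L0) for the worst case: output errors opposite in sign to input errors."""
    Dout = np.diag([1 + eps, 1 - eps])  # output gains (1+e^_1, 1+e^_2)
    Din = np.diag([1 - eps, 1 + eps])   # input gains (1+eps_1, 1+eps_2)
    L0 = Dout @ G0 @ Din @ np.linalg.inv(G0)
    return np.trace(L0)

# stable (positive trace) just below the bound, unstable just above it
assert trace_L0(0.9 * eps_crit) > 0
assert trace_L0(1.1 * eps_crit) < 0
assert abs(eps_crit - 0.120) < 0.002
```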

Exercise 3.9 Consider again the distillation process $G(s)$ in (3.81). The response using the inverse-based controller $K_{inv}$ in (3.83) was found to be sensitive to input gain errors. We want to see if the controller can be modified to yield a more robust system by using the Glover-McFarlane $\mathcal{H}_\infty$ loop-shaping procedure. To this effect, let the shaped plant be $G_s = GK_{inv}$, i.e. $W_1 = K_{inv}$, and design an $\mathcal{H}_\infty$ controller $K_s$ for the shaped plant (see page 389 and Chapter 9), such that the overall controller becomes $K = K_{inv}K_s$. (You will find that $\gamma_{\min} \approx 1.1$ which indicates good robustness with respect to coprime factor uncertainty, but the loop shape is almost unchanged and the system remains sensitive to input uncertainty.)

3.7.3 Robustness conclusions
From the two motivating examples above we found that multivariable plants can display a sensitivity to uncertainty (in this case input uncertainty) which is fundamentally different from what is possible in SISO systems. In the first example (spinning satellite), we had excellent stability margins (PM and GM) when considering one loop at a time, but small simultaneous input gain errors gave instability. This might have been expected from the peak values ($\mathcal{H}_\infty$ norms) of $S$ and $T$, defined as

$$\|T\|_\infty = \max_\omega \bar\sigma(T(j\omega)), \qquad \|S\|_\infty = \max_\omega \bar\sigma(S(j\omega)) \tag{3.90}$$

which were both large (about 10) for this example. In the second example (distillation process), we again had excellent stability margins (PM and GM), and the system was also robustly stable to errors (even simultaneous) of up to 100% in the input gains. However, in this case small input gain errors gave very poor output performance, so robust performance was not satisfied, and adding simultaneous output gain uncertainty resulted in instability (see Exercise 3.8). These problems with the decoupling controller might have been expected because the plant has large RGA-elements. For this second example the

$\mathcal{H}_\infty$ norms of $S$ and $T$ were both about 1, so the absence of peaks in $S$ and $T$ does not guarantee robustness. Although sensitivity peaks, RGA-elements, etc. are useful indicators of robustness problems, they provide no exact answer to whether a given source of uncertainty will yield instability or poor performance. This motivates the need for better tools for analyzing the effects of model uncertainty. We want to avoid a trial-and-error procedure based on checking stability and performance for a large number of candidate plants. This is very time consuming, and in the end one does not know whether those plants are the limiting ones. What is desired, is a simple tool which is able to identify the worst-case plant. This will be the focus of Chapters 7 and 8 where we show how to represent model uncertainty in the $\mathcal{H}_\infty$ framework, and introduce the structured singular value $\mu$ as our tool. The two motivating examples are studied in more detail in Example 8.10 and Section 8.11.3 where a $\mu$-analysis predicts the robustness problems found above.

3.8 General control problem formulation

[Figure 3.13: General control configuration for the case with no model uncertainty. The generalized plant $P$ has (weighted) exogenous inputs $w$ and control signals $u$ as inputs, and (weighted) exogenous outputs $z$ and sensed outputs $v$ as outputs; $v$ enters the controller $K$, which produces $u$.]

In this section we consider a general method of formulating control problems introduced by Doyle (1983; 1984). The formulation makes use of the general control configuration in Figure 3.13, where $P$ is the generalized plant and $K$ is the generalized controller as explained in Table 1.1 on page 13. Note that positive feedback is used.

The overall control objective is to minimize some norm of the transfer function from $w$ to $z$, for example, the $\mathcal{H}_\infty$ norm. The controller design problem is then: Find a controller $K$ which, based on the information in $v$, generates a control signal $u$ which counteracts the influence of $w$ on $z$, thereby minimizing the closed-loop norm from $w$ to $z$.

The most important point of this section is to appreciate that almost any linear control

problem can be formulated using the block diagram in Figure 3.13 (for the nominal case) or in Figure 3.21 (with model uncertainty), and we will demonstrate the generality of the setup with a few examples, including the design of observers (the estimation problem) and feedforward controllers.

Remark 1 The configuration in Figure 3.13 may at first glance seem restrictive. However, this is not the case. Some examples are given below and further examples are given in Section 9.3 (Figures 9.9, 9.10, 9.11 and 9.12). The routines in MATLAB for synthesizing $\mathcal{H}_\infty$ and $\mathcal{H}_2$ optimal controllers assume that the problem is in the general form of Figure 3.13, that is, they assume that $P$ is given.

Remark 2 We may generalize the control configuration still further by including diagnostics as additional outputs from the controller, giving the 4-parameter controller introduced by Nett (1986), but this is not considered in this book.

3.8.1 Obtaining the generalized plant P

[Figure 3.14: One degree-of-freedom control configuration]

To derive $P$ (and $K$) for a specific case we must first find a block diagram representation and identify the signals $w$, $z$, $u$ and $v$. To construct $P$ one should note that it is an open-loop system and remember to break all "loops" entering and exiting the controller $K$.

Example 3.13 One degree-of-freedom feedback control configuration. We want to find $P$ for the conventional one degree-of-freedom control configuration in Figure 3.14. The first step is to identify the signals for the generalized plant:
$$w = \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = \begin{bmatrix} d \\ r \\ n \end{bmatrix}, \qquad z = y - r, \qquad v = r - y_m = r - y - n \tag{3.91}$$
With this choice of $v$, the controller only has information about the deviation $r - y_m$. Also note that $z = y - r$, which means that performance is specified in terms of the actual output $y$ and not in terms of the measured output $y_m$. The block diagram in Figure 3.14 then yields
$$z = y - r = Gu + d - r = Iw_1 - Iw_2 + 0w_3 + Gu$$
$$v = r - y_m = r - Gu - d - n = -Iw_1 + Iw_2 - Iw_3 - Gu$$

[Figure 3.15: Equivalent representation of Figure 3.14 where the error signal to be minimized is $z = y - r$ and the input to the controller is $v = r - y_m$]

and $P$, which represents the transfer function matrix from $[w^T\ u^T]^T$ to $[z^T\ v^T]^T$, is
$$P = \begin{bmatrix} I & -I & 0 & G \\ -I & I & -I & -G \end{bmatrix} \tag{3.92}$$
Note that $P$ does not contain the controller. Alternatively, $P$ can be obtained by inspection from the representation in Figure 3.15.

Remark. Obtaining the generalized plant $P$ may seem tedious. However, when performing numerical calculations $P$ can be generated using software. For example, in MATLAB we may use the simulink program, or we may use the sysic program in the $\mu$-toolbox. The code in Table 3.1 generates the generalized plant $P$ in (3.92) for Figure 3.14.

Table 3.1: MATLAB program to generate $P$ in (3.92)

% Uses the Mu-toolbox
systemnames = 'G';                   % G is the SISO plant.
inputvar = '[d(1);r(1);n(1);u(1)]';  % Consists of vectors w and u.
input_to_G = '[u]';
outputvar = '[G+d-r;r-G-d-n]';       % Consists of vectors z and v.
sysoutname = 'P';
sysic;

3.8.2 Controller design: Including weights in P

To get a meaningful controller synthesis problem, for example, in terms of the $\mathcal{H}_\infty$ or $\mathcal{H}_2$ norms, we generally have to include weights $W_z$ and $W_w$ in the generalized plant $P$, see Figure 3.16. That is, we consider the weighted or normalized exogenous inputs $w$ (where $\tilde w = W_w w$ consists of the "physical" signals entering the system: disturbances, references and noise), and the weighted or normalized controlled outputs $z = W_z \tilde z$ (where $\tilde z$ often consists of the control error $y - r$ and the manipulated input $u$). The weighting matrices are usually frequency dependent and typically selected such that weighted signals $w$ and $z$ are of magnitude 1, that is, the norm from $w$ to $z$ should be less than 1. Thus, in most cases only the magnitude of the weights matter, and we may without loss of generality assume that $W_w(s)$ and $W_z(s)$ are stable and minimum phase (they need not even be rational transfer functions but if not they will be unsuitable for controller synthesis using current software).

Example 3. that is. We get from Figure 3.14 Stacked Ë Ì ÃË problem. These requirements may be combined into a stacked À½ ¾ Ñ Ò Æ ´Ã µ ½ Ã Æ ÏÙ ÃË ÏÌ Ì ÏÈ Ë ¿ (3. ´Ì µ (for robustness and to avoid sensitivity to noise) and ´ÃË µ (to penalize large inputs). where (convince yourself that this is true).17 the following set of equations À     Þ½ Þ¾ Þ¿ Ú ÏÙ Ù ÏÌ Ù ÏÈ Û · ÏÈ Ù  Û   Ù . Except for some negative signs which have no effect when evaluating Æ ½ . In other words.ÁÆÌÊÇ Í ÌÁÇÆ ÌÇ ÅÍÄÌÁÎ ÊÁ Ä ÇÆÌÊÇÄ ½¼½ È Û ¹ ÏÛ Û ¹ ¹ È Ã Þ ¹ ÏÞ ¹Þ Figure 3.93) where à is a stabilizing controller. and Þ consists of the weighted input Þ½ ÏÙ Ù. and the weighted control error Þ¿ ÏÈ ´Ý Öµ. references and noise). we have Þ ÆÛ and the objective is to minimize the ½ norm from Û to Þ . and the weighted or normalized controlled outputs Þ ÏÞ Þ (where Þ often consists of the control error Ý   Ö and the manipulated input Ù). the Æ in (3. Here Û represents a reference command (Û the negative sign does not really matter) or a disturbance entering at the output (Û Ý ).93) may be represented by the block diagram in Figure 3.16: General control configuration for the case with no model uncertainty disturbances. The weighting matrices are usually frequency dependent and typically selected such that weighted signals Û and Þ are of magnitude 1. Consider an bound problem À½ problem where we want to ´Ë µ (for performance). the norm from Û to Þ should be less than 1. Thus. and we may without loss of generality assume that Ï Û ´×µ and ÏÞ ´×µ are stable and minimum phase (they need not even be rational transfer functions but if not they will be unsuitable for controller synthesis using current software). the weighted output Þ¾ ÏÌ Ý . in most cases only the magnitude of the weights matter.17 Ö.

so the generalized plant $P$ from $[w^T\ u^T]^T$ to $[z^T\ v^T]^T$ is
$$P = \begin{bmatrix} 0 & W_u I \\ 0 & W_T G \\ W_P I & W_P G \\ -I & -G \end{bmatrix} \tag{3.94}$$

3.8.3 Partitioning the generalized plant P

We often partition $P$ as
$$P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix} \tag{3.95}$$
such that its parts are compatible with the signals $w$, $z$, $u$ and $v$ in the generalized control configuration:
$$z = P_{11}w + P_{12}u \tag{3.96}$$
$$v = P_{21}w + P_{22}u \tag{3.97}$$
The reader should become familiar with this notation. For cases with one degree-of-freedom negative feedback control we have $P_{22} = -G$. In Example 3.14 we get
$$P_{11} = \begin{bmatrix} 0 \\ 0 \\ W_P I \end{bmatrix}, \quad P_{12} = \begin{bmatrix} W_u I \\ W_T G \\ W_P G \end{bmatrix}, \tag{3.98}$$
$$P_{21} = -I, \quad P_{22} = -G \tag{3.99}$$
Note that $P_{22}$ has dimensions compatible with the controller, i.e. if $K$ is an $n_u \times n_v$ matrix, then $P_{22}$ is an $n_v \times n_u$ matrix.

3.8.4 Analysis: Closing the loop to get N

[Figure 3.18: General block diagram for analysis with no uncertainty: $w \to N \to z$]

The general feedback configurations in Figures 3.13 and 3.16 have the controller $K$ as a separate block. This is useful when synthesizing the controller. However, for analysis of closed-loop performance the controller is given, and we may absorb $K$ into the interconnection structure and obtain the system $N$ as shown in Figure 3.18, where
$$z = Nw \tag{3.100}$$
where $N$ is a function of $K$. To find $N$, first partition the generalized plant $P$ as given in (3.95)-(3.97), combine this with the controller equation
$$u = Kv \tag{3.101}$$
and eliminate $u$ and $v$ from equations (3.96), (3.97) and (3.101) to yield $z = Nw$ where $N$ is given by
$$N = P_{11} + P_{12}K(I - P_{22}K)^{-1}P_{21} \triangleq F_l(P, K) \tag{3.102}$$
Here $F_l(P, K)$ denotes a lower linear fractional transformation (LFT) of $P$ with $K$ as the parameter. Some properties of LFTs are given in Appendix A.7. In words, $N$ is obtained from Figure 3.13 by using $K$ to close a lower feedback loop around $P$. Since positive feedback is used in the general configuration in Figure 3.13, the term $(I - P_{22}K)^{-1}$ has a negative sign.

Remark. To assist in remembering the sequence of $P_{12}$ and $P_{21}$ in (3.102), notice that the first (last) index in $P_{11}$ is the same as the first (last) index in $P_{12}K(I - P_{22}K)^{-1}P_{21}$.

Example 3.15 We want to derive $N$ for the partitioned $P$ in (3.98)-(3.99) using the LFT-formula in (3.102). We get
$$N = \begin{bmatrix} 0 \\ 0 \\ W_P I \end{bmatrix} + \begin{bmatrix} W_u I \\ W_T G \\ W_P G \end{bmatrix}K(I + GK)^{-1}(-I) = \begin{bmatrix} -W_u KS \\ -W_T T \\ W_P S \end{bmatrix}$$
where we have made use of the identities $S = (I + GK)^{-1}$, $T = GKS$ and $I - T = S$. With the exception of the two negative signs, this is identical to $N$ given in (3.93). Of course, the negative signs have no effect on the norm of $N$.

The reader is advised to become comfortable with the above manipulations before progressing much further.
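The LFT formula (3.102) is a one-line computation. The Python sketch below (NumPy) implements it for constant (steady-state) blocks and checks it against the one degree-of-freedom $P$ of (3.92) with scalar gains, for which closing the loop by hand gives $z = Sd - Sr - Tn$:

```python
import numpy as np

def lft_lower(P, K, nz, nv):
    """N = Fl(P, K) = P11 + P12 K (I - P22 K)^{-1} P21.

    nz: number of z outputs (rows of P11); nv: number of inputs to K.
    The last K.shape[1] columns of P correspond to the control signal u.
    """
    nu = K.shape[1]
    P11, P12 = P[:nz, :-nu], P[:nz, -nu:]
    P21, P22 = P[nz:, :-nu], P[nz:, -nu:]
    return P11 + P12 @ K @ np.linalg.inv(np.eye(nv) - P22 @ K) @ P21

# One degree-of-freedom configuration (3.92), scalar plant gain g, controller k
g, k = 2.0, 3.0
P = np.array([[1.0, -1.0, 0.0, g],     # z = d - r + g u
              [-1.0, 1.0, -1.0, -g]])  # v = -d + r - n - g u
K = np.array([[k]])

N = lft_lower(P, K, nz=1, nv=1)

# Closing the loop by hand: u = K v gives z = S d - S r - T n
S, T = 1.0 / (1.0 + g * k), g * k / (1.0 + g * k)
assert np.allclose(N, [[S, -S, -T]])
```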

Again, it should be noted that deriving $N$ from $P$ is much simpler using available software. For example, in the MATLAB $\mu$-toolbox we can evaluate $N = F_l(P, K)$ using the command N=starp(P,K). Here starp denotes the matrix star product which generalizes the use of LFTs (see Appendix A.7).

Exercise 3.10 Consider the two degrees-of-freedom feedback configuration in Figure 1.3(b). (i) Find $P$ when
$$w = \begin{bmatrix} d \\ r \\ n \end{bmatrix}, \qquad z = y - r, \qquad v = \begin{bmatrix} r \\ y_m \end{bmatrix} \tag{3.103}$$
(ii) Let $z = Nw$ and derive $N$ in two different ways: directly from the block diagram and using $N = F_l(P, K)$.

3.8.5 Generalized plant P: Further examples

To illustrate the generality of the configuration in Figure 3.13, we now present two further examples: one in which we derive $P$ for a problem involving feedforward control, and one for a problem involving estimation.

[Figure 3.19: System with feedforward, local feedback and two degrees-of-freedom control]

Example 3.16 Consider the control system in Figure 3.19, where $y_1$ is the output we want to control, $y_2$ is a secondary output (extra measurement), and we also measure the disturbance $d$. By secondary we mean that $y_2$ is of secondary importance for control, that is, there is no control objective associated with it. The control configuration includes a two degrees-of-freedom controller, a feedforward controller and a local feedback controller based on the

96) and (3. From Example 3.11.19 we get È ½  Á ¼ Á ½ ¼ ¼ ¼ Á ¼ (3. Let denote the unknown external inputs (including noise and disturbances) and Ù the known plant inputs (a subscript is used because in this case the output Ù from à is not the plant input). Remark.11 Cascade implementation. Since the controller has explicit information about Ö we have a two degrees-of-freedom controller. However. the output Ý which we want to control. if we impose restrictions on the design such that. The local feedback based on ݾ is often implemented in a cascade manner. unless the optimal þ or ý have RHPzeros. The generalized controller à may be written in terms of the individual controller blocks in Figure 3.104) Note that and Ö are both inputs and outputs to È and we have assumed a perfect measurement of the disturbance .106) Then partitioning È as in (3.105) By writing down the equations or by inspection from Figure 3. Let the model be Example 3. This will generally give some performance loss compared to a simultaneous design of ÃÝ and ÃÖ .17 Output estimator. In this case the output from ý enters into þ and it may be viewed as a reference signal for ݾ .ÁÆÌÊÇ Í ÌÁÇÆ ÌÇ ÅÍÄÌÁÎ ÊÁ Ä ÇÆÌÊÇÄ ¾ ¿ ½¼ extra measurement ݾ . Consider further Example 3. for example þ or ý are designed “locally” (without considering the whole problem). we see that a cascade implementation does not usually limit the achievable performance since. we do have a measurement of another output variable ݾ . for a two degrees-of-freedom controller a common approach is to first design the feedback controller ÃÝ for disturbance rejection (without considering reference tracking) and then design ÃÖ for reference tracking. To recast this into our standard configuration of Figure 3.5. see also Figure 10. Exercise 3. Consider a situation where we have no measurement of Ý Ù · ݾ Ù · . For example.19 as follows: à ý ÃÖ ¾  Ã½  Ã¾ à ½ ¾¿ ¼ ½ ¾ ¾ ¼ ¼Ì ´ ½ ¾ µÌ Ì (3.16 and Exercise 3. However. 
then this will limit the achievable performance.97) yields Ⱦ¾ ¾ ¼Ì Ì.16.13 we define Û Ö Þ Ý½   Ö Ú Ö Ý½ ݾ (3. Derive the generalized controller à and the generalized plant È in this case. we can obtain from the optimal overall à the subcontrollers þ and ý (although we may have to add a small -term to à to make the controllers proper).
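The lower LFT which the starp command evaluates is simple to sketch directly. The scalar version below (Python in place of MATLAB, with made-up illustration numbers) closes the lower loop of a partitioned P with K:

```python
def lft_lower(P11, P12, P21, P22, K):
    """Scalar lower LFT: N = P11 + P12*K*(1 - P22*K)^(-1)*P21,
    i.e. the scalar version of N = starp(P, K)."""
    return P11 + P12 * K * P21 / (1.0 - P22 * K)

G, K = 3.0, 0.5
# With the partitioning P = [0, 1; 1, -G], closing the lower loop with K
# gives the familiar transfer function K*(1 + G*K)^(-1):
N = lft_lower(0.0, 1.0, 1.0, -G, K)
assert abs(N - K / (1.0 + G * K)) < 1e-12
```

This is only a single-frequency scalar sketch; for transfer matrices the scalar inverse becomes a matrix inverse of (I - P22 K).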

This problem may be written in the general framework of Figure 3.13 with

   w = [d; u],   z = y - yhat,   v = [y2; u],   uK = yhat

Note that uK = yhat; that is, the output uK from the generalized controller is the estimate of the plant output. We see that P22 = 0, since the estimator problem does not involve feedback; the corresponding generalized plant P is given in (3.107).

Figure 3.20: Output estimation problem. One particular estimator is a Kalman filter.

In the Kalman filter problem studied in Section 9.2 the objective is to minimize x - xhat (whereas in Example 3.17 the objective was to minimize y - yhat).

Exercise 3.12 State estimator (observer). Show how the Kalman filter problem can be represented in the general configuration of Figure 3.13 and find P.

3.8.6 Deriving P from N

For cases where N is given and we wish to find a P such that

   N = Fl(P,K) = P11 + P12 K (I - P22 K)^-1 P21

it is usually best to work from a block diagram representation. This was illustrated above for the stacked N in (3.93). Alternatively, the following procedure may be useful:

1. Set K = 0 in N to obtain P11.
2. Define Q = N - P11 and rewrite Q such that each term has a common factor R = K (I - P22 K)^-1 (this gives P22).
3. Since Q = P12 R P21, we can now usually obtain P12 and P21 by inspection.

Example 3.18 Weighted sensitivity. We will use the above procedure to derive P when N = wP S = wP (I + GK)^-1, where wP is a scalar weight.

1. P11 = N(K = 0) = wP I.
2. Q = N - P11 = wP (S - I) = -wP T = -wP G K (I + GK)^-1, so R = K (I + GK)^-1 and P22 = -G.
3. Q = P12 R P21, so we have P12 = -wP G and P21 = I, and we get

   P = [ wP I   -wP G
         I      -G    ]      (3.108)

Remark. When obtaining P from a given N, we have that P11 and P22 are unique, whereas from Step 3 in the above procedure we see that P12 and P21 are not unique. For instance, let alpha be a real scalar; then we may instead choose alpha*P12 and (1/alpha)*P21. For P in (3.108) this means that we may move the negative sign of the scalar wP from P12 to P21.

Exercise 3.13 Mixed sensitivity. Use the above procedure to derive the generalized plant P for the stacked N in (3.93).

3.8.7 Problems not covered by the general formulation

The above examples have demonstrated the generality of the control configuration in Figure 3.13. Nevertheless, there are some controller design problems which are not covered. Let N be some closed-loop transfer function whose norm we want to minimize. To use the general form we must first obtain a P such that N = Fl(P,K). However, this is not always possible, since there may not exist a block diagram representation for N. As a simple example, consider the stacked transfer function

   N = [ (I + GK)^-1 ;  (I + KG)^-1 ]      (3.109)

The transfer function (I + GK)^-1 may be represented on a block diagram with the input and output signals after the plant, whereas (I + KG)^-1 may be represented
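The three-step procedure can be checked numerically. The scalar sketch below (illustration values only) confirms that the P derived in Example 3.18 reproduces N = wP*S, and that rescaling P12 by alpha and P21 by 1/alpha leaves N unchanged:

```python
def lft_lower(P11, P12, P21, P22, K):
    # Scalar lower LFT: N = P11 + P12*K*(1 - P22*K)^(-1)*P21
    return P11 + P12 * K * P21 / (1.0 - P22 * K)

wP, G, K = 2.0, 4.0, 0.25
# P = [wP, -wP*G; 1, -G] from steps 1-3 of the procedure:
N = lft_lower(wP, -wP * G, 1.0, -G, K)
S = 1.0 / (1.0 + G * K)            # sensitivity function
assert abs(N - wP * S) < 1e-12
# Non-uniqueness of P12 and P21: scale by alpha and 1/alpha.
alpha = 5.0
N2 = lft_lower(wP, -wP * G * alpha, 1.0 / alpha, -G, K)
assert abs(N2 - N) < 1e-12
```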

by another block diagram with input and output signals before the plant. However, in N there are no cross-coupling terms between an input before the plant and an output after the plant (corresponding to G (I + KG)^-1), or between an input after the plant and an output before the plant (corresponding to -K (I + GK)^-1), so N cannot be represented in block diagram form. Equivalently, if we apply the procedure in Section 3.8.6 to N in (3.109), we are not able to find solutions to P12 and P21 in Step 3.

Another stacked transfer function which cannot in general be represented in block diagram form is the weighted sensitivity N in (3.110), where the sensitivity S is weighted by a full matrix weight WP.

Exercise 3.14 Show that N in (3.110) can be represented in block diagram form if WP = wP I, where wP is a scalar.

Remark. The case where N cannot be written as an LFT of K is a special case of the Hadamard weighted H-infinity problem studied by van Diggelen and Glover (1994a). Although the solution to this H-infinity problem remains intractable, van Diggelen and Glover (1994b) present a solution for a similar problem where the Frobenius norm is used instead of the singular value to "sum up the channels".

3.8.8 A general control configuration including model uncertainty

The general control configuration in Figure 3.13 may be extended to include model uncertainty, as shown by the block diagram in Figure 3.21. Here the matrix Delta is a block-diagonal matrix that includes all possible perturbations (representing uncertainty) to the system. It is usually normalized in such a way that ||Delta||inf <= 1.

Figure 3.21: General control configuration for the case with model uncertainty (blocks Delta, P and K, with perturbation signals u_Delta and y_Delta, exogenous signals w and z, and controller signals u and v).

The block diagram in Figure 3.21 in terms of P (for synthesis) may be transformed into the block diagram in Figure 3.22 in terms of N (for analysis) by using K to close a lower loop around P. If we partition P to be compatible with the controller K, then the same lower LFT as found in (3.102) applies, and

   N = Fl(P,K) = P11 + P12 K (I - P22 K)^-1 P21      (3.111)

To evaluate the perturbed (uncertain) transfer function from external inputs w to external outputs z, we use Delta to close the upper loop around N (see Figure 3.22), resulting in an upper LFT (see Appendix A.7):

   z = Fu(N, Delta) w,   Fu(N, Delta) := N22 + N21 Delta (I - N11 Delta)^-1 N12      (3.112)

Figure 3.23: Rearranging a system with multiple perturbations into the N-Delta structure (panels (a) and (b)).

Remark 1 Controller synthesis based on Figure 3.21 is still an unsolved problem, although good practical approaches like DK-iteration to find the "mu-optimal" controller are in use (see Section 8.12). For analysis (with a given controller), the situation is better, and with the H-infinity norm an assessment of robust performance involves computing the structured singular value mu. This is discussed in more detail in Chapter 8.

Remark 2 In (3.112) N has been partitioned to be compatible with Delta; that is, N11 has dimensions compatible with Delta. Usually, Delta is square, in which case N11 is a square matrix of the same dimension as Delta. For the nominal case with no uncertainty we have Fu(N, Delta) = Fu(N, 0) = N22, so N22 is the nominal transfer function from w to z.

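As a sanity check on (3.112), the scalar sketch below (illustration values only) evaluates the upper LFT and confirms that Delta = 0 recovers the nominal transfer function N22:

```python
def lft_upper(N11, N12, N21, N22, Delta):
    """Scalar upper LFT: Fu = N22 + N21*Delta*(1 - N11*Delta)^(-1)*N12."""
    return N22 + N21 * Delta * N12 / (1.0 - N11 * Delta)

N11, N12, N21, N22 = 0.5, 1.0, 2.0, 3.0
# Nominal case (no uncertainty) recovers N22:
assert lft_upper(N11, N12, N21, N22, 0.0) == N22
# A normalized perturbation |Delta| <= 1 moves the response away from nominal:
perturbed = lft_upper(N11, N12, N21, N22, 0.5)
assert abs(perturbed - (3.0 + 2.0 * 0.5 * 1.0 / 0.75)) < 1e-12
```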
Remark 3 Note that P and N here also include information about how the uncertainty affects the system, so they are not the same P and N as used earlier, for example in (3.102). Strictly speaking, the parts P22 and N22 of P and N in (3.111) (with uncertainty) are equal to the P and N in (3.102) (without uncertainty). Actually, we should have used another symbol for N and P in (3.111), but for notational simplicity we did not.

Remark 4 The fact that almost any control problem with uncertainty can be represented by Figure 3.21 may seem surprising, so some explanation is in order. First represent each source of uncertainty by a perturbation block Delta_i, which is normalized such that ||Delta_i|| <= 1. These perturbations may result from parametric uncertainty, neglected dynamics, etc., as will be discussed in more detail in Chapters 7 and 8. Then "pull out" each of these blocks from the system so that an input and an output can be associated with each Delta_i, as shown in Figure 3.23(a). Finally, collect these perturbation blocks into a large block-diagonal matrix having perturbation inputs and outputs, as shown in Figure 3.23(b). In Chapter 8 we discuss in detail how to obtain N and Delta.

3.9 Additional exercises

Most of these exercises are based on material presented in Appendix A. The exercises illustrate material which the reader should know before reading the subsequent chapters.

Exercise 3.15 Consider the performance specification ||wP S||inf < 1. Suggest a rational transfer function weight wP(s) and sketch it as a function of frequency for the following two cases:
1. We desire no steady-state offset, a bandwidth better than 1 rad/s and a resonance peak (worst amplification caused by feedback) lower than 1.5.
2. We desire less than 1% steady-state offset, less than 10% error up to frequency 3 rad/s, a bandwidth better than 10 rad/s, and a resonance peak lower than 2.
Hint: See (2.72) and (2.73).

Exercise 3.16 By ||M||inf one can mean either a spatial or temporal norm. Explain the difference between the two and illustrate by computing the appropriate infinity norm for a constant matrix M1 and a transfer function M2(s).

Exercise 3.17 What is the relationship between the RGA-matrix and uncertainty in the individual elements? Illustrate this for perturbations in the 1,1-element of the matrix

   A = [ 10  9
          9  8 ]      (3.113)
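For Exercise 3.15, a common rational weight in the spirit of the hint to (2.72) and (2.73) is wP(s) = (s/M + wB)/(s + wB*A), which asks for |S| < A at low frequency, |S| < M at high frequency, and bandwidth about wB. The sketch below uses assumed parameter values for case 2 (A = 0.01, wB = 10 rad/s, M = 2; my reading of the specification, not values from the text) and checks the implied bounds on |S|:

```python
A_lf, M, wB = 0.01, 2.0, 10.0   # assumed: offset, peak, bandwidth targets

def wP(s):
    # Hypothetical performance weight wP(s) = (s/M + wB) / (s + wB*A)
    return (s / M + wB) / (s + wB * A_lf)

# 1/|wP(jw)| is the implied upper bound on |S(jw)|:
low  = 1.0 / abs(wP(1e-6j))   # low-frequency bound -> A_lf (steady-state offset)
high = 1.0 / abs(wP(1e6j))    # high-frequency bound -> M (resonance peak)
assert abs(low - A_lf) < 1e-4
assert abs(high - M) < 1e-3
```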

Exercise 3.18 Assume that A is non-singular. (i) Formulate a condition in terms of the maximum singular value of E for the matrix A + E to remain non-singular. Apply this to A in (3.113), and (ii) find an E of minimum magnitude which makes A + E singular.

Exercise 3.19 Compute ||A||i1, sigma_bar(A) = ||A||i2, ||A||iinf, ||A||F, ||A||max and ||A||sum for the following matrices and tabulate your results:

   A1 = I;  A2 = [1 0; 0 0];  A3 = [1 1; 1 1];  A4 = [1 1; 0 0];  A5 = [1 0; 1 0]

Show using the above matrices that the following bounds are tight (i.e. we may have equality) for 2 x 2 matrices (m = 2):

   sigma_bar(A) <= ||A||F <= sqrt(m) sigma_bar(A)
   ||A||max <= sigma_bar(A) <= m ||A||max
   sigma_bar(A) <= sqrt(||A||i1 ||A||iinf)
   sigma_bar(A) <= ||A||sum

Exercise 3.20 Find example matrices to illustrate that the above bounds are also tight when A is a square m x m matrix with m > 2.

Exercise 3.21 Do the extreme singular values bound the magnitudes of the elements of a matrix? That is, is sigma_bar(A) greater than the largest element (in magnitude), and is sigma_ubar(A) smaller than the smallest element? For a non-singular matrix, how is sigma_ubar(A) related to the largest element in A^-1?

Exercise 3.22 Consider a lower triangular m x m matrix A with a_ii = -1, a_ij = 1 for all i > j, and a_ij = 0 for all i < j. (a) What is det A? (b) What are the eigenvalues of A? (c) What is the RGA of A? (d) Let m = 4 and find an E with the smallest value of sigma_bar(E) such that A + E is singular.

Exercise 3.23 Find two matrices A and B such that rho(A + B) > rho(A) + rho(B), which proves that the spectral radius does not satisfy the triangle inequality and is thus not a norm.

Exercise 3.24 Write T = GK(I + GK)^-1 as an LFT of K, i.e. find P such that T = Fl(P,K); similarly, find H such that K(I + GK)^-1 = Fl(H,K).

Exercise 3.25 Write K as an LFT of T = GK(I + GK)^-1, i.e. find J such that K = Fl(J,T).

Exercise 3.26 State-space descriptions may be represented as LFTs. To demonstrate this, find H for

   Fl(H, (1/s) I) = C (sI - A)^-1 B + D

Exercise 3.27 Show that the set of all stabilizing controllers in (4.91) can be written as K = Fl(J,Q) and find J.
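One possible solution to Exercise 3.23 can be verified numerically. The sketch below uses the nilpotent pair A = [[0,1],[0,0]], B = [[0,0],[1,0]] (my choice of matrices, not from the text): both have spectral radius 0, while their sum has spectral radius 1.

```python
import math

def rho2x2(a, b, c, d):
    """Spectral radius of the 2x2 matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0:                       # real eigenvalue pair
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)               # complex pair: |lambda| = sqrt(det)

rA  = rho2x2(0, 1, 0, 0)   # nilpotent: eigenvalues 0, 0
rB  = rho2x2(0, 0, 1, 0)   # nilpotent: eigenvalues 0, 0
rAB = rho2x2(0, 1, 1, 0)   # A + B: eigenvalues +1 and -1
assert rA == 0.0 and rB == 0.0 and rAB == 1.0
assert rAB > rA + rB       # triangle inequality fails, so rho is not a norm
```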

Exercise 3.28 In (3.11) we stated that the sensitivity of a perturbed plant, S' = (I + G'K)^-1, is related to that of the nominal plant, S = (I + GK)^-1, by

   S' = S (I + EO T)^-1,   where EO = (G' - G) G^-1

This exercise deals with how the above result may be derived in a systematic (though cumbersome) manner using LFTs (see also Skogestad and Morari (1988a)).

a) First find F such that S' = (I + G'K)^-1 = Fl(F,K), and find J such that K = Fl(J,T) (see Exercise 3.25).
b) Combine these LFTs to find S' = Fl(N,T). What is N in terms of G and G'? Note that since J11 = 0 we have from (A.156)

   N = [ F11          F12 J12
         J21 F21      J22 + J21 F22 J12 ]

c) Evaluate S' = Fl(N,T) and show that

   S' = I - G' G^-1 T (I - (I - G' G^-1) T)^-1

d) Finally, show that this may be rewritten as S' = S (I + EO T)^-1.

3.10 Conclusion

The main purpose of this chapter has been to give an overview of methods for analysis and design of multivariable control systems. In terms of analysis, we have shown how to evaluate MIMO transfer functions and how to use the singular value decomposition of the frequency-dependent plant transfer function matrix to provide insight into multivariable directionality. Other useful tools for analyzing directionality and interactions are the condition number and the RGA. Closed-loop performance may be analyzed in the frequency domain by evaluating the maximum singular value of the sensitivity function as a function of frequency. Multivariable RHP-zeros impose fundamental limitations on closed-loop performance, but for MIMO systems we can often direct the undesired effect of a RHP-zero to a subset of the outputs. MIMO systems are often more sensitive to uncertainty than SISO systems, and we demonstrated in two examples the possible sensitivity to input gain uncertainty.

In terms of controller design, we discussed some simple approaches such as decoupling and decentralized control. We also introduced a general control configuration in terms of the generalized plant P, which can be used as a basis for synthesizing multivariable controllers using a number of methods, including LQG, H2, H-infinity and mu-optimal control. These methods are discussed in much more detail in Chapters 8 and 9. In this chapter we have only discussed the H-infinity weighted sensitivity method.
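The identity in Exercise 3.28 is easy to confirm numerically in the scalar case; the values below are arbitrary illustration numbers:

```python
# Scalar check of S' = S*(1 + EO*T)^(-1) with EO = (G' - G)/G.
G, Gp, K = 2.0, 2.5, 1.5        # nominal plant, perturbed plant, controller

S  = 1.0 / (1.0 + G * K)        # nominal sensitivity
T  = G * K / (1.0 + G * K)      # nominal complementary sensitivity
EO = (Gp - G) / G               # relative (output) plant error

Sp_direct  = 1.0 / (1.0 + Gp * K)
Sp_formula = S / (1.0 + EO * T)
assert abs(Sp_direct - Sp_formula) < 1e-12
```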

ELEMENTS OF LINEAR SYSTEM THEORY

The main objective of this chapter is to summarize important results from linear system theory. The treatment is thorough, but readers are encouraged to consult other books, such as Kailath (1980) or Zhou et al. (1996), for more details and background information if these results are new to them.

4.1 System descriptions

The most important property of a linear system (operator) is that it satisfies the superposition principle: let f(u) be a linear operator, let u1 and u2 be two independent variables (e.g. input signals), and let alpha1 and alpha2 be two real scalars; then

   f(alpha1 u1 + alpha2 u2) = alpha1 f(u1) + alpha2 f(u2)      (4.1)

We use in this book various representations of time-invariant linear systems, all of which are equivalent for systems that can be described by linear ordinary differential equations with constant coefficients and which do not involve differentiation of the inputs (independent variables). The most important of these representations are discussed in this section.

4.1.1 State-space representation

Consider a system with m inputs (vector u) and l outputs (vector y) which has an internal description of n states (vector x). A natural way to represent many physical systems is by nonlinear state-space models of the form

   xdot = f(x,u),   y = g(x,u)      (4.2)

where xdot = dx/dt and f and g are nonlinear functions. Linear state-space models may then be derived from the linearization of such models.

In terms of deviation variables (where x represents a deviation from some nominal value or trajectory, etc.) we have

   xdot(t) = A x(t) + B u(t)      (4.3)
   y(t)    = C x(t) + D u(t)      (4.4)

where A, B, C and D are real matrices. If (4.3) is derived by linearizing (4.2), then A and B are the partial derivatives of f with respect to x and u (see Section 1.5 for an example of such a derivation). A is sometimes called the state matrix. These equations provide a convenient means of describing the dynamic behaviour of proper, rational, linear systems. They may be rewritten as

   [ xdot ]   [ A  B ] [ x ]
   [ y    ] = [ C  D ] [ u ]

which gives rise to the short-hand notation

   G = [ A  B ; C  D ]  (state-space realization)      (4.5)

which is frequently used to describe a state-space model of a system G. Note that the representation in (4.3)-(4.4) is not a unique description of the input-output behaviour of a linear system. First, there exist realizations with the same input-output behaviour, but with additional unobservable and/or uncontrollable states (modes). Second, even for a minimal realization (a realization with the fewest number of states and consequently no unobservable or uncontrollable modes) there are an infinite number of possibilities. To see this, let S be an invertible constant matrix, and introduce the new states q = S x, i.e. x = S^-1 q. Then an equivalent state-space realization (i.e. one with the same input-output behaviour) in terms of these new states ("coordinates") is

   Aq = S A S^-1,   Bq = S B,   Cq = C S^-1,   Dq = D

The most common realizations are given by a few canonical forms, such as the Jordan (diagonalized) canonical form, the observability canonical form, etc.

Given the linear dynamical system in (4.3) with an initial state condition x(t0) and an input u(t), the dynamical system response x(t) for t >= t0 can be determined from

   x(t) = e^{A(t - t0)} x(t0) + Int_{t0}^{t} e^{A(t - tau)} B u(tau) dtau      (4.6)

where the matrix exponential is

   e^{At} = I + Sum_{k>=1} (At)^k / k! = Sum_{i=1}^{n} t_i e^{lambda_i t} q_i^H      (4.7)

The latter dyadic expansion, involving the right (t_i) and left (q_i) eigenvectors of A, applies for cases with distinct eigenvalues lambda_i(A); see (A.22).

We will refer to the term t_i e^{lambda_i t} q_i^H as the mode associated with the eigenvalue lambda_i(A). For a diagonalized realization (where we select S = T^-1 such that S A S^-1 = diag{lambda_i}), the states are decoupled and each state evolves as e^{lambda_i t}.

Remark. For a system with disturbances d and measurement noise n, additional input terms appear in (4.3) and (4.4), and the state-space model becomes

   xdot = A x + B u + Bd d,   y = C x + D u + Dd d + n      (4.8)

Note that the symbol n is used to represent both the noise signal and the number of states.

Remark. The more general descriptor representation

   E xdot = A x + B u      (4.9)

includes, e.g., implicit algebraic relations between the states x and inputs u, for cases where the matrix E is singular. If E is non-singular, then (4.9) is equivalent to (4.3) with A and B replaced by E^-1 A and E^-1 B.

4.1.2 Impulse response representation

The impulse response matrix is

   g(t) = 0 for t < 0;   g(t) = C e^{At} B + D delta(t) for t >= 0      (4.10)

where delta(t) is the unit impulse (delta) function, which satisfies lim_{eps -> 0} Int_0^eps delta(t) dt = 1. The ij'th element of the impulse response matrix, g_ij(t), represents the response y_i(t) to an impulse u_j(t) = delta(t) for a system with a zero initial state. With initial state x(0) = 0, the dynamic response to an arbitrary input u(t) (which is zero for t < 0) may from (4.6) be written as

   y(t) = g(t) * u(t) = Int_0^t g(t - tau) u(tau) dtau      (4.11)

where * denotes the convolution operator.

4.1.3 Transfer function representation - Laplace transforms

The transfer function representation is unique and is very useful for directly obtaining insight into the properties of a system. It is defined as the Laplace transform of the impulse response

   G(s) = Int_0^inf g(t) e^{-st} dt      (4.12)

Alternatively, we may start from the state-space description. With the assumption of a zero initial state, x(t = 0) = 0, the Laplace transforms of (4.3) and (4.4) become (we make the usual abuse of notation and let x(s) denote the Laplace transform of x(t))

   s x(s) = A x(s) + B u(s)  =>  x(s) = (sI - A)^-1 B u(s)      (4.13)

   y(s) = C x(s) + D u(s)  =>  y(s) = ( C (sI - A)^-1 B + D ) u(s) = G(s) u(s)      (4.14)

where G(s) = C (sI - A)^-1 B + D is the transfer function matrix. Note that any system written in the state-space form of (4.3) and (4.4) has a transfer function, but the opposite is not true. For example, time delays and improper systems can be represented by Laplace transforms, but do not have a state-space representation. On the other hand, the state-space representation yields an internal description of the system, which may be useful if the model is derived from physical principles. It is also more suitable for numerical calculations. We have

   G(s) = ( C adj(sI - A) B + D det(sI - A) ) / det(sI - A)      (4.15)

where det(sI - A) = Prod_{i=1}^{n} (s - lambda_i(A)) is the pole polynomial. For cases where the eigenvalues of A are distinct, we may use the dyadic expansion of A given in (A.22), and derive

   G(s) = Sum_{i=1}^{n} (C t_i q_i^H B) / (s - lambda_i) + D      (4.16)

When disturbances are treated separately, the corresponding disturbance transfer function is

   Gd(s) = C (sI - A)^-1 Bd + Dd      (4.17)

where Bd and Dd are the disturbance input matrices in (4.8).

4.1.4 Frequency response

An important advantage of transfer functions is that the frequency response (Fourier transform) is directly obtained from the Laplace transform by setting s = jw in G(s). For more details on the frequency response, the reader is referred to Sections 2.1 and 3.3.

4.1.5 Coprime factorization

Another useful way of representing systems is the coprime factorization, which may be used both in state-space and transfer function form. In the latter case a right coprime factorization of G is

   G(s) = Nr(s) Mr^-1(s)      (4.18)

where Nr(s) and Mr(s) are stable coprime transfer functions. The stability implies that Nr(s) should contain as RHP-zeros all the RHP-zeros of G(s), and that Mr(s) should contain as RHP-zeros all the RHP-poles of G(s). The coprimeness implies that there should be no common RHP-zeros in Nr and Mr which result in pole-zero cancellations when forming Nr Mr^-1.
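Equation (4.14) can be checked numerically. The sketch below uses a made-up diagonal two-state example (so the matrix inverse is elementwise) and compares the state-space evaluation with the transfer function worked out by hand:

```python
# Evaluate G(s0) = C*(s0*I - A)^(-1)*B + D for a diagonal two-state example.
A = [[-1.0, 0.0],
     [ 0.0, -2.0]]
B = [1.0, 1.0]
C = [1.0, 1.0]
D = 0.0
s0 = 1.0 + 1.0j

# (s0*I - A) is diagonal here, so its inverse acts elementwise on B:
x = [B[0] / (s0 - A[0][0]), B[1] / (s0 - A[1][1])]
G_state_space = C[0] * x[0] + C[1] * x[1] + D

# By hand: G(s) = 1/(s+1) + 1/(s+2)
G_by_hand = 1.0 / (s0 + 1.0) + 1.0 / (s0 + 2.0)
assert abs(G_state_space - G_by_hand) < 1e-12
```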

Mathematically, coprimeness means that there exist stable Ur(s) and Vr(s) such that the following Bezout identity is satisfied:

   Ur Nr + Vr Mr = I      (4.19)

Similarly, a left coprime factorization of G is

   G(s) = Ml^-1(s) Nl(s)      (4.20)

Here Nl and Ml are stable and coprime; that is, there exist stable Ul(s) and Vl(s) such that the following Bezout identity is satisfied:

   Nl Ul + Ml Vl = I      (4.21)

Two stable scalar transfer functions, N(s) and M(s), are coprime if and only if they have no common RHP-zeros. In this case we can always find stable U and V such that NU + MV = 1. For a scalar system, the left and right coprime factorizations are identical: G = N M^-1 = M^-1 N.

Example 4.1 Consider the scalar system

   G(s) = (s - 1)(s + 2) / ( (s - 3)(s + 4) )      (4.22)

We then have that

   N(s) = (s - 1)(s + 2) / (s^2 + a1 s + a2),   M(s) = (s - 3)(s + 4) / (s^2 + a1 s + a2)      (4.23)

is a coprime factorization of (4.22) for any a1, a2 > 0 (such that the denominator polynomial is stable). Thus, both N and M are proper and stable, the identity G = N M^-1 holds, and

   N(s) = (s - 1)/(s + 4),   M(s) = (s - 3)/(s + 2)

is also a coprime factorization.

From the above example, we see that the coprime factorization is not unique. To obtain a coprime factorization, we first make all the RHP-poles of G zeros of M, and all the RHP-zeros of G zeros of N. We then allocate the poles of N and M so that N and M are proper and the identity G = N M^-1 holds. Usually, we select N and M to have the same poles as each other and the same order as G(s). This gives the most degrees of freedom subject to having a realization of [M(s) N(s)]^T with the lowest order.
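The factorization in Example 4.1 can be verified at an arbitrary point in the complex plane. The sketch below checks one valid choice of factors: N carries the RHP-zero at s = 1, M carries the RHP-pole at s = 3 as a zero, and G = N/M:

```python
# Verify G = N/M for G(s) = (s-1)(s+2)/((s-3)(s+4)) with
# N(s) = (s-1)/(s+4) and M(s) = (s-3)/(s+2), both stable and proper.
s0 = 0.7 + 0.3j   # arbitrary test point

G = (s0 - 1) * (s0 + 2) / ((s0 - 3) * (s0 + 4))
N = (s0 - 1) / (s0 + 4)
M = (s0 - 3) / (s0 + 2)
assert abs(G - N / M) < 1e-12
```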

Now introduce the operator M*(s) defined as M*(s) = M^T(-s) (which for s = jw is the same as the complex conjugate transpose M^H). Then G(s) = Nr(s) Mr^-1(s) is called a normalized right coprime factorization if

   Mr* Mr + Nr* Nr = I      (4.24)

In this case Gr(s) = [Mr; Nr] satisfies Gr* Gr = I and is called an inner transfer function. The normalized left coprime factorization G(s) = Ml^-1(s) Nl(s) is defined similarly, requiring that

   Ml Ml* + Nl Nl* = I      (4.25)

In this case Gl(s) = [Nl  Ml] is co-inner, which means Gl Gl* = I. The normalized coprime factorizations are unique to within a right (left) multiplication by a unitary matrix.

If G has a minimal state-space realization (A, B, C, D), then a minimal state-space realization of a normalized left coprime factorization is given (Vidyasagar, 1988) by

   [ Nl(s)  Ml(s) ] = ( A + HC ;  B + HD,  H ;  R^-1/2 C ;  R^-1/2 D,  R^-1/2 )      (4.26)

where

   H := -(B D^T + Z C^T) R^-1,   R := I + D D^T

and the matrix Z >= 0 is the unique positive definite solution to the algebraic Riccati equation

   (A - B S^-1 D^T C) Z + Z (A - B S^-1 D^T C)^T - Z C^T R^-1 C Z + B S^-1 B^T = 0

where S := I + D^T D. Notice that the formulas simplify considerably for a strictly proper plant, i.e. when D = 0.

Exercise 4.1 We want to find the normalized coprime factorization for the scalar system in (4.22). Let N and M be as given in (4.23), and substitute them into (4.24). Show that, after some algebra and comparing of terms, one obtains the values of a1 and a2.

To derive normalized coprime factorizations by hand, as in the above exercise, is in general difficult. Numerically, however, one can easily find a state-space realization, using the MATLAB file in Table 4.1.

Exercise 4.2 Verify numerically (e.g. using the MATLAB file in Table 4.1 or the mu-toolbox command sncfbal) that the normalized coprime factors of G(s) in (4.22) are as given in Exercise 4.1.
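The defining property (4.24) can be illustrated on a first-order system (my example, not the one from the text): for G(s) = b/(s + a), the factors N = b/(s + g) and M = (s + a)/(s + g) with g = sqrt(a^2 + b^2) are normalized, since |N(jw)|^2 + |M(jw)|^2 = (b^2 + w^2 + a^2)/(w^2 + g^2) = 1:

```python
import math

# Normalized coprime factors of G(s) = b/(s+a), a first-order illustration.
a, b = 2.0, 3.0
g = math.sqrt(a * a + b * b)
for w in (0.0, 0.5, 1.0, 10.0):
    s = 1j * w
    N = b / (s + g)
    M = (s + a) / (s + g)
    assert abs(N / M - b / (s + a)) < 1e-12            # N/M recovers G
    assert abs(abs(N)**2 + abs(M)**2 - 1.0) < 1e-12    # scalar version of (4.24)
```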

Table 4.1: MATLAB commands to generate a normalized coprime factorization

% Uses the Mu toolbox
% Find Normalized Coprime factors of system [a,b,c,d] using (4.26)
S = eye(size(d'*d)) + d'*d;
R = eye(size(d*d')) + d*d';
A1 = a - b*inv(S)*d'*c;
R1 = c'*inv(R)*c;
Q1 = b*inv(S)*b';
[z1, z2, fail, reig_min] = ric_schr([A1' -R1; -Q1 -A1]);
Z = z2/z1;
% Alternative: aresolv in Robust Control toolbox:
% [z1, z2, eig, zerr, zwellposed, Z] = aresolv(A1', Q1, R1);
H = -(b*d' + Z*c')*inv(R);
A = a + H*c;
Bn = b + H*d;  Bm = H;
C = inv(sqrtm(R))*c;
Dn = inv(sqrtm(R))*d;  Dm = inv(sqrtm(R));
N = pck(A, Bn, C, Dn);
M = pck(A, Bm, C, Dm);

4.1.6 More on state-space realizations

Inverse system. In some cases we may want to find a state-space description of the inverse of a system. For a square G(s) we have

   G^-1 = ( A - B D^-1 C ;  B D^-1 ;  -D^-1 C ;  D^-1 )      (4.27)

where D is assumed to be non-singular. For a non-square G(s) in which D has full row (or column) rank, a right (or left) inverse of G(s) can be found by replacing D^-1 by D^dagger, the pseudo-inverse of D. For a strictly proper system with D = 0, one may obtain an approximate inverse by including a small additional feed-through term D, preferably chosen on physical grounds. One should be careful, however, to select the signs of the terms in D such that one does not introduce RHP-zeros in G(s), because this will make G(s)^-1 unstable.

Improper systems. Improper transfer functions, where the order of the s-polynomial in the numerator exceeds that of the denominator, cannot be represented in standard state-space form. To approximate improper systems by state-space models, we can include some high-frequency dynamics which we know from physical considerations will have little significance.

Realization of SISO transfer functions. Transfer functions are a good way of representing systems because they give more immediate insight into a system's behaviour. However, for numerical calculations a state-space realization is usually desired. One way of obtaining a state-space realization from a SISO transfer function is given next. Consider a strictly proper transfer function (D = 0) of the form

   G(s) = ( beta_{n-1} s^{n-1} + ... + beta_1 s + beta_0 ) / ( s^n + a_{n-1} s^{n-1} + ... + a_1 s + a_0 )      (4.28)
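Formula (4.27) is easy to check in the scalar case; the sketch below uses arbitrary illustration values and confirms that the proposed realization of the inverse really satisfies G(s) * G^-1(s) = 1:

```python
# Scalar check of (4.27): with G = (A,B,C,D) and D invertible, the realization
# (A - B*D^-1*C, B*D^-1, -D^-1*C, D^-1) represents G^-1.
A, B, C, D = -2.0, 1.0, 3.0, 0.5

def tf(a, b, c, d, s):
    """Transfer function c*(s - a)^(-1)*b + d of a one-state system."""
    return c * b / (s - a) + d

s0 = 0.4 + 1.1j   # arbitrary test point
G    = tf(A, B, C, D, s0)
Ginv = tf(A - B * C / D, B / D, -C / D, 1.0 / D, s0)
assert abs(G * Ginv - 1.0) < 1e-12
```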

Then, since multiplication by s corresponds to differentiation in the time domain, (4.28) and the relationship y(s) = G(s) u(s) correspond to the following differential equation:

   y^(n)(t) + a_{n-1} y^(n-1)(t) + ... + a_1 y'(t) + a_0 y(t) = beta_{n-1} u^(n-1)(t) + ... + beta_1 u'(t) + beta_0 u(t)

where y^(n-1)(t) and u^(n-1)(t) represent (n-1)'th order derivatives, etc. We can further write this as

   y^(n) = ( -a_{n-1} y^(n-1) + beta_{n-1} u^(n-1) ) + ... + ( -a_1 y' + beta_1 u' ) + ( -a_0 y + beta_0 u )

where we have introduced new state variables x1, x2, ..., xn with y = x1. The following state-space equations result:

   x1'     = -a_{n-1} x1 + x2      + beta_{n-1} u
   ...
   x_{n-1}' = -a_1 x1    + x_n     + beta_1 u
   x_n'    = -a_0 x1               + beta_0 u

corresponding to the realization

   A = [ -a_{n-1}  1  0 ... 0
         -a_{n-2}  0  1 ... 0
          ...
         -a_1      0  0 ... 1
         -a_0      0  0 ... 0 ],   B = [ beta_{n-1}; beta_{n-2}; ...; beta_1; beta_0 ],   C = [ 1  0 ... 0  0 ]      (4.29)

This is called the observer canonical form. Two advantages of this realization are that one can obtain the elements of the matrices directly from the transfer function, and that the output y is simply equal to the first state.

Example 4.2 To obtain the state-space realization, in observer canonical form, of the SISO transfer function G(s) = (s - a)/(s + a), we first bring out a constant term by division to get

   G(s) = (s - a)/(s + a) = -2a/(s + a) + 1

Thus D = 1. For the strictly proper term -2a/(s + a) we get from (4.28) that beta_0 = -2a and a_0 = a, and therefore (4.29) yields A = -a, B = -2a and C = 1.

Notice that if the transfer function is not strictly proper, then we must first bring out the constant term, i.e. write G(s) = G1(s) + D where G1 is strictly proper, and then find the realization of G1(s) using (4.29).
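A small check of the observer canonical form (4.29) for n = 2, with assumed coefficient values: build A, B, C from the transfer function coefficients and confirm that C(sI - A)^-1 B reproduces the polynomial ratio.

```python
# Observer canonical form for G(s) = (b1*s + b0)/(s^2 + a1*s + a0):
#   A = [[-a1, 1], [-a0, 0]],  B = [b1, b0]^T,  C = [1, 0]
a1, a0 = 3.0, 2.0
b1, b0 = 1.0, 5.0
s0 = 0.9 + 0.4j   # arbitrary test point

# Invert the 2x2 matrix (s0*I - A) = [[s0+a1, -1], [a0, s0]] directly:
m11, m12 = s0 + a1, -1.0
m21, m22 = a0, s0
det = m11 * m22 - m12 * m21          # = s0^2 + a1*s0 + a0 (pole polynomial)
x1 = (m22 * b1 - m12 * b0) / det     # first entry of (s0*I - A)^(-1) B
G_ss = x1                            # C = [1, 0] picks out the first state
G_tf = (b1 * s0 + b0) / (s0 * s0 + a1 * s0 + a0)
assert abs(G_ss - G_tf) < 1e-12
```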

Example 4.3 Consider an ideal PID controller

   K(s) = Kc ( 1 + 1/(tauI s) + tauD s ) = Kc ( tauI tauD s^2 + tauI s + 1 ) / (tauI s)      (4.30)

Since this involves differentiation of the input, it is an improper transfer function and cannot be written in state-space form. A proper PID controller may be obtained by letting the derivative action be effective over a limited frequency range. For example,

   K(s) = Kc ( 1 + 1/(tauI s) + tauD s / (1 + eps tauD s) )      (4.31)

where eps is typically 0.1 or less. This can now be realized in state-space form in an infinite number of ways. Four common forms are given in (4.33)-(4.36): the diagonalized form (Jordan canonical form), the observability canonical form, the observer canonical form of (4.29), and the controllability canonical form. In all cases, the D-matrix, which represents the controller gain at high frequencies (s -> inf), is a scalar given by

   D = Kc (1 + eps) / eps      (4.32)

On comparing these four realizations with the transfer function model in (4.31), it is clear that the transfer function offers more immediate insight. One can at least see that it is a PID controller.

Time delay. A time delay (or dead time) is an infinite-dimensional system and not representable as a rational transfer function. For a state-space realization it must therefore be approximated. An n'th order approximation of a time delay theta may be obtained by putting n first-order Pade approximations in series:

   e^{-theta s} ~ ( 1 - (theta/2n) s )^n / ( 1 + (theta/2n) s )^n      (4.37)
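The accuracy of the Pade approximation (4.37) can be probed on the imaginary axis; the delay and frequency below are assumed illustration values. Both sides have unit magnitude (the approximation is all-pass), so the error is a phase error, which shrinks as the order n increases:

```python
import cmath

theta = 1.0   # assumed delay [s]

def pade(s, n):
    """n'th-order Pade approximation of exp(-theta*s), as in (4.37)."""
    return ((1 - theta * s / (2 * n)) / (1 + theta * s / (2 * n))) ** n

s = 1j * 1.0                      # a frequency well below 2n/theta
exact = cmath.exp(-theta * s)
err1 = abs(pade(s, 1) - exact)    # first-order approximation
err4 = abs(pade(s, 4) - exact)    # fourth-order approximation
assert err1 < 0.1                 # already close at low frequency
assert err4 < err1                # higher order improves the fit
```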

Alternative (and possibly better) approximations are in use, but the above approximation is often preferred because of its simplicity.

4.2 State controllability and state observability

Definition 4.1 State controllability. The dynamical system xdot = A x + B u, or equivalently the pair (A,B), is said to be state controllable if, for any initial state x(0) = x0, any time t1 > 0 and any final state x1, there exists an input u(t) such that x(t1) = x1. Otherwise the system is said to be state uncontrollable.

A mode is called uncontrollable if none of the inputs can excite the mode; the system is thus (state) controllable if and only if all its modes are controllable. For the case when A has distinct eigenvalues, we have from (4.16) the following dyadic expansion of the transfer function matrix from inputs to outputs:

   G(s) = Sum_{i=1}^{n} (C t_i q_i^H B) / (s - lambda_i) + D = Sum_{i=1}^{n} (y_pi u_pi^H) / (s - lambda_i) + D      (4.38)

From this we see that the i'th input pole vector (Havre, 1998)

   u_pi := B^H q_i      (4.39)

is an indication of how much the i'th mode is excited (and thus may be "controlled") by the inputs. Similarly, the i'th output pole vector

   y_pi := C t_i      (4.40)

indicates how much the i'th mode is observed in the outputs. Thus, the pole vectors may be used for checking the state controllability and observability of a system. Let lambda_i be the i'th eigenvalue of A, q_i the corresponding left eigenvector, q_i^H A = lambda_i q_i^H, and u_pi = B^H q_i the i'th input pole vector. Then the system (A,B) is state controllable if and only if u_pi != 0 for all i. In words, a system is state controllable if and only if all its input pole vectors are nonzero; from (4.38), the i'th mode is uncontrollable if and only if the i'th input pole vector is zero. This is explained in more detail below.

There exist many other tests for state controllability. Two of these are:

1. The system (A,B) is state controllable if and only if the controllability matrix

   C = [ B  AB  A^2 B ... A^{n-1} B ]      (4.41)

has rank n (full row rank). Here n is the number of states.

2. From (4.6) one can verify that a particular input which achieves x(t1) = x1 is

   u(t) = -B^T e^{A^T (t1 - t)} Wc(t1)^-1 ( e^{A t1} x0 - x1 )      (4.42)

where Wc(t) is the Gramian matrix at time t,

   Wc(t) := Int_0^t e^{A tau} B B^T e^{A^T tau} dtau      (4.43)

Therefore, the system (A,B) is state controllable if and only if the Gramian matrix Wc(t) has full rank (and thus is positive definite) for any t > 0. For a stable system (A is stable) we only need to consider P := Wc(inf); that is, the pair (A,B) is state controllable if and only if the controllability Gramian P is positive definite (P > 0) and thus has full rank n. P may also be obtained as the solution to the Lyapunov equation

   A P + P A^T = -B B^T      (4.44)

Example 4.4 Consider a scalar system with two states and the following state-space realization:

   A = [ -2  -2
          0  -4 ],   B = [ 1 ; 1 ],   C = [ 1  0 ],   D = 0

The transfer function (minimal realization) is

   G(s) = C (sI - A)^-1 B = 1/(s + 4)

which has only one state. This is verified by considering state controllability:

1. The eigenvalues of A are lambda_1 = -2 and lambda_2 = -4, and the corresponding left eigenvectors are q1 = [0.707; -0.707] and q2 = [0; 1]. The two input pole vectors are

   u_p1 = B^H q1 = 0,   u_p2 = B^H q2 = 1

and since u_p1 is zero, the first mode (eigenvalue) is not state controllable; that is, the first state, corresponding to the eigenvalue at -2, is not controllable.

2. The controllability matrix has rank 1 since it has two linearly dependent rows:

   C = [ B  AB ] = [ 1  -4
                     1  -4 ]

3. The controllability Gramian is also singular:

   P = [ 0.125  0.125
         0.125  0.125 ]
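Example 4.4 can be reproduced in a few lines (the left eigenvectors are entered by hand, unnormalized, which does not affect whether a pole vector is zero):

```python
# Example 4.4: A = [[-2, -2], [0, -4]], B = [1, 1]^T.
A = [[-2.0, -2.0],
     [ 0.0, -4.0]]
B = [1.0, 1.0]

# Controllability matrix [B, A*B]:
AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]
det_ctrb = B[0] * AB[1] - B[1] * AB[0]           # det of [B, AB]
assert AB == [-4.0, -4.0] and det_ctrb == 0.0    # rank 1 < n = 2

# Left eigenvectors of A (A^T q = lambda*q), unnormalized:
# q1 = [1, -1]^T for lambda = -2, q2 = [0, 1]^T for lambda = -4.
up1 = B[0] * 1.0 + B[1] * (-1.0)   # input pole vector of the mode at -2
up2 = B[0] * 0.0 + B[1] * 1.0      # input pole vector of the mode at -4
assert up1 == 0.0                  # mode at -2 is not state controllable
assert up2 == 1.0                  # mode at -4 is state controllable
```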

In words, if a system is state controllable, we can by use of its inputs $u$ bring it from any initial state to any final state within any given finite time. State controllability would therefore seem to be an important property for practical control, but it rarely is, for the following four reasons:

1. It says nothing about how the states behave at earlier and later times, e.g. it does not imply that one can hold (as $t \to \infty$) the states at a given value.
2. The required inputs may be very large with sudden changes.
3. Some of the states may be of no practical importance.
4. The definition is an existence result which provides no degree of controllability (see Hankel singular values for this).

The first two objections are illustrated in the following example.

Example 4.5 State controllability of tanks in series. Consider a system with one input and four states arising from four first-order systems in series:
$$G(s) = \frac{1}{(\tau s + 1)^4}$$
A physical example could be four identical tanks (e.g. bath tubs) in series, where water flows from one tank to the next.

[Figure 4.1: State controllability of four first-order systems in series. (a) Input trajectory $T_0(t)$ to give the desired state at $t = 400$ s. (b) Response of states (tank temperatures $T_1, T_2, T_3, T_4$).]

The model is
$$T_4 = \frac{1}{\tau s + 1} T_3, \quad T_3 = \frac{1}{\tau s + 1} T_2, \quad T_2 = \frac{1}{\tau s + 1} T_1, \quad T_1 = \frac{1}{\tau s + 1} T_0$$
where $T_0$ is the inlet temperature, the states $x = [\,T_1\;T_2\;T_3\;T_4\,]^T$ are the four tank temperatures, and $\tau = 100$ s is the residence time in each tank. Energy balances, assuming no heat loss, yield the state-space realization
$$A = \begin{bmatrix} -0.01 & 0 & 0 & 0 \\ 0.01 & -0.01 & 0 & 0 \\ 0 & 0.01 & -0.01 & 0 \\ 0 & 0 & 0.01 & -0.01 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.01 \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{4.45}$$
The controllability matrix (4.41) has full rank, so the system is state controllable, and it must be possible to achieve at any given time any desired temperature in each of the four tanks simply by adjusting the inlet temperature. Of course, with only a single input we know that it is very difficult to control the four temperatures independently, so this sounds almost too good to be true; let us therefore consider a specific case.

Assume that the system is initially at steady-state (all temperatures are zero), and that we want to achieve at $t = 400$ s the following temperatures: $T_1(400) = 1$, $T_2(400) = -1$, $T_3(400) = 1$ and $T_4(400) = -1$. The input $u = T_0$ to achieve this was computed from (4.42) and is shown as a function of time in Figure 4.1(a). The corresponding tank temperatures are shown in Figure 4.1(b). Two things are worth noting:

1. The required change in inlet temperature is more than 100 times larger than the desired temperature changes in the tanks, and it also varies widely with time.
2. Although the states (tank temperatures $T_i$) are indeed at their desired values of $\pm 1$ at $t = 400$ s, it is not possible to hold them at these values, since at steady-state all temperatures must be equal (all states approach 0 in this case, since $u = T_0$ is reset to 0 at $t = 400$ s).

It is quite easy to explain the shape of the input $T_0(t)$: the fourth tank is furthest away, and we want its temperature to decrease ($T_4(400) = -1$), so the inlet temperature $T_0$ is initially decreased. Then, since $T_3(400) = 1$ is positive, $T_0$ is increased (to about 30 at $t \approx 220$ s); it is subsequently decreased, since $T_2(400) = -1$, and finally increased to more than 100 to achieve $T_1(400) = 1$.

From the above example, we see clearly that the property of state controllability may not imply that the system is "controllable" in a practical sense. This is because state controllability is concerned only with the value of the states at discrete values of time (target hitting), while in most cases we want the outputs to remain close to some desired value (or trajectory) for all values of time, and without using inappropriate control signals. In Chapter 5, we introduce a more practical concept of controllability, which we call "input-output controllability".

So now we know that state controllability does not imply that the system is controllable from a practical point of view. But what about the reverse: if we do not have state controllability, is this an indication that the system is not controllable in a practical sense? In other words, should we be concerned if a system is not state controllable? In many cases the answer is "no", since we may not be concerned with the behaviour of the uncontrollable states, which may be outside our system boundary or of no practical importance. If we are indeed concerned about these states, then they should be included in the output set $y$. State uncontrollability will then appear as a rank deficiency in the transfer function matrix $G(s)$ (see functional controllability).

So is the issue of state controllability of any value at all? Yes, because it tells us whether we have included some states in our model which we have no means of affecting. It also tells us when we can save on computer time by deleting uncontrollable states which have no effect on the output for a zero initial state. In summary, state controllability is a system-theoretical concept which is important when it comes to computations and realizations. However, its name is somewhat misleading, and most of the above discussion might have been avoided if only Kalman, who originally defined state controllability, had used a different terminology. For example, better terms might have been "point-wise controllability" or "state affect-ability", from which it would have been understood that although all the states could be individually affected, we might not be able to control them independently over a period of time.

Definition 4.2 State observability. The dynamical system $\dot{x} = Ax + Bu$, $y = Cx + Du$ (or the pair $(A,C)$) is said to be state observable if, for any time $t_1 > 0$, the initial state $x(0) = x_0$ can be determined from the time history of the input $u(t)$ and the output $y(t)$ in the interval $[0, t_1]$. Otherwise the system, or $(A,C)$, is said to be state unobservable.

From (4.38) we have: Let $\lambda_i$ be the $i$'th eigenvalue of $A$, $t_i$ the corresponding (right) eigenvector, and $y_{pi} = C t_i$ the $i$'th output pole vector. Then the system $(A,C)$ is state observable if and only if $y_{pi} \neq 0$ for all $i$.

In words, a system is state observable if and only if all its output pole vectors are nonzero. Two other tests for state observability are:

1. The system $(A,C)$ is state observable if and only if we have full column rank (rank $n$) of the observability matrix
$$\mathcal{O} \triangleq \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} \tag{4.46}$$

2. For a stable system we may consider the observability Gramian
$$Q \triangleq \int_0^\infty e^{A^T\tau} C^T C\, e^{A\tau}\, d\tau \tag{4.47}$$
which must have full rank $n$ (and thus be positive definite) for the system to be state observable. $Q$ can also be found as the solution to the following Lyapunov equation:
$$A^T Q + Q A = -C^T C \tag{4.48}$$

A system is state observable if we can obtain the value of all individual states by measuring the output $y(t)$ over some time period. In practice, state observability is important for measurement selection and when designing state estimators (observers). However, even if a system is state observable, it may not be observable in a practical sense. For example, obtaining $x(0)$ may require taking high-order derivatives of $y(t)$, which may be numerically poor and sensitive to noise. This is illustrated in the following example.

Example 4.5 (tanks in series) continued. If we define $y = T_4$ (the temperature of the last tank), then $C = [\,0\;\;0\;\;0\;\;1\,]$, and we find that the observability matrix has full column rank, so all states are observable from $y$. However, consider a case where the initial temperatures in the tanks, $T_1(0), \ldots, T_4(0)$, are non-zero (and unknown), and the inlet temperature $T_0(t) = u(t)$ is zero for $t \geq 0$. Then, although in theory all states are observable from the output, from a practical point of view it is clear that it is numerically very difficult to back-calculate, for example, $T_1(0)$ based on measurements of $y(t) = T_4(t)$ over some interval $[0, t_1]$.

Definition 4.3 Minimal realization, McMillan degree and hidden mode. A state-space realization $(A,B,C,D)$ of $G(s)$ is said to be a minimal realization of $G(s)$ if $A$ has the smallest possible dimension (i.e. the fewest number of states). The smallest dimension is called the McMillan degree of $G(s)$. A mode is hidden if it is not state controllable or observable, and thus does not appear in the minimal realization.

Since only controllable and observable states contribute to the input-output behaviour from $u$ to $y$, it follows that a state-space realization is minimal if and only if $(A,B)$ is state controllable and $(A,C)$ is state observable.

Remark 1 Note that uncontrollable states will contribute to the output response $y(t)$ if the initial state is nonzero, $x(0) \neq 0$, but this effect will die out if the uncontrollable states are stable.

Remark 2 Unobservable states have no effect on the outputs whatsoever, and may be viewed as outside the system boundary, and thus of no direct interest from a control point of view (unless the unobservable state is unstable, because we want to avoid the system "blowing up").

4.3 Stability

There are a number of ways in which stability may be defined, see Willems (1970). Fortunately, for linear time-invariant systems these differences have no practical significance, and we use the following definition:

Definition 4.4 A system is (internally) stable if none of its components contain hidden unstable modes and the injection of bounded external signals at any place in the system results in bounded output signals measured anywhere in the system.
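This characterization of minimality can be checked numerically. A sketch for the realization in Example 4.4, where the ranks of the controllability and observability matrices bound the McMillan degree from above:

```python
# Minimality check for Example 4.4: the realization has two states, but
# only one of them is controllable, so the McMillan degree is 1.
import numpy as np

A = np.array([[-2.0, -2.0], [0.0, -4.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

n_ctrb = np.linalg.matrix_rank(np.hstack([B, A @ B]))   # 1
n_obsv = np.linalg.matrix_rank(np.vstack([C, C @ A]))   # 2
print(n_ctrb, n_obsv)
print(min(n_ctrb, n_obsv))   # upper bound on the McMillan degree: here 1,
                             # matching the minimal realization of 1/(s+4)
```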

We here define a signal $u(t)$ to be "bounded" if there exists a constant $c$ such that $|u(t)| < c$ for all $t$. The word internally is included in the definition to stress that we do not only require the response from one particular input to another particular output to be stable, but require stability for signals injected or measured at any point of the system. Similarly, the components must contain no hidden unstable modes; that is, any instability in the components must be contained in their input-output behaviour. This is discussed in more detail for feedback systems in Section 4.7.

Definition 4.5 Stabilizable, detectable and hidden unstable modes. A system is (state) stabilizable if all unstable modes are state controllable. A system is (state) detectable if all unstable modes are state observable. A system with unstabilizable or undetectable modes is said to contain hidden unstable modes.

A linear system $(A,B)$ is stabilizable if and only if all input pole vectors $u_{pi}$ associated with the unstable modes are nonzero. A linear system $(A,C)$ is detectable if and only if all output pole vectors $y_{pi}$ associated with the unstable modes are nonzero.

Remark 1 Any unstable linear system can be stabilized by feedback control (at least in theory), provided the system contains no hidden unstable mode(s). However, this may require an unstable controller; see also the remark on page 185.

Remark 2 Systems with hidden unstable modes must be avoided both in practice and in computations (since variables will eventually blow up on our computer, if not on the factory floor). If a system is not detectable, then there is a state within the system which will eventually grow out of bounds, but we have no way of observing this from the outputs $y(t)$. In the book we always assume, unless otherwise stated, that our systems contain no hidden unstable modes.

4.4 Poles

For simplicity, we here define the poles of a system in terms of the eigenvalues of the state-space $A$-matrix.

Definition 4.6 Poles. The poles $p_i$ of a system with state-space description (4.3)-(4.4) are the eigenvalues $\lambda_i(A)$, $i = 1, \ldots, n$, of the matrix $A$. The pole or characteristic polynomial $\phi(s)$ is defined as
$$\phi(s) \triangleq \det(sI - A) = \prod_{i=1}^{n}(s - p_i)$$
Thus the poles are the roots of the characteristic equation
$$\phi(s) \triangleq \det(sI - A) = 0 \tag{4.49}$$

To see that this definition is reasonable, recall (4.15). More generally, the poles of $G(s)$ may be somewhat loosely defined as the finite values $s = p$ where $G(p)$ has a singularity ("is infinite"); see also Theorem 4.2 below. Note that if $A$ does not correspond to a minimal realization, then the poles by this definition will include the poles (eigenvalues) corresponding to uncontrollable and/or unobservable states.

4.4.1 Poles and stability

For linear systems, the poles determine stability:

Theorem 4.1 A linear dynamic system $\dot{x} = Ax + Bu$ is stable if and only if all the poles are in the open left-half plane (LHP); that is, $\mathrm{Re}\{\lambda_i(A)\} < 0$ for all $i$. A matrix $A$ with such a property is said to be "stable" or Hurwitz.

Proof: From (4.7) we see that the time response (4.6) can be written as a sum of terms, each containing a mode $e^{\lambda_i t}$. Eigenvalues in the open LHP give rise to stable modes, since in this case $e^{\lambda_i t} \to 0$ as $t \to \infty$. Eigenvalues in the RHP with $\mathrm{Re}\{\lambda_i\} > 0$ give rise to unstable modes, since $e^{\lambda_i t}$ is unbounded as $t \to \infty$.

Systems with poles on the $j\omega$-axis, including integrators, are unstable from our Definition 4.4 of stability. For example, consider $y = Gu$ and assume $G(s)$ has imaginary poles $s = \pm j\omega_0$. Then with the bounded sinusoidal input $u(t) = \sin \omega_0 t$, the output $y(t)$ grows unbounded as $t \to \infty$.

4.4.2 Poles from state-space realizations

Poles are usually obtained numerically by computing the eigenvalues of the $A$-matrix. To get the fewest number of poles, we should use a minimal realization of the system.

4.4.3 Poles from transfer functions

The following theorem from MacFarlane and Karcanias (1976) allows us to obtain the poles directly from the transfer function matrix $G(s)$ and is also useful for hand calculations. It also has the advantage of yielding only the poles corresponding to a minimal realization of the system.

Theorem 4.2 The pole polynomial $\phi(s)$ corresponding to a minimal realization of a system with transfer function $G(s)$ is the least common denominator of all non-identically-zero minors of all orders of $G(s)$.

A minor of a matrix is the determinant of the matrix obtained by deleting certain rows and/or columns of the matrix. We will use the notation $M_r^c$ to denote the minor corresponding to the deletion of rows $r$ and columns $c$ in $G(s)$. In the procedure defined by the theorem, we cancel common factors in the numerator and denominator of each minor. It then follows that only observable and controllable poles will appear in the pole polynomial.

Example 4.6 Consider the plant $g(s) = (3s+1)^2 e^{-s}/(s+1)$, which has no state-space realization, as it contains a delay and is also improper. Thus, we cannot compute the poles from (4.49).
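Theorem 4.1 translates directly into a numerical stability test (a sketch):

```python
# Stability test: all eigenvalues of A must lie in the open left-half plane.
import numpy as np

def is_hurwitz(A):
    """Return True if A is Hurwitz (all eigenvalues have Re < 0)."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_hurwitz(np.array([[-2.0, -2.0], [0.0, -4.0]])))  # True
print(is_hurwitz(np.array([[0.0, 1.0], [0.0, 0.0]])))     # False: double
                                       # integrator, poles on the jw-axis
```

Note the strict inequality: poles on the $j\omega$-axis, such as integrators, are classified as unstable, consistent with Definition 4.4.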

However, from Theorem 4.2, the least common denominator of the minors is $s+1$, and as expected $g(s)$ has a pole at $s = -1$.

Example 4.7 Consider the square transfer function matrix
$$G(s) = \frac{1}{(s+1)(s+2)} \begin{bmatrix} s-1 & s \\ -6 & s-2 \end{bmatrix} \tag{4.50}$$
The minors of order 1 are the four elements, which all have $(s+1)(s+2)$ in the denominator. The minor of order 2 is the determinant
$$\det G(s) = \frac{(s-1)(s-2) + 6s}{(s+1)^2(s+2)^2} = \frac{1}{(s+1)(s+2)} \tag{4.51}$$
Note the pole-zero cancellation when evaluating the determinant. The least common denominator of all the minors is then
$$\phi(s) = (s+1)(s+2) \tag{4.52}$$
so a minimal realization of the system has two poles: one at $s = -1$ and one at $s = -2$.

Example 4.8 Consider the $2 \times 3$ system, with 3 inputs and 2 outputs,
$$G(s) = \frac{1}{(s+1)(s+2)(s-1)} \begin{bmatrix} (s-1)(s+2) & 0 & (s-1)^2 \\ -(s+1)(s+2) & (s-1)(s+1) & (s-1)(s+1) \end{bmatrix} \tag{4.53}$$
The minors of order 1 are the five non-zero elements, e.g.
$$g_{11}(s) = \frac{1}{s+1}$$
The minor of order 2 corresponding to the deletion of column 2 is
$$M^2 = \frac{(s-1)(s+2)(s-1)(s+1) + (s+1)(s+2)(s-1)^2}{\big((s+1)(s+2)(s-1)\big)^2} = \frac{2}{(s+1)(s+2)} \tag{4.54}$$
The other two minors of order 2 are
$$M^1 = \frac{-(s-1)}{(s+1)(s+2)^2} \tag{4.55}$$
$$M^3 = \frac{1}{(s+1)(s+2)} \tag{4.56}$$
By considering all minors, we find their least common denominator to be
$$\phi(s) = (s+1)(s+2)^2(s-1) \tag{4.57}$$
The system therefore has four poles: one at $s = -1$, one at $s = 1$ and two at $s = -2$.

From the above examples, we see that the MIMO poles are essentially the poles of the elements. However, by looking at only the elements it is not possible to determine the multiplicity of the poles. For instance, let $G_0(s)$ be a square $m \times m$ transfer function matrix with no pole at $s = -a$, and consider
$$G(s) = \frac{1}{s+a} G_0(s) \tag{4.58}$$
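Theorem 4.2 can be mechanized with a computer algebra system. A sketch for Example 4.7 using sympy: each minor is reduced to lowest terms with `cancel` and the least common multiple of the denominators is taken.

```python
# Pole polynomial from minors (Theorem 4.2) for the plant in (4.50).
from functools import reduce
import sympy as sp

s = sp.symbols('s')
G = sp.Matrix([[s - 1, s], [-6, s - 2]]) / ((s + 1)*(s + 2))

# All non-identically-zero minors: the four elements and the determinant,
# with common factors cancelled in each minor.
minors = [sp.cancel(e) for e in G] + [sp.cancel(G.det())]
phi = reduce(sp.lcm, [sp.denom(m) for m in minors])
print(sp.factor(phi))    # (s + 1)*(s + 2): poles at -1 and -2
```

The pole-zero cancellation in the determinant happens automatically inside `cancel`, which is exactly the adjustment the theorem requires.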

Then
$$\det G(s) = \det\left(\frac{1}{s+a} G_0(s)\right) = \frac{1}{(s+a)^m} \det G_0(s) \tag{4.59}$$
How many poles at $s = -a$ does a minimal realization of $G(s)$ have? For a $2 \times 2$ plant, it may have two poles at $s = -a$ (if $\det G_0(s)$ has no zeros at $s = -a$), one pole at $s = -a$ (as for $s = -2$ in (4.50), where $\det G_0(s)$ has a zero at $s = -a$), or no pole at $s = -a$ (if all the elements of $G_0(s)$ have a zero at $s = -a$).

As noted above, to compute the poles of a transfer function $G(s)$, we must first obtain a state-space realization of the system. Preferably, this should be a minimal realization. For example, if we make individual realizations of the five non-zero elements in Example 4.8 and then simply combine them to get an overall state-space realization, we will get a system with 15 states, where each of the three poles (in the common denominator) is repeated five times. A model reduction to obtain a minimal realization will subsequently yield a system with four poles, as given in (4.57).

4.4.4 Pole vectors and directions

In multivariable systems, poles have directions associated with them. To quantify this, we use the input and output pole vectors introduced in (4.39) and (4.40):
$$u_{pi} = B^H q_i, \qquad y_{pi} = C t_i \tag{4.60}$$
The pole directions are the directions of the pole vectors, normalized to have unit length; that is, the $i$'th output pole direction is $y_{pi}/\|y_{pi}\|_2$, where $y_{pi}$ is computed from (4.60), and similarly for the input pole direction.

The pole directions may also be defined in terms of the transfer function matrix, by evaluating $G(s)$ at the pole $p_i$ and considering the directions of the resulting complex matrix $G(p_i)$. The matrix is infinite in the direction of the pole, and we may somewhat crudely write
$$G(p_i)\, u_p = \infty \cdot y_p \tag{4.61}$$
where $u_p$ is the input pole direction and $y_p$ is the output pole direction. The pole directions may in principle be obtained from an SVD of $G(p_i) = U \Sigma V^H$: then $u_p$ is the first column in $V$ (corresponding to the infinite singular value), and $y_p$ is the first column in $U$.

Remark 1 As already mentioned, the poles are obtained numerically by computing the eigenvalues of the $A$-matrix. If $u_{pi} = B^H q_i = 0$, then the corresponding pole is not state controllable, and if $y_{pi} = C t_i = 0$, then the corresponding pole is not state observable (see also Zhou et al. (1996)).

Remark 2 The pole vectors provide a very useful tool for selecting inputs and outputs for stabilization. For a single unstable mode, selecting the input corresponding to the largest element in $u_p$ and the output corresponding to the largest element in $y_p$ minimizes the input usage required for stabilization. In fact, this choice minimizes the lower bound on both the $\mathcal{H}_2$- and $\mathcal{H}_\infty$-norms of the transfer function $KS$ from measurement (output) noise to input (Havre, 1998; Havre and Skogestad, 1998b).

4.5 Zeros

Zeros of a system arise when competing effects, internal to the system, are such that the output is zero even when the inputs (and the states) are not themselves identically zero. For a SISO system, the zeros $z_i$ are the solutions to $G(z_i) = 0$. In general, it can be argued that zeros are values of $s$ at which $G(s)$ loses rank (from rank 1 to rank 0 for a SISO system). This is the basis for the following definition of zeros for a multivariable system (MacFarlane and Karcanias, 1976):

Definition 4.7 Zeros. $z_i$ is a zero of $G(s)$ if the rank of $G(z_i)$ is less than the normal rank of $G(s)$. The zero polynomial is defined as $z(s) = \prod_{i=1}^{n_z}(s - z_i)$, where $n_z$ is the number of finite zeros of $G(s)$.

In this definition, we require that $z_i$ is finite; in this book, we do not consider zeros at infinity. The normal rank of $G(s)$ is defined as the rank of $G(s)$ at all values of $s$ except at a finite number of singularities (which are the zeros). This definition of zeros is based on the transfer function matrix, corresponding to a minimal realization of a system. These zeros are sometimes called "transmission zeros", but we will simply call them "zeros". We may sometimes use the term "multivariable zeros" to distinguish them from the zeros of the elements of the transfer function matrix.

4.5.1 Zeros from state-space realizations

Zeros are usually computed from a state-space description of the system. First note that the state-space equations of a system may be written as
$$P(s) \begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \qquad P(s) = \begin{bmatrix} sI - A & -B \\ C & D \end{bmatrix} \tag{4.62}$$
The zeros are then the values $s = z$ for which the polynomial system matrix $P(s)$ loses rank, resulting in zero output for some non-zero input. Numerically, the zeros are found as non-trivial solutions (with $u_z \neq 0$ and $x_z \neq 0$) to the following problem:
$$(z I_g - M) \begin{bmatrix} x_z \\ u_z \end{bmatrix} = 0 \tag{4.63}$$
$$M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \qquad I_g = \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} \tag{4.64}$$
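The computation in (4.63)-(4.64) can be sketched numerically. The scalar plant below, $g(s) = (s-1)/(s+2) = 1 - 3/(s+2)$ with realization $A=-2$, $B=1$, $C=-3$, $D=1$, is a hypothetical example chosen for illustration; the infinite generalized eigenvalues (caused by the singular $I_g$) are discarded.

```python
# Zeros from the generalized eigenvalue problem (z*Ig - M) v = 0.
import numpy as np
from scipy.linalg import eig

# Hypothetical example: g(s) = (s - 1)/(s + 2) = 1 - 3/(s + 2)
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])

n, m = A.shape[0], B.shape[1]
M = np.block([[A, B], [C, D]])
Ig = np.block([[np.eye(n), np.zeros((n, m))],
               [np.zeros((m, n)), np.zeros((m, m))]])

w = eig(M, Ig, right=False)
zeros = [z.real for z in w if np.isfinite(z) and abs(z) < 1e6]
print(zeros)   # the RHP-zero of g(s) at s = 1
```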

This is solved as a generalized eigenvalue problem; in the conventional eigenvalue problem, we have $I_g = I$. Note that we usually get additional zeros if the realization is not minimal.

4.5.2 Zeros from transfer functions

The following theorem from MacFarlane and Karcanias (1976) is useful for hand calculating the zeros of a transfer function matrix $G(s)$.

Theorem 4.3 The zero polynomial $z(s)$, corresponding to a minimal realization of the system, is the greatest common divisor of all the numerators of all order-$r$ minors of $G(s)$, where $r$ is the normal rank of $G(s)$, provided that these minors have been adjusted in such a way as to have the pole polynomial $\phi(s)$ as their denominators.

Example 4.7 continued. Consider again the $2 \times 2$ system in (4.50), where $\det G(s)$ in (4.51) already has $\phi(s)$ as its denominator. Thus, the zero polynomial is given by the numerator of (4.51), which is 1, and we find that the system has no multivariable zeros.

Example 4.9 Consider the $2 \times 2$ transfer function matrix
$$G(s) = \frac{1}{s+2} \begin{bmatrix} s-1 & 4 \\ 4.5 & 2(s-1) \end{bmatrix} \tag{4.65}$$
The normal rank of $G(s)$ is 2, and the minor of order 2 is the determinant,
$$\det G(s) = \frac{2(s-1)^2 - 18}{(s+2)^2} = 2\,\frac{s-4}{s+2}$$
From Theorem 4.2, the pole polynomial is $\phi(s) = s+2$, and therefore the zero polynomial is $z(s) = s-4$. Thus, $G(s)$ has a single RHP-zero at $s = 4$. This illustrates that, in general, multivariable zeros have no relationship with the zeros of the transfer function elements.

In general, for a square $2 \times 2$ system to have a zero, there must be a value of $s$ for which the two columns in $G(s)$ are linearly dependent. On the other hand, for a $2 \times 3$ system to have a zero, we need all three columns in $G(s)$ to be linearly dependent at some value of $s$. Thus, non-square systems are less likely to have zeros than square systems. The next two examples consider non-square systems.

Example 4.10 Consider the $1 \times 2$ system
$$G(s) = \begin{bmatrix} \dfrac{s-1}{s+1} & \dfrac{s-2}{s+2} \end{bmatrix} \tag{4.66}$$
The normal rank of $G(s)$ is 1, and since there is no value of $s$ for which both elements become zero, $G(s)$ has no zeros.

The following is an example of a non-square system which does have a zero.
.2 Zeros from transfer functions The following theorem from MacFarlane and Karcanias (1976) is useful for hand calculating the zeros of a transfer function matrix ´×µ.

Example 4.8 continued. Consider again the $2 \times 3$ system in (4.53), and adjust the minors of order 2 in (4.54)-(4.56) so that their denominators are $\phi(s) = (s+1)(s+2)^2(s-1)$. We get
$$M^1(s) = \frac{-(s-1)^2}{\phi(s)}, \qquad M^2(s) = \frac{2(s-1)(s+2)}{\phi(s)}, \qquad M^3(s) = \frac{(s-1)(s+2)}{\phi(s)} \tag{4.67}$$
The greatest common divisor of these numerators is the zero polynomial $z(s) = s-1$. Thus, the system has a single RHP-zero located at $s = 1$.

We also see from the last example that a minimal realization of a MIMO system can have poles and zeros at the same value of $s$, provided their directions are different. An example was given earlier in Chapter 3.

4.5.3 Zero directions

In the following, let $s$ be a fixed complex scalar and consider $G(s)$ as a complex matrix. Let $G(s)$ have a zero at $s = z$. Then $G(s)$ loses rank at $s = z$, and there will exist non-zero vectors $u_z$ and $y_z$ such that
$$G(z)\, u_z = 0 \cdot y_z \tag{4.68}$$
Here $u_z$ is defined as the input zero direction, and $y_z$ is defined as the output zero direction. $y_z$ satisfies
$$y_z^H\, G(z) = 0 \cdot u_z^H \tag{4.69}$$
We usually normalize the direction vectors to have unit length, $u_z^H u_z = 1$ and $y_z^H y_z = 1$. From a practical point of view, the output zero direction, $y_z$, is usually of more interest than $u_z$, because $y_z$ gives information about which output (or combination of outputs) may be difficult to control.

In principle, we may obtain $u_z$ and $y_z$ from an SVD of $G(z) = U \Sigma V^H$: $u_z$ is the last column in $V$ (corresponding to the zero singular value of $G(z)$), and $y_z$ is the last column of $U$. A better approach numerically is to obtain $u_z$ from a state-space description using the generalized eigenvalue problem in (4.63). Similarly, $y_z$ may be obtained from the transposed state-space description, using $M^T$ in (4.63).

Example 4.11 Zero and pole directions. Consider the $2 \times 2$ plant in (4.65), which has a RHP-zero at $z = 4$ and a LHP-pole at $p = -2$. We will use an SVD of $G(z)$ and $G(p)$ to determine the zero and pole directions (but we stress that this is not a reliable method numerically). To find the zero direction, consider
$$G(4) = \frac{1}{6}\begin{bmatrix} 3 & 4 \\ 4.5 & 6 \end{bmatrix} = \begin{bmatrix} 0.55 & -0.83 \\ 0.83 & 0.55 \end{bmatrix} \begin{bmatrix} 1.50 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0.60 & 0.80 \\ 0.80 & -0.60 \end{bmatrix}^H$$
The zero input and output directions are associated with the zero singular value of $G(z)$, and we get
$$u_z = \begin{bmatrix} 0.80 \\ -0.60 \end{bmatrix}, \qquad y_z = \begin{bmatrix} -0.83 \\ 0.55 \end{bmatrix}$$
We see from $y_z$ that the zero has a slightly larger component in the first output.
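The directions in Example 4.11 can be reproduced numerically (a sketch; recall that this SVD approach is not numerically reliable in general, and that singular vectors are only defined up to sign):

```python
# Zero and pole directions for G(s) in (4.65) via the SVD.
import numpy as np

def G(s):
    return np.array([[s - 1.0, 4.0], [4.5, 2.0*(s - 1.0)]]) / (s + 2.0)

# Zero directions at z = 4: last singular vectors (zero singular value)
U, S, Vh = np.linalg.svd(G(4.0))
u_z, y_z = Vh[-1], U[:, -1]
print(S.round(3))                # second singular value is 0
print(np.abs(u_z), np.abs(y_z))  # |u_z| = [0.8, 0.6], |y_z| ~ [0.83, 0.55]

# Pole directions at p = -2: first singular vectors of G(p + eps)
U2, S2, Vh2 = np.linalg.svd(G(-2.0 + 1e-6))
u_p, y_p = Vh2[0], U2[:, 0]
print(np.abs(u_p), np.abs(y_p))  # |u_p| = [0.6, 0.8], |y_p| ~ [0.55, 0.83]
```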

Next, to determine the pole directions, consider
$$G(-2+\epsilon) = \frac{1}{\epsilon}\begin{bmatrix} -3+\epsilon & 4 \\ 4.5 & 2(-3+\epsilon) \end{bmatrix} \tag{4.70}$$
The SVD as $\epsilon \to 0$ yields the pole input and output directions, which are associated with the largest singular value, $\bar{\sigma} = 9.01/\epsilon$, and we get
$$u_p = \begin{bmatrix} 0.60 \\ -0.80 \end{bmatrix}, \qquad y_p = \begin{bmatrix} -0.55 \\ 0.83 \end{bmatrix}$$
We note from $y_p$ that the pole has a slightly larger component in the second output.

4.6 Some remarks on poles and zeros

1. Poles and zeros are defined in terms of the McMillan form in Zhou et al. (1996).
2. Rosenbrock (1966; 1970) first defined multivariable zeros using something similar to the Smith-McMillan form. The zeros resulting from a minimal realization are sometimes called the transmission zeros. If one does not have a minimal realization, then numerical computations (e.g. using MATLAB) may yield additional invariant zeros. These invariant zeros plus the transmission zeros are sometimes called the system zeros. The invariant zeros can be further subdivided into input and output decoupling zeros. These cancel poles associated with uncontrollable or unobservable states, and hence have limited practical significance. We recommend that a minimal realization is found before computing the zeros.
3. The presence of zeros implies blocking of certain input signals (MacFarlane and Karcanias, 1976). If $z$ is a zero of $G(s)$, then there exists an input signal of the form $u_z e^{zt} 1_+(t)$, where $u_z$ is a (complex) vector and $1_+(t)$ is a unit step, and a set of initial conditions (states) $x_z$, such that $y(t) = 0$ for $t > 0$.
4. It is important to note that although the locations of the poles and zeros are independent of input and output scalings, their directions are not. Thus, the inputs and outputs need to be scaled properly before making any interpretations based on pole and zero directions.
5. For square systems, we essentially have that the poles and zeros of $G(s)$ are the poles and zeros of $\det G(s)$. However, this crude definition may fail in a few cases, for instance when there is a zero and a pole in different parts of the system which happen to cancel when forming $\det G(s)$. For example, the system
$$G(s) = \begin{bmatrix} (s+2)/(s+1) & 0 \\ 0 & (s+1)/(s+2) \end{bmatrix} \tag{4.71}$$
has $\det G(s) = 1$, although the system obviously has poles at $-1$ and $-2$ and (multivariable) zeros at $-1$ and $-2$.

6. $G(s)$ in (4.71) provides a good example for illustrating the importance of directions when discussing poles and zeros of multivariable systems. We note that although the system has poles and zeros at the same locations (at $-1$ and $-2$), their directions are different, and so they do not cancel or otherwise interact with each other. In (4.71), the pole at $-1$ has directions $u_p = y_p = [\,1\;\;0\,]^T$, whereas the zero at $-1$ has directions $u_z = y_z = [\,0\;\;1\,]^T$.
7. Zeros usually appear when there are fewer inputs or outputs than states. Consider a square $m \times m$ plant $G(s) = C(sI-A)^{-1}B + D$ with $n$ states. We then have for the number of (finite) zeros of $G(s)$ (Maciejowski, 1989, p. 55):
$$\begin{aligned} D \neq 0{:}&\quad \text{at most } n - m + \operatorname{rank}(D) \text{ zeros} \\ D = 0{:}&\quad \text{at most } n - 2m + \operatorname{rank}(CB) \text{ zeros} \\ D = 0,\ \operatorname{rank}(CB) = m{:}&\quad \text{exactly } n - m \text{ zeros} \end{aligned} \tag{4.72}$$
8. For square systems with a non-singular $D$-matrix, the number of poles is the same as the number of zeros, the zeros of $G(s)$ are equal to the poles of $G^{-1}(s)$, and vice versa. Furthermore, if the inverse of $G(p_i)$ exists, then it follows from the SVD that
$$G^{-1}(p_i)\, y_p = 0 \cdot u_p \tag{4.73}$$
9. There are no zeros if the outputs contain direct information about all the states; that is, if from $y$ we can directly obtain $x$ (e.g. $C = I$ and $D = 0$); see Example 4.13. This probably explains why zeros were given very little attention in the optimal control theory of the 1960's, which was based on state feedback.
10. Moving poles. How are the poles affected by (a) feedback control ($G(I+KG)^{-1}$), (b) series compensation ($GK$, e.g. feedforward control), and (c) parallel compensation ($G + K$)? The answer is that (a) feedback control moves the poles (e.g. $G = \frac{1}{s+a}$ with $K = k$ moves the pole from $s = -a$ to $s = -(a+k)$); (b) series compensation cannot move the poles, but we may cancel poles in $G$ by placing zeros in $K$ (e.g. $K = \frac{s+a}{s+b}$); and (c) parallel compensation cannot move the poles, but we may cancel their effect by subtracting identical poles in $K$.
11. Moving zeros. Consider next the effect of feedback, series and parallel compensation on the zeros. (a) With feedback, the zeros of $G(I+KG)^{-1}$ are the zeros of $G$ plus the poles of $K$. This means that the zeros in $G$, including their output directions $y_z$, are unaffected by feedback. However, even though $y_z$ is fixed, it is still possible with feedback control to move the deteriorating effect of a RHP-zero to a given output channel, provided $y_z$ has a non-zero element for this output; this was illustrated by an example in Chapter 3 and is discussed in more detail in Chapter 6. (b) Series compensation can counter the effect of zeros in $G$ by placing poles in $K$ to cancel them, but cancellations are not possible for RHP-zeros due to internal stability (see Section 4.7). (c) The only way to move zeros is by parallel compensation, $y = (G + K)u$, which, if $y$ is a physical output, can only be accomplished by adding an extra input (actuator).
12. Consider next the effect of feedback on the pole locations. If we apply constant gain negative feedback $u = -K_0 y$, the poles are determined by the corresponding closed-loop characteristic polynomial $\phi_{cl}(s) = \det(sI - A + BK_0C)$, whereas the open-loop poles are determined by the characteristic polynomial $\phi_{ol}(s) = \det(sI - A)$. Thus, unstable plants may be stabilized by use of feedback control. See also Example 4.12.
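The interplay of the last three items is easy to verify numerically: for a SISO loop with constant gain $k$, the closed-loop poles are the roots of $\phi(s) + k\,z(s)$, and as $k$ grows they migrate from the open-loop poles towards the open-loop zeros. The polynomials below are hypothetical choices for illustration; note how the RHP-zero at $+1$ attracts a closed-loop pole, giving high-gain instability.

```python
# Closed-loop poles of phi(s) + k*z(s): root-locus endpoints.
import numpy as np

phi = np.poly([-2.0, -3.0, -4.0])   # open-loop poles (hypothetical)
z = np.poly([1.0, -1.0])            # zeros at +1 (RHP!) and -1

def closed_loop_poles(k):
    zp = np.pad(z, (len(phi) - len(z), 0))   # align polynomial degrees
    return np.roots(phi + k * zp)

print(np.sort(closed_loop_poles(1e-6).real))  # close to -4, -3, -2
print(np.sort(closed_loop_poles(1e6).real))   # two roots near -1 and +1
```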

and their effect cannot be moved freely to any output. i. A zero is pinned to a subset of the outputs if ÝÞ has one or more elements equal to zero. ´×Á µ ½ · has no zeros if ¼ Example 4. pinned zeros have a scalar origin. which is less than the normal rank of ½ ´×µ.e.Ä Å ÆÌË Ç ÄÁÆ Ê Ë ËÌ Å ÌÀ ÇÊ ½¿ 12. one can say that a system which is functionally uncontrollable has in a certain output direction “a zero for all values of ×”. as we increase the feedback gain. µ ½ µ ¼ Ð ´×µ Ð ´×µ ´×µ (4. The existence of zeros for non-square systems is common in practice in spite of what is sometimes claimed in the literature. For example. the closed-loop poles move from open-loop poles to the open-loop zeros. a zero is pinned to certain inputs if ÙÞ has one or more elements equal to zero. see page 218. which is ¾. The first Ò columns of È are independent because The last Ñ columns are independent of ×. Pinned zeros are quite common in practice. where Ò is the number of states. so the zero locations are unchanged by feedback. Similarly. An example is ´×µ in (4. The pole locations are changed by feedback.71). On the ½½ ´×   Þµ ½¾ ½¿ does not have a zero at × Þ since ¾ ´Þ µ other hand. The zero polynomial is Þ Ð ´×µ Þ ´×µ.74) 1. For example. The concept of functional controllability. so the rank of ½ ´Þ µ is 1. These results are well known from a classical root locus analysis. In most cases. Solution: Consider the polynomial system has rank Ò.13 We want to prove that ´×µ and rank ´ µ Ò. This follows because the second row of ½ ´Þµ is equal to zero. The closedsystem with plant ´×µ Þ ´×µ ´×µ and a constant gain controller.76) Þ ´×µ That is. Pinned zeros. 13.12 Effect of feedback on poles and zeros. 2. Zeros of non-square systems. Loosely speaking. matrix È ´×µ in (4. is related to zeros. where the zero at ¾ is pinned to input Ù½ and to output ݽ . As an example consider a plant with three inputs and two outputs ½ ´×µ   14. 
they appear if we have a zero pinned to the side of the plant with the fewest number of channels. à ´×µ loop response from reference Ö to output Ý is ½½ ½¾ ½¿ ¾½ ´×   Þµ ¾¾ ´×   Þ µ ¾¿ ´×   Þ µ which has a zero at × Þ which is pinned to output ݾ . ¾ ´×µ ¾½ ´×   Þµ ¾¾ ¾¿ has rank 2 which is equal to the normal rank of ¾ ´×µ (assuming that the last two columns of ¾ ´×µ have rank 2). In particular.62). Furthermore.75) (4. ÝÞ ¼ ½ Ì . Example 4. the effect of a measurement delay for output ݽ cannot be moved to output ݾ . The control implications of RHP-zeros and RHP-poles are discussed for SISO systems on pages 173-187 and for MIMO systems on pages 220-223. the first Ò and last Ñ columns are   . Ì ´×µ Note the following: Ä´×µ ½ · Ä´×µ ½· ´×µ ´×µ Þ ´×µ ´×µ · Þ ´×µ Þ Ð ´×µ Ð ´×µ (4. Consider a SISO negative feedback . RHP-zeros therefore imply high gain instability.
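The pinned-zero mechanism can be checked numerically: at $s = z$ the second row of $G_1$ vanishes, and the left singular vector of $G_1(z)$ associated with the zero singular value is the output zero direction $y_z$. The sketch below uses made-up numeric entries $g_{ij}$; it is an illustration, not an example from the text.

```python
import numpy as np

# Hypothetical 2x3 plant evaluated at a candidate zero s = z: the entire
# second row carries a factor (s - z), so G(z) has a zero row.
z = 2.0
def G(s):
    return np.array([[1.0, 2.0, 3.0],
                     [4.0 * (s - z), 5.0 * (s - z), 6.0 * (s - z)]])

Gz = G(z)
U, svals, Vh = np.linalg.svd(Gz)

# The rank of G(z) is 1, below the normal rank 2, so z is a zero.
assert np.sum(svals > 1e-9) == 1

# Output zero direction y_z: left singular vector for the zero singular value.
y_z = U[:, -1]
print(np.round(np.abs(y_z), 6))   # first element is 0: the zero is pinned to y2
```

Up to sign, $y_z = [\,0\ \ 1\,]^T$, confirming that the zero is pinned to the second output.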

Example 4.13 We want to prove that $G(s) = C(sI-A)^{-1}B + D$ has no zeros if $D = 0$ and $\mathrm{rank}(B) = n$, where $n$ is the number of states.

Solution: Consider the polynomial system matrix $P(s)$ in (4.62). The first $n$ columns of $P$ are independent because $\begin{bmatrix} sI-A \\ C \end{bmatrix}$ has rank $n$. The last $m$ columns are independent of $s$. Furthermore, the first $n$ and last $m$ columns are independent of each other, since $D = 0$ and $B$ has full column rank and thus cannot have any columns equal to zero. (We need $D = 0$ because if $D$ is non-zero then the first $n$ columns of $P$ may depend on the last $m$ columns for some value of $s$.) In conclusion, $P(s)$ always has rank $n + m$ and there are no zeros.

Exercise 4.3 (a) Consider a SISO system $G(s) = C(sI-A)^{-1}B + D$ with just one state, i.e. $A$ is a scalar. Find the zeros. Does $G(s)$ have any zeros for $D = 0$? (b) Do $GK$ and $KG$ have the same poles and zeros for a SISO system? For a MIMO system?

Exercise 4.4 Determine the poles and zeros of the $2 \times 2$ plant $G(s)$ whose determinant is given by

$\det G(s) = \frac{50(s^4 - s^3 - 15s^2 - 23s - 10)}{s(s+1)^2(s+10)(s-5)^2} = \frac{50(s+1)^2(s+2)(s-5)}{s(s+1)^2(s+10)(s-5)^2}$

How many poles does $G(s)$ have?

Exercise 4.5 Given $y(s) = G(s)u(s)$, determine a state-space realization of $G(s)$ and then find the zeros of $G(s)$ using the generalized eigenvalue problem.

Exercise 4.6 Find the zeros for a $2 \times 2$ plant with $B = I$ and $D = 0$.

Exercise 4.7 For what values of $c_1$ does the following plant have RHP-zeros?

$A = \begin{bmatrix} 10 & 0 \\ 0 & -1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 10 \end{bmatrix}, \quad C = \begin{bmatrix} 10 & c_1 \end{bmatrix}, \quad D = 0$    (4.77)

Exercise 4.8 Consider the plant in (4.77), but assume that both states are measured and used for feedback control, i.e. $y_m = x$ (but the controlled output is still $y = Cx + Du$). Can a RHP-zero in $G(s)$ give problems with stability in the feedback system? Can we achieve "perfect" control of $y$ in this case? (Answers: No and no.)
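The generalized eigenvalue computation mentioned in Exercise 4.5 can be sketched as follows: the finite zeros of $G(s)$ are the finite generalized eigenvalues of the pencil built from the polynomial system matrix. The first-order test plant below is my own assumed example, not one from the text.

```python
import numpy as np
from scipy.linalg import eig

def zeros(A, B, C, D):
    """Finite zeros of G(s) = C(sI-A)^{-1}B + D as generalized eigenvalues
    of the pencil (M, N): det(M - sN) = 0 with M the system matrix."""
    n, m = A.shape[0], B.shape[1]
    M = np.block([[A, B], [C, D]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])
    vals = eig(M, N, right=False)
    return vals[np.isfinite(vals)]   # drop the infinite eigenvalues

# G(s) = (s - 1)/(s + 2) = 1 - 3/(s + 2): one RHP-zero at s = 1.
A = np.array([[-2.0]]); B = np.array([[1.0]])
C = np.array([[-3.0]]); D = np.array([[1.0]])
print(zeros(A, B, C, D))   # ~ [1.+0.j]
```

The singular matrix $N$ produces infinite generalized eigenvalues, which are filtered out; only the finite ones are system zeros.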

4.7 Internal stability of feedback systems

To test for closed-loop stability of a feedback system, it is usually enough to check just one closed-loop transfer function, e.g. $S = (I+GK)^{-1}$. However, this assumes that there are no internal RHP pole-zero cancellations between the controller and the plant. This is a subtle but important point, best illustrated by an example.

Example 4.14 Consider the feedback system shown in Figure 4.2 where

$G(s) = \frac{s-1}{s+1}, \qquad K(s) = \frac{k(s+1)}{s(s-1)}$

In forming the loop transfer function $L = GK$ we cancel the term $(s-1)$, a RHP pole-zero cancellation, to obtain

$L = GK = \frac{k}{s}, \qquad S = (I+L)^{-1} = \frac{s}{s+k}$    (4.78)

$S(s)$ is stable, that is, the transfer function from $d_y$ to $y$ is stable. However, the transfer function from $d_y$ to $u$ is unstable:

$u = -K(I+GK)^{-1}d_y = -\frac{k(s+1)}{(s-1)(s+k)}\,d_y$    (4.79)

Consequently, although the system appears to be stable when considering the output signal $y$, it is unstable when considering the "internal" signal $u$, so the system is (internally) unstable.

Figure 4.2: Internally unstable system

Remark. In the above example, $L$ and $S$ will in practice also be unstable, because in practice it is not possible to cancel exactly a plant zero or pole owing to modelling errors. However, it is important to stress that even in the ideal case with a perfect RHP pole-zero cancellation, as in the above example, we would still get an internally unstable system. In this ideal case the state-space descriptions of $L$ and $S$ contain an unstable hidden mode corresponding to an unstabilizable or undetectable state.

From the above example, it is clear that to be rigorous we must consider internal stability of the feedback system, see Definition 4.4. To this effect consider the system in Figure 4.3 where
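The hidden instability in Example 4.14 can be exposed numerically (here with the assumed gain $k = 1$): form the closed-loop characteristic polynomial $d_G d_K + n_G n_K$ *without* cancelling the common $(s-1)$ factor.

```python
import numpy as np

# Plant G = (s-1)/(s+1); controller K = (s+1)/(s(s-1)), i.e. k = 1. The
# factor (s-1) cancels in L = GK = 1/s, so S = s/(s+1) looks stable...
gn, gd = np.array([1., -1.]), np.array([1., 1.])
kn, kd = np.array([1., 1.]),  np.polymul([1., 0.], [1., -1.])

# ...but the closed-loop characteristic polynomial, formed WITHOUT
# cancelling common factors, keeps the RHP root at s = +1:
char = np.polyadd(np.polymul(gd, kd), np.polymul(gn, kn))
poles = np.roots(char)
print(np.round(sorted(poles.real), 6))   # contains +1: internally unstable
```

The cancelled mode at $s = 1$ reappears as a root of the uncancelled characteristic polynomial, matching the unstable transfer function from $d_y$ to $u$ in (4.79).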

we inject and measure signals at both locations between the two components, $G$ and $K$.

Figure 4.3: Block diagram used to check internal stability of a feedback system

We get

$u = (I + KG)^{-1}d_u - K(I + GK)^{-1}d_y$    (4.80)
$y = G(I + KG)^{-1}d_u + (I + GK)^{-1}d_y$    (4.81)

The theorem below follows immediately:

Theorem 4.4 Assume that the components $G$ and $K$ contain no unstable hidden modes. Then the feedback system in Figure 4.3 is internally stable if and only if all four closed-loop transfer matrices in (4.80) and (4.81) are stable.

If we disallow RHP pole-zero cancellations between system components, such as $G$ and $K$, then stability of one closed-loop transfer function implies stability of the others. This is stated in the following theorem.

Theorem 4.5 Assume there are no RHP pole-zero cancellations between $G(s)$ and $K(s)$, that is, all RHP-poles in $G(s)$ and $K(s)$ are contained in the minimal realizations of $GK$ and $KG$. Then the feedback system in Figure 4.3 is internally stable if and only if one of the four closed-loop transfer function matrices in (4.80) and (4.81) is stable.

Proof: A proof is given by Zhou et al. (1996, p.125). $\blacksquare$

Note how we define pole-zero cancellations in the above theorem. In this way, RHP pole-zero cancellations resulting from $G$ or $K$ not having full normal rank are also disallowed. For example, for $G(s) = 1/(s-a)$ with $a > 0$ and $K = 0$ we get $GK = 0$, so the RHP-pole at $s = a$ has disappeared and there is effectively a RHP pole-zero cancellation.

The following can be proved using the above theorem (recall Example 4.14): if there are RHP pole-zero cancellations between $G(s)$ and $K(s)$, i.e. if $GK$ and $KG$ do not both contain all the RHP-poles in $G$ and $K$, then the system in Figure 4.3 is internally unstable.

Exercise 4.9 Use (A.7) to show that the signal relationships (4.80) and (4.81) may also be written as

$\begin{bmatrix} u \\ y \end{bmatrix} = M(s)\begin{bmatrix} d_u \\ d_y \end{bmatrix}, \qquad M(s) = \begin{bmatrix} (I+KG)^{-1} & -K(I+GK)^{-1} \\ G(I+KG)^{-1} & (I+GK)^{-1} \end{bmatrix}$    (4.82)

From this we get that the system in Figure 4.3 is internally stable if and only if $M(s)$ is stable.

4.7.1 Implications of the internal stability requirement

The requirement of internal stability in a feedback system leads to a number of interesting results, some of which are investigated below. Note in particular Exercise 4.12, where we discuss alternative ways of implementing a two degrees-of-freedom controller.

We first prove the following statements, which apply when the overall feedback system is internally stable (Youla et al., 1974):

1. If $G(s)$ has a RHP-zero at $z$, then $L = GK$, $T = GK(I+GK)^{-1}$, $SG = (I+GK)^{-1}G$, $L_I = KG$ and $T_I = KG(I+KG)^{-1}$ will each have a RHP-zero at $z$.

2. If $G(s)$ has a RHP-pole at $p$, then $L = GK$ and $L_I = KG$ also have a RHP-pole at $p$, while $S = (I+GK)^{-1}$, $KS = K(I+GK)^{-1}$ and $S_I = (I+KG)^{-1}$ have a RHP-zero at $p$.

Proof of 1: To achieve internal stability, RHP pole-zero cancellations between system components, such as $G$ and $K$, are not allowed. Thus $L = GK$ must have a RHP-zero when $G(s)$ has a RHP-zero at $z$. Now $S$ is stable and thus has no RHP-pole which can cancel the RHP-zero in $L$, and so $T = LS$ must have a RHP-zero at $z$. Similarly for $SG$, $L_I$ and $T_I$. $\blacksquare$

Proof of 2: Clearly, $L$ has a RHP-pole at $p$. Since $T$ is stable, it follows from $T = LS$ that $S$ must have a RHP-zero which exactly cancels the RHP-pole in $L$. The same argument applies to $KS$ and $S_I$. $\blacksquare$

We notice from this that a RHP pole-zero cancellation between two transfer functions, such as between $L$ and $S = (I+L)^{-1}$, does not necessarily imply internal instability. It is only between separate physical components (e.g. controller, plant) that RHP pole-zero cancellations are not allowed.

Exercise 4.10 Interpolation constraints. Prove the following interpolation constraints, which apply for SISO feedback systems when the plant $G(s)$ has a RHP-zero $z$ or a RHP-pole $p$:

$G(z) = 0 \;\Rightarrow\; L(z) = 0 \;\Leftrightarrow\; T(z) = 0,\ S(z) = 1$    (4.83)
$G^{-1}(p) = 0 \;\Rightarrow\; L(p) = \infty \;\Leftrightarrow\; T(p) = 1,\ S(p) = 0$    (4.84)

Exercise 4.11 Given the complementary sensitivity functions

$T_1(s) = \frac{-2s+1}{s^2 + 0.8s + 1}, \qquad T_2(s) = \frac{2s+1}{s^2 + 0.8s + 1}$

what can you say about possible RHP-poles or RHP-zeros in the corresponding loop transfer functions, $L_1(s)$ and $L_2(s)$?
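The constraints (4.83) are easy to verify numerically. The plant and controller below are an assumed illustration (a stable closed loop with a RHP-zero at $z = 1$), not an example from the text.

```python
import numpy as np

# Assumed SISO example: G = (s-1)/((s+2)(s+3)) has a RHP-zero at z = 1,
# and K = 1 stabilizes it (char. poly (s+2)(s+3)+(s-1) = (s+1)(s+5)).
z = 1.0
G = lambda s: (s - z) / ((s + 2) * (s + 3))
K = lambda s: 1.0

L = lambda s: G(s) * K(s)
T = lambda s: L(s) / (1 + L(s))
S = lambda s: 1 / (1 + L(s))

# Interpolation constraints (4.83): T(z) = 0 and S(z) = 1 at the RHP-zero.
print(abs(T(z)), abs(S(z)))   # 0.0 1.0
```

No matter which stabilizing $K$ is chosen, $T$ inherits the zero at $z$ and $S(z) = 1$, which is why a RHP-zero is incompatible with perfect control.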

Figure 4.4: Different forms of two degrees-of-freedom controller:
(a) General form
(b) Suitable when $K_y(s)$ has no RHP-zeros
(c) Suitable when $K_y(s)$ is stable (no RHP-poles)
(d) Suitable when $K_y(s) = K_1(s)K_2(s)$ where $K_1(s)$ contains no RHP-zeros and $K_2(s)$ no RHP-poles

Remark. A discussion of the significance of the interpolation constraints (4.83) and (4.84) is relevant here. Recall that for "perfect control" we want $S \approx 0$ and $T \approx 1$. We note from (4.83) that a RHP-zero $z$ puts constraints on $S$ and $T$ which are incompatible with perfect control. On the other hand, the constraints (4.84) imposed by a RHP-pole are consistent with what we would like for perfect control. Thus the presence of RHP-poles mainly imposes problems when tight (high gain) control is not possible. We discuss this in more detail in Chapters 5 and 6.

The following exercise demonstrates another application of the internal stability requirement.

Exercise 4.12 Internal stability of two degrees-of-freedom control configurations. A two degrees-of-freedom controller allows one to improve performance by treating disturbance rejection and command tracking separately (at least to some degree). The general form shown in Figure 4.4(a) is usually preferred both for implementation and design. However, in some cases one may want first to design the pure feedback part of the controller, here denoted $K_y(s)$, for disturbance rejection, and then to add a simple precompensator, $K_r(s)$, for command tracking. This approach is in general not optimal, and may also yield problems when it comes to implementation, in particular if the feedback controller $K_y(s)$ contains RHP poles or zeros, which can happen. This implementation issue is dealt with in this exercise by considering the three possible schemes in Figures 4.4(b)-4.4(d). In all these schemes $K_r$ must clearly be stable.
1) Explain why the configuration in Figure 4.4(b) should not be used if $K_y$ contains RHP-zeros (Hint: avoid a RHP-zero between $r$ and $y$).

2) Explain why the configuration in Figure 4.4(c) should not be used if $K_y$ contains RHP-poles. This implies that this configuration should not be used if we want integral action in $K_y$.

3) Show that for a feedback controller $K_y$ the configuration in Figure 4.4(d) may be used, provided the RHP-poles (including integrators) of $K_y$ are contained in $K_1$ and the RHP-zeros in $K_2$. Discuss why one may often set $K_r = I$ in this case (to give a fourth possibility).

The requirement of internal stability also dictates that we must exercise care when we use a separate unstable disturbance model $G_d(s)$. To avoid problems one should, for state-space computations, use a combined model for inputs and disturbances, i.e. write the model $y = Gu + G_d d$ in the form

$y = \begin{bmatrix} G & G_d \end{bmatrix}\begin{bmatrix} u \\ d \end{bmatrix}$

where $G$ and $G_d$ share the same states, see (4.14) and (4.17).

4.8 Stabilizing controllers

In this section, we introduce a parameterization, known as the $Q$-parameterization or Youla-parameterization (Youla et al., 1976), of all stabilizing controllers for a plant. By all stabilizing controllers we mean all controllers that yield internal stability of the closed-loop system. We first consider stable plants, for which the parameterization is easily derived, and then unstable plants, where we make use of the coprime factorization.

4.8.1 Stable plants

The following lemma forms the basis.

Lemma 4.6 For a stable plant $G(s)$ the negative feedback system in Figure 4.3 is internally stable if and only if $Q = K(I + GK)^{-1}$ is stable.

Proof: The four transfer functions in (4.80) and (4.81) are easily shown to be

$K(I + GK)^{-1} = Q$    (4.85)
$(I + KG)^{-1} = I - QG$    (4.86)
$(I + GK)^{-1} = I - GQ$    (4.87)
$G(I + KG)^{-1} = G(I - QG)$    (4.88)

which are clearly all stable if $G$ and $Q$ are stable. Thus, with $G$ stable the system is internally stable if and only if $Q$ is stable. $\blacksquare$
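Lemma 4.6 can be checked numerically for a SISO example: pick any stable $Q$, form $K = Q(1 - GQ)^{-1}$, and confirm that the closed-loop characteristic polynomial, formed without cancellations, has only LHP roots. The plant and $Q$ below are assumed for illustration.

```python
import numpy as np

# Assumed stable plant G = 1/(s+1) and stable parameter Q = 2/(s+3).
gn, gd = np.array([1.]), np.array([1., 1.])
qn, qd = np.array([2.]), np.array([1., 3.])

# K = Q (1 - G Q)^{-1} as a ratio of polynomials (the common qd cancels):
# K = qn*gd / (gd*qd - gn*qn).
kn = np.polymul(qn, gd)
kd = np.polysub(np.polymul(gd, qd), np.polymul(gn, qn))

# Closed-loop characteristic polynomial gd*kd + gn*kn, no cancellations.
char = np.polyadd(np.polymul(gd, kd), np.polymul(gn, kn))
print(np.round(np.roots(char).real, 6))   # all negative: internally stable
```

The closed-loop poles turn out to be the poles of $G$ and $Q$ only, consistent with the lemma: stability of $Q$ (together with stability of $G$) is all that is required.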

) . 1989) of stabilizing controllers. we find that a parameterization of all stabilizing negative feedback controllers for the stable plant ´×µ is given by à ´Á É µ ½ É É´Á ɵ ½ (4. Remark 1 If only proper controllers are allowed then ´Á É µ ½ is semi-proper. The idea behind the IMC-structure is that the “controller” É can be designed in an open-loop fashion since the feedback signal only contains information about the difference between the actual output and the output predicted from the model. It may be derived directly from the IMC structure given in Figure 4. The parameterization in (4.5. Exercise 4.14 Show that testing internal stability of the IMC-structure is equivalent to testing for stability of the four closed-loop transfer functions in (4. there is no danger of getting a RHP-pole in à that cancels a RHP-zero in .½ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ As proposed by Zames (1981).     Remark 2 We have shown that by varying É freely (but stably) we will always have internal stability. This means that although É may generate unstable controllers à . Exercise 4.15 Given a stable controller à .5: The internal model control (IMC) structure Exercise 4.5 is internally unstable if either É or is unstable.89) where the “parameter” É is any stable transfer function matrix.85) with respect to the controller à .   É must be proper since the term Ã Ö + ¹ - Ý ¹ É Õ ¹ ¹ plant ¹+ + Ý Õ ¹ ¹+ model Figure 4.88).89) is identical to the internal model control (IMC) parameterization (Morari and Zafiriou. What set of plants can be stabilized by this controller? (Hint: interchange the roles of plant and controller.13 Show that the IMC-structure in Figure 4. and thus avoid internal RHP pole-zero cancellations between à and .85)-(4. by solving (4.

4.8.2 Unstable plants

For an unstable plant $G(s)$, consider its left coprime factorization

$G(s) = M_l^{-1}N_l$    (4.90)

A parameterization of all stabilizing negative feedback controllers for the plant $G(s)$ is then (Vidyasagar, 1985)

$K(s) = (V_r - Q N_l)^{-1}(U_r + Q M_l)$    (4.91)

where $V_r$ and $U_r$ satisfy the Bezout identity (4.19) for the right coprime factorization, and $Q(s)$ is any stable transfer function satisfying the technical condition $\det(V_r(\infty) - Q(\infty)N_l(\infty)) \neq 0$.

Remark 1 With $Q = 0$ we have $K_0 = V_r^{-1}U_r$, so $V_r$ and $U_r$ can alternatively be obtained from a left coprime factorization of some initial stabilizing controller $K_0$.

Remark 2 For a stable plant, we may write $G(s) = N_l(s)$ corresponding to $M_l = I$, so we may from (4.19) select $U_r = 0$ and $V_r = I$. In this case $K_0 = 0$ is a stabilizing controller, and (4.91) yields $K = (I - QG)^{-1}Q$ as found before in (4.89).

Remark 3 We can also formulate the parameterization of all stabilizing controllers in state-space form, see page 312 in Zhou et al. (1996) for details.

Remark 4 The $Q$-parameterization may be very useful for controller synthesis. First, the search over all stabilizing $K$'s (e.g. such that $S = (I + GK)^{-1}$ is stable) is replaced by a search over stable $Q$'s. Second, all closed-loop transfer functions ($S$, $T$, etc.) will be in the form $H_1 + H_2QH_3$, so they are affine(3) in $Q$. This further simplifies the optimization problem.

4.9 Stability analysis in the frequency domain

As noted above, the stability of a linear system is equivalent to the system having no poles in the closed right-half plane (RHP). This test may be used for any system, be it open-loop or closed-loop. Note that when we talk about eigenvalues in this section, we refer to the eigenvalues of a complex matrix, usually of $L(j\omega)$, and not those of the state matrix $A$.

(3) A function $f(x)$ is affine in $x$ if $f(x) = Ax + b$, and is linear in $x$ if $f(x) = Ax$.
In this section we will study the use of frequency-domain techniques to derive information about closed-loop stability from the open-loop transfer matrix $L(j\omega)$. This provides a direct generalization of Nyquist's stability test for SISO systems.

4.9.1 Open- and closed-loop characteristic polynomials

We first derive some preliminary results involving the determinant of the return difference operator $I + L$. Consider the feedback system shown in Figure 4.6, where $L(s)$ is the loop transfer function matrix.


Figure 4.6: Negative feedback system

Stability of the open-loop system is determined by the poles of $L(s)$. If $L(s)$ has a state-space realization $(A_{ol}, B_{ol}, C_{ol}, D_{ol})$, that is

$L(s) = C_{ol}(sI - A_{ol})^{-1}B_{ol} + D_{ol}$    (4.92)

then the poles of $L(s)$ are the roots of the open-loop characteristic polynomial

$\phi_{ol}(s) = \det(sI - A_{ol})$    (4.93)

Assume there are no RHP pole-zero cancellations between $G(s)$ and $K(s)$. Then from Theorem 4.5 internal stability of the closed-loop system is equivalent to the stability of $S(s) = (I + L(s))^{-1}$. The state matrix of $S(s)$ is given (assuming $L(s)$ is well-posed, i.e. $D_{ol} + I$ is invertible) by

$A_{cl} = A_{ol} - B_{ol}(I + D_{ol})^{-1}C_{ol}$    (4.94)

This equation may be derived by writing down the state-space equations for the transfer function from $r$ to $y$ in Figure 4.6,

$\dot{x} = A_{ol}x + B_{ol}(r - y)$    (4.95)
$y = C_{ol}x + D_{ol}(r - y)$    (4.96)

and using (4.96) to eliminate $y$ from (4.95). The closed-loop characteristic polynomial is thus given by

$\phi_{cl}(s) \triangleq \det(sI - A_{cl}) = \det(sI - A_{ol} + B_{ol}(I + D_{ol})^{-1}C_{ol})$    (4.97)

Relationship between characteristic polynomials

The above identities may be used to express the determinant of the return difference operator, $I + L$, in terms of $\phi_{cl}(s)$ and $\phi_{ol}(s)$. From (4.92) we get

$\det(I + L(s)) = \det(I + C_{ol}(sI - A_{ol})^{-1}B_{ol} + D_{ol})$    (4.98)

Schur's formula (A.14) then yields (with $A_{11} = sI - A_{ol}$, $A_{12} = -B_{ol}$, $A_{21} = C_{ol}$, $A_{22} = I + D_{ol}$)

$\det(I + L(s)) = c \cdot \frac{\phi_{cl}(s)}{\phi_{ol}(s)}$    (4.99)

where $c = \det(I + D_{ol})$ is a constant which is of no significance when evaluating the poles. Note that $\phi_{cl}(s)$ and $\phi_{ol}(s)$ are polynomials in $s$ which have zeros only, whereas $\det(I + L(s))$ is a transfer function with both poles and zeros.
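The agreement between (4.94) and (4.99) can be illustrated with a one-state example (an assumed loop $L(s) = 1/(s+1)$): the eigenvalue of $A_{cl}$ coincides with the root of the numerator of $\det(I + L(s))$.

```python
import numpy as np

# Assumed open loop L(s) = 1/(s+1): A_ol = -1, B_ol = 1, C_ol = 1, D_ol = 0.
Aol = np.array([[-1.0]]); Bol = np.array([[1.0]])
Col = np.array([[1.0]]);  Dol = np.array([[0.0]])

# Closed-loop state matrix from (4.94).
Acl = Aol - Bol @ np.linalg.inv(np.eye(1) + Dol) @ Col
print(np.linalg.eigvals(Acl))   # [-2.]: closed-loop pole of S = (I+L)^{-1}

# Cross-check with (4.99): det(I + L(s)) = (s+2)/(s+1), i.e.
# phi_cl(s) = s+2 and phi_ol(s) = s+1, so the closed-loop pole is -2.
```

Here $c = \det(I + D_{ol}) = 1$, and the numerator root $-2$ of $\det(I+L)$ is exactly the eigenvalue of $A_{cl}$.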

Example 4.15 We will rederive expression (4.99) for SISO systems. Let $L(s) = z(s)/\phi_{ol}(s)$. The sensitivity function is given by

$S(s) = \frac{1}{1 + L(s)} = \frac{\phi_{ol}(s)}{z(s) + \phi_{ol}(s)}$    (4.100)

and the denominator is

$d(s) = z(s) + \phi_{ol}(s) = \phi_{ol}(s)\left(1 + \frac{z(s)}{\phi_{ol}(s)}\right) = \phi_{ol}(s)(1 + L(s))$    (4.101)

which is the same as $\phi_{cl}(s)$ in (4.99) (except for the constant $c$, which is necessary to make the leading coefficient of $\phi_{cl}(s)$ equal to 1, as required by its definition).

Remark 1 One may be surprised to see from (4.100) that the zero polynomial of $S(s)$ is equal to the open-loop pole polynomial, $\phi_{ol}(s)$, but this is indeed correct. On the other hand, note from (4.74) that the zero polynomial of $T(s) = L(s)/(1 + L(s))$ is equal to $z(s)$, the open-loop zero polynomial.

Remark 2 From (4.99), for the case when there are no cancellations between $\phi_{ol}(s)$ and $\phi_{cl}(s)$, we have that the closed-loop poles are the solutions to

$\det(I + L(s)) = 0$    (4.102)

4.9.2 MIMO Nyquist stability criteria
We will consider the negative feedback system of Figure 4.6, and assume there are no internal RHP pole-zero cancellations in the loop transfer function matrix $L(s)$, i.e. $L(s)$ contains no unstable hidden modes. Expression (4.99) for $\det(I + L(s))$ then enables a straightforward generalization of Nyquist's stability condition to multivariable systems.

Theorem 4.7 Generalized (MIMO) Nyquist theorem. Let $P_{ol}$ denote the number of open-loop unstable poles in $L(s)$. The closed-loop system with loop transfer function $L(s)$ and negative feedback is stable if and only if the Nyquist plot of $\det(I + L(s))$ i) makes $P_{ol}$ anti-clockwise encirclements of the origin, and ii) does not pass through the origin.

The theorem is proved below, but let us first make some important remarks.

Remark 1 By "Nyquist plot of $\det(I + L(s))$" we mean "the image of $\det(I + L(s))$ as $s$ goes clockwise around the Nyquist $D$-contour". The Nyquist $D$-contour includes the entire $j\omega$-axis ($s = j\omega$) and an infinite semi-circle into the right-half plane, as illustrated in Figure 4.7. The $D$-contour must also avoid locations where $L(s)$ has $j\omega$-axis poles by making small indentations (semi-circles) around these points.

Remark 2 In the following we define, for practical reasons, unstable poles or RHP-poles as poles in the open RHP, excluding the $j\omega$-axis. In this case the Nyquist $D$-contour should make a small semicircular indentation into the RHP at locations where $L(s)$ has $j\omega$-axis poles, thereby avoiding the extra count of encirclements due to $j\omega$-axis poles.

Figure 4.7: Nyquist $D$-contour for a system with no open-loop $j\omega$-axis poles

Remark 3 Another practical way of avoiding the indentation is to shift all $j\omega$-axis poles into the LHP, for example, by replacing the integrator $1/s$ by $1/(s + \epsilon)$, where $\epsilon$ is a small positive number.

Remark 4 We see that for stability $\det(I + L(j\omega))$ should make no encirclements of the origin if $L(s)$ is open-loop stable, and should make $P_{ol}$ anti-clockwise encirclements if $L(s)$ is unstable. If this condition is not satisfied, then the number of closed-loop unstable poles of $(I + L(s))^{-1}$ is $P_{cl} = N + P_{ol}$, where $N$ is the number of clockwise encirclements of the origin by the Nyquist plot of $\det(I + L(j\omega))$.

Remark 5 For any real system, $L(s)$ is proper, and so to plot $\det(I + L(s))$ as $s$ traverses the $D$-contour we need only consider $s = j\omega$ along the imaginary axis. This follows since $\lim_{s \to \infty} L(s) = D_{ol}$ is finite, and therefore for $s = \infty$ the Nyquist plot of $\det(I + L(s))$ converges to $\det(I + D_{ol})$, which is on the real axis.

Remark 6 In many cases $L(s)$ contains integrators, so for $\omega = 0$ the plot of $\det(I + L(j\omega))$ may "start" from $\pm\infty$. A typical plot for positive frequencies is shown in Figure 4.8 for the system

$L = K\,\frac{3(-2s+1)}{(5s+1)(10s+1)}\cdot\frac{12.5s+1}{12.5s}, \qquad K = 1.14$    (4.103)

Note that the solid and dashed curves (positive and negative frequencies) need to be connected as $\omega$ approaches 0, so there is also a large (infinite) semi-circle (not shown) corresponding to the indentation of the $D$-contour into the RHP at $s = 0$ (the indentation is to avoid the integrator in $L(s)$). To find which way the large semi-circle goes, one can use the rule (based on conformal mapping arguments) that a right-angled turn in the $D$-contour will result in a right-angled turn in the Nyquist plot. It then follows for the example in (4.103) that there will be an infinite semi-circle into the RHP. There are therefore no encirclements of the origin. Since there are no open-loop unstable poles ($j\omega$-axis poles are excluded in the counting), $P_{ol} = 0$, and we conclude that the closed-loop system is stable.

Proof of Theorem 4.7: The proof makes use of the following result from complex variable theory (Churchill et al., 1974):

Figure 4.8: Typical Nyquist plot of $\det(I + L(j\omega))$

Lemma 4.8 Argument Principle. Consider a (transfer) function $f(s)$ and let $C$ denote a closed contour in the complex plane. Assume that:

1. $f(s)$ is analytic along $C$, that is, $f(s)$ has no poles on $C$.
2. $f(s)$ has $Z$ zeros inside $C$.
3. $f(s)$ has $P$ poles inside $C$.

Then the image $f(s)$, as the complex argument $s$ traverses the contour $C$ once in a clockwise direction, will make $Z - P$ clockwise encirclements of the origin.

Let $N(A, f(s), C)$ denote the number of clockwise encirclements of the point $A$ by the image $f(s)$ as $s$ traverses the contour $C$ clockwise. Then a restatement of Lemma 4.8 is

$N(0, f(s), C) = Z - P$
(4.104)

We now recall (4.99) and apply Lemma 4.8 to the function $f(s) = \det(I + L(s)) = c\,\frac{\phi_{cl}(s)}{\phi_{ol}(s)}$, selecting $C$ to be the Nyquist $D$-contour. We assume $c = \det(I + D_{ol}) \neq 0$, since otherwise the feedback system would be ill-posed. The contour $C$ goes along the $j\omega$-axis and around the entire RHP, but avoids open-loop poles of $L(s)$ on the $j\omega$-axis (where $\phi_{ol}(j\omega) = 0$) by making small semi-circles into the RHP. This is needed to make $f(s)$ analytic along $C$. We then have that $f(s)$ has $P = P_{ol}$ poles and $Z = P_{cl}$ zeros inside $C$. Here $P_{cl}$ denotes the number of unstable closed-loop poles (in the open RHP). (4.104) then gives

$N(0, \det(I + L(s)), C) = P_{cl} - P_{ol}$

(4.105)

Since the system is stable if and only if $P_{cl} = 0$, condition i) of Theorem 4.7 follows. However, we have not yet considered the possibility that $f(s) = \det(I + L(s))$, and hence $\phi_{cl}(s)$, has zeros on the $D$-contour itself, which will also correspond to a closed-loop unstable pole. To avoid this, $\det(I + L(j\omega))$ must not be zero for any value of $\omega$, and condition ii) in Theorem 4.7 follows. $\blacksquare$

Example 4.16 SISO stability conditions. Consider an open-loop stable SISO system. In this case, the Nyquist stability condition states that for closed-loop stability the Nyquist plot of $1 + L(s)$ should not encircle the origin. This is equivalent to the Nyquist plot of $L(j\omega)$ not encircling the point $-1$ in the complex plane.
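The encirclement count in the theorem can be computed numerically by tracking the winding of the phase of $\det(I + L(j\omega))$. The sketch below uses an assumed open-loop stable SISO example, $L(s) = 10/(s+1)^3$, which is known to violate the Nyquist condition ($|L| = 1.25 > 1$ where the phase crosses $-180°$).

```python
import numpy as np

# Assumed open-loop stable example, closed-loop unstable.
L = lambda s: 10.0 / (s + 1) ** 3

# Frequency grid from -inf to +inf along the jw-axis (log-spaced).
w = np.concatenate([-np.logspace(4, -4, 200000), np.logspace(-4, 4, 200000)])
f = 1.0 + L(1j * w)               # det(I + L) in the SISO case

# Clockwise encirclements of the origin = net winding of arg f; the
# infinite semicircle contributes nothing since f -> 1 there.
phase = np.unwrap(np.angle(f))
N = round((phase[0] - phase[-1]) / (2 * np.pi))
print(N)   # 2 clockwise encirclements => P_cl = N + P_ol = 2 unstable poles
```

The two clockwise encirclements match the two RHP roots of $(s+1)^3 + 10 = 0$, in agreement with Remark 4 ($P_{cl} = N + P_{ol}$ with $P_{ol} = 0$).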

 

4.9.3 Eigenvalue loci
The eigenvalue loci (sometimes called characteristic loci) are defined as the eigenvalues of the frequency response of the open-loop transfer function, $\lambda_i(L(j\omega))$. They partly provide a generalization of the Nyquist plot of $L(j\omega)$ from SISO to MIMO systems, and with them gain and phase margins can be defined as in the classical sense. However, these margins are not too useful as they only indicate stability with respect to a simultaneous parameter change in all of the loops. Therefore, although characteristic loci were well researched in the 70's and greatly influenced the British developments in multivariable control, e.g. see Postlethwaite and MacFarlane (1979), they will not be considered further in this book.

4.9.4 Small gain theorem
The Small Gain Theorem is a very general result which we will find useful in the book. We present first a generalized version of it in terms of the spectral radius, $\rho(L(j\omega))$, which at each frequency is defined as the maximum eigenvalue magnitude

$\rho(L(j\omega)) \triangleq \max_i |\lambda_i(L(j\omega))|$    (4.106)

Theorem 4.9 Spectral radius stability condition. Consider a system with a stable loop transfer function $L(s)$. Then the closed-loop system is stable if

$\rho(L(j\omega)) < 1 \quad \forall\omega$    (4.107)

Proof: The generalized Nyquist theorem (Theorem 4.7) says that if $L(s)$ is stable, then the closed-loop system is stable if and only if the Nyquist plot of $\det(I + L(s))$ does not encircle the origin. To prove condition (4.107) we will prove the "reverse", that is, if the system is unstable, and therefore $\det(I + L(s))$ does encircle the origin, then there is an eigenvalue $\lambda_i(L(j\omega))$ which is larger than 1 in magnitude at some frequency. If $\det(I + L(s))$ does encircle the origin, then there must exist a gain $\epsilon \in (0, 1]$ and a frequency $\omega'$ such that

$\det(I + \epsilon L(j\omega')) = 0$    (4.108)

This is easily seen by geometric arguments, since $\det(I + \epsilon L(j\omega')) = 1$ for $\epsilon = 0$. (4.108) is equivalent to (see eigenvalue properties in Appendix A.2.1)

$\prod_i \lambda_i(I + \epsilon L(j\omega')) = 0$    (4.109)
$\Leftrightarrow \quad 1 + \epsilon\lambda_i(L(j\omega')) = 0 \text{ for some } i$    (4.110)
$\Leftrightarrow \quad \lambda_i(L(j\omega')) = -1/\epsilon \text{ for some } i$    (4.111)
$\Rightarrow \quad |\lambda_i(L(j\omega'))| \geq 1 \text{ for some } i$    (4.112)
$\Leftrightarrow \quad \rho(L(j\omega')) \geq 1$    (4.113)

$\blacksquare$
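Condition (4.107) is straightforward to check on a frequency grid. The $2 \times 2$ loop below is an assumed example, not one from the text.

```python
import numpy as np

# Assumed stable 2x2 loop: L(s) = M/(s+1) with a constant gain matrix M.
M = np.array([[0.4, 0.3],
              [0.2, 0.4]])
L = lambda w: M / (1j * w + 1)

# Spectral radius over a log-spaced frequency grid.
w = np.logspace(-3, 3, 1000)
rho = np.array([np.max(np.abs(np.linalg.eigvals(L(wi)))) for wi in w])
print(rho.max() < 1)   # True: (4.107) holds, so the closed loop is stable
```

Since the eigenvalue magnitudes here peak at about 0.64 (at low frequency), (4.107) is satisfied with margin; no phase information was needed to conclude stability.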
Theorem 4.9 is quite intuitive, as it simply says that if the system gain is less than 1 in all directions (all eigenvalues) and for all frequencies ($\forall\omega$), then all signal deviations will eventually die out, and the system is stable. In general, the spectral radius theorem is conservative because phase information is not considered. For SISO systems $\rho(L(j\omega)) = |L(j\omega)|$, and consequently the above stability condition requires that $|L(j\omega)| < 1$ for all frequencies. This is clearly conservative, since from the Nyquist stability condition for a stable $L(s)$, we need only require $|L(j\omega)| < 1$ at frequencies where the phase of $L(j\omega)$ is $-180° \pm n \cdot 360°$. As an example, let $L = k/(s + \epsilon)$. Since the phase never reaches $-180°$, the system is closed-loop stable for any value of $k > 0$. However, to satisfy (4.107) we need $k < \epsilon$, which for a small value of $\epsilon$ is very conservative indeed.

Remark. Later we will consider cases where the phase of $L$ is allowed to vary freely, in which case Theorem 4.9 is not conservative. Actually, a clever use of the above theorem is the main idea behind most of the conditions for robust stability and robust performance presented later in this book.

The small gain theorem below follows directly from Theorem 4.9 if we consider a matrix norm satisfying $\|AB\| \leq \|A\| \cdot \|B\|$, since at any frequency we then have $\rho(L) \leq \|L\|$ (see (A.116)).

Theorem 4.10 Small Gain Theorem. Consider a system with a stable loop transfer function $L(s)$. Then the closed-loop system is stable if

$\|L(j\omega)\| < 1 \quad \forall\omega$    (4.114)

where $\|\cdot\|$ denotes any matrix norm satisfying $\|AB\| \leq \|A\| \cdot \|B\|$.

Remark 1 This result is only a special case of a more general small gain theorem which also applies to many nonlinear systems (Desoer and Vidyasagar, 1975).

Remark 2 The small gain theorem does not consider phase information, and is therefore independent of the sign of the feedback.

Remark 3 Any induced norm can be used, for example, the singular value, $\bar{\sigma}(L)$.

Remark 4 The small gain theorem can be extended to include more than one block in the loop, e.g. $L = L_1 L_2$. In this case we get from (A.97) that the system is stable if $\|L_1\| \cdot \|L_2\| < 1$ $\forall\omega$.

Remark 5 The small gain theorem is generally more conservative than the spectral radius condition in Theorem 4.9. Therefore, the arguments on conservatism made following Theorem 4.9 also apply to Theorem 4.10.

4.10 System norms
Figure 4.9: System $G$

Consider the system in Figure 4.9, with a stable transfer function matrix $G(s)$ and impulse response matrix $g(t)$. To evaluate the performance we ask the question: given information about the allowed input signals $w(t)$, how large can the outputs $z(t)$ become? To answer this, we must evaluate the relevant system norm. We will here evaluate the output signal in terms of the usual 2-norm,

$\|z(t)\|_2 = \sqrt{\int_{-\infty}^{\infty} \sum_i |z_i(\tau)|^2\, d\tau}$    (4.115)

and consider three different choices for the inputs:

1. $w(t)$ is a series of unit impulses.
2. $w(t)$ is any signal satisfying $\|w(t)\|_2 = 1$.
3. $w(t)$ is any signal satisfying $\|w(t)\|_2 = 1$, but $w(t) = 0$ for $t \geq 0$, and we only measure $z(t)$ for $t \geq 0$.

The relevant system norms in the three cases are the $\mathcal{H}_2$, $\mathcal{H}_\infty$, and Hankel norms, respectively. The $\mathcal{H}_2$ and $\mathcal{H}_\infty$ norms also have other interpretations, as discussed below. We introduced the $\mathcal{H}_2$ and $\mathcal{H}_\infty$ norms in Section 2.7, where we also discussed the terminology. In Appendix A.5.7 we present a more detailed interpretation and comparison of these and other norms.


4.10.1 $\mathcal{H}_2$ norm

Consider a strictly proper system $G(s)$, i.e. $D = 0$ in a state-space realization. For the $\mathcal{H}_2$ norm we use the Frobenius norm spatially (for the matrix) and integrate over frequency, i.e.

$\|G(s)\|_2 \triangleq \sqrt{\frac{1}{2\pi}\int_{-\infty}^{\infty} \underbrace{\mathrm{tr}\left(G(j\omega)^H G(j\omega)\right)}_{\sum_{ij}|G_{ij}(j\omega)|^2}\, d\omega}$    (4.116)

We see that $G(s)$ must be strictly proper, otherwise the $\mathcal{H}_2$ norm is infinite. The $\mathcal{H}_2$ norm can also be given another interpretation. By Parseval's theorem, (4.116) is equal to the $\mathcal{H}_2$ norm of the impulse response

$\|G(s)\|_2 = \|g(t)\|_2 \triangleq \sqrt{\int_0^{\infty} \underbrace{\mathrm{tr}\left(g^T(\tau)g(\tau)\right)}_{\sum_{ij}|g_{ij}(\tau)|^2}\, d\tau}$    (4.117)

Remark 1 Note that $G(s)$ and $g(t)$ are dynamic systems, while $G(j\omega)$ and $g(\tau)$ are constant matrices (for a given value of $\omega$ or $\tau$).

Remark 2 We can change the order of integration and summation in (4.117) to get

$\|G(s)\|_2 = \|g(t)\|_2 = \sqrt{\sum_{ij}\int_0^{\infty} |g_{ij}(\tau)|^2\, d\tau}$    (4.118)

where $g_{ij}(t)$ is the $ij$'th element of the impulse response matrix, $g(t)$. From this we see that the $\mathcal{H}_2$ norm can be interpreted as the 2-norm output resulting from applying unit impulses $\delta_j(t)$ to each input, one after another (allowing the output to settle to zero before applying an impulse to the next input). This is more clearly seen by writing $\|G(s)\|_2 = \sqrt{\sum_{i=1}^{m} \|z_i(t)\|_2^2}$, where $z_i(t)$ is the output vector resulting from applying a unit impulse $\delta_i(t)$ to the $i$'th input.

À

In summary, we have the following deterministic performance interpretation of the

À¾ norm:
(4.119)

´× µ ¾

۴ص

Ñ ÙÒ Ø ÜÑÔÙÐ× × Þ ´Øµ ¾

The H2 norm can also be given a stochastic interpretation (see page 371) in terms of the quadratic criterion in optimal control (LQG), where we measure the expected root mean square (rms) value of the output in response to white noise excitation. For numerical computations of the H2 norm, consider the state-space realization G(s) = C(sI - A)^{-1}B. By substituting (4.10) into (4.117) we find

    \| G(s) \|_2 \;=\; \sqrt{ \mathrm{tr}\big( B^T Q B \big) } \quad \text{or} \quad \| G(s) \|_2 \;=\; \sqrt{ \mathrm{tr}\big( C P C^T \big) }     (4.120)

where Q and P are the observability and controllability Gramians, respectively, obtained as solutions to the Lyapunov equations (4.48) and (4.44).
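As an illustrative sketch of (4.120) (not from the book), the Gramian-based computation can be checked with SciPy on the plant G(s) = 1/(s+a) of Example 4.17 below, using the assumed value a = 2; the analytic answer is 1/√(2a):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# G(s) = 1/(s+a) has the realization A = -a, B = 1, C = 1, D = 0
a = 2.0
A = np.array([[-a]])
B = np.array([[1.0]])
C = np.array([[1.0]])

# Controllability Gramian P:  A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Q:  A^T Q + Q A + C^T C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

h2_via_P = np.sqrt(np.trace(C @ P @ C.T))  # sqrt(tr(C P C^T))
h2_via_Q = np.sqrt(np.trace(B.T @ Q @ B))  # sqrt(tr(B^T Q B))
print(h2_via_P, h2_via_Q)  # both 0.5 = 1/sqrt(2a)
```

Both expressions in (4.120) give the same value, as they must.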


4.10.2 H∞ norm

Consider a proper linear stable system G(s) (i.e. D ≠ 0 is allowed). For the H∞ norm we use the singular value (induced 2-norm) spatially (for the matrix) and pick out the peak value as a function of frequency

    \| G(s) \|_\infty \;\triangleq\; \max_{\omega} \bar{\sigma}\big( G(j\omega) \big)     (4.121)

In terms of performance we see from (4.121) that the H∞ norm is the peak of the transfer function "magnitude", and by introducing weights, the H∞ norm can be interpreted as the magnitude of some closed-loop transfer function relative to a specified upper bound. This leads to specifying performance in terms of weighted sensitivity, mixed sensitivity, and so on. However, the H∞ norm also has several time domain performance interpretations. First, as discussed in Section 3.3.5, it is the worst-case steady-state gain for sinusoidal inputs at any frequency. Furthermore, from Tables A.1 and A.2 in the Appendix we see that the H∞ norm is equal to the induced (worst-case) 2-norm in the time domain:

    \| G(s) \|_\infty \;=\; \max_{w(t) \neq 0} \frac{ \| z(t) \|_2 }{ \| w(t) \|_2 } \;=\; \max_{\| w(t) \|_2 = 1} \| z(t) \|_2     (4.122)

This is a fortunate fact from functional analysis which is proved, for example, in Desoer and Vidyasagar (1975). In essence, (4.122) arises because the worst input signal w(t) is a sinusoid with frequency ω* and a direction which gives \bar{\sigma}(G(j\omega^*)) as the maximum gain. The H∞ norm is also equal to the induced power (rms) norm, and also has an interpretation as an induced norm in terms of the expected values of stochastic signals. All these various interpretations make the H∞ norm useful in engineering applications.

The H∞ norm is usually computed numerically from a state-space realization as the smallest value of γ such that the Hamiltonian matrix H has no eigenvalues on the imaginary axis, where

    H \;=\; \begin{bmatrix} A + B R^{-1} D^T C & B R^{-1} B^T \\ -C^T \big( I + D R^{-1} D^T \big) C & -\big( A + B R^{-1} D^T C \big)^T \end{bmatrix}     (4.123)

and R = γ² I − D^T D; see Zhou et al. (1996, p.115). This is an iterative procedure, where one may start with a large value of γ and reduce it until imaginary eigenvalues for H appear.
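The γ-iteration can be sketched as follows for the strictly proper case (D = 0, so R = γ²I and the Hamiltonian simplifies). This is an illustrative implementation, not the book's; it is checked on G(s) = 1/(s+a) with the assumed value a = 2, whose H∞ norm is 1/a:

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-8):
    """Smallest gamma such that the Hamiltonian (D = 0 case of (4.123))
    has no eigenvalues on the imaginary axis, found by bisection."""
    def imag_axis_eig(gamma):
        H = np.block([[A, (B @ B.T) / gamma**2],
                      [-C.T @ C, -A.T]])
        return np.any(np.abs(np.linalg.eigvals(H).real) < 1e-9)
    lo, hi = 1e-8, 1e8          # lo is below the norm, hi is above it
    while hi - lo > tol * hi:
        mid = np.sqrt(lo * hi)  # bisect on a log scale
        if imag_axis_eig(mid):
            lo = mid            # gamma too small: eigenvalues on j-axis
        else:
            hi = mid
    return hi

a = 2.0
A = np.array([[-a]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(hinf_norm(A, B, C))  # approx 1/a = 0.5
```

For this plant the Hamiltonian eigenvalues are ±√(a² − γ⁻²), which reach the imaginary axis exactly at γ = 1/a, so the bisection converges to 1/a.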

4.10.3 Difference between the H2 and H∞ norms

To understand the difference between the H2 and H∞ norms, note that from (A.126) we can write the Frobenius norm in terms of singular values. We then have

    \| G(s) \|_2 \;=\; \sqrt{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_i \sigma_i^2 \big( G(j\omega) \big) \, d\omega }     (4.124)

From this we see that minimizing the H∞ norm corresponds to minimizing the peak of the largest singular value ("worst direction, worst frequency"), whereas minimizing the H2 norm corresponds to minimizing the sum of the squares of all the singular values over all frequencies ("average direction, average frequency"). In summary, we have

• H∞: "push down peak of largest singular value".
• H2: "push down whole thing" (all singular values over all frequencies).

Example 4.17 We will compute the H∞ and H2 norms for the following SISO plant

    G(s) \;=\; \frac{1}{s + a}     (4.125)

The H2 norm is

    \| G(s) \|_2 \;=\; \left( \frac{1}{2\pi} \int_{-\infty}^{\infty} \underbrace{ | G(j\omega) |^2 }_{ \frac{1}{\omega^2 + a^2} } \, d\omega \right)^{1/2} \;=\; \left( \frac{1}{2\pi} \, \frac{1}{a} \Big[ \tan^{-1} \frac{\omega}{a} \Big]_{-\infty}^{\infty} \right)^{1/2} \;=\; \sqrt{ \frac{1}{2a} }     (4.126)

To check Parseval's theorem we consider the impulse response

    g(t) \;=\; \mathcal{L}^{-1} \left\{ \frac{1}{s + a} \right\} \;=\; e^{-at}, \quad t \ge 0     (4.127)

and we get

    \| g(t) \|_2 \;=\; \sqrt{ \int_0^{\infty} \big( e^{-at} \big)^2 \, dt } \;=\; \sqrt{ \frac{1}{2a} }     (4.128)

as expected. The H∞ norm is

    \| G(s) \|_\infty \;=\; \max_{\omega} | G(j\omega) | \;=\; \max_{\omega} \frac{1}{ \big( \omega^2 + a^2 \big)^{1/2} } \;=\; \frac{1}{a}     (4.129)

For interest, we also compute the 1-norm of the impulse response (which is equal to the induced ∞-norm in the time domain):

    \| g(t) \|_1 \;=\; \int_0^{\infty} \underbrace{ | g(\tau) | }_{ e^{-a\tau} } \, d\tau \;=\; \frac{1}{a}     (4.130)

In general, it can be shown that \| G(s) \|_\infty \le \| g(t) \|_1, and this example illustrates that we may have equality.
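These closed-form values are easy to confirm by quadrature. The sketch below (with the assumed illustrative pole a = 2, not a value fixed by the text) checks (4.128), (4.129) and (4.130); all three evaluate to 0.5 here, with the last two equal as the text notes:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0
# H2 norm via the impulse response (4.128): integrate (e^{-at})^2
h2 = np.sqrt(quad(lambda t: np.exp(-2 * a * t), 0, np.inf)[0])
# Hinf norm (4.129): peak of 1/sqrt(w^2 + a^2), attained at w = 0
w = np.linspace(0, 100, 100001)
hinf = np.max(1 / np.sqrt(w**2 + a**2))
# 1-norm of the impulse response (4.130)
l1 = quad(lambda t: np.exp(-a * t), 0, np.inf)[0]
print(h2, hinf, l1)  # 1/sqrt(2a), 1/a, 1/a
```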

Example 4.18 There exists no general relationship between the H2 and H∞ norms. As an example consider the two systems

    f_1(s) \;=\; \frac{1}{\epsilon s + 1}, \qquad f_2(s) \;=\; \frac{\epsilon s}{ s^2 + \epsilon s + 1 }     (4.131)

and let ε → 0. Then we have for f_1 that the H∞ norm is 1 and the H2 norm is infinite. For f_2 the H∞ norm is again 1, but now the H2 norm is zero.
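A numerical sketch of the second system (the state-space realization and the value ε = 10⁻⁴ are my illustrative choices): the H2 norm computed from the controllability Gramian is √(ε/2), which vanishes as ε → 0, while the resonant gain |f₂(j·1)| equals 1 exactly:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

eps = 1e-4
# f2(s) = eps*s / (s^2 + eps*s + 1) in controllable canonical form
A = np.array([[0.0, 1.0], [-1.0, -eps]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, eps]])

P = solve_continuous_lyapunov(A, -B @ B.T)   # controllability Gramian
h2 = np.sqrt(np.trace(C @ P @ C.T))          # = sqrt(eps/2), tiny
gain_at_res = abs((C @ np.linalg.solve(1j * 1.0 * np.eye(2) - A, B))[0, 0])
print(h2, gain_at_res)  # h2 -> 0 with eps, |f2(j1)| = 1
```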

Why is the H∞ norm so popular? In robust control we use the H∞ norm mainly because it is convenient for representing unstructured model uncertainty, and because it satisfies the multiplicative property (A.97):

    \| A(s) B(s) \|_\infty \;\le\; \| A(s) \|_\infty \cdot \| B(s) \|_\infty     (4.132)


This follows from (4.122), which shows that the H∞ norm is an induced norm.

What is wrong with the H2 norm? The H2 norm has a number of good mathematical and numerical properties, and its minimization has important engineering implications. However, the H2 norm is not an induced norm and does not satisfy the multiplicative property. This implies that we cannot, by evaluating the H2 norm of the individual components, say anything about how their series (cascade) interconnection will behave.

Example 4.19 Consider again G(s) = 1/(s + a) in (4.125), for which we found \| G(s) \|_2 = \sqrt{1/2a}. Now consider the H2 norm of G(s)G(s):

    \| G(s) G(s) \|_2 \;=\; \sqrt{ \int_0^{\infty} \Big( \underbrace{ \mathcal{L}^{-1} \Big\{ \frac{1}{(s+a)^2} \Big\} }_{ t \, e^{-at} } \Big)^2 \, dt } \;=\; \frac{1}{2a} \sqrt{ \frac{1}{a} }

and we find, for a < 1, that

    \| G(s) G(s) \|_2 \;>\; \| G(s) \|_2 \cdot \| G(s) \|_2     (4.133)

which does not satisfy the multiplicative property (A.97). On the other hand, the H∞ norm does satisfy the multiplicative property, and for this specific example we have equality, with \| G(s)G(s) \|_\infty = 1/a^2 = \| G(s) \|_\infty \cdot \| G(s) \|_\infty.
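The violation in (4.133) can be checked numerically; the sketch below uses the assumed value a = 0.5 (any a < 1 exhibits it):

```python
import numpy as np
from scipy.integrate import quad

a = 0.5  # any a < 1 exhibits the violation
# impulse response of G(s)G(s) = 1/(s+a)^2 is t*exp(-a*t)
gg_h2 = np.sqrt(quad(lambda t: (t * np.exp(-a * t))**2, 0, np.inf)[0])
g_h2 = 1 / np.sqrt(2 * a)  # H2 norm of G(s) = 1/(s+a)
print(gg_h2, g_h2**2)  # 1.414... > 1.0: multiplicative property fails for H2
# For Hinf the property holds, here with equality: 1/a^2 = (1/a)*(1/a)
```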

4.10.4 Hankel norm

In the following discussion, we aim at developing an understanding of the Hankel norm. The Hankel norm of a stable system G(s) is obtained when one applies an input w(t) up to t = 0, measures the output z(t) for t > 0, and selects w(t) to maximize the ratio of the 2-norms of these two signals:

    \| G(s) \|_H \;\triangleq\; \max_{w(t)} \sqrt{ \frac{ \int_0^{\infty} \| z(\tau) \|_2^2 \, d\tau }{ \int_{-\infty}^{0} \| w(\tau) \|_2^2 \, d\tau } }     (4.134)

The Hankel norm is a kind of induced norm from past inputs to future outputs. Its definition is analogous to trying to pump a swing with limited input energy such that the subsequent length of jump is maximized, as illustrated in Figure 4.10. It may be shown that the Hankel norm is equal to

    \| G(s) \|_H \;=\; \sqrt{ \rho( P Q ) }     (4.135)

where ρ is the spectral radius (maximum eigenvalue), P is the controllability Gramian defined in (4.43) and Q the observability Gramian defined in (4.47). The name "Hankel" is used because the matrix PQ has the special structure of a Hankel matrix (which has identical elements along the "wrong-way" diagonals). The corresponding Hankel singular values are the positive square roots of the eigenvalues of PQ,

    \sigma_i \;=\; \sqrt{ \lambda_i ( P Q ) }     (4.136)
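As a sketch (reusing the plant G(s) = 1/(s+a) of Example 4.17 with the assumed value a = 2), (4.135) and (4.136) reduce to the Gramian computation below; here P = Q = 1/(2a), so the Hankel norm is 1/(2a):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hankel norm of G(s) = 1/(s+a) via (4.135): sqrt(rho(P Q))
a = 2.0
A = np.array([[-a]]); B = np.array([[1.0]]); C = np.array([[1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian

hankel_svs = np.sqrt(np.abs(np.linalg.eigvals(P @ Q)))  # (4.136)
hankel_norm = hankel_svs.max()
print(hankel_norm)  # 1/(2a) = 0.25, smaller than the Hinf norm 1/a
```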

Figure 4.10: Pumping a swing: illustration of the Hankel norm. The input is applied for t ≤ 0 and the jump starts at t = 0.

The Hankel and H∞ norms are closely related and we have (Zhou et al., 1996, p.111)

    \| G(s) \|_H \;\le\; \| G(s) \|_\infty \;\le\; 2 \sum_{i=1}^{n} \sigma_i     (4.137)

Thus, the Hankel norm is always smaller than (or equal to) the H∞ norm, which is also reasonable by comparing the definitions in (4.122) and (4.134), since the inputs only affect the outputs through the states at t = 0.

Model reduction. Consider the following problem: given a state-space description G(s) of a system, find a model G_a(s) with fewer states such that the input-output behaviour (from w to z) is changed as little as possible. Based on the discussion above it seems reasonable to make use of the Hankel norm. For model reduction, we usually start with a realization of G which is internally balanced, that is, such that Q = P = Σ, where Σ is the matrix of Hankel singular values. We may then discard states (or rather combinations of states corresponding to certain subspaces) corresponding to the smallest Hankel singular values. The change in H∞ norm caused by deleting states in G(s) is less than twice the sum of the discarded Hankel singular values, i.e.

    \| G(s) - G_a(s) \|_\infty \;\le\; 2 \big( \sigma_{k+1} + \sigma_{k+2} + \cdots \big)     (4.138)

where G_a(s) denotes a truncated or residualized balanced realization with k states; see Chapter 11. The method of Hankel norm minimization gives a somewhat improved error bound, where we are guaranteed that \| G(s) - G_a(s) \|_\infty is less than the sum of the discarded Hankel singular values. This and other methods for model reduction are discussed in detail in Chapter 11, where a number of examples can be found.

Example 4.20 We want to compute analytically the various system norms for G(s) = 1/(s + a) using state-space methods.

A state-space realization is A = -a, B = 1, C = 1, D = 0. The controllability Gramian P is obtained from the Lyapunov equation A P + P A^T + B B^T = 0, i.e. -2aP + 1 = 0, so P = 1/(2a). Similarly, the observability Gramian is Q = 1/(2a). From (4.120) the H2 norm is then

    \| G(s) \|_2 \;=\; \sqrt{ \mathrm{tr}( B^T Q B ) } \;=\; \sqrt{ 1/2a }

The eigenvalues of the Hamiltonian matrix H in (4.123) are

    \lambda(H) \;=\; \pm \sqrt{ a^2 - \gamma^{-2} }

We find that H has no imaginary eigenvalues for γ > 1/a, so \| G(s) \|_\infty = 1/a. The Hankel matrix is P Q = 1/(4a^2), and from (4.135) the Hankel norm is

    \| G(s) \|_H \;=\; \sqrt{ \rho( P Q ) } \;=\; \frac{1}{2a}

These results agree with the frequency-domain calculations in Example 4.17.

Exercise 4.16 Let a = 0.5 and ε = 0.0001 and check numerically the results in Examples 4.17, 4.18, 4.19 and 4.20 using, for example, the MATLAB μ-toolbox commands h2norm, hinfnorm, and for the Hankel norm, [sysb,hsig]=sysbal(sys); max(hsig).

4.11 Conclusion

This chapter has covered the following important elements of linear system theory: system descriptions, state controllability and observability, poles and zeros, stability and stabilization, and system norms. The topics are standard and the treatment is complete for the purposes of this book.

5 LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS

In this chapter, we discuss the fundamental limitations on performance in SISO systems. We summarize these limitations in the form of a procedure for input-output controllability analysis, which is then applied to a series of examples. Input-output controllability of a plant is the ability to achieve acceptable control performance. Proper scaling of the input, output and disturbance variables prior to this analysis is critical.

5.1 Input-Output Controllability

In university courses on control, methods for controller design and stability analysis are usually emphasized. However, in practice the following three questions are often more important:

I. How well can the plant be controlled? Before starting any controller design one should first determine how easy the plant actually is to control. Is it a difficult control problem? Indeed, does there even exist a controller which meets the required performance objectives?

II. What control structure should be used? By this we mean what variables should we measure and control, which variables should we manipulate, and how are these variables best paired together?

In other textbooks one can find qualitative rules for these problems. For example, in Seborg et al. (1989), in a chapter called "The art of process control", the following rules are given:

1. Control the outputs that are not self-regulating.
2. Control the outputs that have favourable dynamic and static characteristics, i.e. for each output, there should exist an input which has a significant, direct and rapid effect on it.
3. Select the inputs that have large effects on the outputs.
4. Select the inputs that rapidly affect the controlled variables.

These rules are reasonable, but what is "self-regulating", "large", "rapid" and "direct"? A major objective of this chapter is to quantify these terms.

III. How might the process be changed to improve control? For example, to reduce the effects of a disturbance one may in process control consider changing the size of a buffer tank, or in automotive control one might decide to change the properties of a spring. In other situations, the speed of response of a measurement device might be an important factor in achieving acceptable control.

The above three questions are each related to the inherent control characteristics of the process itself. We will introduce the term input-output controllability to capture these characteristics, as described in the following definition.

Definition 5.1 (Input-output) controllability is the ability to achieve acceptable control performance; that is, to keep the outputs (y) within specified bounds or displacements from their references (r), in spite of unknown but bounded variations, such as disturbances (d) and plant changes (including uncertainty), using available inputs (u) and available measurements (y_m or d_m).

In summary, a plant is controllable if there exists a controller (connecting plant measurements and plant inputs) that yields acceptable performance for all expected plant variations. Thus, controllability is independent of the controller, and is a property of the plant (or process) alone. It can only be affected by changing the plant itself, that is, by (plant) design changes. These may include:

• changing the apparatus itself, e.g. type, size, etc.
• relocating sensors and actuators
• adding new equipment to dampen disturbances
• adding extra sensors
• adding extra actuators
• changing the control objectives
• changing the configuration of the lower layers of control already in place

Whether or not the last two actions are design modifications is arguable, but at least they address important issues which are relevant before the controller is designed.

Early work on input-output controllability analysis includes that of Ziegler and Nichols (1943), Rosenbrock (1970), and Morari (1983), who made use of the concept of "perfect control". Important ideas on performance limitations are also found in Bode (1945), Horowitz (1963), Frank (1968a; 1968b), Kwakernaak and Sivan (1972), Horowitz and Shaked (1975), Zames (1981), Doyle and Stein (1981), Francis and Zames (1984), Boyd and Desoer (1985), Kwakernaak (1985), Freudenberg and Looze (1985; 1988), Engell (1988), Morari and Zafiriou (1989), Boyd and Barratt (1991), and Chen (1995). We also refer the reader to two IFAC workshops on Interactions between process design and process control (Perkins, 1992; Zafiriou, 1994).

but usually it is not. ´Øµ × Ò Ø. Consequently.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ ÁÆ ËÁËÇ Ë ËÌ ÅË ½½ 5. and so on.4.1. the disturbances or the setpoints. More desirable.1. Another term for input-output controllability analysis is performance targeting. see Chapter 10. A rigorous approach to controllability analysis would be to formulate mathematically the control objectives. Nonlinear simulations to validate the linear controllability analysis are of course still recommended. the model uncertainty. without performing a detailed controller design. i. so that the requirement for acceptable performance is: ¯ For any reference ִص between Ê and Ê and any disturbance ´Øµ between ½ and ½. in practice such an approach is difficult and time consuming. especially if there are a large number of candidate measurements or actuators. to determine whether or not a plant is controllable. Surprisingly.         We will interpret this definition from a frequency-by-frequency sinusoidal point of view. with this approach. 5. Experience from a large number of case studies confirms that the linear measures are often very good. using an input ٴص within the range ½ to ½. However. i. In most cases the “simulation approach” is used i. With model uncertainty this involves designing a -optimal controller (see Chapter 8). The main objective of this chapter is to derive such controllability tools based on appropriately scaled models of ´×µ and ´×µ. With Ý Ö we then have:   . no definition of the desired performance is included. the methods available for controllability analysis are largely qualitative. However. one of the most important nonlinearities. we will assume that the variables and models have been scaled as outlined in Section 1. this requires a specific controller design and specific values of disturbances and setpoint changes. to deal with slowly varying changes one may perform a controllability analysis at several selected operating points. 
is to have a few simple tools which can be used to get a rough idea of how easy the plant is to control. Also.2 Scaling and performance The above definition of controllability does not specify the allowed bounds for the displacements or the expected variations in the disturbance. given the plethora of mathematical methods available for control system design. when we discuss controllability. one can never know if the result is a fundamental property of the plant. or if it depends on the specific controller designed. to keep the output Ý ´Øµ within the range ִص ½ to ִص · ½ (at least most of the time).e. the class of disturbances. etc. namely that associated with input constraints. An apparent shortcoming of the controllability analysis presented in this book is that all the tools are linear.1 Input-output controllability analysis Input-output controllability analysis is applied to a plant to find out what control performance can be expected. that is. Throughout this chapter and the next. and then to synthesize controllers to see whether the objectives can be met.e. In fact. This may seem restrictive. can be handled quite well with a linear analysis.e. performance is assessed by exhaustive simulations..

• For any disturbance |d(ω)| ≤ 1 and any reference |r̃(ω)| ≤ R(ω), the performance requirement is to keep, at each frequency, the control error |e(ω)| ≤ 1, using an input |u(ω)| ≤ 1.

It is impossible to track very fast reference changes, so we will assume that R(ω) is frequency-dependent; for simplicity, we assume that R(ω) is R (a constant) up to the frequency ω_r and is 0 above that frequency. It could also be argued that the magnitude of the sinusoidal disturbances should approach zero at high frequencies. While this may be true, we really only care about frequencies within the bandwidth of the system, and in most cases it is reasonable to assume that the plant experiences sinusoidal disturbances of constant magnitude up to this frequency. Similarly, it might also be argued that the allowed control error should be frequency dependent; for example, we may require no steady-state offset, i.e. e should be zero at low frequencies. However, including frequency variations is not recommended when doing a preliminary analysis (although one may take such considerations into account when interpreting the results).

Recall that with r = R r̃ (see Section 1.4) the control error may be written as

    e \;=\; y - r \;=\; G u + G_d d - R \tilde{r}     (5.1)

where R is the magnitude of the reference and |r̃(ω)| ≤ 1 and |d(ω)| ≤ 1 are unknown signals. We will use (5.1) to unify our treatment of disturbances and references. Specifically, we will derive results for disturbances, which can then be applied directly to the references by replacing G_d by −R.

5.1.3 Remarks on the term controllability

The above definition of (input-output) controllability is in tune with most engineers' intuitive feeling about what the term means, and was also how the term was used historically in the control literature. For example, Ziegler and Nichols (1943) defined controllability as "the ability of the process to achieve and maintain the desired equilibrium value". Unfortunately, in the 60's "controllability" became synonymous with the rather narrow concept of "state controllability" introduced by Kalman, and the term is still used in this restrictive manner by the system theory community. State controllability is the ability to bring a system from a given initial state to any final state within a finite time. However, this gives no regard to the quality of the response between and after these two states, and the required inputs may be excessive. The concept of state controllability is important for realizations and numerical calculations, but as long as we know that all the unstable modes are both controllable and observable, it usually has little practical significance. For example, Rosenbrock (1970, p.177) notes that "most industrial plants are controlled quite satisfactorily though they are not [state] controllable". And conversely, there are many systems, like the tanks in series in Example 4.5, which are state controllable, but which are not input-output controllable. To avoid any confusion between practical controllability and Kalman's state controllability, Morari (1983) introduced the term dynamic resilience. However, this term does not capture the fact that it is related to control, so instead we prefer the term input-output controllability, or simply controllability when it is clear we are not referring to state controllability.

Where are we heading? In this chapter we will discuss a number of results related to achievable performance. Many of the results can be formulated as upper and lower bounds on the bandwidth of the system. As noted in Section 2.4.5, there are several definitions of bandwidth (ω_B, ω_c and ω_BT) in terms of the transfer functions S, L and T, but since we are looking for approximate bounds we will not be too concerned with these differences. The main results are summarized at the end of the chapter in terms of eight controllability rules.

5.2 Perfect control and plant inversion

A good way of obtaining insight into the inherent limitations on performance originating in the plant itself is to consider the inputs needed to achieve perfect control (Morari, 1983). Let the plant model be

    y \;=\; G u + G_d d     (5.2)

"Perfect control" (which, of course, cannot be realized in practice) is achieved when the output is identically equal to the reference, i.e. y = r. To find the corresponding plant input, set y = r and solve for u in (5.2):

    u \;=\; G^{-1} r - G^{-1} G_d d     (5.3)

Equation (5.3) represents a perfect feedforward controller, assuming d is measurable. When feedback control u = K(r − y) is used, we have from (2.20) that

    u \;=\; K S r - K S G_d d

or, since the complementary sensitivity function is T = G K S,

    u \;=\; G^{-1} T r - G^{-1} T G_d d     (5.4)

We see that at frequencies where feedback is effective and T ≈ I, the input generated by feedback in (5.4) is the same as the perfect control input in (5.3) (these arguments also apply to MIMO systems, and this is the reason why we here choose to use matrix notation). That is, high gain feedback generates an inverse of G even though the controller K may be very simple. An important lesson therefore is that perfect control requires the controller to somehow generate an inverse of G. From this we get that perfect control cannot be achieved if

• G contains RHP-zeros (since then G^{-1} is unstable)
• G contains time delay (since then G^{-1} contains a non-causal prediction)
• G has more poles than zeros (since then G^{-1} is unrealizable)

In addition, for feedforward control we have that perfect control cannot be achieved if

• G is uncertain (since then G^{-1} cannot be obtained exactly)

The last restriction may be overcome by high gain feedback, but we know that we cannot have high gain feedback at all frequencies. Furthermore, the required input in (5.3) must not exceed the maximum physically allowed value. Therefore, perfect control cannot be achieved if

• |G^{-1} G_d| is large
• |G^{-1} R| is large

where "large" with our scaled models means larger than 1. There are also other situations which make control difficult, such as:

• G is unstable
• |G_d| is large

If the plant is unstable, then unless feedback control is applied to stabilize the system, the outputs will "take off" and eventually hit physical constraints. Similarly, if without control a disturbance will cause the outputs to move far away from their desired values, control is required. So in both cases control is required, and problems occur if this demand for control is somehow in conflict with the other factors mentioned above which also make control difficult. We have assumed perfect measurements in the discussion so far, but in practice, noise and uncertainty associated with the measurements of disturbances and outputs will present additional problems for feedforward and feedback control.

5.3 Constraints on S and T

In this section, we present some fundamental algebraic and analytic constraints which apply to the sensitivity S and complementary sensitivity T.

5.3.1 S plus T is one

From the definitions S = (I + L)^{-1} and T = L(I + L)^{-1} we derive

    S + T \;=\; I     (5.5)

(or S + T = 1 for a SISO system). Ideally, we want S small to obtain the benefits of feedback (small control error for commands and disturbances), and T small to avoid sensitivity to noise, which is one of the disadvantages of feedback. Unfortunately, these requirements are not simultaneously possible at any frequency, as is clear from (5.5). Specifically, (5.5) implies that at any frequency either |S(jω)| or |T(jω)| must be larger than or equal to 0.5.

5.3.2 The waterbed effects (sensitivity integrals)

A typical sensitivity function is shown by the solid line in Figure 5.1. We note that |S| has a peak value greater than 1; we will show that this peak is unavoidable in practice. Two formulas are given, in the form of theorems, which essentially say that if we push the sensitivity down at some frequencies then it will have to increase at others. The effect is similar to sitting on a waterbed: pushing it down at one point, which reduces the water level locally, will result in an increased level somewhere else on the bed. In general, a trade-off between sensitivity reduction and sensitivity increase must be performed whenever:

Figure 5.1: Plot of typical sensitivity, |S|, with upper bound 1/|w_P|.

1. L(s) has at least two more poles than zeros (first waterbed formula), or
2. L(s) has a RHP-zero (second waterbed formula).

Pole excess of two: First waterbed formula

To motivate the first waterbed formula consider the open-loop transfer function L(s) = 1/(s(s+1)), which has a pole excess of two. As shown in Figure 5.2, there exists a frequency range over which the Nyquist plot of L(jω) is inside the unit circle centred on the point −1, such that |1 + L|, which is the distance between L and −1, is less than one, and thus |S| = |1 + L|^{-1} is greater than one. In practice, L(s) will have at least two more poles than zeros (at least at sufficiently high frequency, due to actuator and measurement dynamics), so there will always exist a frequency range over which |S| is greater than one. This behaviour may be quantified by the following theorem, of which the stable case is a classical result due to Bode.

Theorem 5.1 Bode Sensitivity Integral (First waterbed formula). Suppose that the open-loop transfer function L(s) is rational and has at least two more poles than zeros (relative degree of two or more). Suppose also that L(s) has N_p RHP-poles at locations p_i. Then for closed-loop stability the sensitivity function must satisfy

    \int_0^{\infty} \ln | S(j\omega) | \, d\omega \;=\; \pi \cdot \sum_{i=1}^{N_p} \mathrm{Re}(p_i)     (5.6)

where Re(p_i) denotes the real part of p_i.

Proof: See Doyle et al. (1992, p.100) or Zhou et al. (1996). The generalization of Bode's criterion to unstable plants is due to Freudenberg and Looze (1985; 1988).

For a graphical interpretation of (5.6) note that the magnitude scale is logarithmic whereas the frequency-scale is linear.

Figure 5.2: |S| > 1 whenever the Nyquist plot of L(s) = 1/(s(s+1)) is inside the unit circle centred on −1.

Stable plant. For a stable plant we must have

    \int_0^{\infty} \ln | S(j\omega) | \, d\omega \;=\; 0     (5.7)

and the area of sensitivity reduction (ln|S| negative) must equal the area of sensitivity increase (ln|S| positive). In this respect, the benefits and costs of feedback are balanced exactly, as in the waterbed analogy. From this we expect that an increase in the bandwidth (|S| smaller than 1 over a larger frequency range) must come at the expense of a larger peak in |S|. Although this is true in most practical cases, the effect may not be so striking in some cases, and it is not strictly implied by (5.6) anyway.

Remark. This is because the increase in area may come over an infinite frequency range; imagine a waterbed of infinite size. Consider |S(jω)| = 1 + δ for ω ∈ [ω₁, ω₂], where δ is arbitrarily small (small peak); then we can choose ω₁ arbitrarily large (high bandwidth) simply by selecting the interval [ω₁, ω₂] to be sufficiently large. However, in practice the frequency response of L has to roll off at high frequencies, so ω₂ is limited, and (5.6) and (5.7) impose real design limitations.

Unstable plant. The presence of unstable poles usually increases the peak of the sensitivity, as seen from the positive contribution π·Σ Re(p_i) in (5.6). The area of sensitivity increase (|S| > 1) exceeds that of sensitivity reduction by an amount proportional to the sum of the distances from the unstable poles to the left-half plane. This is plausible since we might expect to have to pay a price for stabilizing the system.
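Both versions of the integral are easy to verify by quadrature. The sketch below (my own check, not from the book; the stable loop is the L(s) = 1/(s(s+1)) of Figure 5.2, the unstable one L(s) = 4/((s−1)(s+2)) is an assumed example with one RHP-pole at p = 1 and a stable closed loop):

```python
import numpy as np
from scipy.integrate import quad

def log_abs_S(w):
    s = 1j * w
    L = 1 / (s * (s + 1))        # stable, pole excess of two
    return np.log(abs(1 / (1 + L)))

# ln|S| is negative at low frequency, positive at high frequency,
# and for a stable L the two areas cancel exactly: integral = 0
area = quad(log_abs_S, 0, 1)[0] + quad(log_abs_S, 1, np.inf)[0]
print(area)  # approximately 0, per (5.7)

def log_abs_S_unstable(w):
    s = 1j * w
    L = 4 / ((s - 1) * (s + 2))  # RHP-pole at p = 1; closed loop is stable
    return np.log(abs(1 / (1 + L)))

area_u = quad(log_abs_S_unstable, 0, 1)[0] + quad(log_abs_S_unstable, 1, np.inf)[0]
print(area_u / np.pi)  # approximately Re(p) = 1, per (5.6)
```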

RHP-zeros: Second waterbed formula

For plants with RHP-zeros the sensitivity function must satisfy an additional integral relationship, which has stronger implications for the peak of S. Before stating the result, let us illustrate why the presence of a RHP-zero implies that the peak of |S| must exceed one. First, consider the non-minimum phase loop transfer function L(s) = (1−s)/((1+s)(1+s)) and its minimum phase counterpart L_m(s) = 1/(1+s). From Figure 5.3 we see that the additional phase lag contributed by the RHP-zero and the extra pole causes the Nyquist plot to penetrate the unit circle centred on −1, and hence causes the sensitivity function to be larger than one.

Figure 5.3: Additional phase lag contributed by a RHP-zero causes |S| > 1 (Nyquist plots of L_m(s) = 1/(s+1) and L(s) = (1−s)/(s+1)²).

As a further example, consider Figure 5.4, which shows the magnitude of the sensitivity function for the following loop transfer function

    L(s) \;=\; k \, \frac{2 - s}{s (2 + s)}, \qquad k = 0.1, \; 0.5, \; 1.0, \; 2.0     (5.8)

The plant has a RHP-zero at z = 2, and we see that an increase in the controller gain k, corresponding to a higher bandwidth, results in a larger peak for |S|. For k = 2 the closed-loop system becomes unstable with two poles on the imaginary axis, and the peak of |S| is infinite.

Theorem 5.2 Weighted sensitivity integral (Second waterbed formula). Suppose that L(s) has a single real RHP-zero z or a complex conjugate pair of zeros z = x ± jy, and has N_p RHP-poles, p_i. Let p̄_i denote the complex conjugate of p_i. Then for closed-loop stability the sensitivity function must satisfy

    \int_0^{\infty} \ln | S(j\omega) | \cdot w(z, \omega) \, d\omega \;=\; \pi \cdot \ln \prod_{i=1}^{N_p} \left| \frac{ p_i + z }{ \bar{p}_i - z } \right|     (5.9)
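The trend in Figure 5.4 can be reproduced with a short frequency sweep; the sketch below is my illustrative implementation (frequency grid is assumed, gains as in the figure), showing that every peak exceeds 1 and grows with k:

```python
import numpy as np

w = np.logspace(-2, 2, 4000)
s = 1j * w

def peak_S(k):
    L = k * (2 - s) / (s * (2 + s))   # RHP-zero at z = 2
    return np.max(np.abs(1 / (1 + L)))

peaks = [peak_S(k) for k in (0.1, 0.5, 1.0)]
print(peaks)  # each peak exceeds 1 and grows with the gain k
```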

Exercise 5. and a large peak for Ë is unavoidable if we try to push down Ë at low frequencies. where if the zero is real ¾Þ ¾ ½ Þ ¾ · ¾ Þ ½ · ´ Þ µ¾ and if the zero pair is complex (Þ Ü ¦ Ý ) Ü Ü · Û´Þ µ ܾ · ´Ý   µ¾ ܾ · ´Ý · µ¾ Û ´Þ µ (5. This is The weight Û´Þ µ effectively “cuts off” the contribution from ÐÒ Ë to the sensitivity integral Þ . except that the trade-off between Ë less than 1 and Ë larger than 1.10) (5. Explain why this also applies to unstable plants. ½.11) Proof: See Freudenberg and Looze (1985. which ½ .5. Optimal control with state feedback yields a loop transfer function with a pole-zero excess of one so (5. . This is illustrated by the example in Figure 5.1 Kalman inequality The Kalman inequality for optimal state feedback. There are no RHP-zeros when all states are measured so (5. (Solution: 1. 2. Thus.6) does not apply.4: Effect of increased controller gain on Ä´×µ ¾ × × ¾·× Ë for system with RHP-zero at Þ ¾.9) does not apply).7). is done over a limited frequency range.2. Thus. for a stable plant where Ë is reasonably close to ½ at high at frequencies frequencies we have approximately Þ ¼ ÐÒ Ë ´ µ ¼ (5.½ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 10 1 Ë (unstable) ¾¼ Magnitude 10 0 ¼½ ¼ ½¼ 10 −1 10 −2 10 −1 Frequency 10 0 10 1 Figure 5. see Example 9. says that Ë does not conflict with the above sensitivity integrals. 1988). ¾ Ô Note that when there is a RHP-pole close to the RHP-zero (Ô Þ µ then Ô ·Þ  Þ not surprising as such plants are in practice impossible to stabilize.12) This is similar to Bode’s sensitivity integral relationship in (5.4 and further by the example in Figure 5. in this case the waterbed is finite.

and the peak of Ë must be higher happen at frequencies below Þ ÐÒ corresponding to ) (dashed line). we can derive lower bounds on the weighted sensitivity and weighted complementary sensitivity. The basis for these bounds is the interpolation constraints which we discuss first.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ 10 1 ÁÆ ËÁËÇ Ë ËÌ ÅË ½ ˽ Equal areas Nagnitude 10 0 ˾ 10 −1 10 −2 0 1 2 3 4 5 6 Frequency (linear scale) Figure 5.21).13) Ì ´Þ µ (5. The bound for Ë was originally derived by Zames (1981). Ľ ¾ ×´×·½µ (solid line) and ľ The two sensitivity integrals (waterbed formulas) presented above are interesting and provide valuable insights. see (5.4 Sensitivity peaks In Theorem 5. 5. Here we derive explicit bounds on the weighted peak of Ë . which are more useful for analyzing the effects of RHPzeros and RHP-poles.83) and (4. and that the peak will increase if we reduce Ë at other frequencies. The results are based on the interpolation constraints Ë ´Þ µ ½ and Ì ´Ôµ ½ given above.3 Interpolation constraints If Ô is a RHP-pole of the loop transfer function Ä´×µ then Ì ´Ôµ Similarly. which are more useful in applications than the integral relationship. 5.14) These interpolation constraints follow from the requirement of internal stability as shown in (4. if Þ is a RHP-zero of Ä´×µ then ½ ¼ Ë ´Ôµ Ë ´Þ µ ¼ ½ (5. but for Ä this must ¾ . we make use of the maximum .3. we found that a RHP-zero implies that a peak in Ë is inevitable.2.7). but for a quantitative analysis of achievable performance they are less useful. see (5.3. Fortunately.84). however. The conditions clearly restrict the allowable Ë and Ì and prove very useful in the next subsection. In addition.12).5: Sensitivity Ë Ä½  ×· (with RHP-zero at Þ ×· ½ ½·Ä In both cases the areas of Ë below and above 1 (dotted line) are equal. see (5.

modulus principle for complex analytic functions (e.g. see the maximum principle in Churchill et al. (1974)), which for our purposes can be stated as follows:

Maximum modulus principle. Suppose f(s) is stable (i.e. f(s) is analytic in the complex RHP). Then the maximum value of |f(s)| for s in the right-half plane is attained on the region's boundary, i.e. somewhere along the jω-axis. Hence, for a stable f(s) we have

    ‖f(jω)‖∞ = max_ω |f(jω)| ≥ |f(s₀)|   ∀ s₀ ∈ RHP    (5.15)

Remark. (5.15) can be understood by imagining a 3-D plot of |f(s)| as a function of the complex variable s. In such a plot |f(s)| has "peaks" at its poles and "valleys" at its zeros. Thus, if f(s) has no poles (peaks) in the RHP, we find that |f(s)| slopes downwards from the LHP and into the RHP.

To derive the results below we first consider f(s) = w_P(s)S(s) (weighted sensitivity) and f(s) = w_T(s)T(s) (weighted complementary sensitivity). The weights are included to make the results more general, but if required we may of course select w_P(s) = w_T(s) = 1. The weight w_P(s) is as usual assumed to be stable and minimum phase.

For a plant with a RHP-zero z, applying (5.15) to f(s) = w_P(s)S(s) and using the interpolation constraint S(z) = 1 gives ‖w_P S‖∞ ≥ |w_P(z)S(z)| = |w_P(z)|. If the plant also has RHP-poles then the bound is larger, as given in the following theorem:

Theorem 5.3 Weighted sensitivity peaks. For closed-loop stability the weighted sensitivity function must satisfy for each RHP-zero z

    ‖w_P S‖∞ ≥ |w_P(z)| · ∏_{i=1}^{N_p} |z + p̄_i| / |z − p_i|    (5.16)

where p_i denote the N_p RHP-poles of G. If G has no RHP-poles the bound is simply

    ‖w_P S‖∞ ≥ |w_P(z)|    (5.17)

Similarly, the weighted complementary sensitivity function must satisfy for each RHP-pole p

    ‖w_T T‖∞ ≥ |w_T(p)| · ∏_{j=1}^{N_z} |z̄_j + p| / |z_j − p|    (5.18)

where z_j denote the N_z RHP-zeros of G. If G has no RHP-zeros the bound is simply

    ‖w_T T‖∞ ≥ |w_T(p)|    (5.19)

These bounds may be generalized to MIMO systems if the directions of poles and zeros are taken into account; see Chapter 6.
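The bound (5.17) can be illustrated numerically: for any stabilizing controller, the frequency-domain peak of |w_P S| can never fall below |w_P(z)|. A sketch (Python/NumPy; the plant, PI controller and weight parameters below are illustrative choices of mine, and the closed loop they give is stable):

```python
import numpy as np

z, M, wB = 2.0, 2.0, 0.5                      # RHP-zero and weight parameters (illustrative)
wP = lambda s: (s / M + wB) / s               # performance weight with integral action (A = 0)
G  = lambda s: (z - s) / ((s + z) * (s + 1))  # stable plant with RHP-zero at z
K  = lambda s: 0.5 * (1 + 1 / s)              # PI controller; the closed loop is stable
S  = lambda s: 1 / (1 + G(s) * K(s))

w = np.logspace(-4, 4, 20_000)
peak  = np.max(np.abs(wP(1j * w) * S(1j * w)))
bound = abs(wP(z))                            # = |wP(z) S(z)| since S(z) = 1
print(bound, peak)                            # 0.75 and a peak above 0.75
```

Here the bound is |w_P(z)| = (z/M + ω_B*)/z = 0.75, and the gridded peak of |w_P S| indeed exceeds it, as the maximum modulus principle requires.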

Proof of (5.16): The basis for the proof is a "trick" where we first factor out the RHP-zeros of S into an all-pass part (with magnitude 1 at all points on the jω-axis). Since S has RHP-zeros at the plant RHP-poles p_i, we may write

    S = S_a S_m,   S_a(s) = ∏_i (s − p_i)/(s + p̄_i)    (5.20)

Here S_m is the "minimum-phase version" of S with all RHP-zeros mirrored into the LHP, and S_a(s) is all-pass with |S_a(jω)| = 1 at all frequencies. Consider a RHP-zero located at z. From the maximum modulus principle, ‖w_P S‖∞ = max_ω |w_P S_m(jω)| ≥ |w_P(z) S_m(z)|, where S_m(z) = S(z) S_a(z)⁻¹ = S_a(z)⁻¹ = ∏_i (z + p̄_i)/(z − p_i). This proves (5.16). (Remark: There is a technical problem here with jω-axis poles; these must first be moved slightly into the RHP.) The proof of (5.18) is similar; see also the proof of the generalized bound (5.44).

If we select w_P = w_T = 1, we derive the following bounds on the peaks of S and T:

    ‖S‖∞ ≥ max_j ∏_i |z_j + p̄_i| / |z_j − p_i|,   ‖T‖∞ ≥ max_i ∏_j |z̄_j + p_i| / |z_j − p_i|    (5.21)

This shows that large peaks for S and T are unavoidable if we have a RHP-zero and a RHP-pole located close to each other. This is illustrated by examples in Section 5.9.

5.4 Ideal ISE optimal control

Another good way of obtaining insight into performance limitations is to consider an "ideal" controller which is integral square error (ISE) optimal. That is, for a given command r(t) (which is zero for t < 0), the "ideal" controller is the one that generates the plant input u(t) (zero for t < 0) which minimizes

    ISE = ∫₀^∞ |y(t) − r(t)|² dt    (5.22)

This controller is "ideal" in the sense that it may not be realizable in practice, because the cost function includes no penalty on the input u(t). This particular problem is considered in detail by Frank (1968a, 1968b) and Morari and Zafiriou (1989); see also Qiu and Davison (1993), who study "cheap" LQR control.

Morari and Zafiriou show that for stable plants with RHP-zeros at z_j (real or complex) and a time delay θ, the "ideal" response y = Tr when r(t) is a unit step is given by

    T(s) = ∏_j (−s + z_j)/(s + z̄_j) · e^{−θs}    (5.23)

where z̄_j is the complex conjugate of z_j. The corresponding "ideal" sensitivity function is S = 1 − T.

The optimal ISE-values for three simple stable plants are:

    1. G(s) with a time delay θ:                 ISE = θ
    2. G(s) with a real RHP-zero z:              ISE = 2/z
    3. G(s) with complex RHP-zeros z = x ± jy:   ISE = 4x/(x² + y²)

This quantifies nicely the limitations imposed by non-minimum phase behaviour, and the implications in terms of the achievable bandwidth are considered below.

Remark 1 The result in (5.23) is derived by considering an "open-loop" optimization problem, and applies to feedforward as well as feedback control; see Morari and Zafiriou (1989) for details.

Remark 2 The ideal T(s) is "all-pass" with |T(jω)| = 1 at all frequencies. In the feedback case the ideal sensitivity function is |S(jω)| = |L⁻¹(jω)T(jω)| = 1/|L(jω)| at all frequencies.

Remark 3 If r(t) is not a step, then other expressions for the "ideal" T are derived; see Morari and Zafiriou (1989) for details.

5.5 Limitations imposed by time delays

Consider a plant G(s) that contains a time delay e^{−θs} (and no RHP-zeros). Even the "ideal" controller cannot remove this delay: for a step change in the reference r(t), we have to wait a time θ until perfect control is achieved. Thus, the "ideal" complementary sensitivity function will be T = e^{−θs}, as follows from (5.23). The corresponding "ideal" sensitivity function is

    S = 1 − T = 1 − e^{−θs}    (5.24)

The magnitude |S| is plotted in Figure 5.6. At low frequencies, S ≈ θs (by a Taylor series expansion of the exponential), and the low-frequency asymptote of |S(jω)| crosses 1 at a frequency of about 1/θ (the exact frequency where |S(jω)| first crosses 1 in Figure 5.6 is π/(3θ) ≈ 1.05/θ).

Figure 5.6: "Ideal" sensitivity function (5.24) for a plant with delay θ (frequency axis scaled by θ).
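The crossover estimate ω_c ≈ 1/θ is easy to verify directly (a Python/NumPy sketch with an illustrative delay value): |S(jω)| = |1 − e^{−jωθ}| = 2|sin(ωθ/2)|, which first reaches 1 exactly at ωθ = π/3.

```python
import numpy as np

theta = 0.5                                    # time delay (illustrative value)
w = np.linspace(1e-4, 3.0 / theta, 200_000)
S_mag = np.abs(1 - np.exp(-1j * w * theta))    # ideal sensitivity, = 2|sin(w*theta/2)|
wc = w[np.argmax(S_mag >= 1.0)]                # first frequency where |S| reaches 1
print(wc * theta)                              # ~ pi/3 = 1.047, close to the estimate 1/theta
```

The normalized crossover wc·θ is independent of θ, which is why Figure 5.6 is plotted against ω·θ.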

In practice, the "ideal" controller cannot be realized, so we expect this value to provide an approximate upper bound on the achievable gain crossover frequency, namely

    ω_c < 1/θ    (5.25)

This approximate bound is the same as derived in Section 2.6.2, page 38, by considering the limitations imposed on a loop-shaping design by a time delay θ. Since in this case |S| ≈ 1/|L|, we also have that 1/θ is approximately equal to the gain crossover frequency for L.

5.6 Limitations imposed by RHP-zeros

We will here consider plants with a zero z in the closed right-half plane (and no pure time delay). RHP-zeros typically appear when we have competing effects of slow and fast dynamics. For example, the plant

    G(s) = 1/(s + 1) − 2/(s + 10) = (−s + 8)/((s + 1)(s + 10))

has a real RHP-zero at z = 8. We may also have complex zeros, and since these always occur in complex conjugate pairs we have z = x ± jy, where x ≥ 0 for RHP-zeros. In the following we attempt to build up insight into the performance limitations imposed by RHP-zeros, using a number of different results in both the time and frequency domains.

5.6.1 Inverse response

For a stable plant with n_z real RHP-zeros, it may be proven (Holt and Morari, 1985b; Rosenbrock, 1970) that the output in response to a step change in the input will cross zero (its original value) n_z times; that is, we have inverse response behaviour. A typical response for the case with one RHP-zero is shown in Figure 2.14, page 38; the output initially decreases before increasing to its positive steady-state value. With two real RHP-zeros the output will initially increase, then decrease below its original value, and finally increase to its positive steady-state value.
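The zero-crossing result of Holt and Morari is easy to visualize with a closed-form step response. A small sketch (Python/NumPy; the plant G(s) = (1 − s)/(s + 1)², with one real RHP-zero, is my own illustrative choice):

```python
import numpy as np

# Step response of G(s) = (1 - s)/(s + 1)^2, one real RHP-zero at z = 1.
# Inverse Laplace of G(s)/s gives y(t) = 1 - e^{-t} - 2 t e^{-t}.
t = np.linspace(0.0, 15.0, 5000)
y = 1 - np.exp(-t) - 2 * t * np.exp(-t)

crossings = int(np.sum((y[:-1] < 0) & (y[1:] >= 0)))
print(y[1] < 0, crossings, y[-1])  # initial inverse response, one zero crossing, y -> 1
```

With n_z = 1 the response dips negative and crosses its original value exactly once before settling at the positive steady-state value, as the theorem predicts.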
5.6.2 High-gain instability

It is well-known from classical root-locus analysis that, as the feedback gain increases towards infinity, the closed-loop poles migrate to the positions of the open-loop zeros; also see (4.76). Thus, the presence of RHP-zeros implies high-gain instability.

5.6.3 Bandwidth limitation I

For a step change in the reference we have from (5.23) that the "ideal" ISE-optimal complementary sensitivity function T is all-pass, and for a single real RHP-zero the "ideal"

sensitivity function is

    S(s) = 1 − T(s) = 2s/(s + z)    (5.26)

The Bode magnitude plot of |S| (= 1/|L|) is shown in Figure 5.7(a). The low-frequency asymptote of |S(jω)| crosses 1 at the frequency z/2, and we derive (for a real RHP-zero) the approximate requirement

    ω_c < z/2    (5.27)

which we also derived on page 45 using loop-shaping arguments. The bound (5.27) is also consistent with the bound ω_c < 1/θ in (5.25) for a time delay. This is seen from the Padé approximation of a delay, e^{−θs} ≈ (1 − (θ/2)s)/(1 + (θ/2)s), which has a RHP-zero at z = 2/θ.

For a complex pair of RHP-zeros, z = x ± jy, we get from (5.23) the "ideal" sensitivity function

    S(s) = 4xs / ((s + x + jy)(s + x − jy))    (5.28)

In practice, the "ideal" ISE optimal controller cannot be realized.

Figure 5.7: "Ideal" sensitivity functions for plants with RHP-zeros: (a) real RHP-zero (frequency axis scaled by 1/z); (b) complex pair of RHP-zeros z = x ± jy, for y/x = 0.1, 1, 10 and 50 (frequency axis scaled by 1/x).
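Both the asymptote crossing and the exact unity crossing of (5.26) are easy to check (Python/NumPy sketch, with an illustrative value of z): |S(jω)| = 2ω/√(ω² + z²), whose low-frequency asymptote 2ω/z crosses 1 at ω = z/2, while |S| itself reaches 1 at ω = z/√3.

```python
import numpy as np

z = 2.0                                  # real RHP-zero (illustrative)
w = np.linspace(1e-4, 2 * z, 400_000)
S_mag = 2 * w / np.sqrt(w**2 + z**2)     # |S(jw)| for S(s) = 2s/(s + z)

w_asym  = z / 2                          # where the asymptote 2w/z reaches 1
w_exact = w[np.argmax(S_mag >= 1.0)]     # where |S| itself reaches 1, = z/sqrt(3)
print(w_asym, w_exact)                   # 1.0 and ~1.155
```

Both crossings scale linearly with z, which is why the bound (5.27) is stated in terms of z alone.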

In Figure 5.7(b) we plot |S| in (5.28) for y/x equal to 0.1, 1, 10 and 50. An analysis of (5.28) and the figure yields the following approximate bounds:

    ω_c ≲ |z|/4    for Re(z) ≫ Im(z)  (y/x ≪ 1)
    ω_c ≲ |z|/2.8  for Re(z) ≈ Im(z)  (y/x ≈ 1)
    ω_c ≲ |z|      for Re(z) ≪ Im(z)  (y/x ≫ 1)    (5.29)

In summary, RHP-zeros located close to the origin (with |z| small) are bad for control, and it is worse for them to be located closer to the real axis than the imaginary axis.

Remark. For a zero located on the imaginary axis (x → 0), we notice from (5.28) that the resonance peak of |S| at ω ≈ y becomes increasingly "thin" as the zero approaches the imaginary axis: the ideal sensitivity function is zero at all frequencies, except for a single "spike" at ω = y where it jumps up to 2. The integral under the curve for |S(jω)|² thus approaches zero, as does the ideal ISE-value in response to a step in the reference, 4x/(x² + y²). This indicates that purely imaginary zeros do not always impose limitations. This is also confirmed by the flexible structure in Example 2.1 on page 165, where acceptable control is possible even though the plant has a pair of imaginary zeros. However, the flexible structure is a rather special case, where the plant also has imaginary poles which counteract most of the effect of the imaginary zeros. In other cases, the presence of imaginary zeros may limit achievable performance, for example in the presence of uncertainty, which makes it difficult to place poles in the controller to counteract the zeros.

5.6.4 Bandwidth limitation II

Another way of deriving a bandwidth limitation is to use the interpolation constraint S(z) = 1 and consider the bound (5.17) on weighted sensitivity in Theorem 5.3. As usual, we select 1/|w_P| as an upper bound on the sensitivity function; that is, we require

    |S(jω)| < 1/|w_P(jω)| ∀ω  ⟺  ‖w_P S‖∞ < 1    (5.30)

However, from (5.17) we have ‖w_P S‖∞ ≥ |w_P(z)|, so to be able to satisfy (5.30) we must at least require that the weight satisfies

    |w_P(z)| < 1    (5.31)

(We say "at least" because condition (5.17) is not an equality.) We will now use (5.31) to gain insight into the limitations imposed by RHP-zeros: first by considering (A) a weight that requires good performance at low frequencies, and then by (B) considering a weight that requires good performance at high frequencies. The idea is to select a form for the performance weight w_P(s),
and then to derive a bound for the “bandwidth parameter” in the weight. see does the ideal ISE-value in response to a step in the reference.30) ÛÈ ´Þ µË ´Þ µ However. as Ü ´Ü¾ · Ý ¾ µ.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ ÁÆ ËÁËÇ Ë ËÌ ÅË ½ In Figure 5.3. that is. and then by (B) considering a weight that requires good performance at high frequencies. Therefore.28) and Figure 5. For a complex pair of zeros.7(b) we plot Ë for Ý Ü equal to 0. even though the plant has a pair of imaginary zeros. Thus.7 Remark. first by considering (A) a weight that requires good performance at low frequencies. An analysis of (5. Ü Ý . We will now use (5. . in other cases.

A. Performance at low frequencies

Consider the following performance weight:

    w_P(s) = (s/M + ω_B*) / (s + ω_B* A)    (5.32)

It specifies a maximum peak of |S| less than M, a steady-state offset less than A < 1, and, at frequencies lower than the bandwidth, the sensitivity is required to improve by at least 20 dB/decade (i.e. |S| has slope 1 or larger on a log-log plot). From (5.30), the weight specifies a minimum bandwidth ω_B* (actually, ω_B* is the frequency where the straight-line approximation of the weight crosses 1).

If the plant has a RHP-zero at s = z, then to be able to satisfy (5.31) we must at least require

    |w_P(z)| = |z/M + ω_B*| / |z + ω_B* A| < 1    (5.33)

Real zero. Consider the case when z is real. Then all variables are real and positive, and (5.33) is equivalent to

    ω_B* < z (1 − 1/M) / (1 − A)    (5.34)

For example, with A = 0 (no steady-state offset) and M = 2 we must require ω_B* < 0.5z, which is consistent with the requirement ω_c < z/2 in (5.27).

Imaginary zero. For a RHP-zero on the imaginary axis, z = j|z|, a similar derivation yields (with A = 0)

    ω_B* < |z| √(1 − 1/M²)    (5.35)

For example, with M = 2 we require ω_B* < 0.86|z|, which is consistent with the requirement ω_c < |z| given in (5.29).

The next two exercises show that the bound on ω_B* does not depend much on the slope of the weight at low frequencies, or on how the weight behaves at high frequencies.

Exercise 5.2 Consider a weight with a second-order slope at low frequencies,

    w_P(s) = (s/M^{1/2} + ω_B*)² / (s + ω_B* A^{1/2})²

Plot the weight for M = 2, and derive an upper bound on ω_B* when the plant has a RHP-zero at z.

Exercise 5.3 Consider the weight w_P(s) = 1/M + (ω_B*/s)^n, which requires |S| to have a slope of n at low frequencies and requires its low-frequency asymptote to cross 1 at the frequency ω_B*. Note that n = 1 yields the weight (5.32) with A = 0. Derive an upper bound on ω_B* for the case with M = 2 when the plant has a real RHP-zero at z. Show that the bound becomes ω_B* < z(1 − 1/M)^{1/n}, which approaches z as n → ∞.
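The algebra behind (5.34) can be replicated numerically. A sketch (Python/NumPy; the values of z, M and A are illustrative): with the weight (5.32) and A = 0, the necessary condition |w_P(z)| < 1 caps the bandwidth parameter at ω_B* = z(1 − 1/M), i.e. ω_B* < 0.5z for M = 2.

```python
import numpy as np

def wP(z, M, wB, A):
    # weight (5.32) evaluated at the RHP-zero z
    return (z / M + wB) / (z + wB * A)

z, M, A = 2.0, 2.0, 0.0
wB_grid = np.linspace(1e-4, z, 100_000)
feasible = wB_grid[np.abs(wP(z, M, wB_grid, A)) < 1.0]
print(feasible.max())   # ~ z*(1 - 1/M) = 1.0, i.e. 0.5z for M = 2
```

The same grid search with other values of M or A reproduces (5.34) directly.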

This is discussed next.

Remark. The result for n → ∞ in Exercise 5.3 is a bit surprising. It says that the bound ω_B* < z is independent of the required slope (n) at low frequency, and is also independent of M. This is surprising since, from Bode's integral relationship (5.6), we expect to pay something for having the sensitivity smaller at low frequencies, so we would expect the bound on ω_B* to be smaller for larger n. In any case, condition (5.31) is only a necessary condition on the weight (i.e. the weight must at least satisfy it), and since it is not sufficient it can be optimistic. For the simple weight (5.32), with n = 1, condition (5.31) is not very optimistic (as is confirmed by other results), but apparently it is optimistic for n large.

B. Performance at high frequencies

We have so far implicitly assumed that we want tight control at low frequencies, and we have shown that the presence of a RHP-zero then imposes an upper bound on the achievable bandwidth. Here, we instead consider a case where we want tight control at high frequencies, by use of the performance weight

    w_P(s) = 1/M + s/ω_B*    (5.37)

This requires tight control (|S(jω)| < 1) at frequencies higher than ω_B*, whereas the only requirement at low frequencies is that the peak of |S| is less than M. Admittedly, the weight (5.37) is unrealistic in that it requires |S| → 0 at high frequencies, so the result may be optimistic; but the result is confirmed in Exercise 5.5, where a more realistic weight is studied. To satisfy ‖w_P S‖∞ < 1 with a real RHP-zero z, we derive for the weight in (5.37)

    ω_B* > z / (1 − 1/M)    (5.38)

For example, with M = 2 the requirement is ω_B* > 2z, so we can only achieve tight control at frequencies beyond the frequency of the RHP-zero.

Exercise 5.4 Draw an asymptotic magnitude Bode-plot of 1/|w_P(s)| for w_P in (5.37).

Exercise 5.5 Consider the case of a plant with a RHP-zero where we want to limit the sensitivity function over some frequency range. To this effect let

    w_P(s) = (1000 s/ω_B* + 1/M)(s/(M ω_B*) + 1) / ((10 s/ω_B* + 1)(100 s/ω_B* + 1))    (5.39)

This weight is equal to 1/M at low and high frequencies, has a maximum value of about 10 at intermediate frequencies, and its asymptote crosses 1 at the frequencies ω_B*/1000 and ω_B*. Thus we require "tight" control, |S| < 1, in the frequency range between ω_B*/1000 and ω_B*.
a) Make a sketch of 1/|w_P| (which provides an upper bound on |S|).
b) Show that the RHP-zero cannot be in the frequency range where we require tight control. To see this, select M = 2 and evaluate |w_P(z)| for various values of ω_B* (e.g. 0.1, 1, 10, 100, 1000, 2000, 10000). (You will find that |w_P(z)| < 1 only when the tight-control range lies entirely below the zero, corresponding to the requirement ω_B* < z/2, or entirely above it, corresponding to the requirement ω_B*/1000 > 2z.)

5.6.5 RHP-zero: limitations at low or high frequencies

Based on (5.34) and (5.38) we see that a RHP-zero will pose control limitations either at low or high frequencies, and that we can achieve tight control either at frequencies below about z/2 (the usual case) or at frequencies above about 2z. In most cases we desire tight control at low frequencies, and with a RHP-zero this may be achieved at frequencies lower than about z/2. However, if we instead want tight control at high frequencies (and have no requirements at low frequencies), then we base the controller design on the plant's initial response, where the gain is reversed because of the inverse response, and we instead achieve tight control at frequencies higher than about 2z.

Normally, we want tight control at low frequencies, and the sign of the controller is then based on the steady-state gain of the plant. However, if we do not need tight control at low frequencies, then we may usually reverse the sign of the controller gain. The reversal of the sign in the controller is probably best understood by considering the inverse response behaviour of a plant with a RHP-zero.

Example 5.1 To illustrate this, consider in Figures 5.8 and 5.9 the use of negative and positive feedback for the plant

    G(s) = (−s + z)/(s + z),   z = 1    (5.40)

Note that G(s) ≈ 1 at low frequencies (ω < z), whereas G(s) ≈ −1 at high frequencies (ω > z). In both cases we show in the figures the sensitivity function and the time response to a step change in the reference, using:

1. PI control with negative feedback (Figure 5.8): K₁(s) = K (s + 1)/s · 1/(0.05s + 1).

Figure 5.8: Control of plant with RHP-zero at z = 1, G(s) = (−s + 1)/(s + 1), using PI control with negative feedback, for various gains K: (a) sensitivity function; (b) response to step in reference.

2. Derivative control with positive feedback (Figure 5.9): K₂(s) = K s/((0.05s + 1)(0.02s + 1)).

Figure 5.9: Control of plant with RHP-zero at z = 1, G(s) = (−s + 1)/(s + 1), using derivative control with positive feedback: (a) sensitivity function; (b) response to step in reference.

Note that the time scales for the two simulations are different: for positive feedback the step change in the reference only has a duration of 0.1 s. This is because we cannot track references over longer times than this, since the RHP-zero then causes the output to start drifting away (as can be seen in Figure 5.9(b)). The negative plant gain at high frequencies explains why we then use positive feedback (K₂) in order to achieve tight control at high frequencies. With positive feedback, good transient control is possible, but the control has no effect at steady-state. The only way to achieve tight control at low frequencies is to use an additional actuator (input), as is often done in practice.

Remark (Short-term control). In this book, we generally assume that the system behaviour as t → ∞ is important. However, this is not true in some cases, because the system may only be under closed-loop control for a finite time t_f. In such cases, the presence of a "slow" RHP-zero (with |z| small) may not be significant, provided t_f is sufficiently short compared to 1/z. For example, in Figure 5.9(b) the total control time is only t_f = 0.1 [s], so the RHP-zero at z = 1 [rad/s] is insignificant. An important related case, where we can only achieve tight control at high frequencies, is characterized by plants with a zero at the origin, for example G(s) = s/(τs + 1).

As an example of short-term control, consider treating a patient with some medication. Let u be the dosage of medication and y the condition of the patient. With most medications we find that in the short term the treatment has a positive effect, whereas in the

for example. one may encounter problems with input constraints at low frequencies (because the steady-state gain is small). we use simple PI.9 and discuss in light of the above remark.6 LHP-zeros Zeros in the left-half plane. 5.6. We discuss this in Chapter 7.6 (a) Plot the plant input ٴص corresponding to Figure 5. However.66) contains no adjustable poles that can be used to counteract the effect of a LHP-zero.37). Second. this inverse-response behaviour (characteristic of a plant with a RHP-zero) may be largely neglected during limited treatment. zeros can cross from the LHP into the RHP both through zero (which is worst if we want tight control at low frequencies) or through infinity. i. by letting the weight have a steeper slope at the crossover near the RHP-zero. Use performance weights in the form given by (5.9 with à ¼ .9. (b) In the simulations in Figures 5.32) and ½¼¼¼ and Å ¾ in (5. a controller which uses information about the future. and non-causal if its outputs also depend . A brief discussion is given here. Such a controller may be used if we have knowledge about controller ÃÖ future changes in ִص or ´Øµ. Exercise 5. but non-causal controllers are not considered in the rest of the book since our focus is on feedback control. although one may find that the dosage has to be increased during the treatment to have the desired effect. which shows the input ٴص using an internally unstable controller which over some finite time may eliminate the effect of the RHP-zero. but in practice a LHP-zero located close to the origin may cause problems. First.and derivative controllers. respectively. e. For uncertain plants. As an alternative use the Ë ÃË method in (3. a simple PID controller as in (5. For a delay   × we may achieve perfect control with a non-causal feedforward × (a prediction). 
do not present a fundamental limitation on control.½¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ long-term the treatment has a negative effect (due to side effects which may eventually lead to death). For example.10. Interestingly. Try to improve the response. the last point is illustrated by the upper left curve in Figure 5. in robotics and for product changeovers in chemical plants. ½ A system is causal if its outputs depend only on past inputs.59) to synthesize ½ controllers for both the negative and positive feedback cases. Time delay.7 RHP-zeros amd non-causal controllers Perfect control can actually be achieved for a plant with a time delay or RHP-zero if we use a non-causal controller1 . usually corresponding to “overshoots” in the time response. a simple controller can probably not then be used.8 and 5. With £ ÃË ) you will find that the time response is quite similar to that in Figure 5. À 5. on future inputs.g.37) and ÛÙ ½ (for the weight on (5.e. This may be relevant for certain servo problems.

For example, if we know that we should be at work at 8:00, and we know that it takes 30 min to get to work, then we make a prediction and leave home at 7:30. (We don't wait until 8:00, when we are suddenly told, e.g. by the appearance of a step change in our reference position, that we should be at work.)

RHP-zero. Future knowledge can also be used to give perfect control in the presence of a RHP-zero. As an example, consider a plant with a real RHP-zero given by

    G(s) = (−s + z)/(s + z),   z > 0    (5.41)

and a desired reference change

    r(t) = 0 for t < 0;   r(t) = 1 for t ≥ 0

With a feedforward controller K_r, the response from r to y is y = G(s)K_r(s)r. In theory we may achieve perfect control (y(t) = r(t)) with the following two controllers (e.g. Eaton and Rawlings (1992)):

1. A causal unstable feedback controller K_r(s) = (s + z)/(−s + z). For a step in r from 0 to 1 at t = 0, this controller generates the input signal

    u(t) = 0 for t < 0;   u(t) = 1 − 2e^{zt} for t ≥ 0

Figure 5.10: Feedforward control of plant with RHP-zero: plant inputs u(t) (left) and outputs y(t) (right) for the unstable controller (top), the non-causal controller (middle) and the stable causal controller (bottom).
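One can verify by simulation that the input u(t) = 1 − 2e^{zt} does give perfect tracking, at the price of an unbounded input. A sketch (Python/NumPy, forward-Euler integration; the value of z and the step sizes are my own choices): write G(s) = (z − s)/(s + z) = −1 + 2z/(s + z), i.e. a feedthrough of −1 plus a first-order state.

```python
import numpy as np

z, dt, T = 0.5, 1e-4, 5.0
t = np.arange(0.0, T, dt)
u = 1 - 2 * np.exp(z * t)       # input generated by the unstable controller
x = 0.0                          # state of the first-order part 2z/(s + z)
y = np.zeros_like(t)
for k, uk in enumerate(u):
    y[k] = 2 * z * x - uk        # G(s) = (z - s)/(s + z) = -1 + 2z/(s + z)
    x += dt * (-z * x + uk)      # forward-Euler step
print(y[0], y[-1], u[-1])        # y stays at 1 (perfect tracking); u diverges
```

The output equals 1 for all t > 0 (within integration error), while u(t) grows without bound: the internal instability is hidden from the output but not from the input.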

However, since this controller cancels the RHP-zero in the plant, it yields an internally unstable system.

2. A stable non-causal (feedforward) controller that assumes that the future setpoint change is known. This controller cannot be represented in the usual transfer function form, but it will generate the input

    u(t) = 2e^{zt} for t < 0;   u(t) = 1 for t ≥ 0

These input signals u(t) and the corresponding outputs y(t) are shown in Figure 5.10 for z = 1. Note that for perfect control the non-causal controller needs to start changing the input at t = −∞; for practical reasons the simulation was started at t = −5, where u(−5) = 2e⁻⁵ ≈ 0.013.

3. A stable causal controller. The ideal causal feedforward controller, in terms of minimizing the ISE (2-norm) of y(t) − r(t) for the plant in (5.41), is K_r = 1, which gives the "ideal" ISE optimal response y = Tr with T as in (5.23); the corresponding plant input and output responses are shown in the lower plots in Figure 5.10.

The first option, the unstable controller, is not acceptable, as it yields an internally unstable system in which u(t) goes to infinity as t increases (an exception may be if we want to control the system only over a limited time t_f; see page 179). The second option, the non-causal controller, is usually not possible, because future setpoint changes are unknown. However, if we have such information, it is certainly beneficial for plants with RHP-zeros. In most cases we therefore have to accept the poor performance resulting from the RHP-zero and use a stable causal controller.

5.8 Limitations imposed by unstable (RHP) poles

We here consider the limitations imposed when the plant has an unstable (RHP) pole at s = p; for example, the plant G(s) = 1/(s − 3) has a RHP-pole with p = 3. For unstable plants we need feedback for stabilization. More precisely, the presence of an unstable pole p requires for internal stability T(p) = 1, which imposes the following two limitations:

RHP-pole Limitation 1 (input usage). For stabilization we need to manipulate the plant inputs u, and the transfer function KS from plant outputs to plant inputs must always satisfy (Havre and Skogestad, 1997; Havre and Skogestad, 2001)

    ‖KS‖∞ ≥ |G_s(p)⁻¹|    (5.42)

where G_s is the "stable version" of G with its RHP-poles mirrored into the LHP, i.e.

    G(s) = G_s(s) · (s + p)/(s − p)

where z_j denote the (possible) N_z RHP-zeros of G, and where V_ms is the "minimum-phase and stable version" of V, with its (possible) RHP-poles and RHP-zeros mirrored into the LHP. (If G has no RHP-zeros then the bound is simply ‖V T‖∞ ≥ |V_ms(p)|.) The bound (5.44) is tight (equality) for the case when G has only one RHP-pole.

Theorem 5.4 Let V·T be a (weighted) closed-loop transfer function, where T is the complementary sensitivity. Then for closed-loop stability we must require for each RHP-pole p of G

    ‖V T‖∞ ≥ |V_ms(p)| · ∏_{j=1}^{N_z} |z̄_j + p| / |z_j − p|    (5.44)

Proof: Since G has RHP-zeros at z_j, T must also have RHP-zeros at z_j, so write T = T_a T_m with the all-pass part T_a(s) = ∏_j (s − z_j)/(s + z̄_j). Then, using the interpolation constraint T(p) = 1 and the maximum modulus principle,

    ‖V T‖∞ = ‖V_ms T_m‖∞ ≥ |V_ms(p) T_m(p)| = |V_ms(p) T(p) T_a(p)⁻¹| = |V_ms(p)| ∏_j |z̄_j + p| / |z_j − p|

which proves (5.44).

Proof of Limitation 1 (input usage): Using (5.44) we may easily prove (5.42). Make use of the identity KS = G⁻¹T and apply (5.44) with V = G⁻¹, which gives

    ‖KS‖∞ ≥ |G_ms(p)⁻¹| ∏_j |z̄_j + p| / |z_j − p| = |G_s(p)⁻¹|

Here the latter equality follows since G_s(s) = G_ms(s) ∏_j (s − z_j)/(s + z̄_j). For example, for the plant G(s) = 1/(s − 3) we have G_s = 1/(s + 3), and the lower bound on the peak of |KS| is |G_s(p)|⁻¹ = 3 + 3 = 6.

Example 5.2 Consider a plant with its RHP-pole at p = 3, G(s) = k/((10s + 1)(s − 3)(0.2s + 1)). The stable version is G_s(s) = k/((10s + 1)(s + 3)(0.2s + 1)), and evaluating (5.42) at p = 3 gives ‖KS‖∞ ≥ |G_s(3)⁻¹| = (31 · 6 · 1.6)/k, so a small plant gain k implies large input usage for stabilization.

Thus, since u = −KS(G_d d + n), the bound (5.42) implies that stabilization may be impossible in the presence of disturbances d or measurement noise n, since the required inputs u may then be outside the saturation limits. When the inputs saturate, the system is practically open-loop and stabilization is impossible; see also page 191.

In summary, RHP-poles mainly impose limitations on the plant inputs, whereas RHP-zeros impose limitations on the plant outputs.

RHP-pole Limitation 2 (bandwidth). We need to react sufficiently fast. We derive this from the bound ‖w_T T‖∞ ≥ |w_T(p)| in (5.19): for a real RHP-pole p we must require, approximately, that the closed-loop bandwidth is larger than 2p. Thus, whereas the presence of RHP-zeros usually places an upper bound on the allowed bandwidth, the presence of RHP-poles generally imposes a lower bound.
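For the scalar example G(s) = 1/(s − 3), the bound (5.42) gives ‖KS‖∞ ≥ |G_s(p)|⁻¹ = 6, and one can check that a particular stabilizing controller respects it. A sketch (Python/NumPy; the constant gain K = 6.5 is an illustrative choice that places the closed-loop pole in the LHP):

```python
import numpy as np

p = 3.0
Gs = lambda s: 1.0 / (s + p)         # "stable version" of G(s) = 1/(s - p)
bound = 1.0 / abs(Gs(p))             # |Gs(p)|^{-1} = 2p = 6

K = 6.5                              # stabilizing gain: closed-loop pole at -(K - p)
s = 1j * np.logspace(-3, 4, 20_000)
KS = K / (1 + K / (s - p))           # K*S = K/(1 + G K) on the jw-axis
peak = np.max(np.abs(KS))
print(bound, peak)                   # 6.0 and ~6.5: the bound is respected
```

Pushing K closer to the minimum stabilizing gain does not help: the high-frequency value of |KS| equals K, so the peak can approach but never fall below 6.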
Thus.43) ÃË This gives ½ × ´Ôµ  ½ Ñ× ´Ôµ ÃË ½ ½¼× · ½µ´× · ¿µ ¼ ´× · ¿µ´¼ ¾× · ½µ × ¿ ¿½ ¡ ¼ ¡½ ½ RHP-pole Limitation 2 (bandwidth).4 Let Î Ì be a (weighted) closed-loop transfer function where Ì is complementary sensitivity. since the required inputs Ù may be outside the saturation limit. We need to react sufficiently fast. whereas the We derive thos from the bound ÛÌ Ì ½ presence of RHP-zeros usually places an upper bound on the allowed bandwidth. ÆÞ Þ ·Ô Î Ì ½ ÎÑ× ´Ôµ (5.42). the bounds (5. note that Î Ì ½ ÎÑ× ÌÑ× ½ ÎÑ× ÌÑ ½ .

with upper bound ½ ÛÌ ½ ÛÌ is a reasonable upper bound on the Ì´ µ ½ ÛÌ ´ µ ¸ ÛÌ Ì ½ ½ To satisfy this we must.½ ÅÍÄÌÁÎ ÊÁ Ñ× ´×µ ¡ Ä Ã ÇÆÌÊÇÄ which proves (5.47) Ô ÅÌ ÅÌ   ½ £Ì ¾Ô (5.19) ÛÌ Ì ½ ÛÌ ´Ôµ . Thus. and that Ì drops below 1 at frequency £ Ì .45) which requires Ì (like Ä ) to have a roll-off rate of at least ½ at high frequencies (which must be satisfied for any real system).11.11: Typical complementary sensitivity. Here the latter equality follows since × ´×µ × Þ ×·Þ ¾ Proof of Limitation 2 (bandwidth): ÅÌ Magnitude 10 0 £Ì ½ ÛÌ Ì 10 −1 Frequency [rad/s] Figure 5. and we have approximately ¾Ô (5. that is. For a real RHP-pole at × Ô condition ÛÌ ´Ôµ ½ yields £ Ì Ê Ð ÊÀÈ   ÔÓÐ ¾ (reasonable robustness) this gives which proves the above With ÅÌ bandwidth requirement. the presence of the RHP-pole puts a lower limit on the bandwidth in terms of Ì . since from (5. we cannot let the system roll-off at frequencies lower than about ¾Ô. The requirements on Ì are shown graphically in Figure 5.46) . that Ì is less than ÅÌ at low frequencies. We start by selecting a weight ÛÌ ´×µ such that complementary sensitivity function. at least require that the weight satisfies ÛÌ ´Ôµ ½ Now consider the following weight ÛÌ ´×µ ½ × £ Ì · ÅÌ (5. Ì .42).

but this plant is not strictly proper. The corresponding sensitivity and complementary sensitivity functions. shows that we must at least require Ô a similar analysis of the weight (5. The software gives a stable and performance weight (5. Note that the presence of any complex RHP-poles or complex RHP-zeros does not affect this result. ´× Note the requirement that ´×µ is strictly proper. this may require an unstable controller.7 Derive the bound £Ì ½½ Ô for a purely complex pole. £ ½. e. Å minimum phase controller with an ½ norm of ½ . For example. In words. If such a controller exists the plant is said to be strongly stabilizable.g. However. Example 5. The following example for a plant with Þ acceptable performance when the RHP-pole and zero are located this close. and for a real RHP-pole the approximate approximate bound ¾Ô (with ÅÌ ¾). the presence of RHP-zeros (or time delays) make stabilzation more difficult. In theory. and for practical purposes it is sometimes desirable that the controller is stable.48) We use the Ë ÃË design method as in Example 2. provided the plant does not contain unstable hidden modes.45) with £Ì ½ ½ Ô . and the time response to a unit step reference change À . We want to design an and Ô ½. “the system may go unstable before we have time to react”. We then have: ¯   A strictly proper plant with a single real RHP-zero Þ and a single real RHP-pole Ô.9 Combined unstable (RHP) poles and zeros Stabilization.11 with input weight ÏÙ ½ and ¾. any linear plant may be stabilized irrespective of the location of its RHP-poles and RHP-zeros. the RHP-zero must be located a bit further away from the RHP-pole. the plant ´×µ ½µ ´× ¾µ with Þ ½ Ô ¾ is stabilized with a stable constant gain controller with ¾ à ½. For a plant with a single RHP-pole and RHP-zero. It has been proved by Youla et al.3 ½ design for plant with RHP-pole and RHP-zero. 
(1974) that a strictly proper SISO plant is strongly stabilizable by a proper controller if and only if every real RHP-zero in ´×µ lies to the left of an even number (including zero) of real RHP-poles in ´×µ. However. Above we derived for a real RHP-zero the Þ ¾ (with ÅË ¾). ½ controller for a plant with Þ À À ´× µ ×  ´×   ½µ´¼ ½× · ½µ (5. ¾ Exercise 5. can be stabilized by a stable proper controller if and only if Þ Ô.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ ÁÆ ËÁËÇ Ë ËÌ ÅË ½ For a purely imaginary pole located at Ô ÅÌ ¾.       In summary.   ´×µ ´× Ô×µ´ Þ×·½µ .32) with =0. the strong stabilizability requirement is Þ Ô. in order to achieve acceptable performance and robustness. 5. This indicates that for a system with a single real RHP-pole bound Ô in order to get acceptable performance and a RHP-zero we must approximately require Þ Ô shows that we can indeed get and robustness.

The corresponding sensitivity and complementary sensitivity functions, and the time response to a unit step reference change, are shown in Figure 5.12. The time response is good, taking into account the closeness of the RHP-pole and zero.

Figure 5.12: H∞ design for a plant with RHP-zero at z = 4 and RHP-pole at p = 1: (a) |S| and |T| as functions of frequency [rad/s]; (b) response y(t) and input u(t) to a step in the reference.

Sensitivity peaks. In Theorem 5.3 we derived lower bounds on the weighted sensitivity and complementary sensitivity. In particular, for a plant with a single real RHP-pole p and a single real RHP-zero z we always have, for any controller,

    ||S||∞ ≥ (z + p)/|z − p|,   ||T||∞ ≥ (z + p)/|z − p|        (5.49)

Example 5.4 Consider the plant in (5.48) with z = 4 and p = 1. From (5.49) it follows that for any controller we must at least have ||S||∞ ≥ 1.67 and ||T||∞ ≥ 1.67. The actual peak values for the above S/KS design are somewhat larger than this lower bound.

Example 5.5 Balancing a rod. This example is taken from Doyle et al. (1992). Consider the problem of balancing a rod in the palm of one's hand. The objective is to keep the rod upright by small hand movements, based on observing the rod either at its far end (output y1) or at the end in one's hand (output y2). Here l [m] is the length of the rod and m [kg] its mass; M [kg] is the mass of your hand and g [≈ 10 m/s²] is the acceleration due to gravity. The linearized transfer functions for the two cases are

    G1(s) = −g / ( s² ( M l s² − (M + m) g ) ),   G2(s) = ( l s² − g ) / ( s² ( M l s² − (M + m) g ) )

In both cases the plant has three unstable poles: two at the origin and one at p = sqrt( (M + m) g / (M l) ). A short rod with a large mass gives a large value of p, and this in turn means that the system is more difficult to stabilize. For example, with M = m and l = 1 [m] we get p ≈ 4.5 [rad/s], and from (5.47) we desire a bandwidth of about 2p (corresponding to a response time of about 0.1 [s]). If one is measuring y1 (looking at the far end of the rod), then achieving this bandwidth is the main requirement.
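The pole-zero geometry of the rod example can be checked numerically. A minimal sketch (g = 9.81 m/s² is assumed here rather than the g ≈ 10 of the text; the ratio p/z and the peak bound are insensitive to this choice):

```python
import math

def rod_pole(M, m, l, g=9.81):
    """Unstable pole p = sqrt((M+m)g/(Ml)) of the linearized rod model."""
    return math.sqrt((M + m) * g / (M * l))

def rod_zero(l, g=9.81):
    """RHP-zero z = sqrt(g/l) of G2 (its numerator l*s^2 - g vanishes there)."""
    return math.sqrt(g / l)

def peak_lower_bound(z, p):
    """Bound (5.49): ||S||inf and ||T||inf >= (z+p)/|z-p|."""
    return (z + p) / abs(z - p)

M, m, l = 1.0, 0.1, 1.0                 # light rod with m = 0.1 M
p, z = rod_pole(M, m, l), rod_zero(l)
print(p / z)                             # sqrt(1 + m/M) ~ 1.05
print(peak_lower_bound(z, p))            # ~ 42
```

The small separation between the pole and the zero is what drives the enormous sensitivity peaks discussed below.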

However, if one tries to balance the rod by looking at one's hand (y2), there is also a RHP-zero at z = sqrt(g/l), since the numerator of G2 vanishes there. We obtain

    p/z = sqrt( (M + m)/M ) = sqrt( 1 + m/M )

If the mass of the rod is small (m/M is small), then p is close to z, and stabilization is in practice impossible with any controller. Even with a large mass, stabilization is very difficult, because p > z, whereas we would normally prefer to have the RHP-zero far from the origin and the RHP-pole close to the origin (z > p). To quantify these problems, use (5.49). Consider a light-weight rod with m/M = 0.1, for which we expect stabilization to be difficult. We obtain p/z = sqrt(1.1) ≈ 1.05, and from (5.49) we get, for any controller,

    ||S||∞ > 42  and  ||T||∞ > 42

so poor control performance is inevitable if we try to balance the rod by looking at our hand (y2). Thus, although in theory the rod may be stabilized by looking at one's hand, it seems doubtful that this is possible for a human. The difference between the two cases, measuring y1 and measuring y2, highlights the importance of sensor location on the achievable performance of control.
5.10 Performance requirements imposed by disturbances and commands

The question we here want to answer is: how fast must the control system be in order to reject disturbances and track commands of a given magnitude? We find that some plants have better "built-in" disturbance rejection capabilities than others. This may be analyzed directly by considering the appropriately scaled disturbance model, Gd(s). Similarly, for tracking we may consider the magnitude R of the reference change.

Disturbance rejection. Consider a single disturbance d, and assume that the reference is constant, r = 0. Without control, the steady-state sinusoidal response is e(ω) = y(ω) = Gd(jω) d(ω). If the variables have been scaled as outlined in Section 1.4, then the worst-case disturbance at any frequency is |d(ω)| = 1, and the control objective is that at each frequency |e(ω)| < 1. From this we can immediately conclude that

• no control is needed if |Gd(jω)| < 1 at all frequencies (in which case the plant is said to be "self-regulated");

• if |Gd(jω)| > 1 at some frequency, then we need control (feedforward or feedback).

In the following we consider feedback control, in which case e(s) = S(s) Gd(s) d(s). The performance requirement |e(ω)| < 1 for any |d(ω)| ≤ 1 at any frequency is satisfied if and only if

    |S Gd(jω)| < 1  ∀ω   ⟺   |S(jω)| < 1/|Gd(jω)|  ∀ω        (5.51)

A plant with a small |Gd| or a small ωd is preferable, since the need for feedback control is then less; alternatively, for a given feedback controller (which fixes S), the effect of disturbances on the output is less.

From (5.52) we also get that the frequency ωd where |Gd| first crosses 1 from above yields a lower bound on the bandwidth:

    ωB > ωd,  where ωd is defined by |Gd(jωd)| = 1

The actual bandwidth requirement imposed by disturbances may be higher than ωd if |Gd(jω)| drops with a slope steeper than −1 (on a log-log plot) just before the frequency ωd. The reason is that we must, in addition to satisfying (5.51), also ensure stability with reasonable margins, and, as discussed in Chapter 2, we cannot let the slope of |L(jω)| around crossover be much steeper than −1.

If the plant has a RHP-zero at s = z, which fixes S(z) = 1, then using (5.17) we have the following necessary condition for satisfying ||S Gd||∞ < 1:

    |Gd(z)| < 1        (5.53)

Figure 5.13: Typical performance requirement on |S| imposed by disturbance rejection; the bound 1/|Gd| is shown dotted.

Example 5.6 Assume that the disturbance model is Gd(s) = kd/(1 + τd s) with kd = 10 and τd = 100 [seconds]. Scaling has been applied to Gd, so this means that without feedback the effect of disturbances on the outputs at low frequencies is kd = 10 times larger than we desire; thus feedback is required. Since |Gd| crosses 1 at a frequency ωd ≈ kd/τd = 0.1 rad/s, the minimum bandwidth requirement for disturbance rejection is ωB ≈ 0.1 [rad/s].

Remark. A case in which Gd(s) is of high order, so that |Gd| drops steeply ahead of ωd, is given later in Section 5.16.3 for a neutralization process. There we actually overcome the limitation on the slope of L(jω) around crossover by using local feedback loops in series. Although each loop has a slope of about −1 around its own crossover, the overall loop transfer function L(s) = L1(s)L2(s)···Ln(s) has a slope of about −n. This is a case where stability is determined by each (I + Li) separately, but the benefits of feedback are determined by 1 + L1L2···Ln (see also Horowitz (1991, p. 284), who refers to lectures by Bode).

Command tracking. Assume there are no disturbances, i.e. d = 0, and consider a reference change r(t) = R sin(ωt). Since e = −S r, the requirement |e(ω)| < 1 for acceptable control gives

    |S(jω)| R < 1  ∀ω ≤ ωr        (5.55)

where ωr is the frequency up to which performance tracking is required. This is the same form of requirement as found for disturbances, with |Gd| replaced by R. The bandwidth requirement imposed by (5.55) depends on how sharply |S(jω)| increases in the frequency range from ωr (where |S| < 1/R) to ωB (where |S| ≈ 1). If |S| increases with a slope of 1 on a log-log plot, the approximate bandwidth requirement becomes ωB > R ωr, and if |S| increases with a slope of 2 it becomes ωB > sqrt(R) ωr.
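The slope rule above follows from assuming |S(jω)| ≈ (ω/ωB)^n below the bandwidth. A small hedged sketch (the function name and parameterization are illustrative, not from the book):

```python
# Rough bandwidth estimate implied by the tracking requirement |S|R < 1 up to w_r,
# assuming |S(jw)| ~ (w/wB)^slope on a log-log plot below the bandwidth.
def required_bandwidth(R, wr, slope=1):
    """Smallest wB such that (wr/wB)**slope * R < 1."""
    return wr * R ** (1.0 / slope)

print(required_bandwidth(R=10, wr=0.1, slope=1))  # 1.0: wB > R*wr for slope 1
print(required_bandwidth(R=10, wr=0.1, slope=2))  # ~0.316: wB > sqrt(R)*wr for slope 2
```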
5.11 Limitations imposed by input constraints

In all physical systems there are limits to the changes that can be made to the manipulated variables. In this section we assume that the model has been scaled as outlined in Section 1.4, so that at any time we must have |u(t)| ≤ 1. The question we want to answer is: can the expected disturbances be rejected, and can we track the reference changes, while maintaining |u(t)| ≤ 1? We will consider separately the two cases of perfect control (e = 0) and acceptable control (|e| < 1). These results apply to both feedback and feedforward control. At the end of the section we consider the additional problems encountered for unstable plants, where feedback control is required. The worst-case disturbance at each frequency is |d(ω)| = 1, and the worst-case reference is r = R r̃ with |r̃(ω)| = 1.

Remark 1 We use a frequency-by-frequency analysis, and assume that at each frequency |d(ω)| ≤ 1 (or |r̃(ω)| ≤ 1).

Remark 2 Rate limitations on the input, |du/dt| ≤ 1, may also be handled by our analysis. This is done by considering du/dt as the plant input, by including a term 1/s in the plant model G(s).

Remark 3 Below we require |u| < 1 rather than |u| ≤ 1. This has no practical effect, and is used to simplify the presentation.

5.11.1 Inputs for perfect control

From (5.3) the input required to achieve perfect control (e = 0) is

    u = G⁻¹ r − G⁻¹ Gd d        (5.56)

Disturbance rejection. With r = 0 and |d(ω)| = 1, the requirement |u(ω)| < 1 is equivalent to

    |G⁻¹(jω) Gd(jω)| < 1  ∀ω        (5.57)

In other words, to achieve perfect control and avoid input saturation we need |G| > |Gd| at all frequencies. However, we do not really need control at frequencies where |Gd| < 1, so in practice it is sufficient that (5.57) is satisfied at frequencies where |Gd| > 1.

Command tracking. Next let d = 0, and consider the worst-case reference command, |r(ω)| = R at all frequencies up to ωr. To keep the inputs within their constraints we must then require from (5.56) that

    |G⁻¹(jω)| R < 1  ∀ω ≤ ωr        (5.58)

In other words, to avoid input saturation we need |G| > R at all frequencies where perfect command tracking is required.

Example 5.7 Consider a process with

    G(s) = 40 / ( (5s + 1)(2.5s + 1) ),   Gd(s) = 3 (50s + 1) / ( (10s + 1)(s + 1) )

From Figure 5.14 we see that |G| > |Gd| at low frequencies, but that condition (5.57) is not satisfied for ω > ω1. However, we do not really need control for ω > ωd, where |Gd| < 1. In practice, we therefore expect that disturbances in the intermediate frequency range between ω1 and ωd may cause input saturation.

Figure 5.14: Input saturation is expected for disturbances at intermediate frequencies from ω1 to ωd.
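The frequency-by-frequency check behind Figure 5.14 is easy to automate. A sketch using the plant data as reconstructed in Example 5.7 above (treat the exact transfer functions as illustrative):

```python
# Frequency sweep for the input-constraint check (5.57).
def G(w):
    s = 1j * w
    return 40 / ((5 * s + 1) * (2.5 * s + 1))

def Gd(w):
    s = 1j * w
    return 3 * (50 * s + 1) / ((10 * s + 1) * (s + 1))

# Saturation is expected where control is needed (|Gd| > 1) but the input is
# too weak for perfect rejection (|G| < |Gd|).
ws = [10 ** (k / 100) for k in range(-200, 201)]   # 0.01 ... 100 rad/s
bad = [w for w in ws if abs(Gd(w)) > 1 and abs(G(w)) < abs(Gd(w))]
print(min(bad), max(bad))   # roughly the intermediate range [w1, wd]
```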

In summary.   The requirements given by (5. For disturbance rejection we must then control (namely ´ µ require ½ Ø Ö ÕÙ Ò × Û Ö ½ (5. especially not at high frequencies. to achieve acceptable control for command tracking we must require Ê  ½ ½   Ö (5. However.58) should be then and Ê are replaced by Ê ½.60) ½) rather than “perfect control” ( ¼). Ê in (5. Ñ× Example 5.2 Inputs for acceptable control For simplicity above. we do not really need control for ´½¼× · ½µ´×   ½µ ´×µ . If these bounds are violated at some frequency then performance will not be ½) for a worst-case disturbance or reference occurring at this satisfactory (i. Specially. but feedback control is required for stabilization.8 Consider ´×µ Since disturbance rejection. input constraints combined with large disturbances may make stabilization difficult. if we want “acceptable control” ( in (5. and the input magnitude required for acceptable ½) is somewhat smaller.11. However.57) should be replaced by ½. Thus at frequencies where ´ µ ½ the smallest input needed to reduce ½ is found when Ù´ µ is chosen such that the complex vectors Ù and the error to ´ µ have opposite directions. we assumed perfect control.62) ½ ´× · ½µ´¼ ¾× · ½µ ½ and the performance objective is ½.11. and with ½ we get  ½ ´ Ù ½µ. perfect control is never really required.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ ÁÆ ËÁËÇ Ë ËÌ ÅË ½½ 5.59) and (5. the input will saturate when there is a sinusoidal disturbance ´Øµ we may not be able to stabilize the plant.e. The differences are clearly small at frequencies where much larger than 1. That is.59)   ½ The control error is Ý Proof: Consider a “worst-case” disturbance with ´ µ Ù· .3 Unstable plant and input constraints Feedback control is required to stabilize an unstable plant. from (5.43) we must for an unstable plant with a real RHP-pole at × Ô require × ´Ôµ Ñ× ´Ôµ (5. ½ Ù . since the plant has (5. 
5.60) are restrictions imposed on the plant design in order to avoid input constraints and they apply to any controller (feedback or feedforward control). and similarly. ´ µ frequency. Note that this bound must be satisfied also when ½. ¾     Similarly. and the result follows by requiring Ù ½. and Otherwise.61) × Ò Ø.

However.e.e.63) à ´×µ ¼ ¼ ´½¼× · ½µ¾ × ´¼ ½× · ½µ¾ which without constraints yields a stable closed-loop system with a gain crossover frequency. This may must require ¼ value is confirmed by the exact bound in (5. ¼ to avoid problems with input saturation. reference changes can also drive the system into input saturation and instability.  ½ ½) for frequencies lower than ¼ . so from (??) we do not expect problems with input constraints at and from (??) we low frequencies. of about ½ . The stable closed-loop respons when there is no input constraint is shown by the dashed line.5 0 10 −1 −0. at frequencies higher than we have Ô.5 −2 10 Ô −2 ݴص 5 Time [sec] 10 −1 0 10 10 Frequency [rad/s] −1.5 0 (a) and with ¼ (b) Response to step in disturbance ( ) ¼ Figure 5.15: Instability caused by input saturation for unstable plant To check this for a particular case we select ¼ and use the controller (5. one then has the option to use a two degrees-of-freedom controller to filter the reference signal and thus reduce the magnitude of the manipulated input. We have (i.½¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ a RHP-pole at Ô ½.   For unstable plants.61). see Figure 5. However. i. 10 1 1 Magnitude 10 0 ٴص Unconstrained: Constrained: 0. .15(a). The closed-loop response to a unit step disturbance occurring after ½ second is shown in Figure 5. . We get × ´½µ ¼ ¾¾ ´½¼× · ½µ´× · ½µ × ½ Ñ× ´½µ ¼ ½ ´× · ½µ´¼ ¾× · ½µ × ½ ¼ to avoid input saturation of and the requirement × ´Ôµ Ñ× ´Ôµ gives sinusoidal disturbances of unit magnitude. we note that the input signal exceeds 1 for a short time. and when Ù is constrained to be within the interval ½ ½ we find indeed that the system is unstable (solid lines).15(b). But in contrast to disturbance changes and measurement noise.

5.12 Limitations imposed by phase lag

We already know that phase lag from RHP-zeros and time delays is a fundamental problem, but are there any limitations imposed by the phase lag resulting from minimum-phase elements? The answer is both no and yes: no, because in theory there are no fundamental limitations; yes, because there are often limitations on practical designs.

As an example, consider a minimum-phase plant of the form

    G(s) = k / ( (1 + τ1 s)(1 + τ2 s)(1 + τ3 s)··· )

where the number of lags, n, is three or larger. At high frequencies the gain drops sharply with frequency, |G(jω)| ≈ k/(ωⁿ ∏τi). Define ωu as the frequency where the phase lag in the plant G is −180°, i.e.

    ∠G(jωu) = −180°

Note that ωu depends only on the plant model. For stability we need a positive phase margin, i.e. the phase of L must be larger than −180° at the gain crossover frequency ωc. With a proportional controller we have ω180 = ωu, and with a PI-controller ω180 < ωu.
Thus with these two simple controllers a phase lag in the plant does pose a fundamental limitation:

    Stability bound for P- or PI-control:  ωc < ωu        (5.64)

For performance (phase and gain margin) we typically need ωc less than about 0.5 ωu; this also gives a reasonable value for the phase margin. A commonly used controller is the PID controller, which in theory has a maximum phase lead of 90° at high frequencies. In practice, the maximum phase lead is smaller than 90°. For example, an industrial cascade PID controller typically has derivative action over only one decade,

    K(s) = Kc · (τI s + 1)/(τI s) · (τD s + 1)/(0.1 τD s + 1)        (5.66)

and the maximum phase lead is 55° (which is the maximum phase lead of the term (τD s + 1)/(0.1 τD s + 1)). This gives the following bound:

    Practical performance bound (PID control):  ωc < ωu        (5.67)

In principle, ω180 (the frequency at which the phase lag around the loop is −180°) is not directly related to the phase lag in the plant, but in most practical cases there is a close relationship. If we want to extend the gain crossover frequency ωc beyond ωu, we must place zeros in the controller (e.g. "derivative action") to provide phase lead which counteracts the negative phase in the plant. Since the plant gain drops sharply beyond ωu, large controller gains are then needed, and it is therefore likely (at least if k is small) that we encounter problems with input saturation. Otherwise, the presence of high-order lags does not present any fundamental limitations.
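Both ωu and the 55° figure quoted above are simple computations. A hedged stdlib sketch (the three-equal-lags plant is an assumption chosen so that the answer, ωu = tan(60°) = sqrt(3) for unit lags, can be verified by hand):

```python
import math

def plant_phase(w, taus):
    """Phase of G = k/prod(1 + tau_i s) at s = jw, in radians (k > 0)."""
    return -sum(math.atan(t * w) for t in taus)

def w_u(taus, lo=1e-6, hi=1e6):
    """Bisection (in log frequency) for the frequency where the phase reaches -pi."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if plant_phase(mid, taus) > -math.pi:
            lo = mid
        else:
            hi = mid
    return mid

wu = w_u([1.0, 1.0, 1.0])   # three equal unit lags: each contributes 60 deg at sqrt(3)
lead_max = math.degrees(math.asin((1 - 0.1) / (1 + 0.1)))   # max lead of (tD s+1)/(0.1 tD s+1)
print(wu, lead_max)          # ~1.732 and ~55 degrees
```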

We stress again that plant phase lag does not pose a fundamental limitation if a more complex controller is used. Specifically, if the model is known exactly and there are no RHP-zeros or time delays, then one may in theory extend ωc to infinite frequency: simply invert the plant model by placing zeros in the controller at the plant poles, and then let the controller roll off at high frequencies beyond the dynamics of the plant. However, in many practical cases the bound in (5.67) applies, because we may want to use a simple controller, and also because uncertainty about the plant model often makes it difficult to place controller zeros which counteract the plant poles at high frequencies.
Remark. The relative order (relative degree) of the plant is sometimes used as an input-output controllability measure (e.g. Daoutidis and Kravaris, 1992). The relative order may be defined also for nonlinear plants, and for linear plants it corresponds to the pole excess of G(s). For a minimum-phase plant, the phase lag at infinite frequency is the relative order times −90°. Of course, we want the inputs to affect the outputs directly, so we want the relative order to be small. However, the practical usefulness of the relative order is rather limited, since it only gives information at infinite frequency; the phase lag of G(s) as a function of frequency, including the value of ωu, provides much more information.

5.13 Limitations imposed by uncertainty

The presence of uncertainty requires us to use feedback control rather than just feedforward control. The main objective of this section is to gain more insight into this statement. A further discussion is given in the next chapter, where we consider MIMO systems.

5.13.1 Feedforward control

Consider a plant with the nominal model y = G u + Gd d. Assume that G(s) is minimum phase and stable, and assume there are no problems with input saturation. Then perfect control, y − r = 0, is obtained using a perfect feedforward controller which generates the control inputs

    u = G⁻¹ r − G⁻¹ Gd d        (5.68)

Now consider applying this controller to the actual plant with model

    y′ = G′ u + Gd′ d        (5.69)

After substituting (5.68) into (5.69), we find that the actual control error with the "perfect" feedforward controller is

    y′ − r = (G′ G⁻¹ − 1) r − (G′ G⁻¹ Gd Gd′⁻¹ − 1) Gd′ d        (5.70)

where the first parenthesis is the relative error in G and the second is the relative error in G/Gd. From (5.70) we see that, to achieve |y′ − r| < 1 for |d| = 1, we must require that the relative model error in G/Gd be less than 1/|Gd′|. This requirement is clearly very difficult to satisfy at frequencies where |Gd′| is much larger than 1.

For example, if the gain in G′ is increased by 50% and the gain in Gd′ is reduced by 50%, then the relative error in G/Gd is larger than 1, and feedforward control may actually make control worse. This may quite easily happen in practice, and it motivates the need for feedback control.

Example 5.9 Consider a case with

    G(s) = 300/(10s + 1),   Gd(s) = 100/(10s + 1)

The perfect feedforward controller u = −G⁻¹ Gd d = −d/3 gives perfect control, y = 0. Now apply this feedforward controller to the actual process, where the gains have changed by 10%:

    G′(s) = 330/(10s + 1),   Gd′(s) = 90/(10s + 1)

From (5.70) the response in this case is

    y′ = G′ u + Gd′ d = (90 − 330/3) d/(10s + 1) = −20 d/(10s + 1)

Thus, for a step disturbance of magnitude 1, the output y′ approaches −20 (much larger in magnitude than 1), and feedback control is required to keep the output less than 1. (Feedback is hardly affected by such gain errors, as discussed next.) The minimum bandwidth requirement with feedback only is ωd ≈ 100/10 = 10 [rad/s]; with feedforward also in use, the effective disturbance gain is reduced to about 20, and the requirement drops to about 20/10 = 2 [rad/s].

5.13.2 Feedback control

With feedback control, the closed-loop response with no model error is y − r = S (Gd d − r), where S = (I + GK)⁻¹ is the sensitivity function. With model error we get

    y′ − r = S′ (Gd′ d − r)        (5.71)

where S′ = (I + G′K)⁻¹ can be written (see (A.139)) as

    S′ = S / (1 + E T),   E = (G′ − G)/G        (5.72)

Here E is the relative error for G, and T is the complementary sensitivity function. From (5.71) and (5.72) we see that the control error is only weakly affected by model error at frequencies where feedback is effective, i.e. where |S| ≪ 1 and T ≈ 1. For example, if we have integral action in the feedback loop, and if the feedback system with model error is stable, then S(0) = S′(0) = 0, and the steady-state control error is zero even with model error. (Of course, if the uncertainty is sufficiently large, the feedback system with model error may itself be unstable.)
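The steady-state numbers in Example 5.9 can be reproduced in a few lines (gains as reconstructed above; 10% errors in both G and Gd):

```python
# Feedforward sensitivity to model error, Example 5.9, at steady state.
k, kd = 300.0, 100.0           # nominal gains used to design the feedforward
kp, kdp = 330.0, 90.0          # actual (perturbed) gains
u_per_d = -kd / k              # "perfect" feedforward input per unit disturbance
y_per_d = kp * u_per_d + kdp   # actual steady-state output per unit step disturbance
print(y_per_d)                 # -20.0: a 10% model error leaves |y| = 20 >> 1
```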

Uncertainty at crossover. Although feedback control counteracts the effect of uncertainty at frequencies where the loop gain is large, uncertainty in the crossover frequency region can result in poor performance and even instability. This may be analyzed by considering the effect of the uncertainty on the gain margin, GM = 1/|L(jω180)|, where ω180 is the frequency where ∠L = −180°; see (2.33). Most practical controllers behave as a constant gain Kc in the crossover region, so |L(jω180)| ≈ Kc |G(jωu)|, since the phase lag of the controller is approximately zero at this frequency (ω180 ≈ ωu). This observation yields the following approximate rule:

• Uncertainty which keeps |G(jωu)| approximately constant will not change the gain margin. Uncertainty which increases |G(jωu)| may yield instability.

This rule is useful, for example, when evaluating the effect of parametric uncertainty, and it is illustrated in the following example.
where Example 5. However. the coupling between the uncertain parameters. For example. This observation yields the following approximate the frequency where rule:     ¯ Uncertainty which keeps ´ Uncertainty which increases Ù µ approximately constant will not change the gain margin. Although feedback control counteracts the effect of uncertainty at frequencies where the loop gain is large. ´ Ù µ may yield instability. This may be analyzed by considering the effect of the uncertainty on the gain margin. and are uncertain in the sense that they may vary with operating then Ù ´ ¾µ and we derive conditions. However. ´×µ the parameters .

Figure 5.16: Feedback control system with plant G, disturbance model Gd, controller K and measurement transfer function Gm.

Consider the control system in Figure 5.16, for the case when all blocks are scalar. The model is

    y = G(s) u + Gd(s) d,   ym = Gm(s) y        (5.74)

Here Gm(s) denotes the measurement transfer function, and we assume Gm(0) = 1 (perfect steady-state measurement). The variables d, u, y and r are assumed to have been scaled as outlined in Section 1.4, and therefore G(s) and Gd(s) are the scaled transfer functions. Let ωc denote the gain crossover frequency, defined as the frequency where |L(jω)| crosses 1 from above, and let ωd denote the frequency at which |Gd(jωd)| first crosses 1 from above. The following rules apply (Skogestad, 1996):

Rule 1. Speed of response to reject disturbances. We approximately require ωc > ωd. More specifically, with feedback control we require |S(jω)| ≤ |1/Gd(jω)| ∀ω. (See (5.51).)

Rule 2. Speed of response to track reference changes. We require |S(jω)| ≤ 1/R up to the frequency ωr where tracking is required. (See (5.55).)

Rule 3. Input constraints arising from disturbances. For acceptable control (|e| < 1) we require |G(jω)| > |Gd(jω)| − 1 at frequencies where |Gd| > 1. For perfect control (e = 0) the requirement is |G| > |Gd|. (See (5.57) and (5.59).)

Rule 4. Input constraints arising from setpoints. We require |G(jω)| > R − 1 up to the frequency ωr where tracking is required. (See (5.60).)

Rule 5. Time delay θ in G(s)Gm(s). We approximately require ωc < 1/θ. (See (5.25).)

Rule 6. Tight control at low frequencies with a RHP-zero z in G(s)Gm(s). For a real RHP-zero we require ωc < z/2, and for an imaginary RHP-zero we approximately require ωc < |z|. (See (5.27) and (5.29).)

Remark. Strictly speaking, a RHP-zero only makes it impossible to have tight control in the frequency range close to the location of the RHP-zero. If we do not need tight control at low frequencies, then we may reverse the sign of the controller gain and instead achieve tight control at higher frequencies; in this case we must for a RHP-zero z approximately require ωc > 2z. A special case is plants with a zero at the origin: here we can achieve good transient control even though the control has no effect at steady-state.

Rule 7. Phase lag constraint. In most practical cases (e.g. with PID control) we require ωc < ωu. Here the ultimate frequency ωu is where ∠G Gm(jωu) = −180°. (See (5.64) and (5.67).) Since time delays (Rule 5) and RHP-zeros (Rule 6) also contribute to the phase lag, one may in most practical cases combine Rules 5, 6 and 7 into the single rule ωc < ωu (Rule 7).

Rule 8. Real open-loop unstable pole in G(s) at s = p. We need high feedback gains to stabilize the system, and we approximately require ωc > 2p. (See (5.47).) In addition, for unstable plants we need |Gs(p)| > |Gd,ms(p)|, see (5.61); otherwise, the input may saturate when there are disturbances, and the plant cannot be stabilized.

Figure 5.17: Illustration of the controllability requirements. The loop gain |L| must lie above |Gd| up to ωd (control needed to reject disturbances), and ωc must satisfy ωc > 2p, while at the same time ωc < z/2, ωc < 1/θ and ωc < ωu. The margins shown are: M1, margin to stay within constraints, |u| < 1; M2, margin for performance, |e| < 1; M3, margin because of the RHP-pole; and further margins because of the RHP-zero, the time delay and the phase lag.

Most of the rules are illustrated graphically in Figure 5.17.
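The rules lend themselves to a mechanical frequency-by-frequency screening. The sketch below is an illustrative checklist in the rules' approximate scalar form (the function, its keys and the sample plant are all assumptions made for this example, not the book's software):

```python
# Hedged "controllability checklist" over a frequency grid. G and Gd are callables
# returning complex gains; z, p, theta may be None when the feature is absent.
def check_rules(wc, G, Gd, ws, z=None, p=None, theta=None):
    report = {}
    # Condition (5.75): disturbances must be self-rejecting beyond the bandwidth.
    report["no_disturbance_beyond_wc"] = all(abs(Gd(w)) <= 1 for w in ws if w > wc)
    # Rule 3: enough input gain wherever control is needed.
    report["rule3_inputs"] = all(abs(G(w)) > abs(Gd(w)) - 1 for w in ws if abs(Gd(w)) > 1)
    report["rule5_delay"] = theta is None or wc < 1.0 / theta
    report["rule6_rhp_zero"] = z is None or wc < z / 2.0
    report["rule8_rhp_pole"] = p is None or wc > 2.0 * p
    return report

G = lambda w: 40 / ((5j * w + 1) * (2.5j * w + 1))
Gd = lambda w: 10 / (100j * w + 1)
ws = [10 ** (k / 10) for k in range(-30, 21)]
report = check_rules(wc=0.5, G=G, Gd=Gd, ws=ws, theta=1.0)
print(report)
```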

We have not formulated a rule to guard against model uncertainty. This is because, as seen from (5.71) and (5.72), uncertainty has only a minor effect on feedback performance for SISO systems, except at frequencies where the relative uncertainty approaches 100%, and there we obviously have to detune the system. Also, since 100% uncertainty at a given frequency allows for the presence of a RHP-zero on the imaginary axis at this frequency (G(jω) = 0), this case is already covered by Rule 6.

The rules are necessary conditions ("minimum requirements") to achieve acceptable control performance. They are not sufficient, since among other things we have only considered one effect at a time. The rules quantify the qualitative rules given in the introduction. For example, the rule "Control outputs that are not self-regulating" may be quantified as: "Control outputs y for which |Gd(jω)| > 1 at some frequency" (Rule 1). The rule "Select inputs that have a large effect on the outputs" may be quantified as: "In terms of scaled variables we must have |G| > |Gd| − 1 at frequencies where |Gd| > 1 (Rule 3), and we must have |G| > R − 1 at frequencies where setpoint tracking is desired (Rule 4)".

In summary, Rules 1, 2 and 8 tell us that we need high feedback gain ("fast control") in order to reject disturbances, to track setpoints and to stabilize the plant. Rules 5, 6 and 7 tell us that we must use low feedback gains in the frequency range where there are RHP-zeros or delays, or where the plant has a lot of phase lag. We have formulated these requirements for high and low gain as bandwidth requirements. If they somehow are in conflict, then the plant is not controllable, and the only remedy is to introduce design modifications to the plant.

Sometimes the problem is that the disturbances are so large that we hit input saturation, or that the required bandwidth is not achievable. To avoid the latter problem, we must at least require that the effect of the disturbance is less than 1 (in terms of scaled variables) at frequencies beyond the bandwidth:

    |Gd(jω)| < 1  ∀ω ≥ ωc        (5.75)

where, as found above, we approximately require ωc < 1/θ (Rule 5), ωc < z/2 (Rule 6) and ωc < ωu (Rule 7). Condition (5.75) may be used, as in the example of Section 5.16.3 below, to determine the size of equipment.
Another important insight from the above rules is that a larger disturbance, or a smaller specification on the control error, requires faster response (higher bandwidth).

5.15 Summary: controllability analysis with feedforward control

The above controllability rules apply to feedback control, but we find that essentially the same conclusions apply to feedforward control, where relevant. That is, if a plant is not controllable using feedback control, it is usually not controllable with feedforward control. A major difference is that a delay in Gd(s) is an advantage for feedforward control ("it gives the feedforward controller more time to make the right action"). Also, a RHP-zero in Gd(s) is an advantage for feedforward control if G(s) has a RHP-zero at the same location. Rules 3 and 4 on input constraints apply directly to feedforward control, but Rule 8 does not apply, since unstable plants can only be stabilized by feedback control. The remaining rules, in terms of performance and "bandwidth", do not apply directly to feedforward control.

Controllability can be analyzed by considering the feasibility of achieving perfect control. The feedforward controller is u = Kd(s) dm, where dm = Gmd(s) d is the measured disturbance. The disturbance response becomes

    y = G u + Gd d = (G Kd Gmd + Gd) d        (5.76)

(Reference tracking can be analyzed similarly by setting Gmd = −R.)

Perfect control. y = 0 is achieved with the controller

    Kd,perfect = −G⁻¹ Gd Gmd⁻¹        (5.77)

This assumes that Kd,perfect is stable and causal (no prediction); G⁻¹ Gd Gmd⁻¹ should therefore have no RHP-zeros and no (positive) delay. From this we find that a delay (or RHP-zero) in Gd(s) is an advantage if it cancels a delay (or RHP-zero) in G(s) Gmd(s).

Ideal control. If perfect control is not possible, then one may analyze controllability by considering an "ideal" feedforward controller, Kd,ideal, which is (5.77) modified to be stable and causal (no prediction). The controller is ideal in the sense that it assumes we have a perfect model.

Controllability is then analyzed by using Kd,ideal in (5.76). An example is given below in (5.86) and (5.87) for a first-order delay process.

Model uncertainty. As discussed in Section 5.13, model uncertainty is a more serious problem for feedforward than for feedback control, because there is no correction from the output measurement. From (5.70), the plant is not controllable with feedforward control if the relative model error for Gd/G exceeds 1/|Gd| at any frequency. For example, if |Gd(jω)| = 10, then the relative error in Gd/G must not exceed 10% at this frequency. In practice, the model error is often much larger than this, especially at high frequencies, so the benefits of feedforward may be less in practice.

Conclusion. Feedforward control alone cannot be relied upon when the model is uncertain and the disturbance sensitivity is large;
this means that feedforward control has to be combined with feedback control if the output is sensitive to the disturbance, i.e. if |Gd| is much larger than 1 at some frequency.

Combined feedback and feedforward control. To analyze controllability in this case, we may assume that the feedforward controller Kd has already been designed. Then, from (5.76), the controllability of the remaining feedback problem can be analyzed using the rules in Section 5.14 with Gd(s) replaced by

    Ĝd(s) = G Kd Gmd + Gd        (5.78)

Here Ĝd is the scaled disturbance model as seen by the feedback loop. However, one must beware that the feedforward control may be very sensitive to model error, so the benefits of feedforward may be smaller in practice.
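At steady state, the effective disturbance gain (5.78) shows both the benefit of feedforward and its sensitivity to model error. A sketch using the gains of Example 5.9 (illustrative numbers, with a static feedforward and Gmd = 1 assumed):

```python
# Effective disturbance gain Gd_hat = G*Kd*Gmd + Gd, eq. (5.78), at steady state.
k, kd = 300.0, 100.0          # nominal plant and disturbance gains
Kd = -kd / k                  # static feedforward, perfect for the nominal model
results = {}
for err in (1.0, 1.1):        # actual plant gain equal to nominal, or 10% high
    results[err] = (k * err) * Kd * 1.0 + kd
    print(err, results[err])  # 0.0 for a perfect model; 10% error leaves -10
```

So feedforward reduces the low-frequency disturbance gain from 100 to about 10 under a 10% gain error; the remaining gain of 10 is what the feedback loop must handle.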

5.16 Applications of controllability analysis

5.16.1 First-order delay process

Problem statement. Consider disturbance rejection for the following process:

G(s) = k e^{-θs}/(1 + τs),  Gd(s) = kd e^{-θd s}/(1 + τd s)    (5.79)

In addition there are measurement delays θm for the output and θmd for the disturbance. All parameters have been appropriately scaled such that at each frequency |u| < 1, |d| < 1, and we want |e| < 1. Assume kd > 1. Treat separately the two cases of (i) feedback control only and (ii) feedforward control only, and carry out the following:

a) For each of the eight parameters in this model, explain qualitatively what value you would choose from a controllability point of view (with descriptions such as large, small, or value has no effect).

b) Give quantitative relationships between the parameters which should be satisfied to achieve controllability. Assume that appropriate scaling has been applied in such a way that the disturbance is less than 1 in magnitude, and that the input and the output are required to be less than 1 in magnitude.

Solution. (a) Qualitative. We want the input to have a "large, direct and fast effect" on the output, while we want the disturbance to have a "small, indirect and slow effect". By "direct" we mean without any delay or inverse response. This leads to the following conclusion. For both feedback and feedforward control we want k large, and θ and τ small; and we want kd small and τd large. For feedforward control we also want θd large (we then have more time to react), but for feedback θd does not matter; it translates time, but otherwise has no effect. Finally, we want θm small for feedback control (it is not used for feedforward), and we want θmd small for feedforward control (it is not used for feedback).

(b) Quantitative. To stay within the input constraints (|u| < 1) we must from Rule 4 require

|G(jω)| ≥ |Gd(jω)|,  for frequencies ω ≤ ωd    (5.80)

for both feedback and feedforward control. Now consider performance, where the results for feedback and feedforward control differ.

(i) First consider feedback control. From Rule 1 we need for acceptable performance (|e| < 1) with disturbances

ωd ≈ kd/τd < ωc    (5.81)

On the other hand, from Rule 5 we require for stability and performance

ωc < 1/θtot,  θtot = θ + θm    (5.82)

where θtot is the total delay around the loop. The combination of (5.81) and (5.82) yields the following requirement for controllability:

Feedback:  θ + θm < τd/kd    (5.83)
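Requirement (5.83) is easy to evaluate numerically. The sketch below (Python; the parameter values are illustrative assumptions, not from the text) checks the feedback controllability condition:

```python
def feedback_controllable(theta, theta_m, tau_d, k_d):
    """Necessary condition (5.83) for feedback control of the first-order
    delay process: the total loop delay must be smaller than tau_d/k_d,
    the inverse of the frequency omega_d up to which disturbance
    rejection is needed (Rule 1)."""
    omega_d = k_d / tau_d        # frequency where |Gd| drops below 1
    theta_tot = theta + theta_m  # total delay around the loop
    return theta_tot < 1.0 / omega_d

# Illustrative (hypothetical) parameter values:
print(feedback_controllable(theta=1.0, theta_m=0.5, tau_d=100.0, k_d=10.0))  # True  (1.5 < 10)
print(feedback_controllable(theta=8.0, theta_m=5.0, tau_d=100.0, k_d=10.0))  # False (13 > 10)
```

The same pattern applies to the feedforward condition (5.84) with θ + θmd − θd in place of θ + θm.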

(ii) For feedforward control, any delay for the disturbance itself yields a smaller "net delay", so from Rule 1 we need "only" require

Feedforward:  |θ + θmd - θd| < τd/kd    (5.84)

Proof of (5.84): Introduce θ' = θ + θmd - θd, and consider first the case with θ' ≤ 0 (so (5.84) is clearly satisfied). In this case perfect control is possible using the controller (5.77), so we can even achieve e = 0. Next consider θ' > 0. Perfect control is not possible, so instead we use the "ideal" controller obtained by deleting the prediction e^{θ's} from (5.77):

Kd,ideal(s) = -(kd/k) (1 + τs)/(1 + τd s)    (5.85)

From (5.76) the response with this controller is

G~d(s) = G Kd,ideal Gmd + Gd = (kd/(1 + τd s)) (e^{-θd s} - e^{-(θ + θmd)s})    (5.86)

and to achieve |G~d(jω)| < 1 we must require

kd θ'/τd < 1    (5.87)

(using asymptotic values and 1 - e^{-x} ≈ x for small x), which is equivalent to (5.84).

5.16.2 Application: Room heating

Consider the problem of maintaining a room at constant temperature, as discussed in Section 1.5; see Figure 1.2. Let y be the room temperature, u the heat input and d the outdoor temperature. A critical part of controllability analysis is scaling, and a model in terms of scaled variables was derived in (1.26):

G(s) = 20/(1000s + 1),  Gd(s) = 10/(1000s + 1)    (5.88)

The frequency responses of |G| and |Gd| are shown in Figure 5.18. Let the measurement delay for temperature (y) be θm = 100 s.

1. Is the plant controllable with respect to disturbances?
2. Is the plant controllable with respect to setpoint changes of magnitude R = 3 (±3 K) when the desired response time for setpoint changes is τr = 1000 s (17 min)?

Solution. 1. Disturbances. From Rule 1, feedback control is necessary up to the frequency ωd ≈ 10/1000 = 0.01 rad/s, where |Gd| crosses 1 in magnitude. This is exactly the same frequency as the upper bound given by the delay, 1/θ = 0.01 rad/s. We therefore conclude that the system is barely controllable for this disturbance. From Rule 3, no problems with input constraints are expected since |G| > |Gd| at all frequencies.
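The near-coincidence of the two frequency bounds for the room-heating model (5.88) can be checked directly (a minimal sketch):

```python
import math

# Scaled room-heating model (5.88): Gd(s) = 10/(1000 s + 1); delay 100 s
kd, tau, theta_m = 10.0, 1000.0, 100.0

# Rule 1: feedback needed up to the frequency where |Gd(jw)| crosses 1
omega_d = math.sqrt(kd**2 - 1.0) / tau   # exact crossing, ~0.00995 rad/s (~kd/tau)
# Rule 5: the delay limits the achievable bandwidth to about 1/theta_m
omega_delay = 1.0 / theta_m              # 0.01 rad/s

print(omega_d < omega_delay)  # True, but only barely: "barely controllable"
```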

Figure 5.18: Frequency responses for the room heating example

Figure 5.19: PID feedback control of the room heating example: (a) step disturbance in outdoor temperature; (b) setpoint change 3/(150s + 1)

These conclusions are supported by the closed-loop simulation in Figure 5.19(a) for a unit step disturbance (corresponding to a sudden 10 K increase in the outdoor temperature) using a PID controller of the form in (5.66) with Kc = 0.4 (scaled variables), τI = 200 s and τD = 60 s. The output error exceeds its allowed value of 1 for a very short time after about 100 s, but then returns quite quickly to zero. The input goes down to about -0.8 and thus remains within its allowed bound of ±1.

2. Setpoints. The plant is controllable with respect to the desired setpoint changes. First, the delay of 100 s is much smaller than the desired response time of 1000 s, and thus poses no problem. Second, |G(jω)| ≥ R = 3 up to about ω1 = 0.007 rad/s, which is seven times higher than the required ωr = 1/τr = 0.001 rad/s. This means that input constraints pose no problem. In fact, we should be able to achieve response times of about 1/ω1 = 150 s without reaching the input constraints. This is confirmed by the simulation in Figure 5.19(b) for a desired setpoint change 3/(150s + 1) using the same PID controller as above.

5.16.3 Application: Neutralization process

Figure 5.20: Neutralization process with one mixing tank

The following application is interesting in that it shows how the controllability analysis tools may assist the engineer in redesigning the process to make it controllable.

Problem statement. Consider the process in Figure 5.20, where a strong acid with pH = -1 (yes, a negative pH is possible; it corresponds to cH+ = 10 mol/l) is neutralized by a strong base (pH = 15) in a mixing tank with volume V = 10 m³. We want to use feedback control to keep the pH in the product stream (output y) in the range 7 ± 1 ("salt water") by manipulating the amount of base, qB (input u), in spite of variations in the flow of acid, qA (disturbance d). The delay in the pH-measurement is θm = 10 s.

To achieve the desired product with pH = 7 one must exactly balance the inflow of acid (the disturbance) by addition of base (the manipulated input). Intuitively, one might expect that the main control problem is to adjust the base accurately by means of a very accurate valve. However, as we will see, this "feedforward" way of thinking is misleading, and the main hurdle to good control is the need for very fast response times.

We take the controlled output to be the excess of acid, c [mol/l], defined as c = cH+ - cOH-. In terms of this variable the control objective is to keep |c| ≤ cmax = 10^{-6} mol/l, and the plant is a simple mixing process modelled by

d/dt (V c) = qA cA + qB cB - q c    (5.89)

The nominal values for the acid and base flows are qA* = qB* = 0.005 m³/s, resulting in a product flow q* = 0.01 m³/s = 10 l/s. Here superscript * denotes the steady-state value. Divide each variable by its maximum deviation to get the following scaled variables:

y = c/10^{-6},  u = (qB - qB*)/(0.5 qB*),  d = (qA - qA*)/(0.5 qA*)    (5.90)

Then the appropriately scaled linear model for one tank becomes

Gd(s) = kd/(1 + τh s),  G(s) = -2kd/(1 + τh s),  kd = 2.5 · 10^6    (5.91)

where τh = V/q = 1000 s is the residence time for the liquid in the tank. Note that the steady-state gain in terms of scaled variables is more than a million, so the output is extremely sensitive to both the input and the disturbance. The reason for this high gain is the much higher concentration in the two feed streams, compared with that desired in the product stream. The question is: can acceptable control be achieved?

Figure 5.21: Frequency responses for the neutralization process with one mixing tank

Controllability analysis. The frequency responses of Gd(s) and G(s) are shown graphically in Figure 5.21. From Rule 2, input constraints do not pose a problem since |G| = 2|Gd| at all frequencies. The main control problem is the high disturbance sensitivity, and from (5.81) (Rule 1) we find the frequency up to which feedback is needed:

ωd ≈ kd/τh = 2500 rad/s    (5.92)

This requires a response time of 1/2500 = 0.4 milliseconds, which is clearly impossible in a process control application, and is in any case much less than the measurement delay of 10 s.
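The numbers behind (5.92) take only a couple of lines to reproduce:

```python
# One-tank neutralization, scaled disturbance gain and residence time (5.91):
kd, tau_h = 2.5e6, 1000.0
omega_d = kd / tau_h             # (5.92): feedback needed up to 2500 rad/s
response_time = 1.0 / omega_d    # 0.4 ms, hopeless against a 10 s measurement delay
print(omega_d, response_time)
```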

Figure 5.22: Neutralization process with two tanks and one controller

Design change: Multiple tanks. The only way to improve controllability is to modify the process. This is done in practice by performing the neutralization in several steps, as illustrated in Figure 5.22 for the case of two tanks. This is similar to playing golf, where it is often necessary to use several strokes to get to the hole. With n equal mixing tanks in series, the transfer function for the effect of the disturbance becomes

Gd(s) = kd hn(s),  hn(s) = 1/((τh/n)s + 1)^n    (5.93)

where kd = 2.5 · 10^6 is the gain for the mixing process, hn(s) is the transfer function of the mixing tanks, and τh is the total residence time, Vtot/q. The magnitude |hn(jω)| as a function of frequency is shown in Figure 5.23 for one to four equal tanks in series. From controllability Rules 1 and 5, we must at least require for acceptable disturbance rejection that

|Gd(jωθ)| ≤ 1,  ωθ = 1/θ    (5.94)

Figure 5.23: Frequency responses for n tanks in series with the same total residence time τh; hn(s) = 1/((τh/n)s + 1)^n, n = 1, 2, 3, 4

where θ is the delay in the feedback loop. Thus, one purpose of the mixing tanks is to reduce the effect of the disturbance by a factor kd (= 2.5 · 10^6) at the frequency ωθ = 1/θ (= 0.1 rad/s), i.e. we need |hn(jωθ)| ≤ 1/kd. With τh = Vtot/q we obtain from this the following minimum value for the total volume for n equal tanks in series:

Vtot = q n θ sqrt(kd^{2/n} - 1)    (5.95)

where q = 0.01 m³/s. With θ = 10 s, we then find that the following designs have the same controllability with respect to disturbance rejection:

No. of tanks n | Total volume Vtot [m³] | Volume each tank [m³]
1 | 250000 | 250000
2 | 316 | 158
3 | 40.7 | 13.6
4 | 15.9 | 3.98
5 | 9.51 | 1.90
6 | 6.96 | 1.16
7 | 5.70 | 0.81

With one tank we need a volume corresponding to that of a supertanker to get acceptable controllability. The minimum total volume is obtained with 18 tanks of about 203 litres each, giving a total volume of 3.662 m³. However, taking into account the additional cost for extra equipment such as piping, mixing, measurements and control, we would probably select a design with 3 or 4 tanks for this example.

Control system design. We are not quite finished yet. The condition (5.94), which formed the basis for redesigning the process, may be optimistic because it only ensures |S| ≤ 1/|Gd| at the crossover frequency ωB ≈ ωc ≈ ωθ. From Rule 1 we also require |S| ≤ 1/|Gd|, or approximately |L| ≥ |Gd|, at frequencies lower than ωθ, and this may be difficult to achieve since Gd(s) = kd hn(s) is of high order. The problem is that this requires |L| to drop steeply with frequency, which results in a large negative phase for L, whereas for stability and performance the slope of |L| at crossover should not be steeper than -1, approximately (see Section 2.6.2).

Thus, the control system in Figure 5.22 with a single feedback controller will not achieve the desired performance. The solution is to install a local feedback control system on each tank and to add base in each tank, as shown in Figure 5.24. This is another plant design change, since it requires an additional measurement and actuator for each tank. Consider the case of n tanks in series. With n controllers, the overall closed-loop response from a disturbance d1 into the first tank to the pH in the last tank becomes

y = ( Prod_{i=1}^{n} 1/(1 + Li) ) Gd d1 ≈ (1/L) Gd d1,  L = Prod_{i=1}^{n} Li    (5.96)
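The volume formula (5.95) and the table above can be reproduced numerically (values from the text: q = 0.01 m³/s, θ = 10 s, kd = 2.5·10^6):

```python
import math

# Minimum total volume (5.95) for n identical tanks in series:
#   V_tot = q * n * theta * sqrt(kd**(2/n) - 1)
q, theta, kd = 0.01, 10.0, 2.5e6   # flow [m3/s], loop delay [s], disturbance gain

volumes = {n: q * n * theta * math.sqrt(kd ** (2.0 / n) - 1.0) for n in range(1, 8)}
for n, v in volumes.items():
    print(n, round(v, 2), round(v / n, 2))   # total volume and volume per tank

# Total volume drops from ~250000 m3 (one tank, a supertanker) to ~5.7 m3
# (seven tanks); the minimum (~3.66 m3) is reached around n = 18.
```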

where the approximation applies at low frequencies, where feedback is effective. With a local loop around each tank, we can design each loop Li(s) with a slope of -1 and bandwidth ωθ, such that the overall loop transfer function L has slope -n and achieves |L| ≥ |Gd| at all frequencies lower than ωθ (the sizes of the tanks are selected as before such that kd |hn(jωθ)| ≤ 1). It seems unlikely that any other control strategy can achieve a sufficiently high roll-off for |L|.

In summary, this application has shown how a simple controllability analysis may be used to make decisions on both the appropriate size of the equipment, and the selection of actuators and measurements for control. Importantly, we arrived at these conclusions without having to design any controllers or perform any simulations. Of course, as a final test, the conclusions from the controllability analysis should be verified by simulations using a nonlinear model. Our conclusions are in agreement with what is used in industry; our analysis confirms the usual recommendation of adding base gradually and having one pH-controller for each tank (McMillan, 1984, p. 208).

Figure 5.24: Neutralization process with two tanks and two controllers

Exercise 5.8 Comparison of local feedback and cascade control. Explain why a cascade control system with two measurements (pH in each tank) and only one manipulated input (the base flow into the first tank) will not achieve as good a performance as the control system in Figure 5.24, where we use local feedback with two manipulated inputs (one for each tank).

The following exercise further considers the use of buffer tanks for reducing quality (concentration, temperature) disturbances in chemical processes.

Exercise 5.9 (a) The effect of a concentration disturbance must be reduced by a factor of 100 at the frequency 0.5 rad/min. The disturbances should be dampened by use of buffer tanks, and the objective is to minimize the total volume. How many tanks in series should one have? What is the total residence time?

(b) The feed to a distillation column has large variations in concentration, and the use of one buffer tank is suggested to dampen these. The effect of the feed concentration on the product

composition y is given by (scaled variables, time in minutes)

Gd(s) = e^{-s}/(3s)

That is, after a step in the disturbance, the output y will, after an initial delay of 1 min, increase in a ramp-like fashion and reach its maximum allowed value (which is 1) after another 3 minutes. Feedback control should be used, and there is an additional measurement delay of … minutes. What should be the residence time in the tank?

(c) Show that, in terms of minimizing the total volume for buffer tanks in series, it is optimal to have buffer tanks of equal size.

(d) Is there any reason to have buffer tanks in parallel (they must not be of equal size, because then one may simply combine them)?

(e) What about parallel pipes in series (pure delay)? Is this a good idea?

Buffer tanks are also used in chemical processes to dampen liquid flowrate disturbances (or gas pressure disturbances). This is the topic of the following exercise.

Exercise 5.10 Let d1 = qin [m³/s] denote a flowrate which acts as a disturbance to the process. We add a buffer tank (with liquid volume V [m³]), and use a "slow" level controller K such that the outflow d2 = qout (the "new" disturbance) is smoother than the inflow qin (the "original" disturbance). The idea is to temporarily increase or decrease the liquid volume in the tank to avoid sudden changes in qout. Note that the steady-state value of qout must equal that of qin. A material balance yields V(s) = (qin(s) - qout(s))/s, and with a level controller qout(s) = K(s)V(s) we find that

qout(s) = [K(s)/(s + K(s))] qin(s),  h(s) ≜ K(s)/(s + K(s))    (5.97)

The design of a buffer tank for a flowrate disturbance then consists of two steps:

1. Design the level controller K(s) such that h(s) has the desired shape (e.g. determined by a controllability analysis of how d2 affects the remaining process; note that we must always have h(0) = 1).

2. Design the size of the tank (determine its volume Vmax) such that the tank does not overflow or go empty for the expected disturbances in qin.

Problem statement. (a) Assume the inflow varies in the range qin* ± 100%, where qin* is the nominal value, and apply this stepwise procedure to two cases:

(i) The desired transfer function is h(s) = 1/(τs + 1).
(ii) The desired transfer function is h(s) = 1/(τ2 s + 1)².

(b) Explain why it is usually not recommended to have integral action in K(s).

(c) In case (ii) one could alternatively use two tanks in series with controllers designed as in (i). Explain why this is most likely not a good solution. (Solution: The required total volume is the same, but the cost of two smaller tanks is larger than that of one large tank.)
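Step 1 of the buffer-tank procedure can be illustrated by simulation. The sketch below uses a proportional level controller K(s) = k (an assumed, illustrative choice, not a value from the text), which gives case (i) with τ = 1/k: a step in qin is smoothed into a first-order change in qout.

```python
# Buffer tank with proportional level control K(s) = k:
# h(s) = k/(s + k) = 1/(tau*s + 1), tau = 1/k. Deviation variables;
# k, dt and the step size are illustrative choices.
k = 0.5                  # level controller gain -> tau = 2 time units
dt, t_end = 0.001, 20.0
V, q_in, q_out, t = 0.0, 1.0, 0.0, 0.0   # unit step in inflow at t = 0
while t < t_end:
    q_out = k * V                # outflow set by the level controller
    V += (q_in - q_out) * dt     # material balance: dV/dt = q_in - q_out
    t += dt
print(round(q_out, 3))  # -> 1.0: steady-state outflow equals inflow, h(0) = 1
```

The transient volume swing V seen in the simulation is what sizes the tank in step 2.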

5.16.4 Additional exercises

Exercise 5.11 What information about a plant is important for controller design, and in particular, in which frequency range is it important to know the model well? To answer this problem you may think about the following sub-problems:

(a) Explain what information about the plant is used for Ziegler-Nichols tuning of a SISO PID-controller.

(b) Is the steady-state plant gain G(0) important for controller design? (As an example, consider the plant G(s) = 1/(s + a), with |a| ≤ 1, and design a P-controller K(s) = Kc such that ωc = 100. How does the controller design and the closed-loop response depend on the steady-state gain G(0) = 1/a?)

Exercise 5.12 Let G(s) = G1(s)H(s), with

G1(s) = K2 e^{-τ2 s}/((30s + 1)(Ts + 1)),  H(s) = K1 e^{-τ1 s}

The measurement device for the output has transfer function Gm(s) = … . The nominal parameter values are K1 = …, K2 = …, τ1 = … [s], τ2 = … [s] and T = … [s]; the unit for time is seconds.

(a) Assume all variables have been appropriately scaled. Is the plant input-output controllable?

(b) What is the effect on controllability of changing one model parameter at a time in the following ways:
1. τ1 is reduced to 0.1 [s].
2. τ2 is reduced to 2 [s].
3. K1 is reduced to 0.02.
4. K2 is reduced to … .
5. T is increased to 30 [s].

Exercise 5.13 A heat exchanger is used to exchange heat between two streams: a coolant with flowrate q (1 ± 1 kg/s) is used to cool a hot stream with inlet temperature T0 (100 ± 10 °C) to the outlet temperature T (which should be 60 ± 10 °C). The measurement delay for T is 3 s. The main disturbance is on T0. The following model in terms of deviation variables is derived from heat balances:

T(s) = [k1/((60s + 1)(12s + 1))] q(s) + [0.6(20s + 1)/((60s + 1)(12s + 1))] T0(s)    (5.98)

where T and T0 are in °C, q is in kg/s, and the unit for time is seconds. Derive the scaled model. Is the plant controllable with feedback control? (Solution: The delay poses no problem (performance), but the effect of the disturbance is a bit too large at high frequencies (input saturation), so the plant is not controllable.)

5.17 Conclusion

The chapter has presented a frequency domain controllability analysis for scalar systems, applicable to both feedback and feedforward control. We summarized our findings in terms

of eight controllability rules; see page 197. These rules are necessary conditions ("minimum requirements") to achieve acceptable control performance. They are not sufficient, since among other things they only consider one effect at a time. The rules may be used to determine whether or not a given plant is controllable. The key steps in the analysis are to consider disturbances and to scale the variables properly. The method has been applied to a pH neutralization process, and it is found that the heuristic design rules given in the literature follow directly.

The tools presented in this chapter may also be used to study the effectiveness of adding extra manipulated inputs or extra measurements (cascade control). They may also be generalized to multivariable plants, where directionality becomes a further crucial consideration. Interestingly, a direct generalization to decentralized control of multivariable plants is rather straightforward and involves the CLDG and the PRGA; see page 453 in Chapter 10.


LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS

In this chapter, we generalize the results of Chapter 5 to MIMO systems. We first discuss fundamental limitations on the sensitivity and complementary sensitivity functions imposed by the presence of RHP-zeros and RHP-poles. We then consider separately the issues of functional controllability, RHP-zeros, RHP-poles, delays, disturbances, input constraints and uncertainty. Finally, we summarize the main steps in a procedure for analyzing the input-output controllability of MIMO plants.

6.1 Introduction

In a MIMO system, RHP-zeros, RHP-poles and disturbances each have directions associated with them. This makes it more difficult to consider their effects separately, as we did in the SISO case, but we will nevertheless see that most of the SISO results may be generalized. We will quantify the directionality of the various effects in G and Gd by their output directions:

- yz: output direction of a RHP-zero, G(z)uz = 0 · yz
- yp: output direction of a RHP-pole, G(p)up = ∞ · yp
- yd: output direction of a disturbance
- ui: i'th output direction (singular vector) of the plant, G vi = σi ui (1)

All these are l × 1 vectors, where l is the number of outputs. yz and yp are fixed complex vectors, while yd(s) and ui(s) are frequency-dependent (s may here be viewed as a

(1) Note that ui here is the i'th output singular vector, and not the i'th input.

generalized complex frequency; in most cases s = jω). The vectors are here normalized such that they have Euclidean length 1:

‖yz‖2 = 1,  ‖yp‖2 = 1,  ‖yd(s)‖2 = 1,  ‖ui(s)‖2 = 1

We may also consider the associated input directions of G, e.g. uz and up. However, these directions are usually of less interest, since we are primarily concerned with the performance at the output of the plant.

The angles between the various output directions can be quantified using their inner products: |yz^H yp|, |yz^H yd|, etc. The inner product gives a number between 0 and 1, and from this we can define the angle in the first quadrant. For example, the output angle between a pole and a zero is φ = cos^{-1} |yz^H yp|.

We assume throughout this chapter that the models have been scaled as outlined in Section 1.4. The scaling procedure is the same as that for SISO systems, except that the scaling factors involved are diagonal matrices with elements equal to the maximum change in each variable ui, di, ri and ei. The control error in terms of scaled variables is then

e = y - r = G u + Gd d - R r~

where at each frequency we have ‖u(ω)‖max ≤ 1, ‖d(ω)‖max ≤ 1 and ‖r~(ω)‖max ≤ 1, and the control objective is to achieve ‖e(ω)‖max < 1.

Remark 1 Here ‖·‖max is the vector infinity-norm, that is, the largest element in the vector. This norm is sometimes denoted ‖·‖∞, but this is not used here to avoid confusing it with the H∞ norm of the transfer function (where the ∞ denotes the maximum over frequency rather than the maximum over the elements of the vector).

Remark 2 As for SISO systems, we see that reference changes may be analyzed as a special case of disturbances by replacing Gd by -R.

Remark 3 Whether various disturbances and reference changes should be considered separately or simultaneously is a matter of design philosophy. In this chapter, we mainly consider their effects separately, on the grounds that it is unlikely for several disturbances to attain their worst values simultaneously.

6.2 Constraints on S and T

6.2.1 S plus T is the identity matrix

From the identity S + T = I, using the singular value inequality σ̄(A + B) ≤ σ̄(A) + σ̄(B), we get

1 - σ̄(S) ≤ σ̄(T) ≤ 1 + σ̄(S)    (6.1)

1 - σ̄(T) ≤ σ̄(S) ≤ 1 + σ̄(T)    (6.2)

This shows that we cannot have both S and T small simultaneously, and that σ̄(S) is large if and only if σ̄(T) is large.
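The inequalities (6.1)-(6.2) follow from the triangle inequality for the spectral norm, and are easy to verify numerically for random matrices (an illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(200):
    S = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    T = np.eye(3) - S                                # S + T = I
    s_bar = np.linalg.svd(S, compute_uv=False)[0]    # sigma_max(S)
    t_bar = np.linalg.svd(T, compute_uv=False)[0]    # sigma_max(T)
    # (6.1): 1 - sigma_max(S) <= sigma_max(T) <= 1 + sigma_max(S)
    ok = ok and (1 - s_bar <= t_bar + 1e-9) and (t_bar <= 1 + s_bar + 1e-9)
print(ok)  # True
```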

6.2.2 Sensitivity integrals

For SISO systems we presented several integral constraints on sensitivity (the waterbed effects). These may be generalized to MIMO systems by using the determinant or the singular values of S; see Boyd and Barratt (1991) and Freudenberg and Looze (1988). For example, the generalization of the Bode sensitivity integral in (5.6) may be written

∫0^∞ ln |det S(jω)| dω = Σj ∫0^∞ ln σj(S(jω)) dω = π · Σ_{i=1}^{Np} Re(pi)    (6.3)

For a stable L(s), the integrals are zero. Other generalizations are also available, see Zhou et al. (1996), but these are in terms of the input zero and pole directions, and although these relationships are interesting, it seems difficult to derive any concrete bounds on achievable performance from them.

6.2.3 Interpolation constraints

RHP-zero. If G(s) has a RHP-zero at z with output direction yz, then for internal stability of the feedback system the following interpolation constraints must apply:

yz^H T(z) = 0,  yz^H S(z) = yz^H    (6.4)

In words, (6.4) says that T must have a RHP-zero in the same direction as G, and that S(z) has an eigenvalue of 1 corresponding to the left eigenvector yz.

Proof of (6.4): From (4.68) there exists an output direction yz such that yz^H G(z) = 0. For internal stability, the controller cannot cancel the RHP-zero, and it follows that L = GK has a RHP-zero in the same direction, i.e. yz^H L(z) = 0. Now S = (I + L)^{-1} is stable and thus has no RHP-pole at s = z. It then follows from T = LS that yz^H T(z) = 0 and yz^H (I - S) = 0.

RHP-pole. If G(s) has a RHP-pole at p with output direction yp, then for internal stability the following interpolation constraints apply:

S(p) yp = 0,  T(p) yp = yp    (6.5)

Proof of (6.5): The square matrix L(p) has a RHP-pole at s = p, and if we assume that L(s) has no RHP-zeros at s = p, then L^{-1}(p) exists and there exists an output pole direction yp such that

L^{-1}(p) yp = 0    (6.6)

Since T is stable, it has no RHP-pole at s = p, so T(p) is finite. It then follows, from S = T L^{-1}, that S(p)yp = T(p)L^{-1}(p)yp = 0, and T(p)yp = (I - S(p))yp = yp.

Similar constraints apply to LI, SI and TI, but these are in terms of the input zero and pole directions, uz and up.

6.2.4 Sensitivity peaks

Based on the above interpolation constraints, we here derive lower bounds on the weighted sensitivity functions. The bounds are direct generalizations of those found for SISO systems. The bound on weighted sensitivity with no RHP-poles was first derived by Zames (1981); the generalized result below was derived by Havre and Skogestad (1998a).

We first need to introduce some notation. Consider a plant G(s) with RHP-poles pi and RHP-zeros zj, and factorize G(s) in terms of (output) Blaschke products as follows (2):

G(s) = Bp^{-1}(s) Gs(s),  G(s) = Bz(s) Gm(s)    (6.7)

where Gs is the stable and Gm the minimum-phase version of G. Bp(s) and Bz(s) are stable all-pass transfer matrices (all singular values are 1 for s = jω) containing the RHP-poles and RHP-zeros, respectively:

Bp^{-1}(s) = Prod_{i=1}^{Np} ( I + (2 Re(pi)/(s - pi)) ŷpi ŷpi^H ),  Bz^{-1}(s) = Prod_{j=1}^{Nz} ( I + (2 Re(zj)/(s - zj)) ŷzj ŷzj^H )    (6.8)

Here Bp^{-1}(s) is obtained by factorizing to the output one RHP-pole at a time, starting with G(s) = Bp1^{-1}(s) Gp1(s), where Bp1^{-1}(s) = I + (2 Re(p1)/(s - p1)) yp1 yp1^H and yp1 is the output pole direction of G. This procedure may be continued to factor out p2 from Gp1(s), where ŷp2 is the output pole direction of Gp1 (which need not coincide with yp2, the pole direction of G), and so on. A similar procedure may be used for the RHP-zeros. Note that these realizations may be complex. State-space realizations are provided by Zhou et al. (1996, p. 145).

With this factorization we have the following theorem.

Theorem 6.1 MIMO sensitivity peaks. Suppose that G(s) has Nz RHP-zeros zj with output directions yzj, and Np RHP-poles pi with output directions ypi. Define the all-pass transfer matrices given in (6.8) and compute the real constants

c1j = ‖yzj^H Bp^{-1}(zj)‖2 ≥ 1,  c2i = ‖Bz^{-1}(pi) ypi‖2 ≥ 1    (6.9)

Let wP(s) be a stable weight. Then, for closed-loop stability, the weighted sensitivity function must satisfy for each RHP-zero zj

‖wP S‖∞ ≥ c1j |wP(zj)|    (6.10)

Here c1j = 1 if G has no RHP-poles. Similarly, let wT(s) be a stable weight. Then, for closed-loop stability, the weighted complementary sensitivity function must satisfy for each RHP-pole pi

‖wT T‖∞ ≥ c2i |wT(pi)|    (6.11)

Here c2i = 1 if G has no RHP-zeros.

The results show that a peak on σ̄(S) larger than 1 is unavoidable if the plant has a RHP-zero, and that a peak on σ̄(T) larger than 1 is unavoidable if the plant has a RHP-pole. In particular, the peaks may be large if the plant has both RHP-zeros and RHP-poles.

(2) Note that the Blaschke notation is reversed compared with that given in the first edition (1996), where Bp replaced Bp^{-1} and Bz replaced Bz^{-1}. This is to get consistency with the literature in general.

Proof of (6.10): Consider a RHP-zero z with direction yz (the subscript j is omitted). Since G has RHP-poles at pi, S must have RHP-zeros at pi, and we may factorize S = Sm(s) Bp(s), such that Sm is stable and has no RHP-zeros at the plant poles. Introduce the scalar function

f(s) = yz^H wP(s) Sm(s) y

which is analytic (stable) in the RHP; here y is a vector of unit length which can be chosen freely. By the maximum modulus principle,

‖wP S‖∞ = ‖wP Sm‖∞ ≥ |f(z)| = |wP(z)| · |yz^H Bp^{-1}(z) y|

The final equality follows since wP is a scalar and yz^H Sm(z) = yz^H S(z) Bp^{-1}(z) = yz^H Bp^{-1}(z), using the interpolation constraint yz^H S(z) = yz^H. We finally select y such that the lower bound is as large as possible, and derive c1 = ‖yz^H Bp^{-1}(z)‖2. The proof of the bound in terms of c2 is similar.

Proof that c1 ≥ 1: Following Chen (1995), introduce the matrix V whose columns together with yp form an orthonormal basis for C^l (so I = yp yp^H + V V^H). Then a single Blaschke factor may be written

I + (2 Re(p)/(s - p)) yp yp^H = [yp V] diag{ (s + p̄)/(s - p), I } [yp V]^H    (6.13)

From (6.13) all singular values of Bp(jω) are equal to 1, while at s = z all singular values of Bp^{-1}(z) are equal to 1 except for one, which is |z + p̄|/|z - p| ≥ 1 (since z and p are both in the RHP). Thus all singular values of Bp^{-1}(z) are 1 or larger, so Bp^{-1}(z) is greater than or equal to 1 in all directions, and hence c1 ≥ 1. The proof of c2 ≥ 1 is similar.

From Theorem 6.1 we get, by selecting wP(s) = 1 and wT(s) = 1, that ‖S‖∞ ≥ c1 and ‖T‖∞ ≥ c2.    (6.14)

One RHP-pole and one RHP-zero. For the case with one RHP-zero z and one RHP-pole p, we derive from (6.9)

c1 = c2 = sqrt( sin²φ + (|z + p̄|²/|z - p|²) cos²φ )    (6.15)

where φ = cos^{-1} |yz^H yp| is the angle between the output directions of the pole and zero.

Proof of (6.15): From (6.13), the projection of yz onto the direction of the largest singular value of Bp^{-1}(z) has magnitude cos φ = |yz^H yp|, and the projection onto the remaining subspace (where all singular values are 1) has magnitude sin φ. Squaring and adding the two contributions gives (6.15). An alternative proof of (6.15) is given by Chen (1995), who presents a slightly improved bound (but his additional factor Q(z) is rather difficult to evaluate); a weaker version of this result was first proved by Boyd and Desoer (1985).

We then see that if the pole and zero are aligned in the same direction, such that yz = yp and φ = 0, then (6.15) simplifies to give the SISO conditions in (5.16) and (5.18) with c = |z + p̄|/|z - p|. Conversely, if the pole and zero are orthogonal to each other, then φ = 90° and c1 = c2 = 1, and there is no additional penalty for having both a RHP-pole and a RHP-zero.
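The one-pole one-zero lower bound can be evaluated directly. The sketch below (illustrative values for z, p and φ) confirms the two limiting cases discussed above:

```python
import math

def peak_bound(z, p, phi):
    """Lower bound (6.15) on ||S||inf and ||T||inf for a plant with one
    real RHP-zero z, one real RHP-pole p, and angle phi (radians)
    between their output directions."""
    ratio = abs(z + p) / abs(z - p)
    return math.sqrt(math.sin(phi)**2 + ratio**2 * math.cos(phi)**2)

print(peak_bound(4.0, 1.0, 0.0))          # aligned: SISO bound |z+p|/|z-p| = 5/3
print(peak_bound(4.0, 1.0, math.pi / 2))  # orthogonal: bound is 1, no extra penalty
```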

Later in this chapter we discuss the implications of these results and provide some examples.

6.3 Functional controllability

Consider a plant G(s) with l outputs and let r denote the normal rank of G(s). In order to control all outputs independently we must require r = l. The normal rank of G(s) is the rank of G(s) at all values of s except at a finite number of singularities (which are the zeros of G(s)). This term was introduced by Rosenbrock (1970, p. 70) for square systems, and related concepts are "right invertibility" and "output realizability". We will use the following definition:

Definition 6.1 Functional controllability. An m-input l-output system G(s) is functionally controllable if the normal rank of G(s), denoted r, is equal to the number of outputs (r = l), that is, if G(s) has full row rank. A system is functionally uncontrollable if r < l.

Remark 1 The only example of a SISO system which is functionally uncontrollable is the system G(s) = 0. A square MIMO system is functionally uncontrollable if and only if det G(s) = 0, for all s.

Remark 2 For the interesting case when there are at least as many inputs as outputs, m >= l, a plant is functionally uncontrollable if and only if sigma_l(G(jw)) = 0 at all frequencies, where sigma_l is the minimum singular value. As a measure of how close a plant is to being functionally uncontrollable we may therefore consider sigma_l(G(jw)).

Remark 3 In most cases functional uncontrollability is a structural property of the system, that is, it does not depend on specific parameter values, and it may often be evaluated from cause-and-effect graphs. A typical example is when none of the inputs u affect a particular output y_i, which would be the case if one of the rows in G(s) were identically zero. Another example is when there are fewer inputs than outputs, m < l.

Remark 4 For strictly proper systems, G(s) = C(sI - A)^{-1}B, the system is functionally uncontrollable if rank(B) < l (the system is input deficient), or if rank(C) < l (the system is output deficient), or if rank(sI - A) < l (fewer states than outputs). This follows since the rank of a product of matrices is less than or equal to the minimum rank of the individual matrices, see (A.35).

If the plant is not functionally controllable, i.e. r < l, then there are l - r output directions, denoted y_0, which cannot be affected. These directions will vary with frequency, and we have (analogous to the concept of a zero direction)

    y_0^H(jw) G(jw) = 0     (6.16)

From an SVD of G(jw) = U Sigma V^H, the uncontrollable output directions y_0(jw) are the last l - r columns of U(jw). By analyzing these directions, an engineer can then decide whether it is acceptable to keep certain output combinations uncontrolled, or whether additional actuators are needed to increase the rank of G(s).
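As a small numerical sketch of the SVD test above (the rank-deficient plant here is a hypothetical example, not one from the text), the uncontrollable output directions can be read off from the last columns of U:

```python
import numpy as np

def uncontrollable_directions(G, tol=1e-9):
    """Return (rank defect l - r, y0) for a complex plant matrix G(jw).

    y0 holds the last l - r columns of U from G = U S V^H, i.e. the
    output directions that cannot be affected at this frequency."""
    U, s, Vh = np.linalg.svd(G)
    r = int(np.sum(s > tol * max(s[0], 1.0)))
    return G.shape[0] - r, U[:, r:]

# Hypothetical plant whose column 2 is twice column 1 at w = 1 rad/s:
# rank 1, so one output direction is uncontrollable.
w = 1.0
col = np.array([1/(1j*w + 1), 1/(1j*w + 2)])
G = np.column_stack([col, 2*col])
defect, y0 = uncontrollable_directions(G)
assert defect == 1
# y0 is orthogonal to the achievable output space spanned by G(jw)u:
assert np.allclose(y0.conj().T @ G, 0, atol=1e-9)
```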

LIMITATIONS IN MIMO SYSTEMS

Example 6.1 The following plant is singular and thus not functionally controllable

    G(s) = [ 1/(s+1)   2/(s+1) ]
           [ 1/(s+2)   2/(s+2) ]

This is easily seen since column 2 of G(s) is two times column 1. The uncontrollable output directions at low and high frequencies are, respectively,

    y_0(0) = (1/sqrt(5)) [ 1, -2 ]^T      y_0(inf) = (1/sqrt(2)) [ 1, -1 ]^T

6.4 Limitations imposed by time delays

Time delays pose limitations also in MIMO systems. Specifically, let theta_ij denote the time delay in the ij'th element of G(s). Then a lower bound on the time delay for output i is given by the smallest delay in row i of G(s), that is,

    theta_i^min = min_j theta_ij

This bound is obvious since theta_i^min is the minimum time for any input to affect output i, and theta_i^min can be regarded as a delay pinned to output i. Holt and Morari (1985a) have derived additional bounds, but their usefulness is sometimes limited since they assume a decoupled closed-loop response (which is usually not desirable in terms of overall performance) and also assume infinite power in the inputs.

For MIMO systems we have the surprising result that an increased time delay may sometimes improve the achievable performance. As a simple example, consider the plant

    G(s) = [ e^{-theta s}   1 ]      (6.17)
           [ 1              1 ]

With theta = 0 the plant is singular (not functionally controllable), and controlling the two outputs independently is clearly impossible. On the other hand, for theta > 0, effective feedback control is possible, provided the bandwidth is larger than about 1/theta. That is, the presence of the delay decouples the initial (high-frequency) response, so we can obtain tight control if the controller reacts within this initial time period. To illustrate this, we may compute the magnitude of the RGA (or the condition number) of G as a function of frequency, and note that it is infinite at low frequencies, but drops to 1 at about frequency 1/theta. In words, control is easier the larger theta is.

Exercise 6.1 To further illustrate the above arguments, compute the sensitivity function S for the plant (6.17) using a simple diagonal controller K = (k/s)I. Use the approximation e^{-theta s} = 1 - theta s to show that at low frequencies the elements of S(s) are of magnitude 1/(k theta + 2). How large must k be to have acceptable performance (less than 10% offset at low frequencies)? What is the corresponding bandwidth? (Answer: Need k theta >= 8. Bandwidth is approximately equal to k.)
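The low-frequency magnitude claimed in Exercise 6.1 can be checked numerically. The sketch below assumes theta = 1 and k = 100 (arbitrary illustrative values) and evaluates S = (I + GK)^{-1} at a frequency well below the loop bandwidth:

```python
import numpy as np

theta, k = 1.0, 100.0
s = 1j * 1e-4  # a frequency well below the bandwidth k

# Plant (6.17) and the simple diagonal controller K = (k/s) I
G = np.array([[np.exp(-theta * s), 1.0],
              [1.0, 1.0]])
K = (k / s) * np.eye(2)
S = np.linalg.inv(np.eye(2) + G @ K)   # sensitivity (I + GK)^{-1}

# All elements of S have low-frequency magnitude close to 1/(k*theta + 2)
assert np.max(np.abs(np.abs(S) - 1.0 / (k * theta + 2))) < 1e-3
```

With k theta = 100 every element of S has magnitude about 0.01, i.e. tight control despite the plant being singular at theta = 0.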

Remark 1 The observant reader may have noticed that G(s) in (6.17) is singular at s = 0 (even with theta non-zero) and thus has a zero at s = 0. Therefore, a controller with integral action, which cancels this zero, yields an internally unstable system (e.g. the transfer function KS contains an integrator). This means that although the conclusion that the time delay helps is correct, the derivations given in Exercise 6.1 are not strictly correct. To "fix" the results we may assume that the plant is only going to be controlled over a limited time, so that internal instability is not an issue. Alternatively, we may assume, for example, that e^{-theta s} is replaced by 0.99 e^{-theta s}, so that the plant is not singular at steady-state (but it is close to singular).

Exercise 6.2 Repeat Exercise 6.1 numerically, with e^{-theta s} replaced by 0.99 (1 - (theta/2n)s)^n / (1 + (theta/2n)s)^n (where n is the order of the Pade approximation), and plot the elements of S(jw) as functions of frequency for k = 0.1/theta, 1/theta and 8/theta.

6.5 Limitations imposed by RHP-zeros

RHP-zeros are common in many practical multivariable problems. The limitations they impose are similar to those for SISO systems, although often not quite so serious, as they only apply in particular directions.

For ideal ISE optimal control (the "cheap" LQR problem), the SISO result ISE = 2/z of Section 5.4 can be generalized. Qiu and Davison (1993) show for a MIMO plant with RHP-zeros at z_i that the ideal ISE-value for a step disturbance or reference is directly related to sum_i 2/z_i. Thus, as for SISO systems, RHP-zeros close to the origin imply poor control performance.

The limitations of a RHP-zero located at z may also be derived from the bound

    ||w_P S(s)||_inf = max_w |w_P(jw)| sigma_max(S(jw)) >= |w_P(z)|     (6.18)

where w_P(s) is a scalar weight. All the results derived for SISO systems therefore generalize if we consider the "worst" direction, corresponding to the maximum singular value sigma_max(S). For instance, by selecting the weight w_P(s) such that we require tight control at low frequencies and a peak for sigma_max(S) less than 2, we derive from (5.34) that the bandwidth (in the "worst" direction) must for a real RHP-zero satisfy w_B* < z/2. Alternatively, if we require tight control at high frequencies, then we must from (5.38) satisfy w_B* > 2z.

Remark 1 The use of a scalar weight w_P(s) in (6.18) is somewhat restrictive. However, the assumption is less restrictive if one follows the scaling procedure in Section 1.4 and scales all outputs by their allowed variations, such that their magnitudes are of approximately equal importance.

Remark 2 Note that condition (6.18) involves the maximum singular value (which is associated with the "worst" direction), and therefore the RHP-zero may not be a limitation in other directions. Furthermore, we may to some extent choose the worst direction. This is discussed next.

6.5.1 Moving the effect of a RHP-zero to a specific output

In MIMO systems one can often move the deteriorating effect of a RHP-zero to a given output, which may be less important to control well. This is possible because, although the interpolation constraint y_z^H T(z) = 0 in (6.4) imposes a certain relationship between the elements within each column of T(s), the columns of T(s) may still be selected independently. Let us first consider an example to motivate the results that follow. Most of the results in this section are from Holt and Morari (1985b), where further extensions can also be found.

Example 3.8 continued. Consider the plant

    G(s) = 1/((0.2s+1)(s+1)) [ 1        1 ]
                             [ 1+2s     2 ]

which has a RHP-zero at s = z = 0.5. This is the same plant considered on page 85, where we performed some H-infinity controller designs. The output zero direction satisfies y_z^H G(z) = 0, and we find

    y_z = (1/sqrt(5)) [ 2, -1 ]^T

We will consider reference tracking, y = Tr, and examine three possible choices for T: T_0 diagonal (a decoupled design), T_1 with output 1 perfectly controlled, and T_2 with output 2 perfectly controlled. In all three cases we require perfect tracking at steady-state, i.e. T(0) = I. Of course, we cannot achieve perfect control in practice, but we make the assumption to simplify our argument.

Any allowable T(s) must satisfy the interpolation constraint y_z^H T(z) = 0 in (6.4), and this imposes the following relationships between the column elements of T(s):

    2 t11(z) - t21(z) = 0,    2 t12(z) - t22(z) = 0     (6.19)

A decoupled design has t12(s) = t21(s) = 0, and to satisfy (6.19) we then need t11(z) = 0 and t22(z) = 0, so the RHP-zero must be contained in both diagonal elements. One possible choice, which also satisfies T(0) = I, is

    T_0(s) = [ (-s+z)/(s+z)   0            ]     (6.20)
             [ 0              (-s+z)/(s+z) ]

For the two designs with one output perfectly controlled we choose

    T_1(s) = [ 1               0            ]      T_2(s) = [ (-s+z)/(s+z)   beta_2 s/(s+z) ]
             [ beta_1 s/(s+z)  (-s+z)/(s+z) ]               [ 0              1              ]

The basis for the last two selections is as follows. The RHP-zero has no effect on output 1 for design T_1(s), and no effect on output 2 for design T_2(s). For the output which is not perfectly controlled, the diagonal element must have a RHP-zero to satisfy (6.19), and the off-diagonal element must have an s term in the numerator to give T(0) = I. To satisfy (6.19), we must then require for the two designs

    beta_1 = 4,    beta_2 = 1

We therefore see that it is indeed possible to move the effect of the RHP-zero to a particular output, but we must pay for this by having to accept some interaction.
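The zero direction and the interaction coefficients beta_1 = 4 and beta_2 = 1 for this example can be verified numerically; the sketch below recovers y_z as the left null vector of G(z):

```python
import numpy as np

z = 0.5
Gz = (1.0 / ((0.2*z + 1) * (z + 1))) * np.array([[1.0, 1.0],
                                                 [1.0 + 2*z, 2.0]])
# Output zero direction: left null vector of G(z), i.e. y_z^H G(z) = 0
U, s, Vh = np.linalg.svd(Gz)
yz = U[:, -1]
assert s[-1] < 1e-12                  # G(z) has lost rank at the zero
yz = yz / yz[0] * 2 / np.sqrt(5)      # fix sign/scale -> [2, -1]/sqrt(5)
assert np.allclose(yz, np.array([2.0, -1.0]) / np.sqrt(5))

# Interaction needed to keep output k perfectly controlled:
# beta_j = -2 y_zj / y_zk for j != k
beta_1 = -2 * yz[0] / yz[1]   # output 1 perfect, zero pushed to output 2
beta_2 = -2 * yz[1] / yz[0]   # output 2 perfect, zero pushed to output 1
assert np.isclose(beta_1, 4.0) and np.isclose(beta_2, 1.0)
```

The larger beta_1 confirms that pushing the zero away from output 1, where its direction mainly lies, is the expensive choice.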

We note that the magnitude of the interaction, as expressed by beta_k, is largest for the case where output 1 is perfectly controlled. This is reasonable since the zero output direction y_z = [0.89, -0.45]^T is mainly in the direction of output 1, so we have to "pay more" to push its effect to output 2. This was also observed in the controller designs in Section 3.5; see Figure 3.10 on page 86.

We see from the above example that by requiring a decoupled response from r to y, as in design T_0(s) in (6.20), we have to accept that the multivariable RHP-zero appears as a RHP-zero in each of the diagonal elements of T(s): T_0(s) has two RHP-zeros at s = z, whereas G(s) has only one. In other words, requiring a decoupled response generally leads to the introduction of additional RHP-zeros in T(s) which are not present in the plant G(s).

We also see that we can move the effect of the RHP-zero to a particular output, but we then have to accept some interaction. This is stated more exactly in the following theorem.

Theorem 6.2 Assume that G(s) is square, functionally controllable and stable, and has a single RHP-zero at s = z and no RHP-pole at s = z. Then, if the k'th element of the output zero direction is non-zero, i.e. y_zk != 0, it is possible to obtain "perfect" control on all outputs j != k, with the remaining output exhibiting no steady-state offset. Specifically, T can be chosen of the form

    t_jj(s) = 1 for j != k,    t_kk(s) = (-s+z)/(s+z),
    t_kj(s) = beta_j s/(s+z) for j != k,    all other elements zero     (6.21)

where

    beta_j = -2 y_zj / y_zk  for j != k     (6.22)

Proof: It is clear that (6.21) satisfies the interpolation constraint y_z^H T(z) = 0; see also Holt and Morari (1985b).

The effect of moving completely the effect of a RHP-zero to output k is quantified by (6.22). We see that if the zero is not "naturally" aligned with this output, i.e. if |y_zk| is much smaller than 1, then the interactions will be significant, in terms of yielding some |beta_j| much larger than 1 in magnitude. In particular, we cannot move the effect of a RHP-zero to an output corresponding to a zero element in y_z, which occurs frequently if we have a RHP-zero pinned to a subset of the outputs.

Exercise 6.3 Consider the plant

    G(s) = [ 1    alpha     ]     (6.23)
           [ alpha  1/(s+1) ]

(a) Find the zero and its output direction. (Answer: z = (1 - alpha^2)/alpha^2 and y_z = c [-alpha, 1]^T.)
(b) Which values of alpha yield a RHP-zero, and which of these values is best/worst in terms of achievable performance? (Answer: We have a RHP-zero for |alpha| < 1. Best for alpha = 0 with zero at infinity; if control at steady-state is required, then worst for alpha = 1 with zero at s = 0.)
(c) Suppose alpha = 0.1. Which output is the most difficult to control? Illustrate your conclusion using Theorem 6.2. (Answer: Output 2 is the most difficult, since the zero is mainly in that direction. We get strong interaction with beta = 20 if we want to control y_2 perfectly.)

Exercise 6.4 Repeat the above exercise for the plant

    G(s) = [ 1      alpha                               ]     (6.24)
           [ alpha  (s - alpha)/((alpha+2)^2 (s+1))     ]

6.6 Limitations imposed by unstable (RHP) poles

For unstable plants we need feedback for stabilization. More precisely, the presence of an unstable pole p requires for internal stability T(p) y_p = y_p, where y_p is the output pole direction. From the bound ||w_T(s) T(s)||_inf >= |w_T(p)| in (6.11) we find that a RHP-pole p imposes restrictions on sigma_max(T) which are identical to those derived on |T| for SISO systems in Section 5.8. As for SISO systems, this imposes the following two limitations:

RHP-pole Limitation 1 (input usage). The transfer function KS from plant outputs to plant inputs must satisfy for any RHP-pole p (Havre and Skogestad, 1997; Havre and Skogestad, 2001)

    ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2     (6.25)

where u_p is the input pole direction, and G_s is the "stable version" of G with its RHP-poles mirrored into the LHP. This bound is tight in the sense that there always exists a controller (possibly improper) which achieves the bound. (6.25) directly generalizes the bound (5.42) for SISO systems.

RHP-pole Limitation 2 (bandwidth). We need to react sufficiently fast, and we must require that sigma_max(T(jw)) is about 1 or larger up to the frequency 2|p|, approximately.

Example 6.2 Consider the following multivariable plant

    G(s) = [ (s-z)/((0.1s+1)(s-p))   (s-z)(s+p)/((0.1s+1)(s-p)) ]   with z = -2.5 and p = 2     (6.26)
           [ 0                        1                          ]

The plant has a RHP-pole at p = 2 (plus a LHP-zero at -2.5, which poses no limitation). The corresponding input and output pole directions are

    u_p = [ 0.24, 0.97 ]^T,    y_p = [ 1, 0 ]^T
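The pole directions can be recovered numerically by evaluating G just off the pole, where the dominant singular vectors approximate u_p and y_p (a sketch for the plant of Example 6.2, with z = -2.5 and p = 2):

```python
import numpy as np

z, p = -2.5, 2.0

def G(s):
    """Plant (6.26): RHP-pole at p = 2, LHP-zero at z = -2.5."""
    d = (0.1*s + 1) * (s - p)
    return np.array([[(s - z)/d, (s - z)*(s + p)/d],
                     [0.0,       1.0]])

# Near the pole the gain blows up in one direction; the corresponding
# singular vectors give the output and input pole directions.
U, sv, Vh = np.linalg.svd(G(p + 1e-7))
yp, up = U[:, 0], Vh[0, :].conj()
assert np.allclose(np.abs(yp), [1.0, 0.0], atol=1e-3)
assert np.allclose(np.abs(up),
                   np.abs(np.array([1.0, 4.0]) / np.sqrt(17)), atol=1e-3)
```

Here [1, 4]/sqrt(17) = [0.24, 0.97], matching the directions quoted in the example.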

The RHP-pole at p = 2 can be factorized as G(s) = B_p^{-1}(s) G_s(s), where B_p is an all-pass factor and G_s is the "stable version" of G with the RHP-pole mirrored into the LHP:

    B_p(s) = [ (s-p)/(s+p)   0 ]      G_s(s) = [ (s-z)/((0.1s+1)(s+p))   (s-z)/(0.1s+1) ]
             [ 0             1 ]               [ 0                        1              ]

Consider input usage in terms of KS. From (6.25) we must have

    ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2

Havre (1998) presents more details, including state-space realizations for controllers that achieve this bound.

6.7 RHP-poles combined with RHP-zeros

For SISO systems we found that performance is poor if the plant has a RHP-pole located close to a RHP-zero. This is also the case in MIMO systems, provided that the directions coincide. For example, for a MIMO plant with a single RHP-zero z and a single RHP-pole p we derive from (6.14) and (6.16)

    ||S||_inf >= c,   ||T||_inf >= c,   c = sqrt( sin^2(phi) + (|z + conj(p)|^2 / |z - p|^2) cos^2(phi) )     (6.27)

where phi = arccos |y_z^H y_p| is the angle between the RHP-zero and RHP-pole directions. We next consider an example which demonstrates the importance of the directions as expressed by the angle phi.

Example 6.3 Consider the plant

    G_alpha(s) = [ 1/(s-p)   0       ] U_alpha [ (s-z)/(0.1s+1)   0              ]   with z = 2, p = 3     (6.28)
                 [ 0         1/(s+p) ]         [ 0                (s+z)/(0.1s+1) ]

    U_alpha = [ cos(alpha)   -sin(alpha) ]
              [ sin(alpha)    cos(alpha) ]

which has, for all values of alpha, a RHP-zero at z = 2 and a RHP-pole at p = 3. For alpha = 0 deg the rotation matrix is U_0 = I, and the plant consists of two decoupled subsystems

    G_0(s) = [ (s-z)/((0.1s+1)(s-p))   0                       ]
             [ 0                        (s+z)/((0.1s+1)(s+p))  ]

Here the subsystem g11 has both a RHP-pole and a RHP-zero, and closed-loop performance is expected to be poor. On the other hand, there are no particular control problems related to the subsystem g22. Next, consider alpha = 90 deg, for which

    U_90 = [ 0   -1 ]      G_90(s) = [ 0                         -(s+z)/((0.1s+1)(s-p)) ]
           [ 1    0 ]                [ (s-z)/((0.1s+1)(s+p))      0                      ]

and we again have two decoupled subsystems, but this time in the off-diagonal elements. The main difference, however, is that there is no interaction between the RHP-pole and RHP-zero in this case, so we expect this plant to be easier to control. For intermediate values of alpha we do not have decoupled subsystems, and there will be some interaction between the RHP-pole and RHP-zero.

Since in (6.28) the RHP-pole is located at the output of the plant, its output direction is fixed, y_p = [1, 0]^T, for all values of alpha. On the other hand, the RHP-zero output direction changes from [1, 0]^T for alpha = 0 deg to [0, 1]^T for alpha = 90 deg. Thus, the angle phi between the pole and zero directions also varies between 0 deg and 90 deg, but phi and alpha are not equal. This is seen from the following table, where we also give the values of ||S||_inf and ||T||_inf obtained by an H-infinity optimal S/KS design, for four rotation angles, alpha = 0, 30, 60 and 90 deg:

    alpha    y_z                 phi = arccos|y_z^H y_p|    ||S||_inf    ||T||_inf    gamma (S/KS)
    0 deg    [1, 0]^T            0 deg                      5.04         5.55         7.00
    30 deg   [0.33, -0.94]^T     70.9 deg                   1.89         2.01         2.60
    60 deg   [0.11, -0.99]^T     83.4 deg                   1.15         1.31         1.59
    90 deg   [0, 1]^T            90 deg                     0.98         1.01         1.53

Table 6.1: MIMO plant (6.28): angle between RHP-pole and RHP-zero and achievable performance.

Figure 6.1: MIMO plant with angle between RHP-pole and RHP-zero. Response to step in reference r = [1, -1]^T with H-infinity controller for the four values of alpha (panels: alpha = 0, 30, 60, 90 deg; outputs plotted over 0-5 time units). Solid line: y_1. Dashed line: y_2.
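The angle phi and the lower bound c in (6.27) can be reproduced numerically for the plant (6.28), with z = 2 and p = 3 as in the example (a sketch; the tabulated c values 5.0, 1.89, 1.15 and 1.0 follow from the formula):

```python
import numpy as np

z, p = 2.0, 3.0
yp = np.array([1.0, 0.0])   # output pole direction, fixed for all alpha

def Galpha(s, a):
    Ua = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    D1 = np.diag([1.0/(s - p), 1.0/(s + p)])
    D2 = np.diag([(s - z)/(0.1*s + 1), (s + z)/(0.1*s + 1)])
    return D1 @ Ua @ D2

for a_deg, c_tab in [(0, 5.0), (30, 1.89), (60, 1.15), (90, 1.0)]:
    a = np.deg2rad(a_deg)
    U, sv, Vh = np.linalg.svd(Galpha(z, a))
    yz = U[:, -1]                          # output zero direction, yz^H G(z)=0
    phi = np.arccos(min(abs(yz @ yp), 1.0))
    c = np.sqrt(np.sin(phi)**2
                + (abs(z + p)**2 / abs(z - p)**2) * np.cos(phi)**2)
    assert abs(c - c_tab) < 0.01           # lower bound on ||S|| and ||T||
```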

The corresponding responses to a step change in the reference, r = [1, -1]^T, are shown in Figure 6.1. The minimum H-infinity norm for the overall S/KS problem is given by the value of gamma in Table 6.1. We used the following weights in the design

    W_u = I,    W_P = ((s/M + w_B*)/s) I,    M = 2,  w_B* = 0.5     (6.29)

The weight W_P indicates that we require ||S||_inf less than 2, and require tight control up to a frequency of about w_B* = 0.5 rad/s.

Several things about the example are worth noting:

1. We see from the simulation for alpha = 0 deg in Figure 6.1 that the response for y_1 is very poor. This is as expected, because of the closeness of the RHP-pole and zero (z = 2, p = 3).

2. The bounds in (6.27) are tight, since there is only one RHP-zero and one RHP-pole. This can be confirmed numerically by selecting W_u = 0.01 I, w_B* = 0.01 and M large (W_u and w_B* are small, so the main objective is to minimize the peak of S). We find with these weights that the H-infinity designs for the four angles yield ||S||_inf = 5.0, 1.89, 1.15 and 1.00, which are very close to the lower bounds c in (6.27).

3. For alpha = 0 deg we have ||S||_inf = ||T||_inf = 5.0, so it is clearly impossible to get ||S||_inf less than 2, as required by the performance weight W_P.

4. The H-infinity optimal controller is unstable for alpha = 0 and 30 deg. This is not altogether surprising, because for alpha = 0 deg the plant becomes two SISO systems, one of which needs an unstable controller to stabilize it since p > z (see Section 5.9).

5. For alpha = 90 deg the RHP-pole and RHP-zero do not interact. From the simulation we see that y_1 (solid line) has an overshoot due to the RHP-pole, whereas y_2 (dashed line) has an inverse response due to the RHP-zero.

In conclusion, pole and zero directions provide useful information about a plant, as do the values of c in (6.27). The angle phi between the pole and zero is quite different from the rotation angle alpha at intermediate values between 0 and 90 deg. Note also that the output pole and zero directions depend on the relative scaling of the outputs, which must therefore be done appropriately prior to any analysis.

Exercise 6.5 Consider the plant in (6.26), but with z = 2.5, so that the plant now also has a RHP-zero. Compute the lower bounds on ||S||_inf and ||KS||_inf.

6.8 Performance requirements imposed by disturbances

For SISO systems we found that large and "fast" disturbances require tight control and a large bandwidth. The same results apply to MIMO systems, but again the issue of directions is important.

Definition 6.2 Disturbance direction. Consider a single (scalar) disturbance d and let the vector g_d represent its effect on the outputs (y = g_d d). The disturbance direction is defined as

    y_d = g_d / ||g_d||_2     (6.30)

The associated disturbance condition number is defined as

    gamma_d(G) = sigma_max(G) || G^+ y_d ||_2     (6.31)

Here G^+ is the pseudo-inverse, which is G^{-1} for a non-singular G.

Remark. We here use g_d (rather than G_d) to show that we consider a single disturbance, i.e. g_d is a vector. For a plant with many disturbances, g_d is a column of the matrix G_d.

The disturbance condition number provides a measure of how well a disturbance is aligned with the plant. It may vary between 1 (if the disturbance is in the "good" direction) and the condition number gamma(G) = sigma_max(G)/sigma_min(G) (if it is in the "bad" direction). Here the "good" and "bad" directions are the output directions in which the plant has its largest and smallest gain, respectively; see Chapter 3.

In the following, let r = 0, and assume that the disturbance has been scaled such that at each frequency the worst-case disturbance may be selected as |d(w)| = 1. Also assume that the outputs have been scaled such that the performance objective is that at each frequency the 2-norm of the error should be less than 1. With feedback control, e = S g_d d, and the performance objective is then satisfied if

    ||S g_d||_2 < 1  for all w   <=>   ||S y_d||_2 < 1/||g_d||_2  for all w     (6.32)

which shows that S must be less than 1/||g_d||_2 only in the direction of y_d. We can also derive bounds in terms of the singular values of S. Since y_d is a vector, we have

    sigma_min(S) <= ||S y_d||_2 <= sigma_max(S)     (6.33)

Now sigma_max(S) = 1/sigma_min(I+L) and sigma_min(S) = 1/sigma_max(I+L), and we therefore have:

- For acceptable performance (||S g_d||_2 < 1) we must at least require that sigma_max(I+L) is larger than ||g_d||_2, and we may have to require that sigma_min(I+L) is larger than ||g_d||_2.

Remark. For SISO systems we used this to derive tight bounds on the sensitivity function and the loop gain, |S| < 1/|g_d| and |1+L| > |g_d|. A similar derivation is complicated for MIMO systems because of directions, and we instead use (6.32).

Plant with RHP-zero. If G(s) has a RHP-zero at s = z, then the performance may be poor if the disturbance is aligned with the output direction of this zero. To see this, use y_z^H S(z) = y_z^H and apply the maximum modulus principle to f(s) = y_z^H S g_d to get

    ||S g_d||_inf >= |y_z^H g_d(z)| = |y_z^H y_d| ||g_d(z)||_2     (6.35)
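A short sketch of (6.31), using the well-known distillation gain matrix from this book as an illustrative G, confirms the two extremes of the disturbance condition number:

```python
import numpy as np

def dist_condition_number(G, gd):
    """gamma_d = sigma_max(G) * ||G^+ y_d||_2, y_d = gd/||gd||_2, per (6.31)."""
    yd = gd / np.linalg.norm(gd)
    return np.linalg.norm(G, 2) * np.linalg.norm(np.linalg.pinv(G) @ yd)

G = np.array([[87.8, -86.4], [108.2, -109.6]])
U, s, Vh = np.linalg.svd(G)

# Disturbance aligned with the high-gain output direction: gamma_d = 1
assert np.isclose(dist_condition_number(G, U[:, 0]), 1.0)
# Aligned with the low-gain direction: gamma_d = condition number of G
assert np.isclose(dist_condition_number(G, U[:, 1]), s[0] / s[1])
```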

To satisfy ||S g_d||_inf < 1, we must then for a given disturbance at least require

    |y_z^H g_d(z)| < 1     (6.36)

where y_z is the direction of the RHP-zero. This provides a generalization of the corresponding SISO condition |g_d(z)| < 1 from Chapter 5. For combined disturbances the condition is ||y_z^H G_d(z)||_2 < 1.

Remark. In the above development we consider at each frequency performance in terms of the 2-norm of the error. However, the scaling procedure presented in Section 1.4 leads naturally to the vector max-norm as the way to measure signals and performance. Fortunately, this difference is not too important, and we will neglect it in the following. The reason is that for an m x 1 vector a we have ||a||_max <= ||a||_2 <= sqrt(m) ||a||_max (see (A.94)), so the max- and 2-norms are at most a factor sqrt(m) apart.

Example 6.4 Consider the following plant and disturbance models

    G(s) = 1/(s+2) [ s-1    4      ]      g_d(s) = 6/(s+2) [ k_d ]     (6.37)
                   [ 4.5    2(s-1) ]                       [ 1   ]

G(s) has a RHP-zero at z = 4, and in Example 4.11 on page 134 we have already computed the zero direction. From this we get

    y_z^H g_d(z) = [ 0.83  -0.55 ] [ k_d ] = 0.83 k_d - 0.55
                                   [ 1   ]

and from (6.36) we conclude that the plant is not input-output controllable if |0.83 k_d - 0.55| >= 1, i.e. if k_d >= 1.87 or k_d <= -0.54. We cannot really conclude that the plant is controllable for intermediate values of k_d, since (6.36) is only a necessary (and not sufficient) condition for acceptable performance, and there may also be other factors that determine controllability, such as input constraints, which are discussed next.

Exercise. Show that the disturbance condition number may be interpreted as the ratio between the actual input for disturbance rejection and the input that would be needed if the same disturbance were aligned with the "best" plant direction.

6.9 Limitations imposed by input constraints

Constraints on the manipulated variables can limit the ability to reject disturbances and track references. As was done for SISO plants in Chapter 5, we will consider first the case of perfect control (e = 0) and then of acceptable control (||e|| <= 1). We derive the results for disturbances, and the corresponding results for reference tracking are obtained by replacing G_d by R. It is assumed that the disturbances and outputs have been appropriately scaled, and the question is whether the plant is input-output controllable, i.e. whether we can keep the error acceptably small for any |d| <= 1.

Remark. The results in this section apply to both feedback and feedforward control.

6.9.1 Inputs for perfect control

We here consider the question: can the disturbances be rejected perfectly while maintaining ||u|| <= 1? To answer this, we must quantify the set of possible disturbances and the set of allowed input signals. We will consider both the max-norm and the 2-norm. The vector max-norm (largest element) is the most natural choice when considering input saturation, and is also the most natural in terms of our scaling procedure. However, for mathematical convenience we will also consider the vector 2-norm (Euclidean norm). In most cases the difference between these two norms is of little practical significance.

Max-norm and square plant. For a square plant the input needed for perfect disturbance rejection is u = -G^{-1} G_d d (as for SISO systems). Consider a single disturbance (g_d is a vector). Then the worst-case disturbance is |d(w)| = 1, and input saturation is avoided (||u||_max <= 1) if all elements in the vector G^{-1} g_d are less than 1 in magnitude, that is,

    || G^{-1} g_d ||_max <= 1,  for all w

For simultaneous disturbances (G_d is a matrix) the corresponding requirement is

    || G^{-1} G_d ||_{i-inf} <= 1,  for all w     (6.38)

where || . ||_{i-inf} is the induced max-norm (induced infinity-norm, maximum row sum), see (A.105). However, it is usually recommended in a preliminary analysis to consider one disturbance at a time, for example by plotting, as functions of frequency, the individual elements of the matrix G^{-1} G_d. This yields more information about which particular input is most likely to saturate, and which disturbance is the most problematic.

Two-norm. We here measure both the disturbance and the input in terms of the 2-norm. Assume that G has full row rank, so that the outputs can be perfectly controlled. Then the smallest inputs (in terms of the 2-norm) needed for perfect disturbance rejection are

    u = -G^+ g_d d     (6.39)

where G^+ = G^H (G G^H)^{-1} is the pseudo-inverse, see (A.63). With a single disturbance, the requirement ||u||_2 <= 1 for perfect control then becomes || G^+ g_d ||_2 <= 1. For combined reference changes, ||r~(w)||_2 <= 1, the corresponding condition for perfect control with ||u||_2 <= 1 becomes

    sigma_max(G^+ R) <= 1,  or equivalently (see (A.61))  sigma_min(R^{-1} G) >= 1,  for all w <= w_r     (6.40)

where w_r is the frequency up to which reference tracking is required. Usually R is diagonal with all elements larger than 1, and (6.40) then requires sigma_min(G(jw)) to be large; we must at least require

    sigma_min(G(jw)) >= 1,  for all w <= w_r     (6.41)
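Both perfect-control conditions are one-liners to check numerically. The sketch below uses a hypothetical scaled 2x2 plant and disturbance vector (not from the text):

```python
import numpy as np

# Hypothetical scaled 2x2 plant and one disturbance column (|d| <= 1)
G  = np.array([[2.0, 1.0], [1.0, 2.0]])
gd = np.array([1.5, 0.5])

u = -np.linalg.solve(G, gd)               # perfect rejection, u = -G^{-1} gd
assert np.linalg.norm(u, np.inf) <= 1.0   # max-norm condition holds

# 2-norm variant for a (possibly non-square) plant uses the pseudo-inverse:
u2 = -np.linalg.pinv(G) @ gd
assert np.linalg.norm(u2) <= 1.0          # condition ||G^+ gd||_2 <= 1
```

For this disturbance the required input has max-norm about 0.83, so perfect rejection is feasible without saturation.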

6.9.2 Inputs for acceptable control

Let r = 0 and consider the response e = G u + g_d d to a disturbance d. The question we want to answer in this subsection is: is it possible to achieve ||e||_max <= 1 for any |d| <= 1 using inputs with ||u||_max <= 1? We here use the max-norm, ||.||_max (the vector infinity-norm), for the vector signals.

We consider this problem frequency-by-frequency, by considering the maximum allowed disturbance. This means that we neglect the issue of causality, which is important for plants with RHP-zeros and time delays. The resulting conditions are therefore only necessary (i.e. minimum requirements) for achieving ||e||_max <= 1.

Exact conditions. Mathematically, the problem can be formulated in several different ways, e.g. in terms of the minimum achievable control error or the minimum required input, see Skogestad and Wolff (1992). We here use the latter approach. The worst-case disturbance is then |d| = 1, and the problem at each frequency is to compute

    U_min := min_u ||u||_max  subject to  ||G u + g_d||_max <= 1     (6.42)

A necessary condition for avoiding input saturation (for each disturbance) is then

    U_min <= 1,  for all w     (6.43)

If G and g_d are real (i.e. at steady-state), then (6.42) can be formulated as a linear programming (LP) problem, and in the general case as a convex optimization problem.

For SISO systems we have an analytical solution of (6.42); from the proof of (5.59) we get U_min = (|g_d| - 1)/|G|. A necessary condition for avoiding input saturation (for each disturbance) is then

    SISO:  |G| >= |g_d| - 1  at frequencies where |g_d| > 1     (6.44)

We would like to generalize this result to MIMO systems. Unfortunately, we do not have an exact analytical result, but a nice approximate generalization, (6.47) below, is available. To simplify the problem we consider one disturbance at a time, and, also to provide more insight, we make the approximation in (6.46) below.

Approximate conditions in terms of the SVD. At each frequency the singular value decomposition of the plant (possibly non-square) is G = U Sigma V^H. Introduce the rotated control error and rotated input

    e^ = U^H e,    u^ = V^H u     (6.45)

and assume that the max-norm is approximately unchanged by these rotations,

    ||e^||_max ~ ||e||_max,    ||u^||_max ~ ||u||_max     (6.46)

From (A.124) this would be an equality for the 2-norm. For the max-norm, the error introduced by the approximation is at most a factor sqrt(m), where m is the number of elements in the vector, see (A.94).

We then find that each singular value of G must approximately satisfy

    MIMO:  sigma_i(G) >= |u_i^H g_d| - 1  at frequencies where |u_i^H g_d| > 1     (6.47)

where u_i is the i'th output singular vector of G, and |u_i^H g_d| may be interpreted as the projection of g_d onto the i'th output direction of the plant. Condition (6.47) provides a nice generalization of (6.44): it is a necessary condition for achieving acceptable control (||e||_max <= 1) for a single disturbance (|d| = 1) with ||u||_max <= 1, assuming that (6.46) holds.

Proof of (6.47): Let r = 0 and consider the response e = G u + g_d d to a single disturbance, with worst case |d| = 1. From the SVD, U^H e = U^H(G u + g_d) = Sigma u^ + U^H g_d, where the last equality follows since U^H G = Sigma V^H. For the i'th rotated control error this gives

    e^_i = sigma_i(G) u^_i + u_i^H g_d     (6.48)

where u^_i (a scalar) is the i'th rotated plant input. We want to find the smallest possible input such that |e^_i| <= 1; this is a scalar problem similar to the SISO case in (5.59). If |u_i^H g_d| <= 1, we may simply set u^_i = 0. Otherwise, the minimum input is obtained when sigma_i(G) u^_i is "lined up" with u_i^H g_d to make |e^_i| = 1, which gives

    |u^_i| = (|u_i^H g_d| - 1) / sigma_i(G)     (6.49)

and (6.47) follows by requiring |u^_i| <= 1.

Based on (6.47) we can find out:

1. For which disturbances and at which frequencies input constraints may cause problems. This may give ideas on which disturbances should be reduced, for example by redesign.

2. In which direction i the plant gain is too small. By looking at the corresponding input singular vector, v_i, one can determine which actuators should be redesigned (to get more power in certain directions), and by looking at the corresponding output singular vector, u_i, one can determine on which outputs we may have to reduce our performance requirements.

Several disturbances. For combined disturbances, one requires the i'th row sum of U^H G_d to be less than sigma_i(G) + 1 (at frequencies where the i'th row sum is larger than 1). However, usually we derive more insight by considering one disturbance at a time.

Reference commands. As usual, similar results are derived for references by replacing G_d by R.
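Condition (6.47) is easy to automate. The sketch below uses a hypothetical diagonal plant to show that a small gain sigma_i only matters when the disturbance actually projects onto direction i:

```python
import numpy as np

def acceptable_control_ok(G, gd):
    """Necessary condition (6.47): sigma_i(G) >= |u_i^H gd| - 1 for each i."""
    U, s, Vh = np.linalg.svd(G)
    proj = np.abs(U.conj().T @ gd)   # |u_i^H gd|: disturbance seen in dir. i
    return bool(np.all(s >= proj - 1.0))

# Hypothetical plant: strong in direction 1, weak in direction 2
G = np.diag([10.0, 0.1])
assert acceptable_control_ok(G, np.array([5.0, 0.0]))       # easy direction
assert not acceptable_control_ok(G, np.array([0.0, 5.0]))   # gain too small
```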

We will now analyze whether the disturbance rejection requirements will cause input saturation, by considering separately the two cases of perfect control (e = 0) and acceptable control (||e||_max <= 1).

Example 6.5 Distillation process. Consider a 2 x 2 plant with two disturbances. The appropriately scaled steady-state model is

    G = 0.5 [ 87.8    -86.4  ]      G_d = [ 7.88    8.81  ]     (6.50)
            [ 108.2   -109.6 ]            [ 11.72   11.19 ]

This is a model of a distillation column with product compositions as outputs, reflux and boilup as inputs, and feed rate (20% change) and feed composition (20% change) as disturbances. The elements in G are scaled by a factor 0.5 compared to the model in Chapter 3. From an SVD of G we have sigma_max(G) = 98.6 and sigma_min(G) = 0.70. Some immediate observations:

1. The elements of the matrix G_d are larger than 1, so control is needed to reject the disturbances.
2. Since sigma_min(G) = 0.70, we are able to perfectly track reference changes of magnitude 0.70 (in terms of the 2-norm) without reaching input constraints.
3. The elements in G are about 5 times larger than those in G_d, which suggests that there should be no problems with input constraints.
4. The condition number gamma(G) = 141.7 is large, but this does not by itself imply control problems. The large value of the condition number is not caused by a small sigma_min(G) (which would be a problem), but rather by a large sigma_max(G).
5. The disturbance condition numbers, gamma_d(G), for the two disturbances are 11.8 and 1.5, respectively. This indicates that the direction of disturbance 1 is less favourable than that of disturbance 2.

1. Perfect control. The inputs needed for perfect disturbance rejection are u = -G^{-1} G_d d, where

    G^{-1} G_d = [ -1.09    -0.008 ]
                 [ -1.29    -0.213 ]

We note that perfect rejection of disturbance 1 requires an input u = [-1.09, -1.29]^T, which is larger than 1 in magnitude. Thus, perfect control of disturbance 1 is not possible without violating input constraints. On the other hand, perfect rejection of disturbance 2 is possible, as it requires a much smaller input u = [-0.008, -0.213]^T.

2. Approximate result for acceptable control. We will use the approximate requirement (6.47) to evaluate the inputs needed for acceptable control. We have

    U^H G_d = [ 14.08    14.25 ]      sigma_1(G) = 98.6,  sigma_2(G) = 0.70
              [ -1.17    -0.11 ]

and the magnitude of each element in the i'th row of U^H G_d should be less than sigma_i(G) + 1 to avoid input constraints. In the high-gain direction this is easily satisfied, since 14.08 and 14.25 are both much less than sigma_1(G) + 1 = 99.6; from (6.49) the required input in this direction is only about u^_1 = (14.1 - 1)/98.6 = 0.13 for both disturbances, which is much less than 1. The requirement is also satisfied in the low-gain direction, since 1.17 and 0.11 are both less than sigma_2(G) + 1 = 1.70. Thus, acceptable control is possible, but we note that the margin is relatively small for disturbance 1,

3. Exact numerical result for acceptable control. The exact values of the minimum inputs needed to achieve acceptable control are small (well below 1 in magnitude) for both disturbances, which confirms that input saturation is not a problem. This seems inconsistent with the above approximate results, where we found disturbance 1 to be more difficult. However, the results are consistent if for both disturbances control is only needed in the high-gain direction, for which the approximate requirement (6.49) gave the same value, about 0.13, for both disturbances; this is much less than 1. The reason is that we only need to reject about 12% (1 - 1/1.13) of disturbance 1 in the low-gain direction, whereas disturbance 2 requires no control in this direction (as its effect is 0.11, which is less than 1). However, this changes drastically if disturbance 1 is larger, since then a much larger fraction of it must be rejected.

From the example we conclude that it is difficult to judge, simply by looking at the magnitude of the elements in Gd, whether a disturbance is difficult to reject or not. It would appear from the column vectors of Gd in (6.50) that the two disturbances have almost identical effects, and the exact values of the minimum inputs indicate that the two disturbances are about equally difficult. However, we found that disturbance 1 may be much more difficult to reject, because it has a component of 1.13 in the low-gain direction of G, which is about 10 times larger than the value of 0.11 for disturbance 2. On the other hand, the results based on perfect control are misleading, since acceptable control is indeed possible. The discussion also illustrates an advantage of the approximate analytical method in (6.47) over the exact numerical method, namely that we can easily see whether we are close to a borderline value where control may be needed in some direction; here 1.13 was just above 1, while no such "warning" is provided by the exact numerical method.

The fact that disturbance 1 is more difficult is confirmed in Section 10.10 on page 455, where we also present closed-loop responses.

Exercise 6.6 Consider again the plant in (6.37). Let d1 = d2 = 1 and compute, as a function of frequency, the required inputs G⁻¹gd(jω) for perfect control. You will find that both inputs are about 2 in magnitude at low frequency. Next, evaluate G(jω) = U Σ V^H and compute U^H Gd(jω) as a function of frequency; compare its elements with those of Σ(jω) + I to see whether "acceptable control" (||e||max < 1) is possible.

6.9.3 Unstable plant and input constraints

Active use of inputs is needed to stabilize an unstable plant, and from (6.25) we must require ||KS||∞ ≥ ||u_p^H Gs(p)⁻¹||2, where KS is the transfer function from outputs to inputs, u = -KS(Gd d + n). If the required inputs exceed the constraints, then stabilization is most likely not possible.
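The directional disturbance analysis in the example above can be reproduced numerically. The sketch below is our illustration, not the book's code, and the gain matrices are the scaled steady-state values as best recoverable from the example, so treat the printed numbers as approximate:

```python
import numpy as np

# Scaled steady-state gains from the example (plant and two disturbances).
G = 0.5 * np.array([[87.8, -86.4], [108.2, -109.6]])
Gd = np.array([[7.88, 8.81], [11.72, 11.19]])

U, sig, Vh = np.linalg.svd(G)
gamma = sig[0] / sig[1]                 # condition number of the plant

# Perfect control: u = -G^{-1} gd must have all |u_i| <= 1.
u_perfect = -np.linalg.solve(G, Gd)     # columns correspond to d1 and d2

# Acceptable control: each element of row i of U^H Gd should be smaller
# than sigma_i(G) + 1 (the approximate requirement used in the example).
proj = np.abs(U.conj().T @ Gd)
margins = proj / (sig[:, None] + 1.0)   # entries < 1 mean no saturation

print("sigma =", sig, " gamma =", gamma)
print("G^-1 Gd =\n", u_perfect)
print("|U^H Gd| =\n", proj)
print("margins =\n", margins)
```

Running this reproduces the pattern discussed above: perfect rejection of disturbance 1 demands an input larger than 1, while the acceptable-control test is satisfied in both singular-value directions, with only a modest margin in the low-gain direction.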

However. rather than simply feedforward control. In this section.  ÃË ´ · Òµ.2) are given by Output uncertainty: Input uncertainty: ¼ ¼ ´Á · Ç µ ´Á · Á µ or or Ç Á ´ ¼   µ  ½  ½ ´ ¼   µ (6.52) These forms of uncertainty may seem similar. This involves analyzing robust performance by use of the structured singular value. and the presence of uncertainty in this frequency range may result in poor performance or even instability. in this section the treatment is kept at a more elementary level and we are looking for results which depend on the plant only. the output and input uncertainties (as in Figure 6. Typically.1 or larger. . and we have that Á or Ç are diagonal matrices. for example. then we have full block (“unstructured”) uncertainty. in many cases the source of uncertainty is in the individual input or output channels. 6. but for any real system we have a crossover frequency range where the loop gain has to drop below 1. However. which are useful in picking out plants for which one might expect sensitivity to directional uncertainty. In Chapter 8. If the required inputs exceed the constraints then stabilization is most 6.53) where ¯ is the relative uncertainty in input channel . Remark. If all the elements in the matrices Á or Ç are non-zero.51) (6. the magnitude of ¯ is 0.10 Limitations imposed by uncertainty The presence of uncertainty requires the use of feedback. to get acceptable performance. we discuss more exact methods for analyzing performance with almost any kind of uncertainty and a given controller. with MIMO systems there is an additional problem in that there is also uncertainty associated with the plant directionality. we focus on input uncertainty and output uncertainty. but we will show that their implications for control may be very different.10. It is important to stress that this diagonal input uncertainty is always present in real systems. 
Sensitivity reduction with respect to uncertainty is achieved with high-gain feedback.¾¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Ù likely not possible. These issues are the same for SISO and MIMO systems.1 Input and output uncertainty In practice the difference between the true perturbed plant ¼ and the plant model is caused by a number of different sources. The main objective of this section is to introduce some simple tools. like the RGA and the condition number. Á ¯½ ¯¾ (6. However. In a multiplicative (relative) form.

However. since from (6. for input uncertainty the sensitivity may be much larger because the elements in the matrix Á  ½ can be much larger than the elements in  ½ are directly related Á .57) .2: Plant with multiplicative input and output uncertainty 6. Since diagonal input uncertainty is always present we can conclude that ¯ if the plant has large RGA elements within the frequency range where effective control is desired. we see that (6. the system may be sensitive to input uncertainty. if the RGA has small elements we cannot conclude that the sensitivity to input uncertainty is small.54) is identical to the result in (5.56) attractive. see (A. This is seen from the following expression for the ¾ ¾ case ¢ Á  ½ ¼. We then get for the two sources of uncertainty becomes ¼ Ý ¼   Ö ¼ Output uncertainty: (6.10. In particular. In this case the RGA is £ Á so For example.56) The RGA-matrix is scaling independent.80): Diagonal input uncertainty: Á  ½ Ò ½ ´ µ¯ (6. the worst-case relative control error ¼ ¾ Ö ¾ is equal to the magnitude of the relative output uncertainty ´ Ç µ. which makes the use of condition (6.ÄÁÅÁÌ ÌÁÇÆË ÁÆ ÅÁÅÇ Ë ËÌ ÅË ¾¿ ¹ Á ¹ ¹+ ¹ + Ó Õ Õ ¹+ ¹ + Figure 6. That is. for the actual plant ¼ (with uncertainty) the actual control error ¼  ½ Ö   Ö.2 Effect of uncertainty on feedforward control Consider a feedforward controller Ù ÃÖ Ö for the case with no disturbances ( assume that the plant is invertible so that we can select ¼). then it is not possible to achieve good reference tracking with feedforward control because of strong sensitivity to diagonal input uncertainty. for the nominal case with no uncertainty. We  ½ ÃÖ and achieve perfect control. The reverse statement is not true.55) Input uncertainty: Á For output uncertainty. Still. However.70) for SISO systems.57) the ¾ ½-element of Á  ½ may be large if ¾½ ½½ is large. consider a triangular plant with ½¾ the diagonal elements of Á  ½ are ¯½ and ¯¾ . 
for diagonal input uncertainty the elements of Á to the RGA.54) ÇÖ ¼  ½ Ö (6. that is. Ý Ö Ù Ö ÃÖ Ö   Ö ¼. ½½ ¯½ · ½¾ ¯¾ ¾½ ½½ ´¯½   ¯¾ µ ½½   ½¾ ¾¾ ½½ ´¯½   ¯¾ µ ¾½ ¯½ · ¾¾ ¯¾ (6.
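The link between diagonal input uncertainty and the RGA stated above is easy to check numerically: for any square nonsingular G and diagonal EI, the i'th diagonal element of G EI G⁻¹ equals the RGA-weighted sum of the channel errors. A minimal sketch (numpy; the test matrix and error values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
G = rng.standard_normal((n, n))          # any nonsingular test matrix
eps = np.array([0.2, -0.2, 0.1])         # relative errors in each input channel

Ginv = np.linalg.inv(G)
Lambda = G * Ginv.T                      # RGA: lambda_ij = g_ij * (G^{-1})_ji

lhs = np.diag(G @ np.diag(eps) @ Ginv)   # diagonal of the relative-error matrix
rhs = Lambda @ eps                       # row-wise RGA-weighted sum of eps_j

print(np.allclose(lhs, rhs))             # True for any nonsingular G and eps
```

The identity holds exactly for any dimension, since (G EI G⁻¹)ii = Σj gij εj (G⁻¹)ji = Σj λij εj.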

6.10.3 Uncertainty and the benefits of feedback

To illustrate the benefits of feedback control in reducing the sensitivity to uncertainty, we consider the effect of output uncertainty on reference tracking. As a basis for comparison we first consider feedforward control.

Feedforward control. Let the nominal transfer function with feedforward control be y = Tr r, where Tr = G Kr and Kr denotes the feedforward controller. Ideally, Tr = I. With model error the transfer function becomes Tr' = G'Kr, and the change in response is y' - y = (Tr' - Tr)r, where

    Tr' - Tr = (G' - G)G⁻¹ Tr = EO Tr        (6.58)

Thus, y' - y = EO Tr r = EO y, and with feedforward control the relative control error caused by the uncertainty is equal to the relative output uncertainty.

Feedback control. With one degree-of-freedom feedback control the nominal transfer function is y = T r, where T = L(I + L)⁻¹ is the complementary sensitivity function. Ideally, T = I. The change in response with model error is y' - y = (T' - T)r, where from (A.144)

    T' - T = S' EO T        (6.59)

Thus, y' - y = S'EO T r = S'EO y, and we see that with feedback control the effect of the uncertainty is reduced by a factor S' compared to that with feedforward control. Thus, feedback control is much less sensitive to uncertainty than feedforward control at frequencies where feedback is effective and the elements in S' are small. However, the opposite may be true in the crossover frequency range, where S' may have elements larger than 1; see Section 6.10.4.

Remark 1 For square plants, (6.59) becomes

    ΔT T⁻¹ = S' ΔG G⁻¹        (6.60)

where ΔT = T' - T and ΔG = G' - G. Equation (6.60) provides a generalization of Bode's differential relationship (2.23) for SISO systems. To see this, consider a SISO system and let ΔG → 0. Then S' → S and we have

    dT/T = S dG/G        (6.61)

Remark 2 Alternative expressions showing the benefits of feedback control are derived by introducing the inverse output multiplicative uncertainty G' = (I - EiO)⁻¹G (Horowitz and Shaked, 1975). We then get

    Feedforward control:  Tr' - Tr = EiO Tr'        (6.62)
    Feedback control:     T' - T = S EiO T'         (6.63)

(Simple proof for square plants: switch G and G' in (6.58) and (6.59), and use EiO = (G' - G)(G')⁻¹.)
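The factor-of-S' reduction in (6.59) can be seen in a one-line scalar experiment. The numbers below (plant gain 2, 10% output error, loop gain 50) are hypothetical, chosen only to make the effect visible:

```python
# Scalar comparison of feedforward and feedback under output uncertainty.
g, eo = 2.0, 0.10          # nominal gain and relative output error: g' = (1+eo)*g
gp = (1 + eo) * g

# Feedforward with Kr = 1/g (nominal Tr = 1): relative error equals eo.
err_ff = gp / g - 1.0

# Feedback with a static controller k, giving loop gain L = g*k = 50.
k = 25.0
T = g * k / (1 + g * k)
Tp = gp * k / (1 + gp * k)
Sp = 1 / (1 + gp * k)      # perturbed sensitivity
err_fb = Tp - T            # change in closed-loop response for r = 1

print(err_ff, err_fb, Sp * eo * T)   # err_fb equals S' * eo * T, per (6.59)
```

The feedback error is smaller by precisely the factor S' ≈ 1/56, matching (6.59).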

Remark 3 Another form of (6.59) is (Zames, 1981)

    T' - T = S'(L' - L)S        (6.64)

Conclusion. From (6.59) and (6.64) we see that with feedback control T' - T is small at frequencies where feedback is effective (i.e. S and S' are small). This is usually at low frequencies. At higher frequencies we have for real systems that L is small, so T is small, and again T' - T is small. Thus with feedback, uncertainty only has a significant effect in the crossover region where S and T both have norms around 1.

6.10.4 Uncertainty and the sensitivity peak

We demonstrated above how feedback may reduce the effect of uncertainty, but we also pointed out that uncertainty may pose limitations on achievable performance, especially at crossover frequencies. The objective in the following is to investigate how the magnitude of the sensitivity, σ̄(S'), is affected by the multiplicative output uncertainty and input uncertainty given in (6.51) and (6.52). We first state some upper bounds on σ̄(S'). These involve the plant and controller condition numbers

    γ(G) = σ̄(G)/σ(G),   γ(K) = σ̄(K)/σ(K)        (6.65)

and the following minimized condition numbers of the plant and the controller

    γ*I(G) = min_{DI} γ(G DI),   γ*O(K) = min_{DO} γ(DO K)        (6.66)

where DI and DO are diagonal scaling matrices. These minimized condition numbers may be computed using (A.74) and (A.75). Similarly, we state a lower bound on σ̄(S') for an inverse-based controller in terms of the RGA-matrix of the plant.

In most cases we assume that the magnitude of the multiplicative (relative) uncertainty at each frequency can be bounded in terms of its singular value,

    σ̄(EI) ≤ |wI|   or   σ̄(EO) ≤ |wO|        (6.70)

where wI(s) and wO(s) are scalar weights. Typically the uncertainty bound, |wI| or |wO|, is 0.2 at low frequencies and exceeds 1 at higher frequencies.

The following factorizations of S', in terms of the nominal sensitivity S, form the basis for the development:

    Output uncertainty:  S' = S(I + EO T)⁻¹                                   (6.67)
    Input uncertainty:   S' = S(I + G EI G⁻¹ T)⁻¹ = G(I + EI TI)⁻¹ G⁻¹ S      (6.68)
                         S' = S(I + T K⁻¹ EI K)⁻¹ = K⁻¹(I + TI EI)⁻¹ K S      (6.69)

We assume that G and G' are stable. We also assume closed-loop stability, so that both S and S' are stable. We then get that (I + EO T)⁻¹ and (I + EI TI)⁻¹ are stable. The upper bounds below follow from (6.67)-(6.69) and singular value inequalities (see Appendix A.4) of the kind

    σ̄((I + EI TI)⁻¹) = 1/σ(I + EI TI) ≤ 1/(1 - σ̄(EI TI)) ≤ 1/(1 - σ̄(EI)σ̄(TI)) ≤ 1/(1 - |wI| σ̄(TI))

Of course, these inequalities only apply if we assume σ̄(EI TI) < 1 and |wI| σ̄(TI) < 1. For simplicity, we will not state these assumptions each time.

Upper bound on σ̄(S') for output uncertainty. From (6.67) we derive

    σ̄(S') ≤ σ̄(S) σ̄((I + EO T)⁻¹) ≤ σ̄(S) / (1 - |wO| σ̄(T))        (6.71)

From (6.71) we see that output uncertainty, be it diagonal or full block, poses no particular problem when performance is measured at the plant output. That is, if we have a reasonable margin to stability (||(I + EO T)⁻¹||∞ is not too much larger than 1), then the nominal and perturbed sensitivities do not differ very much.

Upper bounds on σ̄(S') for input uncertainty. The sensitivity function can be much more sensitive to input uncertainty than to output uncertainty.

1. General case (full-block or diagonal input uncertainty and any controller). From (6.68) and (6.69) we derive

    σ̄(S') ≤ γ(G) σ̄(S) σ̄((I + EI TI)⁻¹) ≤ γ(G) σ̄(S) / (1 - |wI| σ̄(TI))        (6.72)
    σ̄(S') ≤ γ(K) σ̄(S) σ̄((I + TI EI)⁻¹) ≤ γ(K) σ̄(S) / (1 - |wI| σ̄(TI))        (6.73)

From (6.72) and (6.73) we have the important result that if we use a "round" controller, meaning that γ(K) is close to 1, then the sensitivity function is not sensitive to input uncertainty. In many cases the bounds (6.72) and (6.73) are not very useful because they yield unnecessarily large upper bounds. To improve on this conservativeness we present below some bounds for special cases, where we either restrict the uncertainty to be diagonal or restrict the controller to be of a particular form.

2. Diagonal uncertainty and decoupling controller. Consider a decoupling controller in the form K(s) = D(s)G⁻¹(s), where D(s) is a diagonal matrix. In this case KG = D is diagonal, so TI = KG(I + KG)⁻¹ is diagonal, and EI TI is diagonal. The second identity in (6.68) may then be written S' = (G DI)(I + EI TI)⁻¹(G DI)⁻¹ S, where the diagonal matrix DI is freely chosen, and we get

    σ̄(S') ≤ γ*I(G) σ̄(S) σ̄((I + EI TI)⁻¹) ≤ γ*I(G) σ̄(S) / (1 - |wI| σ̄(TI))        (6.74)
    σ̄(S') ≤ γ*O(K) σ̄(S) σ̄((I + TI EI)⁻¹) ≤ γ*O(K) σ̄(S) / (1 - |wI| σ̄(TI))        (6.75)

The bounds (6.74) and (6.75) apply to any decoupling controller in the form K = D G⁻¹. In particular, they apply to inverse-based control, K(s) = l(s)G⁻¹(s), which yields input-output decoupling with TI = T = t·I, where t = l/(1 + l).
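The output-uncertainty bound of the form σ̄(S)/(1 - |wO| σ̄(T)) discussed above is tight for scalar loops, where the worst phase of EO can be found by a simple search. A sketch checking this at one frequency (the loop l = 0.7/s and the 20% weight are illustrative choices of ours):

```python
import numpy as np

w = 0.7                       # frequency (rad/min), illustrative
s = 1j * w
L = 0.7 / s                   # scalar loop transfer function
S = 1 / (1 + L)
T = L / (1 + L)
wO = 0.2                      # uncertainty weight: |E_O| <= 0.2

# Worst case of |S'| = |S / (1 + E_O T)| over the phase of E_O.
thetas = np.linspace(0, 2 * np.pi, 100001)
Sp = np.abs(S) / np.abs(1 + wO * np.exp(1j * thetas) * T)
worst = Sp.max()

bound = np.abs(S) / (1 - wO * np.abs(T))   # upper bound
print(worst, bound)                        # the worst case attains the bound
```

The phase search confirms that the bound is attained, so for scalar loops the inequality is an equality over the uncertainty set.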


Remark. A diagonal controller has γ*O(K) = 1, so from (6.77) below we see that (6.75) applies to both a diagonal and a decoupling controller. Another bound which does apply to any controller is given in (6.77).
3. Diagonal uncertainty (any controller). From the first identity in (6.68) we get

    S' = S (I + (G DI) EI (G DI)⁻¹ T)⁻¹

where the diagonal matrix DI is freely chosen, and we derive by singular value inequalities

    σ̄(S') ≤ σ̄(S) / (1 - γ*I(G) |wI| σ̄(T))        (6.76)
    σ̄(S') ≤ σ̄(S) / (1 - γ*O(K) |wI| σ̄(T))        (6.77)

Note that γ*O(K) = 1 for a diagonal controller, so (6.77) shows that diagonal input uncertainty does not pose much of a problem when we use decentralized control.

Lower bound on σ̄(S') for input uncertainty
Above we derived upper bounds on σ̄(S'); we will next derive a lower bound. A lower bound is more useful because it allows us to make definite conclusions about when the plant is not input-output controllable.

Theorem 6.3 Input uncertainty and inverse-based control. Consider a controller K(s) = l(s)G⁻¹(s) which results in a nominally decoupled response with sensitivity S = s·I and complementary sensitivity T = t·I, where t(s) = 1 - s(s). Suppose the plant has diagonal input uncertainty of relative magnitude |wI(jω)| in each input channel. Then there exists a combination of input uncertainties such that at each frequency

    σ̄(S') ≥ σ̄(S) ( 1 + (|wI t| / (1 + |wI t|)) ||Λ(G)||i∞ )        (6.78)

where ||Λ(G)||i∞ is the maximum row sum of the RGA and σ̄(S) = |s|.

The proof is given below. From (6.78) we see that with an inverse-based controller the worst-case sensitivity will be much larger than the nominal at frequencies where the plant has large RGA-elements. At frequencies where control is effective (|s| is small and t ≈ 1) this implies that control is not as good as expected, but it may still be acceptable. However, at crossover frequencies, where |s| and |t| = |1 - s| are both close to 1, we find that σ̄(S') in (6.78) may become much larger than 1 if the plant has large RGA-elements at these frequencies. The bound (6.78) applies to diagonal input uncertainty and therefore also to full-block input uncertainty (since it is a lower bound).
Worst-case uncertainty. It is useful to know which combinations of input errors give poor performance. For an inverse-based controller a good indicator results if we consider G EI G⁻¹, where EI = diag{εk}. If all εk have the same magnitude |wI|, then the largest possible magnitude of any diagonal element in G EI G⁻¹ is given by |wI| · ||Λ(G)||i∞. To obtain this value one may select the phase of each εk such that ∠εk = -∠λik, where i denotes the row of Λ(G) with the largest elements. Also, if Λ(G) is real (e.g. at steady-state), the signs of the εk's should be the same as those in the row of Λ(G) with the largest elements.

Proof of Theorem 6.3: (From Skogestad and Havre (1996) and Gjøsæter (1995)). Write the sensitivity function as

    S' = (I + G'K)⁻¹ = S · G(I + EI TI)⁻¹G⁻¹,   EI = diag{εk},   S = s·I        (6.79)

Since EI is a diagonal matrix, the diagonal elements of S' are given in terms of the RGA of the plant as

    s'ii = s Σ_{j=1}^{n} λij · 1/(1 + t εj),   Λ = G × (G⁻¹)^T        (6.80)

(Note that s here is a scalar sensitivity function and not the Laplace variable.) The singular value of a matrix is larger than any of its elements, so σ̄(S') ≥ max_i |s'ii|, and the objective in the following is to choose a combination of input errors εk such that the worst-case |s'ii| is as large as possible. Consider a given output i and write each term in the sum in (6.80) as

    1/(1 + t εj) = 1 - t εj/(1 + t εj)        (6.81)

We choose all εj to have the same magnitude |wI(jω)|, so we have εj = |wI| e^{j∠εj}. We also assume that |t εj| < 1 at all frequencies³, such that the phase of 1 + t εj lies between -90° and 90°. It is then always possible to select ∠εj (the phase of εj) such that the last term in (6.81) is real and negative, and we have at each frequency, with these choices for εj,

    |s'ii / s| = | Σ_{j=1}^{n} λij - Σ_{j=1}^{n} λij t εj/(1 + t εj) |
               = 1 + Σ_{j=1}^{n} |λij| |t εj| / |1 + t εj|
               ≥ 1 + (|wI t| / (1 + |wI t|)) Σ_{j=1}^{n} |λij|        (6.82)

where the first equality makes use of the fact that the row elements of the RGA sum to 1 (Σ_{j=1}^{n} λij = 1). The inequality follows since |εj| = |wI| and |1 + t εj| ≤ 1 + |t εj| = 1 + |wI t|. This derivation holds for any i (but only for one i at a time), and (6.78) follows by selecting i to maximize Σ_{j=1}^{n} |λij| (the maximum row sum of the RGA of G). ∎

We next consider three examples. In the first two, we consider feedforward and feedback control of a plant with large RGA-elements. In the third, we consider feedback control of a plant with a large condition number, but with small RGA-elements. The first two are sensitive to diagonal input uncertainty, whereas the third is not.

Example 6.6 Feedforward control of distillation process. Consider the distillation process in (3.81),

    G(s) = (1/(75s + 1)) [ 87.8   -86.4        Λ(G) = [ 35.1  -34.1
                           108.2  -109.6 ],             -34.1   35.1 ]        (6.83)

³ The assumption |t εj| < 1 is not included in the theorem since it is actually needed for robust stability, so if it does not hold we may have S' infinite for some allowed uncertainty, and (6.78) clearly holds.
With EI = diag{ε1, ε2} we get, at all frequencies,

    G EI G⁻¹ = [ 35.1 ε1 - 34.1 ε2     -27.7 ε1 + 27.7 ε2
                 43.2 ε1 - 43.2 ε2     -34.1 ε1 + 35.1 ε2 ]        (6.84)

We note, as expected from (6.57), that the RGA-elements appear in the diagonal elements of the matrix G EI G⁻¹. The elements of the matrix are largest when ε1 and ε2 have opposite signs. With a 20% error in each input channel we may select ε1 = 0.2 and ε2 = -0.2 and find

    G EI G⁻¹ = [ 13.8  -11.1
                 17.3  -13.8 ]        (6.85)

Thus, with an "ideal" feedforward controller and 20% input uncertainty, we get from (6.55) that the relative tracking error at all frequencies, including steady-state, may exceed 1000%. This demonstrates the need for feedback control. However, applying feedback control is also difficult for this plant, as seen in Example 6.7.
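The matrices above follow directly from the plant gains, as the sketch below verifies (numpy; the gain matrix is the one quoted in the example):

```python
import numpy as np

G = np.array([[87.8, -86.4], [108.2, -109.6]])   # steady-state gains of the example
EI = np.diag([0.2, -0.2])                        # 20% errors of opposite sign

M = G @ EI @ np.linalg.inv(G)                    # relative tracking error matrix
print(np.round(M, 1))   # ~[[13.8, -11.1], [17.3, -13.8]]
```

Since the largest element exceeds 10, a unit reference change can give a tracking error above 1000%, exactly as stated in the example.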

Example 6.7 Feedback control of distillation process. Consider again the distillation process G(s) in (6.83), for which we have ||Λ(G(jω))||i∞ = 69.1 and γ(G) = γ*I(G) = 141.7 at all frequencies.

1. Inverse-based feedback controller. Consider the inverse-based controller and the corresponding nominal sensitivity function

    K(s) = (0.7/s) G⁻¹(s),   S(s) = (s/(s + 0.7)) I

The nominal response is excellent, but we found from simulations in Figure 3.12 that the closed-loop response with 20% input gain uncertainty was extremely poor (we used ε1 = 0.2 and ε2 = -0.2). The poor response is easily explained from the lower RGA-bound on σ̄(S') in (6.78). With the inverse-based controller we have l(s) = 0.7/s, which has a nominal phase margin of PM = 90°, so from (2.42) we have, at the gain crossover frequency ωc = 0.7 rad/min, that |s(jωc)| = |t(jωc)| = 1/√2 = 0.707. With |wI| = 0.2, we then get from (6.78) that

    σ̄(S'(jωc)) ≥ 0.707 ( 1 + (0.707 · 0.2)/(1 + 0.707 · 0.2) · 69.1 ) = 6.76        (6.86)

(This is close to the peak value of the lower bound (6.78), which is 6.81 at a frequency of about 0.8 rad/min.) Thus, with 20% input uncertainty we may have ||S'||∞ ≥ 6.8, and this explains the observed poor closed-loop performance. For comparison, the actual worst-case peak value of σ̄(S') with the inverse-based controller is 14.5 (computed numerically using skewed-μ as discussed below). This is close to the value obtained with the particular uncertainty EI = diag{0.2, -0.2}:

    ||S'||∞ = || (I + (0.7/s) G diag{1.2, 0.8} G⁻¹)⁻¹ ||∞ = 14.2

for which the peak occurs at about 0.7 rad/min. The difference between the values 6.81 and 14.5 illustrates that the bound in terms of the RGA is generally not tight, but it is nevertheless very useful.

Next, we look at the upper bounds. Unfortunately, in this case γ*I(G) = γ*O(K) = 141.7, so the upper bounds on σ̄(S') in (6.74) and (6.75) are not very tight (they are of magnitude 141.7 at high frequencies).

2. Diagonal (decentralized) feedback controller. Consider the controller

    K(s) = (2.4·10⁻² (75s + 1)/s) diag{1, -1}   [min⁻¹]

The peak value of the upper bound on σ̄(S') in (6.77) is about 1.3, so we are guaranteed ||S'||∞ ≤ 1.3, even with 20% gain uncertainty. For comparison, the actual peak of the perturbed sensitivity function with EI = diag{0.2, -0.2} is ||S'||∞ ≈ 1.0. Of course, the problem with the simple diagonal controller is that (although it is robust) even the nominal response is poor.
The following example demonstrates that a large plant condition number, ­ ´ µ, does not necessarily imply sensitivity to uncertainty even with an inverse-based controller.

Example 6.8 Feedback control of distillation process, DV-model. In this example we consider the following distillation model given by Skogestad et al. (1988) (it is the same system as studied above, but with the DV- rather than the LV-configuration for the lower control levels, see Example 10.5):

    G(s) = (1/(75s + 1)) [ -87.8    1.4        Λ(G) = [ 0.45  0.55
                           -108.2  -1.4 ],              0.55  0.45 ]        (6.87)

We have that ||Λ(G(jω))||i∞ = 1, γ(G) = 70.8 and γ*I(G) = 1.11 at all frequencies. Since both the RGA-elements and γ*I(G) are small, we do not expect problems with input uncertainty and an inverse-based controller. Consider an inverse-based controller Kinv(s) = (0.7/s)G⁻¹(s), which yields γ(K) = γ(G) and γ*O(K) = γ*I(G). In Figure 6.3 we show the lower bound (6.78) given in terms of ||Λ||i∞ and the two upper bounds (6.74) and (6.76) given in terms of γ*I(G), for two different uncertainty weights wI. From these curves we see that the bounds are close, and we conclude that the plant in (6.87) is robust against input uncertainty even with a decoupling controller.

Remark. Relationship with the structured singular value: skewed-μ. To analyze exactly the worst-case sensitivity with a given uncertainty |wI|, we may compute skewed-μ (μs). With reference to Section 8.11, this involves computing μΔ(N), with Δ = diag{ΔI, ΔP} and

    N = [ wI TI      wI KS
          S G / μs   S / μs ]

and varying μs until μΔ(N) = 1. The worst-case performance at a given frequency is then σ̄(S'(jω)) = μs(N).

Example 6.9 Consider the plant

    G(s) = [ 1  100
             0    1 ]

for which at all frequencies Λ(G) = I, γ(G) ≈ 10⁴ and γ*I(G) = 200. The RGA-matrix is the identity, but since g12/g22 = 100 is large, we expect from (6.57) that this
ÄÁÅÁÌ ÌÁÇÆË ÁÆ ÅÁÅÇ Ë ËÌ ÅË
plant will be sensitive to diagonal input uncertainty if we use inverse-based feedback control, K = (l/s)G⁻¹. This is confirmed if we compute the worst-case sensitivity function S' for G' = G(I + wI ΔI), where ΔI is diagonal and wI = 0.2. We find by computing skewed-μ, μs(N), that the peak of σ̄(S') is ||S'||∞ = 20.3.

[Figure 6.3: Bounds on the sensitivity function for the distillation column with the DV-configuration: lower bound L1 from (6.78), upper bounds U1 from (6.76) and U2 from (6.74), shown for two uncertainty weights, (a) wI(s) = 0.2 and (b) a frequency-dependent wI(s). Frequency in rad/min.]

Note that the peak is independent of the controller gain l in this case, since G(s) is a constant matrix. Also note that with full-block ("unstructured") input uncertainty (ΔI a full matrix) the worst-case sensitivity is ||S'||∞ = 1021.
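The warning in Example 6.9 — identity RGA yet severe sensitivity — can be checked directly: for the triangular plant the relative-error matrix G EI G⁻¹ picks up a huge off-diagonal term even though its diagonal stays small. A sketch (numpy):

```python
import numpy as np

G = np.array([[1.0, 100.0], [0.0, 1.0]])
Ginv = np.linalg.inv(G)

Lam = G * Ginv.T                       # RGA is exactly the identity matrix
sig = np.linalg.svd(G, compute_uv=False)
gamma = sig[0] / sig[1]                # condition number ~ 1e4

EI = np.diag([0.2, -0.2])              # 20% diagonal input errors
M = G @ EI @ Ginv
print(Lam, gamma)
print(M)                               # off-diagonal element is -40
```

The diagonal of M is just (0.2, -0.2), consistent with Λ = I in (6.56), but the 1,2-element of -40 shows the directionality problem that the RGA alone does not reveal.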

Conclusions on input uncertainty and feedback control

The following statements apply to the frequency range around crossover. By "small", we mean about 2 or smaller. By "large", we mean about 10 or larger.

1. Condition number γ(G) or γ(K) small: robust performance to both diagonal and full-block input uncertainty; see (6.72) and (6.73).
2. Minimized condition numbers γ*I(G) or γ*O(K) small: robust performance to diagonal input uncertainty; see (6.76) and (6.77). Note that a diagonal controller always has γ*O(K) = 1.
3. RGA(G) has large elements: an inverse-based controller is not robust to diagonal input uncertainty; see (6.78). Since diagonal input uncertainty is unavoidable in practice, the rule is never to use a decoupling controller for a plant with large RGA-elements. Furthermore, a diagonal controller will most likely yield poor nominal performance for a plant with large RGA-elements, so we conclude that plants with large RGA-elements are fundamentally difficult to control.
4. γ*I(G) is large while at the same time the RGA has small elements: we cannot make any definite conclusion about the sensitivity to input uncertainty based on the bounds in this section. However, as found in Example 6.9, we may expect there to be problems.
6.10.5 Element-by-element uncertainty
Consider any complex matrix G and let λij denote the ij'th element in the RGA-matrix of G. The following result holds (Yu and Luyben, 1987; Hovd and Skogestad, 1992):

Theorem 6.4 The (complex) matrix G becomes singular if we make a relative change -1/λij in its ij'th element, that is, if a single element in G is perturbed from gij to gpij = gij(1 - 1/λij).

The theorem is proved in Appendix A.4. Thus, the RGA-matrix is a direct measure of sensitivity to element-by-element uncertainty, and matrices with large RGA-values become singular for small relative errors in the elements.

Example 6.10 The matrix G in (6.83) is non-singular. The 1,2-element of the RGA is λ12(G) = -34.1. Thus, the matrix G becomes singular if g12 = -86.4 is perturbed to

    gp12 = -86.4 (1 - 1/(-34.1)) = -88.9        (6.88)
The above result is an important algebraic property of the RGA, but it also has important implications for improved control:

1) Identification. Models of multivariable plants, G(s), are often obtained by identifying one element at a time, for example, using step responses. From Theorem 6.4 it is clear that this simple identification procedure will most likely give meaningless results (e.g. the wrong sign of the steady-state RGA) if there are large RGA-elements within the bandwidth where the model is intended to be used.

2) RHP-zeros. Consider a plant with transfer function matrix G(s). If the relative uncertainty in an element at a given frequency is larger than |1/λij(jω)|, then the plant may be singular at this frequency, implying that the uncertainty allows for a RHP-zero on the jω-axis. This is of course detrimental to performance, both in terms of feedforward and feedback control.
Remark. Theorem 6.4 seems to “prove” that plants with large RGA-elements are fundamentally difficult to control. However, although the statement may be true (see the conclusions on page 243), we cannot draw this conclusion from Theorem 6.4. This is because the assumption of element-by-element uncertainty is often unrealistic from a physical point of view, since the elements are usually coupled in some way. For example, this is the case for the distillation column process where we know that the elements are coupled due to an underlying physical constraint in such a way that the model (6.83) never becomes singular even for large changes in the transfer function matrix elements.
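Theorem 6.4 is easy to verify numerically. The sketch below perturbs the 1,2-element of the distillation matrix by the relative change -1/λ12 and checks that the determinant collapses to zero (numpy; plant data from (6.83)):

```python
import numpy as np

G = np.array([[87.8, -86.4], [108.2, -109.6]])
Lam = G * np.linalg.inv(G).T               # RGA; Lam[0, 1] is about -34.1

Gp = G.copy()
Gp[0, 1] = G[0, 1] * (1 - 1 / Lam[0, 1])   # perturb g12 by the relative change -1/lambda_12

print(np.linalg.det(G))    # about -274.4
print(np.linalg.det(Gp))   # essentially zero: Gp is singular
```

A relative change of only about 3% in a single element (1/34.1) is enough to make this plant singular, which is the quantitative content of the theorem.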


6.10.6 Steady-state condition for integral control
Feedback control reduces the sensitivity to model uncertainty at frequencies where the loop gains are large. With integral action in the controller we can achieve zero steady-state control error, even with large model errors, provided the sign of the plant, as expressed by det G(0), does not change. The statement applies for stable plants, or more generally for cases where the number of unstable poles in the plant does not change. The conditions are stated more exactly in the following theorem.

Theorem 6.5 Let the number of open-loop unstable poles (excluding poles at s = 0) of G(s)K(s) and G'(s)K(s) be P and P', respectively. Assume that the controller K is such that GK has integral action in all channels, and that the transfer functions GK and G'K are strictly proper. Then if

    det G'(0) / det G(0) < 0   for P - P' even, including zero
    det G'(0) / det G(0) > 0   for P - P' odd                        (6.89)

at least one of the following instabilities will occur: a) The negative feedback closed-loop system with loop gain GK is unstable. b) The negative feedback closed-loop system with loop gain G'K is unstable.

Proof: For stability of both (I + GK)⁻¹ and (I + G'K)⁻¹, we have from Lemma A.5 in Appendix A.6.3 that det(I + EO T(s)) needs to encircle the origin P - P' times as s traverses the Nyquist D-contour. Here T(0) = I because of the requirement for integral action in all channels of GK. Also, since GK and G'K are strictly proper, EO T is strictly proper, and hence EO(s)T(s) → 0 as s → ∞. Thus, the map of det(I + EO T(s)) starts at det G'(0)/det G(0) (for s = 0) and ends at 1 (for s = ∞). A more careful analysis of the Nyquist plot of det(I + EO T(s)) reveals that the number of encirclements of the origin will be even for det G'(0) det G(0) > 0, and odd for det G'(0) det G(0) < 0. Thus, if this parity (odd or even) does not match that of P - P', we will get instability, and the theorem follows. ∎

Example 6.11 Suppose the true model of a plant is given by G(s), and that by careful identification we obtain a model G1(s),

  G(s) = 1/(75s + 1) [ 87.8  -86.4 ; 108.2  -109.6 ],    G1(s) = 1/(75s + 1) [ 87  -88 ; 108  -109 ]

At first glance, the identified model seems very good, but it is actually useless for control purposes since det G1(0) has the wrong sign: det G(0) = -274.4 and det G1(0) = 21 (also the RGA-elements have the wrong sign; the 1,1-element in the RGA is negative instead of +35.1). From Theorem 6.5 we then get that any controller with integral action designed based on the model G1 will yield instability when applied to the plant G.
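A quick numerical check of this example (a sketch; the steady-state gain matrices below are those of the distillation-type model written out above) confirms the sign change of the determinant and of the 1,1 RGA-element:

```python
import numpy as np

# Steady-state gains of the true plant G and the identified model G1 (as above)
G0 = np.array([[87.8, -86.4], [108.2, -109.6]])
G1 = np.array([[87.0, -88.0], [108.0, -109.0]])

def rga(A):
    """RGA: element-wise product of A and the transpose of its inverse."""
    return A * np.linalg.inv(A).T

print("det G(0)  =", round(np.linalg.det(G0), 1))   # negative
print("det G1(0) =", round(np.linalg.det(G1), 1))   # positive: wrong sign
print("RGA[1,1] of G :", round(rga(G0)[0, 0], 1))   # about +35.1
print("RGA[1,1] of G1:", round(rga(G1)[0, 0], 1))   # large and negative: wrong sign
```

Since det G'(0)/det G(0) < 0 with P - P' = 0 (both plants stable), condition (6.89) applies.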


6.11 MIMO Input-output controllability
We now summarize the main findings of this chapter in an analysis procedure for input-output controllability of a MIMO plant. The presence of directions in MIMO systems makes it more difficult to give a precise description of the procedure in terms of a set of rules as was done in the SISO case.

6.11.1 Controllability analysis procedure
The following procedure assumes that we have made a decision on the plant inputs and plant outputs (manipulations and measurements), and we want to analyze the model G to find out what control performance can be expected. The procedure can also be used to assist in control structure design (the selection of inputs, outputs and control configuration), but it must then be repeated for each candidate set of inputs and outputs. In some cases the number of possibilities is so large that such an approach becomes prohibitive. In this case some pre-screening is required, for example, based on physical insight or by analyzing the "large" model, Gall, with all the candidate inputs and outputs included. This is briefly discussed in Section 10.4. A typical MIMO controllability analysis may proceed as follows:

1. Scale all variables (inputs u, outputs y, disturbances d, references r) to obtain a scaled model, y = G(s)u + Gd(s)d, r = R rtilde; see Section 1.4.
2. Obtain a minimal realization.
3. Check functional controllability. To be able to control the outputs independently, we first need at least as many inputs u as outputs y. Second, we need the rank of G(s) to be equal to the number of outputs, l, i.e. the minimum singular value of G(jw), sigma_min(G) = sigma_l(G), should be non-zero (except at possible jw-axis zeros). If the plant is not functionally controllable then compute the output direction in which the plant has no gain, see (6.16), to obtain insight into the source of the problem.
4. Compute the poles. For RHP (unstable) poles obtain their locations and associated directions; see (6.5). "Fast" RHP-poles far from the origin are bad.
5. Compute the zeros. For RHP-zeros obtain their locations and associated directions. Look for zeros pinned into certain outputs. "Small" RHP-zeros (close to the origin) are bad if tight performance at low frequencies is desired.
6. Obtain the frequency response G(jw) and compute the RGA matrix, Lambda = G x (Gdagger)^T (element-by-element multiplication). Plants with large RGA-elements at crossover frequencies are difficult to control and should be avoided. For more details about the use of the RGA see Section 3.6, page 87.
7. From now on scaling is critical. Compute the singular values of G(jw) and plot them as a function of frequency. Also consider the associated input and output singular vectors.


8. The minimum singular value, sigma_min(G(jw)), is a particularly useful controllability measure. It should generally be as large as possible at frequencies where control is needed. If sigma_min(G(jw)) < 1 then we cannot at frequency w make independent output changes of unit magnitude by using inputs of unit magnitude.
9. For disturbances, consider the elements of the matrix Gd. At frequencies where one or more elements is larger than 1, we need control. We get more information by considering one disturbance at a time (the columns gd of Gd). We must require for each disturbance that S is less than 1/||gd||2 in the disturbance direction yd, i.e. ||S yd||2 <= 1/||gd||2; see (6.33). Thus, we must at least require sigma_min(S) <= 1/||gd||2, and we may have to require sigma_max(S) <= 1/||gd||2; see (6.34).
Remark. If feedforward control is already used, then one may instead analyze Gdhat(s) = G Kd Gmd + Gd, where Kd denotes the feedforward controller; see (5.78).

10. Disturbances and input saturation:
First step. Consider the input magnitudes needed for perfect control by computing the elements in the matrix Gdagger Gd. If all elements are less than 1 at all frequencies, then input saturation is not expected to be a problem. If some elements of Gdagger Gd are larger than 1, then perfect control (e = 0) cannot be achieved at this frequency, but "acceptable" control (||e||2 < 1) may be possible, and this may be tested in the second step.
Second step. Check condition (6.47); that is, consider the elements of U^H Gd and make sure that the elements in the i'th row are smaller than sigma_i(G) + 1, at all frequencies.

11. Are the requirements compatible? Look at disturbances, RHP-poles and RHP-zeros and their associated locations and directions. For example, we must require for each disturbance and each RHP-zero that |yz^H gd(z)| <= 1; see (6.35). For combined RHP-zeros and RHP-poles see (6.10) and (6.11).
12. Uncertainty. If the condition number gamma(G) is small, then we expect no particular problems with uncertainty. If the RGA-elements are large, we expect strong sensitivity to uncertainty. For a more detailed analysis see the conclusion on page 243.
13. If decentralized control (a diagonal controller) is of interest, see the summary on page 453.
14. The use of the condition number and RGA is summarized separately in Section 3.6, page 87.

A controllability analysis may also be used to obtain initial performance weights for controller design. After a controller design one may analyze the controller by plotting, for example, its elements, singular values, RGA and condition number as a function of frequency.
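The main computational steps of the procedure (singular values, the RGA of step 6, and the input magnitudes Gdagger gd of step 10) can be sketched in a few lines. The 2x2 plant and disturbance model below are hypothetical stand-ins, not taken from the book:

```python
import numpy as np

def G(w):
    """Hypothetical 2x2 plant, G(jw) = [[10, 9], [9, 8]] / (jw + 1)."""
    return np.array([[10, 9], [9, 8]], dtype=complex) / (1j * w + 1)

def gd(w):
    """Hypothetical disturbance column, gd(jw) = [1, 1]^T / (0.1jw + 1)."""
    return np.array([1.0, 1.0], dtype=complex) / (0.1j * w + 1)

def rga(A):
    """RGA(A) = A x (A^-1)^T, element-wise product."""
    return A * np.linalg.inv(A).T

for w in [0.01, 0.1, 1.0, 10.0]:
    Gw = G(w)
    sv = np.linalg.svd(Gw, compute_uv=False)      # singular values (steps 7-8)
    lam11 = abs(rga(Gw)[0, 0])                    # RGA-element (step 6)
    u_perf = np.linalg.pinv(Gw) @ gd(w)           # inputs for perfect rejection (step 10)
    print(f"w={w:6.2f}  sigma_min={sv[-1]:.3f}  "
          f"|RGA11|={lam11:.1f}  max|G^+ gd|={np.max(np.abs(u_perf)):.2f}")
```

The sweep shows the typical pattern warned about above: a small minimum singular value together with large RGA-elements signals a plant that is difficult to control.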


6.11.2 Plant design changes
If a plant is not input-output controllable, then it must somehow be modified. Some possible modifications are listed below.

Controlled outputs. Identify the output(s) which cannot be controlled satisfactorily. Can the specifications for these be relaxed?

Manipulated inputs. If input constraints are encountered, then consider replacing or moving actuators. For example, this could mean replacing a control valve with a larger one, or moving it closer to the controlled output. If there are RHP-zeros which cause control problems, then the zeros may often be eliminated by adding another input (possibly resulting in a non-square plant). This may not be possible if the zero is pinned to a particular output.

Extra measurements. If the effect of disturbances, or uncertainty, is large, and the dynamics of the plant are such that acceptable control cannot be achieved, then consider adding "fast local loops" based on extra measurements which are located close to the inputs and disturbances; see Section 10.8.3 and the example on page 207.

Disturbances. If the effect of disturbances is too large, then see whether the disturbance itself may be reduced. This may involve adding extra equipment to dampen the disturbances, such as a buffer tank in a chemical process or a spring in a mechanical system. In other cases this may involve improving or changing the control of another part of the system: we may, for example, have a disturbance which is actually the manipulated input for another part of the system.

Plant dynamics and time delays. In most cases, controllability is improved by making the plant dynamics faster and by reducing time delays. An exception to this is a strongly interactive plant, where an increased dynamic lag or time delay may be helpful if it somehow "delays" the effect of the interactions; see (6.17). Another more obvious exception is for feedforward control of a measured disturbance, where a delay for the disturbance's effect on the outputs is an advantage.
Example 6.12 Removing zeros by adding inputs. Consider a stable 2 x 2 plant

  G1(s) = 1/(s + 2)^2 [ s+1  s+3 ; 1  2 ]

which has a RHP-zero at s = 1 which limits achievable performance. The zero is not pinned to a particular output, so it will most likely disappear if we add a third manipulated input. Suppose the new plant is

  G2(s) = 1/(s + 2)^2 [ s+1  s+3  s+6 ; 1  2  3 ]

which indeed has no zeros. It is interesting to note that each of the three 2 x 2 sub-plants of G2(s) has a RHP-zero (located at s = 1, s = 1.5 and s = 3, respectively).
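A minimal numerical check of the sub-plant zeros (a sketch, assuming the plant G2 as reconstructed above; for a square plant the zeros are the roots of the numerator determinant):

```python
import numpy as np

# Numerator polynomials of the 2x3 plant G2(s) (common denominator (s+2)^2),
# entries stored as coefficient lists in s, highest power first.
N = [[[1, 1], [1, 3], [1, 6]],   # row 1: s+1, s+3, s+6
     [[0, 1], [0, 2], [0, 3]]]   # row 2: 1, 2, 3

def subplant_zero(c1, c2):
    """Zeros of the 2x2 sub-plant (columns c1, c2): roots of its numerator determinant."""
    det = np.polysub(np.polymul(N[0][c1], N[1][c2]),
                     np.polymul(N[0][c2], N[1][c1]))
    return np.roots(det)

for cols in [(0, 1), (0, 2), (1, 2)]:
    print(cols, subplant_zero(*cols))
```

This reproduces the RHP-zeros at s = 1, s = 1.5 and s = 3; since the three minors have no common root, the full 2 x 3 plant has no zeros.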


Remark. Adding outputs. It has also been argued that it is possible to remove multivariable zeros by adding extra outputs. To some extent this is correct. For example, it is well-known that there are no zeros if we use all the states as outputs, see Example 4.13. However, to control all the states independently we need as many inputs as there are states. Thus, by adding outputs to remove the zeros, we generally get a plant which is not functionally controllable, so it does not really help.

6.11.3 Additional exercises
The reader will be better prepared for some of these exercises following an initial reading of Chapter 10 on decentralized control. In all cases the variables are assumed to be scaled as outlined in Section 1.4. Exercise 6.7 Analyze input-output controllability for

´×µ

½ ½ ¼×¼½×·½ ·¼ ½ ×¾ · ½¼¼ ×·½

½ ½

Compute the zeros and poles, plot the RGA as a function of frequency, etc.

Exercise 6.8 Analyze input-output controllability for

  G(s) = 1/((tau s + 1)(tau s + 1 + 2 alpha)) [ tau s + 1 + alpha   alpha ; alpha   tau s + 1 + alpha ]

where tau = 100; consider two cases: (a) alpha = 20, and (b) alpha = 2.

Remark. This is a simple "two-mixing-tank" model of a heat exchanger where u = (T1,in  T2,in)^T, y = (T1,out  T2,out)^T and alpha is the number of heat transfer units.

Exercise 6.9 Let

 ½¼ ¼ ¼  ½

Á

½¼ ½ ½ ½¼ ¼

¼ ¼ ¼ ½

(a) Perform a controllability analysis of G(s).
(b) Let xdot = Ax + Bu + d and consider a unit disturbance d = (z1  z2)^T. Which direction (value of z1/z2) gives a disturbance that is most difficult to reject (consider both RHP-zeros and input saturation)?
(c) Discuss decentralized control of the plant. How would you pair the variables?

Exercise 6.10 Consider the following two plants. Do you expect any control problems? Could decentralized or inverse-based control be used? What pairing would you use for decentralized control?

´×µ

½ × ½ × ½ ¾ ´× · ½µ´× · ¾¼µ   ¾ ×   ¾¼

Consider also the input magnitudes. Consider the following ¿ ¢ ¿ plant ´×µ ´×µ ½ ´ ¾¼×¾ · ¿¾ × · ½µ ¿¼ ´ ¾ ½× · ½µ ¿¼´ ¾ × · ½µ  ½ ´ × · ½µ ¿½ ¼´ × · ½µ´½ × · ½µ  ½ ½´ × · ½µ ½ ¾ ´  ¿ × · ½µ ½´ ¿× · ½µ ¼ ݽ ݾ Ý¿ Ù½ ´×µ Ù¾ Ù¿ ´×µ ´½ ½ × · ½µ´ × · ½µ . Remark. ×·¾¼ Exercise 6. ×·¾ ½ × ¾¼ .14 (a) Analyze input-output controllability for the following three plants each of which has ¾ inputs and ½ output: ´×µ ´ ½ ´×µ ¾ ´×µµ (i) ½ ´×µ ¾ ´×µ × ¾ .11 Order the following three plants in terms of their expected ease of controllability ½ ´×µ ½¼¼ ½¼¼ ½¼¼ ¾ ´×µ ¿ ´×µ Remember to also consider the sensitivity to input gain uncertainty.g. ½ ´×µ ½ ´×µ ×  ¾ ×·¾ × ¾ ×·¾ ¾ ´×µ ¾ ´×µ × ¾ ½ .12 Analyze input-output controllability for ´× µ ¾´  ×·½µ ¼¼¼× ´ ¼¼¼×·½µ´¾×·½µ ½¼¼×·½ ¿ ¿ ×·½ ×·½ ×·½ ½¼ ×·½ Exercise 6. A similar model form is encountered for distillation -configuration. In which case the physical reason for the columns controlled with the model being singular at steady-state is that the sum of the two manipulated inputs is fixed at .16 Controllability of an FCC process.15 Find the poles and zeros and analyze input-output controllability for ´× µ · ´½ ×µ ½ × ´½ ×µ · ½ × Here is a constant. ×·¾ (ii) (iii) (b) Design controllers and perform closed-loop simulations of reference tracking to complement your analysis. Exercise 6. ½. · Exercise 6.13 Analyze input-output controllability for ´×µ ½¼¼ ½¼¾ ½¼¼ ½¼¼ ½ ´×µ ½¼ ¾ ×·½  ½ ×·½ ½ Which disturbance is the worst? Exercise 6. e. steady-state.¾¼ ´×µ ½ ´×¾ · ¼ ½µ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ½ ¼ ½´×   ½µ ½¼´× · ¼ ½µ × ´× · ¼ ½µ × ½¼¼  ×  × ½¼¼ ½¼¼ ½¼¼  × ½¼¼ ½¼¼ Exercise 6.

That is. etc. Consider three options for the controlled outputs: ¢ ¢ ½ ݽ ݾ ¾ ݾ Ý¿ ¿ ݽ ݾ   Ý¿ In all three cases the inputs are Ù½ and Ù¾ . discuss the effect of the disturbance. which choice of outputs do you prefer? Which seems to be the worst? ¢ It may be useful to know that the zero polynomials: ¡ ½¼ × · ¿ ¾ ¡ ½¼ ¡ ½¼ ¡ ½¼ ×   ¡ ½¼ ׿ · ¿ ׿   ½ ¼ ׿   ¡ ½¼ ¡ ½¼ ¡ ½¼ ×¾ · ½ ¾¾ ¡ ½¼ × · ½ ¼¿ ¡ ½¼¿ ×¾   ½ ¡ ½¼ ×   ¿ ¡ ½¼¾ ×¾ · ¿ ¡ ½¼¿ × · ½ ¼ ¡ ½¼¾ have the following roots:  ¼ ¼  ¼ ¼ ¾  ¼ ¼ ½  ¼ ¼½¿¾ ¼ ¿¼¿  ¼ ¼ ¿¾  ¼ ¼½¿¾ ¼ ½  ¼ ¼ ¿¾ ¼ ¼¾¼¼  ¼ ¼½¿¾ (b) For the preferred choice of outputs in (a) do a more detailed analysis of the expected control performance (compute poles and zeros.12 Conclusion We have found that most of the insights into the performance limitations of SISO systems developed in Chapter 5 carry over to MIMO systems. ( (a) Based on the zeros of the three ¾ ¾ plants. What type of controller would you use? What pairing would you use for decentralized control? (c) Discuss why the ¿ ¢ ¿ plant may be difficult to control. the situation is usually the opposite with model uncertainty because for MIMO systems there is also uncertainty associated with plant directionality. comment on possible problems with input constraints (assume the inputs and outputs have been properly scaled). ¾ ´×µ and ¿ ´×µ. RHPpoles and disturbances. This is actually a model of a fluid catalytic cracking (FCC) reactor where Ù ´ × µÌ represents the circulation. We summarized on page 246 the main steps involved in an analysis of input-output controllability of MIMO plants. ½ ´×µ is called the Hicks control structure and ¿ ´×µ the conventional structure. 6. ½ ´×µ. we have a ¾ ¾ control problem. sketch RGA½½ . For RHP-zeros. and Ý ´Ì½ Ì Ý ÌÖ µÌ represents three temperatures. the issue of directions usually makes the limitations less severe for MIMO than for SISO systems. However. Assume that the third input is a disturbance Ù¿ ). This is an issue which is unique to MIMO systems. 
airflow and feed composition. More details are found in Hovd and Skogestad (1993). .ÄÁÅÁÌ ÌÁÇÆË ÁÆ ÅÁÅÇ Ë ËÌ ÅË ¾½ Acceptable control of this ¿ ¿ plant can be achieved with partial control of two outputs with input ¿ in manual (not used). Remark.).


7 UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS

In this chapter, we show how to represent uncertainty by real or complex perturbations and we analyze robust stability (RS) and robust performance (RP) for SISO systems using elementary methods. Chapter 8 is devoted to a more general treatment.

7.1 Introduction to robustness

A control system is robust if it is insensitive to differences between the actual system and the model of the system which was used to design the controller. These differences are referred to as model/plant mismatch or simply model uncertainty. The key idea in the H-infinity robust control paradigm we use is to check whether the design specifications are satisfied even for the "worst-case" uncertainty. Our approach is then as follows:

1. Determine the uncertainty set: find a mathematical representation of the model uncertainty ("clarify what we know about what we don't know").
2. Check robust stability (RS): determine whether the system remains stable for all plants in the uncertainty set.
3. Check robust performance (RP): if RS is satisfied, determine whether the performance specifications are met for all plants in the uncertainty set.

This approach may not always achieve optimal performance. In particular, if the worst-case plant rarely or never occurs, other approaches, such as optimizing some average performance or using adaptive control, may yield better performance. Nevertheless, the linear uncertainty descriptions presented in this book are very useful in many practical situations.

It should also be appreciated that model uncertainty is not the only concern when it comes to robustness. Other considerations include sensor and actuator failures, physical constraints, changes in control objectives, the opening and closing of loops, etc. Furthermore, if a control design is based on an optimization, then robustness problems may also be caused by the mathematical objective function not properly describing the real control problem. Also, the numerical design algorithms themselves may not be robust. However, when we refer to robustness in this book, we mean robustness with respect to model uncertainty, and assume that a fixed (linear) controller is used.

To account for model uncertainty we will assume that the dynamic behaviour of a plant is described not by a single linear time invariant model but by a set Pi of possible linear time invariant models, sometimes denoted the "uncertainty set". We adopt the following notation:

Pi – a set of possible perturbed plant models.
G(s) in Pi – nominal plant model (with no uncertainty).
Gp(s) in Pi and G'(s) in Pi – particular perturbed plant models.

Sometimes Gp is used rather than Pi to denote the uncertainty set, whereas G' always refers to a specific uncertain plant. The subscript p stands for perturbed or possible or Pi (pick your choice). This should not be confused with the subscript capital P, e.g. in wP, which denotes performance.

We will use a "norm-bounded uncertainty description" where the set Pi is generated by allowing H-infinity norm-bounded stable perturbations to the nominal plant G(s). This corresponds to a continuous description of the model uncertainty, and there will be an infinite number of possible plants Gp in the set Pi. We let E denote a perturbation which is not normalized, and let Delta denote a normalized perturbation with H-infinity norm less than 1.

Remark. Another strategy for dealing with model uncertainty is to approximate its effect on the feedback system by adding fictitious disturbances or noise. For example, this is the only way of handling model uncertainty within the so-called LQG approach to optimal control (see Chapter 9). Is this an acceptable strategy? In general, the answer is no. This is easily illustrated for linear systems, where the addition of disturbances does not affect system stability, whereas model uncertainty combined with feedback may easily create instability. For example, consider a plant with a nominal model y = Gu + d, and let the perturbed plant model be Gp = G + E, where E represents additive model uncertainty. Then the output of the perturbed plant is

  y = Gp u + d = Gu + v1 + v2,   v1 = Eu,  v2 = d    (7.1)

where y is different from what we ideally expect (namely Gu) for two reasons:

1. Uncertainty in the model (v1 = Eu)
2. Signal uncertainty (v2 = d)

In LQG control we set w = v1 + v2, where w is assumed to be an independent variable such as white noise. Then in the design problem we may make w large by selecting appropriate weighting functions. For example, white noise may disturb the output, but its presence will never cause instability. However, in reality w = Eu + v2, so w depends on the signal u, and this may cause instability in the presence of feedback when u depends on y. Specifically, the closed-loop system (I + (G + E)K)^-1 may be unstable for some E != 0. In conclusion, it may be important to explicitly take into account model uncertainty when studying feedback control. We will next discuss some sources of model uncertainty and outline how to represent these mathematically.

7.2 Representing uncertainty

Uncertainty in the plant model may have several origins:

1. There are always parameters in the linear model which are only known approximately or are simply in error.
2. The parameters in the linear model may vary due to nonlinearities or changes in the operating conditions.
3. Measurement devices have imperfections. This may even give rise to uncertainty on the manipulated inputs, since the actual input is often measured and adjusted in a cascade manner. For example, this is often the case with valves, where a flow controller is often used. In other cases limited valve resolution may cause input uncertainty.
4. At high frequencies even the structure and the model order is unknown, and the uncertainty will always exceed 100% at some frequency.
5. Even when a very detailed model is available we may choose to work with a simpler (low-order) nominal model and represent the neglected dynamics as "uncertainty".
6. Finally, the controller implemented may differ from the one obtained by solving the synthesis problem. In this case one may include uncertainty to allow for controller order reduction and implementation inaccuracies.

The various sources of model uncertainty mentioned above may be grouped into two main classes:

1. Parametric uncertainty. Here the structure of the model (including the order) is known, but some of the parameters are uncertain.
2. Neglected and unmodelled dynamics uncertainty. Here the model is in error because of missing dynamics, usually at high frequencies, either through deliberate neglect or because of a lack of understanding of the physical process. Any model of a real system will contain this source of uncertainty.

For completeness one may consider a third class of uncertainty (which is really a combination of the other two):

3. Lumped uncertainty. Here the uncertainty description represents one or several sources of parametric and/or unmodelled dynamics uncertainty combined into a single lumped perturbation of a chosen structure.

Parametric uncertainty will be quantified by assuming that each uncertain parameter is bounded within some region [alpha_min, alpha_max]. That is, we have parameter sets of the form

  alpha_p = alphabar(1 + r_alpha Delta),   r_alpha = (alpha_max - alpha_min)/(alpha_max + alpha_min)

where alphabar is the mean parameter value, r_alpha is the relative uncertainty in the parameter, and Delta is any real scalar satisfying |Delta| <= 1.

Neglected and unmodelled dynamics uncertainty is somewhat less precise and thus more difficult to quantify, but it appears that the frequency domain is particularly well suited for this class. This leads to complex perturbations which we normalize such that ||Delta||_infinity <= 1. In this chapter, we will deal mainly with this class of perturbations.

Figure 7.1: Plant with multiplicative uncertainty

The frequency domain is also well suited for describing lumped uncertainty. In most cases we prefer to lump the uncertainty into a multiplicative uncertainty of the form

  Pi_I:  Gp(s) = G(s)(1 + wI(s) DeltaI(s)),   |DeltaI(jw)| <= 1 for all w    (7.2)

which may be represented by the block diagram in Figure 7.1. Here DeltaI(s) is any stable transfer function which at each frequency is less than or equal to one in magnitude. Some examples of allowable DeltaI(s)'s with H-infinity norm less than one, ||DeltaI||_infinity <= 1, are

  (s - z)/(s + z),   1/(tau s + 1),   1/(tau s + 1)^3,   0.1/(s^2 + 0.1s + 1)
Remark 1 The stability requirement on DeltaI(s) may be removed if one instead assumes that the number of RHP poles in G(s) and Gp(s) remains unchanged. However, in order to simplify the stability proofs we will in this book assume that the perturbations are stable.

Remark 2 The subscript I denotes "input", but for SISO systems it does not matter whether we consider the perturbation at the input or the output of the plant, since

  G(1 + wI DeltaI) = (1 + wO DeltaO)G   with   DeltaI(s) = DeltaO(s) and wI(s) = wO(s)

Another uncertainty form, which is better suited for representing pole uncertainty, is the inverse multiplicative uncertainty

  Pi_iI:  Gp(s) = G(s)(1 + wiI(s) DeltaiI(s))^-1,   |DeltaiI(jw)| <= 1    (7.3)

Even with a stable DeltaiI(s) this form allows for uncertainty in the location of an unstable pole, and it also allows for poles crossing between the left- and right-half planes.

Parametric uncertainty is sometimes called structured uncertainty as it models the uncertainty in a structured manner. Analogously, lumped dynamics uncertainty is sometimes called unstructured uncertainty. However, one should be careful about using these terms because there can be several levels of structure, especially for MIMO systems.

Remark. Alternative approaches for describing uncertainty and the resulting performance may be considered; Weinmann (1991) gives a good overview. One approach for parametric uncertainty is to assume a probabilistic (e.g. normal) distribution of the parameters, and to consider the "average" response. This stochastic uncertainty is, however, difficult to analyze exactly. Another approach is the multi-model approach, in which one considers a finite set of alternative models; this approach is well suited for parametric uncertainty as it eases the burden of the engineer in representing the uncertainty, and it can also be used when there is unmodelled dynamics uncertainty. Performance may be measured in terms of the worst-case or some average of these models' responses. A problem with the multi-model approach is that it is not clear how to pick the set of models such that they represent the limiting ("worst-case") plants.

To summarize, there are many ways to define uncertainty, from stochastic uncertainty to differential sensitivity (local robustness) and multi-models. The H-infinity frequency-domain approach used in this book may not be the best or the simplest, but it can handle most situations as we will see. In particular, the frequency domain is excellent for describing neglected or unknown dynamics, and it is very well suited when it comes to making simple yet realistic lumped uncertainty descriptions.

7.3 Parametric uncertainty

In spite of what is sometimes claimed, parametric uncertainty may also be represented in the H-infinity framework, at least if we restrict the perturbations Delta to be real. Here we provide just two simple examples.

Example 7.1 Gain uncertainty. Let the set of possible plants be

  Gp(s) = kp G0(s),   k_min <= kp <= k_max    (7.4)

where kp is an uncertain gain and G0(s) is a transfer function with no uncertainty. By writing

  kp = kbar(1 + rk Delta),   kbar = (k_min + k_max)/2,   rk = (k_max - k_min)/(k_max + k_min)    (7.5)

where rk is the relative magnitude of the gain uncertainty and kbar is the average gain, the model set (7.4) may be rewritten as multiplicative uncertainty

  Gp(s) = kbar G0(s)(1 + rk Delta),   |Delta| <= 1    (7.6)

with nominal model G(s) = kbar G0(s) and a constant multiplicative weight wI(s) = rk. The uncertainty is in the form of (7.2). We see that the uncertainty description in (7.6) can also handle cases where the gain changes sign (k_min < 0 and k_max > 0), corresponding to rk > 1. Note that it is impossible to get any benefit from control for a plant where we can have Gp = 0, at least with a linear controller.

Example 7.2 Time constant uncertainty. Consider a set of plants, with an uncertain time constant, given by

  Gp(s) = G0(s)/(tau_p s + 1),   tau_min <= tau_p <= tau_max    (7.7)

By writing tau_p = taubar(1 + r_tau Delta), similar to (7.5), the model set (7.7) can be rewritten as

  Gp(s) = G0/(1 + taubar s + r_tau taubar s Delta) = [G0/(1 + taubar s)] . 1/(1 + wiI(s) Delta),   wiI(s) = r_tau taubar s/(1 + taubar s)    (7.8)

which is in the inverse multiplicative form of (7.3). Note that it does not make physical sense for tau_p to change sign, because a value tau_p = 0^- corresponds to a pole at infinity in the RHP, and the corresponding plant would be impossible to stabilize. To represent cases in which a pole may cross between the half planes, one should instead consider parametric uncertainty in the pole itself, 1/(s + ap), as described in (7.85). The usefulness of this is rather limited, however; this is discussed in more detail in Section 7.7.
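The claim that the inverse multiplicative form reproduces the time-constant set exactly can be verified numerically. The sketch below takes G0 = 1 and tau_p in [2, 3] (so taubar = 2.5 and r_tau = 0.2, values chosen purely for illustration):

```python
import numpy as np

tau_min, tau_max = 2.0, 3.0
tau_bar = (tau_min + tau_max) / 2                    # mean time constant = 2.5
r_tau = (tau_max - tau_min) / (tau_max + tau_min)    # relative uncertainty = 0.2

def w_iI(s):
    """Inverse multiplicative weight from (7.8)."""
    return r_tau * tau_bar * s / (1 + tau_bar * s)

for w in [0.1, 1.0, 10.0]:
    s = 1j * w
    for Delta in np.linspace(-1, 1, 5):
        tau_p = tau_bar * (1 + r_tau * Delta)        # a perturbed time constant
        Gp = 1 / (tau_p * s + 1)                     # perturbed plant (with G0 = 1)
        Gform = (1 / (tau_bar * s + 1)) / (1 + w_iI(s) * Delta)
        assert np.isclose(Gp, Gform)                 # the form is exact, not conservative
print("every tau_p in [2, 3] is reproduced exactly by a real Delta in [-1, 1]")
```

Each real Delta maps one-to-one onto a tau_p, so for this example the parametric description introduces no conservatism; conservatism only enters if Delta is later allowed to be complex.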

7.4 Representing uncertainty in the frequency domain

In terms of quantifying unmodelled dynamics uncertainty the frequency-domain approach (H-infinity) does not seem to have much competition (when compared with other norms). In fact, Owen and Zames (1992) make the following observation:

  The design of feedback controllers in the presence of non-parametric and unstructured uncertainty ... is the raison d'etre for H-infinity feedback optimization, for if disturbances and plant models are clearly parameterized then H-infinity methods seem to offer no clear advantages over more conventional state-space and parametric methods.

As shown by the above examples, one can represent parametric uncertainty in the H-infinity framework. However, parametric uncertainty is often avoided for the following reasons:

1. It usually requires a large effort to model parametric uncertainty.
2. A parametric uncertainty model is somewhat deceiving in the sense that it provides a very detailed and accurate description, even though the underlying assumptions about the model and the parameters may be much less exact.
3. The exact model structure is required and so unmodelled dynamics cannot be dealt with.
4. Real perturbations are required, which are more difficult to deal with mathematically and numerically, especially when it comes to controller synthesis.

Therefore, parametric uncertainty is often represented by complex perturbations. For example, we may simply replace the real perturbation, -1 <= Delta <= 1, by a complex perturbation with |Delta(jw)| <= 1. This is of course conservative as it introduces possible plants that are not present in the original set. However, if there are several real perturbations, then the conservatism is often reduced by lumping these perturbations into a single complex perturbation.

How is it possible that we can reduce conservatism by lumping together several real perturbations? This will become clearer from the examples in the next section, but simply stated the answer is that with several uncertain parameters the true uncertainty region is often quite "disc-shaped", and may be more accurately represented by a single complex perturbation.

7.4.1 Uncertainty regions

To illustrate how parametric uncertainty translates into frequency-domain uncertainty, consider in Figure 7.2 the Nyquist plots (or regions) generated by the following set of plants

  Gp(s) = (k/(tau s + 1)) e^(-theta s),   2 <= k, theta, tau <= 3    (7.9)

Step 1. At each frequency, a region of complex numbers Gp(jw) is generated by varying the three parameters in the ranges given by (7.9), see Figure 7.2. In general, these uncertainty regions have complicated shapes and complex mathematical descriptions, and are cumbersome to deal with in the context of control system design.

Step 2. We therefore approximate such complex regions as discs (circles), as shown in Figure 7.3, resulting in a (complex) additive uncertainty description as discussed next.

Figure 7.2: Uncertainty regions of the Nyquist plot at given frequencies. Data from (7.9)

Figure 7.3: Disc approximation (solid line) of the original uncertainty region (dashed line). Plot corresponds to w = 0.2 in Figure 7.2

Remark 1 There is no conservatism introduced in the first step when we go from a parametric uncertainty description as in (7.9) to an uncertainty region description as in Figure 7.2. Thus, the only conservatism is in the second step where we approximate the original uncertainty region by a larger disc-shaped region as shown in Figure 7.3.

Remark 2 Exact methods do exist (using complex region mapping, e.g. see Laughlin et al. (1986)) which avoid the second conservative step. However, as already mentioned, these methods are rather complex, and although they may be used in analysis, at least for simple systems, they are not really suitable for controller synthesis and will not be pursued further in this book. Nevertheless, we derive in this and the next chapter necessary and sufficient frequency-by-frequency conditions for robust stability based on uncertainty regions. This is somewhat surprising since the disc-shaped regions seem to allow for more uncertainty: they allow for "jumps" in Gp(jw) from one frequency to the next (e.g. from one corner of a region to another).

Remark 3 From Figure 7.3 we see that the radius of the disc may be reduced by moving the centre (selecting another nominal model). This is discussed in Section 7.4.4.

7.4.2 Representing uncertainty regions by complex perturbations
We will use disc-shaped regions to represent uncertainty regions as illustrated by the Nyquist plots in Figures 7.3 and 7.4. These disc-shaped regions may be generated by additive complex norm-bounded perturbations (additive uncertainty) around a nominal plant G:

  Pi_A:  Gp(s) = G(s) + wA(s) DeltaA(s),   |DeltaA(jw)| <= 1 for all w    (7.10)

where DeltaA(s) is any stable transfer function which at each frequency is no larger than one in magnitude. How is this possible? If we consider all possible DeltaA's, then at each frequency DeltaA(jw) "generates" a disc-shaped region with radius 1 centred at 0, so G(jw) + wA(jw) DeltaA(jw) generates at each frequency a disc-shaped region of radius |wA(jw)| centred at G(jw), as shown in Figure 7.4. In most cases wA(s) is a rational transfer function (although this need not always be the case).

´×µ is a rational transfer function (although this need not always be

¾¾

ÅÍÄÌÁÎ ÊÁ
ÁÑ

Ä

à ÇÆÌÊÇÄ

+

Ê

Û
+

´

µ

´

µ

+

Figure 7.4: Disc-shaped uncertainty regions generated by complex additive uncertainty, Gp = G + wA DeltaA

One may also view Û ´×µ as a weight which is introduced in order to normalize the perturbation to be less than 1 in magnitude at each frequency. Thus only the magnitude of the weight matters, and in order to avoid unnecessary problems we always choose Û ´×µ to be stable and minimum phase (this applies to all weights used in this book).
Figure 7.5: The set of possible plants includes the origin at frequencies where |w_A(jω)| ≥ |G(jω)|, or equivalently |w_I(jω)| ≥ 1

The disc-shaped regions may alternatively be represented by a multiplicative uncertainty description as in (7.2),

    Π_I :  G_p(s) = G(s)(1 + w_I(s) Δ_I(s)),   |Δ_I(jω)| ≤ 1        (7.11)

By comparing (7.10) and (7.11) we see that for SISO systems the additive and multiplicative uncertainty descriptions are equivalent if at each frequency

    |w_I(jω)| = |w_A(jω)| / |G(jω)|        (7.12)

However, multiplicative (relative) weights are often preferred because their numerical value is more informative. At frequencies where |w_I(jω)| > 1 the uncertainty exceeds 100% and the Nyquist curve may pass through the origin. This follows since, as illustrated in Figure 7.5, the radius of the discs in the Nyquist plot, |w_A(jω)| = |G(jω) w_I(jω)|, then exceeds the distance from G(jω) to the origin. At these frequencies we do not know the phase of the plant, and we allow for zeros crossing from the left- to the right-half plane. To see this, consider a frequency ω₀ where |w_I(jω₀)| ≥ 1. Then there exists a |Δ_I| ≤ 1 such that G_p(jω₀) = 0 in (7.11), that is, there exists a possible plant with zeros at s = ±jω₀. For this plant, at frequency ω₀ the input has no effect on the output, so control has no effect. It then follows that tight control is not possible at frequencies where |w_I(jω)| ≥ 1 (this condition is derived more rigorously in (7.33)).
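The zero-crossing argument can be made concrete: whenever |w_I(jω₀)| ≥ 1, the admissible perturbation Δ_I = −1/w_I(jω₀) produces a plant in (7.11) with G_p(jω₀) = 0. The nominal model and first-order weight below are illustrative stand-ins (in the style of the example later in this section), not values taken verbatim from the text.

```python
import cmath

# If |w_I(j w0)| >= 1, the multiplicative set (7.11) contains a plant with
# G_p(j w0) = 0: choose Delta_I = -1/w_I(j w0), which has magnitude <= 1.
def G(s):  return 2.5 / (2.5 * s + 1)             # assumed nominal model
def wI(s): return (4 * s + 0.2) / (1.6 * s + 1)   # assumed first-order weight

w0 = 2.0                       # at this frequency |wI| > 1 (high uncertainty)
s = 1j * w0
delta = -1 / wI(s)             # admissible since |delta| = 1/|wI(j w0)| <= 1
Gp = G(s) * (1 + wI(s) * delta)
print(abs(delta), abs(Gp))     # |delta| <= 1, and Gp vanishes at w0
```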

7.4.3 Obtaining the weight for complex uncertainty
Consider a set Π of possible plants resulting, for example, from parametric uncertainty as in (7.9). We now want to describe this set of plants by a single (lumped) complex perturbation, Δ_A or Δ_I. This complex (disc-shaped) uncertainty description may be generated as follows:

1. Select a nominal model G(s).

2. Additive uncertainty. At each frequency, find the smallest radius l_A(ω) which includes all the possible plants in Π:

    l_A(ω) = max_{G_p ∈ Π} |G_p(jω) − G(jω)|        (7.13)

If we want a rational transfer function weight, w_A(s) (which may not be the case if we only want to do analysis), then it must be chosen to cover the set, so

    |w_A(jω)| ≥ l_A(ω),  ∀ω        (7.14)

Usually w_A(s) is of low order to simplify the controller design. Furthermore, an objective of frequency-domain uncertainty is usually to represent uncertainty in a simple straightforward manner.

3. Multiplicative (relative) uncertainty. This is often the preferred uncertainty form, and we have

    l_I(ω) = max_{G_p ∈ Π} | (G_p(jω) − G(jω)) / G(jω) |        (7.15)

and with a rational weight

    |w_I(jω)| ≥ l_I(ω),  ∀ω        (7.16)

Example 7.3 Multiplicative weight for parametric uncertainty. Consider again the set of plants with parametric uncertainty given in (7.9),

    Π :  G_p(s) = k/(τs + 1) · e^{−θs},   2 ≤ k, θ, τ ≤ 3        (7.17)

We want to represent this set using multiplicative uncertainty with a rational weight w_I(s). To simplify subsequent controller design, we select a delay-free nominal model

    G(s) = k̄/(τ̄s + 1) = 2.5/(2.5s + 1)        (7.18)

To obtain l_I(ω) in (7.15) we consider three values (2, 2.5 and 3) for each of the three parameters (k, θ, τ). (This is not, in general, guaranteed to yield the worst case, as the worst case may be in the interior of the intervals.) The corresponding relative errors |(G_p − G)/G| are shown as functions of frequency for the 3³ = 27 resulting G_p's in Figure 7.6.

Figure 7.6: Relative errors for 27 combinations of k, θ and τ with delay-free nominal plant (dotted lines). Solid line: first-order weight w_{I1} in (7.19). Dashed line: third-order weight w_I in (7.20)

The curve for l_I(ω) must at each frequency lie above all the dotted lines, and we find that l_I(ω) is 0.2 at low frequencies and 2.5 at high frequencies. To derive w_I(s) we first try a simple first-order weight that matches this limiting behaviour:

    w_{I1}(s) = (Ts + 0.2) / ((T/2.5)s + 1),   T = 4        (7.19)

As seen from the solid line in Figure 7.6, this weight gives a good fit of l_I(ω), except around ω = 1 where |w_{I1}(jω)| is slightly too small, and so this weight does not include all possible plants. To change this so that |w_I(jω)| ≥ l_I(ω) at all frequencies, we can multiply w_{I1} by a correction factor to lift the gain slightly at ω = 1. The following works well:

    w_I(s) = w_{I1}(s) · (s² + 1.6s + 1) / (s² + 1.4s + 1)        (7.20)

as is seen from the dashed line in Figure 7.6. The magnitude of the weight crosses 1 at about ω = 0.26. This seems reasonable, since we have neglected the delay in our nominal model, which by itself yields 100% uncertainty at a frequency of about 1/θ_max ≈ 0.33 (see Figure 7.8(a) below).
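The corner-gridding procedure of the example can be reproduced directly. A minimal sketch, assuming the reconstructed nominal model (7.18) and the first-order weight (7.19) with T = 4:

```python
import cmath

# Relative error l_I (7.15) for the parametric set (7.17) with the delay-free
# nominal model (7.18), evaluated at the 27 parameter corners (2, 2.5, 3).
def G_nom(s):
    return 2.5 / (2.5 * s + 1)

def G_p(s, k, theta, tau):
    return k / (tau * s + 1) * cmath.exp(-theta * s)

def l_I(omega, grid=(2.0, 2.5, 3.0)):
    s = 1j * omega
    return max(abs((G_p(s, k, th, ta) - G_nom(s)) / G_nom(s))
               for k in grid for th in grid for ta in grid)

# First-order weight (7.19), with the (reconstructed) value T = 4.
def w_I1(s, T=4.0):
    return (T * s + 0.2) / ((T / 2.5) * s + 1)

for omega in (0.01, 0.1, 1.0, 10.0):
    print(omega, l_I(omega), abs(w_I1(1j * omega)))
```

Comparing the two printed columns reproduces the fit of Figure 7.6: close agreement at low and high frequencies, and a slight shortfall of the weight near ω = 1, which motivates the correction factor in (7.20).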

An uncertainty description for the same parametric uncertainty, but with a mean-value nominal model (with delay), is given in Exercise 7.8. Parametric gain and delay uncertainty (without time-constant uncertainty) is discussed further on page 268.

Remark. Pole uncertainty. In the example we represented pole (time constant) uncertainty by a multiplicative perturbation, Δ_I. We may even do this for unstable plants, provided the poles do not shift between the half-planes and one allows Δ_I(s) to be unstable. However, if the pole uncertainty is large, and in particular if poles can cross from the LHP to the RHP, then one should use an inverse ("feedback") uncertainty representation as in (7.3).

7.4.4 Choice of nominal model
With parametric uncertainty represented as complex perturbations, there are three main options for the choice of nominal model:

1. A simplified model, e.g. a low-order, delay-free model G(s).
2. A model of mean parameter values, G(s) = Ḡ(s).
3. The central plant obtained from a Nyquist plot (yielding the smallest discs).

Option 1 usually yields the largest uncertainty region, but the model is simple and this facilitates controller design in later stages. Option 2 is probably the most straightforward choice. Option 3 yields the smallest region, but in this case a significant effort may be required to obtain the nominal model, which is usually not a rational transfer function, and a rational approximation could be of very high order.

Example 7.4 Consider again the uncertainty set (7.17) used in Example 7.3. The nominal models selected for options 1 and 2 are

    G₁(s) = 2.5/(2.5s + 1),    G₂(s) = 2.5/(2.5s + 1) · e^{−2.5s}

For option 3 the nominal model is not rational. The Nyquist plots of the three resulting discs at frequency ω = 0.5 are shown in Figure 7.7.

Figure 7.7: Nyquist plot of G_p(jω) at frequency ω = 0.5 (dashed region) showing complex disc approximations using three options for the nominal model: 1. Simplified nominal model with no time delay; 2. Mean parameter values; 3. Nominal model corresponding to the smallest radius

Remark. A similar example was studied by Wang et al. (1994), who obtained the best controller designs with option 1, although the uncertainty region is clearly much larger in this case. The reason for this is that the "worst-case region" in the Nyquist plot in Figure 7.7 corresponds quite closely to those plants with the most negative phase (at coordinates about (−1.4, −1.6)). Thus, the additional plants included in the largest region (option 1) are generally easier to control and do not really matter when evaluating the worst-case plant with respect to stability or performance.

In conclusion, at least for SISO plants, we find that for plants with an uncertain time delay it is simplest and sometimes best (!) to use a delay-free nominal model, and to represent the nominal delay as additional uncertainty.

The choice of nominal model is only an issue since we are lumping several sources of parametric uncertainty into a single complex perturbation. Of course, if we use a parametric uncertainty description, based on multiple real perturbations, then we should always use the mean parameter values in the nominal model.
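The effect of the nominal-model choice on the disc size can be checked numerically. The sketch below compares the additive radii l_A in (7.13) for options 1 and 2 of Example 7.4 at ω = 0.5, on an assumed parameter grid:

```python
import cmath

# Compare additive-uncertainty radii l_A (7.13) at omega = 0.5 for two
# nominal models from Example 7.4: option 1 (delay-free) and option 2
# (mean parameter values, with delay).
def G_p(s, k, theta, tau):
    return k / (tau * s + 1) * cmath.exp(-theta * s)

def l_A(omega, nominal, n=8):
    vals = [2 + i / (n - 1) for i in range(n)]   # grid over [2, 3]
    s = 1j * omega
    return max(abs(G_p(s, k, th, ta) - nominal(s))
               for k in vals for th in vals for ta in vals)

G1 = lambda s: 2.5 / (2.5 * s + 1)                        # option 1
G2 = lambda s: 2.5 / (2.5 * s + 1) * cmath.exp(-2.5 * s)  # option 2

print(l_A(0.5, G1), l_A(0.5, G2))   # option 2 gives a much smaller disc
```

This reproduces the qualitative message of Figure 7.7: the delay-free nominal model sits far from the rotated region and needs a much larger covering disc.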

7.4.5 Neglected dynamics represented as uncertainty
We saw above that one advantage of frequency-domain uncertainty descriptions is that one can choose to work with a simple nominal model, and represent neglected dynamics as uncertainty. We will now consider this in a little more detail. Consider a set of plants

    G_p(s) = G₀(s) f(s)

where G₀(s) is fixed (and certain). We want to neglect the term f(s) (which may be fixed or may be an uncertain set Π_f), and represent G_p by multiplicative uncertainty with a nominal model G = G₀. From (7.15) we get that the magnitude of the relative uncertainty caused by neglecting the dynamics in f(s) is

    l_I(ω) = max_{f ∈ Π_f} | f(jω) − 1 |        (7.21)
Three examples illustrate the procedure.

1. Neglected delay. Let f(s) = e^{−θ_p s}, where 0 ≤ θ_p ≤ θ_max. We want to represent G_p(s) = G₀(s) e^{−θ_p s} by a delay-free plant G₀(s) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error |1 − e^{−jωθ_max}| is shown as a function of frequency in Figure 7.8(a). The relative uncertainty crosses 1 in magnitude at about frequency 1/θ_max, reaches 2 at frequency π/θ_max (since at this frequency e^{−jωθ_max} = −1), and oscillates between 0 and 2 at higher frequencies (which corresponds to the Nyquist plot of e^{−jωθ_max} going around and around the unit circle). Similar curves are generated for smaller values of the delay; they also oscillate between 0 and 2, but at higher frequencies. It then follows that if we consider all θ_p ∈ [0, θ_max], then the relative error bound is 2 at frequencies above π/θ_max, and we have

    l_I(ω) = |1 − e^{−jωθ_max}|  for ω < π/θ_max;   l_I(ω) = 2  for ω ≥ π/θ_max        (7.22)

Rational approximations of (7.22) are given in (7.26) and (7.27) with r_k = 0.
Figure 7.8: Multiplicative uncertainty resulting from neglected dynamics: (a) time delay; (b) first-order lag

2. Neglected lag. Let f(s) = 1/(τ_p s + 1), where 0 ≤ τ_p ≤ τ_max. In this case the resulting l_I(ω), which is shown in Figure 7.8(b), can be represented by a rational transfer function with |w_I(jω)| = l_I(ω), where

    w_I(s) = 1 − 1/(τ_max s + 1) = τ_max s / (τ_max s + 1)

This weight approaches 1 at high frequency, and the low-frequency asymptote crosses 1 at frequency 1/τ_max.

3. Multiplicative weight for gain and delay uncertainty. Consider the following set of plants

    G_p(s) = k_p e^{−θ_p s} G₀(s),   k_p ∈ [k_min, k_max],  θ_p ∈ [θ_min, θ_max]        (7.23)

which we want to represent by multiplicative uncertainty and a delay-free nominal model, G(s) = k̄ G₀(s), where k̄ = (k_min + k_max)/2, and r_k = (k_max − k_min)/(k_max + k_min) is the relative uncertainty in the gain. Lundström (1994) derived the following exact expression for the relative uncertainty weight:

    l_I(ω) = sqrt( r_k² + 2(1 + r_k)(1 − cos(θ_max ω)) )   for ω < π/θ_max
    l_I(ω) = 2 + r_k                                        for ω ≥ π/θ_max        (7.24)

This bound is irrational. To derive a rational weight we first approximate the delay by a first-order Padé approximation, e^{−θ_max s} ≈ (1 − (θ_max/2)s)/(1 + (θ_max/2)s), to get

    (1 + r_k) e^{−θ_max s} − 1 ≈ (r_k − (1 + r_k/2) θ_max s) / ((θ_max/2)s + 1)        (7.25)

Since only the magnitude matters, this may be represented by the following first-order weight:

    w_I(s) = ((1 + r_k/2) θ_max s + r_k) / ((θ_max/2)s + 1)        (7.26)

Figure 7.9: Multiplicative weight for gain and delay uncertainty in (7.23)

However, as seen from Figure 7.9 by comparing the dotted line (representing w_I) with the solid line (representing l_I), this weight w_I is somewhat optimistic (too small), especially around frequencies 1/θ_max. To make sure that |w_I(jω)| ≥ l_I(ω) at all frequencies, we apply a correction factor and get a third-order weight

    w_I(s) = ((1 + r_k/2) θ_max s + r_k) / ((θ_max/2)s + 1) · ( (θ_max/2.363)² s² + 2·0.838·(θ_max/2.363) s + 1 ) / ( (θ_max/2.363)² s² + 2·0.685·(θ_max/2.363) s + 1 )        (7.27)

The improved weight w_I(s) in (7.27) is not shown in Figure 7.9, but it would be almost indistinguishable from the exact bound given by the solid curve. In practical applications, it is suggested that one starts with a simple weight as in (7.26), and if it later appears important to eke out a little extra performance then one can try a higher-order weight as in (7.27).

Example 7.5 Consider the set G_p(s) = k_p e^{−θ_p s} G₀(s) with 2 ≤ k_p ≤ 3 and 2 ≤ θ_p ≤ 3. We approximate this with a nominal delay-free plant G(s) = k̄ G₀(s) = 2.5 G₀(s) and relative uncertainty (r_k = 0.2, θ_max = 3). The simple first-order weight in (7.26), w_I(s) = (3.3s + 0.2)/(1.5s + 1), is somewhat optimistic. To cover all the uncertainty we may use (7.27),

    w_I(s) = (3.3s + 0.2)/(1.5s + 1) · (1.612s² + 2.128s + 1)/(1.612s² + 1.739s + 1)
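The exact bound (7.24) and the fitted weights (7.26) and (7.27) can be compared numerically. A sketch for the parameter values of Example 7.5 as reconstructed above (r_k = 0.2, θ_max = 3):

```python
import math

# Gain-and-delay uncertainty of Example 7.5: k_p in [2, 3], theta_p in [2, 3],
# delay-free nominal G = 2.5*G0, so r_k = 0.2 and theta_max = 3.
r_k, th_max = 0.2, 3.0

def l_I(w):
    """Exact relative-uncertainty bound (7.24)."""
    if w < math.pi / th_max:
        return math.sqrt(r_k**2 + 2 * (1 + r_k) * (1 - math.cos(th_max * w)))
    return 2 + r_k

def w_I1(s):
    """First-order weight (7.26)."""
    return ((1 + r_k / 2) * th_max * s + r_k) / ((th_max / 2) * s + 1)

def w_I3(s):
    """Third-order weight (7.27) with the correction factor."""
    a = th_max / 2.363
    corr = (a**2 * s**2 + 2 * 0.838 * a * s + 1) / (a**2 * s**2 + 2 * 0.685 * a * s + 1)
    return w_I1(s) * corr

for w in (0.01, 0.3, 1.0, 3.0):
    print(w, l_I(w), abs(w_I1(1j * w)), abs(w_I3(1j * w)))
```

The printout shows the behaviour described in the text: the first-order weight dips below l_I around ω ≈ 1/θ_max, while the corrected third-order weight stays at or above the exact bound.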

7.4.6 Unmodelled dynamics uncertainty
Although we have spent a considerable amount of time on modelling uncertainty and deriving weights, we have not yet addressed the most important reason for using frequency-domain (H∞) uncertainty descriptions and complex perturbations, namely the incorporation of unmodelled dynamics. Of course, unmodelled dynamics is close to neglected dynamics, which we have just discussed, but it is not quite the same: in unmodelled dynamics we also include unknown dynamics of unknown or even infinite order. To represent unmodelled dynamics we usually use a simple multiplicative weight of the form

    w_I(s) = (τs + r₀) / ((τ/r_∞)s + 1)        (7.28)

where r₀ is the relative uncertainty at steady-state, 1/τ is (approximately) the frequency at which the relative uncertainty reaches 100%, and r_∞ is the magnitude of the weight at high frequency (typically, r_∞ ≥ 2). Based on the above examples and discussions, it is hoped that the reader has now accumulated the necessary insight to select reasonable values for the parameters r₀, r_∞ and τ for a specific application. The following exercise provides further support and gives a good review of the main ideas.

Exercise 7.1 Suppose that the nominal model of a plant is

    G(s) = 1/(s + 1)

and the uncertainty in the model is parameterized by multiplicative uncertainty with the weight

    w_I(s) = (0.125s + 0.25) / ((0.125/4)s + 1)

Call the resulting set Π. Now find the extreme parameter values in each of the plants (a)-(g) below so that each plant belongs to the set Π. All parameters are assumed to be positive. One approach is to plot l_I(ω) = |G′/G − 1| in (7.15) for each candidate G′ and adjust the parameter in question until l_I just touches |w_I(jω)|.

(a) Neglected delay: Find the largest θ for G′ = G e^{−θs} (Answer: 0.13).
(b) Neglected lag: Find the largest τ for G′ = G/(τs + 1) (Answer: 0.15).
(c) Uncertain pole: Find the range of a for G′ = 1/(s + a) (Answer: 0.8 to 1.33).
(d) Uncertain pole (time constant form): Find the range of T for G′ = 1/(Ts + 1) (Answer: 0.7 to 1.5).
(e) Neglected resonance: Find the range of ζ for G′ = G/((s/ω₀)² + 2ζ(s/ω₀) + 1).
(f) Neglected dynamics: Find the largest integer m for G′ = G (1/(0.01s + 1))^m (Answer: 13).
(g) Neglected RHP-zero: Find the largest z for G′ = G(−zs + 1)/(zs + 1) (Answer: 0.07).

These results imply that a control system which meets given stability and performance requirements for all plants in Π is also guaranteed to satisfy the same requirements for the above plants (a)-(g).

(h) Repeat the above with a new nominal plant G(s) = 1/(s − 1) (and with everything else the same, except that 1/(Ts + 1) is changed to 1/(Ts − 1)). (Answer: same as above.)

Exercise 7.2 Repeat Exercise 7.1 with a new weight,

    w_I(s) = (s + 0.3) / ((1/3)s + 1)
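The "adjust the parameter until l_I just touches |w_I|" approach suggested for Exercise 7.1 is easy to automate. A sketch for case (a), neglected delay, assuming the nominal model and weight as reconstructed above (note that for a relative factor like e^{−θs} the nominal G cancels out of l_I):

```python
import cmath

# Exercise 7.1(a): largest delay theta such that the relative error
# l_I(w) = |exp(-j*w*theta) - 1| stays below |w_I(jw)| at all frequencies.
def wI(s):
    return (0.125 * s + 0.25) / ((0.125 / 4) * s + 1)

def fits(theta, n=400):
    """True if the delayed plant stays inside the uncertainty set."""
    for i in range(n):
        w = 10 ** (-1 + 3.5 * i / (n - 1))        # 0.1 .. ~316 rad/s
        if abs(cmath.exp(-1j * w * theta) - 1) > abs(wI(1j * w)):
            return False
    return True

# Bisect for the largest admissible delay.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if fits(mid) else (lo, mid)
print(round(lo, 3))
```

The binding frequency turns out to be where |w_I| crosses 1, which is consistent with the answer of about 0.13 quoted in the exercise.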

We end this section with a couple of remarks on uncertainty modelling:

1. We can usually get away with just one source of complex uncertainty for SISO systems.
2. With an H∞ uncertainty description, it is possible to represent time delays (corresponding to an infinite-dimensional plant) and unmodelled dynamics of infinite order using a nominal model and associated weights of finite order.

7.5 SISO Robust stability
We have so far discussed how to represent the uncertainty mathematically. In this section, we derive conditions which will ensure that the system remains stable for all perturbations in the uncertainty set, and then in the subsequent section we study robust performance.

Figure 7.10: Feedback system with multiplicative uncertainty

7.5.1 RS with multiplicative uncertainty
We want to determine the stability of the uncertain feedback system in Figure 7.10 when there is multiplicative (relative) uncertainty of magnitude |w_I(jω)|. With uncertainty, the loop transfer function becomes

    L_p = G_p K = GK(1 + w_I Δ_I) = L + w_I L Δ_I,   |Δ_I(jω)| ≤ 1        (7.29)

As always, we assume (by design) stability of the nominal closed-loop system (i.e. with Δ_I = 0). For simplicity, we also assume that the loop transfer function L_p is stable. We now use the Nyquist stability condition to test for robust stability of the closed-loop system. We have

    RS  ⟺  system stable for all L_p  ⟺  L_p should not encircle the point −1, ∀L_p        (7.30)

Figure 7.11: Nyquist plot of L_p for robust stability

1. Graphical derivation of RS-condition. Consider the Nyquist plot of L_p as shown in Figure 7.11. Convince yourself that |1 + L| is the distance from the point −1 to the centre of the disc representing L_p, and that |w_I L| is the radius of the disc. Encirclements are avoided if none of the discs cover −1, and we get from Figure 7.11

    RS ⟺ |w_I L| < |1 + L|, ∀ω        (7.31)
       ⟺ |w_I L| / |1 + L| < 1, ∀ω        (7.32)
       ⟺ |w_I T| < 1, ∀ω        (7.33)

Thus, the requirement of robust stability for the case with multiplicative uncertainty gives an upper bound on the complementary sensitivity:

    RS ⟺ |T| < 1/|w_I|, ∀ω        (7.34)

We see that we have to detune the system (i.e. make T small) at frequencies where the relative uncertainty |w_I| exceeds 1 in magnitude. Condition (7.34) is exact (necessary and sufficient) provided there exist uncertain plants such that at each frequency all perturbations satisfying |Δ(jω)| ≤ 1 are possible. If this is not the case, then (7.34) is only sufficient for RS; e.g. this is the case if the perturbation is restricted to be real, as for the parametric gain uncertainty in (7.6). Note that for SISO systems w_I = w_O and T = T_I = GK(1 + GK)⁻¹, so the condition could equivalently be written in terms of |w_I T_I| or |w_O T|.

Example 7.6 Consider the following nominal plant and PI-controller:

    G(s) = 3(−2s + 1)/((5s + 1)(10s + 1)),   K(s) = K_c (12.7s + 1)/(12.7s)

Recall that this is the inverse-response process from Chapter 2. Initially, we select K_c = K_c1 = 1.13 as suggested by the Ziegler-Nichols tuning rule. It results in a nominally stable closed-loop system. Suppose that one "extreme" uncertain plant is G′(s) = 4(−3s + 1)/((4s + 1)²). For this plant the relative error |(G′ − G)/G| is 0.33 at low frequencies; it is 1 at about 0.1 rad/s, and it is 5.25 at high frequencies. Based on this and (7.28), we choose the following uncertainty weight:

    w_I(s) = (10s + 0.33) / ((10/5.25)s + 1)

which closely matches this relative error. We now want to evaluate whether the system remains stable for all possible plants as given by G_p = G(1 + w_I Δ_I), where Δ_I(s) is any perturbation satisfying |Δ_I| ≤ 1. This is not the case, as seen from Figure 7.12, where the magnitude of the nominal complementary sensitivity function T₁ = GK₁/(1 + GK₁) exceeds the bound 1/|w_I| from about 0.1 to 1 rad/s. To achieve robust stability we need to reduce the controller gain; by trial and error we find that reducing the gain to K_c2 = 0.31 just achieves RS, as is seen from the curve for T₂ = GK₂/(1 + GK₂) in Figure 7.12.
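The RS test of Example 7.6 can be reproduced by sweeping condition (7.33) over frequency; the plant, weight, and gains follow the example as reconstructed above:

```python
import cmath

# RS test (7.33): |w_I T| < 1 at all frequencies, for Example 7.6.
def wI(s):  return (10 * s + 0.33) / ((10 / 5.25) * s + 1)
def G(s):   return 3 * (-2 * s + 1) / ((5 * s + 1) * (10 * s + 1))
def K(s, Kc): return Kc * (12.7 * s + 1) / (12.7 * s)

def max_wIT(Kc, n=400):
    worst = 0.0
    for i in range(n):
        w = 10 ** (-3 + 5 * i / (n - 1))     # logspace 1e-3 .. 1e2 rad/s
        s = 1j * w
        L = G(s) * K(s, Kc)
        T = L / (1 + L)
        worst = max(worst, abs(wI(s) * T))
    return worst

print(max_wIT(1.13))   # Ziegler-Nichols gain: bound violated (no RS)
print(max_wIT(0.31))   # detuned gain: |w_I T| stays (just) below 1
```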
We have thus evaluated whether the system remains stable for all possible plants G_p = G(1 + w_I Δ_I) with |Δ_I(jω)| ≤ 1, and found that the controller gain must be reduced to achieve robust stability.

Figure 7.12: Checking robust stability with multiplicative uncertainty

Remark. For the "extreme" plant G′(s) we find, as expected, that the closed-loop system is unstable with K_c1 = 1.13. However, with K_c2 = 0.31 the system is stable with reasonable margins (and not at the limit of instability as one might have expected); indeed, we can increase the gain by almost a factor of two before we get instability. This illustrates that condition (7.34) is only a sufficient condition for stability of a specific plant, and a violation of this bound does not imply instability for that plant. On the other hand, since the set of plants is norm-bounded, with K_c2 = 0.31 there does exist an allowed complex Δ_I and a corresponding G_p = G(1 + w_I Δ_I) that yields T_p = G_p K₂/(1 + G_p K₂) on the limit of instability.

2. Algebraic derivation of RS-condition. Since L_p is assumed stable, and the nominal closed-loop is stable, the nominal loop transfer function L(jω) does not encircle −1. Therefore, since the set of plants is norm-bounded, it follows that if some L_p1 in the uncertainty set encircles −1, then there must be another L_p2 in the uncertainty set which goes exactly through −1 at some frequency. Thus

    RS ⟺ |1 + L_p| ≠ 0, ∀L_p, ∀ω        (7.35)
       ⟺ |1 + L_p| > 0, ∀L_p, ∀ω        (7.36)
       ⟺ |1 + L + w_I L Δ_I| > 0, ∀ω, ∀|Δ_I| ≤ 1        (7.37)

At each frequency, the last condition is most easily violated (the worst case) when the complex number Δ_I(jω) is selected with |Δ_I(jω)| = 1 and with phase such that the terms (1 + L) and w_I L Δ_I have opposite signs (point in opposite directions). Thus

    RS ⟺ |1 + L| − |w_I L| > 0, ∀ω ⟺ |w_I T| < 1, ∀ω        (7.38)

and we have rederived (7.33).

Remark. Unstable plants. The stability condition (7.33) also applies to the case when L and L_p are unstable, as long as the number of RHP-poles remains the same for each plant in the

uncertainty set.

3. M Δ-structure derivation of RS-condition. This derivation is a preview of a general analysis presented in the next chapter; the reader should not be too concerned if he or she does not fully understand the details at this point. The M Δ-structure provides a very general way of handling robust stability, and we will discuss it at length in the next chapter. The derivation is based on applying the Nyquist stability condition to an alternative "loop transfer function" MΔ rather than L_p. The argument goes as follows. Notice that the only source of instability in Figure 7.10 is the new feedback loop created by Δ_I. If the nominal (Δ_I = 0) feedback system is stable, then the stability of the system in Figure 7.10 is equivalent to stability of the system in Figure 7.13, where Δ = Δ_I and

    M = w_I K (1 + GK)⁻¹ G = w_I T        (7.39)

is the transfer function from the output of Δ_I to the input of Δ_I.

Figure 7.13: M Δ-structure

We assume that Δ and M = w_I T are stable; the former implies that G and G_p must have the same unstable poles, and the latter is equivalent to assuming nominal stability of the closed-loop system. We now apply the Nyquist stability condition to the system in Figure 7.13. This follows since the nominal closed-loop system is assumed stable, so we must make sure that the perturbation does not change the number of encirclements. The Nyquist stability condition then determines RS if and only if the "loop transfer function" MΔ does not encircle −1 for all Δ. We therefore get

    RS ⟺ |1 + MΔ| > 0, ∀ω, ∀|Δ| ≤ 1        (7.40)
       ⟺ 1 − |M(jω)| > 0, ∀ω        (7.41)
       ⟺ |M(jω)| < 1, ∀ω        (7.42)

Here (7.41) follows since the condition in (7.40) is most easily violated (the worst case) when Δ is selected at each frequency such that |Δ| = 1 and the terms MΔ and 1 have opposite signs (point in the opposite direction). Condition (7.42) is the same as (7.33) and (7.38), since M = w_I T; it is essentially a clever application of the small gain theorem, where we avoid the usual conservatism since any phase in MΔ is allowed.

7.5.2 Comparison with gain margin

By what factor, k_max, can we multiply the loop gain before we get instability? In other words, given

    L_p = k_p L₀,   k_p ∈ [1, k_max]        (7.43)

find the largest value of k_max such that the closed-loop system is stable.

1. Real perturbation. The exact value of k_max, which is obtained with Δ real, is the gain margin (GM) from classical control. We have (recall (2.33))

    k_max,1 = GM = 1 / |L₀(jω₁₈₀)|        (7.44)

where ω₁₈₀ is the frequency where ∠L₀(jω₁₈₀) = −180°.

2. Complex perturbation. Alternatively, represent the gain uncertainty as complex multiplicative uncertainty,

    L_p = k_p L₀ = k̄ (1 + r_k Δ) L₀,   |Δ| ≤ 1        (7.45)

where

    k̄ = (k_max + 1)/2,   r_k = (k_max − 1)/(k_max + 1)        (7.46)

Note that the nominal L = k̄ L₀ is not fixed, but depends on k_max. The robust stability condition ||w_I T||∞ < 1 (which is derived for complex Δ) with |w_I| = r_k then gives

    || r_k · k̄ L₀ / (1 + k̄ L₀) ||∞ < 1        (7.47)

Here both r_k and k̄ depend on k_max, so (7.47) must be solved iteratively to find k_max,2. Condition (7.47) would be exact if Δ were complex, but since it is not, we expect k_max,2 to be somewhat smaller than GM.

Example 7.7 To check this numerically, consider a system where ω₁₈₀ = 2 rad/s and |L₀(jω₁₈₀)| = 0.5, so the gain margin from (7.44) is k_max,1 = GM = 2. Solving (7.47) iteratively, we find k_max,2, which as expected is less than GM = 2. This illustrates the conservatism involved in replacing a real perturbation by a complex one.

Exercise 7.3 Represent the gain uncertainty in (7.43) as multiplicative complex uncertainty with nominal model G = L₀ (rather than k̄ L₀ used above). (a) Find w_I and use the RS-condition ||w_I T₀||∞ < 1 to find k_max,3. Note that no iteration is needed in this case, since the nominal model, and thus T₀ = L₀/(1 + L₀), is independent of k_max. (b) One expects k_max,3 to be even more conservative than k_max,2, since this uncertainty description is not even tight when Δ is real. Show that this is indeed the case using the numerical values from Example 7.7.
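The iterative solution of the complex-perturbation condition (7.47) can be sketched with a bisection. Since the example's loop transfer function is not fully legible here, an illustrative L₀(s) = 2/(s+1)³ (for which GM = 4) is assumed instead:

```python
# Iterative solution of the complex-perturbation RS condition (7.47),
# using an assumed illustrative loop L0(s) = 2/(s+1)^3 (not from the text).
def L0(s):
    return 2 / (s + 1) ** 3

def peak_wIT(kmax, n=2000):
    kbar = (kmax + 1) / 2
    rk = (kmax - 1) / (kmax + 1)
    worst = 0.0
    for i in range(n):
        w = 10 ** (-2 + 4 * i / (n - 1))
        L = kbar * L0(1j * w)
        worst = max(worst, abs(rk * L / (1 + L)))
    return worst

# Bisection on kmax, assuming the peak grows with kmax on this example.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if peak_wIT(mid) < 1:
        lo = mid
    else:
        hi = mid
kmax2 = lo

# Real-perturbation answer for this L0: w180 = sqrt(3), |L0(j w180)| = 0.25,
# so GM = 4; kmax2 comes out noticeably smaller, as the text predicts.
print(kmax2)
```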

7.5.3 RS with inverse multiplicative uncertainty

Figure 7.14: Feedback system with inverse multiplicative uncertainty

We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (see Figure 7.14), in which

    G_p = G (1 + w_iI(s) Δ_iI)⁻¹        (7.48)

Algebraic derivation. Assume for simplicity that the loop transfer function L_p is stable, and assume stability of the nominal closed-loop system. Robust stability is then guaranteed if encirclements by L_p(jω) of the point −1 are avoided, and since L_p is in a norm-bounded set we have

    RS ⟺ |1 + L_p| > 0, ∀L_p, ∀ω        (7.49)
       ⟺ |1 + L (1 + w_iI Δ_iI)⁻¹| > 0, ∀|Δ_iI| ≤ 1, ∀ω        (7.50)
       ⟺ |1 + w_iI Δ_iI + L| > 0, ∀|Δ_iI| ≤ 1, ∀ω        (7.51)

The last condition is most easily violated (the worst case) when Δ_iI is selected at each frequency such that |Δ_iI| = 1 and the terms 1 + L and w_iI Δ_iI have opposite signs (point in the opposite direction). Thus

    RS ⟺ |1 + L| − |w_iI| > 0, ∀ω        (7.52)
       ⟺ |w_iI S| < 1, ∀ω        (7.53)

and we find that the requirement of robust stability for the case with inverse multiplicative uncertainty gives an upper bound on the sensitivity:

    RS ⟺ |S| < 1/|w_iI|, ∀ω        (7.54)

Remark. In this derivation we have assumed that L_p is stable, but this is not necessary, as one may show by deriving the condition using the M Δ-structure. Actually, the RS-condition (7.53) applies even when the number of RHP-poles of G_p can change.

We see that we need tight control and have to make |S| small at frequencies where the uncertainty is large and |w_iI| exceeds 1 in magnitude. This may be somewhat surprising, since we intuitively expect to have to detune the system (and make S ≈ 1) when we have uncertainty, while this condition tells us to do the opposite. The reason is that this uncertainty represents pole uncertainty, and at frequencies where |w_iI| exceeds 1 we allow for poles crossing from the left- to the right-half plane (G_p becoming unstable); we then know that we need feedback (|S| < 1) in order to stabilize the system. However, |S| < 1 may not always be possible. In particular, assume that the plant has a RHP-zero at s = z. Then we have the interpolation constraint S(z) = 1, and we must as a prerequisite for RS require that |w_iI(z)| ≤ 1 (recall the maximum modulus theorem, see (5.15)). Thus, we cannot have large pole uncertainty with |w_iI(jω)| > 1 (and hence the possibility of instability) at frequencies where the plant has a RHP-zero. This is consistent with the results we obtained in Section 5.9.

7.6 SISO Robust performance

7.6.1 SISO nominal performance in the Nyquist plot

Consider performance in terms of the weighted sensitivity function, as discussed in Section 2.7.2. The condition for nominal performance (NP) is then

    NP ⟺ |w_P S| < 1, ∀ω ⟺ |w_P| < |1 + L|, ∀ω        (7.55)

Now |1 + L| represents at each frequency the distance of L(jω) from the point −1 in the Nyquist plot, so L(jω) must be at least a distance |w_P(jω)| from −1. This is illustrated graphically in Figure 7.15: for NP, L(jω) must stay outside a disc of radius |w_P(jω)| centred on −1.

7.6.2 Robust performance

For robust performance we require the performance condition (7.55) to be satisfied for all possible plants, including the worst-case uncertainty:

    RP ⟺ |w_P S_p| < 1, ∀S_p, ∀ω        (7.56)
       ⟺ |w_P| < |1 + L_p|, ∀L_p, ∀ω        (7.57)

This corresponds to requiring |ŷ/d̂| < 1 for all Δ_I in Figure 7.16, where for multiplicative uncertainty the set of possible loop transfer functions is

    L_p = L(1 + w_I Δ_I) = L + w_I L Δ_I        (7.58)

we see from Figure 7. Condition (7. ÊÈ ¸ Ñ Ü Û È ËÔ ËÔ ½ (7.17 that the condition for RP is that the two discs. with radii Û È and ÛÁ Ä .56) we have that RP is satisfied if the worst-case (maximum) weighted sensitivity at each frequency is less than 1. For RP we must require that all possible Ä Ô ´ µ stay outside a disc of radius Û È ´ µ centred on  ½.15: Nyquist plot illustration of nominal performance condition ÛÈ ½·Ä 1. Graphical derivation of RP-condition. Algebraic derivation of RP-condition.60) (7.¾ ÅÍÄÌÁÎ ÊÁ ÁÑ Ä Ã ÇÆÌÊÇÄ  ½ ¼ ½· Ä´ µ Ê Ä´ ÛÈ ´ µ µ Figure 7. Since Ä Ô at each frequency stays within a disc of radius Û Á Ä centred on Ä.16: Diagram for robust performance with multiplicative uncertainty . Since their centres are located a distance ½ · Ä apart. that is. do not overlap.61) ¸ Ñ Ü ´ ÛÈ Ë · Û Á Ì µ 2. the RP-condition becomes ÊÈ or in other words ¸ ÛÈ · ÛÁ Ä ¸ ÛÈ ´½ · ĵ ½ ÊÈ ½·Ä · ÛÁ Ä´½ · ĵ ½ ½ (7.17.59) ½ (7. From the definition in (7.57) is illustrated graphically by the Nyquist plot in Figure 7.62) - ¹ Ã Õ ¹ ÛÁ ¹ ¡Á ¹+¹ + ¹ + Õ¹ + ÛÈ Ý ¹ Figure 7.

This means that for SISO systems we can closely approximate the RP-condition in terms of an ½ problem. At a given ½ (RP) is satisfied if (see Exercise 7.63) into (7.61) for this problem is closely approximated by the following mixed sensitivity ½ condition: À ­ ­ ­ ÛÈ Ë ­ ­ ­ ­ ÛÁ Ì ­ ½ ÑÜ Ô ÛÈ Ë ¾ · ÛÁ Ì ¾ ½ (7.61) can be used to derive bounds on the loop shape Ä .65) Ä (at frequencies where (7.63) and by substituting (7. 1. we will see in the next chapter that the situation can be very different for MIMO systems. so there is little need to make use of the structured singular value.61). Remarks on RP-condition (7.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË Im ¾ 1+L|jw| −1 0 L|jw| w|jw| Re wL Figure 7.4) frequency we have that ÛÈ Ë · ÛÁ Ì Ô À Ä or if ½ · ÛÈ ½   ÛÁ ½   ÛÈ ½ · ÛÁ (at frequencies where ÛÁ ÛÈ ½) ½) (7.64) To be more precise.62) we rederive the RP-condition in (7. and the worst-case (maximum) is obtained at each frequency by selecting ¡ Á =1 such that the terms ´½ · ĵ and ÛÁ Ä¡Á (which are complex numbers) point in opposite directions. We get Ñ Ü ÛÈ ËÔ ËÔ ÛÈ ½ · Ä   ÛÁ Ä ÛÈ Ë ½   ÛÁ Ì (7.64) is within a factor of at most ¾ to condition (7. However.61).66) . Ñ Ü should be replaced by ×ÙÔ.61).17: Nyquist plot illustration of robust performance condition ÛÈ ½ · ÄÔ (strictly speaking. 2. The perturbed sensitivity is ËÔ ´Á ·ÄÔ µ ½ ½ ´½·Ä·ÛÁ Ä¡Á µ. the supremum). The RP-condition (7. we find from (A. The RP-condition (7.95) that condition (7.

66).65) and (7. see also Exercise 7.128). For each system. The loop-shaping conditions (7.65) and (7. 4.65) and (7. The worst-case weighted sensitivity is equal to skewed.66) is most ½.18.¾¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Conditions (7.4 Derive the loop-shaping bounds in (7.68) (a) Derive a condition for robust performance (RP). ÛÈ Ë · ÛÁ Ì in (7. is not equal to the worst-case weighted sensitivity. The structured singular value Ñ ÜËÔ ÛÈ ËÔ . (b) For what values of ÖÙ is it impossible to satisfy the robust performance condition? ¼ .( × ) with fixed uncertainty.10. Condition ½ and ÛÈ ½ (tight (7. (1996) who derive bounds also in terms of Ë and Ì .3. Consider two cases for the nominal loop transfer function: 1) ý ´×µ (c) Let ÖÙ ¼ ½ × .5 Also derive. Exercise 7.66) may in the general case be obtained numerically from -conditions as outlined in Remark 13 on page 320. see Section 8.5. in summary we have for this particular robust performance problem: ÛÈ Ë · ÛÁ Ì Note that and × are closely related since × ½ if and only if × ÛÈ Ë ½   ÛÁ Ì (7. Hint: Start from the RP-condition in the form ÛÈ · ÛÁ Ä ½·Ä and use the facts that ½·Ä ½  Ä and ½·Ä Ä   ½. see (8.66) which are sufficient for ÛÈ Ë · ÛÁ Ì ½ (RP). given in (7. We will discuss in much more detail in the next chapter. Thus. Exercise 7. Consider robust performance of the SISO system in Figure 7.65) is most useful at low frequencies where generally ÛÁ performance requirement) and we need Ä large. Conversely. This is discussed by Braatz et al. ÛÈ   ½ ½   ÛÁ ½   ÛÈ ÛÁ   ½ Example 7. the following necessary bounds for RP ½ and ÛÁ ½ and ÛÁ ½µ ½µ Ä Ä Hint: Use ½·Ä ½· Ä. sketch the magnitudes of S and its ¼ × and 2) þ ´×µ × ½·× performance bound as a function of frequency.67) ½. from (which must be satisfied) ÛÈ Ë · ÛÁ Ì ´for ÛÈ ´for ÛÈ ½. useful at high frequencies where generally ÛÁ ÛÈ ½ and we need Ä small. Does each system satisfy robust performance? 
.8 Robust performance problem.61) is the structured singular value ( ) for RP 3.65) and (7. for which we have ÊÈ ¸ ¬ ¬ ¬ ¬ ¬Ý ¬ ¬ ¬ ½ ¡Ù ½ ÛÈ ´×µ ¼¾ · ¼½ ÛÙ ´×µ × ÖÙ × ×·½ (7. and furthermore derive necessary bounds for RP in addition to the sufficient bounds in (7. The term ´ÆÊÈ µ for this particular problem. condition (7.63) (although many people seem to think it is).66) may be combined over different frequency ranges. (more than 100% uncertainty).

[Figure 7.18: Diagram for robust performance in Example 7.8]

Solution.
(a) The requirement for RP is |w_P S_p| < 1 for all S_p, where the possible sensitivities are given by

    S_p = 1 / (1 + GK + w_u Δ_u) = S / (1 + w_u Δ_u S)    (7.69)

The condition for RP then becomes

    RP  <=>  | w_P S / (1 + w_u Δ_u S) | < 1, for all |Δ_u| <= 1, for all ω    (7.70)

A simple analysis shows that the worst case corresponds to selecting Δ_u with magnitude 1 such that the term w_u Δ_u S is purely real and negative, and hence we have

    RP  <=>  |w_P S| / (1 - |w_u S|) < 1, for all ω    (7.71)
        <=>  |w_P S| + |w_u S| < 1, for all ω    (7.72)
        <=>  |S(jω)| < 1 / ( |w_P(jω)| + |w_u(jω)| ), for all ω    (7.73)

(b) Since any real system is strictly proper, we have |S| -> 1 as ω -> ∞, and therefore we must require |w_P(jω)| + |w_u(jω)| < 1 at high frequencies. With the weights in (7.68) this is equivalent to r_u + 0.25 < 1, so RP cannot be satisfied if r_u >= 0.75. Therefore, we must at least require r_u < 0.75 for RP.

(c) Design S_1 yields RP, while S_2 does not. This is seen by checking the RP-condition (7.73) graphically, as shown in Figure 7.19: |S_1| stays below the performance bound 1/(|w_P| + |w_u|) at all frequencies, whereas |S_2|, which has a considerably larger peak, crosses above it.

7.6.3 The relationship between NP, RS and RP

Consider a SISO system with multiplicative uncertainty, and assume that the closed-loop is nominally stable (NS). The conditions for nominal performance (NP), robust stability (RS) and robust performance (RP) can then be summarized as follows:

    NP  <=>  |w_P S| < 1, for all ω    (7.74)
    RS  <=>  |w_I T| < 1, for all ω    (7.75)
    RP  <=>  |w_P S| + |w_I T| < 1, for all ω    (7.76)
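The graphical RP test (7.73) can be evaluated numerically. The sketch below assumes the weights reconstructed above (w_P = 0.25 + 0.1/s, w_u with r_u = 0.5) and the two loop shapes of the example; treat the transfer functions as assumptions rather than exact data from the text.

```python
import numpy as np

w = np.logspace(-3, 3, 2000)
s = 1j * w
wP = 0.25 + 0.1 / s              # performance weight (assumed from (7.68))
wu = 0.5 * s / (s + 1)           # uncertainty weight with r_u = 0.5 (assumed form)

def rp_ok(L):
    # graphical RP test (7.73): |S| < 1/(|wP| + |wu|) at all frequencies
    S = 1 / (1 + L)
    return bool(np.all(np.abs(S) * (np.abs(wP) + np.abs(wu)) < 1))

L1 = 0.5 / s                         # first design
L2 = 0.5 / s * (1 - s) / (1 + s)     # second design, extra all-pass phase lag

assert rp_ok(L1)          # L1 satisfies the RP bound at all frequencies
assert not rp_ok(L2)      # L2 violates it (|S2| peaks near omega = 1)
```

The all-pass factor in L_2 adds phase lag, pushing L_2(j1) to -0.5 so that |S_2(j1)| = 2, which exceeds the bound 1/(|w_P| + |w_u|) there.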

[Figure 7.19: Robust performance test — magnitudes of S_1, S_2 and the performance bound 1/(|w_P| + |w_u|) as functions of frequency]

From this we see that a prerequisite for RP is that we satisfy NP and RS. In addition, since

    |w_P S| + |w_I T| <= 2 max{ |w_P S|, |w_I T| }    (7.77)

it follows that, if we satisfy both RS and NP, then within a factor of at most 2 we will automatically get RP. Thus, RP is not a "big issue" for SISO systems, and this is probably the main reason why there is little discussion about robust performance in the classical control literature. However, as we will see in the next chapter, for MIMO systems we may get very poor RP even though the subobjectives of NP and RS are individually satisfied.

To satisfy RS we generally want T small, whereas to satisfy NP we generally want S small. However, we cannot make both S and T small at the same frequency because of the identity S + T = 1, which implies |S| + |T| >= |S + T| = 1. This has implications for RP, since at each frequency

    |w_P S| + |w_I T| >= min{ |w_P|, |w_I| } ( |S| + |T| ) >= min{ |w_P|, |w_I| }    (7.78)

We conclude that we cannot have both |w_P| > 1 (i.e. good performance) and |w_I| > 1 (i.e. more than 100% uncertainty) at the same frequency. One explanation for this is that at frequencies where |w_I| > 1 the uncertainty will allow for RHP-zeros, and we know that we cannot have tight performance in the presence of RHP-zeros.
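The two inequalities (7.77) and (7.78) relating the NP, RS and RP measures hold at every frequency for any loop shape and weights; the example system below is an invented illustration, not data from the text.

```python
import numpy as np

w = np.logspace(-2, 2, 300)
s = 1j * w
L = 1.0 / (s * (0.1 * s + 1))          # assumed loop shape
S, T = 1 / (1 + L), L / (1 + L)
wP = 0.3 * (s + 1) / (2 * s + 0.1)     # assumed performance weight
wI = 0.4 * (s + 0.2) / (s + 2)         # assumed uncertainty weight

np_meas = np.abs(wP * S)               # NP measure, (7.74)
rs_meas = np.abs(wI * T)               # RS measure, (7.75)
rp_meas = np_meas + rs_meas            # RP measure, (7.76)

# (7.77): RP measure is at most twice the worse of the NP and RS measures
assert np.all(rp_meas <= 2 * np.maximum(np_meas, rs_meas) + 1e-12)

# (7.78): since S + T = 1, the RP measure is bounded below by min(|wP|, |wI|)
lower = np.minimum(np.abs(wP), np.abs(wI))
assert np.all(rp_meas >= lower - 1e-12)
```

So satisfying NP and RS (both measures below 1) automatically keeps the RP measure below 2, while the lower bound shows RP is unachievable wherever both weights exceed 1.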

7.6.4 The similarity between RS and RP

Robust performance may be viewed as a special case of robust stability (with multiple perturbations). This applies in general, both for SISO and MIMO systems and for any uncertainty. To see this, consider the two cases illustrated in Figure 7.20.

[Figure 7.20: (a) Robust performance with multiplicative uncertainty; (b) Robust stability with combined multiplicative and inverse multiplicative uncertainty]

As usual, the uncertain perturbations are normalized such that ||Δ_1||_∞ <= 1 and ||Δ_2||_∞ <= 1. Since we use the H∞ norm to define both uncertainty and performance, and since the weights in Figures 7.20(a) and (b) are the same, the tests for RP in case (a) and RS in case (b) are identical. This may be argued from the block diagrams, or by simply evaluating the conditions for the two cases as shown below.

(a) The condition for RP with multiplicative uncertainty was derived in (7.61), but with w_1 replacing w_P and w_2 replacing w_I. We found

    RP  <=>  |w_1 S| + |w_2 T| < 1, for all ω    (7.79)

(b) We will now derive the RS-condition for the case where L_p is stable (this assumption may be relaxed if the more general MΔ-structure is used, see (8.127)). We want the system to be closed-loop stable for all possible Δ_1 and Δ_2. RS is equivalent to avoiding encirclements of -1 by the Nyquist plot of L_p; that is, the distance between L_p and -1 must be larger than zero, and therefore

    RS  <=>  |1 + L_p| > 0, for all L_p, for all ω    (7.80)

For the combined multiplicative and inverse multiplicative uncertainty, condition (7.80) becomes

    RS  <=>  | 1 + L(1 + w_2 Δ_2)(1 - w_1 Δ_1)^{-1} | > 0,  for all |Δ_1| <= 1, |Δ_2| <= 1, for all ω    (7.81)
        <=>  | 1 + L + L w_2 Δ_2 - w_1 Δ_1 | > 0,  for all |Δ_1| <= 1, |Δ_2| <= 1, for all ω    (7.82)

Here the worst case is obtained when we choose Δ_1 and Δ_2 with magnitudes 1 such that the terms L w_2 Δ_2 and w_1 Δ_1 are in the opposite direction of the term 1 + L. We get

    RS  <=>  |1 + L| - |L w_2| - |w_1| > 0    (7.83)
        <=>  |w_1 S| + |w_2 T| < 1, for all ω    (7.84)

which is the same condition as found for RP.

7.7 Examples of parametric uncertainty

We now provide some further examples of how to represent parametric uncertainty. The perturbations Δ must be real to exactly represent parametric uncertainty.

7.7.1 Parametric pole uncertainty

Consider uncertainty in the parameter a in a state-space model, ẏ = a_p y + u, corresponding to the uncertain transfer function G_p(s) = 1/(s - a_p). More generally, consider the following set of plants:

    G_p(s) = G_0(s) / (s - a_p),    a_min <= a_p <= a_max    (7.85)

If a_min and a_max have different signs, then this means that the plant can change from stable to unstable with the pole crossing through the origin (which happens in some applications). This set of plants can be written as

    G_p(s) = G_0(s) / ( s - ā(1 + r_a Δ) ),    -1 <= Δ <= 1    (7.86)

which can be exactly described by inverse multiplicative uncertainty as in (7.48), with nominal model G(s) = G_0(s)/(s - ā) and

    w_iI(s) = r_a ā / (s - ā)    (7.87)

The magnitude of the weight w_iI(s) is equal to r_a at low frequencies. If r_a is larger than 1, then the plant can be both stable and unstable. As seen from the RS-condition in (7.53), a value of |w_iI| larger than 1 means that |S| must be less than 1 at the same frequency, which is consistent with the fact that we need feedback (S small) to stabilize an unstable plant.
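The inverse multiplicative representation (7.86)-(7.87) can be checked numerically: every plant in the set (7.85) is reproduced exactly by G/(1 + w_iI Δ) for some real |Δ| <= 1. The parameter range below is an illustrative assumption.

```python
import numpy as np

a_min, a_max = 2.0, 6.0                  # assumed range for the pole parameter a_p
a_bar = (a_min + a_max) / 2              # nominal value
r_a = (a_max - a_min) / (a_min + a_max)  # relative uncertainty in a

w = np.logspace(-2, 2, 200)
s = 1j * w
G = 1 / (s - a_bar)                      # nominal G(s) = 1/(s - a_bar), G_0 = 1
w_iI = r_a * a_bar / (s - a_bar)         # inverse multiplicative weight (7.87)

for delta in np.linspace(-1, 1, 9):
    a_p = a_bar * (1 + r_a * delta)
    Gp = 1 / (s - a_p)
    # inverse multiplicative form: Gp = G / (1 + w_iI * Delta) with Delta = -delta
    Gp_repr = G / (1 + w_iI * (-delta))
    assert np.allclose(Gp, Gp_repr)

# |w_iI| equals r_a at low frequency, consistent with the text
assert np.isclose(abs(w_iI[0]), r_a, rtol=1e-2)
```

Since Δ ranges symmetrically over [-1, 1], the sign flip between Δ and δ is immaterial; the algebra G/(1 + w_iI Δ) = 1/(s - ā + r_a ā Δ) covers exactly the poles in the stated interval.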

7.7.2 Parametric zero uncertainty

Consider zero uncertainty in the "time constant" form, as in

    G_p(s) = (1 + τ_p s) G_0(s),    τ_min <= τ_p <= τ_max    (7.90)

where the remaining dynamics G_0(s) are as usual assumed to have no uncertainty. For example, let -1 <= τ_p <= 3. Then the possible zeros z_p = -1/τ_p cross from the LHP to the RHP through infinity: z_p <= -1/3 (in LHP) and z_p >= 1 (in RHP). The set of plants in (7.90) may be written as multiplicative (relative) uncertainty with

    w_I(s) = r_τ τ̄ s / (1 + τ̄ s)    (7.91)

The magnitude |w_I(jω)| is small at low frequencies and approaches r_τ (the relative uncertainty in τ) at high frequencies. For cases with r_τ > 1 we allow the zero to cross from the LHP to the RHP (through infinity).

Zero form. Consider the following alternative form of parametric zero uncertainty:

    G_p(s) = (s + z_p) G_0(s),    z_min <= z_p <= z_max    (7.92)

which caters for zeros crossing from the LHP to the RHP through the origin (corresponding to a sign change in the steady-state gain). Both of the two zero uncertainty forms, (7.90) and (7.92), can occur in practice.

Exercise 7.6 Parametric zero uncertainty in zero form. Consider the set of plants in (7.92). Show that the resulting multiplicative weight is w_I(s) = r_z z̄ / (s + z̄) and explain why the set of plants given by (7.92) is entirely different from that with the zero uncertainty in "time constant" form in (7.90). Explain what the implications are for control if r_z > 1.
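The multiplicative weight (7.91) can be verified directly: the relative error of each perturbed plant against the nominal G(s) = (1 + τ̄ s) G_0(s) equals w_I(s) δ with real |δ| <= 1. The time-constant range below is an illustrative assumption.

```python
import numpy as np

t_min, t_max = 0.5, 1.5                    # assumed time-constant range
t_bar = (t_min + t_max) / 2
r_t = (t_max - t_min) / (t_min + t_max)    # relative uncertainty in tau

w = np.logspace(-2, 2, 200)
s = 1j * w
wI = r_t * t_bar * s / (1 + t_bar * s)     # multiplicative weight (7.91)

for delta in np.linspace(-1, 1, 9):
    tau_p = t_bar * (1 + r_t * delta)
    # relative error of (1 + tau_p s)G0 against nominal (1 + t_bar s)G0
    rel_err = ((1 + tau_p * s) - (1 + t_bar * s)) / (1 + t_bar * s)
    assert np.allclose(rel_err, wI * delta)
    assert np.all(np.abs(rel_err) <= np.abs(wI) + 1e-12)

# |wI| is small at low frequency and approaches r_t at high frequency
assert abs(wI[0]) < 0.01 and np.isclose(abs(wI[-1]), r_t, rtol=1e-2)
```

This confirms the shape described in the text: zero uncertainty in time-constant form acts at high frequency, the opposite of the pole uncertainty weight (7.87).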
It is also interesting to consider another form of pole uncertainty, namely that associated with the time constant:

    G_p(s) = G_0(s) / (τ_p s + 1),    τ_min <= τ_p <= τ_max    (7.88)

This results in uncertainty in the pole location, but the set of plants is entirely different from that in (7.85), which allows for changes in the steady-state gain. The reason is that in (7.85) the uncertainty affects the model at low frequency, whereas in (7.88) the uncertainty affects the model at high frequency. The corresponding inverse multiplicative uncertainty weight is

    w_iI(s) = r_τ τ̄ s / (1 + τ̄ s)    (7.89)

This weight is zero at ω = 0 and approaches r_τ at high frequency, whereas the weight w_iI in (7.87) is r_a at low frequencies and approaches zero at high frequencies.

7.7.3 Parametric state-space uncertainty

The above parametric uncertainty descriptions are mainly included to gain insight. A general procedure for handling parametric uncertainty, more suited for numerical calculations, is given by Packard (1988). Consider an uncertain state-space model

    ẋ = A_p x + B_p u,    y = C_p x + D_p u    (7.93)

or equivalently

    G_p(s) = C_p (sI - A_p)^{-1} B_p + D_p    (7.94)

Assume that the underlying cause for the uncertainty is uncertainty in some real parameters δ_1, δ_2, ... (these could be temperature, mass, volume, etc.), and assume in the simplest case that the state-space matrices depend linearly on these parameters, i.e.

    A_p = A + Σ δ_i A_i,   B_p = B + Σ δ_i B_i,   C_p = C + Σ δ_i C_i,   D_p = D + Σ δ_i D_i    (7.96)

where A, B, C and D model the nominal system. It is fairly clear that we can separate out the perturbations affecting A, B, C and D, and then collect them in a large diagonal matrix Δ with the real δ_i's along its diagonal. Some of the δ_i's may have to be repeated. For example, for the uncertainty in A we may write

    A_p = A + Σ δ_i A_i = A + W_2 Δ_A W_1    (7.97)

where Δ_A is diagonal with the δ_i's along its diagonal. Introduce Φ(s) ≜ (sI - A)^{-1}, and we get

    (sI - A_p)^{-1} = (sI - A - W_2 Δ_A W_1)^{-1} = ( I - Φ(s) W_2 Δ_A W_1 )^{-1} Φ(s)    (7.98)

which is in the form of an inverse additive perturbation (see Figure 8.5(d)). This is illustrated in the block diagram of Figure 7.21.

Example 7.9 Suppose A_p depends linearly on two real parameters δ_1 and δ_2 (-1 <= δ_i <= 1), so that A_p = A + δ_1 A_1 + δ_2 A_2. If A_1 has rank 1 and A_2 has rank 2, then A_p may be written as A + W_2 Δ W_1 with Δ = diag{δ_1, δ_2, δ_2}: δ_1 appears only once, whereas δ_2 needs to be repeated, since the minimum number of repetitions of each δ_i equals the rank of the corresponding matrix A_i.
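The factorization A_p = A + W_2 Δ W_1 of (7.97), with each δ_i repeated rank(A_i) times, can be constructed numerically via rank factorizations. The matrices below are hypothetical stand-ins, since Example 7.9's specific numbers are not recoverable from this copy.

```python
import numpy as np

rng = np.random.default_rng(0)
A  = rng.standard_normal((3, 3))            # nominal A (hypothetical data)
A1 = np.outer([1.0, 0.0, 2.0], [0.0, 1.0, 0.0])  # rank-1 dependence on delta1
A2 = rng.standard_normal((3, 3))            # full-rank dependence on delta2

def rank_factor(M):
    # factor M = W2 @ W1 with inner dimension equal to rank(M)
    U, sv, Vt = np.linalg.svd(M)
    r = int(np.sum(sv > 1e-10))
    return U[:, :r] * sv[:r], Vt[:r, :]

W2_1, W1_1 = rank_factor(A1)
W2_2, W1_2 = rank_factor(A2)
W2 = np.hstack([W2_1, W2_2])
W1 = np.vstack([W1_1, W1_2])
r1, r2 = W1_1.shape[0], W1_2.shape[0]       # repetitions of delta1, delta2

d1, d2 = 0.3, -0.7
Delta = np.diag([d1] * r1 + [d2] * r2)      # delta2 repeated rank(A2) times
Ap = A + d1 * A1 + d2 * A2
assert np.allclose(Ap, A + W2 @ Delta @ W1)
assert r1 == 1 and r2 == 3
```

Here the random A_2 has full rank 3, so δ_2 is repeated three times, illustrating why the size of Δ can exceed the number of physical parameters.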

ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ¾ Ͼ ¡ Ͻ µ ½ ´×Á   Ô µ ½ ¹ + + ¹ ´×Á   Ö -matrix ¹ Figure 7. ½ £ µÜ½ · ´ £   ½µÙ ܽ  ´½ · Õ Ü¾ ܽ   ´½ · Õ£ µÜ¾   Ù ½ ½ ½ ¡ ܾ ½ . yields. etc. Zhou et al. Also.21: Uncertainty in state space Note that ƽ appears only once in ¡. note that seemingly nonlinear parameter dependencies may be rewritten in our standard linear block diagram form. and it is given that at steady-state we always have Õ£ £ (physically. we may have an upstream mixing tank where a fixed amount of A is fed in). 1996). ܽ is the concentration of reactant A.10 Parametric uncertainty and repeated perturbations. Ü ½ £ .. Consider the following state space description of a SISO plant1 ½ This is actually a simple model of a chemical reactor (CSTR) where Ù is the feed flowrate. «½·Û¾ ƾ ¾ . ¾ Õ  Õ   Õ   ½ Î ÑÓÐ × · ½ Î   ¾ Î ÑÓÐ × ¡ ¡ where the superscript £ signifies a steady-state value.102) Î Î where Î is the reactor volume. Linearization and introduction of deviation variables. Ý Ü¾ is the concentration of intermediate product B and Õ£ is the steady-state value of the feed flowrate. and Ù Õ. for ·Û½ ƽ Æ ¾ example.103). Example 7. By introducing Õ£ we get the model in (7. 1988. whereas ƾ needs to be repeated. we can handle Æ ½ (which would need Æ ½ repeated). The values of Õ£ and £ depend on the operating point. It can be shown that the minimum number of repetitions uncertainty in of each Æ in the overall ¡-matrix is equal to the rank of each matrix (Packard. with ½ Î and £ . This is related to the ranks of the matrices ½ (which has rank ½) and ¾ (which has rank ¾). Additional repetitions of the parameters Æ may be necessary if we also have and . This is illustrated next by an example. This example illustrates how most forms of parametric uncertainty can be represented in terms of the ¡representation using linear fractional transformations (LFTs). Component balances yield Ü ÔÜ · ÔÙ Ý Ü (7.

so the above description generates a set of The constant ¼ ¼ ½. The transfer function for this plant is Ô ´×µ  ´× · ´ ·¾ ½ µ´  ¼ ½ µ µ ´× · ½ · µ¾ (7. however.¾ Ô ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ  ´½ · µ ¼ may vary during operation. A little more analysis will show why. It is not suggested. it has a block-diagram representation and may thus be written as a linear fractional transformation (LFT).103) ¦ ¼ · ¼ ½Æ Æ (7.105) which may be written as a scalar LFT Ô½ ´Æ µ Ù ´Æ Æ µ Ò¾¾ · Ò½¾ Æ ´½   Ò½½ Æ µ ½ Ò¾½ (7. the plant uncertainty may be For our specific example with uncertainty in both represented as shown in Figure 7. Let us first consider the input gain uncertainty for state ½.22 where à ´×µ is a scalar controller. we may pull out the perturbations and collect them in a ¿ ¿ diagonal ¡-block with the scalar perturbation Æ repeated three times. but it is relatively straightforward with the appropriate software tools. Even though Ô½ is a nonlinear function of Æ .106) with Ò¾¾ ½ Ò½½ ¼ ¾ Ò½¾ Ò¾½ ¼ . and we will need to use repeated perturbations in order to rearrange the model to fit our standard formulation with the uncertainty represented as a block-diagonal ¡-matrix.21. The above example is included in order to show that quite complex uncertainty representations can be captured by the general framework of block-diagonal perturbations. Consequently. that such a complicated description should be used in practice for this example. Next consider the pole uncertainty caused by variations in the -matrix.22 to fit Figure 3. It is rather tedious to do this by hand.104) Note that the parameter enters into the plant model in several places. Remark.109) .108) and we may then obtain the interconnection matrix È by rearranging the block diagram of Figure 7. that is. ¢ ¾ ¡ Æ ¿ Æ Æ (7.107) and . the variations in Ô½ ´½ µ . which may be written as possible plants. which may be written as     Ô  ½ ½  ½ ¼ ·  ¼ ½  ¼ ½ ¼ ¼ ¼ Æ ¼ Æ (7. Assume that ½  ´½ · µ ¼ Ô ½   ½ ½ ¼ ½ (7. 
We have   Ô½ ´Æ µ ½  ¼   ¼ ½Æ ¼ · ¼ ½Æ ½   ¼ ¾Æ ½ · ¼ ¾Æ (7.
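The scalar LFT form b(δ) = n_22 + n_12 δ (1 - n_11 δ)^{-1} n_21 used above for the nonlinear parameter dependence can be verified numerically. The rational function below is a hypothetical stand-in, since the example's exact coefficients are not recoverable from this copy.

```python
import numpy as np

# hypothetical rational dependence g(delta) = (1 + 0.1*delta)/(1 - 0.2*delta),
# written as a scalar LFT  g = n22 + n12 * delta * (1 - n11*delta)^-1 * n21
n22, n11, n12, n21 = 1.0, 0.2, 0.3, 1.0

for delta in np.linspace(-1, 1, 21):
    g = (1 + 0.1 * delta) / (1 - 0.2 * delta)
    lft = n22 + n12 * delta / (1 - n11 * delta) * n21
    assert np.isclose(g, lft)
```

The coefficients follow from partial fractions: (1 + 0.1δ)/(1 - 0.2δ) = 1 + 0.3δ/(1 - 0.2δ), so n_22 = g(0), n_11 is the denominator coefficient, and n_12 n_21 collects the residue. Any rational dependence on δ admits such an LFT, which is what lets nonlinear parameter dependencies fit the standard block-diagram form.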

(1987) considered the following parametric uncertainty description Ô ´×µ Ô   Ô× Ô× · ½ Ô ¾ ÑÒ Ñ Ü Ô ¾ ÑÒ Ñ Ü Ô ¾ ÑÒ Ñ Ü (7.6 are Ô¾ ´×µ   ´× · ¼ ¿µ ´× · ½ µ¾ (7.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ¾ ¹  ¹ ¹ Æ   ¹ à ´×µ ÙÕ ¹  ½ ½  ¾ ¹¹¹ Õ Æ ¼ ¼ Æ  ½ ¼ ¼  ½ µ ½ ¹ ´×Á   ¹ Õ Õ¹ ¼ ¹ ½ Ý Õ¹ Figure 7. we know that because of the RHP-zero crossing through the origin. ¿  ¼ ½× ´¾× · ½µ´¼ ½× · ½µ¾ ´×µ (a) Derive and sketch the additive uncertainty weight when the nominal model is (b) Derive the corresponding robust stability condition. Laughlin et al.112) .g.22: Block diagram of parametric uncertainty ¼ and we note that it has a RHP-zero for ¼ ½ .110) From a practical point of view the pole uncertainty therefore seems to be of very little importance and we may use a simplified uncertainty description with zero uncertainty only. the performance at low frequencies will be very poor for this plant.7 Consider a “true” plant ¼ ´×µ ¿ ´¾× · ½µ. and that the steady state gain is zero for and 0. The three plants corresponding to ´×µ ·   ´´××· ½¼ µµ¾ Ô½ ´×µ ½     ´´××· ½¼ ½µ¾ µ ¼ ¼ .111) In any case. Ô ´×µ · µ   ´´×· ½ÞÔµ¾ ×  ¼ ½ ÞÔ ¼ ¿ (7. e. 7.8 Additional exercises Exercise 7.8 Uncertainty weight for a first-order model with delay. (c) Apply this test for the controller à ´×µ Is this condition tight? × and find the values of that yield stability. Exercise 7.

Comment on your answer. not quite around frequency ).114) (a) Show that the resulting stable and minimum phase weight corresponding to the uncertainty description in (7. Exercise 7.119) .15) and compare with ÛÁÄ .20) since the nominal plant is different. What kind of uncertainty might ÛÙ and ¡Ù represent? Exercise 7.117) ´½ · ÛÁ ¡Á µ with nominal model ´×µ as multiplicative uncertainty Ô ¼ ´×µ and Ö ´ ÑÜ uncertainty weight ÛÁ Ñ Ò µ ´ Ñ Ü · Ñ Ò µ. They chose the mean parameter values as ( giving the nominal model ´× µ ´×µ ¸ ×·½  × (7.¾¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ) where all parameters are assumed positive. Find the frequency where the weight crosses ½ in magnitude.1 how to represent scalar parametric gain uncertainty Ô ´×µ Ô ¼ ´×µ where ÑÒ Ô ÑÜ (7. We showed in Example 7.116) ¿ ´¾× · ½µ with multiplicative and we want to use the simplified nominal model ´×µ uncertainty.9 Consider again the system in Figure 7. (c) Find ÐÁ ´ µ using (7.2) include all possible plants? (Answer: No. Plot ÐÁ ´ µ and approximate it by a rational transfer function ÛÁ ´×µ.19) or (7. (b) Plot the magnitude of ÛÁÄ as a function of frequency.113) and suggested use of the following multiplicative uncertainty weight ÛÁÄ ´×µ ÑÜ¡ × · ½ ¡ Ì× · ½   ½ Ñ Ò× · ½  Ì × · ½ ´½ ¾ ×¾ · ½ Ì Ñ Ü  ÑÒ (7. ¡Á is here a real scalar.115) Note that this weight cannot be compared with (7.11 Parametric gain uncertainty.115) and the uncertainty model (7.118) Ö and ´×µ ¾ ÑÒ Ñ Ü Ñ Ü· ÑÒ (7. and compare this with ½ Ñ Ü . Exercise 7.17) is ÛÁÄ ´×µ × · ¼ ¾µ ´¾× · ½µ´¼ ¾ × · ½µ (7. Does the weight (7. ½ ¡Á ½. Alternatively. we can represent gain uncertainty as inverse multiplicative uncertainty:     ¥Á with Û Á Ô ´×µ ´×µ´½ · Û Á ´×µ¡ Á µ ½ where  ½ ¡ Á ½ (7.18.10 Neglected dynamics. Assume we have derived the following detailed model Ø Ð ´×µ ¿´ ¼ × · ½µ ´¾× · ½µ´¼ ½× · ½µ¾ (7.

(a) Derive (7.118) and (7.119). (Hint: the gain variation in (7.117) can be written exactly as k_p = k̄ / (1 - r_k Δ) for suitably chosen k̄ and r_k, with -1 <= Δ <= 1.)
(b) Show that the form in (7.118) does not allow for k_p = 0.
(c) Discuss why (b) may be a possible advantage.

Exercise 7.12 The model of an industrial robot arm is as follows:

    G(s) = 250 (a s^2 + 0.0001 s + 100) / ( s ( a s^2 + 0.0001(500a + 1) s + 100(500a + 1) ) )

where 0.0002 <= a <= 0.002. Sketch the Bode plot for the two extreme values of a. What kind of control performance do you expect? Discuss how you may best represent this uncertainty.

7.9 Conclusion

In this chapter we have shown how model uncertainty for SISO systems can be represented in the frequency domain using complex norm-bounded perturbations, ||Δ||_∞ <= 1. At the end of the chapter we also discussed how to represent parametric uncertainty using real perturbations.

We showed that the requirement of robust stability for the case of multiplicative complex uncertainty imposes an upper bound on the allowed complementary sensitivity, |w_I T| < 1. Similarly, the inverse multiplicative uncertainty imposes an upper bound on the sensitivity, |w_iI S| < 1. We also derived a condition for robust performance with multiplicative uncertainty, |w_P S| + |w_I T| < 1.

The approach in this chapter was rather elementary, and to extend the results to MIMO systems and to more complex uncertainty descriptions we need to make use of the structured singular value, μ. This is the theme of the next chapter, where we find that |w_I T| and |w_iI S| are the structured singular values for evaluating robust stability for the two sources of uncertainty in question, whereas |w_P S| + |w_I T| is the structured singular value for evaluating robust performance with multiplicative uncertainty.


8 ROBUST STABILITY AND PERFORMANCE ANALYSIS

The objective of this chapter is to present a general method for analyzing robust stability and robust performance of MIMO systems with multiple perturbations. Our main analysis tool will be the structured singular value, μ. We also show how the "optimal" robust controller, in terms of minimizing μ, can be designed using DK-iteration. This involves solving a sequence of scaled H∞ problems.

8.1 General control configuration with uncertainty

For useful notation and an introduction to model uncertainty the reader is referred to Sections 7.1 and 7.2.

The starting point for our robustness analysis is a system representation in which the uncertain perturbations are "pulled out" into a block-diagonal matrix,

    Δ = diag{Δ_i}    (8.1)

where each Δ_i represents a specific source of uncertainty, e.g. input uncertainty Δ_I or parametric uncertainty δ_i, where δ_i is real. To illustrate the main idea, consider Figure 8.4, where it is shown how to pull out the perturbation blocks to form Δ and the nominal system N.

If we also pull out the controller K, we get the generalized plant P, as shown in Figure 8.1. This form is useful for controller synthesis. Alternatively, if the controller is given and we want to analyze the uncertain system, we use the NΔ-structure in Figure 8.2. In Section 3.8 we discussed how to find P and N for cases without uncertainty. The procedure with uncertainty is similar and is demonstrated by examples below.

[Figure 8.1: General control configuration (for controller synthesis)]
[Figure 8.2: NΔ-structure for robust performance analysis]

As shown in (3.111), N is related to P and K by a lower LFT,

    N = F_l(P, K) ≜ P_11 + P_12 K (I - P_22 K)^{-1} P_21    (8.2)

Similarly, the uncertain closed-loop transfer function from w to z, z = F w, is related to N and Δ by an upper LFT (see (3.112)),

    F = F_u(N, Δ) ≜ N_22 + N_21 Δ (I - N_11 Δ)^{-1} N_12    (8.3)

To analyze robust stability of F, we can then rearrange the system into the MΔ-structure shown in Figure 8.3.

[Figure 8.3: MΔ-structure for robust stability analysis]
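The lower and upper LFTs (8.2) and (8.3) are straightforward to implement for constant (single-frequency) matrices; the sketch below is a minimal numeric version with hypothetical partition sizes and data.

```python
import numpy as np

def lft_lower(P, K, nz, nw):
    # N = Fl(P, K) = P11 + P12 K (I - P22 K)^-1 P21   (8.2)
    # P maps [w; u] -> [z; v]; first nz rows / nw cols form P11
    P11, P12 = P[:nz, :nw], P[:nz, nw:]
    P21, P22 = P[nz:, :nw], P[nz:, nw:]
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

def lft_upper(N, D, nd):
    # F = Fu(N, Delta) = N22 + N21 Delta (I - N11 Delta)^-1 N12   (8.3)
    # first nd rows/cols of N form N11 (the block Delta closes around)
    N11, N12 = N[:nd, :nd], N[:nd, nd:]
    N21, N22 = N[nd:, :nd], N[nd:, nd:]
    I = np.eye(nd)
    return N22 + N21 @ D @ np.linalg.solve(I - N11 @ D, N12)

# scalar sanity checks against the explicit formulas
P = np.array([[0.5, 2.0], [1.0, 0.3]])
K = np.array([[0.4]])
N = lft_lower(P, K, 1, 1)
assert np.isclose(N[0, 0], 0.5 + 2.0 * 0.4 / (1 - 0.3 * 0.4) * 1.0)

Nmat = np.array([[0.2, 1.0], [1.0, 0.5]])
F = lft_upper(Nmat, np.array([[0.5]]), 1)
assert np.isclose(F[0, 0], 0.5 + 0.5 / 0.9)
```

In a frequency-domain analysis these functions would be evaluated at each frequency point with the corresponding transfer matrices.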

[Figure 8.4: Rearranging an uncertain system into the NΔ-structure: (a) original system with multiple perturbations; (b) pulling out the perturbations]

Here M = N_11 is the transfer function from the output to the input of the perturbations.

8.2 Representing uncertainty

As usual, each individual perturbation is assumed to be stable and is normalized,

    σ̄(Δ_i(jω)) <= 1, for all ω    (8.4)

For a complex scalar perturbation we have |δ_i(jω)| <= 1, for all ω, and for a real scalar perturbation -1 <= δ_i <= 1. Since from (A.47) the maximum singular value of a block-diagonal matrix is equal to the largest of the maximum singular values of the individual blocks, it then follows for Δ = diag{Δ_i} that

    σ̄(Δ(jω)) <= 1, for all ω  <=>  ||Δ||_∞ <= 1    (8.5)

Note that Δ has structure, and therefore in the robustness analysis we do not want to allow all Δ such that (8.5) is satisfied. Only the subset which has the block-diagonal structure in (8.1) should be considered. In some cases the blocks in Δ may be repeated or may be real; that is, we have additional structure. For example, as shown in Section 7.7.3, repetition is often needed to handle parametric uncertainty.

Remark. The assumption of a stable Δ may be relaxed, but then the resulting robust stability and performance conditions will be harder to derive and more complex to state. Furthermore, if we use a suitable form for the uncertainty and allow for multiple perturbations, then we can always generate the desired class of plants with stable perturbations, so assuming Δ stable is not really a restriction.

8.2.1 Differences between SISO and MIMO systems

The main difference between SISO and MIMO systems is the concept of directions, which is only relevant in the latter. As a consequence, MIMO systems may experience much larger sensitivity to uncertainty than SISO systems. The following example illustrates that for MIMO systems it is sometimes critical to represent the coupling between uncertainty in different transfer function elements.

Example 8.1 Coupling between transfer function elements. Consider a distillation process where at steady-state

    G = [ 87.8  -86.4 ; 108.2  -109.6 ],    RGA(G) = [ 35.1  -34.1 ; -34.1  35.1 ]    (8.6)

From the large RGA-elements we know that G becomes singular for small relative changes in the individual elements. Specifically, from (6.88) we know that perturbing the 1,2-element by a relative change of -1/λ_12 makes G singular. Since variations in the steady-state gains of this magnitude or more may occur during operation of the distillation process, this seems to indicate that independent control of both outputs is impossible. However, this conclusion is incorrect, since, for a distillation process, G_p never becomes singular. The reason is that the transfer function elements are coupled due to underlying physical constraints (e.g. the material balance). Specifically, for the distillation process a more reasonable description of the gain uncertainty is (Skogestad et al., 1988)

    G_p = G + w [ δ  -δ ; -δ  δ ],    |δ| <= 1    (8.7)

where δ in this case is a real constant. For the numerical data above, det G_p = det G irrespective of δ, so G_p is never singular for this uncertainty. (Note that det G_p = det G is not generally true for the uncertainty description given in (8.7).)

Exercise 8.1 The uncertain plant in (8.7) may be represented in the additive uncertainty form G_p = G + W_2 Δ W_1, where Δ = δ is a single scalar perturbation. Find W_1 and W_2.

8.2.2 Parametric uncertainty

The representation of parametric uncertainty, as discussed in Chapter 7 for SISO systems, carries straight over to MIMO systems. However, the inclusion of parametric uncertainty may be more significant for MIMO plants because it offers a simple method of representing the coupling between uncertain transfer function elements. For example, the simple uncertainty description used in (8.7) originated from a parametric uncertainty description of the distillation process.

8.2.3 Unstructured uncertainty

Unstructured perturbations are often used to get a simple uncertainty model. We here define unstructured uncertainty as the use of a "full" complex perturbation matrix Δ, usually with dimensions compatible with those of the plant, where at each frequency any Δ(jω) satisfying σ̄(Δ(jω)) <= 1 is allowed.

Six common forms of unstructured uncertainty are shown in Figure 8.5. In Figure 8.5(a), (b) and (c) are shown three feedforward forms: additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty:

    Π_A:  G_p = G + E_A,           E_A = w_A Δ_A    (8.8)
    Π_I:  G_p = G (I + E_I),       E_I = w_I Δ_I    (8.9)
    Π_O:  G_p = (I + E_O) G,       E_O = w_O Δ_O    (8.10)
the inclusion of parametric uncertainty may be more significant for MIMO plants because it offers a simple method of representing the coupling between uncertain transfer function elements. For example. Specifically. 1988)     ¦ Ô where Æ · Û  Æ  Æ Æ Æ ½ (8. For example. Û ¼. We here define unstructured uncertainty as the use of a “full” complex perturbation matrix ¡.8) (8. additive uncertainty.

Another common form of unstructured uncertainty is coprime factor uncertainty discussed later in Section 8.12) (8. one can have several perturbations which themselves are unstructured. We have here used scalar weights Û. (e) Inverse multiplicative input uncertainty. (f) Inverse multiplicative output uncertainty In Figure 8.11) (8. (c) Multiplicative output uncertainty. inverse additive uncertainty. Remark.5: (a) Additive uncertainty.2.5(d). In practice. which may be combined . so sometimes one may want to use matrix weights.¾ ÅÍÄÌÁÎ ÊÁ Ä Û Ã ÇÆÌÊÇÄ ¹Û ¹¡ ¡ Õ ¹ (a) ¹¹ + + ¹ + + ¹ (d) Õ¹ ¡Á ¹ ÛÁ ¹ ¡Á ÛÁ Õ (b) ¹¹ + + ¹ ¹ + + ¹ Õ (e) ¹ ¹ Û Ç ¹ ¡Ç ¹ ÛÇ ¡Ç Õ (c) ¹¹ + + ¹ ¹ + + Õ¹ (f) Figure 8. we may have ¡Á at the input and ¡Ç at the output. (e) and (f) are shown three feedback or inverse forms. (d) Inverse additive uncertainty.13) The negative sign in front of the ’s does not really matter here since we assume that ¡ can have any sign. For example. but perturbation. (b) Multiplicative input uncertainty. inverse multiplicative input uncertainty and inverse multiplicative output uncertainty: ¥ ¥Á ¥Ç Ô Ô Ô ´Á   µ ½ ´Á   Á µ ½ ´Á   Ç µ ½ Û ¡ Á Û Á¡ Á Ç Û Ç¡ Ç (8. ¡ denotes the normalized perturbation and the “actual” Û¡ ¡Û. Ï ¾ ¡Ï½ where Ͻ and Ͼ are given transfer function matrices.6.

´Á · ÛÇ ¡Ç µ ¡Ç ½ ½ (8.4). Lumping uncertainty into a single perturbation For SISO systems we usually lump multiple sources of uncertainty into a single complex perturbation. ´Á · ÛÁ ¡Á µ ¡Á ½ ½ (8. this ¡ is a block-diagonal matrix and is therefore no longer truly unstructured.15). If this is not the case. In particular. If the resulting uncertainty weight is reasonable (i.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¾ into a larger perturbation. Ô where. and the subsequent analysis shows that robust stability and performance may be achieved. but then it makes a difference whether the perturbation is at the input or the output. then one may try to lump the uncertainty at the input instead. a set of plants ¥ may be represented by multiplicative output uncertainty with a scalar weight Û Ç ´×µ using Ô where. often in multiplicative form. This may also be done for MIMO systems. Since output uncertainty is frequently less restrictive than input uncertainty in terms of control performance (see Section 6. This is discussed next.15). then this lumping of uncertainty at the output is fine.17) However. using multiplicative input uncertainty with a scalar weight. Moving uncertainty from the input to the output For a scalar plant. However. This is because one cannot in general shift a perturbation from one location in the plant (say at the input) to another location (say the output) without introducing candidate plants which were not present in the original set. we first attempt to lump the uncertainty at the output. For example. one should be careful when the plant is ill-conditioned. we have Ô ´½ · ÛÁ ¡Á µ ´½ · ÛÇ ¡Ç µ and we may simply “move” the multiplicative uncertainty from the input to the output without .15) (we can use the pseudo-inverse if is singular). similar to (7. ¡ ¡Á ¡Ç .10.16) ÐÁ ´ µ Ñ Ü Ô ¾¥    ½ ´ Ô   µ´ µ¡ ÛÁ ´ µ ÐÁ ´ µ (8. similar to (7.e. 
However, in many cases this approach of lumping uncertainty either at the output or the input does not work well. This is because one cannot in general shift a perturbation from one location in the plant (say at the input) to another location (say the output) without introducing candidate plants which were not present in the original set. In particular, one should be careful when the plant is ill-conditioned. This is discussed next.

Moving uncertainty from the input to the output

For a scalar plant, we have G_p = (1 + w_I Δ_I) G = (1 + w_O Δ_O) G, and we may simply "move" the multiplicative uncertainty from the input to the output without changing the value of the weight, i.e. w_I = w_O. However, for multivariable plants we usually need to multiply by the condition number γ(G), as is shown next.

Suppose the true uncertainty is represented as unstructured input uncertainty (E_I is a full matrix) of the form

    G_p = G (I + E_I)    (8.18)

Then from (8.17) the magnitude of multiplicative input uncertainty is

    l_I(ω) = max_{G_p ∈ Π} σ̄( G^{-1} (G_p - G) ) = max_{E_I} σ̄( E_I )    (8.19)

On the other hand, if we want to represent

(8.18) as multiplicative output uncertainty, then from (8.15)

    l_O(ω) = max_{E_I} σ̄( G E_I G^{-1} )    (8.20)

which is much larger than l_I(ω) if the condition number of the plant is large. To see this, write E_I = w_I Δ_I, where we allow any Δ_I(jω) satisfying σ̄(Δ_I(jω)) <= 1, for all ω. Then at a given frequency

    l_O(ω) = |w_I| max_{Δ_I} σ̄( G Δ_I G^{-1} ) = |w_I(jω)| γ( G(jω) )    (8.21)

Example 8.2 Assume the relative input uncertainty is 10%, i.e. w_I = 0.1, and the condition number of the plant is 141.7. Then we must select l_O = w_O = 0.1 · 141.7 = 14.2 in order to represent this as multiplicative output uncertainty (this is larger than 1 and therefore not useful for controller design).

Proof of (8.21): Write at each frequency G = UΣV^H, so that G^{-1} = VΣ^{-1}U^H.
Select ¡Á À (which is a unitary matrix with all singular values equal to 1).3 Let ¥ be the set of plants generated by the additive uncertainty in (8.7) one plant ¼ in this set (corresponding to Æ ½) has ¼ ´  ½ ´ ¼ µµ ½ ¿.

6 for the uncertain plant in Figure 7. Ideally.9)). ¡ Ù ¹ ¹ À½½ À¾½ À½¾ À¾¾ Ý ¹ Figure 8.) thereby generating several perturbations. we would like to lump several sources of uncertainty into a single perturbation to get a simple uncertainty description. Ý Ô Ù. In such cases we may have to represent the uncertainty as it occurs physically (at the input.22. etc.7)–(3. Often an unstructured multiplicative output perturbations is used.23)     Exercise 8. from the above discussion we have learnt that we should be careful about doing this.2 A fairly general way of representing an uncertain plant Ô is in terms of a linear fractional transformation (LFT) of ¡ as shown in Figure 8.8)-(8.24) . represented by LFT. Obtain À for each of the six uncertainty where Ͼ ¡Ï½ (Hint for the inverse forms: ´Á Ͻ ¡Ï¾ µ ½ forms in (8.6 for the uncertain plant in Figure 7. Therefore output uncertainty works well for this particular example. in the elements.3 Obtain À in Figure 8. Fortunately. see (3.6: Uncertain plant. For uncertainty associated with unstable plant poles.13) using  ½ Ͼ . we should use one of the inverse forms in Figure 8.4 Obtain À in Figure 8. 8. Á · Ͻ ¡´Á Ͼ Ͻ ¡µ À½½ À½¾ ¡ À¾½ À¾¾ À¾¾ · À¾½ ¡´Á   À½½ ¡µ ½ À½¾ (8. at least for plants with a large condition number. we can instead represent this additive uncertainty as multiplicative output uncertainty (which is also generally preferable for a subsequent controller design) with ÐÇ ´´ ¼ µ  ½ µ ¼ ½¼.2.20(b).6.23) Exercise 8.5.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿¼½ is incorrect. see (8. However.   Conclusion. Exercise 8. Here Ô Ù À¾¾ is the nominal plant model.4 Diagonal uncertainty By “diagonal uncertainty” we mean that the perturbation is a complex diagonal matrix ¡´×µ Æ ´×µ Æ´ µ ½ (8.

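The LFT of (8.23) is straightforward to evaluate numerically. A minimal sketch (the helper name `fu` and the test matrices are illustrative assumptions, not from the text):

```python
import numpy as np

def fu(H11, H12, H21, H22, Delta):
    """Upper LFT: Fu(H, Delta) = H22 + H21 Delta (I - H11 Delta)^-1 H12."""
    I = np.eye(H11.shape[0])
    return H22 + H21 @ Delta @ np.linalg.inv(I - H11 @ Delta) @ H12

# Sanity check: multiplicative input uncertainty Gp = G (I + wI*Delta)
# is recovered as the special case H11 = 0, H12 = I, H21 = wI*G, H22 = G.
G = np.array([[1.0, 2.0], [0.0, 1.0]])
wI = 0.1
Delta = np.diag([0.3, -0.5])                   # sigma_bar(Delta) <= 1
I2, Z2 = np.eye(2), np.zeros((2, 2))
Gp = fu(Z2, I2, wI * G, G, Delta)
print(np.allclose(Gp, G @ (I2 + wI * Delta)))  # True
```

The same function evaluates any of the six uncertainty forms once the corresponding H-blocks of Exercise 8.2 are written down.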
For example. in which the input variables are the flows of hot and cold water. ½ is (approximately) the frequency where the relative uncertainty reaches 100%. If we choose to represent this as unstructured input uncertainty (¡Á is a full matrix) then we must realize that this will introduce non-physical couplings at the input to the plant. namely Û´×µ × · Ö¼ ´ Ö½ µ× · ½ (8. and it increases at higher frequencies to account for neglected or uncertain dynamics (typically. Remark 2 The claim is often made that one can easily reduce the static input gain uncertainty to significantly less than 10%. resulting in a set of plants which is too large. Consider again (8. consider a bathroom shower. Remark 1 The diagonal uncertainty in (8. This type of diagonal uncertainty is always present. associated with each input.) which based on the controller output signal. is at least 10% at steady-state (Ö ¼ ¼ ½).5. the uncertainty Û . Ù . With each input Ù there is associated a separate physical system (amplifier. and the resulting robustness analysis may be conservative (meaning that we may incorrectly conclude that the system may not meet its specifications). valve.28).¿¼¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ (usually of the same size as the plant). let us consider uncertainty in the input channels. and since it has a scalar origin it may be represented using the methods presented in Chapter 7. Ö ½ ¾).27) Normally we would represent the uncertainty in each input or output channel using a simple weight in the form given in (7. Typically. but this is probably not true in most cases. generates a physical plant input Ñ Ñ ´×µÙ (8. A commonly suggested method to reduce the uncertainty is to measure the actual input (Ñ ) and employ local feedback (cascade control) to readjust Ù .27) originates from independent scalar uncertainty in each input channel. As a simple example. 
but for representing the uncertainty it is important to notice that it originates at the input.25) The scalar transfer function ´×µ is often absorbed into the plant model ´×µ. .26) Ô ´×µ which after combining all input channels results in diagonal input uncertainty for the plant Ô ´×µ ´½ · ÏÁ ¡Á µ ¡Á Æ ÏÁ ÛÁ (8. actuator. We can represent this actuator uncertainty as multiplicative (relative) uncertainty given by ´×µ´½ · ÛÁ ´×µÆ ´×µµ Æ ´ µ ½ (8. and Ö ½ is the magnitude of the weight at higher frequencies. To make this clearer.28) where Ö¼ is the relative uncertainty at steady-state.25). this is the case if ¡ is diagonal in any of the six uncertainty forms in Figure 8. Diagonal uncertainty usually arises from a consideration of uncertainty or neglected dynamics in the individual input channels (actuators) or in the individual output channels (sensors). etc. signal converter.

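The frequency-dependence of the weight in (8.28) is easily checked numerically. A small sketch with assumed illustrative values r0 = 0.1, r∞ = 2 and τ = 1 (10% static uncertainty, 200% at high frequency):

```python
import numpy as np

# Sketch of the first-order uncertainty weight of (8.28):
#   w(s) = (tau*s + r0) / ((tau/rinf)*s + 1)
r0, rinf, tau = 0.1, 2.0, 1.0   # assumed illustrative values

def w(s):
    return (tau * s + r0) / ((tau / rinf) * s + 1)

print(abs(w(0)))                     # 0.1 = r0: relative uncertainty at steady state
print(round(abs(w(1j / tau)), 2))    # ~0.9: roughly 100% uncertainty near omega = 1/tau
print(round(abs(w(1j * 1e6)), 3))    # 2.0 = rinf: high-frequency asymptote
```

This confirms the interpretation of the three parameters given in the text.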
assume that the nominal flow in our shower is 1 l/min and we want to increase it to 1.1 l/min, that is, in terms of deviation variables we want u = 0.1 [l/min]. Suppose the vendor guarantees that the measurement error is less than 1%. Then, even with this small absolute error, the actual flow rate may have increased from 0.99 l/min (measured value of 1 l/min is 1% too high) to 1.11 l/min (measured value of 1.1 l/min is 1% too low), corresponding to a change u′ = 0.12 [l/min], and an input gain uncertainty of 20%.

In conclusion, diagonal input uncertainty, as given in (8.27), should always be considered because:

1. It is always present, and a system which is sensitive to this uncertainty will not work in practice.
2. It often restricts achievable performance with multivariable control.

8.3 Obtaining P, N and M

We will now illustrate, by way of an example, how to obtain the interconnection matrices P, N and M in a given situation.

Example 8.4 System with input uncertainty. Consider a feedback system with multiplicative input uncertainty ΔI as shown in Figure 8.7. Here WI is a normalization weight for the uncertainty and WP is a performance weight. We want to derive the generalized plant P in Figure 8.1, which has inputs [uΔ; w; u] and outputs [yΔ; z; v]. By writing down the equations, or simply by inspecting Figure 8.7 (remembering to break the loop before and after K), we get the generalized plant P given in (8.29).

[Figure 8.7: System with multiplicative input uncertainty and performance measured at the output]

    P = [ 0       0     WI   ;
          WP G    WP    WP G ;
          −G      −I    −G   ]                                    (8.29)

It is recommended that the reader carefully derives P (as instructed in Exercise 8.5). Note that the transfer function from uΔ to yΔ (the upper left element in P) is 0, because uΔ has no direct effect on yΔ (except through K). Next, partition P to be compatible with K, i.e.

    P11 = [ 0      0  ;        P12 = [ WI   ;
            WP G   WP ],               WP G ]                     (8.30)

    P21 = [ −G   −I ],         P22 = −G                           (8.31)

and then find N = Fl(P, K) using the lower LFT in (8.2). We get (see Exercise 8.7)

    N = [ −WI TI    −WI KS ;
          WP SG     WP S   ]                                      (8.32)

Alternatively, we can derive N directly from Figure 8.7 by evaluating the closed-loop transfer function from inputs [uΔ; w] to outputs [yΔ; z] (without breaking the loop before and after K). For example, to derive N12, which is the transfer function from w to yΔ, we start at the output yΔ and move backwards to the input w using the MIMO Rule in Section 3.2 (we first meet WI, then K, and we then exit the feedback loop and get the term (I + GK)⁻¹), so N12 = −WI K(I + GK)⁻¹ = −WI KS.

The upper left block, N11, in (8.32) is the transfer function from uΔ to yΔ. This is the transfer function M needed in Figure 8.3 for evaluating robust stability. Thus, we have M = WI K(I + GK)⁻¹G = WI TI (the negative sign does not matter, as it may be absorbed into Δ).

Exercise 8.5 Show in detail how P in (8.29) is derived.

Exercise 8.6 For the system in Figure 8.7 we see easily from the block diagram that the uncertain transfer function from w to z is F = WP (I + G(I + WI ΔI)K)⁻¹. Show that this is identical to Fu(N, Δ) evaluated using (8.2). You will note that the algebra is quite tedious, and that it is much simpler to derive N directly from the block diagram as described above.

Exercise 8.7 Derive N in (8.32) from P in (8.29) using the lower LFT in (8.2).

Remark. Of course, deriving N from P is straightforward using available software. For example, in the MATLAB μ-toolbox we can evaluate N = Fl(P, K) using the command N=starp(P,K), and with a specific Δ the perturbed transfer function Fu(N, Δ) from w to z is obtained with the command F=starp(delta,N).

Exercise 8.8 Derive P and N for the case when the multiplicative uncertainty is at the output rather than the input.

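For a scalar plant the lower LFT N = Fl(P, K) from (8.29)–(8.32) can be verified directly. A sketch (with assumed numeric values of g, k, wI, wP at a single frequency, chosen only for illustration) checking that Fl(P, K) reproduces the four blocks −wI tI, −wI k s, wP s g and wP s of (8.32):

```python
import numpy as np

# Assumed scalar values of G, K, WI, WP at one frequency (illustrative only)
g, k, wI, wP = 2.0, 0.5, 0.1, 0.3
s = 1 / (1 + g * k)        # sensitivity S
tI = k * g * s             # complementary sensitivity TI (scalar case: TI = T)

# Generalized plant P of (8.29), partitioned compatibly with K
P11 = np.array([[0.0, 0.0], [wP * g, wP]])
P12 = np.array([[wI], [wP * g]])
P21 = np.array([[-g, -1.0]])
P22 = np.array([[-g]])

# Lower LFT: N = Fl(P, K) = P11 + P12 K (I - P22 K)^-1 P21
K = np.array([[k]])
N = P11 + P12 @ K @ np.linalg.inv(np.eye(1) - P22 @ K) @ P21

print(np.allclose(N, [[-wI * tI, -wI * k * s],
                      [wP * s * g, wP * s]]))   # True: matches (8.32)
```

This is exactly what the `starp(P,K)` command in the remark above computes.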
Exercise 8.9 Find P for the uncertain system in Figure 7.18.

Exercise 8.10 Find P for the uncertain plant Gp in (8.23) when w = r and z = y − r. What is M?

Exercise 8.11 Find the interconnection matrix N for the uncertain system in Figure 7.18.

Exercise 8.12 Find the transfer function M = N11 for studying robust stability for the uncertain plant Gp in (8.23).

Exercise 8.13 M Δ-structure for combined input and output uncertainties. Consider the block diagram in Figure 8.8, where we have both input and output multiplicative uncertainty blocks. The set of possible plants is given by

    Gp = (I + W2O ΔO W1O) G (I + W2I ΔI W1I)                      (8.33)

where ‖ΔI‖∞ ≤ 1 and ‖ΔO‖∞ ≤ 1. Collect the perturbations into Δ = diag{ΔI, ΔO} and rearrange Figure 8.8 into the M Δ-structure in Figure 8.3. Show that

    M = [ W1I   0   ]  [ −TI   −KS ]  [ W2I   0   ]
        [ 0    W1O  ]  [ SG    −T  ]  [ 0    W2O  ]               (8.34)

[Figure 8.8: System with input and output multiplicative uncertainty]

Exercise 8.14 Find N for the uncertain system in Figure 7.20(b).

8.4 Definitions of robust stability and robust performance

We have discussed how to represent an uncertain set of plants in terms of the N Δ-structure in Figure 8.2. The next step is to check whether we have stability and acceptable performance for all plants in the set:

1. Robust stability (RS) analysis: with a given controller K we determine whether the system remains stable for all plants in the uncertainty set.

2. Robust performance (RP) analysis: if RS is satisfied, we determine how "large" the transfer function from exogenous inputs w to outputs z may be for all plants in the uncertainty set.

Before proceeding, we need to define performance more precisely. In Figure 8.2, w represents the exogenous inputs (normalized disturbances and references), and z the exogenous outputs (normalized errors). We have z = Fu(N, Δ)w, where the uncertain closed-loop transfer function is

    F = Fu(N, Δ) ≜ N22 + N21 Δ (I − N11 Δ)⁻¹ N12                  (8.35)

We here use the H∞ norm to define performance, and require for RP that ‖F‖∞ < 1 for all allowed Δ's. A typical choice is F = wP Sp (the weighted sensitivity function), where wP is the performance weight (capital P for performance) and Sp represents the set of perturbed sensitivity functions (lower-case p for perturbed).

In terms of the N Δ-structure in Figure 8.2, our requirements for stability and performance can then be summarized as follows:

    NS  ⇔  N is internally stable                                 (8.36)
    NP  ⇔  ‖N22‖∞ < 1;  and NS                                    (8.37)
    RS  ⇔  F = Fu(N, Δ) is stable, ∀Δ, ‖Δ‖∞ ≤ 1;  and NS           (8.38)
    RP  ⇔  ‖F‖∞ < 1, ∀Δ, ‖Δ‖∞ ≤ 1;  and NS                         (8.39)

These definitions of RS and RP are useful only if we can test them in an efficient manner, that is, without having to search through the infinite set of allowable perturbations Δ. We will show how this can be done by introducing the structured singular value, μ, as our analysis tool. At the end of the chapter we also discuss how to synthesize controllers such that we have "optimal robust performance", by minimizing μ over the set of stabilizing controllers.

Remark 1 Important. As a prerequisite for nominal performance (NP), robust stability (RS) and robust performance (RP), we must first satisfy nominal stability (NS). This is because the frequency-by-frequency conditions can also be satisfied for unstable systems.

Remark 2 Convention for inequalities. In this book we use the convention that the perturbations are bounded such that they are less than or equal to one, ‖Δ‖∞ ≤ 1. This results in a stability condition with a strict inequality, for example, RS ⇔ ‖M‖∞ < 1. (We could alternatively have bounded the uncertainty with a strict inequality, ‖Δ‖∞ < 1, yielding the equivalent condition RS ⇔ ‖M‖∞ ≤ 1.)

Remark 3 Allowed perturbations. For simplicity below we will use the shorthand notation

    "∀Δ"   and   "max over Δ"                                     (8.40)

to mean "for all Δ's in the set of allowed perturbations" and "maximizing over all Δ's in the set of allowed perturbations".

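The RP requirement in (8.39) can be explored by brute force at a single frequency: sample norm-bounded perturbations δ and record the worst gain of Fu(N, δ). The numeric N below is an assumed illustrative value (a 1x1-partitioned N(jω) at one frequency), not taken from the text.

```python
import numpy as np

# Assumed 1x1-partitioned N(jw) at one frequency (illustrative numbers)
N = np.array([[0.4 + 0.1j, 0.3],
              [0.6,        0.5]])

worst = 0.0
for r in np.linspace(0.0, 1.0, 21):              # |delta| <= 1
    for ph in np.linspace(0.0, 2 * np.pi, 73):
        d = r * np.exp(1j * ph)
        # Fu(N, d) = N22 + N21 d (1 - N11 d)^-1 N12 (scalar blocks)
        F = N[1, 1] + N[1, 0] * d / (1 - N[0, 0] * d) * N[0, 1]
        worst = max(worst, abs(F))

# 'worst' is a lower bound on max_delta |Fu(N, delta)| at this frequency;
# RP at this frequency requires the true maximum to be < 1.
print(worst > abs(N[1, 1]))   # True: uncertainty degrades nominal performance
```

Gridding is of course hopeless for structured Δ of realistic size, which is exactly why μ is introduced below.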
By allowed perturbations we mean that the H∞ norm of Δ is less than or equal to 1, ‖Δ‖∞ ≤ 1, and that Δ has a specified block-diagonal structure where certain blocks may be restricted to be real. To be mathematically exact, we should replace ∀Δ in (8.40) by ∀Δ ∈ BΔ, where

    BΔ ≜ { Δ ∈ Δ : ‖Δ‖∞ ≤ 1 }

is the set of unity norm-bounded perturbations with a given structure Δ. The allowed structure should also be defined, for example by

    Δ ≜ { diag[δ1 Ir1, …, δS IrS, Δ1, …, ΔF] : δi real, Δj complex mj × mj }

where in this case S denotes the number of real scalars (some of which may be repeated), and F the number of complex blocks. This gets rather involved. Fortunately, this amount of detail is rarely required, as it is usually clear what we mean by "for all allowed perturbations" or "∀Δ".

8.5 Robust stability of the M Δ-structure

Consider the uncertain N Δ-system in Figure 8.2, for which the transfer function from w to z is, as in (8.35), given by

    Fu(N, Δ) = N22 + N21 Δ (I − N11 Δ)⁻¹ N12                      (8.41)

Suppose that the system is nominally stable (with Δ = 0), that is, N is stable (which means that the whole of N, and not only N22, must be stable). We also assume that Δ is stable. We then see directly from (8.41) that the only possible source of instability is the feedback term (I − N11 Δ)⁻¹. Thus, when we have nominal stability (NS), the stability of the system in Figure 8.2 is equivalent to the stability of the M Δ-structure in Figure 8.3, where M = N11.

We thus need to derive conditions for checking the stability of the M Δ-structure. The next theorem follows from the generalized Nyquist Theorem 4.7. It applies to H∞ norm-bounded Δ-perturbations, but, as can be seen from the statement, it also applies to any other convex set of perturbations (e.g. sets with other structures or sets bounded by different norms).

Theorem 8.1 Determinant stability condition (real or complex perturbations). Assume that the nominal system M(s) and the perturbations Δ(s) are stable. Consider the convex set of perturbations Δ, such that if Δ′ is an allowed perturbation then so is cΔ′, where c is any real scalar such that |c| ≤ 1. Then the M Δ-system in Figure 8.3 is stable for all allowed perturbations (we have RS) if and only if the Nyquist plot of det(I − MΔ(s)) does not encircle the origin, ∀Δ (8.42).

Equivalently,

    Nyquist plot of det(I − MΔ(s)) does not encircle the origin, ∀Δ    (8.42)
 ⇔  det(I − MΔ(jω)) ≠ 0, ∀ω, ∀Δ                                        (8.43)
 ⇔  λi(MΔ) ≠ 1, ∀i, ∀ω, ∀Δ                                             (8.44)

Proof: Condition (8.42) is simply the generalized Nyquist Theorem (page 147) applied to a positive feedback system with a stable loop transfer function MΔ. (8.42) ⇔ (8.43) is proved by proving not(8.42) ⇒ not(8.43) and not(8.43) ⇒ not(8.42). First note that with Δ = 0, det(I − MΔ) = 1 at all frequencies. Assume there exists a perturbation Δ′ such that the image of det(I − MΔ′(s)) encircles the origin as s traverses the Nyquist D-contour. Because the Nyquist contour and its map are closed, there then exists another perturbation in the set, Δ″ = εΔ′ with ε ∈ [0, 1], and a frequency ω′, such that det(I − MΔ″(jω′)) = 0. The reverse direction, not(8.43) ⇒ not(8.42), is obvious, since by "encirclement of the origin" we also include the origin itself. Finally, the equivalence (8.43) ⇔ (8.44) follows from det(I − MΔ) = Πi λi(I − MΔ) and λi(I − MΔ) = 1 − λi(MΔ).  □

The following is a special case of Theorem 8.1, which applies to complex perturbations.

Theorem 8.2 Spectral radius condition for complex perturbations. Assume that the nominal system M(s) and the perturbations Δ(s) are stable. Consider the class of perturbations Δ, such that if Δ′ is an allowed perturbation then so is cΔ′, where c is any complex scalar such that |c| ≤ 1. Then the M Δ-system in Figure 8.3 is stable for all allowed perturbations (we have RS) if and only if

    ρ(MΔ(jω)) < 1, ∀ω, ∀Δ                                         (8.45)

or, equivalently,

    RS  ⇔  max over Δ of ρ(MΔ(jω)) < 1, ∀ω                         (8.46)

Proof: (8.45) ⇔ (8.43) is proved by proving not(8.45) ⇒ not(8.43) and not(8.43) ⇒ not(8.45). not(8.45) ⇒ not(8.43): Assume there exists a perturbation Δ′ such that ρ(MΔ′) ≥ 1 at some frequency. Then |λi(MΔ′)| ≥ 1 for some eigenvalue i, and there always exists another perturbation in the set, Δ″ = cΔ′ where c is a complex scalar with |c| ≤ 1, such that λi(MΔ″) = +1 (real and positive), and therefore det(I − MΔ″) = Πj(1 − λj(MΔ″)) = 0. not(8.43) ⇒ not(8.45): If det(I − MΔ) = 0 at some frequency for some allowed Δ, then λi(MΔ) = 1 for some i, and thus ρ(MΔ) ≥ 1. Finally, the equivalence between (8.45) and (8.46) is simply the definition of max over Δ.  □

Remark 1 The proof of (8.45) relies on adjusting the phase of a complex scalar c and thus requires the perturbation to be complex.

Remark 2 In words, Theorem 8.2 tells us that we have stability if and only if the spectral radius of MΔ is less than 1 at all frequencies and for all allowed perturbations.

Remark 3 Theorem 8.1, which applies to both real and complex perturbations, forms the basis for the general definition of the structured singular value in (8.76).

The main problem here is of course that we have to test the condition for an infinite set of Δ's, and this is difficult to check numerically.

8.6 RS for complex unstructured uncertainty

In this section, we consider the special case where Δ(s) is allowed to be any (full) complex transfer function matrix satisfying ‖Δ‖∞ ≤ 1. This is often referred to as unstructured uncertainty or as full-block complex perturbation uncertainty.

Lemma 8.3 Let Δ be the set of all complex matrices such that σ̄(Δ) ≤ 1. Then the following holds:

    max over Δ of ρ(MΔ) = max over Δ of σ̄(MΔ) = σ̄(M)              (8.47)

Proof: In general, the spectral radius ρ provides a lower bound on the spectral norm σ̄ (see (A.116)), and we have

    max over Δ of ρ(MΔ) ≤ max over Δ of σ̄(MΔ) ≤ σ̄(M) · max over Δ of σ̄(Δ) ≤ σ̄(M)    (8.48)

where the second inequality in (8.48) follows since σ̄(AB) ≤ σ̄(A) σ̄(B). To show that we actually have equality, we need to find an allowed Δ′ such that ρ(MΔ′) = σ̄(M). Such a Δ′ does indeed exist if we allow Δ′ to be a full matrix, such that all directions in Δ′ are allowed: select Δ′ = V U^H, where U and V are matrices of the left and right singular vectors of M = UΣV^H. Then σ̄(Δ′) = 1 and

    ρ(MΔ′) = ρ(UΣV^H V U^H) = ρ(UΣU^H) = ρ(Σ) = σ̄(M)

The second-to-last equality follows since U^H = U⁻¹ and the eigenvalues are invariant under similarity transformations.  □

Lemma 8.3 together with Theorem 8.2 directly yield the following theorem:

Theorem 8.4 RS for unstructured ("full") perturbations. Assume that the nominal system M(s) is stable (NS) and that the perturbations Δ(s) are stable. Then the M Δ-system in Figure 8.3 is stable for all perturbations Δ satisfying ‖Δ‖∞ ≤ 1 (i.e. we have RS) if and only if

    σ̄(M(jω)) < 1, ∀ω   ⇔   ‖M‖∞ < 1                               (8.49)

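Lemma 8.3 is easy to confirm numerically: for a full perturbation, the choice Δ′ = V U^H turns the spectral-radius bound into an equality. A small sketch with an assumed test matrix:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.5, 1.5]])           # assumed test matrix

U, S, Vh = np.linalg.svd(M)          # M = U S V^H
Delta = Vh.conj().T @ U.conj().T     # Delta' = V U^H, unitary so sigma_bar = 1

rho = max(abs(np.linalg.eigvals(M @ Delta)))
print(np.isclose(rho, S[0]))         # True: max_Delta rho(M Delta) = sigma_bar(M)
```

Since M Δ′ = UΣU^H is a similarity transformation of Σ, its eigenvalues are the singular values of M, confirming the equality in (8.47).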
Remark 1 Condition (8.49) may be rewritten as

    RS  ⇔  σ̄(M(jω)) σ̄(Δ(jω)) < 1, ∀ω, ∀Δ                          (8.50)

The sufficiency of (8.50) (⇐) also follows directly from the small gain theorem by choosing L = MΔ. The small gain theorem applies to any operator norm satisfying ‖AB‖ ≤ ‖A‖ · ‖B‖.

Remark 2 An important reason for using the H∞ norm to describe model uncertainty is that the stability condition in (8.50) is both necessary and sufficient. In contrast, use of the H2 norm yields neither necessary nor sufficient conditions for stability, since the H2 norm does not in general satisfy ‖AB‖ ≤ ‖A‖ · ‖B‖.

8.6.1 Application of the unstructured RS-condition

We will now present necessary and sufficient conditions for robust stability (RS) for each of the six single unstructured perturbations in Figure 8.5, with

    E = W2 Δ W1,   ‖Δ‖∞ ≤ 1                                       (8.51)

To derive the matrix M we simply "isolate" the perturbation, and determine the transfer function matrix

    M = W1 M0 W2                                                  (8.52)

from the output to the input of the perturbation, where M0 for each of the six cases becomes (disregarding some negative signs which do not affect the subsequent robustness condition):

    Gp = G + E:             M0 = K(I + GK)⁻¹ = KS                  (8.53)
    Gp = G(I + EI):         M0 = K(I + GK)⁻¹G = TI                 (8.54)
    Gp = (I + EO)G:         M0 = GK(I + GK)⁻¹ = T                  (8.55)
    Gp = G(I − EiA G)⁻¹:    M0 = (I + GK)⁻¹G = SG                  (8.56)
    Gp = G(I − EiI)⁻¹:      M0 = (I + KG)⁻¹ = SI                   (8.57)
    Gp = (I − EiO)⁻¹G:      M0 = (I + GK)⁻¹ = S                    (8.58)

For example, (8.54) and (8.55) follow from the diagonal elements in the M-matrix in (8.34), and the others are derived in a similar fashion. Note that the sign of M0 does not matter, as it may be absorbed into Δ. Theorem 8.4 then yields

    RS  ⇔  ‖W1 M0 W2‖∞ < 1, i.e.  σ̄(W1 M0 W2 (jω)) < 1, ∀ω         (8.59)

For instance, from (8.59) we get for multiplicative input uncertainty with a scalar weight:

    RS,  Gp = G(I + wI ΔI),  ‖ΔI‖∞ ≤ 1   ⇔   ‖wI TI‖∞ < 1           (8.60)

Note that the corresponding SISO-condition in Chapter 7 follows as a special case of (8.60). Similarly, the SISO-condition for inverse uncertainty follows as a special case of the inverse multiplicative output uncertainty in (8.58):

    RS,  Gp = (I − wiO ΔiO)⁻¹G,  ‖ΔiO‖∞ ≤ 1   ⇔   ‖wiO S‖∞ < 1      (8.61)

In general, the unstructured uncertainty descriptions in terms of a single perturbation are not "tight" (in the sense that at each frequency all complex perturbations satisfying σ̄(Δ(jω)) ≤ 1 may not be possible in practice). Thus, the above RS-conditions are often conservative.
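For a SISO loop, condition (8.60) reduces to checking the peak of |wI(jω) T(jω)| over frequency. A minimal sketch, assuming an illustrative plant g(s) = 3/(2s+1), a constant controller k = 0.5, and a weight of the form (8.28) with r0 = 0.2, r∞ = 2, τ = 1 (none of these values are from the text):

```python
import numpy as np

# Assumed illustrative SISO loop and input-uncertainty weight
def g(s):  return 3.0 / (2.0 * s + 1.0)          # plant
k = 0.5                                          # controller gain
def T(s):  return g(s) * k / (1.0 + g(s) * k)    # complementary sensitivity
def wI(s): return (s + 0.2) / (0.5 * s + 1.0)    # weight: r0 = 0.2, rinf = 2, tau = 1

omega = np.logspace(-2, 3, 500)
peak = max(abs(wI(1j * w) * T(1j * w)) for w in omega)
print(peak < 1.0)   # True: RS holds for this 20%-at-steady-state input uncertainty
```

The same frequency sweep with σ̄(W1 M0 W2(jω)) in place of the scalar product implements (8.59) for the MIMO cases.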

In order to get tighter conditions we must use a tighter uncertainty description in terms of a block-diagonal ¡. An “exception” to this is when the uncertainty blocks enter or exit from the same location in the block diagram. Since we have no weights on the perturbations. the unstructured uncertainty descriptions in terms of a single perturbation are not “tight” (in the sense that at each frequency all complex perturbations satisfying ´¡´ µµ ½ may not be possible in practice).61) In general.9: Coprime uncertainty Robust stability bounds in terms of the À ½ norm (RS ¸ Å ½ ½) are in general only tight when there is a single full perturbation block. (7. 1990). for which the set of plants is where ÅР½ÆÐ is a left coprime factorization of the nominal plant. 8.6. and has proved to be very useful in applications (McFarlane and Glover. If we norm-bound the combined (stacked) uncertainty.33) follows as a special case of (8.58): ÊË Ô ´Á   Û Ç ¡ Ç µ ½ ¡Ç ½ ½ ¸ Û ÇË ½ ½ (8.9.53) follows as a special case of the inverse multiplicative output uncertainty in (8. because they can then be stacked on top of each other or side-by-side. the above RSconditions are often conservative. see (4.60) Similarly. it allows both zeros and poles to cross into the right-half plane.62) . Thus. This uncertainty description is surprisingly general.2 RS for coprime factor uncertainty ¹ ¡Æ + ¹ - ¡Å Õ ¹ ÆÐ ¹+ ¹ +  Ã ÅР½ Õ Figure 8. in an overall ¡ which is then a full matrix.20). it Ô ´ÅÐ · ¡Å µ ½ ´ÆÐ · ¡Æ µ ¡Æ ¡Å ½ ¯ (8.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿½½ Note that the SISO-condition (7. we then get a tight condition for RS in terms of Å ½ . One important uncertainty description that falls into this category is the coprime uncertainty description shown in Figure 8.

15 Consider combined multiplicative and inverse multiplicative uncertainty at ´Á ¡ Ç Ï Ç µ ½ ´Á · ¡Ç ÏÇ µ . ¸ Ž ½¯ (8. The question is whether we can take advantage of the fact that ¡ ¡ is structured to obtain an RS-condition which is tighter than (8. ¡ Ç ¡Ç ½ and derive a necessary and sufficient condition for robust stability of the closed-loop system. Ô ½. combined uncertainty.65) We have here written “if” rather than “if and only if” since this condition is only sufficient for RS when ¡ has “no structure” (full-block uncertainty). Ô Exercise 8. To test for robust stability we rearrange the system into the Å ¡-structure and we have from (8. where we choose to norm-bound the the output.¿½¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ is reasonable to use a normalized coprime factorization of the nominal plant.25).49) ÊË ´Å ´ µµ ½ (8.64) The above robust stability result is central to the À ½ loop-shaping design procedure ÊË ¡Æ ¡Å ½ The comprime uncertainty description provides a good “generic” uncertainty description for cases where we do not use any specific a priori uncertainty information. In (8.7 RS with structured uncertainty: Motivation Consider now the presence of structured uncertainty.3 with ¡ ¡Æ ¡Å Å   à ´Á · à µ ½ ÅР½ Á ¯ (8.4 discussed in Chapter 9. so it is not normalized to be less than 1 in this case.65).   8. Note that the uncertainty magnitude is ¯. ¡Æ ½ ¯ and ¡Å ½ ¯. where ¡ ¡ is blockdiagonal. from (A. In any case. see (4. However.62) we bound the combined (stacked) uncertainty. Make a block diagram of the uncertain plant. Remark. ¡Æ ¡Å ½ ¯.63) We then get from Theorem 8. so it is not an important issue from a practical point of view. This is because this uncertainty description is most often used in a controller design procedure where the objective is to maximize the magnitude of the uncertainty (¯) such that RS is maintained. which is not quite the same as bounding the individual blocks.45) we see that these two approaches differ at most by a factor of ¾. 
One idea is to make use of the fact that stability must be independent of scaling.

[Figure 8.10: Use of block-diagonal scalings. The same uncertainty Δ = diag{Δ1, Δ2, …} is obtained, while M is replaced by the new scaled matrix DMD⁻¹.]

To this effect, introduce the block-diagonal scaling matrix

    D = diag{di Ii}                                               (8.66)

where di is a scalar and Ii is an identity matrix of the same dimension as the i'th perturbation block Δi. Now rescale the inputs and outputs of M and Δ by inserting the matrices D and D⁻¹ on both sides, as shown in Figure 8.10. This clearly has no effect on stability. Note that with the chosen form for the scalings we have for each perturbation block Δi = di Δi di⁻¹; that is, the perturbation itself is unchanged,

    DΔD⁻¹ = Δ                                                     (8.67)

This means that the RS-condition (8.65) must also apply if we replace M by DMD⁻¹ (see Figure 8.10), and we have RS if σ̄(DMD⁻¹) < 1, ∀ω. This applies for any D in (8.66), and therefore the "most improved" (least conservative) RS-condition is obtained by minimizing at each frequency the scaled singular value, and we have

    RS if  min over D(ω) ∈ D of σ̄(D(ω) M(jω) D(ω)⁻¹) < 1, ∀ω        (8.68)

where D is the set of block-diagonal matrices whose structure is compatible with that of Δ, i.e. DΔ = ΔD. Note that when Δ is a full matrix, we must select D = dI, and we then have σ̄(DMD⁻¹) = σ̄(M); so, as expected, (8.68) is identical to (8.65). However, when Δ has structure, we get more degrees of freedom in D, and σ̄(DMD⁻¹) may be significantly smaller than σ̄(M). We will return with more examples of this compatibility between the structures of Δ and D later.

Remark 1 Historically, the RS-condition in (8.68) directly motivated the introduction of the structured singular value, μ(M), discussed in detail in the next section. As one might guess, we have that μ(M) ≤ min over D of σ̄(DMD⁻¹). In fact, for block-diagonal complex perturbations we generally have that μ(M) is very close to min over D of σ̄(DMD⁻¹).

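The scaled singular value in (8.68) is cheap to evaluate for a simple structure: with Δ = diag{δ1, δ2}, a single scalar scaling D = diag(d, 1) suffices for a 2×2 M. The sketch below (assumed rank-one test matrix, chosen for illustration) shows the scaling reducing σ̄ from √10 ≈ 3.162 down to 3.

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [-1.0, -1.0]])        # assumed test matrix (rank one)

# D = diag(d, 1) commutes with Delta = diag(delta1, delta2); sweep d
ds = np.logspace(-2, 2, 4001)
scaled = min(np.linalg.svd(np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0]),
                           compute_uv=False)[0] for d in ds)

print(round(np.linalg.svd(M, compute_uv=False)[0], 3))  # 3.162: unscaled sigma_bar(M)
print(round(scaled, 2))                                 # 3.0: min_D sigma_bar(D M D^-1)
```

For this matrix the minimized value 3 is exactly μ for the diagonal structure (as found in Example 8.5 below), consistent with the upper bound being an equality for few blocks.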
Remark 2 Other norms. The RS-condition (8.68) is essentially a scaled version of the small gain theorem. Thus, a similar condition applies when we use other matrix norms: the M Δ-structure in Figure 8.3 is stable for all block-diagonal Δ's which satisfy ‖Δ(jω)‖ ≤ 1, ∀ω, if

    min over D(ω) ∈ D of ‖D(ω) M(jω) D(ω)⁻¹‖ < 1, ∀ω               (8.69)

where, as before, D is compatible with the block-structure of Δ. Any matrix norm may be used, for example the Frobenius norm, or any induced matrix norm such as ‖M‖i1 (maximum column sum), ‖M‖i∞ (maximum row sum), or ‖M‖i2 = σ̄(M), which is the one we will use. Although in some cases it may be convenient to use other norms, we usually prefer σ̄, because for this norm we get a necessary and sufficient RS-condition.

8.8 The structured singular value

The structured singular value (denoted Mu, mu, SSV or μ) is a function which provides a generalization of the singular value, σ̄, and the spectral radius, ρ. We will use μ to get necessary and sufficient conditions for robust stability and also for robust performance. How is μ defined? A simple statement is:

    Find the smallest structured Δ (measured in terms of σ̄(Δ)) which makes the matrix I − MΔ singular; then μ(M) = 1/σ̄(Δ).

Mathematically,

    μ(M)⁻¹ ≜ min over structured Δ of { σ̄(Δ) : det(I − MΔ) = 0 }    (8.70)

Clearly, μ(M) depends not only on M but also on the allowed structure for Δ. This is sometimes shown explicitly by using the notation μΔ(M).

Example 8.5 Full perturbation (Δ is unstructured). Consider

    M = [  2    2 ;
          −1   −1 ]                                               (8.71)

with σ̄(M) = 3.162. A particular smallest Δ which achieves singularity of I − MΔ is

    Δ = (1/σ̄(M)) v1 u1^H = [ 0.200  −0.100 ;
                             0.200  −0.100 ]                      (8.72)

with σ̄(Δ) = 1/σ̄(M) = 0.316, for which det(I − MΔ) = 0. Thus μ(M) = σ̄(M) = 3.162 when Δ is a full matrix.

Example 8.5 continued. Diagonal perturbation (Δ is structured). For the matrix M in (8.71), the smallest diagonal Δ which makes det(I − MΔ) = 0 is

    Δ = (1/3) [ 1    0 ;
                0   −1 ]                                          (8.73)

with σ̄(Δ) = 0.333. Thus μ(M) = 3 when Δ is a diagonal matrix.

The above example shows that μ depends on the structure of Δ. The following example demonstrates that μ also depends on whether the perturbation is real or complex.

Example 8.6 μ of a scalar. If M is a scalar, then in most cases μ(M) = |M|. This follows from (8.70) by selecting Δ = 1/M such that (1 − MΔ) = 0. However, this requires that we can select the phase of Δ such that MΔ is real, which is impossible when Δ is real and M has an imaginary component. In summary, we have for a scalar M:

    Δ complex:  μ(M) = |M|                                        (8.74)
    Δ real:     μ(M) = |M| for real M,  μ(M) = 0 otherwise          (8.75)

The definition of μ in (8.70) involves varying σ̄(Δ). However, we prefer to normalize Δ such that σ̄(Δ) ≤ 1. We can do this by scaling Δ by a factor km, and looking for the smallest km which makes the matrix I − km MΔ singular; μ is then the reciprocal of this smallest km. This results in the following alternative definition of μ, which is the one we will use.

Definition 8.1 Structured singular value. Let M be a given complex matrix and let Δ = diag{Δi} denote a set of complex matrices with σ̄(Δ) ≤ 1 and with a given block-diagonal structure (in which some of the blocks may be repeated and some may be restricted to be real). The real non-negative function μ(M), called the structured singular value, is defined by

    μ(M) ≜ 1 / min{ km : det(I − km MΔ) = 0 for structured Δ, σ̄(Δ) ≤ 1 }    (8.76)

If no such structured Δ exists, then μ(M) = 0.

A value of μ = 1 means that there exists a perturbation with σ̄(Δ) = 1 which is just large enough to make I − MΔ singular. A larger value of μ is "bad", as it means that a smaller perturbation makes I − MΔ singular, whereas a smaller value of μ is "good".

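Both halves of Example 8.5 can be verified numerically: the smallest full Δ built from the SVD, and the smallest diagonal Δ, each make I − MΔ exactly singular.

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [-1.0, -1.0]])

# Full Delta: mu(M) = sigma_bar(M) = sqrt(10) ~ 3.162,
# smallest Delta = (1/sigma1) v1 u1^H with sigma_bar(Delta) = 0.316
U, S, Vh = np.linalg.svd(M)
Dfull = (1.0 / S[0]) * np.outer(Vh[0].conj(), U[:, 0].conj())
print(round(S[0], 3))                                        # 3.162
print(round(abs(np.linalg.det(np.eye(2) - M @ Dfull)), 12))  # 0.0: singular

# Diagonal Delta = diag(d1, d2): det(I - M Delta) = 1 - 2*d1 + d2,
# which vanishes for d1 = 1/3, d2 = -1/3, so mu(M) = 3 for this structure
Ddiag = np.diag([1.0 / 3.0, -1.0 / 3.0])
print(round(abs(np.linalg.det(np.eye(2) - M @ Ddiag)), 12))  # 0.0: singular
```

Note how the extra structure (diagonal instead of full) forces a larger smallest perturbation (σ̄ = 1/3 instead of 0.316), and hence a smaller μ.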
8.8.1 Remarks on the definition of μ

1. The structured singular value was introduced by Doyle (1982). At the same time (in fact, in the same issue of the same journal), Safonov (1982) introduced the multivariable stability margin km for a diagonally perturbed system as the inverse of μ, that is, km(M) = μ(M)⁻¹. In many respects, this is a more natural definition of a robustness margin. However, μ(M) has a number of other advantages, such as providing a generalization of the spectral radius, ρ(M), and the spectral norm, σ̄(M).

2. The sequence of M and Δ in the definition of μ does not matter. This follows from the identity (A.12):

    det(I − km MΔ) = det(I − km ΔM)                                (8.77)

3. In most cases M and Δ are square, but this need not be the case. If they are non-square, then we make use of (8.77) and work with either MΔ or ΔM (whichever has the lowest dimension).

4. The Δ corresponding to the smallest km in (8.76) will always have σ̄(Δ) = 1. This is since, if det(I − km′ MΔ′) = 0 for some Δ′ with σ̄(Δ′) = c < 1, then 1/km′ cannot be the structured singular value of M: there exists a smaller scalar km = km′ c such that det(I − km MΔ) = 0, where Δ = (1/c)Δ′ is an allowed perturbation with σ̄(Δ) = 1.

5. Note that with km = 0 we obtain I − km MΔ = I, which is clearly non-singular. Thus, one possible way to obtain μ numerically is to start with km = 0, and gradually increase km until we first find an allowed Δ with σ̄(Δ) = 1 such that I − km MΔ is singular (this value of km is then 1/μ). By "allowed" we mean that Δ must have the specified block-diagonal structure and that some of the blocks may have to be real.

8.8.2 Properties of μ for real and complex Δ

Two properties of μ which hold for both real and complex perturbations are:

1. μ(αM) = |α| μ(M) for any real scalar α.

2. Let Δ = diag{Δ1, Δ2} be a block-diagonal perturbation (in which Δ1 and Δ2 may have additional structure) and let M be partitioned accordingly. Then

    μΔ(M) ≥ max{ μΔ1(M11), μΔ2(M22) }                              (8.78)

Proof: Consider det(I − (1/μ)MΔ), where μ = μΔ(M), and use Schur's formula in (A.14) with A11 = I − (1/μ)M11Δ1 and A22 = I − (1/μ)M22Δ2.  □

In words, (8.78) simply says that robustness with respect to two perturbations taken together is at least as bad as for the worst perturbation considered alone. This agrees with our intuition that we cannot improve robust stability by including another uncertain perturbation.

The remainder of this section deals with the properties and computation of μ. Readers who are primarily interested in the practical use of μ may skip most of this material.

8.8.3 Properties of μ for complex Δ

When all the blocks in Δ are complex, μ may be computed relatively easily. This is discussed below, and in more detail in the survey paper by Packard and Doyle (1993). The results are mainly based on the following result, which may be viewed as another definition of μ that applies for complex Δ only.

Lemma 8.5 For complex perturbations Δ with σ̄(Δ) ≤ 1:

    μ(M) = max over Δ, σ̄(Δ) ≤ 1, of ρ(MΔ)                          (8.79)

Proof: The lemma follows directly from the definition of μ and the equivalence between (8.43) and (8.45).  □

Most of the properties below follow easily from (8.79):

1. μ(αM) = |α| μ(M) for any (complex) scalar α.

2. For a repeated scalar complex perturbation we have

    Δ = δI (δ is a complex scalar):  μ(M) = ρ(M)                   (8.80)

Proof: Follows directly from (8.79), since there are no degrees-of-freedom for the maximization.  □

3. For a full-block complex perturbation we have from (8.47):

    Δ full matrix:  μ(M) = σ̄(M)                                    (8.81)

4. μ for complex perturbations is bounded by the spectral radius and the singular value (spectral norm):

    ρ(M) ≤ μ(M) ≤ σ̄(M)                                            (8.82)

This follows from (8.80) and (8.81), since selecting Δ = δI gives the fewest degrees-of-freedom for the optimization in (8.79), whereas selecting Δ full gives the most degrees-of-freedom.

In addition, the upper bounds given below, e.g. μ(M) ≤ min over D of σ̄(DMD⁻¹) in (8.87), generally hold only for complex perturbations. The lower bounds, e.g. μ(M) ≥ ρ(M), also hold for real or mixed real/complex perturbations Δ. This follows because complex perturbations include real perturbations as a special case.

5. Consider any unitary matrix $U$ with the same block-diagonal structure as $\Delta$. Then

$$\mu(MU) = \mu(M) = \mu(UM) \qquad (8.83)$$

Proof: Follows from (8.79) by writing $MU\Delta = M\Delta'$, where $\Delta' = U\Delta$ satisfies $\bar\sigma(\Delta') = \bar\sigma(\Delta)$, so $U$ may always be absorbed into $\Delta$. $\Box$

6. Consider any matrix $D$ which commutes with $\Delta$, i.e. $D\Delta = \Delta D$. Then

$$\mu(DM) = \mu(MD) \qquad (8.84)$$
$$\mu(DMD^{-1}) = \mu(M) \qquad (8.85)$$

Proof:
$$\mu(DM) = \max_\Delta \rho(DM\Delta) = \max_\Delta \rho(M\Delta D) = \max_\Delta \rho(MD\Delta) = \mu(MD)$$
The first equality is (8.79). The second equality applies since $\rho(AB) = \rho(BA)$ (by the eigenvalue properties in the Appendix). The key step is the third equality, which applies only when $D\Delta = \Delta D$. The fourth equality again follows from (8.79). $\Box$

7. Improved lower bound. Define $\mathcal{U}$ as the set of all unitary matrices $U$ with the same block-diagonal structure as $\Delta$. Then for complex $\Delta$

$$\mu(M) = \max_{U \in \mathcal{U}} \rho(MU) \qquad (8.86)$$

Proof: The proof of this important result is given by Doyle (1982) and Packard and Doyle (1993). It is motivated by combining (8.83) and (8.82) to yield $\mu(M) \ge \max_{U \in \mathcal{U}} \rho(MU)$. The surprise is that this is always an equality. $\Box$

Unfortunately, the optimization in (8.86) is not convex, and so it may be difficult to use in calculating $\mu$ numerically.

8. Improved upper bound. Define $\mathcal{D}$ to be the set of matrices $D$ which commute with $\Delta$ (i.e. satisfy $D\Delta = \Delta D$). Then

$$\mu(M) \le \min_{D \in \mathcal{D}} \bar\sigma(DMD^{-1}) \qquad (8.87)$$

Proof: Follows directly from (8.85) and (8.82). $\Box$

This optimization is convex, that is, it has only one minimum (the global minimum), and it may be shown (Doyle, 1982) that the inequality is in fact an equality if there are 3 or fewer blocks in $\Delta$. Furthermore, numerical evidence suggests that the bound is tight (within a few percent) for 4 blocks or more; the worst known example has an upper bound which is about 15% larger than $\mu$ (Balas et al., 1993).
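For small problems the bounds above are easy to evaluate numerically. The following sketch (Python/NumPy, for illustration only — not the MATLAB μ-toolbox used elsewhere in this book; the matrix and its entries are assumed numbers chosen for the example) grids the free phase in $U$ for the lower bound (8.86) and the free scaling in $D = \mathrm{diag}\{d, 1\}$ for the upper bound (8.87); with two 1×1 blocks the two bounds meet at $\mu$:

```python
import numpy as np

def rho(M):
    """Spectral radius."""
    return max(abs(np.linalg.eigvals(M)))

def sigma_bar(M):
    """Largest singular value."""
    return np.linalg.svd(M, compute_uv=False)[0]

def mu_lower(M, n=360):
    """Lower bound (8.86): max over diagonal unitary U of rho(M U).
    For Delta = diag(delta1, delta2) one unitary scalar may be fixed
    to 1 (property 9), so a single phase is gridded."""
    thetas = np.linspace(0, 2*np.pi, n, endpoint=False)
    return max(rho(M @ np.diag([1.0, np.exp(1j*t)])) for t in thetas)

def mu_upper(M, ds=np.logspace(-3, 3, 601)):
    """Upper bound (8.87): min over D = diag(d, 1) of sigma_bar(D M D^-1)."""
    return min(sigma_bar(np.diag([d, 1.0]) @ M @ np.diag([1/d, 1.0]))
               for d in ds)

# Example matrix M = [a a; b b] with a = 1, b = -2 (assumed values);
# for diagonal Delta, mu(M) = |a| + |b| = 3, cf. (8.98).
M = np.array([[1.0, 1.0], [-2.0, -2.0]])
print(rho(M), mu_lower(M), mu_upper(M), sigma_bar(M))
# rho = |a+b| = 1  <=  mu = 3  <=  sigma_bar = sqrt(2(|a|^2+|b|^2)) ~ 3.16
```

Here $\mu$ lies strictly between $\rho(M)$ and $\bar\sigma(M)$, illustrating (8.82), and the lower and upper bounds coincide, as they must for two blocks.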

Some examples of $D$'s which commute with $\Delta$ are:

$$\Delta = \delta I: \quad D\ \text{full matrix} \qquad (8.88)$$
$$\Delta\ \text{full matrix}: \quad D = dI \qquad (8.89)$$
$$\Delta = \mathrm{diag}\{\Delta_1(\text{full}), \Delta_2(\text{full})\}: \quad D = \mathrm{diag}\{d_1 I, d_2 I\} \qquad (8.90)$$
$$\Delta = \mathrm{diag}\{\Delta_1(\text{full}), \delta_2 I, \delta_3, \delta_4\}: \quad D = \mathrm{diag}\{d_1 I, D_2(\text{full}), d_3, d_4\} \qquad (8.91)$$

In short, we see that the structures of $\Delta$ and $D$ are "opposites".

9. For cases where $\Delta$ has one or more scalar blocks, one may simplify the optimization in (8.87) by fixing one of the scalar blocks in $D$ equal to 1. Proof: Let $D = \mathrm{diag}\{d_1, \dots, d_n\}$ and $D' = \frac{1}{d_n}D$, and note that $\bar\sigma(DMD^{-1}) = \bar\sigma(D'MD'^{-1})$, so one may without loss of generality set $d_n = 1$. $\Box$ Similarly, one may simplify the optimization in (8.86) by fixing one of the corresponding unitary scalars in $U$ equal to 1.

10. Without affecting the optimization in (8.87), we may assume the blocks in $D$ to be Hermitian positive definite, $D_i = D_i^H > 0$, and for scalars $d_i > 0$ (Packard and Doyle, 1993).

11. The following property is useful for finding $\mu(AB)$ when $\Delta$ has a structure similar to that of $A$ or $B$:

$$\mu_\Delta(AB) \le \bar\sigma(A)\,\mu_{\Delta A}(B) \qquad (8.92)$$

Here the subscript "$\Delta A$" denotes the structure of the matrix $\Delta A$, and "$\Delta$" denotes the structure of $\Delta$.

Proof: The proof is from (Skogestad and Morari, 1988a). Use the fact that $\mu(AB) = \max_\Delta \rho(AB\Delta) = \max_\Delta \rho(B\Delta A) = \max_\Delta \rho(BV)$, where $V = \Delta A$. When we maximize over $\Delta$, $V$ generates a certain set of matrices with $\bar\sigma(V) \le \bar\sigma(A)$. Let us extend this set by maximizing over all matrices $V$ with $\bar\sigma(V) \le \bar\sigma(A)$ and with the same structure as $\Delta A$. We then get $\mu(AB) \le \max_V \rho(BV) = \bar\sigma(A)\,\mu_{\Delta A}(B)$. $\Box$

Some special cases of (8.92):

(a) If $A$ is a full matrix, then the structure of $\Delta A$ is a full matrix, and we simply get $\mu_\Delta(AB) \le \bar\sigma(A)\,\bar\sigma(B)$ (which is not a very exciting result, since we always have $\mu(AB) \le \bar\sigma(AB) \le \bar\sigma(A)\,\bar\sigma(B)$).

(b) If $\Delta$ has the same structure as $A$ (e.g. they are both diagonal), then

$$\mu_\Delta(AB) \le \bar\sigma(A)\,\mu_\Delta(B) \qquad (8.93)$$

(c) If $\Delta = \delta I$ (i.e. $\Delta$ consists of repeated scalars), a corresponding bound (8.94) holds with $D$ restricted to the structure of $\Delta$. Note: (8.94) is stated incorrectly in Doyle (1982), since it is not specified there that $D$ must have the same structure as $\Delta$.

A useful special case of (8.93) is the inequality $\mu_\Delta(M\tilde\Delta) \le \mu_\Delta(M)\,\bar\sigma(\tilde\Delta)$, where $\tilde\Delta$ has the same structure as $\Delta$.

12. The following is a further generalization of these bounds (Skogestad and Morari, 1988a). Suppose the $\mu$-condition $\mu_\Delta(M) \le 1$ (for RS or RP) is given, and assume that $M$ is an LFT of some transfer function $R$ of interest:

$$M = N_{11} + N_{12}R(I - N_{22}R)^{-1}N_{21} \qquad (8.95)$$

The problem is to find an upper bound on $\bar\sigma(R)$ which guarantees that $\mu_\Delta(M) \le 1$, provided $\mu_\Delta(N_{11}) \le 1$. Skogestad and Morari (1988a) show that the best such bound, $\bar\sigma(R) \le c$ (8.96), is obtained by solving a skewed-$\mu$ problem in $N$ with respect to the enlarged structure $\hat\Delta = \mathrm{diag}\{\Delta, \Delta_R\}$, where $\Delta_R$ is a full matrix of the same dimensions as $R^T$ and the $\Delta$-block is held fixed (8.97); the bound is easily computed using skewed-$\mu$.

13. (8.97) may be used to derive a sufficient loop-shaping bound on a transfer function of interest: $R$ may, for example, be $S$, $T$, $L$, $L^{-1}$ or $K$.

Remark. The use of $\max_\Delta$ (rather than $\sup_\Delta$) in (8.79) is mathematically correct, since the set of allowed $\Delta$ is closed (with $\bar\sigma(\Delta) \le 1$). Above we have also written $\min_D$ in (8.87). To be mathematically correct, we should have used $\inf_D$, because the set of allowed $D$'s is not bounded, and therefore the exact minimum may not be achieved (although we may get arbitrarily close).

Example 8.7 Let $M$ and $\Delta$ be complex $2 \times 2$ matrices with

$$M = \begin{bmatrix} a & a \\ b & b \end{bmatrix} \qquad (8.98)$$

Then

$$\mu(M) = \begin{cases} |a| + |b| & \text{for } \Delta = \mathrm{diag}\{\delta_1, \delta_2\} \\ \bar\sigma(M) = \sqrt{2|a|^2 + 2|b|^2} & \text{for } \Delta \text{ a full matrix} \\ \rho(M) = |a + b| & \text{for } \Delta = \delta I \end{cases}$$

Here $\rho(M) = |a + b|$ since $M$ is singular and its non-zero eigenvalue is $\lambda = \mathrm{tr}(M) = a + b$, and $\bar\sigma(M) = \sqrt{2|a|^2 + 2|b|^2}$ since $M$ is singular and its non-zero singular value equals $\|M\|_F$. For the diagonal case, it is interesting to consider three different proofs of the result $\mu(M) = |a| + |b|$:

(a) A direct calculation based on the definition of $\mu$.
(b) Use of the lower "bound" $\mu(M) = \max_U \rho(MU)$ (which is always exact).
(c) Use of the upper bound $\mu(M) \le \min_D \bar\sigma(DMD^{-1})$ (which is exact here since we have only two blocks).

We will use approach (a) here, and leave (b) and (c) for Exercise 8.16.

Solution (a): We have

$$M\Delta = \begin{bmatrix} a\delta_1 & a\delta_2 \\ b\delta_1 & b\delta_2 \end{bmatrix}$$

and from (8.77) we then get

$$\det(I - M\Delta) = 1 - (a\delta_1 + b\delta_2)$$

The smallest perturbations $\delta_1$ and $\delta_2$ which make this matrix singular have equal magnitudes $|\delta_1| = |\delta_2| = \delta$, with the phases of $\delta_1$ and $\delta_2$ adjusted such that $a\delta_1 + b\delta_2 = \delta(|a| + |b|) = 1$. We get $\delta = 1/(|a| + |b|)$, and from the definition of $\mu$ in (8.76), $\mu(M) = 1/\delta = |a| + |b|$. $\Box$

Exercise 8.16 (b) For $M$ in (8.98) and a diagonal $\Delta$, show that $\mu(M) = |a| + |b|$ using the lower "bound" $\mu(M) = \max_U \rho(MU)$ (which is always exact). Hint: Use $U = \mathrm{diag}\{1, e^{j\theta}\}$ (the blocks in $U$ are unitary scalars, and we may fix one of them equal to 1). (c) For $M$ in (8.98) and a diagonal $\Delta$, show that $\mu(M) = |a| + |b|$ using the upper bound $\min_D \bar\sigma(DMD^{-1})$ (which is exact in this case since $\Delta$ has two "blocks"). Solution: Use $D = \mathrm{diag}\{d, 1\}$. Since $DMD^{-1}$ is a singular matrix, we have from (A.36) that

$$\bar\sigma(DMD^{-1}) = \|DMD^{-1}\|_F = \sqrt{|a|^2 + |a|^2 d^2 + |b|^2/d^2 + |b|^2} \qquad (8.99)$$

which we want to minimize with respect to $d$. The solution is $d^2 = |b|/|a|$, which yields $\bar\sigma(DMD^{-1}) = \sqrt{|a|^2 + 2|a||b| + |b|^2}$, and thus $\mu(M) = |a| + |b|$.

Exercise 8.17 Let $c$ be a complex scalar. Show that for $\Delta = \mathrm{diag}\{\delta_1, \delta_2\}$

$$\mu_\Delta\begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix} = \mu_\Delta\begin{bmatrix} M_{11} & cM_{12} \\ c^{-1}M_{21} & M_{22} \end{bmatrix} \qquad (8.100)$$

Example 8.8 Let $M$ be a partitioned matrix with both diagonal blocks equal to zero, i.e.

$$M = \begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$$

Then

$$\mu(M) = \begin{cases} \sqrt{\bar\sigma(A)\,\bar\sigma(B)} & \text{for } \Delta = \mathrm{diag}\{\Delta_1, \Delta_2\},\ \Delta_i \text{ full} \\ \bar\sigma(M) = \max\{\bar\sigma(A), \bar\sigma(B)\} & \text{for } \Delta \text{ a full matrix} \\ \rho(M) = \sqrt{\rho(AB)} & \text{for } \Delta = \delta I \end{cases} \qquad (8.101)$$

Proof: From the definition of eigenvalues and Schur's formula (A.14), the non-zero eigenvalues of $M$ satisfy $\lambda^2 = \lambda(AB)$, so $\rho(M) = \sqrt{\rho(AB)}$, which with (8.80) gives the result for $\Delta = \delta I$. $\bar\sigma(M) = \max\{\bar\sigma(A), \bar\sigma(B)\}$ follows since $M^H M = \mathrm{diag}\{B^H B, A^H A\}$, which with (8.81) gives the result for full $\Delta$. For the block-diagonal case, $\mu(M) = \max_{\Delta_1, \Delta_2} \sqrt{\rho(A\Delta_2 B\Delta_1)}$, and the result follows by realizing that we can always select $\Delta_1$ and $\Delta_2$ such that $\rho(A\Delta_2 B\Delta_1) = \bar\sigma(A)\,\bar\sigma(B)$ (recall (8.47)). $\Box$
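The three cases of Example 8.8 are easy to check numerically. A short Python/NumPy sketch for the scalar sub-case $A = a$, $B = b$ (assumed values $a = 1$, $b = 4$; an illustration only, not the μ-toolbox):

```python
import numpy as np

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

def sigma_bar(M):
    return np.linalg.svd(M, compute_uv=False)[0]

# M with zero diagonal blocks; scalar blocks A = a, B = b (assumed values).
a, b = 1.0, 4.0
M = np.array([[0.0, a], [b, 0.0]])

# For Delta = diag(delta1, delta2): mu(M) = sqrt(a*b), computed here from
# the upper bound (8.87) with D = diag(d, 1), exact for two blocks.
ds = np.logspace(-3, 3, 601)
mu_diag = min(sigma_bar(np.diag([d, 1.0]) @ M @ np.diag([1/d, 1.0])) for d in ds)

print(rho(M))        # rho = sqrt(a*b) = 2 : the value for Delta = delta*I
print(mu_diag)       # ~ sqrt(a*b) = 2    : the value for diagonal Delta
print(sigma_bar(M))  # sigma_bar = max(a, b) = 4 : the value for full Delta
```

Note that for this $M$ the repeated-scalar and diagonal structures give the same value $\sqrt{ab}$, while the full-block value $\max(a, b)$ is larger.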

Exercise 8.18 Let $M$ be a complex $3 \times 3$ matrix of the form of (8.98), with three identical columns $[a\ b\ c]^T$, and let $\Delta = \mathrm{diag}\{\delta_1, \delta_2, \delta_3\}$. Prove that $\mu(M) = |a| + |b| + |c|$.

Exercise 8.19 Let $A$ and $B$ be complex matrices. Show by a counterexample that in general $\mu_\Delta(AB)$ is not equal to $\mu_\Delta(BA)$. Under what conditions is $\mu_\Delta(AB) = \mu_\Delta(BA)$? (Hint: Recall (8.84).) Does the equality hold when $\Delta$ is a scalar times identity, or when $\Delta$ is full? (Answers: Yes and No.)

Exercise 8.20 Assume $D$ does not have the same structure as $\Delta$. Show by a counterexample that $\mu_\Delta(DMD^{-1})$ is then not in general equal to $\mu_\Delta(M)$.

Exercise 8.21 If (8.94) were true for any structure of $\Delta$, then it would imply $\rho(AB) \le \bar\sigma(A)\,\rho(B)$ for arbitrary $A$ and $B$. Show by a counterexample that this is not true.

8.9 Robust stability with structured uncertainty

Consider stability of the $M\Delta$-structure in Figure 8.3 for the case where $\Delta$ is a set of norm-bounded block-diagonal perturbations. Assume that the nominal system $M$ and the perturbations $\Delta$ are stable. From the determinant stability condition in (8.43), which applies to both complex and real perturbations, we get

$$\text{RS} \iff \det(I - M\Delta(j\omega)) \ne 0, \quad \forall\omega,\ \forall\Delta,\ \bar\sigma(\Delta(j\omega)) \le 1 \qquad (8.103)$$

A problem with (8.103) is that it is only a "yes/no" condition. To find the factor $k_m$ by which the system is robustly stable, we scale the uncertainty $\Delta$ by $k_m$ and look for the smallest $k_m$ which yields "borderline instability", namely

$$\det(I - k_m M\Delta) = 0 \qquad (8.104)$$

From the definition of $\mu$ in (8.76), this value is $k_m = 1/\mu(M)$, and we obtain the following necessary and sufficient condition for robust stability.

Theorem 8.6 RS for block-diagonal perturbations (real or complex). Assume that the nominal system $M$ and the perturbations $\Delta$ are stable. Then the $M\Delta$-system in Figure 8.3 is stable for all allowed perturbations with $\bar\sigma(\Delta) \le 1$, $\forall\omega$, if and only if

$$\mu(M(j\omega)) < 1, \quad \forall\omega \qquad (8.105)$$

Proof: $\mu(M) < 1$, $\forall\omega \iff k_m > 1$. If $\mu(M) < 1$ at all frequencies, the smallest perturbation required to make $\det(I - M\Delta) = 0$ is larger than 1, and the system is stable. On the other hand, if $\mu(M) > 1$ at some frequency, there does exist a perturbation with $\bar\sigma(\Delta) \le 1$ such that $\det(I - M\Delta) = 0$ at this frequency, and the system is unstable. $\Box$

One may argue whether Theorem 8.6 is really a theorem, or a restatement of the definition of $\mu$. In either case, we see from (8.105) that it is trivial to check for robust stability provided we can compute $\mu$. For an unnormalized perturbation, condition (8.105) for robust stability may be rewritten as

$$\text{RS} \iff \mu(M(j\omega))\,\bar\sigma(\Delta(j\omega)) < 1, \quad \forall\omega \qquad (8.106)$$

which may be interpreted as a "generalized small gain theorem" that also takes into account the structure of $\Delta$.

Let us consider two examples that illustrate how we use $\mu$ to check for robust stability with structured uncertainty. In the first example, the structure of the uncertainty is important, and an analysis based on the $\mathcal{H}_\infty$ norm leads to the incorrect conclusion that the system is not robustly stable. In the second example the structure makes no difference.

Example 8.9 RS with diagonal input uncertainty. Consider robust stability of the feedback system in Figure 8.7 for the case when the multiplicative input uncertainty is diagonal. A nominal $2 \times 2$ plant and controller (which represents PI-control of a distillation process using the DV-configuration) are given by

$$G(s) = \frac{1}{\tau s + 1}\begin{bmatrix} -87.8 & 1.4 \\ -108.2 & -1.4 \end{bmatrix}, \quad K(s) = \frac{1 + \tau s}{s}\begin{bmatrix} -0.0015 & 0 \\ 0 & -0.075 \end{bmatrix} \qquad (8.107)$$

with $\tau = 75$ min (time in minutes). The controller results in a nominally stable system with acceptable performance.

Figure 8.11: $\mu_{\Delta_I}(T_I)$ and $\bar\sigma(T_I)$ compared with $1/|w_I|$ as functions of frequency. Robust stability for diagonal input uncertainty is guaranteed, since $\mu_{\Delta_I}(T_I) < 1/|w_I|$ at all frequencies. The use of unstructured uncertainty and $\bar\sigma(T_I)$ is conservative.

Assume there is complex multiplicative uncertainty in each manipulated input of magnitude

$$w_I(s) = \frac{s + 0.2}{0.5s + 1} \qquad (8.108)$$

This implies a relative uncertainty of up to 20% in the low frequency range, which increases at high frequencies, reaching a value of 1 (100% uncertainty) at about 1 rad/min. The increase with frequency allows for various neglected dynamics associated with the actuator and valve. The uncertainty may be represented as multiplicative input uncertainty as shown in Figure 8.7, where $\Delta_I$ is a diagonal complex matrix and the weight is $W_I = w_I I$, with $w_I(s)$ a scalar. On rearranging the block diagram to match the $M\Delta$-structure in Figure 8.3, we get $M = w_I K(I + GK)^{-1}G = w_I T_I$ (recall (8.32)), and the RS-condition $\mu(M) < 1$ in Theorem 8.6 yields

$$\text{RS} \iff \mu_{\Delta_I}(T_I) < \frac{1}{|w_I(j\omega)|}, \quad \forall\omega \qquad (8.109)$$

This condition is shown graphically in Figure 8.11, and is seen to be satisfied at all frequencies, so the system is robustly stable. On the other hand, $\bar\sigma(T_I)$ can be seen to be larger than $1/|w_I(j\omega)|$ over a wide frequency range. This shows that the system would be unstable for full-block input uncertainty ($\Delta_I$ full). However, full-block uncertainty is not reasonable for this plant, and we therefore conclude that the use of the singular value is conservative in this case. This demonstrates the need for the structured singular value.

Exercise 8.22 Consider the same example and check for robust stability with full-block multiplicative output uncertainty of the same magnitude. (Solution: RS is satisfied.)

Example 8.10 RS of spinning satellite. Recall Motivating Example No. 1 from Section 3.3.1, with the plant $G(s)$ given in (3.76) and the controller $K = I$. We want to study how sensitive this design is to multiplicative input uncertainty. In this case $T_I = T$, so for RS there is no difference between multiplicative input and multiplicative output uncertainty. In Figure 8.12
we plot $\mu(T)$ as a function of frequency. We find for this case that $\mu(T) = \bar\sigma(T)$ irrespective of the structure of the complex multiplicative perturbation (full-block, diagonal or repeated complex scalar). At low frequencies $\bar\sigma(T)$ is about 10, so to guarantee RS we can at most tolerate 10% (complex) uncertainty at low frequencies. Indeed, from (8.80) we see that instability may occur even with a (non-physical) repeated complex perturbation $\delta_1 = \delta_2$ of magnitude 0.1. This confirms the results from Section 3.3.1, where we found that the real perturbations $\delta_1 = 0.1$ and $\delta_2 = -0.1$ yield instability. Thus, the use of complex rather than real perturbations is not conservative in this case, at least for $\Delta_I$ diagonal. Since $\bar\sigma(T)$ crosses 1 at about 10 rad/s, we can tolerate more than 100% uncertainty at frequencies above 10 rad/s.

However, with repeated scalar perturbations (i.e. the uncertainty in each channel is identical) there is a difference between real and complex perturbations. With repeated real perturbations, available software (e.g. using the command mu with blk = [-2 0] in the $\mu$-toolbox in MATLAB) yields a peak $\mu$-value of 1, so with $\delta_1 = \delta_2$ real we can tolerate a perturbation of magnitude 1 before getting instability. (This is confirmed by considering the characteristic polynomial, from which we see that a real repeated perturbation of magnitude 1 yields instability, whereas complex repeated perturbations of magnitude 0.1 suffice.)

Figure 8.12: $\mu$-plot for robust stability of spinning satellite: $\bar\sigma(T)$, equal to $\mu(T)$ for the complex perturbation structures considered, as a function of frequency.

8.9.1 What do $\mu = 1$ and skewed-$\mu$ mean?

A value of $\mu = 1.1$ for robust stability means that all the uncertainty blocks must be decreased in magnitude by a factor 1.1 in order to guarantee stability.

But if we want to keep some of the uncertainty blocks fixed, how large can one particular source of uncertainty be before we get instability? We define this value as $1/\mu^s$, where $\mu^s$ is called skewed-$\mu$. We may view $\mu^s(M)$ as a generalization of $\mu(M)$.

For example, let $\Delta = \mathrm{diag}\{\Delta_1, \Delta_2\}$, assume we have fixed $\|\Delta_1\| \le 1$, and suppose we want to find how large $\Delta_2$ can be before we get instability. The solution is to select

$$K_m = \begin{bmatrix} I & 0 \\ 0 & k_m I \end{bmatrix} \qquad (8.110)$$

and, at each frequency, to look for the smallest value of $k_m$ which makes

$$\det(I - K_m M\Delta) = 0$$

for some allowed $\Delta$; skewed-$\mu$ is then defined as $\mu^s(M) \triangleq 1/k_m$.

Note that to compute skewed-$\mu$ we must first define which part of the perturbations is to be constant. $\mu^s(M)$ is always further from 1 than $\mu(M)$ is. In practice, with available software to compute $\mu$, we obtain $\mu^s$ by iterating on $k_m$ until $\mu(K_m M) = 1$, where $K_m$ may be as in (8.110). This iteration is straightforward, since $\mu$ increases uniformly with $k_m$.
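The $k_m$-iteration is easy to implement once a $\mu$-computation is available. A minimal Python/NumPy sketch (an illustration, not the μ-toolbox; the matrix entries are assumed numbers for a two-block problem at one frequency, and the two-block $D$-scaled upper bound is used as the exact $\mu$ per the exactness result quoted under (8.87)):

```python
import numpy as np

def sigma_bar(M):
    return np.linalg.svd(M, compute_uv=False)[0]

def mu_two_blocks(M, ds=np.logspace(-4, 4, 2001)):
    """mu for Delta = diag(delta1, delta2): upper bound (8.87) over
    D = diag(d, 1), exact for two blocks (Doyle, 1982)."""
    return min(sigma_bar(np.diag([d, 1.0]) @ M @ np.diag([1/d, 1.0]))
               for d in ds)

def skewed_mu(N, tol=1e-6):
    """Fix the first block, scale the second block by km as in (8.110),
    and bisect on km until mu(Km N) = 1; then mu_s = 1/km."""
    lo, hi = 1e-6, 1e6
    while hi - lo > tol:
        km = 0.5 * (lo + hi)
        if mu_two_blocks(np.diag([1.0, km]) @ N) > 1.0:
            hi = km
        else:
            lo = km
    return 1.0 / km

# Assumed data for a SISO robust-performance problem at one frequency
# (rank-1 consistent: 0.4*0.3 == 0.2*0.6):
N = np.array([[0.4, 0.2],
              [0.6, 0.3]])
print(mu_two_blocks(N))  # ~ 0.7 (here mu = |N11| + |N22| for this rank-1 N)
print(skewed_mu(N))      # ~ 0.5 = 0.3/(1 - 0.4), the value with N11 held fixed
```

Note how the skewed value 0.5 differs from $\mu = 0.7$: since $\mu < 1$ here, $\mu^s$ lies below $\mu$, consistent with $\mu^s$ being further from 1.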

8.10 Robust performance

Robust performance (RP) means that the performance objective is satisfied for all possible plants in the uncertainty set, even the worst-case plant.

8.10.1 Testing RP using $\mu$

To test for RP, we first "pull out" the uncertain perturbations and rearrange the uncertain system into the $N\Delta$-form of Figure 8.13. Our RP-requirement, as given in (8.39), is that the $\mathcal{H}_\infty$ norm of the transfer function $F = F_u(N, \Delta)$ remains less than 1 for all allowed perturbations. This may be tested exactly by computing $\mu(N)$, as stated in the following theorem.

Theorem 8.7 Robust performance. Rearrange the uncertain system into the $N\Delta$-structure of Figure 8.13. Assume nominal stability, such that $N$ is (internally) stable. Then

$$\text{RP} \iff \|F\|_\infty = \|F_u(N, \Delta)\|_\infty < 1, \quad \forall\|\Delta\|_\infty \le 1 \qquad (8.111)$$
$$\phantom{\text{RP}} \iff \mu_{\hat\Delta}(N(j\omega)) < 1, \quad \forall\omega \qquad (8.112)$$

where $\mu$ is computed with respect to the structure

$$\hat\Delta = \begin{bmatrix} \Delta & 0 \\ 0 & \Delta_P \end{bmatrix} \qquad (8.113)$$

and $\Delta_P$ is a full complex perturbation with the same dimensions as $F^T$.

Below we prove the theorem in two alternative ways, but first a few remarks:

1. Condition (8.112) allows us to test whether $\|F\|_\infty < 1$ for all possible $\Delta$'s, without having to test each $\Delta$ individually. Essentially, $\mu$ is defined such that it directly addresses the worst case.

2. The $\mu$-condition for RP involves the enlarged perturbation $\hat\Delta = \mathrm{diag}\{\Delta, \Delta_P\}$. Here $\Delta$, which itself may be a block-diagonal matrix, represents the true uncertainty, whereas $\Delta_P$ is a full complex matrix stemming from the $\mathcal{H}_\infty$ norm performance specification. Note that the block $\Delta_P$ (where capital $P$ denotes Performance) is always a full matrix. It is a fictitious uncertainty block representing the $\mathcal{H}_\infty$ performance specification.
We showed in Chapter 7 that, for a SISO system with an $\mathcal{H}_\infty$ performance objective, the RP-condition is identical to a RS-condition with an additional perturbation block. This also holds for MIMO systems, as illustrated by the step-wise derivation in Figure 8.13. Step B is the key step, and the reader is advised to study this carefully in the treatment below.

3. Since $\hat\Delta$ always has structure, the use of the $\mathcal{H}_\infty$ norm, $\|N\|_\infty < 1$, is generally conservative for robust performance.

4. From (8.114) we see that RS ($\mu_\Delta(N_{11}) < 1$) and NP ($\bar\sigma(N_{22}) < 1$) are automatically satisfied when RP ($\mu(N) < 1$) is satisfied. For example, for the nominal system (with $\Delta = 0$) we get from (8.81) that $\bar\sigma(N_{22}) = \mu_{\Delta_P}(N_{22})$. However, note that NS (stability of $N$) is not guaranteed by (8.112) and must be tested separately (Beware! It is a common mistake to get a design with apparently great RP, but which is not nominally stable and thus is actually robustly unstable).

5. For a generalization of Theorem 8.7, see the main loop theorem of Packard and Doyle (1993); see also Zhou et al. (1996).

Block diagram proof of Theorem 8.7: In the following, let $F = F_u(N, \Delta)$ denote the perturbed closed-loop system for which we want to test RP. The theorem is proved by the equivalence between the various block diagrams in Figure 8.13.

Step A. This is simply the definition of RP: $\|F\|_\infty < 1$ for all $\bar\sigma(\Delta) \le 1$.

Step B (the key step). Recall first from Theorem 8.4 that stability of the $M\Delta$-structure in Figure 8.3, where $\Delta$ is a full complex matrix, is equivalent to $\|M\|_\infty < 1$. Applying this in the opposite direction, the RP-condition $\|F\|_\infty < 1$ is equivalent to RS of the $F\Delta_P$-structure, where $\Delta_P$ is a full complex matrix.

Step C. Collect $\Delta$ and $\Delta_P$ into the block-diagonal matrix $\hat\Delta$. The original RP-problem is then equivalent to RS of the $N\hat\Delta$-structure.

Step D. From Theorem 8.6, this RS-condition is equivalent to $\mu_{\hat\Delta}(N) < 1$, $\forall\omega$. $\Box$

Algebraic proof of Theorem 8.7: The definition of $\mu$ gives at each frequency

$$\underbrace{\mu_{\hat\Delta}(N)}_{\text{RP}} \ge \max\{\underbrace{\mu_\Delta(N_{11})}_{\text{RS}},\ \underbrace{\mu_{\Delta_P}(N_{22})}_{\text{NP}}\} \qquad (8.114)$$

Moreover,

$$\mu_{\hat\Delta}(N(j\omega)) < 1 \iff \det(I - N(j\omega)\hat\Delta(j\omega)) \ne 0, \quad \forall\hat\Delta,\ \bar\sigma(\hat\Delta) \le 1$$

By Schur's formula in (A.14) we have

$$\det(I - N\hat\Delta) = \det\begin{bmatrix} I - N_{11}\Delta & -N_{12}\Delta_P \\ -N_{21}\Delta & I - N_{22}\Delta_P \end{bmatrix}$$
$$= \det(I - N_{11}\Delta)\cdot\det\big(I - N_{22}\Delta_P - N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12}\Delta_P\big)$$
$$= \det(I - N_{11}\Delta)\cdot\det\big(I - (N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12})\Delta_P\big)$$
$$= \det(I - N_{11}\Delta)\cdot\det(I - F_u(N, \Delta)\Delta_P)$$

Figure 8.13: RP as a special case of structured RS. The figure shows the chain of equivalent block diagrams used in the block diagram proof:

STEP A: RP $\iff \|F\|_\infty < 1$, $\forall\bar\sigma(\Delta) \le 1$ (definition of RP);
STEP B: $\iff$ the $F\Delta_P$-loop is RS, $\forall\bar\sigma(\Delta) \le 1$, $\|\Delta_P\|_\infty \le 1$;
STEP C: $\iff$ the $N$-loop with $\hat\Delta = \mathrm{diag}\{\Delta, \Delta_P\}$, $\|\hat\Delta\|_\infty \le 1$, is RS;
STEP D: $\iff \mu_{\hat\Delta}(N(j\omega)) < 1$, $\forall\omega$ (RS theorem), where $F = F_u(N, \Delta)$.

Since this expression should not be zero, both terms must be non-zero at each frequency, i.e.

$$\det(I - N_{11}\Delta) \ne 0,\ \forall\Delta \iff \mu_\Delta(N_{11}) < 1 \quad (\text{RS})$$

and, for all allowed $\Delta$,

$$\det(I - F\Delta_P) \ne 0,\ \forall\Delta_P \iff \mu_{\Delta_P}(F) = \bar\sigma(F) < 1 \quad (\text{RP definition})$$

Theorem 8.7 is proved by reading the above lines in the opposite direction. Note that it is not necessary to test for RS separately, as it follows as a special case of the RP requirement. $\Box$

8.10.2 Summary of $\mu$-conditions for NP, RS and RP

Rearrange the uncertain system into the $N\Delta$-structure of Figure 8.13, where the block-diagonal perturbations satisfy $\|\Delta\|_\infty \le 1$. Introduce

$$F = F_u(N, \Delta) = N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12}$$

and let the performance requirement (RP) be $\|F\|_\infty \le 1$ for all allowable perturbations. Then we have:

$$\text{NS} \iff N\ (\text{internally})\ \text{stable} \qquad (8.115)$$
$$\text{NP} \iff \bar\sigma(N_{22}) = \mu_{\Delta_P}(N_{22}) < 1,\ \forall\omega,\ \text{and NS} \qquad (8.116)$$
$$\text{RS} \iff \mu_\Delta(N_{11}) < 1,\ \forall\omega,\ \text{and NS} \qquad (8.117)$$
$$\text{RP} \iff \mu_{\hat\Delta}(N) < 1,\ \forall\omega,\quad \hat\Delta = \begin{bmatrix}\Delta & 0 \\ 0 & \Delta_P\end{bmatrix},\ \text{and NS} \qquad (8.118)$$

Here $\Delta$ is a block-diagonal matrix (its detailed structure depends on the uncertainty we are representing), whereas $\Delta_P$ is always a full complex matrix. Note that nominal stability (NS) must be tested separately in all cases.

Although the structured singular value is not a norm, it is sometimes convenient to refer to the peak $\mu$-value as the "$\Delta$-norm". For a stable rational transfer matrix $H(s)$, we therefore define

$$\|H(s)\|_\Delta \triangleq \max_\omega \mu_\Delta(H(j\omega))$$

For a nominally stable system we then have

$$\text{NP} \iff \|N_{22}\|_\infty < 1, \qquad \text{RS} \iff \|N_{11}\|_\Delta < 1, \qquad \text{RP} \iff \|N\|_{\hat\Delta} < 1$$

8.10.3 Worst-case performance and skewed-$\mu$

Assume we have a system for which the peak $\mu$-value for RP is 1.1. What does this mean? The definition of $\mu$ tells us that our RP-requirement would be satisfied exactly if we reduced both the performance requirement and the uncertainty by a factor of 1.1. So $\mu$ does not directly give us the worst-case performance, $\max_{\bar\sigma(\Delta)\le 1}\bar\sigma(F(\Delta))$, as one might have expected.

To find the worst-case weighted performance for a given uncertainty, one needs to keep the magnitude of the perturbations fixed ($\bar\sigma(\Delta) \le 1$); that is, we must compute the skewed-$\mu$ of $N$, as discussed in Section 8.9.1. We have, in this case,

$$\max_{\bar\sigma(\Delta)\le 1}\bar\sigma\big(F_u(N, \Delta)(j\omega)\big) = \mu^s(N(j\omega)) \qquad (8.120)$$

To find $\mu^s$ numerically, we scale the performance part of $N$ by a factor $k_m = 1/\mu^s$ and iterate on $k_m$ until $\mu = 1$. That is, at each frequency skewed-$\mu$ is the value $\mu^s(N)$ which solves

$$\mu(K_m N) = 1, \quad K_m = \begin{bmatrix} I & 0 \\ 0 & \frac{1}{\mu^s}I \end{bmatrix} \qquad (8.119)$$

Note that $\mu$ underestimates how bad or good the actual worst-case performance is. This follows because $\mu^s(N)$ is always further from 1 than $\mu(N)$ is.

The corresponding worst-case perturbation may be obtained as follows: first compute the worst-case performance at each frequency using skewed-$\mu$. At the frequency where $\mu^s(N)$ has its peak, we may extract the corresponding worst-case perturbation generated by the software, and then find a stable, all-pass transfer function that matches this. In the MATLAB $\mu$-toolbox, the single command wcperf combines these three steps:

    [delwc,mulow,muup] = wcperf(N,blk,1);

8.11 Application: RP with input uncertainty

We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity, as illustrated in Figure 8.14. The performance requirement is then

$$\text{RP} \iff \|w_P(I + G_p K)^{-1}\|_\infty < 1, \quad \forall G_p \qquad (8.121)$$

where the set of plants is given by

$$G_p = G(I + w_I\Delta_I), \quad \|\Delta_I\|_\infty \le 1 \qquad (8.122)$$

Here $w_P(s)$ and $w_I(s)$ are scalar weights, so the performance objective is the same for all the outputs, and the uncertainty is the same for all inputs.
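In the SISO case, (8.120) can be verified by a direct search over the perturbation: the worst case of $|F_u(N, \delta)|$ over $|\delta| \le 1$ lies on the boundary $|\delta| = 1$ (maximum modulus principle) and equals the skewed-$\mu$ value $|w_P S|/(1 - |w_I T|)$. A hedged NumPy sketch with assumed numbers:

```python
import numpy as np

# Assumed SISO data at one frequency: N = [wI*T, wI*K*S; wP*S*G, wP*S],
# rank-1 consistent (0.4*0.3 == 0.2*0.6), since T = G*K*S for a SISO loop.
n11, n12, n21, n22 = 0.4, 0.2, 0.6, 0.3

# Worst-case performance (8.120): max over |delta| = 1 of |Fu(N, delta)|.
deltas = np.exp(1j * np.linspace(0, 2*np.pi, 3600, endpoint=False))
Fu = n22 + n21 * deltas * n12 / (1 - n11 * deltas)
worst = np.abs(Fu).max()
print(worst)   # ~ 0.3/(1 - 0.4) = 0.5, the skewed-mu value at this frequency
```

Note that the $\mu$-value for this $N$ is $0.4 + 0.3 = 0.7$, which, as discussed above, does not equal the worst-case performance 0.5.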

It should be noted, however, that although the problem set-up in (8.121) and (8.122) is fine for analyzing a given controller, it is less suitable for controller synthesis. For example, the problem formulation does not directly penalize the outputs from the controller.

This problem is excellent for illustrating the robustness analysis of uncertain multivariable systems.

Figure 8.14: Robust performance of system with input uncertainty. The upper diagram shows the perturbed feedback loop with controller $K$, uncertainty weight $W_I$, perturbation $\Delta_I$, plant $G$, performance weight $W_P$, disturbance $w$ and performance output $z$; the lower diagram shows the equivalent $N\Delta_I$-structure.

In this section, we will:

1. Find the interconnection matrix $N$ for this problem.
2. Consider the SISO case, so that useful connections can be made with results from the previous chapter.
3. Consider a multivariable distillation process for which we have already seen from simulations in Chapter 3 that a decoupling controller is sensitive to small errors in the input gains. We will find that $\mu$ for RP is indeed much larger than 1 for this decoupling controller.
4. Find some simple bounds on $\mu$ for this problem and discuss the role of the condition number.
5. Make comparisons with the case where the uncertainty is located at the output.

We will mostly assume that $\Delta_I$ is diagonal, but we will also consider the case when $\Delta_I$ is a full matrix.

8.11.1 Interconnection matrix

On rearranging the system into the $N\Delta$-structure, as shown in Figure 8.14, we get, as in (8.32),

$$N = \begin{bmatrix} w_I T_I & w_I KS \\ w_P SG & w_P S \end{bmatrix} \qquad (8.123)$$

where $T_I = KG(I + KG)^{-1}$ and $S = (I + GK)^{-1}$, and for simplicity we have omitted the negative signs in the 1,1 and 1,2 blocks of $N$, since $\mu(N) = \mu(UN)$ with unitary $U = \mathrm{diag}\{-I, I\}$, see (8.83).

For a given controller $K$ we can now test for NS, NP, RS and RP using (8.115)-(8.118). Here $\Delta = \Delta_I$ may be a full or diagonal matrix (depending on the physical situation).

8.11.2 RP with input uncertainty for SISO system

For a SISO system, conditions (8.115)-(8.118) with $N$ as in (8.123) become

$$\text{NS} \iff S,\ SG,\ KS\ \text{and}\ T_I\ \text{are stable} \qquad (8.124)$$
$$\text{NP} \iff |w_P S| < 1, \quad \forall\omega \qquad (8.125)$$
$$\text{RS} \iff |w_I T_I| < 1, \quad \forall\omega \qquad (8.126)$$
$$\text{RP} \iff |w_P S| + |w_I T_I| < 1, \quad \forall\omega \qquad (8.127)$$

where we have used $T_I = T$ for SISO systems. The RP-condition (8.127) is identical to (7.61), which was derived in Chapter 7 using a simple graphical argument based on the Nyquist plot of $L = GK$.

Robust performance optimization, in terms of weighted sensitivity with multiplicative uncertainty for a SISO system, thus involves minimizing the peak value of $\mu(N) = |w_I T| + |w_P S|$. This may be solved using $DK$-iteration, as outlined later in Section 8.12. A closely related problem, which is easier to solve both mathematically and numerically, is to minimize the peak value ($\mathcal{H}_\infty$ norm) of the mixed sensitivity matrix

$$N_{\rm mix} = \begin{bmatrix} w_P S \\ w_I T \end{bmatrix} \qquad (8.128)$$

From (A.95) we get that at each frequency $\mu(N)$ differs from

$$\bar\sigma(N_{\rm mix}) = \sqrt{|w_P S|^2 + |w_I T|^2} \qquad (8.129)$$

by at most a factor $\sqrt{2}$. Thus, minimizing $\|N_{\rm mix}\|_\infty$ is close to optimizing robust performance in terms of $\mu(N)$.
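The SISO conditions (8.125)-(8.127) are easy to check with a frequency sweep. A small Python/NumPy sketch for an assumed first-order plant and integral controller (illustrative numbers only, not the distillation example treated below):

```python
import numpy as np

w = np.logspace(-3, 2, 500)        # frequency grid [rad/s]
s = 1j * w
G = 3.0 / (2.0*s + 1.0)            # assumed plant
K = 0.2 / s                        # assumed integral controller
L = G * K
S = 1.0 / (1.0 + L)
T = L / (1.0 + L)
wI = (s + 0.2) / (0.5*s + 1.0)     # uncertainty weight, cf. (8.132)
wP = (s/2 + 0.05) / s              # performance weight, cf. (8.132)

NP = np.abs(wP * S)                # (8.125)
RS = np.abs(wI * T)                # (8.126)
RP = NP + RS                       # mu(N) for the SISO problem, (8.127)
print(NP.max() < 1, RS.max() < 1, RP.max() < 1)
# For these numbers NP and RS both hold, yet RP fails: the sum |wP*S| + |wI*T|
# peaks above 1 near crossover, which is exactly what (8.127) detects.
```

This illustrates the central point of (8.127): satisfying NP and RS individually does not guarantee RP.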

8.11.3 Robust performance for $2 \times 2$ distillation process

Consider again the distillation process example from Chapter 3 (Motivating Example No. 2) and the corresponding inverse-based controller:

$$G(s) = \frac{1}{75s + 1}\begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix}, \quad K(s) = \frac{0.7}{s}\,G(s)^{-1} \qquad (8.130)$$

The controller provides a nominally decoupled system with

$$L = l\,I, \quad S = \varepsilon\,I \quad \text{and} \quad T = t\,I \qquad (8.131)$$

where

$$l = \frac{0.7}{s}, \quad \varepsilon = \frac{1}{1 + l} = \frac{s}{s + 0.7}, \quad t = 1 - \varepsilon = \frac{0.7}{s + 0.7} = \frac{1}{1.43s + 1}$$

We have used $\varepsilon$ for the nominal sensitivity in each loop, to distinguish it from the Laplace variable $s$.

Recall from Figure 3.12 that this controller gave an excellent nominal response, but that the response with 20% gain uncertainty in each input channel was extremely poor. We will now confirm these findings by a $\mu$-analysis. To this effect we use the following weights for uncertainty and performance:

$$w_I(s) = \frac{s + 0.2}{0.5s + 1}, \quad w_P(s) = \frac{s/2 + 0.05}{s} \qquad (8.132)$$

With reference to (7.26), we see that the weight $w_I(s)$ may approximately represent a 20% gain error and a neglected time delay of 0.9 min; $|w_I(j\omega)|$ levels off at 2 (200% uncertainty) at high frequencies. With reference to (7.72), we see that the performance weight $w_P(s)$ specifies integral action, a closed-loop bandwidth of about 0.05 [rad/min] (which is relatively slow in the presence of an allowed time delay of 0.9 min), and a maximum peak for $\bar\sigma(S)$ of $M_s = 2$.

We now test for NS, NP, RS and RP. Note that $\Delta_I$ is a diagonal matrix in this example.

NS. With $G$ and $K$ as given in (8.130), we find that $S$, $SG$, $KS$ and $T_I$ are stable, so the system is nominally stable.

NP. With the decoupling controller we have

$$\bar\sigma(N_{22}) = \bar\sigma(w_P S) = \left|\frac{s/2 + 0.05}{s + 0.7}\right|$$

and we see from the dash-dot line in Figure 8.15 that the NP-condition is easily satisfied: $\bar\sigma(w_P S)$ is small at low frequencies ($0.05/0.7 \approx 0.07$ at $\omega = 0$) and approaches $1/2 = 0.5$ at high frequencies.
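These interpretations of the weights (8.132) are easily confirmed numerically; a small NumPy check (illustration only):

```python
import numpy as np

w = np.logspace(-3, 2, 2000)         # frequency grid [rad/min]
s = 1j * w
wI = (s + 0.2) / (0.5*s + 1.0)       # uncertainty weight (8.132)
wP = (s/2 + 0.05) / s                # performance weight (8.132)

aI, aP = np.abs(wI), np.abs(wP)
wc_I = w[np.argmin(np.abs(aI - 1))]  # frequency where uncertainty reaches 100%
wc_P = w[np.argmin(np.abs(aP - 1))]  # performance-weight crossover (bandwidth)
print(aI[0], wc_I, aI[-1])  # ~0.2 (20% at low freq), ~1.1 rad/min, ~2 (200%)
print(wc_P, aP[-1])         # ~0.058 rad/min (bandwidth of about 0.05), 1/Ms = 0.5
```

The printed values reproduce the statements in the text: 20% uncertainty at low frequencies growing through 100% around 1 rad/min towards 200%, and a performance weight demanding integral action, a bandwidth of roughly 0.05 rad/min, and $M_s = 2$.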

The peak of the actual worstcase weighted sensitivity with uncertainty blocks of magnitude 1. for our particular plant and controller in (8..15 that RS is easily satisfied.93. the weighted sensitivity will be about 6 times larger than what we require. The MATLAB -toolbox commands to generate Figure 8. which may be computed using skewed. This is confirmed by the -curve for RP in Figure 8. ¡ to 6.107) which is ill-conditioned with ­ ´ µ at all frequencies (but note that the RGA-elements of are all about ¼ ). With an .7 min before we get instability. The peak value is close (8.¿¿ 6 ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ RP 4 2 RS 0 −3 10 10 −2 NP 10 −1 10 0 10 1 10 2 Frequency Figure 8. This means before the that we may increase the uncertainty by a factor of ½ ¼ ¿ ½ worst-case uncertainty yields instability.15: -plots for distillation process with decoupling controller RS Since in this case ÛÁ ÌÁ ÛÁ Ì is a scalar times the identity matrix. independent of the structure of ¡ Á . as is confirmed in the following exercise. that they are the same. Exercise 8. meaning that even with 6 times less uncertainty.130) it appears from numerical calculations and by use of (8. we can tolerate about 38% gain uncertainty and a time delay of about 1. Of course. we have.12 that the robust performance is poor.135) below.15 which was computed numerically using ¡ ´Æ µ with Æ as in ¡Á ¡È and ¡Á ƽ ƾ . that ¡Á ´ÛÁ ÌÁ µ ÛÁ Ø ¬ ¬ ¬¼ ¬ ¬ ×·½ ¬ ¾ ´¼ × · ½µ´½ ¿× · ½µ ¬ ¬ and we see from the dashed line in Figure 8. with unstructured uncertainty (¡ Á full) is larger than with structured uncertainty (¡ Á diagonal).123). that is. The peak value of ¡Á ´Å µ over frequency is Å ¡Á ¼ ¿.23 Consider the plant ¼ ´×µ in (8. However. In general.1.15 are given in Table 8. is for comparison 44. this is not generally true. RP Although our system has good robustness margins (RS easily satisfied) and excellent nominal performance we know from the simulations in Figure 3.

input to Wi = ’[u]’. Wp=daug(wp. Wp. % dynk=nd2sys([75 1].2 -109.93 for % delta=1). w(2) . musup % % mu for RS.5000).1:2).61). G=mmult(Dyn. wi=nd2sys([1 0. % (ans = 0.1).Kinv). 2 2].7726).6]. % % mu for RP.’:’.dynk).dyn).rowp.0.[1 1.rowg] = mu(Nf. u(2)]’. [mubnds.1). input to Wp = ’[G+w]’. muRS = sel(mubnds.’:’.rowp.rowd. % % Generalized plant P. pkvnorm(muRS) % % mu for NP (= max.[75 1]). pkvnorm(muRP) % % Worst-case weighted sensitivity % [delworst.musup] = wcperf(Nf. 1 1].0. -G-w]’. Dyn=daug(dyn. % Nnp=sel(Nf. omega = logspace(-3.blk. % (musup = 44.3.minv(G0)). muRP = sel(mubnds.7).’:’.[1 1. inputvar = ’[ydel(2). pkvnorm(muNP) vplot(’liv.1).rowd.blk.’c’).rowg]=mu(Nnp.wi).1). sysoutname = ’P’. 108.sens.5 1]).muNP).rowp.[0. % blk = [1 1. % wp=nd2sys([10 1]. .muslow.e-5].8 -86. % % Inverse-based control.4. % systemnames = ’G Wp Wi’. % Nrs=sel(Nf.5).1: MATLAB program for -analysis (generates Figure 8. Kinv=mmult(Dynk.sens. [mubnds.2]. outputvar = ’[Wi. Wi=daug(wi.[2 2].muRP.muRS.1:2.rowd.m’.rowg]=mu(Nrs.5242).ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿¿ Table 8.e-5]. Dynk=daug(dynk. sysic. % N = starp(P. input to G = ’[u+ydel]’. muNP = sel(mubnds.G0).[10 1. 1 1. cleanupsysic = ’yes’. % (ans = 0. dyn = nd2sys(1. singular value of Nnp). % (ans = 5.’c’).15) % Uses the Mu toolbox G0 = [87. [mubnds.3:4).3:4. Nf = frsp(N.sens. % % Weights.wp).omega).’c’).

134) Proof of (8. i.123).e.¿¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¼ ´×µ ½ . .133) where is either the condition number of the plant or the controller (the smallest one should be used): ­ ´ µ ÓÖ ­ ´Ã µ (8.46) ÑÒ Æ½½  ½ ƾ½ ¢ ƽ¾ ƾ¾ ÛÁ ÌÁ  ½ ÛÈ Ë ÛÁ ÃË ÛÈ Ë ´ÛÁ Ì Á Á  ½ £µ · ´ÛÈ Ë ¢  ½ Á £µ  ½ µ · ´ÛÈ Ë µ ´  ½ Á µ ´ÛÁ ÌÁ µ ´Á ßÞ ßÞ ½· ´  ½ µ ½·  ½ ´ µ Ô Õ and selecting ´ µ ´  ½ µ ´Æ µ Ô ­ ´ µ gives ´ÛÁ ÌÁ µ · ´ÛÈ Ë µ ´½ · ­ ´ µµ A similar derivation may be performed using Ë Ã ½ ÌÁ to derive the same expression ¾ but with ­ ´Ã µ instead of ­ ´ µ. This is confirmed by (8. Let Æ be given as in (8.e.133) we see that with a “round” controller. The value of is much smaller in the former case.11.132). 8. there is less sensitivity to uncertainty (but it may be difficult to achieve NP in this case).135) below. On the other hand.87) yields ´Æ µ where from (A. compute for RP with both diagonal and fullinverse-based controller à ´×µ × block input uncertainty using the weights in (8. Then Þ ÊÈ ß ¡ ´Æ µ Þ ÊË ß Þ ÆÈ ß Ô ´ÛÁ ÌÁ µ · ´ÛÈ Ë µ ´½ · µ (8. We consider unstructured multiplicative input uncertainty (i. (8. ¡ Á is a full matrix) and performance measured in terms of weighted sensitivity. From (8. we would expect for RP to be large if we used an inverse-based controller for a plant with a large condition number.4 Robust performance and the condition number In this subsection we consider the relationship between for RP and the condition number of the plant or of the controller. one with ­ ´Ã µ ½.133): Since ¡Á is a full matrix. since then ­ ´Ã µ ­ ´ µ is large. Any controller.

Example 8.11 For the distillation process studied above, we have γ(G) = 141.7 at all frequencies. At frequency 1 rad/min, |wP ε| = 0.41 and |wI t| = 0.52, and the upper bound given by (8.133) then becomes (0.52 + 0.41)(1 + √141.7) ≈ 12. This is higher than the actual value of μ(N), which illustrates that the bound in (8.133) is generally not tight.

With an inverse-based controller (resulting in the nominal decoupled system (8.131)) and unstructured input uncertainty, it is possible to derive an analytic expression for μ for RP with N as in (8.132):

    μ_Δ̂(N) = √( |wP ε|² + |wI t|² + |wP ε| · |wI t| ( γ(G) + 1/γ(G) ) )        (8.135)

where ε is the nominal sensitivity and γ(G) is the condition number of the plant. We see that for plants with a large condition number, μ for RP increases approximately in proportion to √γ(G). For the distillation example, since γ(G) = 141.7 at all frequencies, (8.135) yields at 1 rad/min μ_Δ̂(N) = √(0.41² + 0.52² + 30.5) ≈ 5.6, which agrees with the plot in Figure 8.15.

Proof of (8.135): The proof originates from Stein and Doyle (1991). It is based on the upper bound (8.87) with scaling D = diag{d I, I} and on the SVD G = U Σ V^H at each frequency (σ_i denotes the i'th singular value of G); since σ̄ is unitarily invariant, the bound can be evaluated in terms of the singular values of G, and the resulting expression is then minimized explicitly with respect to d. For more details see Zhou et al. (1996) pp. 293-295. ∎

Worst-case performance (any controller)

We next derive relationships between worst-case performance and the condition number. Suppose that at each frequency the worst-case sensitivity is σ̄(S'). We then have that the worst-case weighted sensitivity is equal to skewed-μ:

    max_{Sp} σ̄(wP Sp) = σ̄(wP S') = μˢ(N)
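The two numbers quoted in Example 8.11 can be reproduced directly. The following sketch (NumPy, assuming the weights and decoupled nominal design of this section) computes the condition number of the steady-state gain matrix and evaluates the closed-form expression (8.135) at ω = 1 rad/min:

```python
import numpy as np

G0 = np.array([[87.8, -86.4], [108.2, -109.6]])
sv = np.linalg.svd(G0, compute_uv=False)
gamma = sv[0] / sv[-1]          # condition number, approx. 141.7
                                # (unchanged by the scalar dynamics 1/(75s+1))
s = 1j * 1.0                    # w = 1 rad/min
wIt = abs((s + 0.2) / (0.5 * s + 1) * 0.7 / (s + 0.7))   # |wI t|, approx. 0.52
wPe = abs((0.5 * s + 0.05) / s * s / (s + 0.7))          # |wP eps|, approx. 0.41

# Closed-form mu for RP as in (8.135)
mu = np.sqrt(wPe**2 + wIt**2 + wPe * wIt * (gamma + 1 / gamma))
print(gamma, mu)                # approx. 141.7 and 5.6
```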

Now, recall that in Section 6.10.4 we derived a number of upper bounds on σ̄(S'), and referring back to (6.72) we find

    σ̄(S') ≤ k σ̄(S) / (1 − σ̄(wI TI)),    σ̄(wP S') ≤ k σ̄(wP S) / (1 − σ̄(wI TI))        (8.136)

and hence the worst-case weighted sensitivity is bounded by

    μˢ(N) = max_{Sp} σ̄(wP Sp) ≤ k σ̄(wP S) / (1 − σ̄(wI TI))                            (8.137)

where as before k denotes either the condition number of the plant or the controller (preferably the smallest). A similar bound involving γ(K) applies. Equation (8.137) holds for any controller and for any structure of the uncertainty (including Δ_I unstructured).

Example 8.13 For the distillation process the upper bound given by (8.137) at 1 rad/min is 141.7 · 0.41/(1 − 0.52) = 121. This is considerably higher than the actual value of max_{Sp} σ̄(wP Sp) found earlier, which demonstrates that these bounds are not generally tight.

Remark 1 In Section 6.10.4 we derived tighter upper bounds for cases when Δ_I is restricted to be diagonal and when we have a decoupling controller. In (6.76)-(6.78) we also derived a lower bound in terms of the RGA.

Remark 2 Since (8.137) and expressions similar to (6.76)-(6.77) hold for any controller, we may derive the following sufficient (conservative) tests for RP (μ(N) < 1) with unstructured input uncertainty (any controller):

    RP ⇐ [ σ̄(wP S) + σ̄(wI TI) ] (1 + √k) < 1
    RP ⇐ k σ̄(wP S) + σ̄(wI TI) < 1
    RP ⇐ k σ̄(wP S) + σ̄(wI T) < 1

where k denotes the condition number of the plant or the controller (the smallest being the most useful).

8.11.5 Comparison with output uncertainty

Consider output multiplicative uncertainty of magnitude wO(jω). In this case we get the interconnection matrix

    N = [ wO T   wO T ]
        [ wP S   wP S ]                                                                 (8.138)

and for any structure of the uncertainty μ(N) is bounded as follows:

     RS, NP                              RP
    σ̄ [ wO T ]        ≤    μ(N)    ≤    √2 · σ̄ [ wO T ]
       [ wP S ]                                [ wP S ]                                 (8.139)
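The factor √2 in (8.139) comes from the general stacking inequality max(σ̄(A), σ̄(B)) ≤ σ̄([A; B]) ≤ √2 · max(σ̄(A), σ̄(B)). A quick numerical check (Python, with random complex blocks standing in for wO T and wP S):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda M: np.linalg.svd(M, compute_uv=False)[0]

for _ in range(100):
    A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # e.g. wO*T
    B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # e.g. wP*S
    stacked = sig(np.vstack([A, B]))
    hi = max(sig(A), sig(B))
    # lower and upper stacking bounds
    assert hi <= stacked + 1e-12 <= np.sqrt(2) * hi + 1e-9
```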

This follows since the uncertainty and performance blocks both enter at the output, and from (A.45) the difference between bounding the combined perturbation Δ̂ = diag{Δ_O, Δ_P} and the individual perturbations Δ_O and Δ_P is at most a factor of √2. Thus, in this case we "automatically" achieve RP (at least within √2) if we have satisfied separately the subobjectives of NP and RS. This confirms our findings from Section 6.10.4 that multiplicative output uncertainty poses no particular problem for performance. It also implies that for practical purposes we may optimize robust performance with output uncertainty by minimizing the H∞ norm of the stacked matrix [wO T; wP S].

Exercise 8.24 Consider the RP-problem with weighted sensitivity and multiplicative output uncertainty. Derive the interconnection matrix N for 1) the conventional case with Δ̂ = diag{Δ_O, Δ_P}, and 2) the stacked case. Use this to prove (8.139).

8.12 μ-synthesis and DK-iteration

The structured singular value μ is a very powerful tool for the analysis of robust performance with a given controller. However, one may also seek to find the controller that minimizes a given μ-condition: this is the μ-synthesis problem.

8.12.1 DK-iteration

At present there is no direct method to synthesize a μ-optimal controller. However, for complex perturbations a method known as DK-iteration is available. It combines H∞-synthesis and μ-analysis, and often yields good results. The starting point is the upper bound (8.87) on μ in terms of the scaled singular value,

    μ(N) ≤ min_D σ̄( D N D⁻¹ )

The idea is to find the controller that minimizes the peak value over frequency of this upper bound, namely

    min_K ( min_D ‖ D N(K) D⁻¹ ‖∞ )                                    (8.140)

by alternating between minimizing ‖D N(K) D⁻¹‖∞ with respect to either K or D (while holding the other fixed). To start the iterations, one selects an initial stable rational transfer matrix D(s) with appropriate structure. The identity matrix is often a good initial choice for D provided the system has been reasonably scaled for performance. The DK-iteration then proceeds as follows:

1. K-step. Synthesize an H∞ controller for the scaled problem, min_K ‖D N(K) D⁻¹‖∞, with fixed D(s).
2. D-step. Find D(jω) to minimize σ̄(D N D⁻¹(jω)) at each frequency with fixed N.
3. Fit the magnitude of each element of D(jω) to a stable and minimum-phase transfer function D(s) and go to Step 1.

The iteration may continue until satisfactory performance is achieved, ‖D N D⁻¹‖∞ < 1, or until the H∞ norm no longer decreases. One fundamental problem with this approach is that although each of the minimization steps (K-step and D-step) is convex, joint convexity is not guaranteed; the iterations may therefore converge to a local optimum. However, practical experience suggests that the method works well in most cases.

The order of the controller resulting from each iteration is equal to the number of states in the plant G(s) plus the number of states in the weights plus twice the number of states in D(s). The true μ-optimal controller is in most cases not rational, and will thus be of infinite order; because we use a finite-order D(s) to approximate the D-scales, we get a controller of finite (but often high) order. One reason for preferring a low-order fit in Step 3 is that this reduces the order of the H∞ problem, which usually improves the numerical properties of the H∞ optimization (Step 1) and also yields a controller of lower order. The true μ-optimal controller would have a flat μ-curve (as a function of frequency), except at infinite frequency, where μ generally has to approach a fixed value independent of the controller (because L(j∞) = 0 for real systems); with a finite-order controller we will generally not be able (and it may not be desirable) to extend the flatness to infinite frequencies.

In some cases the iterations converge slowly, and it may be difficult to judge whether the iterations are converging or not. One may even experience the μ-value increasing. This may be caused by numerical problems or inaccuracies (e.g. the upper-bound μ-value in Step 2 being higher than the H∞ norm obtained in Step 1), or by a poor fit of the D-scales. In any case, if the iterations converge slowly, one may consider going back to the initial problem and rescaling the inputs and outputs. The DK-iteration depends heavily on optimal solutions for Steps 1 and 2, and also on good fits in Step 3, preferably by a transfer function of low order.

In the K-step (Step 1), where the H∞ controller is synthesized, it is often desirable to use a slightly sub-optimal controller (e.g. with an H∞ norm, γ, which is 5% higher than the optimal value, γmin). This yields a blend of H∞ and H₂ optimality, with a controller which usually has a steeper high-frequency roll-off than the H∞ optimal controller.
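The alternation in Steps 1-2 can be illustrated on a deliberately stripped-down problem. In the sketch below (a toy: the "closed loop" is a hypothetical static matrix N(k) parameterized by one scalar gain, the K-step and D-step are grid searches, and no transfer-function fitting is needed), each step minimizes the bound with the other variable held fixed, so the achieved upper bound is non-increasing, exactly as in the DK-iteration:

```python
import numpy as np

sig = lambda M: np.linalg.svd(M, compute_uv=False)[0]

def N(k):
    # hypothetical static "closed-loop" matrix parameterized by a scalar gain k
    return np.array([[1.0 / (1 + k), 2.0],
                     [0.05 * k,      1.0 / (1 + k)]])

def bound(k, d):
    D = np.diag([d, 1.0])                 # scaling with the block structure
    return sig(D @ N(k) @ np.linalg.inv(D))

ks = np.linspace(0.0, 10.0, 201)          # "K-step" search grid
ds = np.logspace(-2, 2, 201)              # "D-step" search grid
k, d = 0.0, 1.0                           # initial controller and scaling (D = I)
history = [bound(k, d)]
for _ in range(5):
    k = ks[np.argmin([bound(kk, d) for kk in ks])]   # K-step: fix D, minimize over K
    d = ds[np.argmin([bound(k, dd) for dd in ds])]   # D-step: fix K, minimize over D
    history.append(bound(k, d))

print(history[0], history[-1])            # bound decreases monotonically
```

As in the real iteration, each step is individually a minimization, but nothing guarantees the pair (k, d) reaches the joint optimum.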

8.12.2 Adjusting the performance weight

Recall that if μ at a given frequency is different from 1, then the interpretation is that at this frequency we can tolerate 1/μ-times more uncertainty and still satisfy our performance objective with a margin of 1/μ. In μ-synthesis, the designer will usually adjust some parameter(s) in the performance or uncertainty weights until the peak μ-value is close to 1. Sometimes the uncertainty is fixed, and we effectively optimize worst-case performance by adjusting a parameter in the performance weight. For example, consider the performance weight

    wP(s) = ( s/M + ω*_B ) / ( s + ω*_B A )                            (8.141)

where we want to keep M constant and find the highest achievable bandwidth frequency ω*_B. The optimization problem becomes

    max |ω*_B|   such that   μ(N) < 1                                  (8.142)

where N, the interconnection matrix for the RP-problem, depends on ω*_B. This may be implemented as an outer loop around the DK-iteration.

8.12.3 Fixed structure controller

Sometimes it is desirable to find a low-order controller with a given structure. This may be achieved by numerical optimization where μ is minimized with respect to the controller parameters. The problem here is that the optimization is not generally convex in the parameters. Sometimes it helps to switch the optimization between minimizing the peak of μ (i.e. ‖μ‖∞) and minimizing the integral square deviation of μ away from k (i.e. ‖μ(jω) − k‖₂), where k is usually close to 1. The latter is an attempt to "flatten out" μ.

8.12.4 Example: μ-synthesis with DK-iteration

We will consider again the case of multiplicative input uncertainty and performance defined in terms of weighted sensitivity, as discussed in detail in Section 8.11. We noted there that this set-up is fine for analysis, but less suitable for controller synthesis, as it does not explicitly penalize the outputs from the controller. The resulting controller will have very large gains at high frequencies and should not be used directly for implementation. In practice, one can add extra roll-off to the controller (which should work well because the system should be robust with respect to uncertain high-frequency dynamics), or one may consider a more complicated problem set-up (see Section 12.4). With this caution in mind, we proceed with the problem description.

Again, we use the model of the simplified distillation process

    G(s) = 1/(75s + 1) [  87.8   -86.4  ]
                        [ 108.2  -109.6 ]                              (8.143)

The uncertainty weight wI·I and performance weight wP·I are given in (8.132), and are shown graphically in Figure 8.16. Notice that there is a frequency range ("window") where both weights are less than one in magnitude.

[Figure 8.16: Uncertainty and performance weights, |wI| and |wP| versus frequency.]

The objective is to minimize the peak value of μ_Δ̂(N), Δ̂ = diag{Δ_I, Δ_P}, where N is given in (8.132). We will consider diagonal input uncertainty (which is always present in any real problem), so Δ_I is a 2 × 2 diagonal matrix, and Δ_P is a full 2 × 2 matrix representing the performance specification. Note that we have only three complex uncertainty blocks, so μ_Δ̂(N) is equal to the upper bound min_D σ̄(D N D⁻¹) in this case.

The appropriate commands for the MATLAB μ-toolbox are listed in Table 8.2. First the generalized plant P, as given in (8.29), is constructed. It includes the plant model, the uncertainty weight and the performance weight, but not the controller, which is to be designed (note that N = F_l(P, K)). Then the block-structure is defined; it consists of two 1 × 1 blocks to represent Δ_I and a 2 × 2 block to represent Δ_P. The scaling matrix D for D N D⁻¹ then has the structure D = diag{d₁, d₂, d₃ I₂}, where I₂ is a 2 × 2 identity matrix, and we may set d₃ = 1. As initial scalings we select d₁⁰ = d₂⁰ = 1. P is then scaled with the matrix diag{D, I₂}, where I₂ is associated with the inputs and outputs from the controller (we do not want to scale the controller).

Iteration No. 1. Step 1: With the initial scalings, the H∞ software produced a 6-state controller (2 states from the plant model and 2 from each of the weights)

Table 8.2: MATLAB program to perform DK-iteration

% Uses the Mu toolbox
G0 = [87.8 -86.4; 108.2 -109.6];            % Distillation process.
dyn = nd2sys(1,[75 1]);                     % Approximated integrator.
Dyn = daug(dyn,dyn);
G = mmult(Dyn,G0);
%
% Weights.
wp = nd2sys([10 1],[10 1.e-5],0.5); Wp = daug(wp,wp);
wi = nd2sys([1 0.2],[0.5 1]);       Wi = daug(wi,wi);
%
% Generalized plant P.
systemnames  = 'G Wp Wi';
inputvar     = '[ydel(2); w(2); u(2)]';
outputvar    = '[Wi; Wp; -G-w]';
input_to_G   = '[u+ydel]';
input_to_Wp  = '[G+w]';
input_to_Wi  = '[u]';
sysoutname   = 'P';
cleanupsysic = 'yes';
sysic;
%
% Initialize.
omega = logspace(-3,3,61);
blk = [1 1; 1 1; 2 2];
nmeas = 2; nu = 2; gamma = 2; tol = 0.01; gmin = 0.9;
d0 = 1;
dsysl = daug(d0,d0,eye(2),eye(2)); dsysr = dsysl;
%
% START ITERATION.
%
% STEP 1: Find H-infinity optimal controller
%         with given scalings:
DPD = mmult(dsysl,P,minv(dsysr));
gmax = 1.05*gamma;
[K,Nsc,gamma] = hinfsyn(DPD,nmeas,nu,gmin,gmax,tol);
% (Remark: Without scaling: N = starp(P,K).)
%
% STEP 2: Compute mu using upper bound:
Nf = frsp(Nsc,omega);
[mubnds,rowd,sens,rowp,rowg] = mu(Nf,blk,'c');
murp = pkvnorm(mubnds);
vplot('liv,m',mubnds);
%
% STEP 3: Fit resulting D-scales:
[dsysL,dsysR] = msf(Nf,mubnds,rowd,sens,blk);    % choose 4th order.
dsysl = daug(dsysL,eye(2));
dsysr = daug(dsysR,eye(2));
% New version:
% [dsysl,dsysr] = musynflp(dsysl,rowd,sens,blk,nmeas,nu);   % order: 4.
%
% GOTO STEP 1 (unless satisfied with murp).

with an H∞ norm of γ = 1.1823. Step 2: The upper μ-bound gave the μ-curve shown as curve "Iter. 1" in Figure 8.17, corresponding to a peak value of μ = 1.1818. Step 3: The frequency-dependent d₁(ω) and d₂(ω) from Step 2 were each fitted using a 4th-order transfer function; d₁(ω) and the fitted transfer function (dotted line) are shown in Figure 8.18. The fit is very good, so it is hard to distinguish the two curves. d₂ is not shown because it was found that d₁ ≈ d₂ (indicating that the worst-case full-block Δ_I is in fact diagonal).

Iteration No. 2. Step 1: With the 8-state scaling D¹(s) the H∞ software gave a 22-state controller and ‖D¹ N (D¹)⁻¹‖∞ = 1.0238. Step 2: This controller gave a peak μ-value of 1.024. Step 3: The resulting scalings D² were only slightly changed from the previous iteration, as can be seen from d₁(ω) labelled "Iter. 2" in Figure 8.18.

Iteration No. 3. Step 1: With the scalings D²(s) the H∞ norm was only slightly reduced, from 1.024 to 1.019. Since the improvement was very small, and since the μ-value of 1.019 was very close to the desired value of 1, it was decided to stop the iterations. The resulting 22-state controller (denoted K₃ in the following) gives a peak μ-value of 1.019.

[Figure 8.17: Change in μ during DK-iteration. Curves: Iter. 1, Iter. 2, Optimal.]

Analysis of the μ-"optimal" controller K₃

The final μ-curves for NP, RS and RP with controller K₃ are shown in Figure 8.19. The objectives of RS and NP are easily satisfied, and the peak μ-value of 1.019 with controller K₃ is only slightly above 1, so the performance specification σ̄(wP Sp) < 1 is almost satisfied for all possible plants. To confirm this we considered the nominal plant and six perturbed plants G'ᵢ(s) = G(s) E_Ii(s), for which

the following input gain perturbations are allowable Á½ ½¾ ¼ ¼ ½¾ Á¾ ¼ ¼ ¼ ½¾ Á¿ ½¾ ¼ ¼ ¼ Á ¼ ¼ ¼ ¼ These perturbations do not make use of the fact that Û Á ´×µ increases with frequency.2 0 −3 10 10 −2 10 −1 10 0 10 1 10 2 10 3 Frequency [rad/min] Figure 8.6 0.19: -plots with -“optimal” controller ÿ where Á Á · ÛÁ ¡Á is a diagonal transfer function matrix representing input uncertainty (with nominal Á ¼ Á ).4 RS 0.18: Change in RP 1 -scale ½ during à -iteration NP 0.8 0. Recall that the uncertainty weight is ÛÁ ´×µ ×·¼¾ ¼ ×·½ which is 0.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿ 4th order fit 10 0 Initial Magnitude 10 −1 Iter. Two allowed dynamic perturbations for the diagonal elements in Û Á ¡Á are ¯½ ´×µ  × · ¼ ¾ ¯ ´×µ   × · ¼ ¾ ¾ ¼ ×·½ ¼ ×·½ .2 in magnitude at low frequencies. 1 Optimal Iter. Thus. 2 10 −2 10 −3 10 −2 10 −1 10 0 10 1 10 2 10 3 Frequency [rad/min] Figure 8.

Lower solid line: Nominal plant .4 0.8 0. and the others with dotted lines. ´Ë ¼ µ.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ½ ÛÈ 10 0 10 −1 10 −2 10 −1 10 0 10 1 Frequency [rad/min] Figure 8. The sensitivity for the nominal plant is shown by the lower solid line.6 0.20: Perturbed sensitivity functions ´Ë¼ µ using -“optimal” controller ÿ . Û 1 0. and is seen to be almost below the bound ½ ÛÁ ´ µ for all seven cases ( ¼ ) illustrating that RP is almost satisfied.2 0 0 10 20 30 40 50 60 70 ݽ ݾ 80 90 100 Time [min] Figure 8. Solid line: nominal plant.21: Setpoint response for Dashed line: uncertain plant ¼ ¿ -“optimal” controller ÿ . Dotted lines: Plants ¼ ½ . is shown in Figure 8.20 for the nominal and six perturbed plants. At low frequencies the worst-case corresponds closely to a plant with . corresponding to elements in ½ ´×µ ½ · ¯½ ´×µ ½¾  ¼ Á of ½ ×·½ ¼ ×·½ ½ ´×µ ¾ ´×µ Á ½ · ¯¾ ´×µ ¾ ´ ×µ ¼  ¼ ¿¿× · ½ ¼ ×·½ so let us also consider Á ¼ ½ ´×µ ¼ ¼ ½ ´×µ ¼ The maximum singular value of the sensitivity. Upper solid line: Worst-case plant ¼ .

It has a peak value of ´Ë ¼ µ of about 2.6 rad/min. ÃÓÔØ . and the sensitivity function for the corresponding worst-case plant ¼ ´×µ ´×µ´Á · ÛÁ ´×µ¡Û ´×µµ found with the Û software is shown by the upper solid line in Figure 8.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿ gains 1. The initial choice of scaling of about 1. Remarks on the -synthesis example.21 both for the nominal case (solid line) and for 20% input gain uncertainty (dashed line) using the plant ¼ ¿ (which we know is one of ¿ the worst plants). 4. is shown in Figure 8. That is. so we might as well have used a full matrix for ¡Á . such as ¼ . To find the “true” worst-case performance and plant we used the MATLAB -tools command wcperf as explained in Section 8. The resulting design produces the curves labelled optimal in Figures 8. then ÃÓÔØ Î Ã× Í À where Ã× is a diagonal matrix. but shows no strong sensitivity to the uncertainty.10. 1. o 1994).144) 2.8.20. The ½ software may encounter numerical problems if È ´×µ has poles on the -axis. the worst-case of these six plants ¾ ¿ seems to be ¼ Á . let Í ¦Î À . For example. By trial and error. 3.17 and 8. The “worst-case” plant is not unique.3 on page 330. But this does not hold in general. may be synthesized using ½ -synthesis with the following third-order -scales À ½ ´×µ ¾ ´×µ ¾ ´¼ ¼¼½× · ½µ´× · ¼ ¾ µ´× · ¼ ¼ µ ´´× · ¼ µ¾ · ¼ ¾ µ´× · ¼ ¼½¿µ ¿ ½ (8. The time response of Ý ½ and ݾ to a filtered setpoint change in Ý ½ .79 rad/min.20. Ö½ ½ ´ × · ½µ. The response with uncertainty is seen to be much better than that with the inverse-based controller studied earlier and shown in Figure 3. The response is interactive.02 (above the allowed bound of 2) at 1. The corresponding controller. This arises because in this example Í and Î are constant matrices.2 and 0. For more details see Hovd (1992) and Hovd et al. Petter Lundstr¨ m was able to reduce the peak o -value for robust performance for this problem down to about ÓÔØ ¼ (Lundstr¨ m. 
This gives a worstcase performance of Ñ Ü ËÔ ÛÈ ËÔ ½ ½ ¼¿ . it is likely that we could find plants which were more consistently “worse” at all frequencies than the one shown by the upper solid line in Figure 8.05 at 0. Note that the optimal controller ÃÓÔØ for this problem has a SVD-form. Remark. and has a peak of about 2. ¼ or ¼ .18. Á gave a good design for this plant with an ½ norm 5. and many long nights.12.18. This is the reason why in the MATLAB code we have moved the integrators (in the performance weights) slightly into the left-half plane. Overall. For this particular plant it appears that the worst-case full-block input uncertainty is a diagonal perturbation. which has ´Ë ¼ µ close to the bound at low frequencies. (1994). This scaling worked well because the inputs and outputs had been scaled À À . and there are many plants which yield a worstcase performance of Ñ ÜËÔ ÛÈ ËÔ ½ ½ ¼¿ .

13 Further remarks on 8. Therefore.145) (8.53. we are still far away from the optimal value which we know is less than 1. one can find bounds on the allowed time variations in the perturbations. For a comparison. 1994). 1995). 8. ÙÒ× Ð ´×µ ½ ×·½ ¼  ¼ ½ ¼ ¾  ½ ¼ (8. On the other hand. It may be argued that such perturbations are unlikely in a practical situation.25 Explain why the optimal -value would be the same if in the model (8. (1988) which was in terms of unscaled physical variables. 1.49. 1. it can be argued that slowly time-varying uncertainty is more useful than constant perturbations. However.143) [min] to another value. the use of assumes the uncertain perturbations to be time-invariant. consider the original model in Skogestad et al. we see that if we can get an acceptable controller design using constant -scales. Note that the -iteration itself we changed the time constant of would be affected.9 (Step 1) resulting in a peak -value of 5.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ to be of unit magnitude.143). then we know that this controller will work very well even for rapid changes in the plant . In some cases. Subsequent iterations yield with 3rd and 4th order fits of the -scales the following peak -values: 2. À Exercise 8. However.46.13. the scaled singular value ´ Æ  ½ µ is a tight upper Æ  ½ ½ forms bound on ´Æ µ in most cases. 1. which is related to scaling the plant model properly.42.44.92.145) has all its elements 100 times smaller than in the scaled model (8.67. Nevertheless. This demonstrates the importance of good initial -scales. using this model should give the same optimal -value but with controller gains 100 times Á works very poorly in this case.1 Further justification for the upper bound on For complex perturbations. The reason for this.22. is that when all uncertainty blocks are full and complex. Æ right.87.2 (Step 2). 1. starting the à -iteration with first iteration yields an ½ norm of 14.58. In addition. The larger. 
to be of unit magnitude.

8.13 Further remarks on μ

8.13.1 Further justification for the upper bound on μ

For complex perturbations, the scaled singular value σ̄(D N D⁻¹) is a tight upper bound on μ(N) in most cases, and minimizing the upper bound ‖D N D⁻¹‖∞ forms the basis for the DK-iteration. The upper bound is also of interest in its own right. The reason for this is that when all uncertainty blocks are full and complex, the upper bound provides a necessary and sufficient condition for robustness to arbitrarily slow time-varying linear uncertainty (Poolla and Tikku, 1995), and it can therefore be argued that it is better to minimize ‖D N D⁻¹‖∞ instead of μ(N). In addition, the use of constant D-scales (D not allowed to vary with frequency) provides a necessary and sufficient condition for robustness to arbitrarily fast time-varying linear uncertainty (Shamma, 1994).

In contrast, the use of μ assumes the uncertain perturbations to be time-invariant. In some cases, it can be argued that slowly time-varying uncertainty is more useful than constant perturbations, and by considering how D(ω) varies with frequency, one can find bounds on the allowed time variations in the perturbations. It may be argued that arbitrarily fast perturbations are unlikely in a practical situation. Nevertheless, we see that if we can get an acceptable controller design using constant D-scales, then we know that this controller will work very well even for rapid changes in the plant model. Another advantage of constant D-scales is that the computation of μ is then straightforward and may be solved using LMIs, see (8.151) below.

Exercise 8.25 Explain why the optimal μ-value would be the same if we changed the time constant of 75 [min] in the model (8.143) to another value. (Note that the DK-iteration itself would be affected.)

For a comparison, consider the original model in Skogestad et al. (1988), which was in terms of unscaled physical variables:

    G_unscaled(s) = 1/(75s + 1) [ 0.878   -0.864 ]
                                 [ 1.082   -1.096 ]                    (8.145)

This model has all its elements 100 times smaller than the scaled model in (8.143), so using it should give the same optimal μ-value but with controller gains 100 times larger. However, starting the DK-iteration with D = I works very poorly in this case: the first iteration yields an H∞ norm of 14.9 (Step 1), resulting in a peak μ-value of 5.2 (Step 2). Subsequent iterations, with 3rd- and 4th-order fits of the D-scales, reduce the peak μ-value only slowly, e.g. 2.92, 2.46, 1.92, 1.87, 1.67, 1.58, 1.49, 1.46, 1.44 and 1.42. At this point (after 11 iterations) the μ-plot is fairly flat up to 10 [rad/min] and one may be tempted to stop the iterations. However, we are still far away from the optimal value, which we know is less than 1. This demonstrates the importance of good initial D-scales, which is related to scaling the plant
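The scaled singular value referred to above is easy to evaluate numerically for small problems. The sketch below (NumPy; the 2 × 2 complex matrix is an arbitrary stand-in for N at one frequency) computes min over a scalar D-grid of σ̄(D M D⁻¹) for the structure of two 1 × 1 blocks, and checks that it lies between the spectral radius (a lower bound on μ) and the unscaled σ̄(M):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # N at one frequency
sig = lambda X: np.linalg.svd(X, compute_uv=False)[0]

# Upper bound min_D sigma_bar(D M D^-1) for two 1x1 blocks: D = diag(d, 1), d > 0
ds = np.logspace(-3, 3, 2001)
ub = min(sig(np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0])) for d in ds)

rho = max(abs(np.linalg.eigvals(M)))    # spectral radius: a lower bound on mu
print(rho, ub, sig(M))                  # rho <= scaled bound <= unscaled sigma_bar
```

With constant (frequency-independent) D, the same one-dimensional minimization would simply be carried out once for the whole frequency response rather than pointwise.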

8.13.2 Real perturbations and the mixed μ problem

We have not discussed in any detail the analysis and design problems which arise with real or, more importantly, mixed real and complex perturbations. The current algorithms, implemented in the MATLAB μ-toolbox, employ a generalization of the upper bound σ̄(D M D⁻¹), where in addition to D-matrices, which exploit the block-diagonal structure of the perturbations, there are G-matrices, which exploit the structure of the real perturbations. The G-matrices (which should not be confused with the plant transfer function G(s)) have real diagonal elements at the locations where Δ is real, and zeros elsewhere. The algorithm in the μ-toolbox makes use of the following result from Young et al. (1992): if there exists a β > 0, a D and a G with the appropriate block-diagonal structure such that

    σ̄( (I + G²)^(-1/4) ( (1/β) D M D⁻¹ − jG ) (I + G²)^(-1/4) ) ≤ 1        (8.146)

then μ(M) ≤ β. There also exists a corresponding DGK-iteration procedure for synthesis (Young, 1994). The practical implementation of this algorithm is, however, difficult, and a very high-order fit may be required for the G-scales. An alternative approach, which involves solving a series of scaled DK-iterations, is given by Tøffner-Clausen et al. (1995).

8.13.3 Computational complexity

It has been established that the computational complexity of computing μ has a combinatoric (non-polynomial, "NP-hard") growth with the number of parameters involved, even for purely complex perturbations (Toker and Ozbay, 1995). This does not mean, however, that practical algorithms are not possible, and we have described practical algorithms for computing upper bounds of μ for cases with complex, real or mixed real/complex perturbations. The upper bound σ̄(D M D⁻¹) for complex perturbations is generally tight, whereas the present upper bounds for mixed perturbations (see (8.146)) may be arbitrarily conservative. There also exists a number of lower bounds for computing μ; most of these involve generating a perturbation which makes I − MΔ singular. For more details, the reader is referred to Young (1993).

8.13.4 Discrete case

It is also possible to use μ for analyzing robust performance of discrete-time systems (Packard and Doyle, 1993). Consider a discrete-time system

    x_{k+1} = A x_k + B u_k,    y_k = C x_k + D u_k

The corresponding discrete transfer function matrix from u to y is N(z) = C(zI − A)⁻¹B + D. First, note that the H∞ norm of a discrete transfer function is

    ‖N‖∞ = max_{|z|=1} σ̄( C(zI − A)⁻¹B + D )

This follows since evaluation on the jω-axis in the continuous case is equivalent to evaluation on the unit circle (|z| = 1) in the discrete case. Second, note that N(z) may be written as an LFT in terms of 1/z:

    N(z) = C(zI − A)⁻¹B + D = F_u( H, (1/z) I ),    H = [ A  B ]
                                                        [ C  D ]        (8.147)

Thus, by introducing Δ_z = (1/z) I, a matrix of repeated complex scalars representing the discrete "frequencies", and Δ_P, a full complex matrix representing the singular value performance specification, we have from the main loop theorem of Packard and Doyle (1993) (which generalizes Theorem 8.7) that ‖N‖∞ < 1 (NP) if and only if

    μ_Δ̂(H) < 1,    Δ̂ = diag{ Δ_z, Δ_P }                                 (8.148)

The condition in (8.148) is also referred to as the state-space μ test. Note that nominal stability (NS) follows as a special case (and thus does not need to be tested separately), since when μ_Δ̂(H) < 1 (NP) we have ρ(A) < 1, which is the well-known stability condition for discrete systems. Condition (8.148) only considers nominal performance (NP); we can also generalize the treatment to consider RS and RP. This is discussed by Packard and Doyle (1993). In this case we see that the search over frequencies in the frequency domain is avoided, but at the expense of a complicated μ-calculation.

Since the state-space matrices are contained explicitly in H in (8.147), it follows that the discrete-time formulation is convenient if we want to consider parametric uncertainty in the state-space matrices. However, this results in real perturbations, and the resulting μ-problem, which involves repeated complex perturbations (from the evaluation of z on the unit circle), a full-block complex perturbation (from the performance specification), and real perturbations (from the uncertainty), is difficult to solve numerically, both for analysis and in particular for synthesis. For this reason the discrete-time formulation is little used in practical applications.
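The LFT identity in (8.147) is easy to verify numerically. The sketch below (NumPy, with arbitrary random state-space data) evaluates N(z) both directly and via the upper LFT F_u(H, Δ) = D + C Δ (I − AΔ)⁻¹ B with Δ = (1/z)I:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) * 0.3      # arbitrary (stable-ish) state matrix
B = rng.normal(size=(3, 2))
C = rng.normal(size=(2, 3))
D = rng.normal(size=(2, 2))
z = 2.0 + 0.5j                         # any point where zI - A is invertible

N_direct = D + C @ np.linalg.inv(z * np.eye(3) - A) @ B

# Upper LFT with H = [[A, B], [C, D]] and Delta = (1/z) I
Delta = (1.0 / z) * np.eye(3)
N_lft = D + C @ Delta @ np.linalg.inv(np.eye(3) - A @ Delta) @ B

print(np.allclose(N_direct, N_lft))
```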

8.13.5 Relationship to linear matrix inequalities (LMIs)

An example of an LMI problem is the following: for given matrices P, Q, R and S, does there exist an X = Xᴴ (where X may have a block-diagonal structure) such that

    Pᴴ X P − Qᴴ X Q + X R + Rᴴ X + S < 0                               (8.149)

Depending on the particular problem, the < may be replaced by ≤. These inequality conditions produce a set of solutions which is convex, which makes LMIs attractive computationally. Sometimes, the matrices P, Q, R and S are functions of a real parameter β, and we want to know, for example, what is the largest β for which there is no solution.

It has been shown that a number of other problems, including H₂ and H∞ optimal control problems, can be reduced to solving LMIs. The upper bound for μ can be rewritten as an LMI:

    σ̄(D M D⁻¹) ≤ β  ⇔  (D M D⁻¹)ᴴ (D M D⁻¹) − β² I ⪯ 0                (8.150)
                     ⇔  Mᴴ D̂ M − β² D̂ ⪯ 0,    D̂ = Dᴴ D ≻ 0            (8.151)

which is an LMI in D̂. To compute the upper bound for μ based on this LMI we need to iterate on β. The reader is referred to Boyd et al. (1994) for more details.

8.14 Conclusion

In this chapter and the last we have discussed how to represent uncertainty and how to analyze its effect on stability (RS) and performance (RP) using the structured singular value μ as our main tool. To analyze robust stability (RS) of an uncertain system we make use of the MΔ-structure (Figure 8.3), where M represents the transfer function for the "new" feedback part generated by the uncertainty. From the small gain theorem,

    RS ⇐ σ̄(M) < 1, ∀ω                                                 (8.152)

which is tight (necessary and sufficient) for the special case where at each frequency any complex Δ satisfying σ̄(Δ) ≤ 1 is allowed. More generally, the tight condition is

    RS ⇔ μ(M) < 1, ∀ω                                                 (8.153)

where μ(M) is the structured singular value. The calculation of μ makes use of the fact that Δ has a given block-diagonal structure, where certain blocks may also be real (e.g. to handle parametric uncertainty).

We defined robust performance (RP) as ‖F_l(N, Δ)‖∞ < 1 for all allowed Δ's. Since we used the H∞ norm in both the representation of uncertainty and the definition of performance, we found that RP could be viewed as a special case of RS, and we derived

    RP ⇔ μ(N) < 1, ∀ω                                                 (8.154)

where μ is computed with respect to the block-diagonal structure diag{Δ, Δ_P}. Here Δ represents the uncertainty and Δ_P is a fictitious full uncertainty block representing the H∞ performance bound.

It should be noted that there are two main approaches to getting a robust design:

1. We aim to make the system robust to some "general" class of uncertainty which we do not explicitly model. For SISO systems the classical gain and phase margins and the peaks of S and T provide useful general robustness measures. For MIMO systems, normalized coprime factor uncertainty provides a good general class of uncertainty, and the associated Glover-McFarlane H∞ loop-shaping design procedure, see Chapter 10, has proved itself very useful in applications.

2. We explicitly model and quantify the uncertainty in the plant and aim to make the system robust to this specific uncertainty. This second approach has been the focus of the preceding two chapters. Potentially, it yields better designs, but it may require a much larger effort in terms of uncertainty modelling, especially if parametric uncertainty is considered. Analysis and, in particular, synthesis using μ can be very involved.

In applications, it is therefore recommended to start with the first approach, at least for design. The robust stability and performance are then analyzed in simulations and using the structured singular value, for example by considering first simple sources of uncertainty such as multiplicative input uncertainty. One then iterates between design and analysis until a satisfactory solution is obtained.

Practical μ-analysis

We end the chapter by providing a few recommendations on how to use the structured singular value μ in practice.

1. Because of the effort involved in deriving detailed uncertainty descriptions, and the subsequent complexity in synthesizing controllers, the rule is to "start simple" with a crude uncertainty description, and then to see whether the performance specifications can be met. Only if they can't, should one consider more detailed uncertainty descriptions such as parametric uncertainty (with real perturbations).

2. The use of μ implies a worst-case analysis, so one should be careful about including too many sources of uncertainty, noise and disturbances; otherwise it becomes very unlikely for the worst case to occur, and the resulting analysis and design may be unnecessarily conservative.

The relative (multiplicative) form is very convenient in this case. If is used for synthesis. is most commonly used for analysis. recommend that you keep the uncertainty fixed and adjust the parameters in the performance weight until is close to 1.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿¿ 3. . There is always uncertainty with respect to the inputs and outputs. then we 4. so it is generally “safe” to include diagonal input and output uncertainty.
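As an illustration of the "start simple" recommendation, a crude RS check for relative (multiplicative) input uncertainty on a SISO loop reduces to sweeping $|w_I T|$ over frequency (for SISO, $\mu$ of the corresponding $M$ is just $|w_I T|$). A small sketch with a made-up loop $L(s) = 1/(s(s+1))$ and a constant 50% uncertainty weight:

```python
import numpy as np

def T_of(s):
    """Complementary sensitivity for the example loop L(s) = 1/(s(s+1))."""
    L = 1.0 / (s * (s + 1.0))
    return L / (1.0 + L)

w_I = 0.5                                  # 50% multiplicative uncertainty, all frequencies
omega = np.logspace(-2, 2, 400)
peak = max(abs(w_I * T_of(1j * w)) for w in omega)
robustly_stable = peak < 1.0               # RS <=> |w_I T(jw)| < 1 for all w
print(peak, robustly_stable)               # peak ~ 0.577 -> robustly stable
```

For this loop $|T|$ peaks at $2/\sqrt{3} \approx 1.155$ (at $\omega = 1/\sqrt{2}$), so the uncertain system tolerates the full 50% uncertainty with margin.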


9 CONTROLLER DESIGN

In this chapter, we present practical procedures for multivariable controller design which are relatively straightforward to apply and which, in our opinion, have an important role to play in industrial control.

9.1 Trade-offs in MIMO feedback design

The shaping of multivariable transfer functions is based on the idea that a satisfactory definition of gain (range of gain) for a matrix transfer function is given by the singular values of the transfer function. By multivariable transfer function shaping, therefore, we mean the shaping of singular values of appropriately specified transfer functions such as the loop transfer function or possibly one or more closed-loop transfer functions.

For industrial systems which are either SISO or loosely coupled, the classical loop-shaping approach to control system design as described in Section 2.6 has been successfully applied. But for truly multivariable systems it has only been in the last decade, or so, that reliable generalizations of this classical approach have emerged. In February 1981, the IEEE Transactions on Automatic Control published a Special Issue on Linear Multivariable Control Systems, the first six papers of which were on the use of singular values in the analysis and design of multivariable feedback systems. The paper by Doyle and Stein (1981) was particularly influential: it was primarily concerned with the fundamental question of how to achieve the benefits of feedback in the presence of unstructured uncertainty, and through the use of singular values it showed how the classical loop-shaping ideas of feedback design could be generalized to multivariable systems. This methodology for controller design is central to the practical procedures described in this chapter.

To see how this was done, consider the one degree-of-freedom configuration shown in Figure 9.1. The plant $G$ and controller $K$ interconnection is driven by reference commands $r$, output disturbances $d$, and measurement noise $n$. $y$ are the outputs to be controlled, and $u$ are the control signals.

[Figure 9.1: One degree-of-freedom feedback configuration]

In terms of the sensitivity function $S = (I + GK)^{-1}$ and the closed-loop transfer function $T = GK(I + GK)^{-1} = I - S$, we have the following important relationships:

$y(s) = T(s)r(s) + S(s)d(s) - T(s)n(s)$ (9.1)
$u(s) = K(s)S(s)\left[r(s) - n(s) - d(s)\right]$ (9.2)

These relationships determine several closed-loop objectives, in addition to the requirement that $K$ stabilizes $G$, namely:

1. For disturbance rejection make $\bar\sigma(S)$ small.
2. For noise attenuation make $\bar\sigma(T)$ small.
3. For reference tracking make $\bar\sigma(T) \approx \underline\sigma(T) \approx 1$.
4. For control energy reduction make $\bar\sigma(KS)$ small.

If the unstructured uncertainty in the plant model is represented by an additive perturbation, i.e. $G_p = G + \Delta$, then a further closed-loop objective is:

5. For robust stability in the presence of an additive perturbation make $\bar\sigma(KS)$ small.

Alternatively, if the uncertainty is modelled by a multiplicative output perturbation such that $G_p = (I + \Delta)G$, then we have:

6. For robust stability in the presence of a multiplicative output perturbation make $\bar\sigma(T)$ small.

The closed-loop requirements 1 to 6 cannot all be satisfied simultaneously. Feedback design is therefore a trade-off over frequency of conflicting objectives. This is not always as difficult as it sounds because the frequency ranges over which the objectives are important can be quite different. For example, disturbance rejection is typically a low frequency requirement, while noise mitigation is often only relevant at higher frequencies.

In classical loop shaping, it is the magnitude of the open-loop transfer function $L = GK$ which is shaped, whereas the above design requirements are all in terms of closed-loop transfer functions. However, recall from (2.51) that

$\underline\sigma(L) - 1 \;\le\; \frac{1}{\bar\sigma(S)} \;\le\; \underline\sigma(L) + 1$ (9.3)

from which we see that $\bar\sigma(S) \approx 1/\underline\sigma(L)$ at frequencies where $\underline\sigma(L)$ is much larger than 1. It also follows that at the bandwidth frequency (where $1/\bar\sigma(S(j\omega_B)) = \sqrt{2} = 1.41$) we have $\underline\sigma(L(j\omega_B))$ between 0.41 and 2.41. Furthermore, from $T = L(I + L)^{-1}$ it follows that $\bar\sigma(T) \approx \bar\sigma(L)$ at frequencies where $\bar\sigma(L)$ is small. Thus, over specified frequency ranges, it is relatively easy to approximate the closed-loop requirements by the following open-loop objectives:

1. For disturbance rejection make $\underline\sigma(GK)$ large; valid for frequencies at which $\underline\sigma(GK) \gg 1$.
2. For noise attenuation make $\bar\sigma(GK)$ small; valid for frequencies at which $\bar\sigma(GK) \ll 1$.
3. For reference tracking make $\underline\sigma(GK)$ large; valid for frequencies at which $\underline\sigma(GK) \gg 1$.
4. For control energy reduction make $\bar\sigma(K)$ small; valid for frequencies at which $\bar\sigma(GK) \ll 1$.
5. For robust stability to an additive perturbation make $\bar\sigma(K)$ small; valid for frequencies at which $\bar\sigma(GK) \ll 1$.
6. For robust stability to a multiplicative output perturbation make $\bar\sigma(GK)$ small; valid for frequencies at which $\bar\sigma(GK) \ll 1$.

Typically, the open-loop requirements 1 and 3 are valid and important at low frequencies, $0 \le \omega \le \omega_l$, while 2, 4, 5 and 6 are conditions which are valid and important at high frequencies, $\omega_h \le \omega \le \infty$, as illustrated in Figure 9.2. That is, for good performance, $\underline\sigma(GK)$ must be made to lie above a performance boundary for all $\omega$ up to $\omega_l$, and for robust stability $\bar\sigma(GK)$ must be forced below a robustness boundary for all $\omega$ above $\omega_h$. From this we see that at frequencies where we want high gains (at low frequencies) the "worst-case" direction is related to $\underline\sigma(GK)$, whereas at frequencies where we want low gains (at high frequencies) the "worst-case" direction is related to $\bar\sigma(GK)$.

Exercise 9.1 Show that the closed-loop objectives 1 to 6 can be approximated by the open-loop objectives 1 to 6 at the specified frequency ranges.

From Figure 9.2, it follows that the control engineer must design $K$ so that $\bar\sigma(GK)$ and $\underline\sigma(GK)$ avoid the shaded regions. To shape the singular values of $GK$ by selecting $K$ is a relatively easy task, but to do this in a way which also guarantees closed-loop stability is in general difficult.
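The open-loop/closed-loop approximation rests on the bounds quoted from (2.51), $\underline\sigma(L) - 1 \le 1/\bar\sigma(S) \le \underline\sigma(L) + 1$, which hold for any square $L$ since $1/\bar\sigma(S) = \underline\sigma(I + L)$. A quick numerical check in Python, with random complex matrices standing in for $L(j\omega)$ at a single frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    L = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    S = np.linalg.inv(np.eye(2) + L)            # sensitivity at one frequency
    sv_L = np.linalg.svd(L, compute_uv=False)   # singular values, descending
    sv_S = np.linalg.svd(S, compute_uv=False)
    lower, upper = sv_L[-1] - 1, sv_L[-1] + 1   # underline-sigma(L) -/+ 1
    # 1/sigma_bar(S) = underline-sigma(I+L) must sit between the bounds
    assert lower - 1e-9 <= 1 / sv_S[0] <= upper + 1e-9
print("bounds (9.3) verified")
```

The bounds follow from $\underline\sigma(A + B) \ge \underline\sigma(A) - \bar\sigma(B)$ applied with $B = \pm I$, so they hold at every frequency, not just where $\underline\sigma(L)$ is large; the approximation $\bar\sigma(S) \approx 1/\underline\sigma(L)$ is only tight when $\underline\sigma(L) \gg 1$.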

Closed-loop stability cannot be determined from open-loop singular values. For SISO systems, it is clear from Bode's work (1945) that closed-loop stability is closely related to open-loop gain and phase near the crossover frequency $\omega_c$, where $|G(j\omega_c)K(j\omega_c)| = 1$. In particular, the roll-off rate from high to low gain at crossover is limited by phase requirements for stability, and in practice this corresponds to a roll-off rate less than 40 dB/decade; see Section 2.6.2. An immediate consequence of this is that there is a lower limit to the difference between $\omega_h$ and $\omega_l$ in Figure 9.2.

[Figure 9.2: Design trade-offs for the multivariable loop transfer function $GK$ - performance boundary on $\underline\sigma(GK)$ for $\omega \le \omega_l$; robust stability, noise attenuation and control energy reduction boundary on $\bar\sigma(GK)$ for $\omega \ge \omega_h$]

For MIMO systems a similar gain-phase relationship holds in the crossover frequency region, but this is in terms of the eigenvalues of $GK$ and results in a limit on the roll-off rate of the magnitude of the eigenvalues of $GK$, not the singular values (Doyle and Stein, 1981). The stability constraint is therefore even more difficult to handle in multivariable loop-shaping than it is in classical loop-shaping.

To overcome this difficulty Doyle and Stein (1981) proposed that the loop-shaping should be done with a controller that was already known to guarantee stability. They suggested that an LQG controller could be used in which the regulator part is designed using a "sensitivity recovery" procedure of Kwakernaak (1969) to give desirable properties (gain and phase margins) in $GK$. They also gave a dual "robustness recovery" procedure for designing the filter in an LQG controller to give desirable properties in $KG$. Recall that $GK$ is not in general equal to $KG$, which implies that stability margins vary from one break point to another in a multivariable system. Both of these loop transfer recovery (LTR) procedures are discussed below after first describing traditional LQG control.

9.2 LQG control

Optimal control, building on the optimal filtering work of Wiener in the 1940's, reached maturity in the 1960's with what we now call linear quadratic Gaussian or LQG control. Its development coincided with large research programs and considerable funding in the United States and the former Soviet Union on space related problems. These were problems, such as rocket manoeuvering with minimum fuel consumption, which could be well defined and easily formulated as optimizations. Aerospace engineers were particularly successful at applying LQG, but when other control engineers attempted to use LQG on everyday industrial problems a different story emerged. Accurate plant models were frequently not available and the assumption of white noise disturbances was not always relevant or meaningful to practising control engineers. As a result LQG designs were sometimes not robust enough to be used in practice.

In this section, we will describe the LQG problem and its solution, we will discuss its robustness properties, and we will describe procedures for improving robustness. Many text books consider this topic in far greater detail; we recommend Anderson and Moore (1989) and Kwakernaak and Sivan (1972).

9.2.1 Traditional LQG and LQR problems

In traditional LQG control, it is assumed that the plant dynamics are linear and known, and that the measurement noise and disturbance signals (process noise) are stochastic with known statistical properties. That is, we have a plant model

$\dot x = Ax + Bu + w_d$ (9.4)
$y = Cx + Du + w_n$ (9.5)

where we for simplicity set $D = 0$ (see the remark below). $w_d$ and $w_n$ are the disturbance (process noise) and measurement noise inputs respectively, which are usually assumed to be uncorrelated zero-mean Gaussian stochastic processes with constant power spectral density matrices $W$ and $V$ respectively. That is, $w_d$ and $w_n$ are white noise processes with covariances

$E\{w_d(t)w_d(\tau)^T\} = W\,\delta(t-\tau)$ (9.6)
$E\{w_n(t)w_n(\tau)^T\} = V\,\delta(t-\tau)$ (9.7)

and

$E\{w_d(t)w_n(\tau)^T\} = 0, \quad E\{w_n(t)w_d(\tau)^T\} = 0$ (9.8)

where $E$ is the expectation operator and $\delta(t-\tau)$ is a delta function.

The LQG control problem is to find the optimal control $u(t)$ which minimizes

$J = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \left[x^T Q x + u^T R u\right] dt\right\}$ (9.9)

where $Q$ and $R$ are appropriately chosen constant weighting matrices (design parameters) such that $Q = Q^T \ge 0$ and $R = R^T > 0$. The name LQG arises from the use of a linear model, an integral quadratic cost function, and Gaussian white noise processes to model disturbance signals and noise.

The solution to the LQG problem, known as the Separation Theorem or Certainty Equivalence Principle, is surprisingly simple and elegant. It consists of first determining the optimal control to a deterministic linear quadratic regulator (LQR) problem: namely, the above LQG problem without $w_d$ and $w_n$. It happens that the solution to this problem can be written in terms of the simple state feedback law

$u(t) = -K_r x(t)$ (9.10)

where $K_r$ is a constant matrix which is easy to compute and is clearly independent of $W$ and $V$, the statistical properties of the plant noise. The next step is to find an optimal estimate $\hat x$ of the state $x$, so that $E\{[x - \hat x]^T[x - \hat x]\}$ is minimized. The optimal state estimate is given by a Kalman filter and is independent of $Q$ and $R$. The required solution to the LQG problem is then found by replacing $x$ by $\hat x$, to give $u(t) = -K_r\hat x(t)$. We therefore see that the LQG problem and its solution can be separated into two distinct parts, as illustrated in Figure 9.3.

[Figure 9.3: The Separation Theorem - the plant, driven by $w_d$ and $w_n$, feeds $u$ and $y$ to a Kalman filter (a dynamic system) producing $\hat x$; a deterministic linear quadratic regulator (a constant matrix) gives $u = -K_r\hat x$]

[Figure 9.4: The LQG controller and noisy plant]

We will now give the equations necessary to find the optimal state-feedback matrix $K_r$ and the Kalman filter.

Optimal state feedback. The LQR problem, where all the states are known, is the deterministic initial value problem: given the system $\dot x = Ax + Bu$ with a non-zero initial state $x(0)$, find the input signal $u(t)$ which takes the system to the zero state ($x = 0$) in an optimal manner, i.e. by minimizing the deterministic cost

$J_r = \int_0^\infty \left(x(t)^T Q x(t) + u(t)^T R u(t)\right) dt$ (9.11)

The optimal solution (for any initial state) is $u(t) = -K_r x(t)$, where

$K_r = R^{-1}B^T X$ (9.12)

and $X = X^T \ge 0$ is the unique positive-semidefinite solution of the algebraic Riccati equation

$A^T X + XA - XBR^{-1}B^T X + Q = 0$ (9.13)

Kalman filter. The Kalman filter has the structure of an ordinary state estimator or observer, as shown in Figure 9.4, with

$\dot{\hat x} = A\hat x + Bu + K_f(y - C\hat x)$ (9.14)
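The regulator and filter gains follow from two standard algebraic Riccati equations, which map directly onto `scipy.linalg.solve_continuous_are`. A sketch in Python; for the scalar plant $\dot x = ax + u$ with $Q = 1$ the Riccati equation can be solved by hand, giving $K_r = a + \sqrt{a^2 + 1/R}$, which the numerical solution should reproduce. The closing assertion checks Kalman's inequality $|1 + L(j\omega)| \ge 1$ for the resulting loop (discussed in Section 9.2.2):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K_r = R^{-1} B^T X from the Riccati equation
    A^T X + X A - X B R^{-1} B^T X + Q = 0."""
    X = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ X)

def kalman_gain(A, C, W, V):
    """Filter gain K_f = Y C^T V^{-1} from the dual Riccati equation
    Y A^T + A Y - Y C^T V^{-1} C Y + W = 0."""
    Y = solve_continuous_are(A.T, C.T, W, V)
    return Y @ C.T @ np.linalg.inv(V)

# Scalar plant xdot = a x + u with a = 1, R = 0.1
a, R = 1.0, 0.1
Kr = lqr_gain(np.array([[a]]), np.array([[1.0]]),
              np.array([[1.0]]), np.array([[R]]))
print(Kr.item())                 # a + sqrt(a^2 + 1/R) = 1 + sqrt(11) ~ 4.3166

# Kalman's inequality |1 + L(jw)| >= 1 for L(jw) = Kr / (jw - a)
w = np.logspace(-2, 2, 200)
L = Kr.item() / (1j * w - a)
assert np.all(abs(1 + L) >= 1 - 1e-12)
```

By duality, `kalman_gain` with the same scalar data ($C = 1$, $W = 1$, $V = R$) returns the same number.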

  Exercise 9. assuming positive feedback).5.4.9).2 For the plant and LQG controller arrangement of Figure 9. We here design an LQGcontroller with integral action (see Figure 9. It is exactly as we would have expected from the Separation Theorem. ut is not easy to see where to position the reference input Ö. and how integral action may be included. is given by (9.1 LQG design for inverse response process. and the LQG-controlled system ½ É ¾ µ and is internally stable. Remark 2 If the plant model is bi-proper.     For the LQG-controller. which minimizes Ü Ü Ì Ü Ü Ã Ì where Riccati equation Ì Ì Î  ½ Ì Î  ½ ¼ is the unique positive-semidefinite solution of the algebraic ·   ·Ï ¼ (9.14) has the extra term à ٠on the right-hand side. The LQG control problem is to minimize  in (9. as shown in Figure 9. Here the control error Ö   Ý is integrated and the regulator ÃÖ is designed for the plant augmented with these integrated states.17) has the extra term ·Ã ÃÖ .5) for the SISO inverse response process . its transfer function. One strategy is illustrated Figure 9.¿¾ ÅÍÄÌÁÎ ÊÁ Ò Ä Ó Ã ÇÆÌÊÇÄ . Remark 1 The optimal gain matrices à and ÃÖ exist. with a nonzero -term in (9. and the -matrix of the LQG-controller in (9. then the Kalman filter equation (9.16) LQG: Combined optimal state estimation and optimal state feedback. Example 9. if desired.15) The optimal choice of à .17) It has the same degree (number of poles) as the plant.4. provided the systems with state-space realizations ´ ½ ´ Ï ¾ µ are stabilizable and detectable. from Ý to Ù (i. show that the closed-loop dynamics are described by Ø Ü Ü Ü   ÃÖ ¼  Ã ÃÖ Ü Ü Ü · Á Á  Ã ¼ Û ÛÒ This shows that the closed-loop poles are simply the union of the poles of the deterministic ÃÖ ) and the poles of the Kalman filter (eigenvalues of LQR system (eigenvalues of à ).4. The structure of the LQG controller is illustrated in Figure 9.5).e. is easily shown to be given by ÃÄÉ ´×µ ×   ÃÖ   à à  ÃÖ ¼  ½ Ì   Ì Î  ½   Ê  Ê ½ Ì Ì Î  ½ ¼ (9.

[Figure 9.5: LQG controller with integral action and reference input - the control error $r - y$ is integrated, the regulator $-K_r$ acts on the integrated error and on the Kalman filter's state estimate]

The standard LQG design procedure does not give a controller with integral action, so we augment the plant $G(s)$ with an integrator before designing the state feedback regulator. Here the control error $r - y$ is integrated and the regulator $K_r$ is designed for the plant augmented with these integrated states. For the objective function $J = \int (x^T Q x + u^T R u)\,dt$ we choose $Q$ such that only the integrated states $\int (y - r)$ are weighted, and we choose the input weight $R = 1$. (Only the ratio between $Q$ and $R$ matters, and reducing $R$ yields a faster response.) The Kalman filter is set up so that we do not estimate the integrated states. For the noise weights we select $W = w_d I$ (process noise directly on the states) with $w_d = 1$, and we choose $V = 1$ (measurement noise). (Only the ratio between $w_d$ and $V$ matters, and reducing $V$ yields a faster response.)

The MATLAB file in Table 9.1 was used to design the LQG controller. The resulting closed-loop response is shown in Figure 9.6. The response is good and compares nicely with the loop-shaping design in Figure 2.17.

[Figure 9.6: LQG design for inverse response process - closed-loop response $y(t)$ and $u(t)$ to a unit step in the reference, 0 to 50 sec]

Exercise 9.3 Derive the equations used in the MATLAB file in Table 9.1.
and a (minimum) phase margin of ¼ Æ in each plant input control channel.2 Robustness properties For an LQG-controlled system with a combined Kalman filter and LQR control law there are no guaranteed stability margins. 0nm=zeros(nm). 0mm=zeros(mm).R).n+1:n+m). This means that in the LQR-controlled system ¨ © Ù  ÃÖ Ü.m] = size(b).V). V = 1*eye(m).c. % 3) Form controller from [r y]’ to u Ac=[0mm 0mn. 1977) ´Á · that.1: MATLAB commands to generate LQG controller in Example 9.Bc. Dc = [0mm 0mm]. He showed.1:n). 1964. Kr=lqr(A.d] = tf2ss(num.1).d) % no.-c 0mm].¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Table 9.B. 0mn=zeros(mn).b. 0nn=zeros(nn). Safonov and Athans. [a.m)].18) From this it can be shown that the system will have a gain margin equal to infinity. B = [b.Kri=Kr(1:m. ½¾ Ñ . Cc = [-Kri -Krp].c. that there exist LQG combinations with arbitrarily small gain margins. R=eye(m). Ke = lqe(a. W = eye(n).W.0nm +Ke]. % original plant % plant is (a. of outputs (y) % no. Krp=Kr(1:m.Dc). Includes % integrators. % Model dimensions: p = size(c. % 1) Design state feedback regulator A = [a 0nm. the sensitivity function Ë ÃÖ ´×Á   µ ½ µ ½ satisfies the Kalman inequality ´Ë ´ µµ ½ Û (9. of states and inputs (u) % % % % % % % % % % augment plant with integrators weight on integrated error input weight optimal state-feedback regulator extract integrator and state feedbacks don’t estimate integrator states process noise model (Gd) process noise weight measurement noise weight Kalman filter gain % integrators included % Final 2-DOF controller. [n. for an LQR-controlled system (i.1 % Uses the Control toolbox num=3*[-2 1].5. a gain reduction margin equal to 0.Q. den=conv([5 1]. a complex perturbation can be introduced at the plant inputs without causing instability providing (i) or ¼ and ¼ ½. Q=[0nn 0nm. by example. Klqg = ss(Ac.-b*Kri a-b*Krp-Ke*c].0mn eye(m. 
This was brought starkly to the attention of the control community by Doyle (1978) (in a paper entitled “Guaranteed Margins for LQG Regulators” with a very compact abstract which simply states “There are none”).Bnoise. Kr and Kalman filter 9.[10 1]). if the weight Ê is chosen to be diagonal.2. However.c. % 2) Design Kalman filter Bnoise = eye(n). assuming all the states are available and no stochastic inputs) it is well known (Kalman.den).Cc. Bc = [eye(m) -eye(m).b.-d].e.

2 LQR design of a first order process. and is illustrated in Example 9. For a single-input plant. which is a general property of LQR designs. Arguments dual to those employed for the LQR-controlled system can then be used to show that.12) Ô ¼ ¸ · Ô ½. In particular. ½ ´×   µ with the state-space realization Example 9.ÇÆÌÊÇÄÄ Ê (ii) ËÁ Æ ¿ ½ and ¼Æ. pole from ×   Ô Ô   ¾·½ ÊÜ  ½   ½ ½ small (“cheap control”) the gain crossover frequency of the loop transfer function Ô ÃÖ ´× µ is given approximately by ½ Ê. ½¾ Ñ where Ñ is the number of plant inputs. Consider a first order process ´×µ ܴص ÂÖ Ü´Øµ · ٴص Ý ´Øµ ܴص so that the state is directly measured. Note that the root on the input) and moves to ¼) and unstable ( ¼) plants ´×µ with the same value locus is identical for stable ( ¼ we see that the minimum input energy needed to stabilize the of . This was first shown by Kalman (1964). gives Ù ÃÖ Ü where from (9. if the power spectral density matrix Î is chosen to . the Nyquist plot of Ä´ µ avoids the unit disc centred on the critical point ½. É ½) ¾ ¾ Ê  Ê ¼   ´ ʵ¾ · Ê. The algebraic Riccati equation (9.2 below. the root locus for the The closed-loop pole is located at × for Ê (infinite weight optimal closed-loop pole with respect to Ê starts at × along the real axis as Ê approaches zero. The surprise is that it is also true for the unstable plant with ¼ even though the phase of Ä´ µ varies from ½ ¼Æ to ¼Æ . for ) is obtained with the input Ù ¾ Ü. Thus. Notice that it is itself a feedback system. that is Ë ´ µ ½ ½ · Ä´ µ ½ at all frequencies. The optimal control is given by ¾ ·½ Ê and we get the closed-loop system Ü ¾ · ½ Ê ¼. since ¼.4. Furthermore. This is obvious for the stable plant with ¼ since ÃÖ ¼ and then the phase of Ä´ µ varies from ¼Æ (at zero frequency) to ¼Æ (at infinite frequency). the above shows that the Nyquist diagram of the open-loop regulator transfer function Ã Ö ´×Á   µ ½ will always lie outside the unit circle with centre  ½. 
For Ä Ê     ÃÖ           Consider now the Kalman filter shown earlier in Figure 9. which moves the plant (corresponding to Ê to its mirror image at × . For a non-zero initial state the cost function to be minimized is ½ ¼ ´Ü¾ · ÊÙ¾ µ Ø .13) becomes ( ·   Ê ½ · ½ Ê· ÃÖ Ê Ü·Ù which. Note also that Ä´ µ has a roll-off of -1 at high frequencies.

19) (9.21) (9. Consequently.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ be diagonal. the Nyquist diagram of the open-loop filter transfer function ´×Á   µ  ½ à will lie outside the unit circle with centre at  ½. a gain reduction margin of 0. then at the input to the Kalman gain matrix à there will be an infinite gain margin.23) .7: LQG-controlled plant labelled points 1 to 4 are respectively Ľ ´×µ ľ ´×µ Ä¿ ´×µ Ä ´×µ where ¢ £ ÃÖ ¨´×µ ½ · ÃÖ · à  ½ à ¨´×µ  ÃÄÉ ´×µ ´×µ   ´×µÃÄÉ ´×µ ÃÖ ¨´×µ (regulator transfer function) ¨´×µÃ (filter transfer function) (9.17) and ¨´×µ ´×Á   µ ½ ´×µ ¨´×µ is the plant model.20) (9.5 and a minimum phase margin of ¼ Æ .7. (9. so why are there no guarantees for LQG control? To answer this. An LQR-controlled system has good stability margins at the plant inputs. consider the LQG controller arrangement shown in Figure 9. for a single-output plant. The loop transfer functions associated with the Û Ù 3¢ ÛÒ ¢ 1 ¹ Plant. ÃÄÉ ´×µ Figure 9. and a Kalman filter has good stability margins at the inputs to à . G(s) ¢ 2 Ý Õ  ÃÖ Õ ¨´×µ + + à ¢ 4 + - ¹ Controller.22) ÃÄÉ ´×µ is as in (9.

Notice also that points 3 and 4 are effectively inside the LQG controller, which has to be implemented, most likely as software, and so we have good stability margins where they are not really needed and no guarantees where they are.

Fortunately, for a minimum phase plant, procedures developed by Kwakernaak (1969) and Doyle and Stein (1979; 1981) show how, by a suitable choice of parameters, either $L_1(s)$ can be made to tend asymptotically to $L_3(s)$, or $L_2(s)$ can be made to approach $L_4(s)$. These procedures are considered next.

Exercise 9.4 Derive the expressions for $L_1(s)$, $L_2(s)$, $L_3(s)$ and $L_4(s)$, and explain why $L_3(s)$ and $L_4(s)$ are surprisingly simple. For $L_3(s)$ the reason is that after opening the loop at point 3 the error dynamics (point 4) of the Kalman filter are not excited by the plant inputs; in fact they are uncontrollable from $u$.

9.2.3 Loop transfer recovery (LTR) procedures

For full details of the recovery procedures, we refer the reader to the original communications (Kwakernaak, 1969; Doyle and Stein, 1979; Doyle and Stein, 1981) or to the tutorial paper by Stein and Athans (1987). For a more recent appraisal of LTR, we recommend a Special Issue of the International Journal of Robust and Nonlinear Control, edited by Niemann and Stoustrup (1995). We will only give an outline of the major steps here, since we will argue later that the procedures are somewhat limited for practical control system design.

The LQG loop transfer function $L_2(s)$ can be made to approach $C\Phi(s)K_f$, with its guaranteed stability margins, if $K_r$ in the LQR problem is designed to be large using the sensitivity recovery procedure of Kwakernaak (1969). It is necessary to assume that the plant model $G(s)$ is minimum phase and that it has at least as many inputs as outputs.

Alternatively, the LQG loop transfer function $L_1(s)$ can be made to approach $K_r\Phi(s)B$ by designing $K_f$ in the Kalman filter to be large using the robustness recovery procedure of Doyle and Stein (1979). Again, it is necessary to assume that the plant model $G(s)$ is minimum phase, but this time it must have at least as many outputs as inputs.

The procedures are dual and therefore we will only consider recovering robustness at the plant output. That is, we aim to make $L_2(s) = G(s)K_{LQG}(s)$ approximately equal to the Kalman filter transfer function $C\Phi(s)K_f$.
As the À ½ theory developed. and he also criticized the representation of uncertain disturbances by white noise processes as often unrealistic. however.4. A more direct and intuitively appealing method for multivariable loop-shaping will be given in Section 9. This is because the recovery procedures work by cancelling the plant zeros. the two approaches of À ¾ and À½ control were seen to be more closely related than originally thought. But as methods for multivariable loop-shaping they are limited in their applicability and sometimes difficult to use. Their main limitation is to minimum phase plants. (1989). Notice that Ï and Î are being used here as design parameters and their associated stochastic processes are considered to be fictitious.3 À¾ and À½ control Motivated by the shortcomings of LQG control there was in the 1980’s a significant shift towards À½ optimization for robust control. Because of the above disadvantages.1. where is a scalar. This development originated from the influential work of Zames (1981). raise. the recovery procedures are not usually taken to their limits ´ ¼µ to achieve full recovery. In this section. although an earlier use of À ½ optimization in an engineering context can be found in Helton (1976). ´×µÃ ÄÉ ´×µ or ÃÄÉ ´×µ ´×µ. where Ë is a scaling matrix which can be used to balance. The cancellation of lightly damped zeros is also of concern because of undesirable oscillations at these modes during transients. Much has been written on the use of LTR procedures in multivariable control system design. 9. we design a Kalman filter whose transfer function ¨´×µÃ is desirable. The result is a somewhat ad-hoc design procedure in which the singular values of a loop transfer function.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ First. or lower the singular values. in an iterative fashion. 
by choosing the power spectral density matrices Ï and Î so that the minimum singular value of ¨´×µÃ is large enough at low frequencies for good performance and its maximum singular value is small enough at high frequencies for robust stability. When the singular values of ¨´×µÃ are thought to be satisfactory. are indirectly shaped. as discussed in section 9. we will begin with a general . Zames argued that the poor robustness properties of LQG could be attributed to the integral criterion in terms of the À¾ norm. and a cancelled non-minimum phase zero would lead to instability. As tends to zero ´×µÃÄÉ tends to the desired loop transfer function ¨´×µÃ . A further disadvantage is that the ¼µ which brings about full recovery also introduces high limiting process ´ gains which may cause problems with unmodelled dynamics. but rather a set of designs is obtained (for small ) and an acceptable design is selected. loop transfer recovery is achieved by designing Ã Ö in Ì and Ê an LQR problem with É Á . This is done. particularly in the solution process. In tuning Ï and Î we should be careful to choose Î as diagonal and Ï ´ Ë µ´ Ë µ Ì . see for example Glover and Doyle (1988) and Doyle et al.

Such a general formulation is afforded by the general configuration shown in Figure 9.1 General control problem formulation There are many ways in which feedback design problems can be cast as À ¾ and À½ optimization problems.102) the closed-loop transfer function from Û to Þ is given by the linear fractional transformation Þ (9. or modified to suit his or her application. commercial software for solving such problems is now so easily available.24) (9.26) ½ ¾ The signals are: Ù the control variables. The general À ¾ and À½ problems will be described along with some specific and typical control problems.27) Ð ´È à µÛ È s ½ ½½ ¾½ ¾ ½¾ ¾¾ ¿ .3.25) with a state-space realization of the generalized plant È given by (9. It is not our intention to describe in detail the mathematical solutions. since efficient. and Þ the so-called “error” signals which are to be minimized in some sense to meet the control objectives. Rather we seek to provide an understanding of some useful problem formulations which might then be used by the reader. Ú the measured variables. It is very useful therefore to have a standard problem formulation into which any particular problem may be manipulated.8 is described by Û Ù ¹ ¹ È Ã Þ ¹ Ú Figure 9.ÇÆÌÊÇÄÄ Ê ËÁ Æ ¿ control problem formulation into which we can cast all À ¾ and À½ optimizations of practical interest.8 and discussed earlier in Chapter 3. The system of Figure 9. 9. As shown in (3. Û the exogenous signals such as disturbances Û and commands Ö.8: General control configuration Þ Ú Ù È ´×µ Û Ù Ã ´×µÚ ¾ Ƚ½ ´×µ Ƚ¾ ´×µ Ⱦ½ ´×µ Ⱦ¾ ´×µ Û Ù (9.

where

$F_l(P, K) = P_{11} + P_{12}K(I - P_{22}K)^{-1}P_{21}$ (9.28)

$H_2$ and $H_\infty$ control involve the minimization of the $H_2$ and $H_\infty$ norms of $F_l(P, K)$ respectively. We will consider each of them in turn. Recall that $H_2$ is the set of strictly proper stable transfer functions.

First some remarks about the algorithms used to solve such problems. The most general, widely available and widely used algorithms for $H_2$ and $H_\infty$ control problems are based on the state-space solutions in Glover and Doyle (1988) and Doyle et al. (1989). An algorithm for $H_\infty$ control problems is summarized in Section 9.3.4. It is worth mentioning again that the similarities between $H_2$ and $H_\infty$ theory are most clearly evident in the aforementioned algorithms. For example, both $H_2$ and $H_\infty$ require the solutions to two Riccati equations, they both give controllers of state-dimension equal to that of the generalized plant $P$, and they both exhibit a separation structure in the controller already seen in LQG control.

The following assumptions are typically made in $H_2$ and $H_\infty$ problems:

(A1) $(A, B_2)$ is stabilizable and $(A, C_2)$ is detectable.
(A2) $D_{12}$ and $D_{21}$ have full rank.
(A3) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all $\omega$.
(A4) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all $\omega$.
(A5) $D_{11} = 0$ and $D_{22} = 0$.

Assumption (A1) is required for the existence of stabilizing controllers $K$, and assumption (A2) is sufficient to ensure the controllers are proper and hence realizable. Assumptions (A3) and (A4) ensure that the optimal controller does not try to cancel poles or zeros on the imaginary axis, which would result in closed-loop instability. Assumption (A5) is conventional in $H_2$ control: $D_{11} = 0$ makes $P_{11}$ strictly proper, and $D_{22} = 0$ makes $P_{22}$ strictly proper and simplifies the formulas in the $H_2$ algorithms. In $H_\infty$, neither $D_{11} = 0$ nor $D_{22} = 0$ is required, but they do significantly simplify the algorithm formulas. If they are not zero, an equivalent $H_\infty$ problem can be constructed in which they are; see (Safonov et al., 1989) and (Green and Limebeer, 1995). This can be achieved, without loss of generality, by a scaling of $u$ and $v$ and a unitary transformation of $w$ and $z$. For simplicity of exposition, it is also sometimes assumed that $D_{12}$ and $D_{21}$ are given by

(A6) $D_{12} = \begin{bmatrix} 0 \\ I \end{bmatrix}$ and $D_{21} = \begin{bmatrix} 0 & I \end{bmatrix}$.

Lastly, the following additional assumptions are sometimes made:

(A7) D12ᵀ C1 = 0 and B1 D21ᵀ = 0.
(A8) (A, B1) is stabilizable and (A, C1) is detectable.

Assumption (A7) is common in H2 control, e.g. in LQG where there are no cross terms in the cost function (D12ᵀ C1 = 0), and the process noise and measurement noise are uncorrelated (B1 D21ᵀ = 0). Notice that if (A7) holds, then (A3) and (A4) may be replaced by (A8).

Whilst the above assumptions may appear daunting, most sensibly posed control problems will meet them. Therefore, if the software (e.g. the μ-tools or the Robust Control toolbox of MATLAB) complains, then it probably means that your control problem is not well formulated and you should think again.

It should also be said that H∞ algorithms, in general, find a sub-optimal controller. That is, for a specified γ a stabilizing controller is found for which ||Fl(P, K)||∞ < γ. If an optimal controller is required, then the algorithm can be used iteratively, reducing γ until the minimum is reached within a given tolerance. In general, to find an optimal H∞ controller is numerically and theoretically complicated. This contrasts significantly with H2 theory, in which the optimal controller is unique and can be found from the solution of just two Riccati equations.

9.3.2 H2 optimal control

The standard H2 optimal control problem is to find a stabilizing controller K which minimizes

    || F ||2 = sqrt( (1/2π) ∫_{−∞}^{∞} tr[ F(jω) F(jω)^H ] dω ),    F = Fl(P, K)        (9.29)

For a particular problem, the generalized plant P will include the plant model, the interconnection structure, and the designer-specified weighting functions.

As discussed in Section 4.10.1, and noted in Tables A.1 and A.2 on page 538, the H2 norm can be given different deterministic interpretations. It also has the following stochastic interpretation. Suppose in the general control configuration that the exogenous input w is white noise of unit intensity. That is,

    E{ w(t) w(τ)ᵀ } = I δ(t − τ)        (9.30)

The expected power in the error signal z is then given by

    E{ lim_{T→∞} (1/2T) ∫_{−T}^{T} z(t)ᵀ z(t) dt }
      = tr E{ z(t) z(t)ᵀ }
      = (1/2π) ∫_{−∞}^{∞} tr[ F(jω) F(jω)^H ] dω    (by Parseval's Theorem)
      = || Fl(P, K) ||2²        (9.31)-(9.32)

Thus, by minimizing the H2 norm, the output (or error) power of the generalized system, due to a unit intensity white noise input, is minimized; we are minimizing the root-mean-square (rms) value of z.
This is illustrated for the LQG problem in the next subsection.
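The H2 norm in (9.29) can also be computed from a state-space realization: for a stable, strictly proper G(s) = C(sI − A)⁻¹B, we have ||G||2² = tr(C P Cᵀ), where P is the controllability Gramian solving AP + PAᵀ + BBᵀ = 0. A brief Python sketch (our own illustration, not from the book), using SciPy's Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of the stable, strictly proper G(s) = C (sI - A)^{-1} B.

    Uses ||G||_2^2 = trace(C P C^T), where the controllability Gramian P
    solves the Lyapunov equation A P + P A^T + B B^T = 0.
    """
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

For G(s) = 1/(s + 1) this gives ||G||2 = 1/√2, which agrees with evaluating the frequency-domain integral in (9.29) directly.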

9.3.3 LQG: a special H2 optimal controller

An important special case of H2 optimal control is the LQG problem described in Section 9.2. For the stochastic system

    ẋ = A x + B u + wd
    y = C x + wn        (9.33)

where

    E{ [ wd(t) ; wn(t) ] [ wd(τ)ᵀ  wn(τ)ᵀ ] } = [ W  0 ; 0  V ] δ(t − τ)        (9.34)

the LQG problem is to find u = K(s) y such that

    J = E{ lim_{T→∞} (1/T) ∫_0^T [ xᵀ Q x + uᵀ R u ] dt }        (9.35)

is minimized with Q = Qᵀ ≥ 0 and R = Rᵀ > 0. This problem can be cast as an H2 optimization in the general framework in the following manner. Define an error signal z as

    z = [ Q^(1/2)  0 ; 0  R^(1/2) ] [ x ; u ]        (9.36)

and represent the stochastic inputs wd, wn as

    [ wd ; wn ] = [ W^(1/2)  0 ; 0  V^(1/2) ] w        (9.37)

where w is a white noise process of unit intensity. Then the LQG cost function is

    J = E{ lim_{T→∞} (1/T) ∫_0^T z(t)ᵀ z(t) dt } = || Fl(P, K) ||2²        (9.38)

where

    z(s) = Fl(P, K) w(s)        (9.39)

and the generalized plant P is given by

    P = [ P11  P12 ; P21  P22 ]  =s  [ A        W^(1/2)  0        B
                                       Q^(1/2)  0        0        0
                                       0        0        0        R^(1/2)
                                       C        0        V^(1/2)  0 ]        (9.40)-(9.41)

The above formulation of the LQG problem is illustrated in the general setting in Figure 9.9. With the standard assumptions for the LQG problem, application of the general H2 formulas (Doyle et al., 1989) to this formulation gives the familiar LQG optimal controller as in (9.17).

[Figure 9.9: The LQG problem formulated in the general control configuration]

9.3.4 H∞ optimal control

With reference to the general control configuration of Figure 9.8, the standard H∞ optimal control problem is to find all stabilizing controllers K which minimize

    || Fl(P, K) ||∞ = max_ω σ̄( Fl(P, K)(jω) )        (9.42)

As discussed in Section 4.10.2, the H∞ norm has several interpretations in terms of performance. One is that it minimizes the peak of the maximum singular value of Fl(P(jω), K(jω)). It also has a time domain interpretation as the induced (worst-case) 2-norm. Let z = Fl(P, K) w. Then

    || Fl(P, K) ||∞ = max_{w(t) ≠ 0}  || z(t) ||2 / || w(t) ||2        (9.43)

where || z(t) ||2 = sqrt( ∫_0^∞ Σ_i |z_i(t)|² dt ) is the 2-norm of the vector signal.

In practice, it is usually not necessary to obtain an optimal controller for the H∞ problem, and it is often computationally (and theoretically) simpler to design a sub-optimal one (i.e. one close to the optimal in the sense of the H∞ norm). Let γmin be the minimum value of ||Fl(P, K)||∞ over all stabilizing controllers K. Then the H∞ sub-optimal control problem is: given a γ > γmin, find all stabilizing controllers K such that ||Fl(P, K)||∞ < γ. This can be solved efficiently using the algorithm of Doyle et al. (1989), and by reducing γ iteratively, an optimal solution is approached. The algorithm is summarized below with all the simplifying assumptions.

General H∞ algorithm. For the general control configuration of Figure 9.8 described by equations (9.24)-(9.26), with assumptions (A1) to (A8) in Section 9.3.1, there exists a stabilizing controller K(s) such that ||Fl(P, K)||∞ < γ if and only if

(i) X∞ ≥ 0 is a solution to the algebraic Riccati equation

    Aᵀ X∞ + X∞ A + C1ᵀ C1 + X∞ ( γ⁻² B1 B1ᵀ − B2 B2ᵀ ) X∞ = 0        (9.44)

such that Re λi[ A + ( γ⁻² B1 B1ᵀ − B2 B2ᵀ ) X∞ ] < 0, for all i; and

(ii) Y∞ ≥ 0 is a solution to the algebraic Riccati equation

    A Y∞ + Y∞ Aᵀ + B1 B1ᵀ + Y∞ ( γ⁻² C1ᵀ C1 − C2ᵀ C2 ) Y∞ = 0        (9.45)

such that Re λi[ A + Y∞ ( γ⁻² C1ᵀ C1 − C2ᵀ C2 ) ] < 0, for all i; and

(iii) ρ( X∞ Y∞ ) < γ².

All such controllers are then given by K = Fl(Kc, Q), where

    Kc(s)  =s  [ A∞     −Z∞ L∞    Z∞ B2
                 F∞      0         I
                −C2      I         0    ]        (9.46)

    F∞ = −B2ᵀ X∞,    L∞ = −Y∞ C2ᵀ,    Z∞ = ( I − γ⁻² Y∞ X∞ )⁻¹        (9.47)

    A∞ = A + γ⁻² B1 B1ᵀ X∞ + B2 F∞ + Z∞ L∞ C2        (9.48)

and Q(s) is any stable proper transfer function such that ||Q||∞ < γ. For Q(s) = 0, we get

    K(s) = Kc11(s) = −F∞ ( sI − A∞ )⁻¹ Z∞ L∞        (9.49)

This is called the "central" controller and has the same number of states as the generalized plant P(s). The central controller can be separated into a state estimator (observer) of the form

    x̂˙ = A x̂ + B1 ŵworst + B2 u + Z∞ L∞ ( C2 x̂ − y ),    ŵworst = γ⁻² B1ᵀ X∞ x̂        (9.50)

and a state feedback

    u = F∞ x̂        (9.51)

Upon comparing the observer in (9.50) with the Kalman filter in (9.14), we see that it contains an additional term B1 ŵworst, where ŵworst can be interpreted as an estimate of the worst-case disturbance (exogenous input). Note that for the special case of H∞ loop shaping this extra term is not present. This is discussed in Section 9.4.

γ-iteration. If we desire a controller that achieves γmin, to within a specified tolerance, then we can perform a bisection on γ until its value is sufficiently accurate. The above result provides a test for each value of γ to determine whether it is less than γmin or greater than γmin.

Given all the assumptions (A1) to (A8), the above is the most simple form of the general H∞ algorithm. For the more general situation, where some of the assumptions are relaxed, the reader is referred to the original source (Glover and Doyle, 1988). In practice, we would expect a user to have access to commercial software such as MATLAB and its toolboxes.

For practical designs it is sometimes recommended to perform only a few iterations of the H∞ algorithm. The justification for this is that the initial design, after one iteration, is similar to an H2 design, which does trade off over various frequency ranges. Therefore, stopping the iterations before the optimal value is achieved gives the design an H2 flavour, which may be desirable.

Earlier in this chapter, we distinguished between two methodologies for H∞ controller design: the transfer function shaping approach and the signal-based approach. In the former, H∞ optimization is used to shape the singular values of specified transfer functions over frequency. The maximum singular values are relatively easy to shape by forcing them to lie below user-defined bounds, thereby ensuring desirable bandwidths and roll-off rates. In the signal-based approach, we seek to minimize the energy in certain error signals given a set of exogenous input signals. The latter might include the outputs of perturbations representing uncertainty, as well as the usual disturbances, noise, and command signals. Both of these approaches will be considered again in the remainder of this section. In each case we will examine a particular problem and formulate it in the general control configuration.
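Before turning to these problem formulations, the γ-iteration described above can be sketched numerically. The Python fragment below is our own simplified illustration (all function names are ours): it assumes the normalized assumptions (A1)-(A8) (in particular D11 = 0 and the normalized D12, D21), tests conditions (i)-(iii) by solving the two indefinite Riccati equations via the stable invariant subspace of their Hamiltonians, and bisects on γ.

```python
import numpy as np
from scipy.linalg import schur

def care_stab(A, R, Q):
    """Stabilizing solution X of A^T X + X A + X R X + Q = 0, from the
    stable invariant subspace of the Hamiltonian H = [[A, R], [-Q, -A^T]].
    Returns None when no stabilizing solution exists (e.g. imaginary-axis
    Hamiltonian eigenvalues)."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    if np.min(np.abs(np.linalg.eigvals(H).real)) < 1e-8:
        return None                      # eigenvalues on the jw-axis: infeasible
    T, U, sdim = schur(H, output='real', sort='lhp')
    if sdim != n:
        return None
    U11, U21 = U[:n, :n], U[n:, :n]
    if abs(np.linalg.det(U11)) < 1e-12:
        return None
    X = U21 @ np.linalg.inv(U11)
    return (X + X.T) / 2                 # symmetrize against round-off

def hinf_feasible(gamma, A, B1, B2, C1, C2):
    """Conditions (i)-(iii) of the general H-infinity algorithm."""
    g2 = gamma ** -2
    X = care_stab(A, g2 * B1 @ B1.T - B2 @ B2.T, C1.T @ C1)    # (i)
    Y = care_stab(A.T, g2 * C1.T @ C1 - C2.T @ C2, B1 @ B1.T)  # (ii)
    if X is None or Y is None:
        return False
    if min(np.linalg.eigvalsh(X).min(), np.linalg.eigvalsh(Y).min()) < -1e-8:
        return False                     # X, Y must be positive semi-definite
    rho = np.max(np.abs(np.linalg.eigvals(X @ Y)))
    return rho < gamma ** 2              # (iii) coupling condition

def gamma_bisect(A, B1, B2, C1, C2, lo=1e-3, hi=1e3, tol=1e-4):
    """Bisection on gamma; 'hi' must be feasible. Returns an achievable
    gamma within 'tol' of gamma_min."""
    assert hinf_feasible(hi, A, B1, B2, C1, C2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if hinf_feasible(mid, A, B1, B2, C1, C2):
            hi = mid
        else:
            lo = mid
    return hi
```

This is only a sketch: production software (e.g. hinfsyn) adds scalings, balancing and careful tolerance handling on top of the same test.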
A difficulty that sometimes arises with H∞ control is the selection of weights such that the H∞ optimal controller provides a good trade-off between conflicting objectives in various frequency ranges.

9.3.5 Mixed-sensitivity H∞ control

Mixed-sensitivity is the name given to transfer function shaping problems in which the sensitivity function S = (I + GK)⁻¹ is shaped along with one or more other closed-loop transfer functions, such as KS or the complementary sensitivity function T = I − S. Earlier in this chapter, by examining a typical one degree-of-freedom configuration, we saw quite clearly the importance of S, KS and T.

Suppose, therefore, that we have a regulation problem in which we want to reject a disturbance d entering at the plant output, and it is assumed that the measurement noise is relatively insignificant. Tracking is not an issue, and therefore for this problem it makes sense to shape the closed-loop transfer functions S and KS in a one degree-of-freedom setting. Recall that S is the transfer function between d and the output, and KS the transfer function between d and the control signals. It is important to include KS as a mechanism for limiting the size and bandwidth of the controller, and hence the control energy used. The size of KS is also important for robust stability with respect to uncertainty modelled as additive plant perturbations.

The disturbance d is typically a low frequency signal, and therefore it will be successfully rejected if the maximum singular value of S is made small over the same low frequencies. To do this, we could select a scalar low pass filter w1(s) with a bandwidth equal to that of the disturbance, and then find a stabilizing controller that minimizes ||w1 S||∞. This cost function alone is not very practical: it focuses on just one closed-loop transfer function, and for plants without right-half plane zeros the optimal controller has infinite gains. In the presence of a non-minimum phase zero, the stability requirement will indirectly limit the controller gains, but it is far more useful in practice to minimize

    || [ w1 S ; w2 KS ] ||∞        (9.52)

where w2(s) is a scalar high pass filter with a crossover frequency approximately equal to that of the desired closed-loop bandwidth. In general, the scalar weighting functions w1(s) and w2(s) can be replaced by matrices W1(s) and W2(s). This can be useful for systems with channels of quite different bandwidths, when diagonal weights are recommended, but anything more complicated is usually not worth the effort.

Remark. Note that we have here outlined an alternative way of selecting the weights from that in Example 2.11 and Section 3.4. There, WP was selected with a crossover frequency equal to that of the desired closed-loop bandwidth, and Wu was selected as a constant, usually Wu = I.

To see how this mixed-sensitivity problem can be formulated in the general setting, we can imagine the disturbance d as a single exogenous input and define an error signal z = [ z1ᵀ  z2ᵀ ]ᵀ, where z1 = W1 y and z2 = −W2 u, as illustrated in Figure 9.10. It is not difficult from Figure 9.10 to show that z1 = W1 S w and z2 = W2 KS w as required, and to determine the elements of the generalized plant P as

    P11 = [ W1 ; 0 ]     P12 = [ W1 G ; −W2 ]
    P21 = −I             P22 = −G        (9.53)
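As a quick numerical sanity check of this generalized plant (our own illustration, with made-up scalar values), evaluating the lower LFT of the partition above at a single frequency should return exactly W1 S and W2 KS:

```python
import numpy as np

# Single-frequency, scalar illustration (values are ours): plant G,
# controller K and weights W1, W2 evaluated at one frequency.
G, K, W1, W2 = 2.0, 1.0, 1.0, 1.0

# S/KS regulation problem data:  y = G*u + w,  z1 = W1*y,  z2 = -W2*u,  v = -y
P11 = np.array([W1, 0.0])        # from w to [z1, z2]
P12 = np.array([W1 * G, -W2])    # from u to [z1, z2]
P21, P22 = -1.0, -G              # from w and u to v

# Lower LFT: N = P11 + P12 * K * (1 - P22*K)^{-1} * P21
N = P11 + P12 * K * (1.0 / (1.0 - P22 * K)) * P21

S = 1.0 / (1.0 + G * K)          # sensitivity at this frequency
```

Here S = 1/3, and both entries of N come out as 1/3, matching [W1 S, W2 K S] as claimed.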

[Figure 9.10: S/KS mixed-sensitivity optimization in standard form (regulation)]

where the partitioning is such that

    [ z1 ; z2 ; v ] = [ P11  P12 ; P21  P22 ] [ w ; u ]        (9.54)

and

    Fl(P, K) = [ W1 S ; W2 KS ]        (9.55)

Another interpretation can be put on the S/KS mixed-sensitivity optimization, as shown in the standard control configuration of Figure 9.11. Here we consider a tracking problem. The exogenous input is a reference command r, and the error signals are z1 = −W1 e = W1 (r − y) and z2 = W2 u. As in the regulation problem of Figure 9.10, we have in this tracking problem z1 = W1 S w and z2 = W2 KS w. An example of the use of S/KS mixed-sensitivity minimization is given in Chapter 12, where it is used to design a rotorcraft control law. In this helicopter problem, you will see that the exogenous input w is passed through a weight W3 before it impinges on the system. W3 is chosen to weight the input signal and not directly to shape S or KS. This signal-based approach to weight selection is the topic of a later sub-section.

Another useful mixed-sensitivity optimization problem, again in a one degree-of-freedom setting, is to find a stabilizing controller which minimizes

    || [ W1 S ; W2 T ] ||∞        (9.56)

[Figure 9.11: S/KS mixed-sensitivity minimization in standard form (tracking)]

The ability to shape T is desirable for tracking problems and noise attenuation. It is also important for robust stability with respect to multiplicative perturbations at the plant output. The S/T mixed-sensitivity minimization problem can be put into the standard control configuration as shown in Figure 9.12. The elements of the corresponding generalized plant P are

    P11 = [ W1 ; 0 ]     P12 = [ −W1 G ; W2 G ]
    P21 = I              P22 = −G        (9.57)

[Figure 9.12: S/T mixed-sensitivity optimization in standard form]

Exercise 9.5 For the cost function

    || [ W1 S ; W2 T ; W3 KS ] ||∞        (9.58)

formulate a standard problem, draw the corresponding control configuration and give expressions for the generalized plant P.

The shaping of closed-loop transfer functions as described above, with the "stacked" cost functions, becomes difficult with more than two functions. With two, the process is relatively easy. The bandwidth requirements on each are usually complementary, and simple, stable, low-pass and high-pass filters are sufficient to carry out the required shaping and trade-offs. We stress that the weights Wi in mixed-sensitivity H∞ optimal control must all be stable. If they are not, assumption (A1) in Section 9.3.1 is not satisfied, and the general H∞ algorithm is not applicable. Therefore, if we wish, for example, to emphasize the minimization of S at low frequencies by weighting with a term including integral action, we would have to approximate 1/s by 1/(s + ε), where ε ≪ 1. This is exactly what was done in Example 2.11. Similarly, one might be interested in weighting KS with a non-proper weight to ensure that K is small outside the system bandwidth. But the standard assumptions preclude such a weight. The trick here is to replace a non-proper term such as (1 + τ1 s) by (1 + τ1 s)/(1 + τ2 s), where τ2 ≪ τ1. A useful discussion of the tricks involved in using "unstable" and "non-proper" weights in H∞ control can be found in Meinsma (1995).

For more complex problems, information might be given about several exogenous signals, in addition to a variety of signals to be minimized and classes of plant perturbations to be robust against. In such cases, the mixed-sensitivity approach is not general enough, and we are forced to look at more advanced techniques such as the signal-based approach considered next.
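The effect of these weight approximations is easy to check numerically. The Python fragment below (our own illustration; ε and the time constants are arbitrary example values) compares the ideal weights 1/s and 1 + τ1 s with their stable, proper replacements over a frequency range of interest:

```python
import numpy as np

eps, tau1, tau2 = 1e-4, 1.0, 1e-3   # example values (ours)
w = np.logspace(-2, 1, 200)         # frequency range of interest [rad/s]
s = 1j * w

integrator = np.abs(1.0 / s)                       # ideal integral action
approx_int = np.abs(1.0 / (s + eps))               # stable replacement

nonproper = np.abs(1.0 + tau1 * s)                 # ideal non-proper term
approx_np = np.abs((1.0 + tau1 * s) / (1.0 + tau2 * s))  # proper replacement

# Worst-case relative magnitude errors over the range
rel_err_int = np.max(np.abs(approx_int - integrator) / integrator)
rel_err_np = np.max(np.abs(approx_np - nonproper) / nonproper)
```

With ε and 1/τ2 placed well outside the frequency range of interest, the relative error stays within a couple of per cent, so the substitution has little effect on the resulting design.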
9.3.6 Signal-based H∞ control

The signal-based approach to controller design is very general, and is appropriate for multivariable problems in which several objectives must be taken into account simultaneously. In this approach, we define the plant and possibly the model uncertainty, we define the class of external signals affecting the system, and we define the norm of the error signals we want to keep small. The focus of attention has moved to the size of signals, and away from the size and bandwidth of selected closed-loop transfer functions.

Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals. Weights are also used if a perturbation is used to model uncertainty, as in Figure 9.13.

In Figure 9.13, G denotes the nominal model, W is a weighting function that captures the relative model fidelity over frequency, and Δ represents unmodelled dynamics, usually normalized via W so that ||Δ||∞ < 1.

[Figure 9.13: Multiplicative dynamic uncertainty model]

LQG control is a simple example of the signal-based approach, in which the exogenous signals are assumed to be stochastic (or alternatively impulses in a deterministic setting) and the error signals are measured in terms of the 2-norm. As we have already seen, the weights Q and R are constant, but LQG can be generalized to include frequency dependent weights on the signals, leading to what is sometimes called Wiener-Hopf design, or simply H2 control.

When we consider a system's response to persistent sinusoidal signals of varying frequency, or when we consider the induced 2-norm between the exogenous input signals and the error signals, we are required to minimize the H∞ norm. In the absence of model uncertainty, there does not appear to be an overwhelming case for using the H∞ norm rather than the more traditional H2 norm. However, when uncertainty is addressed, as it always should be, H∞ is clearly the more natural approach using component uncertainty models as in Figure 9.13.

[Figure 9.14: A signal-based H∞ control problem]

A typical problem using the signal-based approach to H∞ control is illustrated in the interconnection diagram of Figure 9.14.

In Figure 9.14, G and Gd are nominal models of the plant and disturbance dynamics, and K is the controller to be designed. The weights Wd, Wr and Wn may be constant or dynamic, and describe the relative importance and/or frequency content of the disturbances, set points and noise signals, respectively. The weight Wref is a desired closed-loop transfer function between the weighted set point rs and the actual output y. The weights We and Wu reflect the desired frequency content of the error (y − yref) and the control signals u, respectively. The problem can be cast as a standard H∞ optimization in the general control configuration by defining

    w = [ d ; r ; n ],    z = [ z1 ; z2 ],    v = [ rs ; ym ],    u = u        (9.59)

in the general setting of Figure 9.8.

Suppose we now introduce a multiplicative dynamic uncertainty model at the input to the plant, as shown in Figure 9.15. The problem we now want to solve is: find a stabilizing controller K such that the H∞ norm of the transfer function between w and z is less than 1 for all Δ, where ||Δ||∞ < 1. We have assumed in this statement that the signal weights have normalized the 2-norm of the exogenous input signals to unity. This problem is a non-standard H∞ optimization. It is a robust performance problem for which the μ-synthesis procedure, outlined in Chapter 8, can be applied. Mathematically, we require the structured singular value

    μ( M(jω) ) < 1,   ∀ω        (9.60)

[Figure 9.15: An H∞ robust performance problem]

where M is the transfer function matrix between

    [ dᵀ  rᵀ  nᵀ  uΔᵀ ]ᵀ    and    [ z1ᵀ  z2ᵀ  yΔᵀ ]ᵀ        (9.61)

and the associated block diagonal perturbation has 2 blocks: a fictitious performance block, between [ dᵀ rᵀ nᵀ ]ᵀ and [ z1ᵀ z2ᵀ ]ᵀ, and an uncertainty block Δ, between uΔ and yΔ.

Whilst the structured singular value is a useful analysis tool for assessing designs, μ-synthesis is sometimes difficult to use, and often too complex for the practical problem at hand. In its full generality, the μ-synthesis problem is not yet solved mathematically; where solutions exist, the controllers tend to be of very high order.
Moreover, the algorithms may not always converge, and design problems are sometimes difficult to formulate directly. For many industrial control problems, a design procedure is required which offers more flexibility than mixed-sensitivity H∞ control, but is not as complicated as μ-synthesis. In the next section, we present such a controller design procedure.

9.4 H∞ loop-shaping design

The loop-shaping design procedure described in this section is based on H∞ robust stabilization combined with classical loop shaping, as proposed by McFarlane and Glover (1990). It is essentially a two stage design process. First, the open-loop plant is augmented by pre- and post-compensators to give a desired shape to the singular values of the open-loop frequency response. Then the resulting shaped plant is robustly stabilized with respect to coprime factor uncertainty using H∞ optimization. An important advantage is that no problem-dependent uncertainty modelling, or weight selection, is required in this second step.

We will begin the section with a description of the H∞ robust stabilization problem (Glover and McFarlane, 1989). This is a particularly nice problem because it does not require γ-iteration for its solution, and explicit formulas for the corresponding controllers are available. The formulas are relatively simple and so will be presented in full. Following this, a step by step procedure for H∞ loop-shaping design is presented. This systematic procedure has its origin in the Ph.D. thesis of Hyde (1991), and has since been successfully applied to several industrial problems. The procedure synthesizes what is in effect a single degree-of-freedom controller. This can be a limitation if there are stringent requirements on command following.

However, the procedure can be extended by introducing a second degree-of-freedom in the controller and formulating a standard H∞ optimization problem which allows one to trade off robust stabilization against closed-loop model-matching, as shown by Limebeer et al. (1993). We will describe this two degrees-of-freedom extension, and further show that such controllers have a special observer-based structure which can be taken advantage of in controller implementation.

9.4.1 Robust stabilization

For multivariable systems, classical gain and phase margins are unreliable indicators of robust stability when defined for each channel (or loop), taken one at a time, because simultaneous perturbations in more than one loop are not then catered for. More general perturbations, as discussed in Chapter 8, are required to capture the uncertainty, but even these are limited. Use of a single stable perturbation restricts the plant and perturbed plant models to either have the same number of unstable poles or the same number of unstable (RHP) zeros, as shown in Section 8.6. To overcome this, two stable perturbations can be used, one on each of the factors in a coprime factorization of the plant, as shown in Section 8.6. Although this uncertainty description seems unrealistic and less intuitive than the others, it is in fact quite general, and for our purposes it leads to a very useful H∞ robust stabilization problem. Before presenting the problem, we will first recall the uncertainty model given in (8.62).

We will consider the stabilization of a plant G which has a normalized left coprime factorization (as discussed in Section 4.1.5)

    G = M⁻¹ N        (9.62)

where we have dropped the subscript from M and N for simplicity. A perturbed plant model Gp can then be written as

    Gp = ( M + ΔM )⁻¹ ( N + ΔN )        (9.63)
where ΔM, ΔN are stable unknown transfer functions which represent the uncertainty in the nominal plant model G. It is now common practice, as seen in Chapter 8, to model uncertainty by stable norm-bounded dynamic (complex) matrix perturbations. With a single perturbation, the associated robustness tests are in terms of the maximum singular values of various closed-loop transfer functions. The objective of robust stabilization is to stabilize not only the nominal model G, but a family of perturbed plants defined by

    Gp = { ( M + ΔM )⁻¹ ( N + ΔN )  :  || [ ΔN  ΔM ] ||∞ < ε }        (9.64)

where ε > 0 is then the stability margin. To maximize this stability margin is the problem of robust stabilization of normalized coprime factor plant descriptions, as introduced and solved by Glover and McFarlane (1989).

[Figure 9.16: H∞ robust stabilization problem]

For the perturbed feedback system of Figure 9.16, as already derived in (8.64), the stability property is robust if and only if the nominal feedback system is stable and

    γ  ≜  || [ K ; I ] ( I − GK )⁻¹ M⁻¹ ||∞  ≤  1/ε        (9.65)

Notice that γ is the H∞ norm from φ to [ u ; y ], and that ( I − GK )⁻¹ is the sensitivity function for this positive feedback arrangement.

The lowest achievable value of γ, and the corresponding maximum stability margin ε, are given by Glover and McFarlane (1989) as

    γmin = εmax⁻¹ = { 1 − || [ N  M ] ||H² }^(−1/2) = ( 1 + ρ( XZ ) )^(1/2)        (9.66)

where ||·||H denotes the Hankel norm, ρ denotes the spectral radius (maximum eigenvalue), and, for a minimal state-space realization (A, B, C, D) of G, Z is the unique positive definite solution to the algebraic Riccati equation

    ( A − B S⁻¹ Dᵀ C ) Z + Z ( A − B S⁻¹ Dᵀ C )ᵀ − Z Cᵀ R⁻¹ C Z + B S⁻¹ Bᵀ = 0        (9.67)

where

    R = I + D Dᵀ,    S = I + Dᵀ D

and X is the unique positive definite solution of the following algebraic Riccati equation

    ( A − B S⁻¹ Dᵀ C )ᵀ X + X ( A − B S⁻¹ Dᵀ C ) − X B S⁻¹ Bᵀ X + Cᵀ R⁻¹ C = 0        (9.68)
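As a cross-check of (9.66)-(9.68), the following Python sketch (our own illustration; it assumes a strictly proper plant, D = 0, so that S = R = I) computes γmin with SciPy's Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def ncf_gamma_min(A, B, C):
    """gamma_min = sqrt(1 + rho(X Z)) for the normalized coprime factor
    robust stabilization problem, for a strictly proper plant (D = 0)."""
    m, p = B.shape[1], C.shape[0]
    # (9.68) with D = 0:  A'X + X A - X B B' X + C'C = 0
    X = solve_continuous_are(A, B, C.T @ C, np.eye(m))
    # (9.67) with D = 0:  A Z + Z A' - Z C'C Z + B B' = 0  (dual equation)
    Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(p))
    return float(np.sqrt(1.0 + np.max(np.abs(np.linalg.eigvals(X @ Z)))))
```

For the scalar plant G(s) = 1/(s + 1), both Riccati solutions equal √2 − 1, giving γmin = √(4 − 2√2) ≈ 1.08.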

Notice that the formulas simplify considerably for a strictly proper plant, i.e. when D = 0.

A controller (the "central" controller in McFarlane and Glover (1990)) which guarantees that

    || [ K ; I ] ( I − GK )⁻¹ M⁻¹ ||∞  ≤  γ        (9.69)

for a specified γ > γmin, is given by

    K  =s  [ A + B F + γ² ( Lᵀ )⁻¹ Z Cᵀ ( C + D F )     γ² ( Lᵀ )⁻¹ Z Cᵀ
             Bᵀ X                                       −Dᵀ             ]        (9.70)

    F = −S⁻¹ ( Dᵀ C + Bᵀ X )        (9.71)

    L = ( 1 − γ² ) I + X Z        (9.72)

Remark 1 An example of the use of coprimeunc is given in Example 9.3 below.

Remark 2 Notice that, if γ = γmin in (9.70), then L = ( 1 − γmin² ) I + X Z, which is singular, and thus (9.70) cannot be implemented. If for some unusual reason the truly optimal controller is required, then this problem can be resolved using a descriptor system approach, the details of which can be found in Safonov et al. (1989).

Remark 3 Alternatively, from Glover and McFarlane (1989), all controllers achieving γ = γmin are given by K = U V⁻¹, where (U, V) is a right coprime factorization of K, with U and V stable, and where U, V minimize

    || [ −N* ; M* ] + [ U ; V ] ||∞        (9.73)

The determination of U and V is a Nehari extension problem: that is, a problem in which an unstable transfer function R(s) is approximated by a stable transfer function Q(s), such that ||R + Q||∞ is minimized, the minimum being ||R||H. A solution to this problem is given in Glover (1984).

Exercise 9.6 Formulate the H∞ robust stabilization problem in the general control configuration of Figure 9.8, and determine a transfer function expression and a state-space realization for the generalized plant P.
The MATLAB function coprimeunc, listed in Table 9.2, can be used to generate the controller in (9.70). It is important to emphasize that, since we can compute γmin from (9.66), we get an explicit solution by solving just two Riccati equations (aresolv) and avoid the γ-iteration needed to solve the general H∞ problem.

9.4.2 A systematic H∞ loop-shaping design procedure

Robust stabilization alone is not much use in practice, because the designer is not able to specify any performance requirements. To do this, McFarlane and Glover (1990) proposed pre- and post-compensating the plant to shape the open-loop singular values prior to robust stabilization of the "shaped" plant.

Table 9.2: MATLAB function to generate the H∞ controller in (9.70)

    % Uses the Robust Control or Mu toolbox
    function [Ac,Bc,Cc,Dc,gammin]=coprimeunc(a,b,c,d,gamrel)
    %
    % Finds the controller which optimally ``robustifies'' a given shaped plant
    % in terms of tolerating maximum coprime uncertainty.
    %
    % INPUTS:
    % a,b,c,d: State-space description of (shaped) plant.
    % gamrel:  gamma used is gamrel*gammin (typical gamrel=1.1)
    %
    % OUTPUTS:
    % Ac,Bc,Cc,Dc: "Robustifying" controller (positive feedback).
    %
    S=eye(size(d'*d))+d'*d;
    R=eye(size(d*d'))+d*d';
    A1=a-b*inv(S)*d'*c;
    R1=b*inv(S)*b';
    Q1=c'*inv(R)*c;
    [x1,x2,fail,reig_min] = ric_schr([A1 -R1; -Q1 -A1']); X = x2/x1;
    % Alt. Mu toolbox: [x1,x2,eig,xerr,wellposed,X] = aresolv(A1,Q1,R1);
    [x1,x2,fail,reig_min] = ric_schr([A1' -Q1; -R1 -A1]); Z = x2/x1;
    % Alt. Mu toolbox: [x1,x2,eig,xerr,wellposed,Z] = aresolv(A1',R1,Q1);
    % Optimal gamma:
    gammin=sqrt(1+max(eig(X*Z)))
    % Use higher gamma:
    gam = gamrel*gammin;
    L=(1-gam*gam)*eye(size(X*Z)) + X*Z;
    F=-inv(S)*(d'*c+b'*X);
    Ac=a+b*F+gam*gam*inv(L')*Z*c'*(c+d*F);
    Bc=gam*gam*inv(L')*Z*c';
    Cc=b'*X;
    Dc=-d';

If W1 and W2 are the pre- and post-compensators respectively, then the shaped plant Gs is given by

    Gs = W2 G W1        (9.74)

as shown in Figure 9.17. The controller Ks is synthesized by solving the robust stabilization problem of Section 9.4.1 for the shaped plant Gs, with a normalized left coprime factorization Gs = Ms⁻¹ Ns. The feedback controller for the plant G is then K = W1 Ks W2.

[Figure 9.17: The shaped plant and controller]

The above procedure contains all the essential ingredients of classical loop shaping, and can easily be implemented using the formulas already presented and reliable algorithms in, for example, MATLAB. We will afterwards present a systematic procedure for selecting the weights W1 and W2. First, we present a simple SISO example.

Example 9.3 Glover-McFarlane H∞ loop shaping for the disturbance process. Consider the disturbance process in (2.54), which was studied in detail in Chapter 2,

    G(s) = 200 / ( (10s + 1)(0.05s + 1)² ),    Gd(s) = 100 / (10s + 1)        (9.75)

We want as good disturbance rejection as possible, and the gain crossover frequency ωc for the final design should be about 10 rad/s. In Example 2.8 we argued that, for acceptable disturbance rejection with minimum input usage, the loop shape ("shaped plant") Gs = G W1 should be similar to |Gd|, so we select W2 = 1 and choose W1 such that Gs ≈ Gd. After neglecting the high-frequency dynamics in G(s), this yields an initial weight W1 = 0.5. To improve the performance at low frequencies we add integral action, and we also add a phase-advance term s + 2 to reduce the slope of L from −2 at lower frequencies to about −1 at crossover. Finally, to make the response a little faster, we multiply the gain by a factor 2 to get the weight

    W1 = ( s + 2 ) / s        (9.76)
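We can verify numerically that this weight gives roughly the intended loop shape. The short Python sketch below (our own illustration) finds the gain crossover frequency of the shaped plant Gs = G W1 by bisection on a log-frequency grid:

```python
import numpy as np

def gain(w):
    """|Gs(jw)| for the shaped plant Gs = G*W1 of Example 9.3."""
    s = 1j * w
    G = 200.0 / ((10 * s + 1) * (0.05 * s + 1) ** 2)
    W1 = (s + 2) / s
    return np.abs(G * W1)

# Bisection for |Gs(jw)| = 1 (the gain is monotonically decreasing here)
lo, hi = 1.0, 100.0
for _ in range(60):
    mid = np.sqrt(lo * hi)          # geometric midpoint on a log axis
    if gain(mid) > 1.0:
        lo = mid
    else:
        hi = mid
wc = np.sqrt(lo * hi)               # gain crossover frequency [rad/s]
```

The computed crossover is about 13.7 rad/s, somewhat above the 10 rad/s target; the robustification step described next reduces it slightly.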

This yields a shaped plant Gs = G W1 with a gain crossover frequency of 13.7 rad/s. The response to a unit step in the disturbance with the "controller" K = W1 is shown by the dashed line in Figure 9.18(b); the response is too oscillatory.

We now "robustify" this design so that the shaped plant tolerates as much H∞ coprime factor uncertainty as possible. This may be done with the MATLAB μ-toolbox using the command ncfsyn, or with the MATLAB Robust Control toolbox using the function coprimeunc given in Table 9.2:

    [Ac,Bc,Cc,Dc,gammin]=coprimeunc(A,B,C,D,gamrel)

Here the shaped plant Gs = G W1 has state-space matrices A, B, C and D, and the function returns the "robustifying" positive feedback controller Ks with state-space matrices Ac, Bc, Cc and Dc. In general, Ks has the same number of poles (states) as Gs. gamrel is the value of γ relative to γmin, and was in our case selected as 1.1. The returned variable gammin (γmin) is the inverse of the magnitude of coprime uncertainty we can tolerate before we get instability. We want γmin ≥ 1 as small as possible, and we usually require that γmin is less than 4, corresponding to 25% allowed coprime uncertainty.

By applying this to our example we get γmin = 2.34 and an overall controller K = W1 Ks with 5 states (Gs, and thus Ks, has 4 states, and W1 has 1 state). The corresponding loop shape Ls = Gs Ks is shown by the solid line in Figure 9.18(a); the gain crossover frequency ωc is reduced slightly, from 13.7 to 10.3 rad/s. The corresponding disturbance response, shown by the solid line in Figure 9.18(b), is much improved. The response for reference tracking with the controller K = W1 Ks is not shown; it is quite similar to that of the loop-shaping controller K3(s) designed in Chapter 2 (see curves L3 and y3 in Figure 2.21), although it is also somewhat more oscillatory, with a larger overshoot.
and the magnitude of × ´ µ is shown by the dashed line in Figure 9. In general. Ͻ Ã× is quite similar to that of the loopRemark. The × ) to ¿ gain crossover frequency Û is reduced slightly from ½¿ to ½¼ ¿ rad/s. and the phase margin (PM) is improved from ½¿ ¾Æ to ½ Æ .gammin]=coprimeunc(A.C. and was in our case selected as ½ ½. Solid line: “robustified” design.¿ 10 4 ÅÍÄÌÁÎ ÊÁ 1 Ä Ã ÇÆÌÊÇÄ Magnitude 10 2 × Ã× × 0.18(a).Bc. and we note with interest that the slope around crossover is somewhat gentler.7 rad/s.18: Glover-McFarlane loop-shaping design for the disturbance process.
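What coprimeunc computes can be sketched in a few lines. Assuming the standard formula for the optimal margin of a normalized coprime factor plant with D = 0, gamma_min = sqrt(1 + rho(ZX)), with Z and X the filter and control Riccati solutions, a plain SciPy computation for the shaped plant Gs = G W1 (written in controllable canonical form) is:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, eigvals

# Gs = G*W1 = (8000 s + 16000)/(s^4 + 40.1 s^3 + 404 s^2 + 40 s),
# in controllable canonical form (strictly proper, D = 0).
A = np.array([[-40.1, -404.0, -40.0, 0.0],
              [  1.0,    0.0,   0.0, 0.0],
              [  0.0,    1.0,   0.0, 0.0],
              [  0.0,    0.0,   1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 8000.0, 16000.0]])

# Control ARE:  A'X + XA - XBB'X + C'C = 0
X = solve_continuous_are(A, B, C.T @ C, np.eye(1))
# Filter ARE:   AZ + ZA' - ZC'CZ + BB' = 0  (solved as the dual problem)
Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(1))

# gamma_min = sqrt(1 + spectral radius of Z X)  (assumed formula, D = 0 case)
gamma_min = np.sqrt(1.0 + np.max(np.abs(eigvals(Z @ X))))
print(f"gamma_min = {gamma_min:.2f}")
```

A value well below 4, consistent with the 25% coprime-uncertainty rule of thumb in the text, indicates a successful loop shape.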

The tracking response is also very similar to that with K3 (see Figure 2.23), but it has a slightly smaller overshoot of 21%. To reduce this overshoot we would need to use a two degrees-of-freedom controller.

Exercise 9.7 Design an H∞ loop-shaping controller for the disturbance process in (9.75) using the weight W1 in (9.76), i.e. generate plots corresponding to those in Figure 9.18. Next, repeat the design with W1 = 2(s+3)/s (which results in an initial G W1 that would yield closed-loop instability with the "controller" K = W1). Compute the gain and phase margins and compare the disturbance and reference responses. In both cases, use (2.37) to find the maximum delay that can be tolerated in the plant before instability arises.

Skill is required in the selection of the weights (pre- and post-compensators W1 and W2), but experience on real applications has shown that robust controllers can be designed with relatively little effort by following a few simple rules. An excellent illustration of this is given in the thesis of Hyde (1991), who worked with Glover on the robust control of VSTOL (vertical and/or short take-off and landing) aircraft. Their work culminated in a successful flight test of H∞ loop-shaping control laws implemented on a Harrier jump-jet research vehicle at the UK Defence Research Agency (DRA), Bedford, in 1993. The H∞ loop-shaping procedure has also been extensively studied and worked on by Postlethwaite and Walker (1992) in their work on advanced control of high performance helicopters, also for the UK DRA at Bedford. More recently, H∞ loop-shaping has been tested in flight on a Bell 205 fly-by-wire helicopter; see Postlethwaite et al. (1999) and Smerlas et al. (2001). This application is discussed in detail in the helicopter case study in Section 12.2. Based on these, and other studies, it is recommended that the following systematic procedure is followed when using H∞ loop-shaping design:

1. Scale the plant outputs and inputs. This is very important for most design procedures and is sometimes forgotten. In general, scaling improves the conditioning of the design problem, it enables meaningful analysis to be made of the robustness properties of the feedback system in the frequency domain, and for loop-shaping it can simplify the selection of weights. There are a variety of methods available, including normalization with respect to the magnitude of the maximum or average value of the signal in question. Scaling with respect to maximum values is important if the controllability analysis of earlier chapters is to be used. However, if one is to go straight to a design, the following variation has proved useful in practice:
(a) The outputs are scaled such that equal magnitudes of cross-coupling into each of the outputs are equally undesirable.
(b) Each input is scaled by a given percentage (say 10%) of its expected range of operation. That is, the inputs are scaled to reflect the relative actuator capabilities. An example of this type of scaling is given in the aero-engine case study of Chapter 12.
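The scaling in step 1 amounts to simple diagonal similarity scaling of the gain matrix. The sketch below uses an illustrative 2x2 gain matrix and illustrative output tolerances and input ranges (none of these numbers come from a specific design):

```python
import numpy as np

# Step 1 scaling sketch: outputs scaled so that equal cross-coupling into each
# output is equally undesirable; inputs scaled by 10% of their expected range.
G = np.array([[ 87.8, -86.4],
              [108.2, -109.6]])        # steady-state gain of some 2x2 plant

e_max   = np.array([1.0, 0.5])         # largest tolerated output deviations
u_range = np.array([30.0, 30.0])       # expected input ranges

De = np.diag(e_max)                    # output scaling
Du = np.diag(0.10 * u_range)           # input scaling: 10% of range

G_scaled = np.linalg.inv(De) @ G @ Du
print(G_scaled)
```

Tightening an output tolerance (smaller entry of e_max) inflates that row of the scaled plant, which is exactly the intent: cross-coupling into that output now "costs" more in the design.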

Next, we discuss the selection of weights to obtain the shaped plant

    Gs = W2 G W1, where W1 = Wp Wa Wg    (9.77)

2. Order the inputs and outputs so that the plant is as diagonal as possible. The relative gain array can be useful here. The purpose of this pseudo-diagonalization is to ease the design of the pre- and post-compensators.

3. Select the elements of diagonal pre- and post-compensators Wp and W2 so that the singular values of W2 G Wp are desirable. This would normally mean high gain at low frequencies, roll-off rates of approximately 20 dB/decade (a slope of about -1) at the desired bandwidth(s), with higher rates at high frequencies. Some trial and error is involved here. W2 is usually chosen as a constant, reflecting the relative importance of the outputs to be controlled and the other measurements being fed back to the controller. For example, if there are feedback measurements of two outputs to be controlled and a velocity signal, then W2 might be chosen to be diag{1, 1, 0.1}, where 0.1 is in the velocity signal channel. Wp contains the dynamic shaping. Integral action, for low frequency performance; phase-advance, for reducing the roll-off rates at crossover; and phase-lag, to increase the roll-off rates at high frequencies, should all be placed in Wp if desired.

4. Optional: Align the singular values at a desired bandwidth using a further constant weight Wa cascaded with Wp. This is effectively a constant decoupler and should not be used if the plant is ill-conditioned in terms of large RGA elements (see Section 6.10.4). The align algorithm of Kouvaritakis (1974), which has been implemented in the MATLAB Multivariable Frequency-Domain Toolbox, is recommended.

5. Optional: Introduce an additional gain matrix Wg cascaded with Wa to provide control over actuator usage. Wg is diagonal and is adjusted so that actuator rate limits are not exceeded for reference demands and typical disturbances on the scaled plant outputs. This requires some trial and error.

6. Robustly stabilize the shaped plant Gs = W2 G W1, where W1 = Wp Wa Wg, using the formulas of the previous section. First, calculate the maximum stability margin eps_max = 1/gamma_min. If the margin is too small, eps_max < 0.25, then go back to step 4 and modify the weights. Otherwise, select gamma > gamma_min, by about 10%, and synthesize a suboptimal controller using equation (9.70). There is usually no advantage to be gained by using the optimal controller. When eps_max > 0.25 (respectively gamma_min < 4) the design is usually successful. In this case, at least 25% coprime factor uncertainty is allowed, and we also find that the shape of the open-loop singular values will not have changed much after robust stabilization. A small value of eps_max indicates that the chosen singular value loop-shapes are incompatible with robust stability requirements. The weights should be chosen so that no unstable hidden modes are created in Gs.
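The relative gain array suggested in step 2 is a purely algebraic computation. The sketch below uses an illustrative 2x2 steady-state gain matrix:

```python
import numpy as np

# Step 2: the relative gain array (RGA) of a square gain matrix,
# Lambda = G x (G^-1)^T  (element-by-element product).
def rga(G):
    return G * np.linalg.inv(G).T

G = np.array([[0.8, 0.2],
              [0.4, 1.0]])     # illustrative gain matrix
Lam = rga(G)
print(Lam)
```

Diagonal RGA elements close to 1 favour the diagonal input/output ordering; the rows and columns of the RGA always sum to 1.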

That the loop-shapes do not change much following robust stabilization if gamma is small (eps large) is justified theoretically in McFarlane and Glover (1990).

7. Analyze the design and, if all the specifications are not met, make further modifications to the weights.

8. Implement the controller. The configuration shown in Figure 9.19 has been found useful when compared with the conventional set-up in Figure 9.1. This is because the references do not directly excite the dynamics of Ks, which can result in large amounts of overshoot (classical derivative kick). The constant prefilter ensures a steady-state gain of 1 between r and y, assuming integral action in W1 or G.

[Figure 9.19: A practical implementation of the loop-shaping controller, with the constant prefilter Ks(0)W2(0) acting on r ahead of W1 Ks]

We will conclude this subsection with a summary of the advantages offered by the above H∞ loop-shaping design procedure:

- It is relatively easy to use, being based on classical loop-shaping ideas.
- There exists a closed formula for the H∞ optimal cost gamma_min, which in turn corresponds to a maximum stability margin eps_max = 1/gamma_min.
- No gamma-iteration is required in the solution.
- Except for special systems, ones with all-pass factors, there are no pole-zero cancellations between the plant and controller (Sefton and Glover, 1990; Tsai et al., 1992). Pole-zero cancellations are common in many H∞ control problems and are a problem when the plant has lightly damped modes.

It has recently been shown (Glover et al., 2000) that the stability margin eps_max = 1/gamma_min, here defined in terms of coprime factor perturbations, can be interpreted in terms of simultaneous gain and phase margins in all the plant's inputs and outputs, when the H∞ loop-shaping weights W1 and W2 are diagonal. The derivation of these margins is based on the gap metric (Georgiou and Smith, 1990) and the nu-gap metric (Vinnicombe, 1993) measures for uncertainty. A discussion of these measures lies outside the scope of this book, but the interested reader is referred to the excellent book on the subject by Vinnicombe (2001) and the paper by Glover et al. (2000).

Exercise 9.8 First a definition and some useful properties. Definition: A stable transfer function matrix H(s) is inner if H H* = I, and co-inner if H* H = I, where the operator H* is defined as H*(s) = H^T(-s).
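The defining property of a (scalar) inner function, |H(jw)| = 1 on the imaginary axis, is what makes the H∞ norm invariant under multiplication by it. This can be checked numerically for an illustrative all-pass factor:

```python
import numpy as np

# The H-infinity norm (peak gain over frequency) is unchanged when an
# arbitrary stable transfer function is multiplied by an inner (all-pass)
# function, since |H(jw)| = 1 on the imaginary axis.  Toy SISO example.
w = np.logspace(-2, 2, 1000)
s = 1j * w

G = 1.0 / ((s + 1) * (0.1*s + 1))    # arbitrary stable transfer function
H = (s - 1) / (s + 1)                # inner (all-pass): H(s)H^T(-s) = 1

norm_G  = np.max(np.abs(G))
norm_HG = np.max(np.abs(H * G))
print(norm_G, norm_HG)               # equal, up to the grid resolution
```

This is the scalar instance of the matrix property stated next; the grid-based "norm" is of course only a sketch of the true supremum.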

Properties: The H∞ norm is invariant under right multiplication by a co-inner function and under left multiplication by an inner function.

Equipped with the above definition and properties, show for the shaped plant Gs = Ms^-1 Ns that the matrix [Ns Ms] is co-inner, and hence that the H∞ loop-shaping cost function

    || [Ks ; I] (I - Gs Ks)^-1 Ms^-1 ||∞    (9.78)

is equivalent to

    || [Ks Ss   Ks Ss Gs ; Ss   Ss Gs] ||∞    (9.79)

where Ss = (I - Gs Ks)^-1. This shows that the problem of finding a stabilizing controller to minimise the 4-block cost function (9.79) has an exact solution. Whilst it is highly desirable, from a computational point of view, to have exact solutions for H∞ optimization problems, such problems are rare. We are fortunate that the above robust stabilization problem is also one of great practical significance.

9.4.3 Two degrees-of-freedom controllers

Many control design problems possess two degrees-of-freedom: on the one hand, measurement or feedback signals, and on the other, commands or references. Sometimes, one degree-of-freedom is left out of the design, and the controller is driven (for example) by an error signal, i.e. the difference between a command and the output. But in cases where stringent time-domain specifications are set on the output response, a one degree-of-freedom structure may not be sufficient. A general two degrees-of-freedom feedback control scheme is depicted in Figure 9.20. The commands and feedbacks enter the controller separately and are independently processed.

[Figure 9.20: General two degrees-of-freedom feedback control scheme]

The H∞ loop-shaping design procedure of McFarlane and Glover is a one degree-of-freedom design, although, as we showed in Figure 9.19, a simple constant prefilter can easily be implemented for steady-state accuracy. For many tracking problems, however, this will not be sufficient and a dynamic two degrees-of-freedom design is
The problem is Ö × easily cast into the general control configuration and solved suboptimally using standard methods and ­ -iteration. þ is the feedback controller.

From Figure 9.21 and a little bit of algebra, we have that

    [us ; ys ; e] = [ rho (I - K2 Gs)^-1 K1             K2 (I - Gs K2)^-1 Ms^-1 ;
                      rho (I - Gs K2)^-1 Gs K1          (I - Gs K2)^-1 Ms^-1 ;
                      rho^2 [(I - Gs K2)^-1 Gs K1 - Tref]   rho (I - Gs K2)^-1 Ms^-1 ] [r ; phi]    (9.82)

In the optimization, the H∞ norm of this block matrix transfer function is minimized. Notice that the (1,2) and (2,2) blocks taken together are associated with robust stabilization, and the (3,1) block corresponds to model-matching. In addition, the (1,1) and (1,2) blocks help to limit actuator usage, and the (2,1) and (3,2) blocks are linked to the performance of the loop. For rho = 0, the problem reverts to minimizing the H∞ norm of the transfer function between phi and [us^T y^T]^T, namely the robust stabilization problem, and the two degrees-of-freedom controller reduces to an ordinary H∞ loop-shaping controller.

To put the two degrees-of-freedom design problem into the standard control configuration, we can define a generalized plant P by

    [us ; ys ; e ; beta ; y] = [P11 P12 ; P21 P22] [r ; phi ; us] = P [r ; phi ; us]    (9.83)

with

    P = [ 0             0            I   ;
          0             Ms^-1        Gs  ;
          -rho^2 Tref   rho Ms^-1    rho Gs ;
          rho I         0            0   ;
          0             Ms^-1        Gs  ]    (9.84)

Further, if the shaped plant Gs and the desired stable closed-loop transfer function Tref have the following state-space realizations

    Gs =s [ As  Bs ; Cs  Ds ]    (9.85)

    Tref =s [ Ar  Br ; Cr  Dr ]    (9.86)

..g.. velocity feedback) and therefore when beneficial should be used by the feedback controller þ .. Remark 1 We stress that we here aim to minimize the ½ norm of the entire transfer function in (9... .87) à . À À À Remark 2 Extra measurements.Ê. .. and the optimal controller could be obtained from solving a series of ½ optimization problems using à -iteration. see Section 8. Note that Ê× and used in standard À ½ algorithms (Doyle et al. a designer has more plant outputs available as measurements than can (or even need) to be controlled. In the optimization problem. if there are four feedback measurements and only by ÏÓ Ê× the first three are to be controlled then ¾ ÏÓ ½ ¼ ¼ ¼ ¼ ½ ¼ ¼ ¼ ¼ ½ ¼ ¿ (9..ÇÆÌÊÇÄÄ Ê ¾ ËÁ Æ ¿ ¿ then È may be realized by Ì Ì   ´ × × · × × µÊ× ½ ¾ × ¼ ¼ ¼ Ö Ö ¼ ¼ ¼ ¼ Á ½ ¼ ¼ Ê× ¾ × × ½¾ ¾ Ö × ×. This is not guaranteed by the optimization which aims to minimize the -norm of the error.. These extra measurements can often make the design problem easier (e. The command signals Ö can be scaled by a constant matrix Ï to make the closed-loop transfer function from Ö to the controlled outputs ÏÓ Ý match the desired model ÌÖ exactly at steady-state.. This matrix selects from the output measurements Ý only those which are to be controlled and hence included in the model-matching part of the optimization. In some cases. ÏÓ is introduced between Ý and the summing junction. 1989) to synthesize the controller Ì Á · × × ...87) for È one simply replaces × by ÏÓ × and Ê× ½ ¾ in the fifth row..× Ö --. This problem would involve the structured singular value. controlled....88) Remark 3 Steady-state gain matching.89) Á if there are no extra feedback measurements beyond those that are to be Recall that ÏÓ ¢ £ ý Ï Ã¾ .¾ .82).67) for × .. and in the realization (9.12... In Figure 9.3.... The resulting controller is à .21..¼ ¼ Á ¼ ¼ ½ ¼ ¼ Ê× ¾ × × × ¼ ¼ (9.. The required scaling is given by ½ Ï ¢ £ ÏÓ ´Á   × ´¼µÃ¾ ´¼µµ ½ × ´¼µÃ½ ´¼µ  ½ ÌÖ ´¼µ (9. For example.. 
only the equation for the error is ½¾ affected. MATLAB commands to synthesize the controller are given in Table 9. An alternative problem would be to minimize the ½ norm form Ö to subject to an upper bound on ¡Æ× ¡Å× ½ .. and × is the unique positive definite solution to the generalized Riccati equation (9. This can be accommodated in the two degrees-offreedom design procedure by introducing an output selection matrix ÏÓ .....
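The steady-state matching of Remark 3 is plain matrix arithmetic once the DC gains are known. The gains below are illustrative stand-ins for Gs(0), K1(0), K2(0) and Tref(0), not values from any particular design:

```python
import numpy as np

# Sketch of the steady-state gain matching (9.89) with illustrative DC gains.
Gs0   = np.array([[2.0, 0.5],
                  [0.3, 1.5]])
K1_0  = np.array([[1.2, 0.0],
                  [0.1, 0.9]])
K2_0  = np.array([[-0.8,  0.1],
                  [ 0.2, -0.7]])
Tref0 = np.eye(2)
Wo    = np.eye(2)                 # no extra feedback measurements

I = np.eye(2)
T0 = Wo @ np.linalg.inv(I - Gs0 @ K2_0) @ Gs0 @ K1_0   # closed-loop DC gain
Wi = np.linalg.inv(T0) @ Tref0                          # scaling, (9.89)

# With r scaled by Wi, the closed-loop DC gain matches Tref(0) exactly:
print(T0 @ Wi)
```

The check T0 Wi = Tref(0) holds by construction; the point of (9.89) is that the optimization alone only makes the DC mismatch small, not zero.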

Ss = eye(ms)+Ds’*Ds.C.nr)]. B2 = [Bs. gmin = 1. D = [D11 D12.Q1.’. Q1 = Bs*inv(Ss)*Bs’.mr+ls).R1).ms] = size(Ds).rho*Cs -rho*rho*Cr].D21 D22]. Robust Control toolbox: % [Z1.-rho*rho*Dr rho*sqrt(Rs)]. gmax.ns+nr). Robust toolbox.zerr. [l2. gam] = hinfsyn(P. Rs = eye(ls)+Ds*Ds.ms).21 % but may get extra states.B.eig. gtol = 0. P = pck(A. C2 = [zeros(mr.Bs*inv(Ss)*Ds’*Cs). D12 = [eye(ms). ncon.Bs. C1 = [zeros(ms.Ar). gmax = 5.ns+nr).Ds.zeros(nr. [lr. Zs = Z2/Z1.Ds].Dr] = unpck(Tref).mr) sqrt(Rs).zeros(ls. % Alt.01.ls).mr) sqrt(Rs)].rho*Ds]. D22 = [zeros(mr. A = daug(As. -Q1 -A1]). D11 = [zeros(ms.ns] = size(As).Cs zeros(ls.nr] = size(Ar).zeros(ls.ls)]. A1 = (As . Br zeros(nr.wellposed. nmeas.Cs.m2] = size(D12). B1 = [zeros(ns.Ds] = unpck(Gs). [ns. reig min] = ric schr([A1’ -R1.mr) ((Bs*Ds’)+(Zs*Cs’))*inv(sqrt(Rs)).m1] = size(D21). % Alternative: Use sysic to generate P from Figure 9.Br. Gnclp. % % Gamma iterations to obtain H-infinity controller % [l1. B = [B1 B2]. [Z1.Z2.nr). % % Choose rho=1 (Designer’s choice) and % build the generalized plant P in (9.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Table 9. [ls. [K. since states from Gs may enter twice.D).Zs] = aresolv(A1’. fail. [nr. D21 = [rho*eye(mr) zeros(mr.ms)]. ncon = m2. Z2.mr] = size(Dr). [Ar. C = [C1.87) % rho=1. use command: hinfopt % À½ 2-DOF controller in (9. R1 = Cs’*inv(Rs)*Cs.3: MATLAB commands to synthesize the % Uses MATLAB mu toolbox % % INPUTS: Shaped plant Gs % Reference model Tref % % OUTPUT: Two degrees-of-freedom controller K % % Coprime factorization of Gs % [As. gmin. % Alt. gtol). nmeas = l2.C2].80) .Cs zeros(ls.Cr.

2. Replace the prefilter à ½ . but without a post-compensator weight Ï ¾ . something in the range 1 to 3 will usually suffice. For the shaped plant × parameter . Remember to include Ï Ó in to a specified tolerance to get à the problem formulation if extra feedback measurements are to be used. As seen from (9. 6. redesign making adjustments to and possibly Ï ½ and ÌÖ . solve the standard À ½ optimization £ problem defined by È in (9. 3.4. the desired response Ì Ö .22: Two degrees-of-freedom À½ loop-shaping controller 9. It lends itself to implementation in a gain-scheduled scheme. if required. Analyse and. this extra term is not present. Set the scalar parameter to a small value greater than 1.51) the controller has an observer/state feedback structure. as shown by Hyde and Glover (1993).50) and (9. Design a one degree-of-freedom À ½ loop-shaping controller using the procedure of Subsection 9. The clear structure of À ½ loopshaping controllers has several advantages: ¯ ¯ It is helpful in describing a controller’s function. The final two degrees-of-freedom Figure 9.2.22 We will conclude this subsection with a summary of the main steps required to synthesize a two degrees-of-freedom À ½ loop-shaping controller.87) ¢ ý þ . by ý Ï to give exact model-matching at steady-state. For À ½ loop-shaping controllers. Select a desired closed-loop transfer function Ì Ö between the commands and controlled outputs. À½ loop-shaping controller is illustrated in Ö ¹ Ï ¹ ý + ¹ +¹ Ͻ þ ¹ Ý ¹ Controller Figure 9. .ÇÆÌÊÇÄÄ Ê ËÁ Æ ¿ 1. 5. especially to one’s managers or clients who may not be familiar with advanced control. having a disturbance term (a “worst” disturbance) entering the observer state equations.4 Observer-based structure for À½ loop-shaping controllers À½ designs exhibit a separation structure in the controller.4. whether of the one or two degreesof-freedom variety. and the scalar 4. but the observer is nonstandard. Ͻ . Hence Ͻ .

1990). Ù× and Ý× are respectively the input and output of the shaped plant.94) where × and × are the appropriate solutions to the generalized algebraic Riccati equations for × given in (9. In Figure 9. with a stabilizable and detectable state-space realization We will present the controller equations.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¯ It offers computational savings in digital implementations and some multi-mode switching schemes. For simplicity we will assume the shaped plant is strictly proper.92) where Ü× is the observer state. À½ loop-shaping designs.67) and (9. 1995). as shown in (Sefton and Glover. He considers a stabilizable and detectable plant × s × × ¼ × (9. Walker (1996) has shown that the two degrees-of-freedom À ½ loop-shaping controller also has an observer-based structure. The equations are In which case.95) and a desired closed-loop transfer function ÌÖ s Ö Ö ¼ Ö (9.90) À½ loop-shaping controller can be realized as an observer for the shaped plant plus a state-feedback control law. an implementation of an observer-based À ½ loop-shaping controller is shown in block diagram form.68). the single degree-of-freedom Ü× Ù× × Ü× · À× ´ × Ü×   Ý× µ · × Ù× Ã× Ü× (9.23. as shown in (Samar.91) (9. for both one and two degrees-of-freedom × s × × ¼ × (9. The same structure was used by Hyde and Glover (1993) in their VSTOL design which was scheduled as a function of aircraft forward speed. and À× Ã×     Ì × × ¢ Ì  ¾ × Á  ­ Á £   ­  ¾ × ×  ½ × (9.96) .93) (9.

87) simplifies to Ì × × × ¿ ¾ È s ¼ ¼ Ö Ö ¼ ¼ ¼ Á ¼ ¼ Á ¼ × ¾ Ö Á × . . where (9.. and ½ ¼ is a solution to the algebraic Riccati equation ¢ Ô Ì ½ · Ì ½ · ½ ½   Ì´ Ì µ such that Re · £ ¼ (9..100) (9...-¼.¼ ---...98) ¼ . (i) (ii) à ý þ £ satisfying ­ ½ · ¾ .ÇÆÌÊÇÄÄ Ê ËÁ Æ Ö + ¹ Ï ´Úµ ¹ + Ù× Õ ¹Ï½´Úµ Ù¹ À× ´Úµ + ¹+¹ + Ê Õ ¹Ý Ͼ ´Úµ Ý× × ´Ú µ + ¿ × ´Ú µ ¹ Õ × ´Ú µ Õ Ã× ´Úµ Figure 9.23: An implementation of an scheduling against a variable Ú À½ loop-shaping controller for use when gain in which case the generalized plant ¾ È ´×µ in (9. and only if....97) Walker then shows that a stabilizing controller Ð ´È à µ ½ ­ exists if.101) ´ Ì Â µ ½ ´ Ì Â · Ì ½ µ  ½½ ½¾ ÁÛ ¼ ÁÞ ¼ ¼  ­ ¾ÁÛ .....¼ ¼ Á ¼ ¼ ¼ Á ¼ ¼ × ¼ ¼ × ¼ ¼ ½ ¾ ½ ½½ ¾½ ¢ ¾ ½¾ ¾¾ ¿ (9...99) (9.

¼¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Notice that this À½ controller depends on the solution to just one algebraic Riccati equation.106) The structure of this controller is shown in Figure 9. These can be obtained from a continuous-time design using . in the standard configuration. This is a characteristic of the two degrees-of-freedom À ½ loopshaping controller (Hoyle et al. a model of the desired closed-loop transfer function Ì Ö (without Ö ) and a state-feedback control law that uses both the observer and reference-model states.24. 9. Likewise. Ö (9.103) (9. if the weight Ï ½ ´×µ is the same at all the design operating points. not two. parts of the observer may not change. Therefore whilst the structure of the controller is comforting in the familiarity of its parts. respectively. For implementation purposes. for example. The reference-model part of the controller is also nice because it is often the same at different design operating points and so may not need to be changed at all during a scheduled operation of the controller.4. As in the one degree-of-freedom case. then a stabilizing controller Ð ´È à µ ½ ­ has the following equations: Ü× ÜÖ Ù× where × Ü× · À× ´ × Ü×   Ý× µ · × Ù× Ö ÜÖ · Ö Ö Ì Ì   × ½½½ Ü×   × ½½¾ ÜÖ (9.104) ½½½ and ½½¾ are elements of ½ ½½½ ½¾½ × ½½¾ ½¾¾ ¼ (9.107) The controller consists of a state observer for the shaped plant × . and exogenous input Û. where ÁÞ and ÁÛ are unit matrices of dimensions equal to those of the error signal Þ . à ´×µ satisfying Walker further shows that if (i) and (ii) are satisfied. this observer-based structure is useful in gainscheduling. it also has some significant advantages when it comes to implementation.102) (9..5 Implementation issues Discrete-time controllers.105) which has been partitioned conformally with ¼ and À× is as in (9. discrete-time controllers are usually required. 1991). where the state-feedback gain matrices × and Ö are defined by × Ì × ½½½ Ö Ì × ½½¾ (9.93).

a number of important implementation issues needed to be addressed. Then ٠Ͻ Ù× . observer-based state-space equations are derived directly in discrete-time for the two degrees-of-freedom À ½ loop-shaping controller and successfully applied to an aero engine.24: Structure of the two degrees-of-freedom À½ loop-shaping controller .ÇÆÌÊÇÄÄ Ê ËÁ Æ ¼½ a bilinear transformation from the ×-domain to the Þ -domain. This application was on a real engine. a Spey engine. Let the weight Ï ½ have a realization Ͻ s Û Û Û Û (9.108) and let Ù be the input to the plant actuators and Ù × the input to the shaped plant. As this was a real application. they will be briefly mentioned now. but there can be advantages in being able to design directly in discrete-time. (1995). When implemented in Hanus form. An anti-windup scheme is therefore required on the weighting function Ï ½ . which is a Rolls Royce 2-spool reheated turbofan housed at the UK Defence Research Agency. the expression for Ù becomes output from shaped plant À× + Ý× × - Ý× ¹ input to shaped plant × + ¹ +¹ + Ê ÕÕ Ü¹ × × × Ù× ¹ Õ Ö + ¹ +¹ Ê ÜÖ¹ Ö Õ - Ö commands Ö Figure 9. In À½ loop-shaping the pre-compensator weight Ï ½ would normally include integral action in order to reject low frequency disturbances acting on the system. However. Pyestock. The approach we recommend is to implement the weight Ï ½ in its self-conditioned or Hanus form. Although these are outside the general scope of this book. in the case of actuator saturation the integrators continue to integrate their input and hence cause windup problems. In Samar (1995) and Postlethwaite et al. Anti-windup.

it was found useful to condition the reference models and the observers in each of the controllers. Thus when on-line.110) where Ùas is the actual input to the shaped plant governed by the on-line controller. This consisted of three controllers. a multi-mode switched controller was designed. In the aero-engine application. The Hanus form prevents windup by keeping the states of Ͻ consistent with the actual plant input at all times.104) is not invertible and therefore cannot be self-conditioned. where the actuators are each modelled by a unit gain and a saturation. When there is no saturation Ù Ù.108) Bumpless transfer. Ù the dynamics are inverted and driven by Ù so that the states But when Ù remain consistent with the actual plant input Ù .¼¾ (Hanus et al. The situation is illustrated in Figure 9.103) and (9. Notice that such an implementation requires Ͻ to be invertible and minimum phase. which were switched between depending on the most significant outputs at any given time. To ensure smooth transition from one controller to another .e.109) simplifies to (9.9 Show that the Hanus form of the weight Ù. the dynamics of Ï ½ remain unaffected and (9.. when ٠Ͻ in (9. The reference model with state feedback given by (9. that is the measurement at the output of the actuators which therefore contains information about possible actuator saturation.109) simplifies to (9.102) but when off-line the state equation becomes Ü× × Ü× · À× ´ × Ü×   Ý× µ · × Ùas (9. Ù× ¹ Self¹ conditioned Ͻ actuator saturation Ù ¹ ¡¡ ¹ Ù Õ ¹ ¹ Figure 9.bumpless transfer . each designed for a different set of engine output variables.25: Self-conditioned weight Ͻ Exercise 9. the observer state evolves according to an equation of the form (9. in discrete-time the . However. when there is no saturation i.109) where Ù is the actual plant input.108). 1987) ÅÍÄÌÁÎ ÊÁ Û  Û Û Û Ä Ã ÇÆÌÊÇÄ Ù  ½ Û ¼ Û   Û Û½ ¼ Ù× Ù (9.25.

then the weights will need modifying in a redesign. The following quote from Rosenbrock (1974) illustrates the dilemma: In synthesis the designer specifies in detail the properties which his system must have. However. We have tried to demonstrate here that the observer-based structure of the À ½ loopshaping controller is helpful in this regard.. For complex problems. we recommend -analysis as discussed in Chapter 8. and if the design is not robust. such as unstable plants with multiple gain crossover frequencies. it may be more appropriate to follow a signal-based À ¾ or À½ approach.g. An alternative to À ½ loop shaping is a standard À ½ design with a “stacked” cost function such as in S/KS mixed-sensitivity optimization. but this complexity is rarely necessary in practice. Moreover. Satisfactory solutions to implementation issues such as those discussed above are crucial if advanced control methods are to gain wider acceptance in industry. It combines classical loop-shaping ideas (familiar to most practising engineers) with an effective method for robustly stabilizing the feedback loop. with more functions the shaping becomes increasingly difficult for the designer. Sometimes one might consider synthesizing a -optimal controller. 9. In other design situations where there are several performance objectives (e. model following and model uncertainty). In this approach.. .5 Conclusion We have described several methods and techniques for controller design. For the former. in the aero-engine example the reference models for the three controllers were each conditioned so that the inputs to the shaped plant from the off-line controller followed the actual shaped plant input Ù as given by the on-line controller. it may not be easy to decide on a desired loop shape. Consequently. 
the resulting controller should be analyzed with respect to robustness and tested by nonlinear simulation.ÇÆÌÊÇÄÄ Ê ËÁ Æ ¼¿ optimal control also has a feed-through term from Ö which gives a reference model that can be inverted. À ½ optimization is used to shape two or sometimes three closed-loop transfer functions. but our emphasis has been on À ½ loop shaping which is easy to apply and in our experience works very well in practice. After a design. to the point where there is only one possible solution. we would suggest doing an initial LQG design (with simple weights) and using the resulting loop shape as a reasonable one to aim for in À ½ loop shaping. But again the problem formulations become so complex that the designer has little direct influence on the design. In which case. The act of specifying the requirements in detail implies the final . on signals. one should be careful about combining controller synthesis and analysis into a single step.

solution, yet has to be done in ignorance of this solution, which can then turn out to be unsuitable in ways that were not foreseen.

Therefore, control system design usually proceeds iteratively through the steps of modelling, control structure design, controllability analysis, performance and robustness weights selection, controller synthesis, control system analysis and nonlinear simulation. Rosenbrock (1974) makes the following observation:

  Solutions are constrained by so many requirements that it is virtually impossible to list them all. The designer finds himself threading a maze of such requirements, attempting to reconcile conflicting demands of cost, performance, easy maintenance, and so on. A good design usually has strong aesthetic appeal to those who are competent in the subject.

10 CONTROL STRUCTURE DESIGN

Most (if not all) available control theories assume that a control structure is given at the outset. They therefore fail to answer some basic questions which a control engineer regularly meets in practice. Which variables should be controlled, which variables should be measured, which inputs should be manipulated, and which links should be made between them? The objective of this chapter is to describe the main issues involved in control structure design and to present some of the available quantitative methods.

10.1 Introduction

Control structure design was considered by Foss (1973) in his paper entitled "Critique of process control theory", where he concluded by challenging the control theoreticians of the day to close the gap between theory and applications in this important area. Later, Morari et al. (1980) presented an overview of control structure design, hierarchical control and multilevel optimization in their paper "Studies in the synthesis of control structure for chemical processes", but the gap still remained, and still does to some extent today. Control structure design is clearly important in the chemical process industry because of the complexity of these plants, but the same issues are relevant in most other areas of control where we have large-scale systems. For example, in the late 1980s Carl Nett (Nett, 1989; Nett and Minto, 1989) gave a number of lectures based on his experience on aero-engine control at General Electric, under the title "A quantitative approach to the selection and partitioning of measurements and manipulations for the control of complex systems". He noted that increases in controller complexity unnecessarily outpace increases in plant complexity, and that the objective should be to

minimize control system complexity subject to the achievement of accuracy specifications in the face of uncertainty.

In this chapter we are concerned with the structural decisions (Steps 4, 5, 6 and 7) associated with the following tasks of control structure design:

1. The selection of controlled outputs (a set of variables which are to be controlled to achieve a set of specific objectives, see Sections 10.2 and 10.3): What are the variables z in Figure 10.1?
2. The selection of manipulations and measurements (sets of variables which can be manipulated and measured for control purposes, see Section 10.4): What are the variable sets u and v in Figure 10.1?
3. The selection of a control configuration (a structure interconnecting measurements/commands and manipulated variables, see Sections 10.6, 10.7 and 10.8): What is the structure of K in Figure 10.1; that is, how should we "pair" the variable sets u and v?
4. The selection of a controller type (control law specification, e.g. PID-controller, decoupler, LQG, etc.): What algorithm is used for K in Figure 10.1?

[Figure 10.1: General control configuration. The generalized plant P maps the (weighted) exogenous inputs w and the control signals u to the (weighted) exogenous outputs z and the sensed outputs v; the controller K maps v to u.]

In Chapter 3.8 we considered the general control problem formulation in Figure 10.1, and stated that the controller design problem is to find a controller K which, based on the information in v, generates a control signal u which counteracts the influence of w on z, thereby minimizing the closed-loop norm from w to z. However, if we go back to Chapter 1 (page 1), then we see that this is only Step 7 in the overall process of designing a control system.

The distinction between the words control structure and control configuration may seem minor, but note that it is significant within the context of this book. The control structure (or control strategy) refers to all structural decisions included in the design of a control system. On the other hand, the control configuration refers only to the structuring (decomposition) of the controller K itself (also called the measurement/manipulation partitioning or input/output pairing).

Control configuration issues are discussed in more detail in Section 10.6. The selection of controlled outputs, manipulations and measurements (tasks 1 and 2 combined) is sometimes called input/output selection.

Ideally, the tasks involved in designing a complete control system are performed sequentially: first a "top-down" selection of controlled outputs, measurements and inputs (with little regard to the configuration of the controller K), and then a "bottom-up" design of the control system (in which the selection of the control configuration is the most important decision). However, in practice the tasks are closely related in that one decision directly influences the others, so the procedure may involve iteration.

The number of possible control structures shows a combinatorial growth, so for most systems a careful evaluation of all alternative control structures is impractical. Fortunately, we can often from physical insight obtain a reasonable choice of controlled outputs, measurements and manipulated inputs. In other cases, simple controllability measures as presented in Chapters 5 and 6 may be used for quickly evaluating or screening alternative control structures.

One important reason for decomposing the control system into a specific control configuration is that it may allow for simple tuning of the subcontrollers without the need for a detailed plant model describing the dynamics and interactions in the process. Multivariable centralized controllers may always outperform decomposed (decentralized) controllers, but this performance gain must be traded off against the cost of obtaining and maintaining a sufficiently detailed plant model.

A survey on control structure design is given by van de Wal and de Jager (1995). A review of control structure design in the chemical process industry (plantwide control) is given by Larsson and Skogestad (2000). Some discussion on control structure design in the process industry is also given by Morari (1982), Shinskey (1988), Stephanopoulos (1984) and Balchen and Mumme (1988). The reader is referred to Chapter 5 (page 160) for an overview of the literature on input-output controllability analysis.

10.2 Optimization and control

In Sections 10.2 and 10.3 we are concerned with the selection of controlled variables (outputs). These are the variables z in Figure 10.1, but we will in these two sections call them y. The selection of controlled outputs involves selecting the variables y to be controlled at given reference values, y ≈ r. Here the reference value r is set at some higher layer in the control hierarchy.

Thus, the selection of controlled outputs (for the control layer) is usually intimately related to the hierarchical structuring of the control system, which is often divided into two layers:

• optimization layer — computes the desired reference commands r (outside the scope of this book)
• control layer — implements these commands to achieve y ≈ r (the focus of this book)

Additional layers are possible, as is illustrated in Figure 10.2, which shows a typical control hierarchy for a complete chemical plant. Here the control layer is subdivided into two layers: supervisory control ("advanced control") and regulatory control ("base control"). We have also included a scheduling layer above the optimization.

[Figure 10.2: Typical control system hierarchy in a chemical plant. From top to bottom: Scheduling (weeks), Site-wide optimization (day), Local optimization (hour), and the control layer, consisting of Supervisory control (minutes) and Regulatory control (seconds).]

In general, the information flow in such a control hierarchy is based on the higher layer sending reference values (setpoints) to the layer below, and the lower layer reporting back any problems in achieving this.

There is usually a time scale separation with faster lower layers, as indicated in Figure 10.2. This means that the setpoints, as viewed from a given layer in the hierarchy, are updated only periodically. Between these updates, when the setpoints are constant, it is important that the system remains reasonably close to its optimum. This observation is the basis for Section 10.3, which deals with selecting outputs for the control layer.

The optimization tends to be performed open-loop with limited use of feedback. On the other hand, the control layer is mainly based on feedback information. The optimization is often based on nonlinear steady-state models, whereas we often use linear dynamic models in the control layer (as we do throughout the book).

From a theoretical point of view, the optimal coordination of the inputs, and thus the optimal performance, is obtained with a centralized optimizing controller, which combines the two layers of optimization and control; see Figure 10.3(c). All control actions in such an ideal control system would be perfectly coordinated, and the control system would use on-line dynamic optimization based on a nonlinear dynamic model of the complete plant, instead of, for example, infrequent steady-state optimization. However, this solution is normally not used for a number of reasons, including the cost of modelling, the difficulty of controller design, operator acceptance, robustness problems, maintenance and modification, and the lack of computing power. Nevertheless, in the next section we provide some ideas on how to select objectives (controlled outputs) for the control layer, such that the overall goal is satisfied.

Remark 1 In accordance with Lunze (1992) we have purposely used the word layer rather than level for the hierarchical decomposition of the control system. The difference is that in a multilevel system all units contribute to satisfying the same goal, whereas in a multilayer system the different units have different objectives (which preferably contribute to the overall goal). Multilevel systems have been studied in connection with the solution of optimization problems, e.g. hierarchical decomposition and decentralization. Mesarovic (1970) reviews some ideas related to on-line multi-layer structures applied to large-scale industrial complexes. However, multilayer structures, although often used in practice, lack a formal analytical treatment, according to Lunze (1992), and these issues are outside the scope of this book.

Remark 2 The tasks within any layer can be performed by humans (e.g. manual control), and the interaction and task sharing between the automatic control system and the human operators are very important in most cases, e.g. for an aircraft pilot.

As noted above, we may also decompose the control layer, see Figure 10.3(b), and from now on when we talk about control configurations, we generally refer to the control layer.
As noted above we may also decompose the control layer.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ¼ sending reference values (setpoints) to the layer below. and the interaction and task sharing between the automatic control system and the human operators are very important in most cases. This means that the setpoints. The optimization is often based on nonlinear steady-state models. and the lack of computing power.3(c). This observation is the basis for Section 10. as viewed from a given layer in the hierarchy. robustness problems. including the cost of modelling. see Figure 10. whereas we often use linear dynamic models in the control layer (as we do throughout the book). All control actions in such an ideal control system would be perfectly coordinated and the control system would use on-line dynamic optimization based on a nonlinear dynamic model of the complete plant instead of. and from now on when we talk about control configurations. for example. On the other hand. the difficulty of controller design. and the lower layer reporting back any problems in achieving this. see Figure 10. such that the overall goal is satisfied. However. . e.

[Figure 10.3: Alternative structures for optimization and control. (a) Open-loop optimization. (b) Closed-loop implementation with separate control layer. (c) Integrated optimization and control.]

10.3 Selection of controlled outputs

A controlled output is an output variable (usually measured) with an associated control objective (usually a reference value). In many cases, it is clear from a physical understanding of the process what the controlled outputs should be. For example, if we consider heating or cooling a room, then we should select room temperature as the controlled output y. In other cases it is less obvious, because each control objective may not be associated with a measured output variable; the controlled outputs y are then selected to achieve the overall system goal, and may not appear to be important variables in themselves.

Example 10.1 Cake baking. To get an idea of the issues involved in output selection, let us consider the process of baking a cake. The overall goal is to make a cake which is well baked inside and with a nice exterior. The manipulated input for achieving this is the heat input, u = Q (and we will assume that the duration of the baking is fixed). Now, if we had never baked a cake before, and if we were to construct the stove ourselves, we might consider directly manipulating the heat input to the stove, possibly with a watt-meter measurement. However, this open-loop implementation would not work well, as the optimal heat input depends strongly on the particular oven we use, and the operation is also sensitive to disturbances, for example, from opening the oven door or whatever else might be in the oven. In short, the open-loop implementation is sensitive to uncertainty. An effective way of reducing the uncertainty is to use feedback. Therefore, in practice we look up the optimal oven temperature in a cook book, and use a closed-loop implementation where a thermostat is used to keep the temperature y at its predetermined value T.

In the cake baking process we select the oven temperature as the controlled output y in the control layer. The reference value r for the temperature is then sent down to the control layer, which consists of a simple feedback controller (the thermostat). The (a) open-loop and (b) closed-loop implementations of the cake baking process are illustrated in Figure 10.3. It is interesting to note that controlling the oven temperature in itself has no direct relation to the overall goal of making a well-baked cake. So why do we select the oven temperature as a controlled output? We now want to outline an approach for answering questions of this kind. Two distinct questions arise:

1. What variables y should be selected as the controlled variables?
2. What is the optimal reference value (y_opt) for these variables?

The second problem is one of optimization and is extensively studied (but not in this book). Here we want to gain some insight into the first problem. Recall that the title of this section is selection of controlled outputs. In the following, we let y denote the selected controlled outputs in the control layer. Note that this may also include directly using the inputs (open-loop implementation) by selecting y = u.

The system behaviour is a function of the independent variables u and d, so we may write J = J(u, d). We make the following assumptions:

(a) The overall goal can be quantified in terms of a scalar cost function J which we want to minimize. For example, in the cake baking process we may assign to each cake a number P on a scale from 0 to 10, based on cake quality: a perfectly baked cake achieves P = 10, an acceptably baked cake achieves a somewhat lower value, and a completely burned cake may correspond to P = 1. In another case P could be the operating profit. In both cases we can select J = −P, and the overall goal of the control system is then to minimize J.

(b) For a given disturbance d, there exists an optimal value u_opt(d), and a corresponding value y_opt(d), which minimizes the cost function J.

(c) The reference values r for the controlled outputs y should be constant, i.e. r should be independent of the disturbances d. Typically, some average value is selected.

For a given disturbance d the optimal value of the cost function is

    J_opt(d) ≜ J(u_opt(d), d) = min_u J(u, d)        (10.1)
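As a concrete illustration of the quantities u_opt(d) and J_opt(d) in (10.1), the following minimal sketch uses a made-up scalar quadratic cost (the functional form is purely hypothetical and not taken from the text):

```python
# Hypothetical scalar example (made up for illustration):
#   J(u, d) = (u - d)^2 + 0.1 u^2
def J(u, d):
    return (u - d) ** 2 + 0.1 * u ** 2

def u_opt(d):
    # dJ/du = 2 (u - d) + 0.2 u = 0  =>  u_opt(d) = d / 1.1
    return d / 1.1

def J_opt(d):
    # J_opt(d) = J(u_opt(d), d), cf. (10.1); here this equals d^2 / 11
    return J(u_opt(d), d)

for d in (0.0, 1.0, 2.0):
    print(d, round(u_opt(d), 4), round(J_opt(d), 4))
```

Note how both the optimal input and the optimal cost move with the disturbance d, which is exactly why a fixed input (open-loop policy) can be far from optimal.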

Ideally, we want u = u_opt(d). However, this will not be achieved in practice, and we want to select the controlled outputs y such that the input u (generated by feedback to achieve y ≈ r) is close to the optimal input u_opt(d). Note that we have assumed that r is independent of d.

10.3.1 Selecting controlled outputs: Direct evaluation of cost

The "brute force" approach for selecting controlled variables is to evaluate the loss for alternative sets of controlled variables. Specifically, we evaluate directly the cost function J for various disturbances d and control errors e, assuming y = r + e where the reference r is kept constant. Here r is usually selected as the optimal value for the nominal disturbance, but this may not be the best choice, and its value may also be found by optimization ("optimal back-off"). As "disturbances" we should also include changes in operating point and model uncertainty. For a given (generally nonzero) disturbance d, we then have a loss which can be quantified by

    L = J(u, d) − J_opt(d)        (10.2)

A reasonable objective for selecting controlled outputs y is to minimize some norm of the loss, for example the worst-case loss

    Φ ≜ max_{d ∈ D} [ J(u, d) − J(u_opt, d) ]

where D is the set of possible disturbances. The set of controlled outputs with the smallest worst-case or average value of J is then preferred. If we can achieve an acceptable loss with constant references (setpoints) r, then this set of controlled variables is said to be self-optimizing. This approach may be time consuming, because the solution of the nonlinear equations must be repeated for each candidate set of controlled outputs.

10.3.2 Selecting controlled outputs: Linear analysis

We here use a linear analysis of the loss function, which results in the useful minimum singular value rule. Note, however, that this is a local analysis, which may be misleading, for example, if the optimum point of operation is close to infeasibility. The special case of measurement selection for indirect control is covered on page 439. We make the following additional assumptions:

(d) The cost function J is smooth, or more precisely twice differentiable.
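The "brute force" evaluation above can be sketched on a made-up toy problem (all functions below are hypothetical, chosen so that the optimum is known in closed form). Two candidate controlled outputs are compared by holding each at its nominally optimal setpoint over a set of disturbances and computing the worst-case loss:

```python
import numpy as np

# Hypothetical toy problem: J(u, d) = (u - d)^2 + u^2,
# so u_opt(d) = d/2 and J_opt(d) = d^2 / 2.
def J(u, d):
    return (u - d) ** 2 + u ** 2

def J_opt(d):
    return d ** 2 / 2.0

disturbances = np.linspace(-1.0, 1.0, 21)   # assumed disturbance set D

# Candidate controlled outputs, each held at its setpoint computed for the
# nominal disturbance d* = 0:
#   y1 = u        -> keep u = 0         (open-loop policy)
#   y2 = 2u - d   -> keep 2u - d = 0    -> u = d/2
def u_from_y1(d):
    return 0.0

def u_from_y2(d):
    return d / 2.0

def worst_case_loss(policy):
    # worst-case loss: max over d of J(u(d), d) - J_opt(d)
    return max(J(policy(d), d) - J_opt(d) for d in disturbances)

print(worst_case_loss(u_from_y1))   # nonzero loss for the open-loop policy
print(worst_case_loss(u_from_y2))   # zero loss: this choice is self-optimizing
```

Keeping y2 constant happens to reproduce the optimal input for every disturbance, so its loss is zero: a self-optimizing choice of controlled output.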

(e) The optimization problem is unconstrained. If it is optimal to keep some variable at a constraint, then we assume that this is implemented and consider the remaining unconstrained problem.

(f) The dynamics of the problem can be neglected; that is, we consider steady-state control and optimization.

First, we quantify the effect of a non-optimal input. For a fixed d we may express J(u, d) in terms of a Taylor series expansion in u around the optimal point:

    J(u, d) = J_opt(d) + (∂J/∂u)_opt^T (u − u_opt(d)) + (1/2) (u − u_opt(d))^T (∂²J/∂u²)_opt (u − u_opt(d)) + ···        (10.3)

We will neglect terms of third order and higher (which assumes that we are reasonably close to the optimum). The second term on the right hand side in (10.3) is zero at the optimal point for an unconstrained problem, so the third term quantifies how u − u_opt affects the cost function. Next, to study how this relates to output selection, we use a linearized model of the plant, which for a fixed d becomes y − y_opt = G (u − u_opt), where G is the steady-state gain matrix. If G is invertible we then get

    u − u_opt = G^{-1} (y − y_opt)        (10.4)

(If G is not invertible we may use the pseudo-inverse, which results in the smallest possible ‖u − u_opt‖₂ for a given y − y_opt.) Substituting (10.4) into (10.3) yields

    J − J_opt ≈ (1/2) (y − y_opt)^T G^{-T} (∂²J/∂u²)_opt G^{-1} (y − y_opt)        (10.5)

where the term (∂²J/∂u²)_opt is independent of y. Obviously, we would like to select the controlled outputs such that y − y_opt is zero. However, this is not possible in practice. First, we have an optimization error e_opt(d) ≜ r − y_opt(d), because the algorithm (e.g. a cook book) pre-computes a desired r which is different from the optimal y_opt(d). Second, we have a control error e = y − r, for example due to poor control performance or an incorrect measurement or estimate (steady-state bias) of y. If the control itself is perfect then e = n (the measurement noise). To see how these errors enter, write

    y − y_opt = (y − r) + (r − y_opt) = e + e_opt        (10.6)

In most cases the errors e and e_opt(d) can be assumed independent.
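For a purely quadratic cost the second-order expansion is exact, so the local loss expression (10.5) can be checked numerically. The sketch below uses made-up random data (Hessian, gain matrix and optimal input are all invented for illustration):

```python
import numpy as np

# Numerical check of the local loss expression (10.5) for a purely quadratic
# cost, where the second-order Taylor expansion is exact.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
Juu = A @ A.T + n * np.eye(n)          # Hessian (d^2 J / du^2), positive definite
G = rng.standard_normal((n, n))        # steady-state gain matrix, y = G u
u_opt = rng.standard_normal(n)

def J(u):
    e = u - u_opt
    return 0.5 * e @ Juu @ e           # J_opt = 0 at u = u_opt

u = u_opt + rng.standard_normal(n)     # some non-optimal input
y_minus_yopt = G @ (u - u_opt)         # linearized output deviation

Ginv = np.linalg.inv(G)
loss_105 = 0.5 * y_minus_yopt @ Ginv.T @ Juu @ Ginv @ y_minus_yopt

print(np.isclose(J(u), loss_105))      # the two loss evaluations agree
```

For a general (non-quadratic) cost the agreement would only be approximate, and only close to the optimum.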

Example 10.1 Cake baking, continued. Let us return to our initial question: Why select the oven temperature as a controlled output? We have two alternatives: a closed-loop implementation with y = T (the oven temperature) and an open-loop implementation with y = u = Q (the heat input). From experience, we know that the optimal oven temperature T_opt is largely independent of disturbances and is almost the same for any oven. This means that we may always specify the same oven temperature, say T_r = 190 °C, as obtained from the cook book. On the other hand, the optimal heat input Q_opt depends strongly on the heat loss, the size of the oven, how often the oven door is opened, etc., and may vary between, say, 100 W and a few thousand W. A cook book would then need to list a different value of Q_r for each kind of oven, and would in addition need some correction factor depending on the room temperature, etc. Therefore, we find that it is much easier to keep e_opt = T_r − T_opt [°C] small than to keep Q_r − Q_opt [W] small. Thus, the main reason for controlling the oven temperature is to minimize the optimization error.

In summary, from (10.5) and (10.6), we conclude that we should select the controlled outputs y such that:

1. G^{-1} is small (i.e. σ_min(G) is large); the choice of y should be such that the inputs have a large effect on y.
2. e_opt(d) = r − y_opt(d) is small; the choice of y should be such that its optimal value y_opt(d) depends only weakly on the disturbances and other changes.
3. e = y − r is small; the choice of y should be such that it is easy to keep the control error e small.

Note that σ_max(G^{-1}) = 1/σ_min(G), and so we want the smallest singular value of the steady-state gain matrix to be large (but recall that singular values depend on scaling, as is discussed below). The desire to have σ_min(G) large is consistent with our intuition that we should ensure that the controlled outputs are independent of each other.

From (10.5) we see that we should first scale the outputs such that the expected magnitude of y_i − y_opt,i is similar in magnitude (e.g. 1) for each output, and scale the inputs such that the effect of a given deviation u_j − u_opt,j on the cost function J is similar for each input (such that (∂²J/∂u²)_opt is close to a constant times a unitary matrix). To use σ_min(G) to select controlled outputs, we must also assume:

(g) The "worst-case" combination of output deviations y_i − y_opt,i, corresponding to the direction of σ_min(G), can occur in practice. (We must also assume that the variations in y_i − y_opt,i are uncorrelated.)

Procedure. The use of the minimum singular value to select controlled outputs may be summarized in the following procedure:

1. From a (nonlinear) model compute the optimal parameters (inputs and outputs) for various conditions (disturbances, operating points). (This yields a "look-up" table of optimal parameter values as a function of the operating conditions.)
2. From this data obtain for each candidate output the variation in its optimal value, v_i = (y_i,opt,max − y_i,opt,min)/2.

3. Scale the candidate outputs such that for each output the sum of the magnitudes of v_i and the control error (including measurement noise n_i) is similar (e.g. |v_i| + |n_i| = 1).
4. Scale the inputs such that a unit deviation in each input from its optimal value has the same effect on the cost function J (i.e. such that (∂²J/∂u²)_opt is close to a constant times a unitary matrix).
5. Select as candidates those sets of controlled outputs which correspond to a large value of σ_min(G); here G is the transfer function for the effect of the scaled inputs on the scaled outputs.

Note that the disturbances and measurement noise enter indirectly through the scaling of the outputs.

Remark. Our desire to have σ_min(G) large for output selection is not related to the desire to have σ_min(G) large to avoid input constraints, as discussed in Section 6.9. In particular, the scalings, and thus the matrix G, are different for the two cases.

Example. The aero-engine application in Chapter 12 provides a nice illustration of output selection. There the overall goal is to operate the engine optimally in terms of fuel consumption, while at the same time staying safely away from instability. The optimization layer is a look-up table, which gives the optimal parameters for the engine at various operating points. Since the engine at steady-state has three degrees-of-freedom, we need to specify three variables to keep the engine approximately at the optimal point, and five alternative sets of three outputs are given. The outputs are scaled as outlined above, and a good output set is then one with a large value of σ_min(G), provided we can also achieve good dynamic control performance.

10.3.3 Selection of controlled variables: Summary

Generally, the optimal values of all variables will change with time during operation (due to disturbances and other changes). For practical reasons, we have considered a hierarchical strategy where the optimization is performed only periodically. The objective of the control layer is then to keep the controlled outputs at their reference values (which are computed by the optimization layer). The question is then: Which variables (controlled outputs) should be kept constant (between each optimization)? Essentially, we found that we should select variables y for which the variation in optimal value and control error is small compared to their controllable range (the range y may reach by varying the input u). We considered two approaches for selecting controlled outputs:

1. "Brute force" evaluation to find the set with the smallest loss imposed by using constant values for the setpoints r. If the loss imposed by keeping constant setpoints is acceptable, then we have self-optimizing control.
2. Maximization of σ_min(G), where G is appropriately scaled (see the procedure above).
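The σ_min(G) screening in the five-step procedure above can be sketched as follows. The plant matrix is made up for illustration, and the outputs are assumed to have already been scaled as in steps 3 and 4:

```python
import itertools
import numpy as np

# Made-up, already-scaled plant: 4 candidate outputs, 2 inputs.
G_all = np.array([[10.0, 9.0],
                  [ 9.0, 1.0],
                  [ 0.5, 8.0],
                  [ 1.0, 1.0]])

n_select = 2
ranking = []
for rows in itertools.combinations(range(G_all.shape[0]), n_select):
    G = G_all[list(rows), :]
    sigma_min = np.linalg.svd(G, compute_uv=False)[-1]   # smallest singular value
    ranking.append((sigma_min, rows))

ranking.sort(reverse=True)        # best candidates (largest sigma_min) first
for sigma_min, rows in ranking:
    print(rows, round(float(sigma_min), 3))
```

The candidate sets at the top of the list are then carried forward for a more detailed controllability analysis; a large σ_min alone does not guarantee a good choice.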

10.4 Selection of manipulations and measurements

We are here concerned with the variable sets u and v in Figure 10.1. Note that the measurements used by the controller (v) are in general different from the controlled variables (z), because 1) we may not be able to measure all the controlled variables, and 2) we may want to measure and control additional variables in order to

• stabilize the plant (or, more generally, change its dynamics)
• improve local disturbance rejection

Stabilization. We usually start the controller design by designing a (lower-layer) controller to stabilize the plant. The issue is then: Which outputs (measurements) and inputs (manipulations) should be used for stabilization? We should clearly avoid saturation of the inputs, because this makes the system effectively open-loop and stabilization is impossible. A reasonable objective is therefore to minimize the required input usage of the stabilizing control system. It turns out that this is achieved, for a single unstable mode, by selecting the output (measurement) and input (manipulation) corresponding to the largest elements in the output and input pole vectors y_p and u_p, respectively (Havre, 1998; Havre and Skogestad, 1998b). This choice maximizes the (state) controllability and observability of the unstable mode.

Local disturbance rejection. The selected manipulations should have a large effect on the controlled outputs, and should be located "close" (in terms of dynamic response) to the outputs and measurements. For measurements, the rule is generally to select those which have a strong relationship with the controlled outputs, or which may quickly detect a major disturbance and which, together with manipulations, can be used for local disturbance rejection. The controlled outputs are often measured, but we may also estimate their values based on other measured variables. We may also use other measurements to improve the control of the controlled outputs, for example, by use of cascade control. Thus, the selection of controlled and measured outputs are two separate issues, although the two decisions are obviously closely related. The measurement selection problem is briefly discussed in the next section.

For a more formal analysis we may consider the model y_all = G_all u_all + G_d,all d, where y_all includes all candidate outputs (measurements) and u_all all candidate inputs (manipulations).
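The pole-vector rule quoted in the Stabilization paragraph above can be illustrated as follows. The state-space data are made up, and the pole vectors are taken as y_p = C t and u_p = B^H q, where t and q are the right and left eigenvectors of A for the unstable pole p:

```python
import numpy as np

# Made-up system with one unstable mode (eigenvalues of A: 2, -1, -3).
A = np.array([[ 2.0,  0.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -3.0]])
B = np.array([[0.2, 1.5],
              [0.0, 1.0],
              [2.0, 0.0]])          # 2 candidate manipulations
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])     # 2 candidate measurements

eigvals, T = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))    # index of the unstable mode (Re p > 0)
t = T[:, k]                         # right eigenvector: A t = p t
q = np.linalg.inv(T)[k, :].conj()   # left eigenvector: q^H A = p q^H

y_p = C @ t                         # output pole vector
u_p = B.conj().T @ q                # input pole vector

print("measurement for stabilization:", int(np.argmax(np.abs(y_p))))
print("manipulation for stabilization:", int(np.argmax(np.abs(u_p))))
```

The measurement/manipulation with the largest pole-vector element "sees" the unstable mode most strongly, which is the basis of the minimum-input-usage argument above.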

The model for a particular combination of inputs and outputs is then y = G u + G_d d, where

    G = S_O G_all S_I,    G_d = S_O G_d,all        (10.7)

Here S_I is a non-square input "selection" matrix with a 1 and otherwise 0's in each column, and S_O is a non-square output "selection" matrix with a 1 and otherwise 0's in each row. For example, with S_O = I all outputs are selected, whereas with S_O = [0 I] output 1 has not been selected.

To evaluate the alternative combinations, one may, for example, perform an input-output controllability analysis as outlined in Chapter 6 for each combination (e.g. consider the minimum singular value, RHP-zeros, interactions, etc.). An even more involved (and exact) approach would be to synthesize controllers for optimal robust performance for each candidate combination. However, the number of combinations has a combinatorial growth, so even a simple input-output controllability analysis becomes very time-consuming if there are many alternatives. For a plant where we want to select m from M candidate manipulations, and l from L candidate measurements, the number of possibilities is (Nett, 1989)

    (M choose m)(L choose l) = M!/(m!(M−m)!) · L!/(l!(L−l)!)        (10.8)

A few examples: for m = l = 1 and M = L = 2 the number of possibilities is 4; for m = l = 2 and M = L = 4 it is 36; for m = l = 5 and M = L = 10 it is 63504; and for l = 5 and L = 100 (selecting 5 measurements out of 100 possible) there are 75287520 possible combinations. The number of possibilities is much larger if we consider all possible combinations with 1 to M inputs and 1 to L outputs; the number is then the sum of (10.8) over m = 1, ..., M and l = 1, ..., L. For example, for M = L = 2 there are 4 + 2 + 2 + 1 = 9 candidates (4 structures with one input and one output, 2 structures with one input and two outputs, 2 structures with two inputs and one output, and 1 structure with two inputs and two outputs).

This rather crude analysis may be used, together with physical insight, rules of thumb and simple controllability measures, to perform a pre-screening in order to reduce the possibilities to a manageable number. These candidate combinations can then be analyzed more carefully. One way of avoiding the combinatorial problem is to base the selection directly on the "big" models G_all and G_d,all. For example, one may consider the singular value decomposition and relative gain array of G_all, as discussed in the next section. At least this may be useful for eliminating some alternatives. A more involved approach, based on analyzing achievable robust performance by neglecting causality, is outlined by Lee et al. (1995); this approach is more demanding both in terms of computation time and in the effort required to define the robust performance objective.
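The counts quoted after (10.8) are easy to reproduce; the following sketch also computes the total over all possible numbers of inputs and outputs:

```python
from math import comb

def n_fixed(M, m, L, l):
    # (10.8): pick m of M manipulations and l of L measurements
    return comb(M, m) * comb(L, l)

def n_all(M, L):
    # all combinations with 1..M inputs and 1..L outputs
    return sum(comb(M, m) * comb(L, l)
               for m in range(1, M + 1) for l in range(1, L + 1))

print(n_fixed(2, 1, 2, 1))     # 4
print(n_fixed(4, 2, 4, 2))     # 36
print(n_fixed(10, 5, 10, 5))   # 63504
print(comb(100, 5))            # 75287520
print(n_all(2, 2))             # 9 = 4 + 2 + 2 + 1
```

Even for modest plant sizes the totals explode, which is why screening tools such as the RGA of the following section are attractive: they are computed once for the "big" plant rather than once per candidate combination.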

10.5 RGA for non-square plant

A simple but effective screening tool for selecting inputs and outputs, which avoids the combinatorial problem just mentioned, is the relative gain array (RGA) of the "big" transfer matrix G_all, with all candidate inputs and outputs included. Essentially, for the case of many candidate manipulations (inputs), one may consider not using those manipulations corresponding to columns in the RGA where the sum of the elements is much smaller than 1 (Cao, 1995). Similarly, for the case of many candidate measured outputs (or controlled outputs), one may consider not using those outputs corresponding to rows in the RGA where the sum of the elements is much smaller than 1.

To see this, write the singular value decomposition of G_all as

    G_all = U Σ V^H = U_r Σ_r V_r^H        (10.9)

where Σ_r consists only of the r = rank(G_all) non-zero singular values, U_r consists of the r first columns of U, and V_r consists of the r first columns of V. Thus, V_r consists of the input directions with a non-zero effect on the outputs, and U_r consists of the output directions we can affect (reach) by use of the inputs. Let e_j = [0 ··· 0 1 0 ··· 0]^T be a unit vector with a 1 in position j and 0's elsewhere; then the j'th input is u_j = e_j^T u. We follow Cao (1995) and define

    projection for input j:  ‖e_j^T V_r‖₂        (10.10)

which is a number between 0 and 1: e_j^T V_r yields the projection of a unit input u_j onto the effective input space of G. Define e_i in a similar way, such that the i'th output is y_i = e_i^T y, and define

    projection for output i:  ‖e_i^T U_r‖₂        (10.11)

which is also a number between 0 and 1: e_i^T U_r yields the projection of a unit output y_i onto the effective (reachable) output space of G. The following theorem links the input and output (measurement) projections to the column and row sums of the RGA.

Theorem 10.1 (RGA and input and output projections). The i'th row sum of the RGA is equal to the square of the i'th output projection, and the j'th column sum of the RGA is equal to the square of the j'th input projection:

    Σ_j λ_ij = ‖e_i^T U_r‖₂²,    Σ_i λ_ij = ‖e_j^T V_r‖₂²        (10.12)

For a square non-singular matrix G, both the row and column sums in (10.12) are 1.
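Theorem 10.1 can be verified numerically for a random non-square plant; the sketch below checks the row-sum identity in (10.12), using the elementwise definition of the RGA for a non-square matrix, Λ = G_all ⊗ (G_all†)^T:

```python
import numpy as np

rng = np.random.default_rng(1)
G_all = rng.standard_normal((5, 2))            # 5 candidate outputs, 2 inputs

Lambda = G_all * np.linalg.pinv(G_all).T       # RGA of a non-square matrix
row_sums = Lambda.sum(axis=1)

U, s, Vh = np.linalg.svd(G_all, full_matrices=False)
r = int(np.sum(s > 1e-12))                     # rank
Ur = U[:, :r]
output_proj_sq = np.sum(Ur ** 2, axis=1)       # ||e_i^T U_r||_2^2 for each i

print(np.allclose(row_sums, output_proj_sq))   # the identity (10.12) holds
# Outputs whose row sum is much smaller than 1 are candidates for removal.
```

The same check works for the column sums against ‖e_j^T V_r‖₂², and for a square non-singular G_all all sums come out as 1.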

Proof: See Appendix A.4.

Remark. For the case of extra inputs the RGA-values depend on the input scaling, and for extra outputs on the output scaling. The variables must therefore be scaled prior to the analysis.

From (10.12) we see that the row and column sums of the RGA provide a useful way of interpreting the information available in the singular vectors. The RGA is a useful screening tool because it need only be computed once: it includes all the alternative inputs and/or outputs and thus avoids the combinatorial problem. The following examples show that although the RGA is an efficient screening tool, it must be used with some caution.

Example 10.2 Consider a plant with 2 inputs and 4 candidate outputs, of which we want to select 2. The four row sums of the RGA-matrix of the overall plant give the projection of each candidate output onto the effective output space of the plant. To maximize the output projection we should select outputs 1 and 2. However, this yields an ill-conditioned plant G1 with large RGA-elements, which is likely to be difficult to control. On the other hand, selecting outputs 1 and 3 yields a plant G2 which is well-conditioned. For comparison, the minimum singular values confirm this: that of G1 is much smaller than those of G2 and of the plant with all outputs included.

Example 10.3 Consider a plant with 2 inputs and 6 candidate outputs, of which we want to select 2. There exist 15 such combinations with 2 inputs and 2 outputs, and the RGA may be useful in providing an initial screening. The six row sums of the RGA-matrix again give the output projections, and to maximize the output projection we should select outputs 1 and 4. For this selection the minimum singular value is 2.12, which is only slightly smaller than with all outputs included. This shows that we have not lost much gain in the low-gain direction by using only 2 of the outputs.

However, there are a large number of other factors that determine controllability, such as RHP-zeros and sensitivity to uncertainty, and these must be taken into account when making the final selection. We discuss on page 435 the selection of extra measurements for use in a cascade control system.
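The corresponding numerical check for output selection is a sketch of the same RGA computation on a tall plant (the matrix below is invented and is not the plant of the examples above): the row sums give the output projections, and we pick the two outputs with the largest ones.

```python
import numpy as np

def rga(G):
    # non-square RGA via the pseudo-inverse: Lambda = G x (G^+)^T, elementwise
    return G * np.linalg.pinv(G).T

# Hypothetical plant with 2 inputs and 4 candidate outputs (rows); values invented
G_all = np.array([[10.0,  10.0],
                  [10.0,  -9.0],
                  [ 0.5,   0.4],
                  [ 1.0,   1.1]])

row_sums = rga(G_all).sum(axis=1)      # projection of each unit output, in [0, 1]
best = np.argsort(row_sums)[-2:]       # the two outputs with largest projection
print(np.round(row_sums, 3), sorted(best.tolist()))
```

Remember that the row sums only measure how much of each output direction the plant can reach; as warned above, the selection with the largest projection may still be ill-conditioned or have large RGA-elements.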

10.6 Control configuration elements

We now assume that the measurements, manipulations and controlled outputs are fixed. The available synthesis theories presented in this book result in a multivariable controller K which connects all available measurements/commands (v) with all available manipulations (u),

    u = K v    (10.13)

(the variables v will mostly be denoted y in the following). However, such a "big" (full) controller may not be desirable. By control configuration selection we mean the partitioning of measurements/commands and manipulations within the control layer. More specifically, we define:

Control configuration. The restrictions imposed on the overall controller K by decomposing it into a set of local controllers (subcontrollers, units, elements, blocks) with predetermined links and with a possibly predetermined design sequence where subcontrollers are designed locally.

In general this will limit the achievable performance compared to a simultaneous design (see also the remark on page 105). Some elements used to build up a specific control configuration are:

- Cascade controllers
- Decentralized controllers
- Feedforward elements
- Decoupling elements
- Selectors

These are discussed in more detail below, and in the context of the process industry in Shinskey (1988) and Balchen and Mumme (1988). First, some definitions:

Decentralized control is when the control system consists of independent feedback controllers which interconnect a subset of the output measurements/commands with a subset of the manipulated inputs. These subsets should not be used by any other controller. This definition of decentralized control is consistent with its use by the control community.

In a conventional feedback system a typical restriction on K is to use a one degree-of-freedom controller (so that we have the same controller for r and -y). In other cases we may use a two degrees-of-freedom controller, but we may impose the restriction that the feedback part of the controller (K_y) is first designed locally for disturbance rejection, and then the prefilter (K_r) is designed for command tracking. Of course, this limits the achievable performance compared to that of a two degrees-of-freedom controller designed as a whole. Similar arguments apply to other cascade schemes. In decentralized control we may rearrange the ordering of measurements/commands and manipulated inputs so that the feedback part of the overall controller K in (10.13) has a fixed block-diagonal structure.

Decoupling elements link one set of manipulated inputs (“measurements”) with another set of manipulated inputs. Feedforward elements link measured disturbances and manipulated inputs. then this may pose fundamental limitations on the achievable performance which cannot be overcome by advanced controller designs at higher layers. the subcontrollers are designed. and it is sometimes also used in the design of decentralized control systems. based on cascading (series interconnecting) the controllers in a hierarchical manner. In addition to restrictions on the structure of à . Selectors are used to select for control. Cascade control is when the output from one controller is the input to another. and are often viewed as feedforward elements (although this is not correct when we view the control system as a whole) where the “measured disturbance” is the manipulated input computed by another decentralized controller. Remark 2 Of course. a subset of the manipulated inputs or a subset of the outputs. This usually results from a sequential design of the control system. In particular.g. For most decomposed control systems we design the controllers sequentially. we may impose restrictions on the way. starting with the “fast” or “inner” or “lower-layer” control loops in the control hierarchy. for a hierarchical decentralized control system. a performance loss is inevitable if we decompose the control system. Remark 1 Sequential design of a decentralized controller results in a control system which is decomposed both horizontally (since à is diagonal) as well as vertically (since controllers at higher layers are tuned with lower-layer controllers in place). This usually involves a set of independent decentralized controllers. and give some justification for using such “suboptimal” configurations rather than directly . if we select a poor configuration at the lower (base) control layer. e. 
In this section.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ¾½ measurements/commands and manipulated inputs such that the feedback part of the overall controller à in (10. Horizontal decomposition. we discuss cascade controllers and selectors. For example. or rather in which sequence. this is relevant for cascade control systems. They are used to improve the performance of decentralized control systems. The choice of control configuration leads to two different ways of partitioning the control system: ¯ ¯ Vertical decomposition. These limitations imposed by the lower-layer controllers may include RHP-zeros (see the aero-engine case study in Chapter 12) or strong interactions (see the distillation case study in Chapter 12 where the ÄÎ -configuration yields large RGA-elements at low frequencies). This is broader than the conventional definition of cascade control which is that the output from one controller is the reference command (setpoint) to another. depending on the conditions of the system.13) has a fixed block-diagonal structure.

see Figure 10. A cascade control structure results when either of the following two situations arise: ¯ ¯ The reference Ö is an output from another controller (typically used for the case of an extra measurement Ý ). e. This cascade scheme is referred to as input resetting.8 we consider decentralized diagonal control. For example. Note that whenever we close a SISO control loop we lose the corresponding input. becomes a new degree of freedom. 10. in Section 10.¾¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ designing the overall controller à . A centralized implementation Ù Ã ´Ö   ý½ ´×µ´Ö½   ݽ µ · ý¾´×µ´Ö¾   ݾ µ (10. The “measurement” Ý is an output from another controller (typically used for the case of an extra manipulated input Ù . may be written Ù Centralized (parallel) implementation. or to reduce the effect of measurement noise.7. we discuss in more detail the hierarchical decomposition. as a degree of freedom.14) is a reference minus a measurement. in Section 10. For simplicity. Finally. partially controlled systems and sequential controller design. including cascade control. ݽ the controlled output (with an associated control objective Ö ½ ) and ݾ the extra measurement. velocity feedback is frequently used in mechanical systems. However.6.4(b) where Ù ¾ is the “measurement” for controller à ½ ). 10.14) where à ´×µ is a scalar.15) .6. we here use single-input single-output (SISO) controllers of the form Ù Ã ´×µ´Ö  Ý µ (10. and local flow cascades are used in process systems. where à is a 2-input-1-output controller. Ö . we can cascade controllers to make use of extra measurements or extra inputs. since the input to the controller in (10.4(a).2 Cascade control: Extra measurements In many cases we make use of extra measurements Ý ¾ (secondary outputs) to provide local disturbance rejection and linearization. but at the same time the reference. Later. This is conventional cascade control. Let Ù be the manipulated input. in Figure 10. Ù .g. 
ݵ.1 Cascade control systems We want to illustrate how a control system which is decomposed into subcontrollers can be used to solve multivariable control problems. It may look like it is not possible to handle non-square systems with SISO controllers.

cascades based on measuring the actual manipulated variable (in which case ݾ ÙÑ ) are commonly used to reduce uncertainty and nonlinearity at the plant input. For example. and this is sometimes referred to as parallel cascade . It also shows more clearly that Ö ¾ is not a degree-offreedom at higher layers in the control system. but rather the reference input to the faster secondary (or slave) controller þ . On the other hand.2. An advantage with the cascade implementation is that it more clearly decouples the design of the two controllers. Finally.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ Ö½ + ¹ ¹ - ý Ö¾¹ + ¹ - þ Ù¾¹ Plant ݾ Ý Õ ¹½ ¾¿ ÖÙ¾ Ö (a) Extra measurements ݾ (conventional cascade control) + ¹ ¹ + ¹ ¹ - ý þ Ù½ ¹ ¾ Õ Ù¹ Plant Ý Õ ¹ (b) Extra inputs Ù¾ (input resetting) Figure 10. ¼ in (10.4(a).17) Note that the output Ö ¾ from the slower primary controller à ½ is not a manipulated plant input. Consider conventional cascade control in Figure 10.16) Ù¾ þ´×µ´Ö¾   ݾ µ Ö¾ Ù½ (10. In the general case ݽ and ݾ are not directly related to each other. see the velocity feedback for the helicopter case study in Section 12. it allows for integral action in both loops (whereas usually only à ½½ should have integral action in (10. Remark.15)).4(a): Ö¾ ý ´×µ´Ö½   ݽ µ (10. Ö¾ ¼ (since we do not have a degree of freedom to control Cascade implementation (conventional cascade control). To obtain an implementation with two SISO controllers we may cascade the controllers as illustrated in Figure 10.15) the relationship between the centralized and cascade With Ö¾ implementation is ý½ þ ý and ý¾ þ . a centralized implementation is better suited for direct multivariable synthesis.4: Cascade implementations where in most cases ݾ ).

a poorly known nonlinear behaviour – and the inner loop serves to remove the uncertainty. then the input-output controllability of ¾ and ½ ¾ are in theory the same. the practical bandwidth is limited by the frequency ÛÙ where the phase of the plant is ½ ¼Æ (see section 5.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Ö½ + ¹ ¹ - ý + ¹ ¹ - þ Ù¾¹ ¾ + ¹ +¾ Õ ¹ ݾ ½ ½ + ¹ +½ Õ Ý¹ Figure 10. We want to derive conclusions (a) and (b) from an input-output controllability analysis. (c) explain why we may choose to use cascade control if we want to use simple controllers (even with ¾ ¼). control.5. it is common to encounter the situation in Figure 10. and for rejecting ¾ there is no fundamental advantage in measuring ݽ rather than ݾ . (c) In most cases. whereas a somewhat slower ý ´×µ ¾ à ´× · ½µ ×´¼ ½× · ½µ à Sketch the closed-loop response. in the inner loop. (b) The inner loop ľ ¾ þ removes the uncertainty if it is sufficiently fast (high gain feedback) and yields a transfer function ´Á · ľ µ ½ ľ close to Á at frequencies where ý is active. ½ ¾ ¾ ݽ depends . consider Figure 10. so an inner cascade loop may yield faster control (for rejecting ½ and tracking Ö½ ) if the phase of ¾ is less than that of ½ ¾ . Outline of solution: (a) Note that if ½ is minimum phase. and it is Exercise 10. Morari and Zafiriou (1989) conclude that the use of extra measurements is useful under the following circumstances: (a) The disturbance ¾ is significant and ½ is non-minimum phase.12). However. (b) The plant ¾ has considerable uncertainty associated with it – e.5 and let ½ ½ ´× · ½µ¾ ¾ ¾ ½ ×·½ We use a fast proportional controller þ PID-controller is used in the outer loop. such as when PID controllers are used. What is the bandwidth for each of the two loops? . With reference to the special (but common) case of conventional cascade control shown in Figure 10. 
In terms of design they recommended that þ is first designed to minimize the effect of ¾ on ݽ (with ý ¼) and then ý is designed to minimize the effect of ½ on ݽ .g.5: Common case of cascade control where the primary output ݽ depends directly on the extra measurement ݾ .1 Conventional cascade control. This is a special case of Figure 10.1.2 To illustrate the benefit of using inner cascades for high-order plants. and also.5 where directly on ݾ .4(a) with “Plant” considered further in Example 10. case (c) in the above example.   Exercise 10.
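Exercises of this kind are easy to explore numerically. Below is a minimal simulation sketch (forward-Euler, plain Python; the plant, disturbance and tunings are invented for illustration and are not the transfer functions of the exercise above) of conventional cascade control as in (10.16)-(10.17), with a fast proportional inner loop and a slower PI outer loop:

```python
# Conventional cascade control (eqs 10.16-10.17):
#   outer loop:  r2 = K1(s)(r1 - y1)   (slow PI)
#   inner loop:  u  = K2(s)(r2 - y2)   (fast P)
# Illustrative plant: dy2/dt = -y2 + u + d2,  dy1/dt = -y1 + y2
dt, T = 1e-3, 20.0
y1 = y2 = I1 = 0.0            # states; I1 is the outer PI integral
r1, d2 = 1.0, 0.5             # setpoint and a constant inner-loop disturbance
Kc2 = 20.0                    # high-gain inner loop removes most of d2's effect
Kc1, tauI = 2.0, 2.0          # outer PI controller
for _ in range(int(T / dt)):
    e1 = r1 - y1
    I1 += e1 * dt
    r2 = Kc1 * (e1 + I1 / tauI)     # outer controller output = inner reference
    u = Kc2 * (r2 - y2)             # inner proportional controller
    y2 += dt * (-y2 + u + d2)
    y1 += dt * (-y1 + y2)
print(round(y1, 3), round(y2, 3))   # y1 settles at r1 despite the disturbance
```

Because the inner loop has high gain, the constant disturbance d2 is almost entirely rejected locally, and the integral action in the outer loop removes the remaining offset in y1.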

10.6.3 Cascade control: Extra inputs

In some cases we have more manipulated inputs than controlled outputs. These may be used to improve control performance. Consider a plant with a single controlled output y and two manipulated inputs u1 and u2. Sometimes u2 is an extra input which can be used to improve the fast (transient) control of y, but if it does not have sufficient power or is too costly to use for long-term control, then after a while it is reset to some desired value ("ideal resting value").

Centralized (parallel) implementation. A centralized implementation u = K(r - y), where K is a 1-input 2-output controller, may be written

    u1 = K11(s)(r - y),   u2 = K21(s)(r - y)    (10.18)

Here two inputs are used to control one output, so to get a unique steady-state for the inputs u1 and u2 we usually let K11 have integral control whereas K21 does not. Then u2(t) will only be used for transient (fast) control and will return to zero (or more precisely to its desired value r_u2) as t -> infinity.

Cascade implementation (input resetting). To obtain an implementation with two SISO controllers we may cascade the controllers as shown in Figure 10.4(b). We again let input u2 take care of the fast control and u1 of the long-term control. The fast control loop is then

    u2 = K2(s)(r - y)    (10.19)

The objective of the other, slower controller is then to use input u1 to reset input u2 to its desired value r_u2:

    u1 = K1(s)(r_u2 - y1),   y1 = u2    (10.20)

and we see that the output from the fast controller K2 is the "measurement" for the slow controller K1. With r_u2 = 0 the relationship between the centralized and cascade implementation is K11 = -K1 K2 and K21 = K2. The cascade implementation again has the advantage of decoupling the design of the two controllers. It also shows more clearly that r_u2, the reference for u2, may be used as a degree-of-freedom at higher layers in the control system. Finally, we can have integral action in both K1 and K2, but note that the gain of K1 should be negative (if the effects of u1 and u2 on y are both positive).
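The input-resetting idea can likewise be sketched in a few lines of simulation (an invented first-order plant and tunings, not a model from this chapter; note the negative gain in the resetting controller, as remarked above):

```python
# Input resetting (eqs 10.19-10.20): the fast input u2 controls y, and a slow
# controller uses u1 to reset u2 to its desired value r_u2.
# Illustrative plant (invented): dy/dt = -y + u1 + u2 + d
dt, T = 1e-3, 60.0
y = u1 = I2 = Iu = 0.0
r, r_u2, d = 1.0, 0.0, 0.3
Kc2, tauI2 = 10.0, 0.5        # fast PI loop: u2 = K2(s)(r - y)
Kc1 = -0.2                    # slow resetting loop u1 = K1(s)(r_u2 - u2), with
                              # K1 = Kc1/s; the gain is NEGATIVE since u1 and
                              # u2 both act on y with positive sign
for _ in range(int(T / dt)):
    e = r - y
    I2 += e * dt
    u2 = Kc2 * (e + I2 / tauI2)
    Iu += (r_u2 - u2) * dt
    u1 = Kc1 * Iu
    y += dt * (-y + u1 + u2 + d)
print(round(y, 3), round(u2, 3), round(u1, 3))
# at steady state: y = r, u2 = r_u2 = 0, and u1 = r - d has taken over
```

The fast input handles the transient, and the slow loop gradually transfers the steady-state load to u1 so that u2 returns to its ideal resting value.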

Exercise 10.   manipulated inputs (Ù¾ and Ù¿ ).21) Ù¾ þ ´×µ´Ö¾   ݾ µ   Ö¾ Ù½ (10. ÿ (input resetting using slower input).22) .¹ + þ ÿ Ù¾ Õ ¹ ¹ ½ ¿ + ¹ ݾ + Õ¹ ¾ Õ ¹ ݽ Ö¿ Ù¿ Figure 10. one controlled output (ݽ which should be close to Ö½ ) and two measured variables (ݽ and ݾ ). Ù½ and Ù¾ for the cascade input resetting scheme in Figure 10. for the control system in Figure 10. because the extra input Ù¾ . Consider the system in Figure 10.4 Two layers of cascade control.4.6 with two Ö½ ¹. and ý (final adjustment of ݽ ). ý ½ × and ½½ ½¾ þ ½¼ ×.¹ + Ö¾ ý ¹. whereas controllers þ and ÿ are cascaded to achieve input resetting. As an example use ½ ½ and use integral action in both controllers. For example. Example 10. Show that input Ù¾ is reset at steady-state.6 controllers ý and þ are cascaded in a conventional manner.6: Control configuration with two layers of cascade control.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Remark 1 Typically.4(b). the controllers in a cascade system are tuned one at a time starting with the fast loop. is reset to a desired position by the outer cascade. the cascade implementation of input resetting is sometimes referred to as valve position control. Remark 2 In process control. In Figure 10. Exercise 10. The corresponding equations are Ù½ ý ´×µ´Ö½ ݽ µ (10. Input Ù¾ has a more direct effect on ݽ than does input Ù¿ (there is a large delay in ¿ ´×µ).3 Draw the block diagrams for the two centralized (parallel) implementations corresponding to Figure 10.4 Derive the closed-loop transfer functions for the effect of Ö on Ý . The extra measurement ݾ is closer than ݽ to the input Ù¾ and may be useful for detecting disturbances (not shown) affecting ½ .¹ + ¹.6 we would probably tune the three controllers in the order þ (inner cascade using fast input). usually a valve. Input Ù¾ should only be used for transient control as it is desirable that it remains close to Ö¿ ÖÙ¾ .

which is the reference value for ݾ . say ½¼% of the total flow). For example. this is used in a heater where the heat input (Ù) normally controls temperature (Ý ½ ). In this case the control range is often split such that. 10. However. We assumed above that the extra input is used to improve dynamic performance.5 Selectors Split-range control for extra inputs. Auctioneering selectors are used to decide to control one of several similar outputs. or we need to make a choice about which outputs are the most important to control. controller ÿ manipulates Ù¿ slowly in order to reset input Ù¾ to its desired value Ö¿ . Controller þ controls the secondary output ݾ using input Ù¾ . A practical case of a control system like the one in Figure 10. 10. we cannot control all the outputs independently. the improvement must be traded off against the cost of the extra actuator. Selectors or logic switches are often used for the latter.4 Extra inputs and outputs (local feedback) In many cases performance may be improved with local feedback loops involving extra manipulated inputs and extra measurements. Override selectors are used when several controllers compute the input value. Selectors for too few inputs. Ù¾ the bypass flow (which should be reset to Ö¿ . In this case ݾ may be the outlet temperature from the pre-heater. A completely different situation occurs if there are ). and Ù¿ the flow of heating medium (steam). . Consider the case with one input (Ù) and several outputs (Ý ½ ݾ In this case.23) Controller ý controls the primary output ݽ at its reference Ö½ by adjusting the “input” Ù½ .6. Another situation is when input constraints make it necessary to add a manipulated input. Finally.6. this may be used to adjust the heat input (Ù) to keep the maximum temperature (Ñ Ü Ý ) in a fired heater below some value. 
Make a process flowsheet with instrumentation lines (not a block diagram) for this heater/reactor process.6 is in the use of a pre-heater to keep the reactor temperature ݽ at a given value Ö½ . too few inputs. and we select the smallest (or largest) as the input. Ù¿ Exercise 10. For example.24 on page 208.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ¾ ÿ ´×µ´Ö¿   Ý¿ µ Ý¿ Ù¾ (10. measurement and control system. except when the pressure (Ý ¾ µ is too large and pressure control takes over. An example where local feedback is required to counteract the effect of high-order lags is given for a neutralization process in Figure 5. and Ù¾ is used when Ý ¾ ݽ ÝÑ Ü . so we either need to control all the outputs in some average manner.5 Process control application. Ù ½ is used for control when Ý ¾ Ý Ñ Ò Ý½ . for example. The use of local feedback is also discussed by Horowitz (1991).
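The selector logic itself is trivial to express in code. A minimal sketch (with invented signal values) of the two selector types described above:

```python
def override_select(candidate_inputs):
    # Override selector: several controllers compute the input value and the
    # smallest (most conservative) one is applied to the plant.
    return min(candidate_inputs)

def auctioneering_select(outputs):
    # Auctioneering selector: decide which of several similar outputs to
    # control, here the largest (e.g. the maximum temperature in a fired
    # heater, which must be kept below some value).
    return max(range(len(outputs)), key=lambda i: outputs[i])

# Normal operation: the temperature controller's output is the smallest.
u = override_select([0.8, 1.5])              # [u_temperature, u_pressure]
# Pressure too high: the pressure controller's smaller output takes over.
u_override = override_select([0.8, 0.4])
# Control the hottest of three zone temperatures.
worst = auctioneering_select([310.0, 345.0, 330.0])
print(u, u_override, worst)
```

The switching is a static nonlinearity, which is why selector schemes are simple to implement but awkward to cover with the linear design methods of this book.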

6(a). The issue of simplified implementation and reduced computation load is also important in many applications. with cascade and decentralized control each controller is usually tuned one at a time with a minimum of modelling effort. sometimes even online by selecting only a few parameters (e. but any theoretical treatment of decentralized control requires a plant model. However. On the other hand. Why do we need a theory for cascade and decentralized control? We just argued that the main advantage of decentralized control was its saving on the modelling effort. This seems to be a contradiction. such as the RGA. it is usually more important (relative to centralized multivariable control) that the fast control loops be tuned to respond quickly. For decentralized diagonal control the RGA is a useful tool for addressing this pairing problem. It may therefore be both simpler and better in terms of control performance to set up the controller design problem as an optimization problem and let the computer do the job. the number of possible pairings is usually very high. resulting in a centralized multivariable controller as used in other chapters of this book. A fundamental reason for applying cascade and decentralized control is thus to save on modelling effort. which are a prerequisite for applying multivariable control. To be able to tune the controllers independently. Other advantages of cascade and decentralized control include the following: they are often easier to understand by operators. decomposed control configurations can easily become quite complex and difficult to maintain and understand. why is cascade and decentralized control used in practice? There are a number of reasons. does not change too much as outer loops are closed. for example. in the input channels. Based on the above discussion. 
the main challenge is to find a control configuration which allows the (sub)controllers to be tuned independently based on a minimum of model information (the pairing problem).6. If this is the case.g. one desirable property is that the steady-state gain from Ù to Ý in an “inner” loop (which has already been tuned). but the most important one is probably the cost associated with obtaining good plant models. For example. but in most cases physical insight and simple tools. they reduce the need for control links and allow for decentralized implementation. can be helpful in reducing the number of alternatives to a manageable number. we may still want to use a model to decide on a control structure and to decide on whether acceptable control with a decentralized . and they tend to be less sensitive to uncertainty.6 Why use cascade and decentralized control? As is evident from Figure 10. we must require that the loops interact only to a limited extent. Since cascade and decentralized control systems depend more strongly on feedback rather than models as their source of information. their tuning parameters have a direct and “localized” effect. For industrial problems. but is becoming less relevant as the cost of computing power is reduced. even though we may not want to use a model to tune the controllers.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 10. the gain and integral time constant of a PI-controller).

usually starting with the fast loops (“bottom-up”).7 Hierarchical and partial control A hierarchical control system results when we design the subcontrollers in a sequential manner. We first design a controller à ¾ to control the subset Ý ¾ . We divide the outputs into two classes: ¯ ݽ – (temporarily) uncontrolled output (for which there is an associated control ¯ ݾ – (locally) measured and controlled output We also subdivide the available manipulated inputs in a similar manner: objective) ¯ Ù¾ – inputs used for controlling Ý ¾ ¯ Ù½ – remaining inputs (which may be used for controlling Ý ½ ) We have inserted the word temporarily above. and provide some guidelines for designing hierarchical control systems. The modelling effort in this case is less. we may then design a controller ý for the remaining outputs. we here consider the partially controlled system as it appears after having implemented only a local control system where Ù¾ is used to control Ý ¾ . The outputs Ý (which include ݽ and ݾ ) all have an associated control objective. This means that the controller at some higher layer in the hierarchy is designed based on a partially controlled plant. Sequential design of conventional cascade control. However. In most of the development that follows we assume that the outputs Ý ¾ are tightly controlled. 10. With this controller þ in place (a partially controlled system). since Ý ½ is normally a controlled output at some higher layer in the hierarchy. 10. The reason for controlling Ý ¾ is to improve the control of Ý ½ . The outputs Ý ¾ are additional measured (“secondary”) variables which are not important variables in themselves. In this section we derive transfer functions for partial control.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ¾ configuration is possible. and we use a hierarchical control system. Sequential design of decentralized controllers. Four applications of partial control are: 1. 2. 
The .1 Partial control Partial control involves controlling only a subset of the outputs for which there is a control objective.7. because the model may be of a more “generic” nature and does not need to be modified for each particular application.

Indirect control was discussed in Section 10. and in all cases we (1) want the effect of the disturbances on Ý ½ to be small (when Ý ¾ is controlled). By eliminating Ù ¾ and ݾ . the outputs ݽ remain uncontrolled and the set Ù ½ remains unused. 3. 4. see Figure 10. we can indirectly achieve acceptable control of Ý ½ . The outputs Ý ½ have an associated control objective.8). By partitioning the inputs and outputs.25) ¾½ Ù½ · ¾¾ Ù¾ · ¾ Assume now that feedback control Ù ¾ þ ´Ö¾  Ý¾  Ò¾ µ is used for the “secondary” subsystem involving Ù ¾ and ݾ . but they are not measured. and the number of alternatives has a combinatorial growth as illustrated by (10. and (2) want it to be easy to control ݾ using Ù¾ (dynamically). . The outputs Ý (which include Ý ½ and ݾ ) all have an associated control objective.4. Instead. “True” partial control.¿¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ references Ö¾ are used as degrees of freedom for controlling Ý ½ so the set Ù½ is often empty. Indirect control. we aim at indirectly controlling Ý ½ by controlling the “secondary” measured variables Ý ¾ (which have no associated control objective). In all cases there is a control objective associated with Ý ½ and a feedback loop involving measurement and control of Ý ¾ . but there is no “outer” loop involving Ý ½ .26) . we then get the following model for the resulting partially controlled system: ݽ   ¡ ½½   ½¾ þ´Á · ¾¾ þµ ½ ¾½ ¡ Ù½ ·   ½   ½¾ þ ´Á · ¾¾ þ µ ½ ¾ · ½¾ þ ´Á · ¾¾ þµ ½ ´Ö¾   Ò¾ µ (10.24) ݽ ݾ ½½ Ù½ · ½¾ Ù¾ · ½ (10.7. Let us derive the transfer functions for Ý ½ when ݾ is controlled. The following table shows more clearly the difference between the four applications of partial control. and we consider whether by controlling only the subset ݾ . Measurement and control of Ý ½ ? Yes Yes No No Control objective for ݾ ? Yes No Yes No Sequential decentralized control Sequential cascade control “True” partial control Indirect control The four problems are closely related. the overall model Ý Ù may be written (10. 
This is similar to cascade control. One difficulty is that this requires a separate analysis for each choice of y2 and u2, and the number of alternatives has a combinatorial growth.

The following table shows more clearly the difference between the four applications of partial control:

                                         Measurement and     Control objective
                                         control of y1?      for y2?
    Sequential decentralized control          Yes                 Yes
    Sequential cascade control                Yes                 No
    "True" partial control                    No                  Yes
    Indirect control                          No                  No

The four problems are closely related, and in all cases we (1) want the effect of the disturbances on y1 to be small (when y2 is controlled), and (2) want it to be easy to control y2 using u2 (dynamically).

Let us derive the transfer functions for y1 when y2 is controlled; see Figure 10.7. By partitioning the inputs and outputs, the overall model y = G u + Gd d may be written

    y1 = G11 u1 + G12 u2 + Gd1 d    (10.24)
    y2 = G21 u1 + G22 u2 + Gd2 d    (10.25)

Assume now that feedback control u2 = K2 (r2 - y2 - n2) is used for the "secondary" subsystem involving u2 and y2. By eliminating u2 and y2, we then get the following model for the resulting partially controlled system:

    y1 = (G11 - G12 K2 (I + G22 K2)^{-1} G21) u1
         + (Gd1 - G12 K2 (I + G22 K2)^{-1} Gd2) d
         + G12 K2 (I + G22 K2)^{-1} (r2 - n2)    (10.26)

Figure 10.7: Partial control

Remark. (10.26) may be rewritten in terms of linear fractional transformations. For example, the transfer function from u1 to y1 is

    Fl(G, -K2) = G11 - G12 K2 (I + G22 K2)^{-1} G21

Tight control of y2. In some cases we can assume that the control of y2 is fast compared to the control of y1. To obtain the model we may formally let K2 -> infinity in (10.26), but it is better to solve for u2 in (10.25) to get

    u2 = -G22^{-1} G21 u1 - G22^{-1} Gd2 d + G22^{-1} y2    (10.27)

We have here assumed that G22 is square and invertible; otherwise we can get the least-square solution by replacing G22^{-1} by the pseudo-inverse. For the case of tight control, the control error e2 := y2 - r2 = -n2 equals (minus) the measurement error (noise) n2, so y2 = r2 - n2. On substituting (10.27) into (10.24) we then get

    y1 = (G11 - G12 G22^{-1} G21) u1 + (Gd1 - G12 G22^{-1} Gd2) d + G12 G22^{-1} (r2 - n2)    (10.28)
         '--------- Pu ---------'      '--------- Pd ---------'    '-- Pr --'

where Pd is called the partial disturbance gain, which is the disturbance gain for a system under perfect partial control, and Pu is the effect of u1 on y1 with y2 perfectly controlled. In many cases the set u1 is empty (there are no extra inputs). The advantage of the model (10.28) over (10.26) is that it is independent of K2, but we stress that it only applies at frequencies where y2 is tightly controlled.
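At steady state the expressions for Pu and Pd reduce to plain matrix algebra. A sketch with invented steady-state gains (scalar 1x1 blocks here, purely for illustration):

```python
import numpy as np

# Partial control at steady state with y2 perfectly controlled (r2 = n2 = 0):
#   Pu = G11 - G12 G22^{-1} G21     (effect of u1 on y1)
#   Pd = Gd1 - G12 G22^{-1} Gd2     (partial disturbance gain)
# Invented steady-state gains, not from any example in this chapter:
G11, G12 = np.array([[2.0]]), np.array([[1.0]])
G21, G22 = np.array([[1.5]]), np.array([[3.0]])
Gd1, Gd2 = np.array([[4.0]]), np.array([[6.0]])

G22_inv = np.linalg.inv(G22)
Pu = G11 - G12 @ G22_inv @ G21
Pd = Gd1 - G12 @ G22_inv @ Gd2
print(Pu, Pd)   # here closing the y2 loop halves the disturbance gain on y1
```

Comparing Pd with Gd1 shows directly how much (or how little) closing the secondary loop helps reject the disturbance in the primary output.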

etc. The control of Ý ¾ using Ù¾ should provide local disturbance rejection.2 Hierarchical control and sequential design A hierarchical control system arises when we apply a sequential design procedure to a cascade or decentralized control system. The objectives for this hierarchical decomposition may vary: 1.7. and is not required to operate the plant. 10.g.28) have been derived by many authors.g. see the work of Manousiouthakis et al. Relationships similar to those given in (10. 2. the input-output controllability of the subsystem involving use of Ù¾ to control ݾ should be good (consider ¾¾ and ¾ ). with this lower-layer control system in place. e. that is.28) (for the case when we can assume ݾ perfectly controlled). The control of Ý ¾ using Ù¾ should not impose unnecessary control limitations on the remaining control problem which involves using Ù ½ and/or Ö¾ to control ݽ . 4. To allow simple models when designing the higher-layer control system (à ½ ). The high-frequency dynamics of the models of the partially controlled plant (e. ÈÙ and ÈÖ ) may be simplified if ý is mainly effective at lower frequencies. The latter is the case in many process control applications where we first close a number of faster “regulatory” loops in order to “stabilize” the plant. The higher layer control system (à ½ ) is then used mainly for optimization purposes. 3. Hovd and Skogestad (1993) proposed some criteria for selecting Ù¾ and ݾ for use in the lower-layer control system: 1. but rather to a plant that is sensitive to disturbances and which is difficult to control manually. that is. 3. .26) (for the general case) or (10. By “unnecessary” we mean limitations (RHP-zeros. The idea is to first implement a local lower-layer (or inner) control system for controlling the outputs Ý ¾ . we design a controller à ½ to control ݽ . it should minimize the effect of disturbances on Ý ½ (consider È for ݾ tightly controlled). Next. 
The lower layer must quickly implement the setpoints computed by the higher layers. To allow the use of longer sampling intervals for the higher layers (à ½ ). To “stabilize”1 the plant using a lower-layer control system (à ¾ ) such that it is amenable to manual control.¿¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Remark. To allow for simple or even on-line tuning of the lower-layer control system (à ¾ ). ill-conditioning.) that did not ½ The terms “stabilize” and “unstable” as used by process operators may not refer to a plant that is unstable in a mathematical sense. (1986) on block relative gains and the work of Haggblom and Waller (1988) on distillation control configurations. The appropriate model for designing à ½ is given by (10. 2. Based on these objectives.

but the plant has poles in or close to the origin and needs to be stabilized.

Example 10.5 Control configurations for distillation columns. The overall control problem for the distillation column in Figure 10.8 has 5 inputs

    u = (L, V, D, B, VT)^T

(these are all flows: reflux L, boilup V, distillate D, bottom flow B, overhead vapour VT) and 5 outputs

    y = (yD, xB, MD, MB, p)^T

(these are compositions and inventories: top composition yD, bottom composition xB, condenser holdup MD, reboiler holdup MB, pressure p); see Figure 10.8. In most cases, the distillation column is first stabilized by closing three decentralized SISO loops for level and pressure, so y2 = (MD, MB, p)^T, and the remaining outputs are y1 = (yD, xB)^T. The three SISO loops for controlling y2 usually interact weakly and may be tuned independently of each other. However, since each level (tank) has an inlet and two outlet flows, there exist many possible choices for u2 (and thus for u1). By convention, each choice ("configuration") is named by the inputs u1 left for composition control. For example, the "LV-configuration" used in many examples in this book refers to a partially controlled system where we use u1 = (L, V)^T to control y1 (and we assume that there is a control system in place which uses u2 = (D, B, VT)^T to control y2). Another configuration is the DV-configuration, where u1 = (D, V)^T and thus u2 = (L, B, VT)^T.

The LV-configuration is good from the point of view that the control of y1 using u1 is nearly independent of the tuning of the controller K2 involving y2 and u2. However, for high-purity separations the RGA-matrix may have some large elements, and the problem of controlling y1 by u1 ("plant" Pu) is then often strongly interactive with large steady-state RGA-elements in Pu. For the DV-configuration the steady-state interactions from u1 to y1 are generally much less, and Pu has small RGA-elements; this problem usually has no inherent control limitations caused by RHP-zeros. Another complication is that composition measurements are often expensive and unreliable. Consider the controllability of Pu for y2 tightly controlled.

Figure 10.8: Typical distillation column controlled with the LV-configuration.

If y2 is tightly controlled, then none of the configurations seems to yield RHP-zeros in Pu. Important issues to consider are then the disturbance sensitivity (the partial disturbance gain Pd should be small) and the interactions (the RGA-elements of Pu). These issues are discussed by, for example, Waller et al. (1988) and Skogestad et al. (1990). Another important issue is that it is often not desirable to have tight level loops, and some configurations, like the DV-configuration mentioned above, are sensitive to the tuning of K2; for example, a slow level loop for MD may introduce unfavourable dynamics for the response from u1 to y1.

There are also many other possible configurations (choices for the two inputs in u1); with five inputs there are 10 alternative configurations. Furthermore, one often allows for the possibility of using ratios between flows as possible degrees of freedom in u1, and this sharply increases the number of alternatives. Expressions which directly relate the models for various configurations, e.g. relationships between Pu and Pd for the LV-configuration and those for the DV-configuration, are given in Haggblom and Waller (1988) and Skogestad and Morari (1987a). However, to select a good distillation control configuration it may be simpler to start from the overall model G and derive the models for the configurations using (10.26) or (10.28), which are used in the references mentioned above; see also the MATLAB file on page 501. One should first consider the problem of controlling levels and pressure (y2); this eliminates a few alternatives, so the final choice is based on the 2x2 composition control problem (y1). This is further discussed in Skogestad (1997).
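The count of ten alternatives follows directly from choosing which two of the five inputs are left in u1 for composition control. A minimal sketch (input names as in the example):

```python
from itertools import combinations

# The five manipulated flows of the column in Figure 10.8
inputs = ["L", "V", "D", "B", "VT"]  # reflux, boilup, distillate, bottoms, overhead vapour

# Each configuration is named by the pair u1 left for composition control
configs = ["".join(pair) for pair in combinations(inputs, 2)]
print(len(configs))   # 10
print(configs)        # ['LV', 'LD', 'LB', 'LVT', 'VD', ...]
```

Allowing ratios such as L/D as degrees of freedom enlarges this candidate set well beyond ten, which is the combinatorial growth the text refers to.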

Because of the problems of interactions and the high cost of composition measurements, we often find in practice that only one of the two product compositions is controlled ("true" partial control). Another common solution is to make use of additional temperature measurements from the column, where their reference values are set by a composition controller in a cascade manner.

In summary, the overall 5x5 distillation control problem is solved by first designing a 3x3 controller K2 for levels and pressure, and then designing a 2x2 controller K1 for composition control. This is then a case of (block) decentralized control where the controller blocks K1 and K2 are designed sequentially (in addition, the blocks K1 and K2 may themselves be decentralized).

Sequential design of cascade control systems. Sequential design is also used for the design of cascade control systems. Consider the conventional cascade control system in Figure 10.4(a), where we have additional "secondary" measurements y2 with no associated control objective, and the objective is to improve the control of the primary outputs y1 by locally controlling y2. The idea is that this should reduce the effect of disturbances and uncertainty on y1. Most of the arguments given earlier, for the separation into an optimization and a control layer, and in Section 10.3, for the selection of controlled outputs, apply to cascade control if the term "optimization layer" is replaced by "primary controller" and "control layer" is replaced by "secondary controller". In particular, it follows that we should select secondary measurements y2 (and inputs u2) such that Pd is small and at least smaller than Gd1. More precisely, we want the input-output controllability of the "plant" [Pu Pr] (or Pr if the set u1 is empty), with disturbance model Pd, to be better than that of the plant [G11 G12] (or G12) with disturbance model Gd1. In other words, it should be easy to control y1 by using as degrees of freedom the references r2 (for the secondary outputs) or the unused inputs u1. Note that Pr is the "new" plant as it appears with the inner loop closed.

Exercise 10.6 The block diagram in Figure 10.5 shows a cascade control system where the primary output y1 depends directly on the extra measurement y2, so G12 = G1 G2, G22 = G2, Gd1 = [I G1] and Gd2 = [0 I]. Show that Pd = [I 0] and Pr = G12 G22^-1 = G1, and discuss the result.

This is discussed in detail in Example 10.7 below.
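The claim in Exercise 10.6 can be checked numerically. The sketch below assumes the reconstructed block structure G12 = G1 G2, G22 = G2, Gd1 = [I G1], Gd2 = [0 I]; the matrices standing in for the blocks G1 and G2 are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
G1 = rng.standard_normal((n, n))
G2 = rng.standard_normal((n, n))

# Structure of Figure 10.5: y1 = G1 y2 + d1 and y2 = G2 u + d2
G12 = G1 @ G2
G22 = G2
Gd1 = np.hstack([np.eye(n), G1])               # d = (d1, d2); d1 acts directly on y1
Gd2 = np.hstack([np.zeros((n, n)), np.eye(n)]) # only d2 enters y2

Pd = Gd1 - G12 @ np.linalg.inv(G22) @ Gd2      # partial disturbance gain
Pr = G12 @ np.linalg.inv(G22)                  # "new" plant with the inner loop closed

assert np.allclose(Pd, np.hstack([np.eye(n), np.zeros((n, n))]))  # Pd = [I 0]
assert np.allclose(Pr, G1)                                        # Pr = G1
```

The result Pd = [I 0] shows that tight inner-loop control of y2 removes the effect of d2 on y1 entirely, leaving only the direct disturbance d1.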

10.7.3 "True" partial control

We here consider the case where we attempt to leave a set of primary outputs y1 uncontrolled, and instead achieve acceptable control of y1 indirectly by controlling y2 at a constant value r2. This "true" partial control may be possible in cases where the outputs are correlated, such that controlling the outputs y2 indirectly gives acceptable control of y1. It is often considered if we have an m x m plant G(s) where acceptable control of all m outputs is difficult. One justification for partial control is that measurements, actuators and control links cost money, and we therefore prefer control schemes with as few control loops as possible.

To analyze the feasibility of partial control, consider the effect of disturbances on the uncontrolled output(s) y1 as given by (10.28). Suppose all variables have been scaled as discussed in Section 1.4. Then we have:

- A set of outputs y1 may be left uncontrolled only if the effects of all disturbances on y1, as expressed by the elements in the corresponding partial disturbance gain matrix Pd, are less than 1 in magnitude at all frequencies.

There may also be changes in r2 (of magnitude R2) which may be regarded as disturbances on the uncontrolled outputs y1. We then also have:

- A set of outputs y1 may be left uncontrolled only if the effects of all reference changes in the controlled outputs (y2) on y1, as expressed by the elements in the matrix G12 G22^-1 R2, are less than 1 in magnitude at all frequencies.

That is, to evaluate the feasibility of partial control one must, for each choice of controlled outputs (y2) and corresponding inputs (u2), rearrange the system as in (10.24) and (10.25) and compute Pd using (10.28).

One uncontrolled output and one unused input. As an alternative to rearranging G for each candidate control configuration and computing Pd from (10.28), we may directly evaluate the partial disturbance gain from the overall model y = G u + Gd d. Suppose we leave one input uj unused and one output yi uncontrolled. The effect of a disturbance dk on the uncontrolled output yi is then

   ( yi / dk ) for uj = 0, yl = 0 (l != i)  =  [ G^-1 Gd ]jk / [ G^-1 ]ji     (10.29)

where "uj = 0, yl = 0" means that input uj is constant (unused) and the remaining outputs yl are constant (perfectly controlled).

Proof of (10.29): Rewrite y = G u + Gd d as u = G^-1 y - G^-1 Gd d. Set yl = 0 for all l != i; then uj = [ G^-1 ]ji yi - [ G^-1 Gd ]jk dk, and by setting uj = 0 we find yi = ( [ G^-1 Gd ]jk / [ G^-1 ]ji ) dk. The proof is from Skogestad and Wolff (1992).
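Formula (10.29) is easy to evaluate directly from an overall steady-state model. A sketch in Python (the book's own files are in MATLAB); the gain matrices below are illustrative distillation-type values, not quoted from this section:

```python
import numpy as np

def partial_disturbance_gain(G, Gd, i, j):
    """Gain from each disturbance to the uncontrolled output y_i when input u_j
    is unused and all other outputs are perfectly controlled, cf. (10.29):
        y_i / d_k = [G^-1 Gd]_{jk} / [G^-1]_{ji}
    """
    Ginv = np.linalg.inv(G)
    return (Ginv @ Gd)[j, :] / Ginv[j, i]

# Hypothetical 2x2 steady-state distillation model (inputs L, V; disturbances F, zF)
G = np.array([[87.8, -86.4],
              [108.2, -109.6]])
Gd = np.array([[7.88, 8.81],
               [11.72, 11.19]])

# Leave y1 uncontrolled (i=0) and u1 unused (j=0)
print(partial_disturbance_gain(G, Gd, i=0, j=0))   # ~ [-1.36, -0.011]
```

The first element being somewhat above 1 in magnitude, while the second is tiny, matches the pattern discussed in Example 10.7: only one disturbance remains a problem for the uncontrolled output.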

We want Pd small, so (10.29) gives direct insight into how to select the uncontrolled output and the unused input:

1. Select the unused input uj such that the j'th row in G^-1 Gd has small elements. That is, keep an input constant (unused) if its desired change is small.

2. Select the uncontrolled output yi and the unused input uj such that the ji'th element in G^-1 is large. That is, keep an output uncontrolled if it is insensitive to changes in the unused input when the other outputs are controlled.

(10.29) may be generalized to the case with several uncontrolled outputs and unused inputs (Zhao and Skogestad, 1997). If G (and thus G11) is square, we first reorder G such that the upper-left subsystem G11 contains the uncontrolled and unused variables; at steady-state (s = 0) we then have, consistent with (10.29),

   Pd  =  [ G^-1 ]11^-1 [ G^-1 Gd ]1*     (10.30)

where [ G^-1 Gd ]1* denotes the first row. This result may also be derived from the definition of Pd in (10.28) by making use of the Schur complement (see Appendix A).

Example 10.6 Consider the FCC process in Exercise 6.16 on page 250, where we want to leave one input unused and one output uncontrolled; the numerical steady-state values of G(0) and (G^-1 Gd)(0) are as given there. From the second rule, since all elements in the third row of G^-1 are large, it seems reasonable to let input u3 be unused. However, since the elements within each row of G^-1 Gd are similar in magnitude, as are also the relevant elements of G^-1, the rules following (10.29) do not clearly favour any particular partial control scheme. (The outputs are mainly selected to avoid the presence of RHP-zeros, as is done in Exercise 6.16.)

Example 10.7 Partial and feedforward control of a 2x2 distillation process. We next consider a 2x2 distillation process where it is difficult to control both outputs independently due to strong interactions. The process has 2 inputs (reflux L and boilup V), 2 outputs (product compositions yD and xB) and 2 disturbances (feed flowrate F and feed composition zF). We use a dynamic model which includes liquid flow dynamics; the model is given in Chapter 12. To improve the performance of y1 we also consider the use of feedforward control, where u1 is adjusted based on measuring the disturbance (but we need no measurement of y1).

The magnitudes of the elements in Pd are much smaller than those in Gd, so closing one composition loop significantly reduces the effect of the disturbances on the uncontrolled output. This is confirmed by the values of the partial disturbance gains, which are quite similar for the four candidate partial control schemes Pd^212, Pd^211, Pd^122 and Pd^121; the superscripts denote the controlled output and the corresponding input. Frequency-dependent plots of Gd and Pd show that the conclusion at steady-state also applies at higher frequencies.

Let us consider in more detail scheme 212 (Pd^212), which has the smallest disturbance sensitivity for the uncontrolled output. This scheme corresponds to controlling output y2 (the bottom composition xB) using u2 (the boilup V), with y1 (the top composition yD) uncontrolled. This is illustrated in Figure 10.9, where we show, for the uncontrolled output y1 and the worst disturbance d1, both the open-loop disturbance gain (curve 1) and the partial disturbance gain (curve 2).

Figure 10.9: Effect of disturbance 1 on output 1 for the distillation column example (magnitude versus frequency [rad/min]). Curves: 1 open-loop disturbance gain gd11; 2 partial disturbance gain Pd; 3 add static feedforward; 4 20% error in FF; 5 add filter to FF.

For disturbance 2 the partial disturbance gain (not shown) remains below 1 at all frequencies. However, the partial disturbance gain for disturbance 1 (the feed flowrate F) is somewhat above 1 at low frequencies, so let us next consider how we may reduce its effect on y1.

One approach is to reduce the disturbance itself, for example by installing a buffer tank (as in the pH example in Chapter 5). However, a buffer tank has no effect at steady-state, where the partial disturbance gain exceeds 1, so it does not help in this case.

Another approach is to install a feedforward controller based on measuring d1 and adjusting u1 (the reflux L), which is so far unused. In practice, this is easily implemented as a ratio controller which keeps L/F constant. In terms of our linear model, the equivalent of this ratio controller is to use u1 = -kf d1, where kf is the corresponding 1,1-element of (G^-1 Gd)(0). This eliminates the steady-state effect of d1 on y1 (provided the other control loop is closed).
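Following the derivation behind (10.29), the static feedforward gain that cancels the steady-state effect of d1 on y1 is the corresponding element of (G^-1 Gd)(0), and a 20% gain error leaves 20% of the partial disturbance gain. A sketch with the same illustrative numbers as above:

```python
import numpy as np

G = np.array([[87.8, -86.4], [108.2, -109.6]])   # illustrative steady-state gains
Gd = np.array([[7.88, 8.81], [11.72, 11.19]])
Ginv = np.linalg.inv(G)

i, j, k = 0, 0, 0                            # y1 uncontrolled, u1 (L) unused, disturbance d1
pd = (Ginv @ Gd)[j, k] / Ginv[j, i]          # partial disturbance gain (10.29)

kf = (Ginv @ Gd)[j, k]                       # static feedforward: u1 = -kf * d1 cancels pd
residual = pd * (1 - 1.2)                    # 20% feedforward gain error
print(abs(residual) / abs(pd))               # 0.2 of the original partial gain remains
```

At higher frequencies the purely static gain kf is too aggressive, which is why the text then rolls it off with the 3 min filter in (10.32).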

The effect of the disturbance after including this static feedforward controller is shown as curve 3 in Figure 10.9. However, due to measurement error we cannot achieve perfect feedforward control, so to be realistic let us assume the error is 20%, i.e. u1 = -1.2 kf d1. The resulting steady-state effect of the disturbance is then Pd(0)(1 - 1.2), i.e. 20% of Pd(0) in magnitude, which is still acceptable. However, as seen from the frequency-dependent plot (curve 4), the effect is above 0.5 at higher frequencies, and the feedforward controller in fact makes the response worse there (as seen when comparing curves 3 and 4 with curve 2). The reason for this undesirable peak is that the feedforward controller, which is purely static, reacts too fast. To avoid this we filter the feedforward action with a time constant of 3 min, resulting in the feedforward controller

   u1  =  - ( kf / (3s + 1) ) d1     (10.32)

from disturbance d1 to u1. To be realistic we again assume a feedforward error of 20%. The resulting effect of the disturbance on the uncontrolled output is shown by curve 5, and we see that the effect is now less than about 0.2 at all frequencies, so the performance is acceptable.

In conclusion, for this example it is difficult to control both outputs simultaneously using feedback control due to strong interactions, but we can almost achieve acceptable control of both outputs by leaving y1 uncontrolled ("true" partial control). In the example there are four alternative partial control schemes with quite similar disturbance sensitivity for the uncontrolled output. To decide on the best scheme, we should also perform a controllability analysis of the feedback properties of the four 1x1 problems. Performing such an analysis, we find that scheme 212 (the one chosen) and the scheme which pairs the other output with its own direct input are preferable, because the input in these two cases has a more direct effect on the output. The effect of the most difficult disturbance on y1 can be further reduced using a simple feedforward controller, as in (10.32).

10.7.4 Measurement selection for indirect control

Assume the overall goal is to keep some variable y1 at a given value (setpoint) r1, i.e. our objective is to minimize J = || y1 - r1 ||. We assume we cannot measure y1, and instead we attempt to achieve our goal by controlling y2 at a constant value r2. For small changes we may assume linearity and write

   y1 = G1 u + Gd1 d     (10.33)
   y2 = G2 u + Gd2 d     (10.34)

With feedback control of y2 we get y2 = r2 + e2, where e2 is the control error. We now follow the derivation that led to Pd in (10.28): solving for u in (10.34) and substituting into (10.33) yields

   y1  =  ( Gd1 - G1 G2^-1 Gd2 ) d  +  G1 G2^-1 ( r2 + e2 )     (10.35)

The control error in the primary output is then

   y1 - r1  =  ( Gd1 - G1 G2^-1 Gd2 ) d  +  G1 G2^-1 e2   =   Pd d + Pr e2     (10.36)

where we have assumed that r2 is chosen such that y1 = r1 when e2 = 0 and d = 0, i.e.

   r1  =  G1 G2^-1 r2     (10.37)

To minimize || y1 - r1 || we should therefore select controlled outputs y2 such that || Pd d || and || Pr e2 || are small. Note that Pd here depends on the scaling of the disturbances d and the "primary" outputs y1, and is independent of the scaling of the inputs u and the selected outputs y2 (at least for square plants). The magnitude of the control error e2 depends on the choice of outputs y2.

Based on (10.36), a procedure for selecting controlled outputs y2 may be suggested: scale the disturbances to be of magnitude 1 (as usual), and scale the outputs y2 so that the expected control error e2 (measurement noise) is of magnitude 1 for each output (this is different from the output scaling used in other cases). Then, to minimize the control error for the primary outputs, select the set of controlled outputs which

   minimizes   sigma_max( [ Pd  Pr ] )

The maximum singular value arises if || d ||2 <= 1 and || e2 ||2 <= 1, and we want to minimize || y1 - r1 ||2.

Remark 1 The choice of norm in the criterion above is usually of secondary importance.

Remark 2 For the choice y2 = y1 we have that r2 = r1, and Pd in (10.36) is zero. However, Pr is still non-zero.

Remark 3 In some cases this measurement selection problem involves a trade-off between wanting || Pd || small (wanting a strong correlation between the measured outputs y2 and the "primary" outputs y1) and wanting || Pr || small (wanting the effect of control errors, i.e. measurement noise, to be small). For example, this is the case in a distillation column when we use temperatures inside the column (y2) for indirect control of the product compositions (y1). For a high-purity separation, we cannot place the measurement too close to the column end due to sensitivity to measurement error (Pr becomes large), and we cannot place it too far from the column end due to sensitivity to disturbances (Pd becomes large).

Remark 4 Indirect control is related to the idea of inferential control, which is commonly used in the process industry; see Stephanopoulos (1984). However, in inferential control the idea is usually to use the measurement of y2 to estimate (infer) y1, and then to control this estimate rather than controlling y2 directly. There is no universal agreement on these terms; for example, Marlin (1995) uses the term inferential control to mean indirect control as discussed above.

Remark 5 The problem of indirect control is closely related to that of cascade control discussed earlier. The main difference is that in cascade control we also measure and control y1 in an outer loop, so there we want [ Pd  Pr ] small only at high frequencies, beyond the bandwidth of the outer loop involving y1.
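The selection criterion sigma_max([Pd Pr]) from (10.36) is straightforward to compute for each candidate measurement set. A hedged sketch, with randomly generated matrices standing in for the models of two hypothetical candidate sets:

```python
import numpy as np

def indirect_control_measure(G1, G2, Gd1, Gd2):
    """sigma_max([Pd Pr]) from (10.36); smaller is better for indirect control.
    Assumes disturbances and control errors e2 have been scaled to magnitude 1."""
    G2inv = np.linalg.inv(G2)
    Pd = Gd1 - G1 @ G2inv @ Gd2
    Pr = G1 @ G2inv
    return np.linalg.svd(np.hstack([Pd, Pr]), compute_uv=False)[0]

# Hypothetical data: one primary output, two candidate 2-measurement sets A and B
rng = np.random.default_rng(1)
G1 = rng.standard_normal((1, 2))
Gd1 = rng.standard_normal((1, 2))
for name in ("A", "B"):
    G2 = rng.standard_normal((2, 2))
    Gd2 = rng.standard_normal((2, 2))
    print(name, indirect_control_measure(G1, G2, Gd1, Gd2))
```

The candidate set with the smaller value is preferred; in the temperature-selection example this is what trades off sensitivity to disturbances against sensitivity to measurement error.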

10.8 Decentralized feedback control

Figure 10.10: Decentralized diagonal control of a 2x2 plant: a diagonal controller K(s) acts on the errors r1 - y1 and r2 - y2 of the square plant G(s).

In this section, G(s) is a square plant which is to be controlled using a diagonal controller (see Figure 10.10)

   K(s)  =  diag{ ki(s) },   i = 1, ..., m     (10.38)

This is the problem of decentralized diagonal feedback control. The design of decentralized control systems involves two steps:

1. The choice of pairings (control configuration selection)
2. The design (tuning) of each controller ki(s)

The optimal solution to this problem is very difficult mathematically, because the optimal controller is in general of infinite order and may be non-unique; we do not address it in this book. The reader is referred to the literature (e.g. Sourlas and Manousiouthakis, 1995) for more details. Rather, we aim at providing simple tools for pairing selection (step 1) and for analyzing the achievable performance (controllability) of diagonally controlled plants (which may assist in step 2).

Notation for decentralized diagonal control. G(s) denotes a square m x m plant with elements gij. Gij(s) denotes the remaining (m-1) x (m-1) plant obtained by removing row i and column j in G(s). With a particular choice of pairing, we can rearrange the columns or rows of G(s) such that the paired elements are along the diagonal of G(s). We then have that the controller K(s) is diagonal, and we also introduce

   G~(s)  =  diag{ gii(s) }  =  diag{ g11, g22, ..., gmm }     (10.39)

e. and show that the RGA provides a measure of the interactions caused by decentralized diagonal control. and of Ý Ù Ù ¼ (10. Ù Other loops closed: All other outputs are constant. i.44) and (10. it is assumed that the other loops are closed with perfect control.41) ’th  ½ is the ’th element of is the inverse of the ½  ½ (10. . which is also equal to the ’th diagonal element of in loop is denoted Ä Ä Ã.42) To derive (10. Ý ¼ ¼ . is a useful measure of interactions. Bristol argued that there will be two extreme cases: ¯ ¯ Other loops open: All other inputs are constant.41). and he introduced the term.40) ¸ (10.40) and (10.e. and assume that our task is to use Ù to control Ý .45) . 10. of Ù and Ý.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ as the matrix consisting of the diagonal elements of . whereas (10.42) note that Ý Ù µ and and interchange the roles of  ½ .43) and to get Ù  ½ Ý µ Ù Ý Ý ¼  ½ (10. We get Other loops open: Other loops closed: Here element of Ý Ù Ù ¼ Ý Ù Ý ¼ . In the latter case. i. corresponding to the two extreme cases. Perfect control is only possible at steady-state.42) follows.8. Let Ù and Ý denote a particular input and output for the multivariable plant ´×µ. Bristol argued that the ratio between the gains in (10. The loop transfer function . We now evaluate the effect Ý of “our” given input Ù on “our” given output Ý for the two extreme cases. ’th relative gain defined as ¸  ½ (10. but it is a good approximation at Ù frequencies within the bandwidth of each loop.1 RGA as interaction measure for decentralized control We here follow Bristol (1966).

tools are needed which are capable of quickly evaluating .2 Factorization of sensitivity function The magnitude of the off-diagonal elements in diagonal elements are given by the matrix (the interactions) relative to its ¸´   ´Á · à µ ´Á · Ì µ ßÞ µ  ½ (10.48) is correct.48) ´Á · à µ ßÞ (10. 10. a ¿ ¢ ¿ plant offers 6. and an Ñ ¢ Ñ plant has Ñ alternatives.45) we get £´ µ multiplication (the Schur product).3 Stability of decentralized control systems Consider a square plant with single-loop controllers.47) or equivalently in terms of the sensitivity function Ë Ë Here Ë ´Á · Ì µ ½ ½ Ò Ë ¸ ´Á · à µ ½ ½· Ì Á  Ë (10. 10. because most of the important results for stability and performance using decentralized control may be derived from this expression.139) with and ¼ . we would like to pair such that the rearranged system. More precicely. This is identical to our definition of the RGAmatrix in (3.8.8.46) An important relationship for decentralized control is given by the following factorization of the return difference operator: ÓÚ Ö ÐÐ ßÞ ÒØ Ö Ø ÓÒ× Ò Ú Ù Ð ÐÓÓÔ× ´Á · à µ  ½ .49) contain the sensitivity and complementary sensitivity functions for the individual loops. Note that Ë is not equal to the matrix of diagonal elements of Ë .69). has a RGA matrix close to identity (see Pairing Rule 1. because Intuitively. a ¢ plant 24. page 445). (10. (10. The reader is encouraged to confirm that (10.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ¿ The Relative Gain Array (RGA) is the corresponding matrix of relative gains. is close to 1. with the pairings along the diagonal.48) follows from (A. ¢ ´  ½ µÌ where ¢ denotes element-by-element From (10. we would like to pair variables Ù and Ý so that this means that the gain from Ù to Ý is unaffected by closing the other loops. For a ¾ ¢ ¾ plant there are two alternative pairings. Thus.

In this section, we first derive sufficient conditions for stability, which may be used to select promising pairings. We then derive necessary conditions for stability, which may be used to eliminate undesirable pairings.

Sufficient conditions for stability

For decentralized diagonal control, it is desirable that the system can be tuned and operated one loop at a time. Assume therefore that G is stable and that each individual loop is stable by itself (S~ and T~ are stable). From the factorization S = S~ (I + E T~)^-1 in (10.48) and the generalized Nyquist theorem in Lemma A.5 (page 540), it follows that the overall system is stable (S is stable) if and only if det(I + E T~(s)) does not encircle the origin as s traverses the Nyquist D-contour. From the spectral radius stability condition in (4.107) we then have that the overall system is stable if

   rho( E T~(jw) ) < 1,   for all w     (10.50)

This sufficient condition for overall stability can, as discussed by Grosdidier and Morari (1986), be used to obtain a number of even weaker stability conditions. The least conservative approach is to split up rho(E T~) using the structured singular value: from (8.92) we have rho(E T~) <= mu(E) sigma_max(T~), and from (10.50) we then get the following theorem (as first derived by Grosdidier and Morari, 1986):

Theorem 10.2 Assume G is stable and that the individual loops are stable (T~ is stable). Then the entire system is closed-loop stable (T is stable) if

   sigma_max( T~ )  =  max_i | t~i |  <  1 / mu(E),   for all w     (10.51)

Here mu(E) is called the structured singular value interaction measure, and is computed with respect to the diagonal structure of T~, where we may view T~ as the "design uncertainty".

Sufficient conditions in terms of mu(E). We would like to use integral action in the loops, that is, T~ ~ I at low frequencies. Thus, in order to satisfy (10.51) we need mu(E) < 1 at low frequencies where we have tight control ("generalized diagonal dominance"). This gives the following rule: prefer pairings for which mu(E) < 1 at low frequencies.

Another approach is to use Gershgorin's theorem. From (10.50) we may then derive the following sufficient condition for overall stability in terms of the rows of G:

   | t~i |  <  | gii | / sum over j != i of | gij |,   for all w, for all i     (10.52)
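Computing mu(E) itself requires dedicated structured-singular-value software, but the Gershgorin-type quantities and the Perron root rho(|E|), which by (A.127) is an upper bound on mu(E), are immediate. A sketch for an illustrative plant:

```python
import numpy as np

G = np.array([[5.0, 1.0],
              [-4.0, 6.0]])            # hypothetical plant at one frequency
n = G.shape[0]

# Gershgorin bounds (10.52)-(10.53) on |t~i|: |gii| over off-diagonal row/column sums
row_bounds = [abs(G[i, i]) / sum(abs(G[i, j]) for j in range(n) if j != i)
              for i in range(n)]
col_bounds = [abs(G[i, i]) / sum(abs(G[j, i]) for j in range(n) if j != i)
              for i in range(n)]

# Perron root rho(|E|): an upper bound on the interaction measure mu(E)
Gt = np.diag(np.diag(G))
E = (G - Gt) @ np.linalg.inv(Gt)
perron = max(abs(np.linalg.eigvals(abs(E))))
print(row_bounds, col_bounds, perron)
```

Here perron is about 0.37 < 1, so the plant is generalized diagonally dominant even though one Gershgorin bound (1.25) is close to 1; this mirrors the point that mu(E) is the less conservative measure.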

or alternatively, in terms of the columns:

   | t~i |  <  | gii | / sum over j != i of | gji |,   for all w, for all i     (10.53)

This gives the important insight that we prefer to pair on elements gii that are large relative to the other elements in their row and column, since this makes the bounds in (10.52) and (10.53) less restrictive.

Remark 1 We cannot say that (10.51) is always less conservative than (10.52) and (10.53). It is true that the smallest of the m upper bounds in (10.52) and (10.53) is always smaller (more restrictive) than 1/mu(E) in (10.51). However, (10.51) imposes the same bound on |t~i| for each loop, whereas (10.52) and (10.53) give individual bounds, some of which may be less restrictive than 1/mu(E).

Remark 2 Another definition of generalized diagonal dominance is rho(|E|) < 1, where rho(|E|) is the Perron root, see (A.84). However, it follows from (A.127) that mu(E) <= rho(|E|), so it is better (less restrictive) to use mu(E) to define diagonal dominance.

Remark 3 Condition (10.51) and the use of mu(E) for (nominal) stability of the decentralized control system can be generalized to include robust stability and robust performance; see equations (31a-b) in Skogestad and Morari (1989).

RGA at crossover frequencies. We now want to show that for closed-loop stability it is desirable to select pairings such that the RGA is close to the identity matrix in the crossover region. The next simple theorem, which applies to a triangular plant, will enable us to do this:

Theorem 10.3 Suppose the plant G(s) is stable. If the RGA-matrix Lambda(G) = I at all frequencies, then stability of each of the individual loops implies stability of the entire system.

Proof: From the definition of the RGA, it follows that Lambda(G) = I can only arise from a triangular G(s), or from G(s)-matrices that can be made triangular by interchanging rows and columns in such a way that the diagonal elements remain the same but in a different order (the pairings remain the same). The interaction matrix E = (G - G~) G~^-1 can then also be made triangular, with diagonal elements equal to zero, and since this rearrangement does not change the eigenvalues it follows that all eigenvalues of E T~ are zero. Thus rho(E T~) = 0 and (10.50) is satisfied.

A plant with a "triangularized" transfer matrix (as described above) controlled by a diagonal controller has only one-way coupling, and will always yield a stable system provided the individual loops are stable. In most cases it is sufficient for overall stability to require that G(jw) is close to triangular (or Lambda(G) close to I) at crossover frequencies:

Pairing Rule 1. To achieve stability with decentralized control, prefer pairings such that at frequencies around crossover the rearranged matrix G(jw) (with the paired elements along the diagonal) is close to triangular. This is equivalent to requiring Lambda(G(jw)) close to I, i.e. the RGA-number || Lambda(G(jw)) - I ||sum should be small.

Derivation of Pairing Rule 1. Assume that S~ is stable, and that Gamma(s) = G(s) G~(s)^-1 is stable and has no RHP-zeros (which is always satisfied if both G and G~ are stable and have no RHP-zeros). Then the overall system is stable (S is stable) if and only if (I + S~(Gamma - I))^-1 is stable, so from the spectral radius stability condition in (4.107) the overall system is stable if

   rho( S~ (Gamma - I)(jw) ) < 1,   for all w     (10.54)

At low frequencies, this condition is usually satisfied because S~ is small. At higher frequencies, where the elements in S~ = diag{ 1/(1 + gii ki) } approach and possibly exceed 1 in magnitude, (10.54) may be satisfied if G(jw) is close to triangular. This is because Gamma - I, and thus S~(Gamma - I), are then close to triangular, with diagonal elements close to zero, so the eigenvalues of S~(Gamma - I)(jw) are close to zero. Then (10.54) is satisfied and we have stability of S. This conclusion also holds for plants with RHP-zeros, provided they are located beyond the crossover frequency range.

Example. Consider a plant G with an off-diagonal element of 6 that is larger than any of the diagonal elements, but whose RGA-matrix is nevertheless close to identity. The RGA indicates that G is diagonally dominant and that we should prefer the diagonal pairing. It is not possible in this case to conclude from the Gershgorin bounds in (10.52) and (10.53) that the plant is diagonally dominant, because the off-diagonal element of 6 is larger than any of the diagonal elements. However, the SSV interaction measure mu(E) is less than 1, so the plant is diagonally dominant in the generalized sense, and from (10.51) stability of the individual loops will guarantee stability of the overall closed-loop system. This is confirmed by computing the relative interactions as given by the matrix E. Note that the Perron root rho(|E|) is larger than mu(E), which shows that the use of mu(E) is less conservative.

Necessary steady-state conditions for stability

A desirable property of a decentralized control system is that it has integrity, i.e. the closed-loop system should remain stable as subsystem controllers are brought in and out of service. Mathematically, the system possesses integrity if it remains stable when the controller K is replaced by diag{ epsilon_i } K, where each epsilon_i may take on the values 0 or 1. An even stronger requirement ("complete detunability") is that the system remains stable as the gains in the various loops are reduced (detuned) by arbitrary factors, i.e. 0 <= epsilon_i <= 1. Decentralized integral controllability (DIC) is concerned with whether this is possible with integral control:

Definition 10.1 Decentralized Integral Controllability (DIC). The plant G(s) (corresponding to a given pairing, with the paired elements along its diagonal) is DIC if there exists a stabilizing decentralized controller with integral action in each loop such that each individual loop may be detuned independently by a factor epsilon_i (0 <= epsilon_i <= 1) without introducing instability.

DIC was introduced by Skogestad and Morari (1988b). A detailed survey of conditions for DIC and other related properties is given by Campo and Morari (1994). The steady-state RGA provides a very useful tool to test for DIC, as is clear from the following result, which was first proved by Grosdidier et al. (1985):

Theorem 10.4 Steady-state RGA and DIC. Consider a stable square plant G and a diagonal controller K with integral action in all elements, and assume that the loop transfer function GK is strictly proper. If a pairing of outputs and manipulated inputs corresponds to a negative steady-state relative gain, then the closed-loop system has at least one of the following properties:

(a) The overall closed-loop system is unstable.
(b) The loop with the negative relative gain is unstable by itself.
(c) The closed-loop system is unstable if the loop with the negative relative gain is opened (broken).

This can be summarized as follows:

   A stable (reordered) plant G(s) is DIC only if lambda_ii(0) >= 0 for all i     (10.55)

Proof: The theorem may be proved by setting T~ = I in (10.47) and applying the generalized Nyquist stability condition. Alternatively, we can use Theorem 6.5 on page 245 and select G' = diag{ gii, Gii } (corresponding to epsilon_i = 0 with the other detuning factors equal to 1). Since det G' = gii det Gii, and since from (A.77) lambda_ii = gii det Gii / det G, we have det G' / det G = lambda_ii, and Theorem 10.4 follows.

Each of the three possible instabilities in Theorem 10.4, resulting from pairing on a negative value of lambda_ii(0), is undesirable. The worst case is (a), when the overall system is unstable. Situation (b) is unacceptable if the loop in question is intended to be operated by itself, or if all the other loops may become inactive, e.g. due to input saturation. Situation (c) is also highly undesirable, as it implies instability if the loop with the negative relative gain somehow becomes inactive, e.g. due to input saturation.

Remarks on DIC and RGA.

1. Note that DIC considers the existence of a controller, so it depends only on the plant G and the chosen pairings.
2. Unstable plants are not DIC. The reason is that with all epsilon_i = 0 we are left with the uncontrolled plant G, and the system will be (internally) unstable if G(s) is unstable.

3. For ε_i = 0 we assume that the integrator of the corresponding SISO controller has been removed, otherwise the integrator would yield internal instability.

4. For 2 × 2 and 3 × 3 plants we have even tighter conditions for DIC than (10.55). For 2 × 2 plants (Skogestad and Morari, 1988b)

    DIC  ⇔  λ_11(0) > 0    (10.56)

For 3 × 3 plants with positive diagonal RGA-elements of G(0) and of G^{ii}(0), i = 1, 2, 3 (its three principal submatrices), we have (Yu and Fan, 1990)

    DIC  ⇔  √λ_11(0) + √λ_22(0) + √λ_33(0) ≥ 1    (10.57)

(Strictly speaking, as pointed out by Campo and Morari (1994), we do not have equivalence for the case when √λ_11(0) + √λ_22(0) + √λ_33(0) is identical to 1, but this has little practical significance.)

5. One cannot expect tight conditions for DIC in terms of the RGA for 4 × 4 systems or higher. The reason is that the RGA essentially only considers the "corner values", ε_i = 0 or ε_i = 1, for the detuning factor in each loop in the definition of DIC. This is clear from the fact that λ_ii = g_ii det G^{ii} / det G, where det G corresponds to ε_k = 1 for all k, g_ii corresponds to ε_i = 1 with the other ε_k = 0, and det G^{ii} corresponds to ε_i = 0 with the other ε_k = 1.

6. Determinant conditions for integrity (DIC). The following condition is concerned with whether it is possible to design a decentralized controller for the plant such that the system possesses integrity, which is a prerequisite for having DIC: Assume without loss of generality that the signs of the rows or columns of G have been adjusted such that all diagonal elements of G(0) are positive, g_ii(0) > 0. Then one may compute the determinant of G(0) and of all its principal submatrices (obtained by deleting rows and corresponding columns in G(0)), which should all be positive for DIC. This determinant condition follows by applying Theorem 6.5 to all possible combinations of ε_i = 0 or 1, as illustrated in the proof of Theorem 10.4, and is equivalent to requiring that the so-called Niederlinski indices,

    NI = det G(0) / Π_i g_ii(0)    (10.58)

of G(0) and of its principal submatrices are all positive.

7. Actually, the determinant condition in the previous remark yields more information than the RGA, because in the RGA the determinants are combined into a single number, λ_ii = g_ii det G^{ii} / det G, so we may have cases where two negative determinants result in a positive RGA-element. Nevertheless, the RGA is usually the preferred tool because it does not have to be recomputed for each pairing.

8. DIC is also closely related to D-stability; see the papers by Yu and Fan (1990) and Campo and Morari (1994). The theory of D-stability provides necessary and sufficient conditions, except in a few special cases, such as when the determinant of one or more of the submatrices is zero.

9. If we assume that the controllers have integral action, then T(0) = I, and we can derive from (10.51) that a sufficient condition for DIC is that G is generalized diagonally dominant at steady-state, i.e. μ(E(0)) < 1. This is proved by Braatz (1993, p. 154). However, since the requirement is only sufficient for DIC, it cannot be used to eliminate designs. For example, for a 2 × 2 system it is easy to show (Grosdidier and Morari, 1986) that μ(E(0)) < 1 is equivalent to λ_11(0) > 0.5, which is conservative when compared with the necessary and sufficient condition λ_11(0) > 0 in (10.56).
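The determinant/Niederlinski test in remark 6 can be mechanized in a few lines. The sketch below assumes a hypothetical 3 × 3 steady-state gain matrix with positive diagonal (not a plant from the text): it evaluates NI = det G(0) / Π_i g_ii(0) from (10.58) for G(0) and every principal submatrix, and requires them all to be positive:

```python
import numpy as np
from itertools import combinations

def niederlinski_indices(G0):
    """NI = det(sub)/prod(diag(sub)) for every principal submatrix of G0
    (same rows and columns kept). All must be positive for integrity."""
    n = G0.shape[0]
    indices = {}
    for k in range(1, n + 1):
        for keep in combinations(range(n), k):
            sub = G0[np.ix_(keep, keep)]
            indices[keep] = np.linalg.det(sub) / np.prod(np.diag(sub))
    return indices

# Hypothetical 3x3 steady-state gain matrix with positive diagonal (an assumption)
G0 = np.array([[1.0, 0.2, 0.3],
               [0.4, 1.0, 0.1],
               [0.3, 0.2, 1.0]])

ni = niederlinski_indices(G0)
print("all Niederlinski indices positive:", all(v > 0 for v in ni.values()))
```

Since the diagonal of G(0) is positive here, this is the same as requiring all principal minors of G(0) to be positive.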

10. If the plant has jω-axis poles, e.g. integrators, it is recommended that, prior to the RGA-analysis, these are moved slightly into the LHP (e.g. by using very low-gain feedback). This will have no practical significance for the subsequent analysis.

11. Since Theorem 6.5 also applies to unstable plants, we may easily extend Theorem 10.4 to unstable plants (and in this case one may actually desire to pair on a negative RGA-element); this is shown in Hovd and Skogestad (1994a). Alternatively, one may first implement a stabilizing controller and then analyze the partially controlled system as if it were the plant G(s).

The above results only address stability.

10.8.4 The RGA and right-half plane zeros: further reasons for not pairing on negative RGA elements

Bristol (1966) claimed that negative values of λ_ij(0) implied the presence of RHP-zeros. This is indeed true, as illustrated by the following two theorems:

Theorem 10.5 (Hovd and Skogestad, 1992) Consider a transfer function matrix G(s) with no zeros or poles at s = 0. Assume that lim_{s→∞} λ_ij(s) is finite and different from zero. If λ_ij(∞) and λ_ij(0) have different signs, then at least one of the following must be true:
a) The element g_ij(s) has a RHP-zero.
b) The overall plant G(s) has a RHP-zero.
c) The subsystem with input j and output i removed, G^{ij}(s), has a RHP-zero.

Theorem 10.6 (Grosdidier et al., 1985) Consider a stable transfer function matrix G(s) with elements g_ij(s). Let ĝ_ij(s) denote the closed-loop transfer function between input u_j and output y_i with all the other outputs under integral control. Assume that: (i) g_ij(s) has no RHP-zero; (ii) the loop transfer function GK is strictly proper; (iii) all other elements of G(s) have equal or higher pole excess than g_ij(s). We then have: If λ_ij(0) < 0, then ĝ_ij(s) has an odd number of RHP-poles and RHP-zeros.

Negative RGA-elements and decentralized performance. With decentralized control we usually design and implement the controller by tuning and closing one loop at a time in a sequential manner. Assume that we pair on a negative steady-state RGA-element, λ_ij(0) < 0, that λ_ij(∞) is positive (it is usually close to 1; see Pairing rule 1), and that the element g_ij(s) has no RHP-zero. Taken together, the above two theorems then have the following implications:
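As a numerical illustration of Theorem 10.5 (with a made-up 2 × 2 plant, not an example from the text): the plant below has det G(s) = (s − 2)/(s + 1)², i.e. a RHP-zero at s = 2 in the overall plant (case b of the theorem), and correspondingly λ_11(s) changes sign between s = 0 and s → ∞:

```python
import numpy as np

def G(s):
    # Made-up 2x2 plant: det G(s) = (s - 2)/(s + 1)^2, so G has a RHP-zero at s = 2
    return np.array([[(s + 2.0) / (s + 1.0), 2.0 / (s + 1.0)],
                     [2.0 / (s + 1.0),       1.0 / (s + 1.0)]])

def rga(A):
    return A * np.linalg.inv(A).T

lam0 = rga(G(0.0))[0, 0]    # lambda_11(0) = -1
laminf = rga(G(1e6))[0, 0]  # lambda_11 at large s, approximating lambda_11(inf) = 1
print(lam0, laminf)         # opposite signs, the situation covered by Theorem 10.5
```

Here λ_11(0) = −1 while λ_11(∞) = 1; since the element g_11 = (s + 2)/(s + 1) and the subsystem g_22 = 1/(s + 1) have no RHP-zeros, the sign change is accounted for by the RHP-zero of the overall plant at s = 2.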

(a) If we start by closing this loop (involving input u_j and output y_i), then we will get a RHP-zero in G^{ij}(s) which will limit the performance in the other outputs (this follows from Theorem 10.5 by assuming that g_ij has no RHP-zero and that λ_ij(∞) > 0).

(b) If we end by closing this loop, then we will get a RHP-zero in ĝ_ij(s) which will limit the performance in output y_i (follows from Theorem 10.6).

In conclusion, pairing on a negative RGA-element will, in addition to resulting in potential instability as given in Theorem 10.4, also limit the achievable performance. This is illustrated by an example in Hovd and Skogestad (1992), who confirm the conclusion by designing PI controllers for the two pairing alternatives; the pairing on the negative RGA-element was found to give significantly poorer performance. Moreover, none of the two alternatives satisfy Pairing Rule 1, and we are led to conclude that decentralized control should not be used for this plant.

The RGA is a very efficient tool because it does not have to be recomputed for each possible choice of pairing, and one can therefore often eliminate many alternative pairings by a simple glance at the RGA-matrix. For example, if we require DIC we can from a quick glance at the steady-state RGA eliminate five of the six alternative
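The "quick glance" elimination can be sketched as follows (again with a hypothetical gain matrix, not the plant from the example above): compute Λ(0) once, then enumerate all pairings and discard any whose paired RGA-element is negative, as required by (10.55). For the diagonally dominant matrix below, this single screen eliminates five of the six possible pairings:

```python
import numpy as np
from itertools import permutations

def rga(A):
    return A * np.linalg.inv(A).T

# Hypothetical 3x3 steady-state gain matrix (illustration only)
G0 = np.array([[1.0, 0.1, 0.1],
               [0.1, 1.0, 0.1],
               [0.1, 0.1, 1.0]])

L = rga(G0)  # computed once; reused for every candidate pairing
# A pairing assigns output i to input p[i]; screen with condition (10.55)
viable = [p for p in permutations(range(3))
          if all(L[i, p[i]] >= 0 for i in range(3))]
print(viable)  # only the diagonal pairing survives: [(0, 1, 2)]
```

All six permutations reuse the same Λ(0), which is what makes the RGA screen cheap compared with re-evaluating a determinant condition for each pairing.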