MULTIVARIABLE FEEDBACK CONTROL
Analysis and Design

Sigurd Skogestad
Norwegian University of Science and Technology

Ian Postlethwaite
University of Leicester

Second Edition
This version: August 29, 2001

JOHN WILEY & SONS
Chichester · New York · Brisbane · Toronto · Singapore
BORGHEIM, an engineer: Herregud, en kan da ikke gjøre noe bedre enn leke i denne velsignede verden. Jeg synes hele livet er som en lek, jeg! Good heavens, one can’t do anything better than play in this blessed world. The whole of life seems like playing to me!
Act one, LITTLE EYOLF, Henrik Ibsen.
CONTENTS

PREFACE

1 INTRODUCTION
  1.1 The process of control system design
  1.2 The control problem
  1.3 Transfer functions
  1.4 Scaling
  1.5 Deriving linear models
  1.6 Notation

2 CLASSICAL FEEDBACK CONTROL
  2.1 Frequency response
  2.2 Feedback control
  2.3 Closed-loop stability
  2.4 Evaluating closed-loop performance
  2.5 Controller design
  2.6 Loop shaping
  2.7 Shaping closed-loop transfer functions
  2.8 Conclusion

3 INTRODUCTION TO MULTIVARIABLE CONTROL
  3.1 Introduction
  3.2 Transfer functions for MIMO systems
  3.3 Multivariable frequency response analysis
  3.4 Control of multivariable plants
  3.5 Introduction to multivariable RHP-zeros
  3.6 Condition number and RGA
  3.7 Introduction to MIMO robustness
  3.8 General control problem formulation
  3.9 Additional exercises
  3.10 Conclusion

4 ELEMENTS OF LINEAR SYSTEM THEORY
  4.1 System descriptions
  4.2 State controllability and state observability
  4.3 Stability
  4.4 Poles
  4.5 Zeros
  4.6 Some remarks on poles and zeros
  4.7 Internal stability of feedback systems
  4.8 Stabilizing controllers
  4.9 Stability analysis in the frequency domain
  4.10 System norms
  4.11 Conclusion

5 LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS
  5.1 Input-Output Controllability
  5.2 Perfect control and plant inversion
  5.3 Constraints on S and T
  5.4 Ideal ISE optimal control
  5.5 Limitations imposed by time delays
  5.6 Limitations imposed by RHP-zeros
  5.7 RHP-zeros and non-causal controllers
  5.8 Limitations imposed by unstable (RHP) poles
  5.9 Combined unstable (RHP) poles and zeros
  5.10 Performance requirements imposed by disturbances and commands
  5.11 Limitations imposed by input constraints
  5.12 Limitations imposed by phase lag
  5.13 Limitations imposed by uncertainty
  5.14 Summary: Controllability analysis with feedback control
  5.15 Summary: Controllability analysis with feedforward control
  5.16 Applications of controllability analysis
  5.17 Conclusion

6 LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS
  6.1 Introduction
  6.2 Constraints on S and T
  6.3 Functional controllability
  6.4 Limitations imposed by time delays
  6.5 Limitations imposed by RHP-zeros
  6.6 Limitations imposed by unstable (RHP) poles
  6.7 RHP-poles combined with RHP-zeros
  6.8 Performance requirements imposed by disturbances
  6.9 Limitations imposed by input constraints
  6.10 Limitations imposed by uncertainty
  6.11 MIMO Input-output controllability
  6.12 Conclusion

7 UNCERTAINTY AND ROBUSTNESS FOR SISO SYSTEMS
  7.1 Introduction to robustness
  7.2 Representing uncertainty
  7.3 Parametric uncertainty
  7.4 Representing uncertainty in the frequency domain
  7.5 SISO Robust stability
  7.6 SISO Robust performance
  7.7 Examples of parametric uncertainty
  7.8 Additional exercises
  7.9 Conclusion

8 ROBUST STABILITY AND PERFORMANCE ANALYSIS
  8.1 General control configuration with uncertainty
  8.2 Representing uncertainty
  8.3 Obtaining P, N and M
  8.4 Definitions of robust stability and robust performance
  8.5 Robust stability of the MΔ-structure
  8.6 RS for complex unstructured uncertainty
  8.7 RS with structured uncertainty: Motivation
  8.8 The structured singular value
  8.9 Robust stability with structured uncertainty
  8.10 Robust performance
  8.11 Application: RP with input uncertainty
  8.12 µ-synthesis and DK-iteration
  8.13 Further remarks on µ
  8.14 Conclusion

9 CONTROLLER DESIGN
  9.1 Trade-offs in MIMO feedback design
  9.2 LQG control
  9.3 H2 and H∞ control
  9.4 H∞ loop-shaping design
  9.5 Conclusion

10 CONTROL STRUCTURE DESIGN
  10.1 Introduction
  10.2 Optimization and control
  10.3 Selection of controlled outputs
  10.4 Selection of manipulations and measurements
  10.5 RGA for non-square plant
  10.6 Control configuration elements
  10.7 Hierarchical and partial control
  10.8 Decentralized feedback control
  10.9 Conclusion

11 MODEL REDUCTION
  11.1 Introduction
  11.2 Truncation and residualization
  11.3 Balanced realizations
  11.4 Balanced truncation and balanced residualization
  11.5 Optimal Hankel norm approximation
  11.6 Two practical examples
  11.7 Reduction of unstable models
  11.8 Model reduction using MATLAB
  11.9 Conclusion

12 CASE STUDIES
  12.1 Introduction
  12.2 Helicopter control
  12.3 Aero-engine control
  12.4 Distillation process
  12.5 Conclusion

A MATRIX THEORY AND NORMS
  A.1 Basics
  A.2 Eigenvalues and eigenvectors
  A.3 Singular Value Decomposition
  A.4 Relative Gain Array
  A.5 Norms
  A.6 Factorization of the sensitivity function
  A.7 Linear fractional transformations

B PROJECT WORK and SAMPLE EXAM
  B.1 Project work
  B.2 Sample exam

BIBLIOGRAPHY

INDEX
PREFACE
This is a book on practical feedback control and not on system theory generally. Feedback is used in control systems to change the dynamics of the system (usually to make the response stable and sufficiently fast), and to reduce the sensitivity of the system to signal uncertainty (disturbances) and model uncertainty. Important topics covered in the book include:
- classical frequency-domain methods
- analysis of directions in multivariable systems using the singular value decomposition
- input-output controllability (inherent control limitations in the plant)
- model uncertainty and robustness
- performance requirements
- methods for controller design and model reduction
- control structure selection and decentralized control
The treatment is for linear systems. The theory is then much simpler and better developed, and a large amount of practical experience tells us that in many cases linear controllers designed using linear methods provide satisfactory performance when applied to real nonlinear plants. We have attempted to keep the mathematics at a reasonably simple level, and we emphasize results that enhance insight and intuition. The design methods currently available for linear systems are well developed, and with associated software it is relatively straightforward to design controllers for most multivariable plants. However, without insight and intuition it is difficult to judge a solution, and to know how to proceed (e.g. how to change weights) in order to improve a design. The book is appropriate for use as a text for an introductory graduate course in multivariable control or for an advanced undergraduate course. We also think it will be useful for engineers who want to understand multivariable control, its limitations, and how it can be applied in practice. There are numerous worked examples, exercises and case studies which make frequent use of MATLAB™¹.
¹ MATLAB is a registered trademark of The MathWorks, Inc.
The prerequisites for reading this book are an introductory course in classical single-input single-output (SISO) control and some elementary knowledge of matrices and linear algebra. Parts of the book can be studied alone, and provide an appropriate background for a number of linear control courses at both undergraduate and graduate levels: classical loop-shaping control, an introduction to multivariable control, advanced multivariable control, robust control, controller design, control structure design and controllability analysis. The book is partly based on a graduate multivariable control course given by the first author in the Cybernetics Department at the Norwegian University of Science and Technology in Trondheim. About 10 students from Electrical, Chemical and Mechanical Engineering have taken the course each year since 1989. The course has usually consisted of 3 lectures a week for 12 weeks. In addition to regular assignments, the students have been required to complete a 50 hour design project using MATLAB. In Appendix B, a project outline is given together with a sample exam.
Examples and internet
Most of the numerical examples have been solved using MATLAB. Some sample files are included in the text to illustrate the steps involved. Most of these files use the µ toolbox, and some the Robust Control toolbox, but in most cases the problems could have been solved easily using other software packages. The following are available over the internet:
- MATLAB files for examples and figures
- Solutions to selected exercises
- Linear state-space models for plants used in the case studies
- Corrections, comments to chapters, extra exercises and exam sets
This information can be accessed from the authors’ home pages: http://www.chembio.ntnu.no/users/skoge http://www.le.ac.uk/engineering/staff/Postlethwaite
Comments and questions
Please send questions, errors and any comments you may have to the authors. Their email addresses are:
- Sigurd.Skogestad@chembio.ntnu.no
- ixp@le.ac.uk
Acknowledgements
The contents of the book are strongly influenced by the ideas and courses of Professors John Doyle and Manfred Morari from the first author's time as a graduate student at Caltech during the period 1983-1986, and by the formative years, 1975-1981, the second author spent at Cambridge University with Professor Alistair MacFarlane. We thank the organizers of the 1993 European Control Conference for inviting us to present a short course on applied H∞ control, which was the starting point for our collaboration. The final manuscript began to take shape in 1994-95 during a stay the authors had at the University of California at Berkeley, thanks to Andy Packard, Kameshwar Poolla, Masayoshi Tomizuka and others at the BCCI lab, and to the stimulating coffee at Brewed Awakening. We are grateful for the numerous technical and editorial contributions of Yi Cao, Kjetil Havre, Ghassan Murad and Ying Zhao. The computations for Example 4.5 were performed by Roy S. Smith who shared an office with the authors at Berkeley. Helpful comments and corrections were provided by Richard Braatz, Jie Chen, Atle C. Christiansen, Wankyun Chung, Bjørn Glemmestad, John Morten Godhavn, Finn Are Michelsen and Per Johan Nicklasson. A number of people have assisted in editing and typing various versions of the manuscript, including Zi-Qin Wang, Yongjiang Yu, Greg Becker, Fen Wu, Regina Raag and Anneli Laur. We also acknowledge the contributions from our graduate students, notably Neale Foster, Morten Hovd, Elling W. Jacobsen, Petter Lundström, John Morud, Raza Samar and Erik A. Wolff. The aero-engine model (Chapters 11 and 12) and the helicopter model (Chapter 12) are provided with the kind permission of Rolls-Royce Military Aero Engines Ltd, and the UK Ministry of Defence, DRA Bedford, respectively. Finally, thanks to colleagues and former colleagues at Trondheim and Caltech from the first author, and at Leicester, Oxford and Cambridge from the second author.
We have made use of material from several books. In particular, we recommend Zhou, Doyle and Glover (1996) as an excellent reference on system theory and H∞ control. Of the others we would like to acknowledge, and recommend for further reading, the following: Rosenbrock (1970), Rosenbrock (1974), Kwakernaak and Sivan (1972), Kailath (1980), Chen (1984), Francis (1987), Anderson and Moore (1989), Maciejowski (1989), Morari and Zafiriou (1989), Boyd and Barratt (1991), Doyle et al. (1992), Green and Limebeer (1995), Levine (1995), and the MATLAB toolbox manuals of Grace et al. (1992), Balas et al. (1993) and Chiang and Safonov (1992).
Second edition
In this second edition, we have corrected a number of minor mistakes and made numerous changes and additions to the text, partly arising from the many questions and comments we have received from interested readers. All corrections to the first edition are available on the web. We have tried to minimize the changes in numbering of pages, figures, examples, exercises and equations, so there should be little problem in using the two editions in parallel.
1 INTRODUCTION

In this chapter, we begin with a brief outline of the design process for control systems. We then discuss linear models and transfer functions which are the basic building blocks for the analysis and design techniques presented in this book. The scaling of variables is critical in applications and so we provide a simple procedure for this. An example is given to show how to derive a linear model in terms of deviation variables for a practical application. Finally, we summarize the most important notation used in the book.

1.1 The process of control system design

The process of designing a control system usually makes many demands of the engineer or engineering team. These demands often emerge in a step by step design procedure as follows:

1. Study the system (plant) to be controlled and obtain initial information about the control objectives.
2. Model the system and simplify the model, if necessary.
3. Scale the variables and analyze the resulting model; determine its properties.
4. Decide which variables are to be controlled (controlled outputs).
5. Decide on the measurements and manipulated variables: what sensors and actuators will be used and where will they be placed?
6. Select the control configuration.
7. Decide on the type of controller to be used.
8. Decide on performance specifications, based on the overall control objectives.
9. Design a controller.
10. Analyze the resulting controlled system to see if the specifications are satisfied, and if they are not satisfied modify the specifications or the type of controller.
11. Simulate the resulting controlled system, either on a computer or a pilot plant.
12. Repeat from step 2, if necessary.
13. Choose hardware and software and implement the controller.
14. Test and validate the control system, and tune the controller on-line, if necessary.
Control courses and text books usually focus on steps 9 and 10 in the above procedure, that is, on methods for controller design and control system analysis. Interestingly, many real control systems are designed without any consideration of these two steps. For example, even for complex systems with many inputs and outputs, it may be possible to design workable control systems, often based on a hierarchy of cascaded control loops, using only on-line tuning (involving steps 1, 4, 5, 6, 7, 13 and 14). However, in this case a suitable control structure may not be known at the outset, and there is a need for systematic tools and insights to assist the designer with steps 4, 5 and 6. A special feature of this book is the provision of tools for input-output controllability analysis (step 3) and for control structure design (steps 4, 5, 6 and 7).

Input-output controllability is the ability to achieve acceptable control performance. It is affected by the location of sensors and actuators, but otherwise it cannot be changed by the control engineer. Simply stated, "even the best control system cannot make a Ferrari out of a Volkswagen". Therefore, the process of control system design should in some cases also include a step 0, involving the design of the process equipment itself. The idea of looking at process equipment design and control system design as an integrated whole is not new, as is clear from the following quote taken from a paper by Ziegler and Nichols (1943):

  In the application of automatic controllers, it is important to realize that controller and process form a unit; credit or discredit for results obtained are attributable to one as much as the other. A poor controller is often able to perform acceptably on a process which is easily controlled. The finest controller made, when applied to a miserably designed process, may not deliver the desired performance. True, on badly designed processes, advanced controllers are able to eke out better results than older models, but on these processes, there is a definite end point which can be approached by instrumentation and it falls short of perfection.

Ziegler and Nichols then proceed to observe that there is a factor in equipment design that is neglected, and state that the missing characteristic can be called the "controllability", the ability of the process to achieve and maintain the desired equilibrium value.

To derive simple tools with which to quantify the inherent input-output controllability of a plant is the goal of Chapters 5 and 6.

1.2 The control problem

The objective of a control system is to make the output y behave in a desired way by manipulating the plant input u. The regulator problem is to manipulate u to counteract the effect of a disturbance d. The servo problem is to manipulate u to keep the output close to a given reference input r. Thus, in both cases we want the control error e = y - r to be small. The algorithm for adjusting u based on the available information is the controller K. To arrive at a good design for K we need a priori information about the expected disturbances and reference inputs, and of the plant model (G) and disturbance model (Gd). In this book we make use of linear models of the form

  y = G u + Gd d        (1.1)

A major source of difficulty is that the models (G, Gd) may be inaccurate or may change with time. In particular, inaccuracy in G may cause problems because the plant will be part of a feedback loop. To deal with such a problem we will make use of the concept of model uncertainty. For example, instead of a single model G we may study the behaviour of a class of models, Gp = G + E, where the model "uncertainty" or "perturbation" E is bounded, but otherwise unknown. In most cases weighting functions, w(s), are used to express the uncertainty E = wΔ in terms of normalized perturbations, Δ, where the magnitude (norm) of Δ is less than or equal to 1. The following terms are useful:

Nominal stability (NS). The system is stable with no model uncertainty.

Nominal Performance (NP). The system satisfies the performance specifications with no model uncertainty.

Robust stability (RS). The system is stable for all perturbed plants about the nominal model up to the worst-case model uncertainty.

Robust performance (RP). The system satisfies the performance specifications for all perturbed plants about the nominal model up to the worst-case model uncertainty.

1.3 Transfer functions

The book makes extensive use of transfer functions, G(s), and of the frequency domain, which are very useful in applications for the following reasons:

- Invaluable insights are obtained from simple frequency-dependent plots.
- Important concepts for feedback such as bandwidth and peaks of closed-loop transfer functions may be defined.
- G(jω) gives the response to a sinusoidal input of frequency ω.
- A series interconnection of systems corresponds in the frequency domain to multiplication of the individual system transfer functions, whereas in the time domain the evaluation of complicated convolution integrals is required.
- Poles and zeros appear explicitly in factorized scalar transfer functions.
- Uncertainty is more easily handled in the frequency domain. This is related to the fact that two systems can be described as close (i.e. have similar behaviour) if their frequency responses are similar. On the other hand, a small change in a parameter in a state-space description can result in an entirely different system response.
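The frequency-domain points above are easy to check numerically. The sketch below uses Python with numpy (rather than the MATLAB employed in the book); the second-order transfer function G(s) = (β₁s + β₀)/(s² + a₁s + a₀) and its coefficient values are illustrative assumptions, not an example from the text:

```python
import numpy as np

def G(s, b=(1.0, 1.0), a=(1.0, 2.0, 1.0)):
    """Second-order rational transfer function G(s) = (b1*s + b0)/(a2*s^2 + a1*s + a0).

    The coefficient values are illustrative only; any stable second-order
    system of this form would serve.
    """
    b1, b0 = b
    a2, a1, a0 = a
    return (b1 * s + b0) / (a2 * s ** 2 + a1 * s + a0)

w = 1.0          # frequency in rad/s
s = 1j * w       # evaluate on the imaginary axis: G(jw)
g = G(s)

gain = np.abs(g)      # amplification of a unit sinusoid at frequency w
phase = np.angle(g)   # phase shift (rad) of the output sinusoid

# A series interconnection of two systems corresponds to multiplication of
# the individual frequency responses: gains multiply and phases add.
g_series = G(s) * G(s)
assert np.isclose(np.abs(g_series), gain ** 2)
assert np.isclose(np.angle(g_series), 2 * phase)
```

At ω = 1 rad/s this particular G has gain 1/√2 and phase -45°, and the series-interconnection check confirms that gains multiply while phases add, in line with the frequency-domain multiplication property.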
The system is timeinvariant since the coefﬁcients ½ ¼ ¬½ and ¬¼ are independent of time. the transfer function is independent of the input signal (forcing function). To simplify our presentation we will make the usual abuse of notation and replace Ý ´×µ by Ý ´×µ. will be used throughout the book to model systems and their components.2) we obtain ×Ü½ ´×µ Ü½ ´Ø ×Ü¾ ´×µ Ü¾ ´Ø ¼µ ¼µ Ý´×µ ½ Ü½ ´×µ · Ü¾ ´×µ · ¬½ Ù´×µ ¼ Ü½ ´×µ · ¬¼ Ù´×µ Ü½ ´×µ (1. for linear systems. If Ù´Øµ Ü½ ´Øµ Ü¾ ´Øµ and Ý ´Øµ represent deviation variables away from a nominal operating point or trajectory. such as ´×µ in (1. timeinvariant systems whose inputoutput responses are governed by linear ordinary differential equations with constant coefﬁcients. This is related to the fact that two systems can be described as close (i. ’ Here Ù´Øµ represents the input signal. and Ý ´Øµ the output signal. a small change in a parameter in a statespace description can result in an entirely different system response. and so on. we consider rational transfer functions of the form ¬ÒÞ ×ÒÞ · ¡ ¡ ¡ · ¬½ × · ¬¼ ´×µ (1.2) where Ü´Øµ Ü Ø.3) then yields the transfer function Ý´×µ Ù´×µ ´×µ ¬½ × · ¬¼ ¾ · ½× · ¼ × (1. then we can assume Ü ½ ´Ø ¼µ Ü¾ ´Ø ¼µ ¼.4) may also represent the following system Ý´Øµ · ½ Ý´Øµ · ¼ Ý´Øµ ¬½ Ù´Øµ · ¬¼ Ù´Øµ (1.6) Ò · Ò ½ ×Ò ½ · ¡ ¡ ¡ · ½ × · ¼ × . In addition. have similar behaviour) if their frequency responses are similar.e. etc. we will omit the independent variables × and Ø when the meaning is clear.4). On the other hand. Notice that the transfer function in (1.3) where Ý ´×µ denotes the Laplace transform of Ý ´Øµ. Ü ½ ´Øµ and Ü¾ ´Øµ the states.4) Importantly. If we apply the Laplace transform to (1..ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¯ Uncertainty is more easily handled in the frequency domain. The elimination of Ü ½ ´×µ and Ü¾ ´×µ from (1. More generally.5) with input Ù´Øµ and output Ý ´Øµ. An example of such a system is Ü½ ´Øµ Ü¾ ´Øµ Ý´Øµ ½ Ü½ ´Øµ · Ü¾ ´Øµ · ¬½ Ù´Øµ ¼ Ü½ ´Øµ · ¬¼ Ù´Øµ Ü½ ´Øµ (1. 
We consider linear. Transfer functions.
but since we always use deviation variables when we consider Laplace transforms. on the allowed magnitude of each input signal.4 Scaling Scaling is very important in practical applications as it makes model analysis and controller design (weight selection) much simpler. This is sometimes shown explicitly. and on the allowed deviation of each output. ½. The transfer function may then be ´×µ ´×Á µ ½ · (1. Furthermore. Usually we use ´×µ to represent the effect of the inputs Ù on the outputs Ý . whereas ´×µ represents the effect on Ý of the disturbances . It requires the engineer to make a judgement at the start of the design process about the required performance of the system.8) We have made use of the superposition principle for linear systems. ´×µ and Ý ´×µ are deviation variables. Then Ò Ò Þ is referred to as the pole excess or relative order. are semiproper. by use of the notation ÆÙ´×µ.1 ¯ ¯ ¯ ¯ Ü A system A system A system A system ´×µ is strictly proper if ´ µ ¼ as ½. In (1. All practical systems will have zero gain at a sufﬁciently high frequency. 1. such as Ë ´Á · Ã µ ½ . the Æ is normally omitted. certain derived transfer functions. ´×µ is a matrix of transfer functions. . and are therefore strictly proper. to model high frequency effects by a nonzero term. with Ò Ü· Ù Ý Ò Þ . Deﬁnition 1. ´×µ is improper if ´ µ ½ as ½. however.6) Ò is the order of the denominator (or pole polynomial) and is also called the order of the system. decisions are made on the expected magnitudes of disturbances and reference changes.7) written as Remark. We then have the following linear process model in terms of deviation variables Ý ´×µ ´×µÙ´×µ · ´×µ ´×µ (1. for example. Ü · Ù. which implies that a change in a dependent variable (here Ý ) can simply be found by adding together the separate effects resulting from changes in the independent variables (here Ù and ) considered one at a time. we may realize (1. To do this. All the signals Ù´×µ. 
and ÒÞ is the order of the numerator (or zero polynomial). similar to (1. For a proper system. It is often convenient.2).ÁÆÌÊÇ Í ÌÁÇÆ For multivariable systems. and hence semiproper models are frequently used. ´×µ is semiproper or biproper if ´ µ ¼ as ´×µ which is strictly proper or semiproper is proper.6) by a statespace description.
9) we get Ý Ù Ö and introducing the scaled transfer functions ½ ½ (1. introduce the scaling factors ÑÜ Ù ÙÑ Ü For MIMO systems each variable in the vectors . This is done by dividing each variable by its maximum expected or allowed change.11) To formalize the scaling procedure. For disturbances and manipulated inputs.13) On substituting (1. A useful approach for scaling is to make the variables less than one in magnitude. Ù. so the same scaling factor should be applied to each. and Ö become diagonal scaling maximum value. for example.10) ¯ Ñ Ü — largest expected change in disturbance ¯ ÙÑ Ü — largest allowed input change The maximum deviation from a nominal value should be chosen by thinking of the maximum value one can expect.12) Ñ Ü Ö ÖÑ Ü . as a function of time. we here usually choose to scale with respect to the maximum control error: Ý Ý ÑÜ Ö Ö ÑÜ ÑÜ (1.9) where a hat ( ) is used to show that the variables are in their unscaled units. The variables Ý. we use the scaled variables ÑÜ where: Ù Ù ÙÑ Ü (1. Ö .13) into (1. and Ö are in the same units.ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Let the unscaled (or originally scaled) linear model of the process in deviation variables be Ý Ù· Ý Ö (1. The corresponding scaled variables to use for control purposes are then (1. This ensures. that all errors (outputs) are of about equal importance in terms of their magnitude. Ù and may have a different ½ Ù Ý Ù ½ Ù Ý ÙÙ · ½ Ý ½ Ö ½ Ö (1. or allow.14) . Two alternatives are possible: ¯ Ñ Ü — largest allowed control error ¯ ÖÑ Ü — largest expected change in reference value Since a major objective of control is to minimize the control error . in which case matrices.
1. and references Ö of magnitude ½.17) Ê is the largest expected change in reference relative to the allowed control Ù ¹ ¹ + + Ý ¹+ Ö ¹ Figure 1. Remark 2 With the above scalings. ´Øµ ½ and Ö´Øµ ½ such that ´Øµ ½. . This is done by dividing the reference by the maximum expected reference change Ö We then have that Ö ÖÑ Ü Ö ½ Ö (1.18) and we see that a normalized reference change Ö may be viewed as a special case of a Ê. this applies to the inputoutput controllability analysis presented in Chapters 5 and 6. the worstcase behaviour of a system is analyzed by considering disturbances of magnitude ½.ÁÆÌÊÇ Í ÌÁÇÆ then yields the following model in terms of scaled variables Ý Ù· Ý Ö (1.15) Here Ù and should be less than 1 in magnitude. In particular. for a MIMO system one cannot correctly make use of the sensitivity function Ë ´Á · Ã µ ½ unless the output errors are of comparable magnitude. which is less than 1 in magnitude.1: Model in terms of scaled variables error (typically.16) Ö Here ÊÖ Û Ö Ê ¸ ½ Ö ÖÑ Ü Ñ Ü Ö Ê (1. Furthermore. The block diagram for the system in scaled variables may then be written as in Figure 1. and our control Ý´Øµ Ö´Øµ ½ Remark 1 A number of the interpretations used in the book depend critically on a correct scaling. where Ê is usually a constant diagonal matrix. We will disturbance with sometimes use this to unify our treatment of disturbances and references. Remark 3 The control error is Ý Ö Ù· ÊÖ (1. Ê ½). and it is useful in some cases to introduce a scaled reference Ö . for which the following control objective is relevant: ¯ In terms of scaled variables we have that objective is to manipulate Ù with Ù´Øµ (at least most of the time).
Remark 4 The scaling of the outputs in (1.11) in terms of the control error is used when analyzing a given plant. However, if the issue is to select which outputs to control, e.g. see Section 10.3, then one may choose to scale the outputs with respect to their expected variation (which is usually similar to r̂_max).

Remark 5 If the expected or allowed variation of a variable about 0 (its nominal value) is not symmetric, then the largest variation should be used for d̂_max and the smallest variation for û_max and ê_max. For example, if the disturbance is −5 ≤ d̂ ≤ 10 then d̂_max = 10, and if the manipulated input is −5 ≤ û ≤ 10 then û_max = 5. This approach may be conservative (in terms of allowing too large disturbances etc.) when the variations for several variables are not symmetric.

A further discussion on scaling and performance is given in Chapter 5 on page 161.

1.5 Deriving linear models

Linear models may be obtained from physical "first-principle" models, from analyzing input-output data, or from a combination of these two approaches. Although modelling and system identification are not covered in this book, it is always important for a control engineer to have a good understanding of a model's origin. The following steps are usually taken when deriving a linear model for controller design based on a first-principle approach:

1. Formulate a nonlinear state-space model based on physical knowledge.
2. Determine the steady-state operating point (or trajectory) about which to linearize.
3. Introduce deviation variables and linearize the model.

For example, a nonlinear state-space model has the form

    dx/dt = f(x, u)   (1.19)
There are essentially three parts to step 3: (a) linearize the equations using a Taylor expansion in which second and higher order terms are omitted; (b) introduce the deviation variables, e.g. δx(t) defined by δx(t) = x(t) − x*, where the superscript * denotes the steady-state operating point or trajectory along which we are linearizing; and (c) subtract the steady-state, which eliminates the terms involving only steady-state quantities. These three parts are usually accomplished together.
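In practice the Jacobians of f are often evaluated numerically rather than symbolically. A minimal sketch for a scalar model (the finite-difference approach and the example function are illustrative, not from the text):

```python
# Numerical linearization of dx/dt = f(x, u) about a steady state (x*, u*)
# using central finite differences. Scalar case for simplicity; for vectors
# the same idea gives the Jacobian matrices A and B column by column.
def linearize(f, xstar, ustar, h=1e-6):
    """Return A = df/dx and B = df/du evaluated at (x*, u*)."""
    A = (f(xstar + h, ustar) - f(xstar - h, ustar)) / (2 * h)
    B = (f(xstar, ustar + h) - f(xstar, ustar - h)) / (2 * h)
    return A, B

# Hypothetical model f(x, u) = -x^2 + u with steady state x* = 2, u* = 4
# (so that f(x*, u*) = 0). Analytically A = -2 x* = -4 and B = 1.
f = lambda x, u: -x**2 + u
A, B = linearize(f, 2.0, 4.0)
print(A, B)   # A ≈ -4, B ≈ 1
```

This corresponds to parts (a)-(c) being "accomplished together": the finite difference about the operating point directly produces the deviation-variable model (1.20).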
For the nonlinear state-space model (1.19), the linearized model in deviation variables (δx, δu) is

    dδx(t)/dt = A δx(t) + B δu(t) ;   A = (∂f/∂x)* ,   B = (∂f/∂u)*   (1.20)

Here x and u may be vectors, in which case the Jacobians A and B are matrices. Also, since (1.20) is in terms of deviation variables, its Laplace transform becomes s δx(s) = A δx(s) + B δu(s), or

    δx(s) = (sI − A)⁻¹ B δu(s)   (1.21)

A fourth step is then:

4. Scale the variables to obtain scaled models which are more suitable for control purposes.

In most cases steps 2 and 3 are performed numerically based on the model obtained in step 1.

Example 1.1 Physical model of a room heating process. The above steps for deriving a linear model will be illustrated on the simple example depicted in Figure 1.2, where the control problem is to adjust the heat input Q to maintain constant room temperature T (within ±1 K). The outdoor temperature T_o is the main disturbance. Units are shown in square brackets.

[Figure 1.2: Room heating process]

1. Physical model. An energy balance for the room requires that the change in energy in the room must equal the net inflow of energy to the room (per unit of time). This yields the following state-space model

    C_V dT/dt = Q + α (T_o − T)   (1.22)
where T [K] is the room temperature, C_V [J/K] is the heat capacity of the room, Q [W] is the heat input (from some heat source), and the term α(T_o − T) [W] represents the net heat loss due to exchange of air and heat conduction through the walls.

2. Operating point. Consider a case where the heat input Q* is 2000 W and the difference between indoor and outdoor temperatures T* − T_o* is 20 K. Then the steady-state energy balance yields α = 2000/20 = 100 W/K. We assume the room heat capacity is constant, C_V = 100 kJ/K. (This value corresponds approximately to the heat capacity of air in a room of about 100 m³; thus we neglect heat accumulation in the walls.)

3. Linear model in deviation variables. If we assume α is constant the model in (1.22) is already linear. Then introducing the deviation variables

    δT(t) = T(t) − T*(t) ,   δQ(t) = Q(t) − Q*(t) ,   δT_o(t) = T_o(t) − T_o*(t)

yields

    C_V (d/dt) δT(t) = δQ(t) + α (δT_o(t) − δT(t))   (1.23)

Remark. If α depended on the state variable (T in this example), or on one of the independent variables of interest (Q or T_o in this example), then one would have to include an extra term (T_o* − T*) δα(t) on the right hand side of Equation (1.23).

On taking Laplace transforms in (1.23), assuming δT(t) = 0 at t = 0, and rearranging we get

    δT(s) = [1/(τs + 1)] · [ (1/α) δQ(s) + δT_o(s) ] ;   τ = C_V/α   (1.24)

The time constant for this example is τ = 100·10³/100 = 1000 s ≈ 17 min, which is reasonable. It means that for a step increase in heat input it will take about 17 min for the temperature to reach 63% of its steady-state increase.

4. Linear model in scaled variables. Introduce the following scaled variables

    y(s) = δT(s)/δT_max ,   u(s) = δQ(s)/δQ_max ,   d(s) = δT_o(s)/δT_o,max   (1.25)

In our case the acceptable variations in room temperature T are ±1 K, i.e. δT_max = 1 K. Furthermore, the heat input can vary between 0 W and 4000 W, and since its nominal value is 2000 W we have δQ_max = 2000 W (see Remark 5 on page 8). The expected variations in outdoor temperature are ±10 K, i.e. δT_o,max = 10 K. The model in terms of scaled variables then becomes

    G(s) = 20/(1000s + 1) ,   G_d(s) = 10/(1000s + 1)   (1.26)

Note that the static gain for the input is k = 20, whereas the static gain for the disturbance is k_d = 10. The fact that |k| > |k_d| means that we have enough "power" in the inputs to reject the disturbance at steady state; that is, using an input of magnitude |u| ≤ 1 we can have perfect disturbance rejection (e = 0) for the maximum disturbance (|d| = 1).
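The arithmetic behind the scaled gains in (1.26) can be checked directly (a plain restatement of the numbers in Example 1.1):

```python
# Numbers from Example 1.1 (room heating), eqs. (1.22)-(1.26).
Qstar = 2000.0          # steady-state heat input [W]
dT_io = 20.0            # T* - To* [K]
alpha = Qstar / dT_io   # heat-loss coefficient [W/K] -> 100
CV = 100e3              # room heat capacity [J/K]
tau = CV / alpha        # time constant [s] -> 1000 s, about 17 min

# Scaling: |dT| <= 1 K, |dQ| <= 2000 W, |dTo| <= 10 K
dT_max, dQ_max, dTo_max = 1.0, 2000.0, 10.0
k = (dQ_max / dT_max) / alpha   # static gain of G  -> 20
kd = dTo_max / dT_max           # static gain of Gd -> 10
print(alpha, tau, k, kd)        # 100.0 1000.0 20.0 10.0
```

Since k > kd, an input of less than full scale suffices to cancel the worst-case disturbance at steady state, exactly as stated above.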
The fact that |k_d| > 1 means that we need some control (feedback or feedforward) to keep the output within its allowed bound (|e| ≤ 1) when there is a disturbance of magnitude |d| = 1. We will return with a detailed discussion of this in Section 5.16.2 where we analyze the input-output controllability of the room heating process.
1.6 Notation

There is no standard notation to cover all of the topics covered in this book. We have tried to use the most familiar notation from the literature whenever possible, but an overriding concern has been to be consistent within the book, to ensure that the reader can follow the ideas and techniques through from one chapter to another.

The most important notation is summarized in Figure 1.3, which shows a one degree-of-freedom control configuration with negative feedback, a two degrees-of-freedom control configuration¹, and a general control configuration. The latter can be used to represent a wide class of controllers, including the one and two degrees-of-freedom configurations, as well as feedforward and estimation schemes and many others; and, as we will see, it can also be used to formulate optimization problems for controller design. The symbols used in Figure 1.3 are defined in Table 1.1. Apart from the use of v to represent the controller inputs for the general configuration, this notation is reasonably standard.

Lower-case letters are used for vectors and signals (e.g. u, y, n), and capital letters for matrices, transfer functions and systems (e.g. G, K). Matrix elements are usually denoted by lower-case letters, so g_ij is the ij'th element in the matrix G. However, sometimes we use upper-case letters G_ij, for example if G is partitioned so that G_ij is itself a matrix, or to avoid conflicts in notation. The Laplace variable s is often omitted for simplicity, so we often write G when we mean G(s).

For state-space realizations we use the standard (A, B, C, D) notation. That is, a system G with a state-space realization (A, B, C, D) has a transfer function G(s) = C(sI − A)⁻¹B + D. We sometimes write

    G(s) ≜ [ A  B ; C  D ]   (1.27)

to mean that the transfer function G(s) has a state-space realization given by the quadruple (A, B, C, D).

For closed-loop transfer functions we use S to denote the sensitivity at the plant output, and T ≜ I − S to denote the complementary sensitivity. With negative feedback, S = (I + L)⁻¹ and T = L(I + L)⁻¹, where L is the transfer function around the loop as seen from the output. In most cases L = GK, but if we also include measurement dynamics (y_m = G_m y + n) then L = GKG_m. The corresponding transfer functions as seen from the input of the plant are L_I = KG (or L_I = KG_m G), S_I = (I + L_I)⁻¹ and T_I = L_I(I + L_I)⁻¹.

To represent uncertainty we use perturbations E (not normalized) or perturbations Δ (normalized such that their magnitude (norm) is less than or equal to one). The nominal plant model is G, whereas the perturbed model with uncertainty is denoted G_p (usually for a set of possible perturbed plants) or G′ (usually for a

¹ A one degree-of-freedom controller has only the control error r − y_m as its input, whereas the two degrees-of-freedom controller has two inputs, namely r and y_m.
[Figure 1.3: Control configurations. (a) One degree-of-freedom control configuration; (b) Two degrees-of-freedom control configuration; (c) General control configuration]
Table 1.1: Nomenclature

K – controller, in whatever configuration. Sometimes the controller is broken down into its constituent parts. For example, in the two degrees-of-freedom controller in Figure 1.3(b), K = [K_r; K_y] where K_r is a prefilter and K_y is the feedback controller.

For the conventional control configurations (Figure 1.3(a) and (b)):
G – plant model
G_d – disturbance model
r – reference inputs (commands, setpoints)
d – disturbances (process noise)
n – measurement noise
y – plant outputs. These signals include the variables to be controlled ("primary" outputs with reference values r) and possibly some additional "secondary" measurements to improve control. Usually the signals y are measurable.
y_m – measured y
u – control signals (manipulated plant inputs)

For the general control configuration (Figure 1.3(c)):
P – generalized plant model. It will include G and G_d and the interconnection structure between the plant and the controller. In addition, if P is being used to formulate a design problem, then it will also include weighting functions.
w – exogenous inputs: commands, disturbances and noise
z – exogenous outputs; "error" signals to be minimized, e.g. y − r
v – controller inputs for the general configuration, e.g. commands, measured plant outputs, measured disturbances, etc. For the special case of a one degree-of-freedom controller with perfect measurements we have v = r − y.
particular perturbed plant). For example, with additive uncertainty we may have G_p = G + E, where E = wΔ and w is a weight representing the magnitude of the uncertainty.

By the right-half plane (RHP) we mean the closed right half of the complex plane, including the imaginary axis (jω-axis). The left-half plane (LHP) is the open left half of the complex plane, excluding the imaginary axis. A RHP-pole (unstable pole) is a pole located in the right-half plane, and thus includes poles on the imaginary axis. Similarly, a RHP-zero ("unstable" zero) is a zero located in the right-half plane.

We use Aᵀ to denote the transpose of a matrix A, and Aᴴ to represent its complex conjugate transpose.

Mathematical terminology

The symbol ≜ is used to denote equal by definition, ⇔ is used to denote equivalent by definition, and A ≡ B means that A is identically equal to B.

Let A and B be logic statements. Then the following expressions are equivalent:

    A ⇐ B
    A if B, or: If B then A
    A is necessary for B
    B ⇒ A, or: B implies A
    B is sufficient for A
    B only if A
    not A ⇒ not B

The remaining notation, special terminology and abbreviations will be defined in the text.
2 CLASSICAL FEEDBACK CONTROL

In this chapter, we review the classical frequency-response techniques for the analysis and design of single-loop (single-input single-output, SISO) feedback control systems. These loop-shaping techniques have been successfully used by industrial control engineers for decades, and have proved to be indispensable when it comes to providing insight into the benefits, limitations and problems of feedback control. During the 1980's the classical methods were extended to a more formal method based on shaping closed-loop transfer functions, for example, by considering the H∞ norm of the weighted sensitivity function. We introduce this method at the end of the chapter.

The same underlying ideas and techniques will recur throughout the book as we present practical procedures for the analysis and design of multivariable (multi-input multi-output, MIMO) control systems.

2.1 Frequency response

On replacing s by jω in a transfer function model G(s) we get the so-called frequency response description. Frequency responses can be used to describe:

1. A system's response to sinusoids of varying frequency.
2. The frequency content of a deterministic signal via the Fourier transform.
3. The frequency distribution of a stochastic signal via the power spectral density function.

In this book we use the first interpretation, namely that of frequency-by-frequency sinusoidal response. This interpretation has the advantage of being directly linked to the time domain, and at each frequency ω the complex number G(jω) (or complex matrix for a MIMO system) has a clear physical interpretation: it gives the response to an input sinusoid of frequency ω. This will be explained in more detail below. For the other two interpretations we cannot assign a clear physical meaning to G(jω) or y(jω) at a particular frequency; it is the distribution relative to other frequencies which matters then.
One important advantage of a frequency response analysis of a system is that it provides insight into the benefits and tradeoffs of feedback control. Although this insight may be obtained by viewing the frequency response in terms of its relationship between power spectral densities, as is evident from the excellent treatment by Kwakernaak and Sivan (1972), we believe that the frequency-by-frequency sinusoidal response interpretation is the most transparent and useful.

Frequency-by-frequency sinusoids

A physical interpretation of the frequency response for a stable linear system y = G(s)u is as follows. Apply a sinusoidal input signal with frequency ω [rad/s] and magnitude u₀, such that

    u(t) = u₀ sin(ωt + α)

This input signal is persistent, that is, it has been applied since t = −∞. Then the output signal is also a persistent sinusoid of the same frequency, namely

    y(t) = y₀ sin(ωt + β)

Here u₀ and y₀ represent magnitudes and are therefore both non-negative. Note that the output sinusoid has a different amplitude y₀ and is also shifted in phase from the input by

    φ ≜ β − α

Importantly, it can be shown that y₀/u₀ and φ can be obtained directly from the Laplace transform G(s) after inserting the imaginary number s = jω and evaluating the magnitude and phase of the resulting complex number G(jω). We have

    y₀/u₀ = |G(jω)| ,   φ = ∠G(jω) [rad]   (2.1)

For example, let G(jω) = a + jb, with real part a = Re G(jω) and imaginary part b = Im G(jω); then

    |G(jω)| = √(a² + b²) ,   ∠G(jω) = arctan(b/a)   (2.2)

In words, (2.1) says that after sending a sinusoidal signal through a system G(s), the signal's magnitude is amplified by a factor |G(jω)| and its phase is shifted by ∠G(jω). In Figure 2.1, this statement is illustrated for the following first-order delay system (time in seconds):

    G(s) = k e^(−θs)/(τs + 1) ;   k = 5 ,   θ = 2 ,   τ = 10   (2.3)

It is important that the reader has this picture of a system's frequency-by-frequency sinusoidal response in mind when reading the rest of the book.
For example, it is needed to understand the response of a multivariable system in terms of its singular value decomposition.
[Figure 2.1: Sinusoidal response for system G(s) = 5e^(−2s)/(10s+1) at frequency ω = 0.2 rad/s]

At frequency ω = 0.2 rad/s, we see that the output y lags behind the input by about a quarter of a period and that the amplitude of the output is approximately twice that of the input. More accurately, the amplification is

    |G(jω)| = k/√((τω)² + 1) = 5/√((10ω)² + 1) = 2.24

and the phase shift is

    φ = ∠G(jω) = −arctan(τω) − θω = −arctan(10ω) − 2ω = −1.51 rad = −86.5°

G(jω) is called the frequency response of the system G(s). It describes how the system responds to persistent sinusoidal inputs of frequency ω. The magnitude of the frequency response, |G(jω)|, being equal to y₀(ω)/u₀(ω), is also referred to as the system gain. Sometimes the gain is given in units of dB (decibel), defined as

    A [dB] = 20 log₁₀ A   (2.4)

For example, A = 2 corresponds to 6.02 dB, A = √2 corresponds to 3.01 dB, and A = 1 corresponds to 0 dB.

Both |G(jω)| and ∠G(jω) depend on the frequency ω. This dependency may be plotted explicitly in Bode plots (with ω as independent variable) or somewhat implicitly in a Nyquist plot (phase plane plot). In Bode plots we usually employ a log-scale for frequency and gain, and a linear scale for the phase. In Figure 2.2, the Bode plots are shown for the system in (2.3). We note that in this case both the gain and phase fall monotonically with frequency. This is quite common for process control applications. The delay θ only shifts the sinusoid in time, and thus affects the phase but not the gain. The system gain |G(jω)| is equal to k at low frequencies; this is the steady-state gain and is obtained by setting s = 0 (or ω = 0). The gain remains relatively constant up to the break frequency 1/τ where it starts falling sharply. Physically, the system responds too slowly to let high-frequency ("fast") inputs have much effect on the outputs, and sinusoidal inputs with ω > 1/τ are attenuated by the system dynamics.
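The gain and phase in the example above can be checked directly by evaluating G at s = jω (a minimal numeric sketch of (2.1)-(2.3)):

```python
# Frequency response of the first-order delay system (2.3),
# G(s) = k e^(-theta*s)/(tau*s + 1), evaluated at s = j*omega.
import cmath

k, theta, tau = 5.0, 2.0, 10.0

def G(s):
    return k * cmath.exp(-theta * s) / (tau * s + 1)

omega = 0.2
g = G(1j * omega)
gain = abs(g)              # amplification |G(jw)|
phase = cmath.phase(g)     # phase shift [rad]

print(round(gain, 2), round(phase, 2))   # 2.24 -1.51
```

This reproduces the values derived analytically above: a gain of about 2.24 and a phase lag of about 1.51 rad (86.5°) at ω = 0.2 rad/s.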
[Figure 2.2: Frequency response (Bode plots) of G(s) = 5e^(−2s)/(10s+1)]

The frequency response is also useful for an unstable plant G(s), which by itself has no steady-state response. Let G(s) be stabilized by feedback control, and consider applying a sinusoidal forcing signal to the stabilized system. In this case all signals within the system are persistent sinusoids with the same frequency ω, and G(jω) yields as before the sinusoidal response from the input to the output of G(s).

Phasor notation. From Euler's formula for complex numbers we have that e^(jz) = cos z + j sin z. It then follows that sin(ωt) is equal to the imaginary part of the complex function e^(jωt), and we can write the time domain sinusoidal response in complex form as follows:

    u(t) = u₀ Im e^(j(ωt+α))   gives, as t → ∞,   y(t) = y₀ Im e^(j(ωt+β))   (2.5)

where

    y₀ = |G(jω)| u₀ ,   β = ∠G(jω) + α   (2.6)

and |G(jω)| and ∠G(jω) are defined in (2.2). Now introduce the complex numbers

    u(ω) ≜ u₀ e^(jα) ,   y(ω) ≜ y₀ e^(jβ)   (2.7)

where we have used ω as an argument because y₀ and β depend on frequency, and in some cases so may u₀ and α. Since G(jω) = |G(jω)| e^(j∠G(jω)), the sinusoidal response in (2.5) and (2.6) can then be written in complex form as follows

    y(ω) e^(jωt) = G(jω) u(ω) e^(jωt)   (2.8)

or, because the term e^(jωt) appears on both sides,

    y(ω) = G(jω) u(ω)   (2.9)
which we refer to as the phasor notation. At each frequency, u(ω), y(ω) and G(jω) are complex numbers, and the usual rules for multiplying complex numbers apply. We will use this phasor notation throughout the book. Thus whenever we use notation such as u(ω) (with ω and not jω as an argument), the reader should interpret this as a (complex) sinusoidal signal, u(ω)e^(jωt). Note that u(ω) is not equal to u(s) evaluated at s = ω or s = jω, nor is it equal to u(t) evaluated at t = ω. (2.9) also applies to MIMO systems, where u(ω) and y(ω) are complex vectors representing the sinusoidal signal in each channel and G(jω) is a complex matrix.

Minimum phase systems. For stable systems which are minimum phase (no time delays or right-half plane (RHP) zeros) there is a unique relationship between the gain and phase of the frequency response. This may be quantified by the Bode gain-phase relationship, which gives the phase of G (normalized¹ such that G(0) > 0) at a given frequency ω₀ as a function of |G(jω)| over the entire frequency range:

    ∠G(jω₀) = (1/π) ∫₋∞^∞ [d ln|G(jω)| / d ln ω] · ln |(ω + ω₀)/(ω − ω₀)| · (dω/ω)   (2.10)

The name minimum phase refers to the fact that such a system has the minimum possible phase lag for the given magnitude response |G(jω)|. The local slope of the magnitude in log-variables at frequency ω₀ is

    N(ω₀) = [ d ln|G(jω)| / d ln ω ] at ω = ω₀

The term ln |(ω + ω₀)/(ω − ω₀)| in (2.10) is infinite at ω = ω₀, so it follows that ∠G(jω₀) is primarily determined by the local slope N(ω₀). Also, ∫₀^∞ ln |(ω + ω₀)/(ω − ω₀)| (dω/ω) = π²/2, which justifies the commonly used approximation for stable minimum phase systems

    ∠G(jω₀) ≈ (π/2) N(ω₀) [rad] = 90° · N(ω₀)   (2.11)

The approximation is exact for the system G(s) = 1/sⁿ (where N(ω) = −n), and it is good for stable minimum phase systems except at frequencies close to those of resonance (complex) poles or zeros.

RHP-zeros and time delays contribute additional phase lag to a system when compared to that of a minimum phase system with the same gain (hence the term non-minimum phase system). For example, the time delay system e^(−θs) has a constant gain of 1, but its phase is −θω [rad] (and not 0 [rad] as it would be for the minimum phase system G(s) = 1 of the same gain). Similarly, the system G(s) = (−s + z)/(s + z), with a RHP-zero at s = z, has a constant gain of 1, but its phase is −2 arctan(ω/z) [rad] (and not 0 [rad]).

¹ The normalization of G(s) is necessary to handle systems such as 1/(s+2) and 1/(−s−2), which have equal gain, are stable and minimum phase, but their phases differ by 180°. Systems with integrators may be treated by replacing 1/s by 1/(s+ε), where ε is a small positive number.
Straight-line approximations (asymptotes). For the design methods used in this book it is useful to be able to sketch Bode plots quickly, and in particular the magnitude (gain) diagram. The reader is therefore advised to become familiar with asymptotic Bode plots (straight-line approximations). For example, for a transfer function

    G(s) = k (s + z₁)(s + z₂)··· / [ (s + p₁)(s + p₂)··· ]   (2.12)

the asymptotic Bode plots of G(jω) are obtained by using the approximation jω + a ≈ a for ω < |a| and jω + a ≈ jω for ω > |a|, for each term (s + a). These approximations yield straight lines on a log-log plot which meet at the so-called break point frequency ω = |a|. In (2.12), therefore, the frequencies z₁, z₂, ..., p₁, p₂, ... are the break points where the asymptotes meet. For complex poles or zeros, the term s² + 2ζω₀s + ω₀² (where |ζ| < 1) is approximated by ω₀² for ω < ω₀ and by s² = (jω)² = −ω² for ω > ω₀. The magnitude of a transfer function is usually close to its asymptotic value, and the only case when there is significant deviation is around the resonance frequency ω₀ for complex poles or zeros with a damping |ζ| of about 0.3 or less.

In Figure 2.3, the Bode plots are shown for

    L₁(s) = 30 (s + 1) / [ (s + 0.01)²(s + 10) ]   (2.13)

The asymptotes (straight-line approximations) are shown by dotted lines. In this example the asymptotic slope of |L₁| is 0 up to the first break frequency at ω₁ = 0.01 rad/s, where we have two poles and the slope then changes to N = −2. Then at ω₂ = 1 rad/s there is a zero and the slope changes to N = −1. Finally, there is a break frequency corresponding to a pole at ω₃ = 10 rad/s, and so the slope is N = −2 at this and higher frequencies.

[Figure 2.3: Bode plots of transfer function L₁(s) = 30(s+1)/((s+0.01)²(s+10)). The asymptotes are given by dotted lines. The vertical dotted lines on the upper plot indicate the break frequencies ω₁, ω₂ and ω₃.]
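How closely the asymptotes track the true magnitude can be checked at any frequency; a small numeric sketch for (2.13), at a frequency in the slope −2 region:

```python
# Asymptotic vs exact gain for L1(s) = 30(s+1)/((s+0.01)^2 (s+10)), eq. (2.13).
def L1(s):
    return 30 * (s + 1) / ((s + 0.01)**2 * (s + 10))

# At w = 0.1 rad/s (between the break points 0.01 and 1) the asymptote uses
# (jw + 0.01) ~ jw, (jw + 1) ~ 1, (jw + 10) ~ 10, i.e. |L1| ~ 30/(10 w^2).
w = 0.1
exact = abs(L1(1j * w))
asymptote = 30 / (10 * w**2)
print(round(exact, 1), round(asymptote, 1))   # 298.5 300.0
```

The deviation here is about 0.5%, consistent with the statement that the magnitude is usually close to its asymptotic value away from resonant break points.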
We note that the magnitude follows the asymptotes closely, whereas the phase does not. The asymptotic phase jumps at the break frequency by −90° (LHP-pole or RHP-zero) or +90° (LHP-zero or RHP-pole).

Remark. An improved phase approximation of a term s + a is obtained by connecting the phase asymptotes by a straight line (on a Bode plot with logarithmic frequency), instead of jumping: the line starts one decade before the break frequency (at a/10), passes through the correct phase change at the break frequency a, and ends one decade after the break frequency (at 10a). For the example in Figure 2.3, this much improved phase approximation drops from 0° to −180° between frequencies 0.001 (= ω₁/10) and 0.1, increases up to −135° at frequency 1, remains at −135° up to frequency 10, before it drops down to −180° at frequency 100 (= 10ω₃).

2.2 Feedback control

[Figure 2.4: Block diagram of one degree-of-freedom feedback control system]

2.2.1 One degree-of-freedom controller

In most of this chapter, we examine the simple one degree-of-freedom negative feedback structure shown in Figure 2.4. The input to the controller K(s) is r − y_m, where y_m = y + n is the measured output and n is the measurement noise. Thus, the input to the plant is

    u = K(s)(r − y − n)   (2.15)

The objective of control is to manipulate u (design K) such that the control error e remains small in spite of disturbances d. The control error e is defined as e = y − r, where r denotes the reference value (setpoint) for the output.

Remark. In the literature, the control error is frequently defined as r − y_m, which is often the controller input. However, this is not a good definition of an error variable. First, the error is normally defined as the actual value (here y) minus the desired value (here r). Second, the error should involve the actual value (y) and not the measured value (y_m).

2.2.2 Closed-loop transfer functions

The plant model is written as

    y = G(s)u + G_d(s)d   (2.14)

and for a one degree-of-freedom controller the substitution of (2.15) into (2.14) yields

    y = GK(r − y − n) + G_d d

or

    (I + GK)y = GK r + G_d d − GK n   (2.16)

and hence the closed-loop response is

    y = (I + GK)⁻¹GK r + (I + GK)⁻¹ G_d d − (I + GK)⁻¹GK n = T r + S G_d d − T n   (2.17)

The control error is

    e = y − r = −S r + S G_d d − T n   (2.18)

where we have used the fact that T − I = −S. The corresponding plant input signal is

    u = KS r − KS G_d d − KS n   (2.19)

The following notation and terminology are used:

    L = GK – loop transfer function
    S ≜ (I + GK)⁻¹ = (I + L)⁻¹ – sensitivity function
    T ≜ (I + GK)⁻¹GK = L(I + L)⁻¹ – complementary sensitivity function   (2.20)
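These closed-loop relations can be evaluated frequency by frequency. A minimal numeric sketch for a SISO loop; the plant is the first-order delay system (2.3), and the proportional gain Kc is an arbitrary illustrative choice, not a value from the text:

```python
# Closed-loop transfer functions at one frequency for a SISO loop L = G*K.
# Plant from (2.3); Kc is an arbitrary illustrative gain.
import cmath

def G(s):
    return 5 * cmath.exp(-2 * s) / (10 * s + 1)

Kc = 0.2            # illustrative proportional controller K(s) = Kc
w = 0.1             # frequency [rad/s]
s = 1j * w

L = G(s) * Kc       # loop transfer function
S = 1 / (1 + L)     # sensitivity
T = L / (1 + L)     # complementary sensitivity

print(abs(S + T - 1) < 1e-12)   # True: S + T = I holds at every frequency
print(round(abs(S), 2))         # sensitivity magnitude at this frequency
```

The identity S + T = I holds exactly at every frequency, which is why T is called the complementary sensitivity.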
The term complementary sensitivity for T follows from the identity:

    S + T = I   (2.21)

To derive (2.21), write S + T = (I + L)⁻¹ + (I + L)⁻¹L and factor out the term (I + L)⁻¹. The term sensitivity function is natural because S gives the sensitivity reduction afforded by feedback. To see this, consider the "open-loop" case, i.e. with no feedback. Then

    y = GK r + G_d d + 0 · n   (2.22)

and a comparison with (2.17) shows that, with the exception of noise, the response with feedback is obtained by pre-multiplying the right hand side by S.

Remark 1 Actually, the above is not the original reason for the name "sensitivity". Bode first called S sensitivity because it gives the relative sensitivity of the closed-loop transfer function T to the relative plant model error. In particular, at a given frequency ω we have for a SISO plant, by straightforward differentiation of T, that

    dT/T = S · dG/G   (2.23)

Remark 2 Equations (2.14)-(2.22) are written in matrix form because they also apply to MIMO systems. Of course, for SISO systems we may write S + T = 1, S = 1/(1 + L), T = L/(1 + L), and so on.

Remark 3 In general, closed-loop transfer functions for SISO systems with negative feedback may be obtained from the rule

    OUTPUT = [direct / (1 + loop)] · INPUT   (2.24)

where "direct" represents the transfer function for the direct effect of the input on the output (with the feedback path open) and "loop" is the transfer function around the loop (denoted L(s)). In the above case L = GK. If there is also a measurement device, G_m(s), in the loop, then L(s) = GKG_m. The rule in (2.24) is easily derived by generalizing (2.16). In Section 3.2, we present a more general form of this rule which also applies to multivariable systems.

2.2.3 Why feedback?

At this point it is pertinent to ask why we should use feedback control at all, rather than simply using feedforward control. A "perfect" feedforward controller is obtained by removing the feedback signal and using the controller

    K_r(s) = G⁻¹(s)   (2.25)

(we assume for now that it is possible to obtain and physically realize such an inverse, although this may of course not be true). We assume that the plant and controller are both stable and that all the disturbances are known; that is, we know G_d d, the effect of the disturbances on the outputs.
Then, with r − G_d d as the input to the feedforward controller, this feedforward controller would yield perfect control:

    y = Gu + G_d d = G K_r (r − G_d d) + G_d d = r

Unfortunately, G is never an exact model, and the disturbances are never known exactly.
The fundamental reasons for using feedback control are therefore the presence of:

1. Signal uncertainty – unknown disturbance (d)
2. Model uncertainty (Δ)
3. An unstable plant

The third reason follows because unstable plants can only be stabilized by feedback (see internal stability in Chapter 4). The ability of feedback to reduce the effect of model uncertainty is of crucial importance in controller design.

2.3 Closed-loop stability

One of the main issues in designing feedback controllers is stability. If the feedback gain is too large, then the controller may "overreact" and the closed-loop system becomes unstable. This is illustrated next by a simple example.

[Figure 2.5: Effect of proportional gain Kc on the closed-loop response y(t) of the inverse response process; the response for Kc = 3 is unstable]

Example 2.1 Inverse response process. Consider the plant (time in seconds)

    G(s) = 3(−2s + 1) / [ (5s + 1)(10s + 1) ]   (2.26)

This is one of two main example processes used in this chapter to illustrate the techniques of classical control. The model has a right-half plane (RHP) zero at s = 0.5 rad/s. This imposes a fundamental limitation on control, and high controller gains will induce closed-loop instability.
This is illustrated for a proportional (P) controller K(s) = Kc (a constant) in Figure 2.5, which shows the closed-loop response y = Tr, T = KcG(1 + KcG)⁻¹, to a step change in the reference (r(t) = 1 for t ≥ 0) for four different values of Kc; the loop transfer function is L(s) = KcG(s). The system is seen to be stable for Kc ≤ 2.5, and unstable for Kc = 3. The controller gain at the limit of instability, Ku = 2.5, is sometimes called the ultimate gain, and for this value the system is seen to cycle continuously with a period Pu = 15.2 s, corresponding to the frequency ωu ≜ 2π/Pu = 0.42 rad/s.

Two methods are commonly used to determine closed-loop stability:

1. The poles of the closed-loop system are evaluated. The system is stable if and only if all the closed-loop poles are in the open left-half plane (LHP) (that is, poles on the imaginary axis are considered "unstable"). The poles are also equal to the eigenvalues of the state-space A-matrix, and this is usually how the poles are computed numerically.

2. The frequency response (including negative frequencies) of L(jω) is plotted in the complex plane and the number of encirclements it makes of the critical point −1 is counted. By Nyquist's stability criterion (for which a detailed statement is given in Theorem 4.7) closed-loop stability is inferred by equating the number of encirclements to the number of open-loop unstable poles (RHP-poles).

For open-loop stable systems where ∠L(jω) falls with frequency such that ∠L(jω) crosses −180° only once (from above, at frequency ω₁₈₀), one may equivalently use Bode's stability condition, which says that the closed-loop system is stable if and only if the loop gain |L| is less than 1 at this frequency; that is,

    Stability ⇔ |L(jω₁₈₀)| < 1   (2.27)

where ω₁₈₀ is the phase crossover frequency defined by ∠L(jω₁₈₀) = −180°.

Method 1, which involves computing the poles, is best suited for numerical calculations. However, time delays must first be approximated as rational transfer functions, e.g. Padé approximations. Method 2, which is based on the frequency response, has a nice graphical interpretation, and may also be used for systems with time delays. Furthermore, it provides useful measures of relative stability and forms the basis for several of the robustness tests used later in this book.
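Method 1 can be sketched numerically for the plant (2.26) under proportional control: the closed-loop poles are the roots of 1 + KcG(s) = 0, i.e. of the quadratic worked out in Example 2.2 below:

```python
# Method 1 (pole calculation) for the inverse response process (2.26)
# with proportional control: closed-loop poles are the roots of
#   (5s+1)(10s+1) + Kc*3*(-2s+1) = 50 s^2 + (15-6*Kc) s + (1+3*Kc) = 0
import cmath

def closed_loop_poles(Kc):
    a, b, c = 50.0, 15.0 - 6.0 * Kc, 1.0 + 3.0 * Kc
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

for Kc in (1.5, 2.5, 3.0):
    p1, p2 = closed_loop_poles(Kc)
    stable = max(p1.real, p2.real) < 0
    print(Kc, stable)   # stable only for Kc = 1.5; Kc = 2.5 is the limit

# At the ultimate gain Ku = 2.5 the poles lie on the imaginary axis:
p1, p2 = closed_loop_poles(2.5)
print(round(abs(p1.imag), 3))   # 0.412, i.e. Pu = 2*pi/0.412 ~ 15.2 s
```

The imaginary-axis poles at Kc = 2.5 reproduce the sustained oscillation with period Pu ≈ 15.2 s seen in Figure 2.5.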
where the This is illustrated for a proportional (P) controller Ã ´×µ ÌÖ Ã ´½ · Ã µ ½ Ö to a step change in the reference (Ö´Øµ ½ for response Ý Ø ¼) is shown for four different values of Ã . The controller gain at the limit of instability. ¸ Two methods are commonly used to determine closedloop stability: 1. Method 2. 2. Method ½. and may also be used for systems with time delays. Example 2.26) with proportional control. where Ä is the transfer function around the loop. has a nice graphical interpretation. By Nyquist’s stability criterion (for which a detailed statement is given in Theorem 4. Let us determine the condition for closedloop stability of the plant in (2. The system is stable if and only if all the closedloop poles are in the open lefthalf plane (LHP) (that is. The system is stable if and only if all the closedloop poles are in the LHP. Furthermore. the roots of ½· Ä´×µ ¼ are found. ¾ . corresponding to the frequency Ù ¾ ÈÙ ¼ ¾ rad/s.28) .
But since we are only interested in the half-plane location of the poles, it is not necessary to solve (2.28). Rather, one may consider the coefficients aᵢ of the characteristic equation aₙsⁿ + ··· + a₁s + a₀ = 0 in (2.28), and use the Routh-Hurwitz test to check for stability. For second-order systems, this test says that we have stability if and only if all the coefficients have the same sign. This yields the following stability conditions:

    (15 − 6Kc) > 0;   (1 + 3Kc) > 0

or equivalently −1/3 < Kc < 2.5. With negative feedback (Kc ≥ 0) only the upper bound is of practical interest, and we find that the maximum allowed gain ("ultimate gain") is Ku = 2.5, which agrees with the simulation in Figure 2.5. The poles at the onset of instability may be found by substituting Kc = Ku = 2.5 into (2.28) to get 50s² + 8.5 = 0, i.e. s = ±j0.412. Thus, at the onset of instability we have two poles on the imaginary axis, and the system will be continuously cycling with a frequency ω = 0.412 rad/s corresponding to a period Pu = 2π/ω = 15.2 s. This agrees with the simulation results in Figure 2.5.

2. Stability may also be evaluated from the frequency response of L(s). A graphical evaluation is most enlightening. The Bode plots of the plant (i.e. L(s) with Kc = 1) are shown in Figure 2.6. From these one finds the frequency ω₁₈₀ where ∠L is −180° and then reads off the corresponding gain. This yields |L(jω₁₈₀)| = 0.4Kc, and we get from (2.27) that the system is stable if and only if Kc < 2.5 (as found above). Alternatively, the phase crossover frequency may be obtained analytically from

    ∠L(jω₁₈₀) = −arctan(2ω₁₈₀) − arctan(5ω₁₈₀) − arctan(10ω₁₈₀) = −180°

which gives ω₁₈₀ = 0.412 rad/s as found in the pole calculation above. The loop gain at this frequency is

    |L(jω₁₈₀)| = Kc · 3√((2ω₁₈₀)² + 1) / (√((5ω₁₈₀)² + 1) · √((10ω₁₈₀)² + 1)) = 0.4Kc

The stability condition |L(jω₁₈₀)| < 1 then yields Kc < 2.5 as expected.

Figure 2.6: Bode plots for L(s) = Kc · 3(−2s+1)/((5s+1)(10s+1)) with Kc = 1
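The phase crossover calculation can likewise be done numerically. A small illustrative Python sketch (plant data as in Example 2.2; not the book's code) finds ω₁₈₀ by bisection on the phase expression and then recovers the ultimate gain:

```python
import math

def plant_gain(w):
    # |G(jw)| for G(s) = 3(-2s+1)/((5s+1)(10s+1))
    return 3.0 * math.sqrt(4*w*w + 1) / (math.sqrt(25*w*w + 1) * math.sqrt(100*w*w + 1))

def plant_phase(w):
    # phase in radians: -atan(2w) - atan(5w) - atan(10w), monotone decreasing
    return -math.atan(2*w) - math.atan(5*w) - math.atan(10*w)

# bisection: the phase falls monotonically through -pi
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if plant_phase(mid) > -math.pi:
        lo = mid
    else:
        hi = mid
w180 = 0.5 * (lo + hi)

Ku = 1.0 / plant_gain(w180)        # ultimate gain
Pu = 2.0 * math.pi / w180          # period of sustained oscillation
print(round(w180, 3), round(Ku, 2), round(Pu, 1))   # 0.412 2.5 15.2
```

This reproduces ω₁₈₀ = 0.412 rad/s, Ku = 2.5 and Pu = 15.2 s, in agreement with the Routh-Hurwitz result above.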
2.4 Evaluating closed-loop performance

Although closed-loop stability is an important issue, the real objective of control is to improve performance, that is, to make the output y(t) behave in a more desirable manner. Actually, the possibility of inducing instability is one of the disadvantages of feedback control which has to be traded off against performance improvement. The objective of this section is to discuss ways of evaluating closed-loop performance.

2.4.1 Typical closed-loop responses

The following example, which considers proportional plus integral (PI) control of the inverse response process in (2.26), illustrates what type of closed-loop performance one might expect.

Example 2.3 PI control of the inverse response process. We have already studied the use of a proportional controller for the process in (2.26). We found that a controller gain of Kc = 1.5 gave a reasonably good response, except for a steady-state offset (see Figure 2.5). The reason for this offset is the nonzero steady-state sensitivity function, S(0) = 1/(1 + KcG(0)) = 0.18 (where G(0) = 3 is the steady-state gain of the plant). From e = −Sr it follows that for r = 1 the steady-state control error is −0.18 (as is confirmed by the simulation in Figure 2.5). To remove the steady-state offset we add integral action in the form of a PI controller

    K(s) = Kc (1 + 1/(τI s))          (2.29)

Figure 2.7: Closed-loop response to a step change in reference for the inverse response process with PI control
in Section 2. we ﬁnd that the phase margin (PM) is ¼ ¿ rad = ½ Æ and the gain margin (GM) is ½ ¿.4. The it does not settle to within overshoot (height of peak relative to the ﬁnal value) is about ¾% which is much larger than one would normally like for reference tracking.26). and from the plot of Ä´ µ. and Ã ´×µ ½ ½¿ ´½ · ½¾½ × µ (a ZieglerNichols PI controller) ·½µ´½¼×·½µ The corresponding Bode plots for Ä. . is about ¼ ¿ which is also a bit large. we deﬁne stability margins. and the peak value of Ì is ÅÌ ¿ ¿ which again are high according to normal design rules.31) ¦ Magnitude 10 0 Ì Ä Ë Ì 10 −2 10 90 −2 10 −1 10 0 10 1 Phase 0 −90 Ì Ä −1 Ë −180 −270 −2 10 10 10 0 Frequency [rad/s] 10 1 Figure 2.7. and we get Ã obtained from the model ´×µ.31) to compute ÃÙ and ÈÙ for the process in (2. ´ Ùµ ½ ¼Æ .1 Use (2. Ë and Ì when ´×µ ´ ×¿´ ¾×·½µ . ÈÙ ¾ Ù (2.5). Alternatively. ÃÙ and ÈÙ can be in Figure 2. In our case ÃÙ ½ ½ and Á ½¾ s. with PIcontrol. repeated in Figure 2.11.8: Bode magnitude and phase plots of Ä Ã . Exercise 2. These margins are too small according to common rules of thumb.30) ÃÙ ½ where Ù is deﬁned by ´ Ùµ The closedloop response.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ where ÃÙ is the maximum (ultimate) Pcontroller gain and ÈÙ is the corresponding period ¾ and ÈÙ ½ ¾ s (as observed from the simulation of oscillations. The output Ý ´Øµ has an initial inverse response due to the RHPzero. which is the ratio between subsequent peaks. The decay ratio. but it then ¼ s (the rise time). The peak value of Ë is ÅË ¿ ¾. Ë and Ì are shown in Figure 2. to a step change in reference is shown in Figure 2. Later.8. The settings for Ã and Á can be determined from the classical tuning rules of Ziegler and Nichols (1942): Ã ÃÙ ¾ ¾ Á ÈÙ ½ ¾ (2.3. The response is quite oscillatory and rises quickly and Ý ´Øµ ¼ at Ø % of the ﬁnal value until after Ø s (the settling time).
In summary, for this example, the Ziegler-Nichols PI tunings are somewhat "aggressive" and give a closed-loop system with smaller stability margins and a more oscillatory response than would normally be regarded as acceptable. For disturbance rejection the controller settings may be more reasonable, and one can add a prefilter to improve the response for reference tracking, resulting in a two degrees-of-freedom controller. However, this will not change the stability robustness of the system.

2.4.2 Time domain performance

Step response analysis. The above example illustrates the approach often taken by engineers when evaluating the performance of a control system. That is, one simulates the response to a step in the reference input, and considers the following characteristics (see Figure 2.9):

- Rise time (tr): the time it takes for the output to first reach 90% of its final value, which is usually required to be small.
- Settling time (ts): the time after which the output remains within ±5% of its final value, which is usually required to be small.
- Overshoot: the peak value divided by the final value, which should typically be 1.2 (20%) or less.
- Decay ratio: the ratio of the second and first peaks, which should typically be 0.3 or less.
- Steady-state offset: the difference between the final value and the desired final value, which is usually required to be small.

Figure 2.9: Characteristics of closed-loop response to step in reference

The rise time and settling time are measures of the speed of the response, whereas the overshoot, decay ratio and steady-state offset are related to the quality of the response. Another measure of the quality of the response is:

- Excess variation: the total variation (TV) divided by the overall change at steady state, which should be as close to 1 as possible.
The total variation is the total movement of the output, as illustrated in Figure 2.10. For the cases considered here the overall change is 1, so the excess variation is equal to the total variation.

Figure 2.10: Total variation is TV = Σᵢ vᵢ

The above measures address the output response, y(t). In addition, one should consider the magnitude of the manipulated input (control signal, u), which usually should be as small and smooth as possible. If there are important disturbances, then the response to these should also be considered. Finally, one may investigate in simulation how the controller works if the plant model parameters are different from their nominal values.

Remark 1 Another way of quantifying time domain performance is in terms of some norm of the error signal e(t) = y(t) − r(t). For example, one might use the integral squared error (ISE), or its square root which is the 2-norm of the error signal, ‖e(t)‖₂ = (∫₀^∞ |e(τ)|² dτ)^(1/2). Note that in this case the various objectives related to both the speed and quality of response are combined into one number. One can also take input magnitudes into account by considering, for example, J = (∫₀^∞ (Q|e(t)|² + R|u(t)|²) dt)^(1/2), where Q and R are positive constants.
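The total variation of a sampled response is straightforward to compute. A minimal illustrative Python sketch (function names and the sampled responses are mine, chosen only to show the arithmetic):

```python
def total_variation(y):
    # total movement of the output: sum of absolute consecutive changes
    return sum(abs(b - a) for a, b in zip(y, y[1:]))

def excess_variation(y, y_final):
    # TV divided by the overall change at steady state (ideally close to 1)
    return total_variation(y) / abs(y_final - y[0])

smooth = [0.0, 0.5, 0.75, 1.0]              # monotone step response
oscillatory = [0.0, 1.5, 0.75, 1.25, 1.0]   # overshoot and ringing
print(excess_variation(smooth, 1.0))        # 1.0
print(excess_variation(oscillatory, 1.0))   # 3.0
```

A monotone response gives excess variation 1, while overshoot and ringing inflate it, exactly as the definition above intends.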
This is similar to linear quadratic (LQ) optimal control, but in LQ control one normally considers an impulse rather than a step change in r(t). Actually, in most cases minimizing the 2-norm seems to give a reasonable trade-off between the various objectives listed above. Another advantage of the 2-norm is that the resulting optimization problems (such as minimizing ISE) are numerically easy to solve.

Remark 2 The step response is equal to the integral of the corresponding impulse response, e.g. set u(τ) = 1 in (4.11). So let y = Tr and consider y(t) resulting from an impulse change in r(t). Some thought then reveals that one can compute the total variation as the integrated absolute area (1-norm) of the corresponding impulse response (Boyd and Barratt, 1991, p. 98). That is, for a step change in r the total variation in y is

    TV = ∫₀^∞ |gT(τ)| dτ ≜ ‖gT(t)‖₁          (2.32)

where gT(t) is the impulse response of T.
2.4.3 Frequency domain performance

The frequency response of the loop transfer function, L(jω), or of various closed-loop transfer functions, may also be used to characterize closed-loop performance. Typical Bode plots of L, T and S are shown in Figure 2.8. One advantage of the frequency domain compared to a step response analysis is that it considers a broader class of signals (sinusoids of any frequency). This makes it easier to characterize feedback properties, and in particular system behaviour in the crossover (bandwidth) region. We will now describe some of the important frequency-domain measures used to assess performance, e.g. gain and phase margins, the maximum peaks of S and T, and the various definitions of crossover and bandwidth frequencies used to characterize speed of response.

Gain and phase margins

Let L(s) denote the loop transfer function of a system which is closed-loop stable under negative feedback. A typical Bode plot and a typical Nyquist plot of L(jω) illustrating the gain margin (GM) and phase margin (PM) are given in Figures 2.11 and 2.12, respectively.

Figure 2.11: Typical Bode plot of L(jω) with PM and GM indicated

The gain margin is defined as

    GM = 1/|L(jω₁₈₀)|          (2.33)

where the phase crossover frequency ω₁₈₀ is where the Nyquist curve of L(jω) crosses the negative real axis between −1 and 0, that is

    ∠L(jω₁₈₀) = −180°          (2.34)
If there is more than one such crossing, the largest value of |L(jω₁₈₀)| is taken. The GM is the factor by which the loop gain |L(jω)| may be increased before the closed-loop system becomes unstable. The GM is thus a direct safeguard against steady-state gain uncertainty (error). Typically we require GM > 2. If the Nyquist plot of L crosses the negative real axis between −1 and −∞, then a gain reduction margin can be similarly defined from the smallest value of |L(jω₁₈₀)| of such crossings. On a Bode plot with a logarithmic axis for |L|, the GM (in logarithms, e.g. in dB) is the vertical distance from the unit magnitude line down to |L(jω₁₈₀)|; see Figure 2.11.

The phase margin is defined as

    PM = ∠L(jωc) + 180°          (2.35)

where the gain crossover frequency ωc is where |L(jω)| first crosses 1 from above, that is

    |L(jωc)| = 1          (2.36)

The phase margin tells how much negative phase (phase lag) we can add to L(s) at frequency ωc before the phase at this frequency becomes −180°, which corresponds to closed-loop instability (see Figure 2.12): closed-loop instability occurs if L(jω) encircles the critical point −1. Typically, we require PM larger than 30° or more.

Figure 2.12: Typical Nyquist plot of L(jω) for stable plant with PM and GM indicated
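For the Ziegler-Nichols-tuned loop of Example 2.3 these margins can be found numerically. The sketch below is illustrative Python (not the book's code; the tunings Kc = 1.14 and τI = 12.7 are those quoted above, and the helper names are mine). It locates ωc and ω₁₈₀ by bisection and evaluates GM and PM:

```python
import math

KC, TAU_I = 1.14, 12.7   # Ziegler-Nichols PI settings from Example 2.3

def loop_gain(w):
    # |L(jw)| for K = Kc(1 + 1/(tauI s)) and G = 3(-2s+1)/((5s+1)(10s+1))
    K = KC * math.sqrt(1.0 + 1.0 / (TAU_I * w) ** 2)
    G = 3.0 * math.sqrt(4*w*w + 1) / (math.sqrt(25*w*w + 1) * math.sqrt(100*w*w + 1))
    return K * G

def loop_phase_deg(w):
    # unwrapped phase of L(jw) in degrees (monotone decreasing for this loop)
    ph = -math.atan(1.0 / (TAU_I * w)) - math.atan(2*w) - math.atan(5*w) - math.atan(10*w)
    return math.degrees(ph)

def bisect(f, lo, hi, n=100):
    # find the root of f on [lo, hi], assuming f(lo) > 0 > f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

wc   = bisect(lambda w: loop_gain(w) - 1.0, 1e-4, 10.0)          # gain crossover
w180 = bisect(lambda w: loop_phase_deg(w) + 180.0, 1e-4, 10.0)   # phase crossover
GM = 1.0 / loop_gain(w180)
PM = loop_phase_deg(wc) + 180.0
print(wc, w180, GM, PM)   # GM close to 1.63 and PM close to 19 deg, as quoted above
```

The computed margins agree with the GM = 1.63 and PM = 19° quoted for this design, confirming that the tunings are on the aggressive side.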
From the above arguments we see that gain and phase margins provide stability margins for gain and delay uncertainty. The PM is a direct safeguard against time delay uncertainty: the system becomes unstable if we add a time delay of

    θmax = PM/ωc          (2.37)

Note that the units must be consistent, so if ωc is in [rad/s] then PM must be in radians. It is also important to note that by decreasing the value of ωc (lowering the closed-loop bandwidth, resulting in a slower response) the system can tolerate larger time delay errors.

Example 2.4 For the PI-controlled inverse response process example we have PM = 0.34 rad and ωc = 0.236 rad/s. The allowed time delay error is then θmax = 0.34 rad / 0.236 rad/s = 1.44 s.

In short, the gain and phase margins are used to provide the appropriate trade-off between performance and stability. However, as we show below, the gain and phase margins are closely related to the peak values of |S(jω)| and |T(jω)| and are therefore also useful in terms of performance.

Exercise 2.2 Prove that the maximum additional delay for which closed-loop stability is maintained is given by (2.37).

Exercise 2.3 Derive the approximation for Ku = 1/|G(jωu)| given in (5.73) for a first-order delay system.

Maximum peak criteria

The maximum peaks of the sensitivity and complementary sensitivity functions are defined as

    MS = maxω |S(jω)|,   MT = maxω |T(jω)|          (2.38)

(Note that MS = ‖S‖∞ and MT = ‖T‖∞ in terms of the H∞ norm introduced later.) Typically, it is required that MS is less than about 2 (6 dB) and MT is less than about 1.25 (2 dB). A large value of MS or MT (larger than about 4) indicates poor performance as well as poor robustness. Since S + T = 1, it follows that at any frequency |S| and |T| differ at most by 1, so a large value of MS occurs if and only if MT is large. For stable plants we usually have MS > MT, but this is not a general rule.
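Returning to the delay margin (2.37) of Example 2.4, a one-line Python check (illustrative; the function name is mine):

```python
def delay_margin(pm_rad, wc):
    # maximum additional time delay before instability (eq. 2.37)
    return pm_rad / wc

theta_max = delay_margin(pm_rad=0.34, wc=0.236)
print(round(theta_max, 2))   # 1.44
```

A delay margin of only 1.44 s is small for a process whose dominant time constants are 5 and 10 s, which again points to the aggressiveness of the Ziegler-Nichols tunings.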
An upper bound on MT has been a common design specification in classical control, and the reader may be familiar with the use of M-circles on a Nyquist plot or a Nichols chart used to determine MT from L(jω).

We now give some justification for why we may want to bound the value of MS. Without control (u = 0), we have e = y − r = −r, and with feedback control e = −Sr. Thus, feedback control improves performance in terms of reducing |e| at all frequencies where |S| < 1. Usually, |S| is small at low frequencies; for example, |S(0)| = 0 for systems with integral action. But because all real systems are strictly proper we must at high frequencies have that L → 0, or equivalently S → 1. At intermediate frequencies one cannot avoid in practice a peak value, MS, larger than 1 (e.g. see the remark below). Thus, there is an intermediate frequency range where feedback control degrades performance, and the value of MS is a measure of the worst-case performance degradation. One may also view MS as a robustness measure, as is now explained. To maintain closed-loop stability, the number of encirclements of the critical point −1 by L(jω) must not change, so we want L to stay away from this point. The smallest distance between L(jω) and the −1 point is MS⁻¹, and therefore for robustness, the smaller MS, the better. In summary, both for stability and performance we want MS close to 1.

There is a close relationship between these maximum peaks and the gain and phase margins. Specifically, for a given MS we are guaranteed

    GM ≥ MS/(MS − 1),   PM ≥ 2 arcsin(1/(2MS)) ≥ 1/MS [rad]          (2.39)

For example, with MS = 2 we are guaranteed GM ≥ 2 and PM ≥ 29.0°. Similarly, for a given value of MT we are guaranteed

    GM ≥ 1 + 1/MT,   PM ≥ 2 arcsin(1/(2MT)) > 1/MT [rad]          (2.40)

and therefore with MT = 2 we have GM ≥ 1.5 and PM ≥ 29.0°.

Proof of (2.39) and (2.40): To derive the GM-inequalities notice that L(jω₁₈₀) = −1/GM (since GM = 1/|L(jω₁₈₀)| and L is real and negative at ω₁₈₀), from which we get

    |T(jω₁₈₀)| = 1/(GM − 1),   |S(jω₁₈₀)| = GM/(GM − 1)          (2.41)

and the GM-results follow. To derive the PM-inequalities in (2.39) and (2.40), consider Figure 2.13, where we have

    |1 + L(jωc)| = 2 sin(PM/2)

and since |L(jωc)| = 1 we obtain

    |S(jωc)| = |T(jωc)| = 1/(2 sin(PM/2))          (2.42)

and the inequalities follow. Alternative formulas, which are sometimes used, follow from the identity 2 sin(PM/2) = √(2(1 − cos PM)).

Remark. We note with interest that (2.41) requires |S| to be larger than 1 at frequency ω₁₈₀. This means that provided ω₁₈₀ exists, that is, L(jω) has more than −180° phase lag at some frequency (which is the case for any real system), then the peak of |S(jω)| must exceed 1.

In conclusion, we see that specifications on the peaks of |S(jω)| or |T(jω)| (MS or MT) can make specifications on the gain and phase margins unnecessary. For instance, requiring MS < 2 implies the common rules of thumb GM > 2 and PM > 30°.
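The guaranteed margins implied by (2.39) and (2.40) can be tabulated with a short illustrative Python sketch (function names are mine):

```python
import math

def margins_from_Ms(Ms):
    # lower bounds implied by a sensitivity peak Ms (eq. 2.39); PM in degrees
    gm = Ms / (Ms - 1.0)
    pm = math.degrees(2.0 * math.asin(1.0 / (2.0 * Ms)))
    return gm, pm

def margins_from_Mt(Mt):
    # lower bounds implied by a complementary sensitivity peak Mt (eq. 2.40)
    gm = 1.0 + 1.0 / Mt
    pm = math.degrees(2.0 * math.asin(1.0 / (2.0 * Mt)))
    return gm, pm

gm, pm = margins_from_Ms(2.0)
print(round(gm, 2), round(pm, 1))   # 2.0 29.0
```

With MS = 2 this returns GM ≥ 2 and PM ≥ 29.0°, the values quoted in the text; MT = 2 gives GM ≥ 1.5 with the same PM bound.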
Figure 2.13: At frequency ωc the distance from L(jωc) to the point −1 is |1 + L(jωc)| = 2 sin(PM/2)

2.4.4 Relationship between time and frequency domain peaks

For a change in reference r, the output is y(s) = T(s)r(s). Is there any relationship between the frequency domain peak of T(jω), MT, and any characteristic of the time domain step response, for example the overshoot or the total variation? To answer this, consider a prototype second-order system with complementary sensitivity function

    T(s) = 1/(τ²s² + 2τζs + 1)          (2.43)

For underdamped systems with ζ < 1 the poles are complex and yield oscillatory step responses. With r(t) = 1 (a unit step change), the values of the overshoot and total variation for y(t) are given, together with MT and MS, as a function of ζ in Table 2.1. From Table 2.1 we see that the total variation TV correlates quite well with MT. This is further confirmed by (A.136) and (4.32), which together yield the following general bounds

    MT ≤ TV ≤ (2n + 1)MT          (2.44)

Here n is the order of T(s), which is 2 for our prototype system in (2.43). Given that the response of many systems can be crudely approximated by fairly low-order systems, the bound in (2.44) suggests that MT may provide a reasonable approximation to the total variation. This provides some justification for the use of MT in classical control to evaluate the quality of the response.
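For the prototype system (2.43) the peak MT and the overshoot have standard closed-form expressions (these are textbook second-order results, not formulas stated in the passage above), which gives a convenient cross-check on the table that follows. An illustrative Python sketch:

```python
import math

def overshoot(zeta):
    # peak of the unit step response of T(s) = 1/(tau^2 s^2 + 2 tau zeta s + 1), zeta < 1
    return 1.0 + math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta * zeta))

def Mt(zeta):
    # resonance peak of |T(jw)|; no peak (Mt = 1) for zeta >= 1/sqrt(2)
    if zeta >= 1.0 / math.sqrt(2.0):
        return 1.0
    return 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta * zeta))

for z in (0.1, 0.2, 0.4):
    print(z, round(overshoot(z), 2), round(Mt(z), 2))
# 0.1 1.73 5.03
# 0.2 1.53 2.55
# 0.4 1.25 1.36
```

The printed values match the Overshoot and MT columns of Table 2.1, and make the roles of the damping ratio ζ explicit.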
Table 2.1: Peak values and total variation of prototype second-order system

    ζ       Overshoot   Total variation   MT      MS
    2.0     1           1                 1       1.05
    1.0     1           1                 1       1.15
    0.8     1.02        1.03              1       1.22
    0.6     1.09        1.21              1.04    1.35
    0.4     1.25        1.68              1.36    1.66
    0.2     1.53        3.22              2.55    2.73
    0.1     1.73        6.39              5.03    5.12
    0.01    1.97        63.7              50.0    50.0

    % MATLAB code (Mu toolbox) to generate Table 2.1:
    tau=1; zeta=0.1; t=0:0.01:100;
    T = nd2sys(1,[tau*tau 2*tau*zeta 1]);   % T = 1/(tau^2 s^2 + 2 tau zeta s + 1)
    S = msub(1,T);                          % S = 1 - T
    [A,B,C,D]=unpck(T);
    y1 = step(A,B,C,D,1,t);
    overshoot=max(y1), tv=sum(abs(diff(y1)))
    Mt=hinfnorm(T,1.e-4), Ms=hinfnorm(S,1.e-4)

2.4.5 Bandwidth and crossover frequency

The concept of bandwidth is very important in understanding the benefits and trade-offs involved when applying feedback control. Above we considered peaks of closed-loop transfer functions, MS and MT, which are related to the quality of the response. However, for performance we must also consider the speed of the response, and this leads to considering the bandwidth frequency of the system. In general, a large bandwidth corresponds to a faster rise time, since high frequency signals are more easily passed on to the outputs. A high bandwidth also indicates a system which is sensitive to noise and to parameter variations. Conversely, if the bandwidth is small,
the time response will generally be slow, and the system will usually be more robust.

Loosely speaking, bandwidth may be defined as the frequency range [ω₁, ω₂] over which control is effective. In most cases we require tight control at steady-state so ω₁ = 0, and we then simply call ω₂ = ωB the bandwidth.

The word "effective" may be interpreted in different ways, and this may give rise to different definitions of bandwidth. The interpretation we use is that control is effective if we obtain some benefit in terms of performance. For tracking performance the error is e = −Sr, and we get that feedback is effective (in terms of improving performance) as long as the relative error |e|/|r| = |S| is reasonably small, which we may define to be less than 0.707 in magnitude.² We then get the following definition:

Definition 2.1 The (closed-loop) bandwidth, ωB, is the frequency where |S(jω)| first crosses 1/√2 = 0.707 (≈ −3 dB) from below.

Remark. Another interpretation is to say that control is effective if it significantly changes the output response. For tracking performance, the output is y = Tr, and since without control y = 0, we may say that control is effective as long as |T| is reasonably large, which we may define to be larger than 0.707. This leads to an alternative definition which has traditionally been used to define the bandwidth of a control system: the bandwidth in terms of T, ωBT, is the highest frequency at which |T(jω)| crosses 1/√2 = 0.707 (≈ −3 dB) from above. However, we would argue that this alternative definition, although being closer to how the term is used in some other fields, is less useful for feedback control.

The gain crossover frequency, ωc, defined as the frequency where |L(jω)| first crosses 1 from above, is also sometimes used to define closed-loop bandwidth. It has the advantage of being simple to compute, and usually gives a value between ωB and ωBT. Specifically, for systems with PM < 90° (most practical systems) we have

    ωB < ωc < ωBT          (2.45)

Proof of (2.45): Note that |L(jωc)| = 1, so |S(jωc)| = |T(jωc)|. Thus, when PM = 90° we get |S(jωc)| = |T(jωc)| = 0.707 (see (2.42)), and ωB = ωc = ωBT. For PM < 90° we get |S(jωc)| = |T(jωc)| > 0.707; since ωB is the frequency where |S(jω)| first crosses 0.707 from below, we must have ωB < ωc, and since ωBT is the highest frequency where |T(jω)| crosses 0.707 from above, we must have ωBT > ωc.

Example. Consider the simplest case with a first-order closed-loop response,

    L(s) = ωB/s,   S(s) = s/(s + ωB),   T(s) = ωB/(s + ωB)

The low-frequency asymptote of |S|, namely ω/ωB, crosses 1/√2 at ω ≈ ωB, and in this ideal case the above bandwidth and crossover frequencies are identical: ωB = ωc = ωBT.

Example 2.1 continued. For the PI-controlled inverse response process (which has a RHP-zero and quite aggressive PI tunings, so that GM = 1.63 and PM = 19°) the bandwidth and crossover frequencies come out in the order ωB < ωc < ωBT, confirming (2.45).

From this we have that the situation is generally as follows: up to the frequency ωB, |S| is less than 0.7, and control is effective in terms of improving performance. In the frequency range [ωB, ωBT] control still affects the response, but does not improve performance; in most cases we find that in this frequency range |S| is larger than 1 and control degrades performance. Finally, at frequencies higher than ωBT we have S ≈ 1 and control has no significant effect on the response. The situation just described is illustrated in Example 2.5 below (see Figure 2.15).

² For the simplest case of a first-order S(s) = s/(s + ωB), choosing the value 0.707 makes the bandwidth equal to ωB.
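The ideal first-order case in the example above is easy to verify numerically. An illustrative Python check (the chosen ωB = 2 rad/s is arbitrary; function names are mine):

```python
import math

wB = 2.0   # arbitrary bandwidth for the ideal loop L(s) = wB/s

def S_mag(w):
    # |S(jw)| = |jw / (jw + wB)|
    return abs(1j * w / (1j * w + wB))

def T_mag(w):
    # |T(jw)| = |wB / (jw + wB)|
    return abs(wB / (1j * w + wB))

def L_mag(w):
    # |L(jw)| = wB / w
    return wB / w

# at w = wB all three bandwidth/crossover definitions coincide:
print(round(S_mag(wB), 3), round(T_mag(wB), 3), round(L_mag(wB), 3))   # 0.707 0.707 1.0
```

At ω = ωB we get |S| = |T| = 0.707 and |L| = 1 simultaneously, so ωB = ωc = ωBT, as claimed.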
Example 2.5 Comparison of ωB and ωBT as indicators of performance. An example where ωBT is a poor indicator of performance is the following (we are not suggesting this as a good controller design!):

    L = (−s + z)/(s(τs + τz + 2)),   T = (−s + z)/((s + z)(τs + 1)),   z = 0.1, τ = 1          (2.46)

For this system, both L and T have a RHP-zero at z = 0.1, and we have GM = 2.1, PM = 60.1°, MS = 1.93 and MT = 1. We find that ωB = 0.036 and ωc = 0.054 are both less than z = 0.1 (as one should expect, because the speed of response is limited by the presence of RHP-zeros), whereas ωBT = 1.0 is ten times larger than z. The closed-loop response to a unit step change in the reference is shown in Figure 2.14. The rise time is 31.0 s, which is close to 1/ωB = 28.0 s, but very different from 1/ωBT = 1.0 s, illustrating that ωB is a better indicator of closed-loop performance than ωBT.

Figure 2.14: Step response for system T = (−s + 0.1)/((s + 0.1)(s + 1))

Figure 2.15: Plots of |S| and |T| for system T = (−s + 0.1)/((s + 0.1)(s + 1))

The magnitude Bode plots of |S| and |T| are shown in Figure 2.15. We see that |T| ≈ 1 up to about ωBT. However, in the frequency range from ωB to ωBT the phase of T (not shown)
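The frequencies quoted in Example 2.5 can be reproduced numerically. An illustrative Python sketch (not the book's code; it uses the closed forms S = s(s + 2 + z)/((s + z)(s + 1)) and |T(jω)| = 1/√(ω² + 1), which follow from (2.46) with τ = 1):

```python
import math

Z = 0.1

def S_mag(w):
    # |S(jw)| for S(s) = s(s + 2 + z)/((s + z)(s + 1)), z = 0.1, tau = 1
    return w * math.sqrt(w*w + (2 + Z)**2) / (math.sqrt(w*w + Z*Z) * math.sqrt(w*w + 1))

def T_mag(w):
    # the RHP-zero factor (-s+z)/(s+z) is all-pass, so |T(jw)| = 1/sqrt(w^2 + 1)
    return 1.0 / math.sqrt(w*w + 1.0)

def bisect(f, lo, hi, n=100):
    # find the root of f on [lo, hi], assuming f(lo) < 0 < f(hi)
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

c = 1.0 / math.sqrt(2.0)
wB  = bisect(lambda w: S_mag(w) - c, 1e-4, 0.5)   # |S| crosses 0.707 from below
wBT = bisect(lambda w: c - T_mag(w), 1e-4, 10.0)  # |T| crosses 0.707 from above
print(round(wB, 3), round(wBT, 2))   # 0.036 1.0
```

This confirms the large gap between the two bandwidth definitions for this system: ωB ≈ 0.036 sits below the RHP-zero at z = 0.1, while ωBT = 1.0 sits far above it.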
drops from about −40° to about −220°, so in practice tracking is poor in this frequency range. In conclusion, ωB (which is defined in terms of |S|) and also ωc (in terms of |L|) are good indicators of closed-loop performance, while ωBT (in terms of |T|) may be misleading in some cases. The reason is that we want T ≈ 1 in order to have good performance, and it is not sufficient that |T| ≈ 1; we must also consider its phase. On the other hand, for good performance we want S close to 0, and this will be the case if |S| ≈ 0 irrespective of the phase of S.

2.5 Controller design

We have considered ways of evaluating performance, but one also needs methods for controller design. The Ziegler-Nichols method used earlier is well suited for on-line tuning, but most other methods involve minimizing some cost function. The overall design process is iterative between controller design and performance (or cost) evaluation. If performance is not satisfactory, then one must either adjust the controller parameters directly (for example, by reducing Kc from the value obtained by the Ziegler-Nichols rules) or adjust some weighting factor in an objective function used to synthesize the controller.

There exist a large number of methods for controller design, and some of these will be discussed in Chapter 9. In addition to heuristic rules and on-line tuning, we can distinguish between three main approaches to controller design:

1. Shaping of transfer functions. In this approach the designer specifies the magnitude of some transfer function(s) as a function of frequency, and then finds a controller which gives the desired shape(s).

(a) Loop shaping. This is the classical approach in which the magnitude of the open-loop transfer function, |L(jω)|, is shaped. Usually no optimization is involved and the designer aims to obtain |L(jω)| with a desired bandwidth, slopes etc. We will look at this approach in detail later in this chapter. However, classical loop shaping is difficult to apply for complicated systems, and one may then instead use the Glover-McFarlane H∞ loop-shaping design presented in Chapter 9. The method consists of a second step where optimization is used to make an initial loop-shaping design more robust.

(b) Shaping of closed-loop transfer functions, such as S, T and KS. Optimization is usually used, resulting in various H∞ optimal control problems such as mixed weighted sensitivity; more on this later.

2. The signal-based approach. This involves time domain problem formulations resulting in the minimization of a norm of a transfer function. Here one considers a particular disturbance or reference change and then one tries to optimize the closed-loop response. The "modern" state-space methods from the 1960's, such as Linear Quadratic Gaussian (LQG) control, are based on this signal-oriented approach. In LQG the input signals are assumed to be stochastic (or alternatively impulses in a deterministic setting) and the expected value of the output variance (or the 2-norm) is minimized. These methods may be generalized to include frequency dependent weights on the signals, leading to what is called the Wiener-Hopf (or H₂ norm) design method. By considering sinusoidal signals, frequency-by-frequency, a signal-based H∞ optimal control methodology can be derived in which the H∞ norm of a combination of closed-loop transfer functions is minimized. This approach has attracted significant interest, and may be combined with model uncertainty representations to yield quite complex robust performance problems requiring μ-synthesis, an important topic which will be addressed in later chapters.

3. Numerical optimization. This often involves multi-objective optimization where one attempts to optimize directly the true objectives, such as rise times, stability margins, etc. Computationally, such optimization problems may be difficult to solve, especially if one does not have convexity in the controller parameters. Also, by effectively including performance evaluation and controller design in a single step procedure, the problem formulation is far more critical than in iterative two-step approaches. The numerical optimization approach may also be performed on-line, which might be useful when dealing with cases with constraints on the inputs and outputs. On-line optimization approaches such as model predictive control are likely to become more popular as faster computers and more efficient and reliable computational algorithms are developed.

2.6 Loop shaping

In the classical loop-shaping approach to controller design, "loop shape" refers to the magnitude of the loop transfer function L = GK as a function of frequency. An understanding of how K can be selected to shape this loop gain provides invaluable insight into the multivariable techniques and concepts which will be presented later in the book, and so we will discuss loop shaping in some detail in the next two sections.

2.6.1 Trade-offs in terms of L

Recall equation (2.19), which yields the closed-loop response in terms of the control error e = y − r:

    e = −(I + L)⁻¹ r + (I + L)⁻¹ Gd d − (I + L)⁻¹L n          (2.47)

where S = (I + L)⁻¹ is the sensitivity function and T = (I + L)⁻¹L the complementary sensitivity function, so that e = −Sr + SGd d − Tn.
For "perfect control" we want e = y − r = 0; that is, we would like

    e ≈ 0 · d + 0 · r + 0 · n

The first two requirements in this equation, namely disturbance rejection and command tracking, are obtained with S ≈ 0, or equivalently, T ≈ I. Since S = (I + L)⁻¹, this implies that the loop transfer function L must be large in magnitude. On the other hand, the requirement for zero noise transmission implies that T ≈ 0, or equivalently, S ≈ I, which is obtained with L ≈ 0. This illustrates the fundamental nature of feedback design, which always involves a trade-off between conflicting objectives; in this case between large loop gains for disturbance rejection and tracking, and small loop gains to reduce the effect of noise.

It is also important to consider the magnitude of the control action u (which is the input to the plant). We want u small because this causes less wear and saves input energy, and also because u is often a disturbance to other parts of the system (e.g. consider opening a window in your office to adjust your comfort and the undesirable disturbance this will impose on the air conditioning system for the building). In particular, we usually want to avoid fast changes in u. The control action is given by u = K(r − ym) and we find, as expected, that a small u corresponds to small controller gains and a small L = GK.

The most important design objectives which necessitate trade-offs in feedback control are summarized below:

1. Performance, good disturbance rejection: needs large controller gains, i.e. L large.
2. Performance, good command following: L large.
3. Stabilization of unstable plant: L large.
4. Mitigation of measurement noise on plant outputs: L small.
5. Small magnitude of input signals: K small and L small.
6. Physical controller must be strictly proper: K → 0 and L → 0 at high frequencies.
7. Nominal stability (stable plant): L small (because of RHP-zeros and time delays).
8. Robust stability (stable plant): L small (because of uncertain or neglected dynamics).

Fortunately, the conflicting design objectives mentioned above are generally in different frequency ranges, and we can meet most of the objectives by using a large loop gain (|L| > 1) at low frequencies below crossover, and a small gain (|L| < 1) at high frequencies above crossover.

2.6.2 Fundamentals of loop-shaping design

By loop shaping we mean a design procedure that involves explicitly shaping the magnitude of the loop transfer function, |L(jω)|. Here L(s) = G(s)K(s), where K(s) is the feedback controller to be designed and G(s) is the product of all other transfer functions around the loop, including the plant, the actuator and the
Essentially, to get the benefits of feedback control we want the loop gain, |L(jω)|, to be as large as possible within the bandwidth region. However, due to time delays, RHP-zeros, unmodelled high-frequency dynamics and limitations on the allowed manipulated inputs, the loop gain has to drop below one at and above some frequency which we call the crossover frequency ω_c. Thus, disregarding stability for the moment, it is desirable that |L(jω)| falls sharply with frequency. To measure how |L| falls with frequency we consider the logarithmic slope N = d ln|L| / d ln ω. For example, a slope N = −1 implies that |L| drops by a factor of 10 when ω increases by a factor of 10. If the gain is measured in decibels (dB), then a slope of N = −1 corresponds to −20 dB/decade. The value of −N at high frequencies is often called the roll-off rate.

The design of L(s) is most crucial and difficult in the crossover region between ω_c (where |L| = 1) and ω_180 (where the phase of L is −180°). For stability, we at least need the loop gain to be less than 1 at frequency ω_180, that is, |L(jω_180)| < 1. Thus, to get a high bandwidth (fast response) we want ω_c, and therefore ω_180, to be large; that is, we want the phase lag in L to be small. Unfortunately, this is not consistent with the desire that |L(jω)| should fall sharply. For example, the loop transfer function L = 1/s^n (which has a slope N = −n on a log-log plot) has a phase of −n · 90°. Thus, to have a phase margin of 45° the phase of L must stay above −135°, and the slope of |L| at crossover cannot exceed N = −1.5.

In addition, if the slope is made steeper at lower or higher frequencies, then this will add unwanted phase lag at intermediate frequencies. As an example, consider L_1(s) given in (2.13) with the Bode plot shown in Figure 2.3. Here the slope of the asymptote of |L| is −1 at the gain crossover frequency (where |L_1(jω_c)| = 1), which by itself gives −90° phase lag. However, due to the influence of the steeper slopes of −2 at lower and higher frequencies, there is a "penalty" of about −35° at crossover, so the actual phase of L_1 at ω_c is approximately −125°. The situation becomes even worse for cases with delays or RHP-zeros in L(s), which add undesirable phase lag to L without contributing to a desirable negative slope in |L|. In practice, the additional phase lag from delays and RHP-zeros may be −30° or more.

In summary, a desired loop shape for |L(jω)| typically has a slope of about −1 in the crossover region, and a slope of −2 or higher beyond this frequency; that is, the roll-off is 2 or larger. Also, with a proper controller, which is required for any real system, L = GK rolls off at least as fast as G. At low frequencies, the desired shape of |L| depends on what disturbances and references we are designing for. For example, if we are considering step changes in the references or disturbances which affect the outputs as steps, then a slope for |L| of −1 at low frequencies is acceptable. If the references or disturbances require the outputs to change in a ramp-like fashion, then a slope of −2 is required. In practice, integrators are included in the controller to get the desired low-frequency performance, and for offset-free reference tracking the rule is that L(s) must contain at least one integrator for each integrator in r(s).
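The slope/phase relation used above (L = 1/s^n has slope N = −n and phase −n·90°) is easy to verify numerically. The following sketch (NumPy only; the frequency range is an arbitrary choice) checks n = 1 and n = 1.5, the latter being the steepest slope compatible with a 45° phase margin.

```python
import numpy as np

# Logarithmic slope N = d ln|L| / d ln(omega) and phase of L(s) = 1/s^n,
# evaluated on a log-spaced frequency grid.
w = np.logspace(-2, 2, 400)            # frequency grid [rad/s]

results = {}
for n in (1.0, 1.5):
    L = 1.0 / (1j * w) ** n            # L(jw) = 1/(jw)^n
    slope = np.gradient(np.log(np.abs(L)), np.log(w))   # should equal -n
    phase_deg = np.angle(L, deg=True)                   # should equal -n*90
    results[n] = (slope, phase_deg)
```

The same check applied to n = 2 gives a phase of ±180°, which is exactly why a roll-off of 2 cannot be sustained through crossover.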
To see this, let L(s) = L̂(s)/s^{n_I}, where L̂(0) is non-zero and finite and n_I is the number of integrators in L(s) (sometimes n_I is called the system type). Consider a reference signal of the form r(s) = 1/s^{n_r}. For example, if r(t) is a unit step, then r(s) = 1/s (n_r = 1), and if r(t) is a ramp, then r(s) = 1/s² (n_r = 2). The final value theorem for Laplace transforms is

lim_{t→∞} e(t) = lim_{s→0} s e(s)   (2.48)

In our case, the control error is

e(s) = −1/(1 + L(s)) · r(s) = −s^{n_I}/(s^{n_I} + L̂(s)) · 1/s^{n_r}   (2.49)

and to get zero offset (i.e. e(t→∞) = 0) we must from (2.48) require n_I ≥ n_r.

In conclusion, one can define the desired loop transfer function in terms of the following specifications:

1. The gain crossover frequency, ω_c, where |L(jω_c)| = 1.
2. The shape of L(jω), e.g. in terms of the slope of |L(jω)| in certain frequency ranges. Typically, we desire a slope of about N = −1 around crossover, and a larger roll-off at higher frequencies. The desired slope at lower frequencies depends on the nature of the disturbance or reference signal.
3. The system type, defined as the number of pure integrators in L(s).

In Section 2.6.4, we discuss how to specify the loop shape when disturbance rejection is the primary objective of control. Loop-shaping design is typically an iterative procedure where the designer shapes and reshapes |L(jω)| after computing the phase and gain margins, the peaks of closed-loop frequency responses (M_T and M_S), selected closed-loop time responses, the magnitude of the input signal, etc. The procedure is illustrated next by an example.

Example 2.6 Loop-shaping design for the inverse response process. We will now design a loop-shaping controller for the example process in (2.26), which has a RHP-zero at s = 0.5. The plant and our choice for the loop shape is

G(s) = 3(−2s + 1) / ((5s + 1)(10s + 1)),   L(s) = 3K(−2s + 1) / (s(2s + 1)(0.33s + 1))   (2.50)

The RHP-zero limits the achievable bandwidth, and so the crossover region (defined as the frequencies between ω_c and ω_180) will be at about 0.5 rad/s. We require the system to have one integrator (type 1 system), and therefore a reasonable approach is to let the loop transfer function have a slope of −1 at low frequencies, and then to roll off with a higher slope at frequencies beyond 0.5 rad/s. The controller gain K was selected to get reasonable stability margins (PM and GM).
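The integrator rule can be illustrated by evaluating the final value theorem numerically at a small s. The loop L(s) = (s+1)/s below is a hypothetical type-1 example chosen only for illustration, not one of the designs in the text.

```python
# Final-value-theorem check of the system-type rule: with n_I integrators
# in L(s), a reference r(s) = 1/s^{n_r} gives zero steady-state error when
# n_I >= n_r.  Hypothetical loop: L(s) = (s+1)/s, so n_I = 1, Lhat(0) = 1.
def e_ss(n_r, s=1e-9):
    L = (s + 1) / s
    S = 1.0 / (1.0 + L)            # sensitivity function
    return s * S * s ** (-n_r)     # approximates lim_{s->0} s*S(s)*r(s)

step_error = e_ss(1)   # n_r = 1 (step): n_I >= n_r, error -> 0
ramp_error = e_ss(2)   # n_r = 2 (ramp): n_I < n_r, finite offset 1/Lhat(0)
```

The ramp case gives a non-zero offset of 1/L̂(0) = 1, matching the rule that a single integrator cannot track a ramp without steady-state error.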
The frequency response (Bode plots) of L(s) in (2.50) is shown in Figure 2.16 for K = 0.05. The phase of L is −90° at low frequencies, and at ω = 0.5 rad/s the additional phase contribution from the all-pass term (−2s + 1)/(2s + 1) in (2.50) is −90°, so for stability we need ω_c < 0.5 rad/s. The choice K = 0.05 yields ω_c = 0.15 rad/s and ω_180 = 0.43 rad/s, with margins GM = 2.92 and PM = 54°, and peak values M_S = 1.75 and M_T = 1.11.

Figure 2.16: Frequency response of L(s) in (2.50) for loop-shaping design with K = 0.05

The controller corresponding to the loop shape in (2.50) is

K(s) = K (10s + 1)(5s + 1) / (s(2s + 1)(0.33s + 1)),   K = 0.05   (2.51)

The controller has zeros at the locations of the plant poles. This is desired in this case because we do not want the slope of the loop shape to drop at the break frequencies 1/10 = 0.1 rad/s and 1/5 = 0.2 rad/s just before crossover. The asymptotic slope of |L| is −1 up to 3 rad/s, where it changes to −2.

The corresponding time response is shown in Figure 2.17. It is seen to be much better than the responses with either the simple PI-controller in Figure 2.7 or with the P-controller in Figure 2.5. Figure 2.17 also shows that the magnitude of the input signal remains less than about 1.

Figure 2.17: Response to step in reference for loop-shaping design
This means that the controller gain is not too large at high frequencies. The magnitude Bode plot for the controller (2.51) is shown in Figure 2.18. It is interesting to note that in the crossover region around ω = 0.5 rad/s the controller gain is quite constant, around 1 in magnitude, which is similar to the "best" gain found using a P-controller (see Figure 2.5).

Figure 2.18: Magnitude Bode plot of controller (2.51) for loop-shaping design

Limitations imposed by RHP-zeros and time delays

Based on the above loop-shaping arguments we can now examine how the presence of delays and RHP-zeros limits the achievable control performance. We have already argued that if we want the loop shape to have a slope of −1 around crossover (ω_c), with preferably a steeper slope before and after crossover, then the phase lag of L at ω_c will necessarily be at least −90°, even when there are no RHP-zeros or delays. Therefore, if we assume that for performance and robustness we want a phase margin of about 35° or more, then the additional phase contribution from any delays and RHP-zeros at frequency ω_c cannot exceed about −55°.

First consider a time delay θ. It yields an additional phase contribution of −θω, which at frequency ω = 1/θ is −1 rad = −57° (which is more than −55°). Thus, for acceptable control performance we need ω_c < 1/θ, approximately.

Next consider a real RHP-zero at s = z. To avoid an increase in slope caused by this zero we place a pole at s = −z, such that the loop transfer function contains the term (−s + z)/(s + z), the form of which is referred to as all-pass since its magnitude equals 1 at all frequencies. The phase contribution from the all-pass term at ω = z/2 is −2 arctan(0.5) = −53° (which is close to −55°), so for acceptable control performance we need ω_c < z/2, approximately.
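The two rules of thumb can be made concrete with a short sketch; θ and z below are hypothetical values chosen only for illustration.

```python
import numpy as np

# Extra phase lag at a candidate crossover frequency from a time delay
# (exp(-theta*s) contributes -omega*theta) and from a real RHP-zero at
# s = z (the all-pass term (-s+z)/(s+z) contributes -2*arctan(omega/z)).
theta, z = 0.5, 2.0                        # hypothetical delay and RHP-zero

wc_delay = 1.0 / theta                     # rule of thumb: wc < 1/theta
lag_delay = np.degrees(wc_delay * theta)   # = 1 rad ~ 57 deg at wc = 1/theta

wc_zero = z / 2.0                          # rule of thumb: wc < z/2
lag_zero = np.degrees(2 * np.arctan(wc_zero / z))   # ~ 53 deg at wc = z/2
```

Both lags land near the −55° budget derived above, which is precisely why the two bandwidth limits take the form they do.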
2.6.3 Inverse-based controller design

In Example 2.6, we made sure that L(s) contained the RHP-zero of G(s), but otherwise the specified L(s) was independent of G(s). This suggests the following possible approach for a minimum-phase plant (i.e. one with no RHP-zeros or time delays): select a loop shape which has a slope of −1 throughout the frequency range, namely

L(s) = ω_c / s   (2.52)

where ω_c is the desired gain crossover frequency. This loop shape yields a phase margin of 90° and an infinite gain margin, since the phase of L(jω) never reaches −180°. The controller corresponding to (2.52) is

K(s) = (ω_c / s) G^{-1}(s)   (2.53)

That is, the controller inverts the plant and adds an integrator (1/s). This is an old idea, and is also the essential part of the internal model control (IMC) design procedure (Morari and Zafiriou, 1989), which has proved successful in many applications. However, there are at least two good reasons why this inverse-based controller may not be a good choice:

1. The controller will not be realizable if G(s) has a pole excess of two or larger, and may in any case yield large input signals. These problems may be partly fixed by adding high-frequency dynamics to the controller.
2. The loop shape resulting from (2.52) and (2.53) is not generally desirable, unless the references and disturbances affect the outputs as steps. This is illustrated by the following example.

We now introduce our second SISO example control problem, in which disturbance rejection is an important objective in addition to command tracking.
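A small sketch of the first claim: for L = ω_c/s the gain crosses 1 exactly at ω_c, the phase is −90° at every frequency, so PM = 90° and the phase never reaches −180° (infinite GM). The value ω_c = 10 rad/s is arbitrary.

```python
import numpy as np

# Inverse-based loop shape L(s) = wc/s: constant -90 deg phase, so the
# phase margin is 180 - 90 = 90 deg and the gain margin is infinite.
wc = 10.0
w = np.logspace(-2, 3, 500)
L = wc / (1j * w)

gain_at_wc = abs(wc / (1j * wc))                      # = 1, crossover at wc
pm_deg = 180.0 + np.angle(wc / (1j * wc), deg=True)   # phase margin = 90
```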
Example 2.7 Disturbance process. Consider the disturbance process described by

G(s) = 200 / ((10s + 1)(0.05s + 1)²),   G_d(s) = 100 / (10s + 1)   (2.54)

with time in seconds (a block diagram is shown in Figure 2.20). We assume that the plant has been appropriately scaled as outlined in Section 1.4.

Problem formulation. The control objectives are:

1. Command tracking: The rise time (to reach 90% of the final value) should be less than 0.3 s and the overshoot should be less than 5%.
2. Disturbance rejection: The output in response to a unit step disturbance should remain within the range [−1, 1] at all times, and it should return to 0 as quickly as possible (|y(t)| should at least be less than 0.1 after 3 s).
3. Input constraints: u(t) should remain within the range [−1, 1] at all times to avoid input saturation (this is easily satisfied for most designs).
Analysis. Since G_d(0) = 100, without control the output response to a unit disturbance (d = 1) will be 100 times larger than what is deemed to be acceptable. The magnitude |G_d(jω)| is lower at higher frequencies, but it remains larger than 1 up to ω_d ≈ 10 rad/s (where |G_d(jω_d)| = 1). Thus, feedback control is needed up to frequency ω_d, so we need ω_c to be approximately equal to 10 rad/s for disturbance rejection. On the other hand, we do not want ω_c to be larger than necessary, because of sensitivity to noise and stability problems associated with high-gain feedback. We will thus aim at a design with ω_c ≈ 10 rad/s.

Inverse-based controller design. We will consider the inverse-based design as given by (2.52) and (2.53) with ω_c = 10. Since G(s) has a pole excess of three, this yields an unrealizable controller, and therefore we choose to approximate the plant term (0.05s + 1)² by (0.1s + 1), and then in the controller we let this term be effective over one decade, i.e. we use (0.1s + 1)/(0.01s + 1) to give the realizable design

K_0(s) = (ω_c/s) · (10s + 1)/200 · (0.1s + 1)/(0.01s + 1),   L_0(s) = (ω_c/s) · (0.1s + 1)/((0.05s + 1)²(0.01s + 1)),   ω_c = 10   (2.55)

The response to a step reference is excellent, as shown in Figure 2.19(a). The rise time is about 0.16 s and there is no overshoot, so the specifications are more than satisfied. However, the response to a step disturbance (Figure 2.19(b)) is much too sluggish. Although the output stays within the range [−1, 1], it is still 0.75 at t = 3 s (whereas it should be less than 0.1). Because of the integral action the output does eventually return to zero, but it does not drop below 0.1 until after 23 s.

Figure 2.19: Responses with "inverse-based" controller K_0(s) for the disturbance process: (a) tracking response, (b) disturbance response

The above example illustrates that the simple inverse-based design method, where L has a slope of about N = −1 at all frequencies, does not always yield satisfactory designs. In the example, reference tracking was excellent, but disturbance rejection was poor. The objective of the next section is to understand why the disturbance response was so poor, and to propose a more desirable loop shape for disturbance rejection.
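The frequency ω_d ≈ 10 rad/s quoted in the analysis can be verified directly from G_d(s) = 100/(10s + 1):

```python
import numpy as np

# Find where |Gd(jw)| drops to 1:  100/sqrt(100 w^2 + 1) = 1
# =>  wd = sqrt(100^2 - 1)/10, which is very nearly 10 rad/s.
def Gd(w):
    return 100.0 / (10j * w + 1)

wd = np.sqrt(100.0 ** 2 - 1.0) / 10.0   # analytic crossing frequency
gain_at_wd = abs(Gd(wd))                # equals 1 by construction
```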
2.6.4 Loop shaping for disturbance rejection

At the outset we assume that the disturbance has been scaled such that at each frequency |d(ω)| ≤ 1, and the main control objective is to achieve |e(ω)| < 1. With feedback control we have e = y = S G_d d, so to achieve |e(ω)| < 1 for |d(ω)| = 1 (the worst-case disturbance) we require |S G_d(jω)| < 1 for all ω, or equivalently,

|1 + L| > |G_d|   for all ω   (2.56)

At frequencies where |G_d| > 1, this is approximately the same as requiring |L| > |G_d|. However, in order to minimize the input signals, thereby reducing the sensitivity to noise and avoiding stability problems, we do not want to use larger loop gains than necessary (at least at frequencies around crossover). A reasonable initial loop shape L_min(s) is then one that just satisfies the condition

|L_min| ≈ |G_d|   (2.57)

where the subscript min signifies that L_min is the smallest loop gain to satisfy |e(ω)| < 1. Since L = GK, the corresponding controller with the minimum gain satisfies

|K_min| ≈ |G^{-1} G_d|   (2.58)

In addition, to improve low-frequency performance (e.g. to get zero steady-state offset), we often add integral action at low frequencies, and use

|K| ≈ |(s + ω_I)/s| · |G^{-1} G_d|   (2.59)

This can be summarized as follows:

- For disturbance rejection a good choice for the controller is one which contains the dynamics (G_d) of the disturbance and inverts the dynamics (G) of the inputs (at least at frequencies just before crossover).
- For disturbances entering directly at the plant input (which is a common situation in practice, often referred to as a load disturbance), we have G_d = G and we get |K_min| = 1, so a simple proportional controller with unit gain yields a good trade-off between output performance and input usage.
- For disturbances entering directly at the plant output, we have G_d = 1 and we get |K_min| = |G^{-1}|, so an inverse-based design provides the best trade-off between performance (disturbance rejection) and minimum use of feedback.
- Notice that a reference change may be viewed as a disturbance directly affecting the output. This follows from the scaled model in Chapter 1, from which a maximum reference change r = R may be viewed as a disturbance d = 1 with G_d(s) = −R, where R is usually a constant. This explains why selecting K to be like G^{-1} (an inverse-based controller) yields good responses to step changes in the reference.
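A minimal numerical check of (2.57) and (2.58) for the disturbance process: with K_min = G^{-1} G_d the loop gain |L_min| equals |G_d| at every frequency, and K_min reduces to (0.05s + 1)²/2, which is roughly the constant 0.5 below 10 rad/s.

```python
import numpy as np

# Minimal loop shape for disturbance rejection: |L_min| = |Gd|, obtained
# with K_min = G^{-1} Gd, checked on a grid for the example process.
w = np.logspace(-2, 2, 200)
s = 1j * w
G = 200.0 / ((10 * s + 1) * (0.05 * s + 1) ** 2)
Gd = 100.0 / (10 * s + 1)

Kmin = Gd / G            # = (0.05s+1)^2 / 2, ~0.5 well below 20 rad/s
Lmin = G * Kmin          # |L_min| equals |Gd| by construction
```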
In addition to satisfying |L| ≈ |G_d| (eq. 2.57) at frequencies around crossover, the desired loop shape L(s) may be modified as follows:

1. Around crossover, make the slope N of |L| about −1. This is to achieve good transient behaviour with acceptable gain and phase margins.
2. Increase the loop gain at low frequencies as illustrated in (2.59) to improve the settling time and to reduce the steady-state offset. Adding an integrator yields zero steady-state offset to a step disturbance.
3. Let L(s) roll off faster at higher frequencies (beyond the bandwidth) in order to reduce the use of manipulated inputs, to make the controller realizable and to reduce the effects of noise.

The above requirements are concerned with the magnitude, |L(jω)|. In addition, the dynamics (phase) of L(s) must be selected such that the closed-loop system is stable. When selecting L(s) to satisfy |L| ≈ |G_d|, one should replace G_d(s) by the corresponding minimum-phase transfer function with the same magnitude. However, any time delays or RHP-zeros in G(s) must be included in L = GK, because RHP pole-zero cancellations between G(s) and K(s) yield internal instability; see Chapter 4. On the other hand, time delays and RHP-zeros in G_d(s) should not be included in L(s), as this would impose undesirable limitations on feedback.

Remark. The idea of including a disturbance model in the controller is well known and is more rigorously presented in, for example, research on the internal model principle (Wonham, 1974), or the internal model control design for disturbances (Morari and Zafiriou, 1989). However, our development is simple, and sufficient for gaining the insight needed for later chapters.

Figure 2.20: Block diagram representation of the disturbance process in (2.54)

Consider again the plant described by (2.54). The plant can be represented by the block diagram in Figure 2.20, and we see that the disturbance enters at the plant input in the sense that G and G_d share the same dominating dynamics, as represented by the term 200/(10s + 1).
Example 2.59) to improve the settling time and to reduce the steadystate offset. Initial design. Remark.Ä ËËÁ Ä ÇÆÌÊÇÄ 1. to make the controller realizable and to reduce the effects of noise. 1974). Consider again the plant described by (2.20.
5 1 0. To increase the phase margin and improve the transient response we supplement the controller with “derivative action” by multiplying Ã¾ ´×µ by a leadlag term which is effective over one decade starting at ¾¼ rad/s: ×·¾ × (2. so we want Á to be large.21(b) satisﬁes the requirement that Ý ´Øµ ¼ ½ at time Ø ¿ s. Ä½ ´ µ . and the response (Ý½ ´Øµ) to a step change in the disturbance are shown in Figure 2. where Á is the frequency up to which the term is effective (the × asymptotic value of the term is 1 for Á ). Also.5 0 0 Ä½ Ä½ Ä¾ −2 Ý½ Ý¾ Ý¿ 1 Time [sec] (b) Disturbance responses 10 0 Ä¿ 10 2 10 −2 10 10 0 2 3 Frequency [rad/s] (a) Loop gains Figure 2.21. and for Ø ¿ s the response to a step change in the disturbance is not much different from that with the more complicated inversebased controller Ã¼ ´×µ of (2. This simple controller works surprisingly well.60) ½ 10 4 Ä¾ Ä¿ Magnitude 10 2 1.21: Loop shapes and disturbance responses for controllers disturbance process Ã½ . but Ý ´Øµ exceeds ½ for a short time. A reasonable value is Á ¼Æ ½½Æ at . but since the term ´¼ ¼ × · ½µ¾ only comes into effect at ½ ¼ ¼ ½¼ rad/s.62) .59).e. the response is slightly oscillatory as might be expected since the phase margin is only ¿½Æ and the peak values for Ë and Ì are ÅË ¾ ¾ and ÅÌ ½ .¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ½ ÄÑ Ò ¼ ´¼ ¼ × · ½µ¾ . see (2. than poles).61) Ã¿ ´×µ ¼ ×·¾ ¼ ¼ ×·½ × ¼ ¼¼ × · ½ (2. so Á should not be too large. Ã¾ and Ã¿ for the Step 2. shown earlier in Figure 2. we may replace it by a which is beyond the desired gain crossover frequency constant gain of ½ resulting in a proportional controller The magnitude of the corresponding loop transfer function. but in order to maintain an acceptable phase Æ for controller Ã½ ) the term should not add too much negative phase at margin (which is ¼ ¾ for which frequency . Highfrequency correction. In our case the phase contribution from ×· Á is Ö Ø Ò´½ ¼ ¾µ × ½¼ rad/s.19. Step 3. 
To get integral action we multiply the controller by the term ×· Á .55) as ½ as Ø . For performance we want large gains at low frequencies. This controller is not proper (i. it has more zeros Ã ´×µ ¾¼ rad/s. there is no integral action and Ý½ ´Øµ Ã½ ´×µ ¼ (2. More gain at low frequency. However. so we select the following controller Ã ¾ ´ ×µ ¼ The resulting disturbance response (Ý¾ ) shown in Figure 2.
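The margins of the final design can be spot-checked numerically. The sketch below evaluates L = G K_3 on a dense frequency grid and recomputes the gain crossover frequency and phase margin (grid-based, so the values are approximate).

```python
import numpy as np

# Gain crossover and phase margin of L = G*K3 for the disturbance process.
def L3(w):
    s = 1j * w
    G = 200.0 / ((10 * s + 1) * (0.05 * s + 1) ** 2)
    K3 = 0.5 * (s + 2) / s * (0.05 * s + 1) / (0.005 * s + 1)
    return G * K3

w = np.logspace(-1, 3, 20000)
mag = np.abs(L3(w))
wc = w[np.argmin(np.abs(mag - 1.0))]          # gain crossover, ~9 rad/s
pm = 180.0 + np.angle(L3(wc), deg=True)       # phase margin, ~51 deg
```

The grid search confirms a crossover near the target of 10 rad/s and a phase margin close to the 51° quoted above.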
2.6.5 Two degrees-of-freedom design

For reference tracking we typically want the controller to look like (1/s) G^{-1}, see (2.53), whereas for disturbance rejection we want the controller to look like (1/s) G^{-1} G_d, see (2.59). We cannot achieve both of these simultaneously with a single (feedback) controller.

Table 2.2 summarizes the results for the four loop-shaping designs: the inverse-based design K_0 for reference tracking and the three designs K_1, K_2 and K_3 for disturbance rejection. Although controller K_3 satisfies the requirements for disturbance rejection, it is not satisfactory for reference tracking: the overshoot is 24%, which is significantly higher than the maximum value of 5%. On the other hand, the inverse-based controller K_0 inverts the term 1/(10s + 1), which is also in the disturbance model, and therefore yields a very sluggish response to disturbances (the output is still 0.75 at t = 3 s, whereas it should be less than 0.1).

Table 2.2: Alternative loop-shaping designs for the disturbance process
(ω_c in rad/s; y(t=3) is the output 3 s after a unit step disturbance)

           GM      PM      ω_c     M_S     M_T    y(t=3)
  Spec.     -       -       -       -       -     < 0.1
  K_0      9.95   72.9°    11.4    1.34    1.00    0.75
  K_1      4.04   44.9°     8.48   1.83    1.33    0.99
  K_2      3.24   30.7°     8.65   2.28    1.9     0.001
  K_3     19.6    50.9°     9.27   1.43    1.23    0.001

In summary, for this process none of the controller designs meets all the objectives for both reference tracking and disturbance rejection. The solution is to use a two degrees-of-freedom controller, in which the reference signal r and the output measurement y_m are independently treated by the controller, rather than operating on their difference r − y_m as in a one degree-of-freedom controller. There exist several alternative implementations of a two degrees-of-freedom controller. The most general form is shown in Figure 1.3(b) on page 12, where the controller has two inputs (r and y_m) and one output (u). However, the controller is often split into two separate blocks as shown in Figure 2.22, where K_y denotes the feedback part of the controller and K_r a reference prefilter. The feedback controller K_y is used to reduce the effect of uncertainty (disturbances and model error), whereas the prefilter K_r shapes the commands r to improve tracking performance. In general, it is optimal to design the combined two degrees-of-freedom controller K in one step; however, in practice K_y is often designed first for disturbance rejection, and then K_r is designed to improve reference tracking.
This is the approach taken here.

Figure 2.22: Two degrees-of-freedom controller

Let T = L(1 + L)^{-1} (with L = G K_y) denote the complementary sensitivity function for the feedback system. Then for a one degree-of-freedom controller y = T r, whereas for a two degrees-of-freedom controller y = T K_r r. If the desired transfer function for reference tracking (often denoted the reference model) is T_ref, then the corresponding ideal reference prefilter K_r satisfies T K_r = T_ref, or

K_r(s) = T^{-1}(s) T_ref(s)   (2.63)

Thus, in theory we may design K_r(s) to get any desired tracking response T_ref(s). However, in practice it is not so simple, because the resulting K_r(s) may be unstable (if G(s) has RHP-zeros) or unrealizable, and also T K_r ≠ T_ref if T(s) is not known exactly. A convenient practical choice of prefilter is the lead-lag network

K_r(s) = (τ_lead s + 1)/(τ_lag s + 1)   (2.64)

Here we select τ_lead > τ_lag if we want to speed up the response, and τ_lead < τ_lag if we want to slow down the response. If one does not require fast reference tracking, a simple lag is often used (with τ_lead = 0), which is the case in many process control applications.

Example 2.9 Two degrees-of-freedom design for the disturbance process. In Example 2.8 we designed a loop-shaping controller K_3(s) for the plant in (2.54) which gave good performance with respect to disturbances. However, the command tracking performance was not quite acceptable, as is shown by y_3 in Figure 2.23: the overshoot is 24%, which is significantly higher than the maximum value of 5%. To improve upon this we can use a two degrees-of-freedom controller with K_y = K_3, and we design K_r(s) based on (2.63) with reference model T_ref = 1/(0.1s + 1) (a first-order response with no overshoot). To get a low-order K_r(s), we may either use the actual T(s) and then use a low-order approximation of K_r(s), or we may start with a low-order approximation of T(s). We will do the latter. From the step response y_3 in Figure 2.23 we approximate the response by two parts: a fast response with time constant 0.1 s and gain 1.5, and a slower response with time constant 0.5 s and gain −0.5 (the sum of the gains is 1). Thus we use

T(s) ≈ 1.5/(0.1s + 1) − 0.5/(0.5s + 1) = (0.7s + 1)/((0.1s + 1)(0.5s + 1))

from which (2.63) yields K_r(s) = (0.5s + 1)/(0.7s + 1). Following closed-loop simulations we modified this slightly to arrive at the design

K_r3(s) = (0.5s + 1)/(0.65s + 1) · 1/(0.03s + 1)   (2.65)

where the term 1/(0.03s + 1) was included to avoid the initial peaking of the input signal u(t) above 1. The tracking response with this two degrees-of-freedom controller is shown in Figure 2.23. The rise time is 0.25 s, which is better than the requirement of 0.3 s, and the overshoot is only 2.3%, which is better than the requirement of 5%. The disturbance response is the same as curve y_3 in Figure 2.21. In conclusion, we are able to satisfy all specifications using a two degrees-of-freedom controller.

Figure 2.23: Tracking responses with the one degree-of-freedom controller (K_3) and the two degrees-of-freedom controller (K_3, K_r3) for the disturbance process

Loop shaping applied to a flexible structure

The following example shows how the loop-shaping procedure for disturbance rejection can be used to design a one degree-of-freedom controller for a very different kind of plant.

Example 2.10 Loop shaping for a flexible structure. Consider the following model of a flexible structure with a disturbance occurring at the plant input:

G(s) = G_d(s) = 2.5 s (s² + 1) / ((s² + 0.5²)(s² + 2²))   (2.66)

From the Bode magnitude plot in Figure 2.24(a) we see that |G_d(jω)| ≥ 1 around the resonance frequencies of 0.5 and 2 rad/s, so control is needed at these frequencies. The dashed line in Figure 2.24(b) shows the open-loop response to a unit step disturbance.
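A quick check (a sketch, using the low-order approximation of T rather than the exact closed loop) that the prefilter (2.65) indeed reshapes the tracking response towards T_ref = 1/(0.1s + 1):

```python
import numpy as np

# Compare T_approx * Kr3 with the reference model Tr over the bandwidth.
w = np.logspace(-1, 1, 100)          # 0.1 to 10 rad/s
s = 1j * w
T_approx = (0.7 * s + 1) / ((0.1 * s + 1) * (0.5 * s + 1))
Kr3 = (0.5 * s + 1) / ((0.65 * s + 1) * (0.03 * s + 1))
Tr = 1.0 / (0.1 * s + 1)

mismatch = np.max(np.abs(T_approx * Kr3 - Tr) / np.abs(Tr))
```

The worst-case relative mismatch stays below about 30% over this range (dominated by the extra 1/(0.03s + 1) roll-off near 10 rad/s) and is negligible at low frequency, consistent with the small residual overshoot observed in the simulation.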
The output is seen to cycle between 2 and −2 (outside the allowed range −1 to 1), which confirms that control is needed. From (2.58), a controller which meets the specification |y(ω)| < 1 for |d(ω)| = 1 is given by |K_min| = |G^{-1} G_d| = 1. Indeed, the controller

K(s) = 1   (2.67)

turns out to be a good choice, as is verified by the closed-loop disturbance response (solid line) in Figure 2.24(b): the output goes up to about 0.5 and then returns to zero. The fact that the choice L(s) = G(s) gives closed-loop stability is not immediately obvious, since |G| has four gain crossover frequencies. However, instability cannot occur, because the plant is "passive", with a phase that never falls below −180° at any frequency.

Figure 2.24: Flexible structure in (2.66): (a) magnitude plot of |G|, (b) open-loop and closed-loop disturbance responses with K = 1

2.6.6 Conclusions on loop shaping

The loop-shaping procedure outlined and illustrated by the examples above is well suited for relatively simple problems, as might arise for stable plants where L(s) crosses the negative real axis only once. Although the procedure may be extended to more complicated systems, the effort required by the engineer is considerably greater. In particular, it may be very difficult to achieve stability.

Fortunately, there exist alternative methods where the burden on the engineer is much less. One such approach is the Glover-McFarlane H∞ loop-shaping procedure, which is discussed in detail in Chapter 9. It is essentially a two-step procedure, where in the first step the engineer, as outlined in this section, decides on a loop shape, |L| (denoted the "shaped plant" G_s), and in the second step an optimization provides the necessary phase corrections to get a stable and robust design. The method is applied to the disturbance process in Example 9.3 on page 387.

Another design philosophy, which deals directly with shaping both the gain and the phase of L(s), is the quantitative feedback theory (QFT) of Horowitz (1991).
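Returning to Example 2.10, the passivity argument can be checked numerically: on the imaginary axis G(jω) is purely imaginary, so its phase is ±90° and never approaches −180°, even though |G| crosses 1 several times.

```python
import numpy as np

# G(jw) for the flexible structure is purely imaginary (real part zero),
# so the Nyquist plot stays on the imaginary axis and cannot encircle -1.
w = np.linspace(0.01, 5.0, 1000)
# skip grid points that fall (numerically) on the resonance poles:
w = w[np.all([np.abs(w - 0.5) > 1e-3, np.abs(w - 2.0) > 1e-3], axis=0)]
s = 1j * w
G = 2.5 * s * (s ** 2 + 1) / ((s ** 2 + 0.25) * (s ** 2 + 4))

real_part = np.max(np.abs(G.real))    # ~0: phase is always +/-90 deg
crossings = int(np.sum(np.diff(np.sign(np.abs(G) - 1.0)) != 0))  # several
```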
2.7 Shaping closed-loop transfer functions

Specifications directly on the open-loop transfer function L = GK, as in the loop-shaping design procedures of the previous section, make the design process transparent, as it is clear how changes in L(s) affect the controller K(s) and vice versa. An apparent problem with this approach, however, is that it does not consider directly the closed-loop transfer functions, such as S and T, which determine the final response. The following approximations apply:

|L(jω)| ≫ 1  ⟹  S ≈ L^{-1},  T ≈ 1
|L(jω)| ≪ 1  ⟹  S ≈ 1,  T ≈ L

but in the crossover region, where |L(jω)| is close to 1, one cannot infer anything about S and T from the magnitude of the loop shape. For example, |S| and |T| may experience large peaks if L(jω) is close to −1; the phase of L(jω) is crucial in this frequency range.

An alternative design strategy is to directly shape the magnitudes of closed-loop transfer functions, such as S(s) and T(s). Such a design strategy can be formulated as an H∞ optimal control problem, thus automating the actual controller design and leaving the engineer with the task of selecting reasonable bounds ("weights") on the desired closed-loop transfer functions. The topic is discussed further in Section 3.6 and in more detail in Chapter 9. Before explaining how this may be done in practice, we discuss the terms H∞ and H₂.

2.7.1 The terms H∞ and H₂

The H∞ norm of a stable scalar transfer function f(s) is simply the peak value of |f(jω)| as a function of frequency, that is,

‖f(s)‖∞ ≜ max_ω |f(jω)|   (2.68)

Remark. Strictly speaking, we should here replace "max" (the maximum value) by "sup" (the supremum, the least upper bound). This is because the maximum may only be approached as ω → ∞, and may therefore not actually be achieved. However, for engineering purposes there is no difference between "sup" and "max".

The terms H∞ norm and H∞ control are intimidating at first, and a name conveying the engineering significance of H∞ would have been better. After all, we are simply talking about a design method which aims to press down the peak(s) of one or more selected transfer functions. However, the term H∞, which is purely mathematical, has now established itself in the control community. To make the term less forbidding, an explanation of its background may help. First, the symbol ∞ comes from the fact that the maximum magnitude over frequency may be written as

max_ω |f(jω)| = lim_{p→∞} ( ∫_{−∞}^{∞} |f(jω)|^p dω )^{1/p}

Essentially, by raising |f| to an infinite power we pick out its peak value. Next, the symbol H stands for "Hardy space", and H∞ in the context of this book is the set of transfer functions with bounded ∞-norm, which is simply the set of stable and proper transfer functions.
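As a concrete illustration of (2.68), the sketch below computes the H∞ norm of the lightly damped system f(s) = 1/(s² + 0.1s + 1) (an arbitrary example, not from the text) by gridding the frequency axis; with damping ratio ζ = 0.05 the peak is 1/(2ζ√(1−ζ²)) ≈ 10.

```python
import numpy as np

# H-infinity norm as the peak of |f(jw)| over frequency, found on a grid.
w = np.logspace(-1, 1, 100000)
f = 1.0 / ((1j * w) ** 2 + 0.1 * (1j * w) + 1)
hinf = np.max(np.abs(f))        # resonance peak near w = 1, value ~10
```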
117). the symbol À ¾ stands for the Hardy space of transfer functions with bounded ¾norm. The À¾ norm of a strictly proper stable scalar transfer function is deﬁned as The factor ½ ¾ is introduced to get consistency with the 2norm of the corresponding impulse response. see (4. The subscript È stands for performance since Ë is mainly used as a performance indicator. and the performance requirement becomes . Shape of Ë over selected frequency ranges. which is the set of stable and strictly proper transfer functions. Mathematically.707 from below). which is simply the set of stable and proper transfer functions. an explanation of its background may help. 2. 5. we need not worry about its phase. these speciﬁcations may be captured by an upper bound. Minimum bandwidth frequency £ (deﬁned as the frequency where Ë ´ crosses 0. System type. Maximum tracking error at selected frequencies. where ÛÈ ´×µ is a weight selected by the designer. typically we select Å ¾. Typical speciﬁcations in terms of Ë include: 1. the symbol À stands for “Hardy space”. the symbol ½ comes from the fact that the maximum magnitude over frequency may be written as Ñ Ü ´ µ ÐÑ Ô ½ ½ ½ ´ µÔ ½ Ô Essentially. it is sufﬁcient to consider just its magnitude Ë . that is. or alternatively the maximum steadystate tracking error. and also introduces a margin of robustness. whereas its À½ norm is ﬁnite. An example of a semiproper transfer function (with an inﬁnite À ¾ norm) is the sensitivity function Ë ´Á · Ã µ ½ . Similarly. ½ Û È ´×µ . . 4. Ô ½ ½ ´×µ ¾ ¸ ´ µ¾ ¾ ½ ½¾ (2. First. Next. and À ½ in the context of this book is the set of transfer functions with bounded ½norm.69) 2. Note that the À ¾ norm of a semiproper (or biproper) transfer function (where Ð Ñ × ½ ´×µ is a nonzero constant) is inﬁnite. the sensitivity function Ë is a very good indicator of closedloop performance. Ë ´ µ ½ Å . by raising to an inﬁnite power we pick out its peak value. 3.ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ less forbidding. 
The main advantage of considering S is that, because we ideally want S small, it is sufficient to consider just its magnitude |S|; that is, we need not worry about its phase. The performance requirement then becomes

   |S(jω)| < 1/|wP(jω)|  ∀ω      (2.70)

   ⇔  |wP(jω) S(jω)| < 1  ∀ω  ⇔  ‖wP S‖∞ < 1      (2.71)

The last equivalence follows from the definition of the H∞ norm; in words, the performance requirement is that the H∞ norm of the weighted sensitivity, wP S, must be less than one. In Figure 2.25(a) an example is shown where the sensitivity, |S|, exceeds its upper bound, 1/|wP|, at some frequencies. The resulting weighted sensitivity, |wP S|, therefore exceeds 1 at the same frequencies, as is illustrated in Figure 2.25(b). Note that we usually do not use a log-scale for the magnitude when plotting weighted transfer functions such as |wP S|.

[Figure 2.25: Case where |S| exceeds its bound 1/|wP|: (a) sensitivity |S| and performance weight 1/|wP|; (b) weighted sensitivity |wP S|.]
Weight selection. An asymptotic plot of a typical upper bound, 1/|wP|, is shown in Figure 2.26. The weight illustrated may be represented by

   wP(s) = (s/M + ωB*) / (s + ωB* A)      (2.72)

and we see that 1/|wP(jω)| (the upper bound on |S|) is equal to A at low frequencies, is equal to M at high frequencies, and the asymptote crosses 1 at the frequency ωB*, which is approximately the bandwidth requirement.

[Figure 2.26: Inverse of performance weight. Exact and asymptotic plot of 1/|wP(jω)| in (2.72).]

The insights gained in the previous section on loop-shaping design are very useful for selecting weights. For example, for disturbance rejection we must satisfy |S(jω)| < 1/|Gd(jω)| at all frequencies (assuming the variables have been scaled to be less than 1 in magnitude). It then follows that a good initial choice for the performance weight is to let |wP(jω)| look like |Gd(jω)| at frequencies where |Gd| > 1. In some cases, in order to improve performance, we may want a steeper slope for L (and S) below the bandwidth, and then a higher-order weight may be selected. A weight which asks for a slope of −2 for L in a range of frequencies below crossover is

   wP(s) = (s/M^(1/2) + ωB*)² / (s + ωB* A^(1/2))²      (2.73)

Remark. For the weight (2.72), the loop shape L = ωB*/s yields an S which exactly matches the bound (2.71) at frequencies below the bandwidth and easily satisfies (by a factor M) the bound at higher frequencies.

Exercise 2.4 Make an asymptotic plot of 1/|wP| in (2.73) and compare with the asymptotic plot of 1/|wP| in (2.72).
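The low- and high-frequency limits of the bound 1/|wP| in (2.72) can be verified numerically. This is an illustrative Python/NumPy sketch (not the book's MATLAB), with arbitrarily chosen parameter values M = 2, ωB* = 1, A = 10⁻⁴.

```python
import numpy as np

M, wb, A = 2.0, 1.0, 1e-4   # peak bound, bandwidth and steady-state error parameters

def wP(s):
    # performance weight (2.72): wP(s) = (s/M + wb) / (s + wb*A)
    return (s / M + wb) / (s + wb * A)

def bound(w):
    # upper bound 1/|wP(jw)| on the sensitivity magnitude |S(jw)|
    return 1.0 / abs(wP(1j * w))

lo = bound(1e-8)    # low-frequency limit -> A
hi = bound(1e8)     # high-frequency limit -> M
mid = bound(wb)     # near the bandwidth frequency the bound is close to 1

print(lo, hi, mid)
```

Evaluating the bound at a very low and a very high frequency recovers the two asymptote levels A and M, confirming the shape described in the text.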
2.7.3 Stacked requirements: mixed sensitivity

The specification ‖wP S‖∞ < 1 puts a lower bound on the bandwidth, but not an upper one, and nor does it allow us to specify the roll-off of L(s) above the bandwidth. To do this one can make demands on another closed-loop transfer function, for example, on the complementary sensitivity T = I − S = GK(I + GK)⁻¹. For instance, one might specify an upper bound 1/|wT| on the magnitude of T to make sure that L rolls off sufficiently fast at high frequencies. Also, to achieve robustness or to restrict the magnitude of the input signals, u = KS(r − Gd d), one may place an upper bound, 1/|wu|, on the magnitude of KS. To combine these "mixed sensitivity" specifications, a "stacking approach" is usually used, resulting in the following overall specification:

   ‖N‖∞ = max_ω σ̄(N(jω)) < 1,   N = [wP S; wT T; wu KS]      (2.74)

We here use the maximum singular value, σ̄(N(jω)), to measure the size of the matrix N at each frequency. For SISO systems, N is a vector and σ̄(N) is the usual Euclidean vector norm:

   σ̄(N) = √( |wP S|² + |wT T|² + |wu KS|² )      (2.75)

After selecting the form of N and the weights, the H∞ optimal controller is obtained by solving the problem

   min_K ‖N(K)‖∞      (2.76)

where K is a stabilizing controller. A good tutorial introduction to H∞ control is given by Kwakernaak (1993).

Remark 1 The stacking procedure is selected for mathematical convenience as it does not allow us to exactly specify the bounds on the individual transfer functions as described above. For example, assume that φ1(K) and φ2(K) are two functions of K (which might represent φ1(K) = |wP S| and φ2(K) = |wT T|) and that we want to achieve

   φ1 < 1 and φ2 < 1      (2.77)

This is similar to, but not quite the same as, the stacked requirement

   √( φ1² + φ2² ) < 1      (2.78)

Objectives (2.77) and (2.78) are very similar when either φ1 or φ2 is small, but in the "worst" case when φ1 = φ2, we get from (2.78) that φ1 ≤ 0.707 and φ2 ≤ 0.707. That is, there is a possible "error" in each specification equal to at most a factor √2 ≈ 3 dB. In general, with n stacked requirements the resulting error is at most √n. This inaccuracy in the specifications is something we are probably willing to sacrifice in the interests of mathematical convenience. In any case, the specifications are in general rather rough, and are effectively knobs for the engineer to select and adjust until a satisfactory design is reached.
Remark 2 Let γ0 = min_K ‖N(K)‖∞ denote the optimal H∞ norm. An important property of H∞ optimal controllers is that they yield a flat frequency response, that is, σ̄(N(jω)) = γ0 at all frequencies. The practical implication is that, except for at most a factor √n, the transfer functions resulting from a solution to (2.76) will be close to γ0 times the bounds selected by the designer. This gives the designer a mechanism for directly shaping the magnitudes of σ̄(S), σ̄(T), σ̄(KS), and so on.

Example 2.11 H∞ mixed sensitivity design for the disturbance process. Consider again the plant in (2.54), and consider an H∞ mixed sensitivity S/KS design in which

   N = [wP S; wu KS]      (2.79)

Appropriate scaling of the plant has been performed so that the inputs should be about 1 or less in magnitude, and we therefore select a simple input weight wu = 1. The performance weight is chosen, in the form of (2.72), as

   wP1(s) = (s/M + ωB*) / (s + ωB* A),   M = 1.5, ωB* = 10, A = 10⁻⁴      (2.80)

A value of A = 0 would ask for integral action in the controller, but to get a stable weight and to prevent numerical problems in the algorithm used to synthesize the controller, we have moved the integrator slightly by using a small non-zero value for A. This has no practical significance in terms of control performance. The value ωB* = 10 has been selected to achieve approximately the desired crossover frequency of 10 rad/s. The H∞ problem is solved with the μ-toolbox in MATLAB using the commands in Table 2.3.

Table 2.3: MATLAB program to synthesize an H∞ controller

   % Uses the Mu-toolbox
   G=nd2sys(1,conv([10 1],conv([0.05 1],[0.05 1])),200); % Plant is G.
   % Weights.
   M=1.5; wb=10; A=1.e-4;
   Wp = nd2sys([1/M wb], [1 wb*A]); Wu = 1;
   %
   % Generalized plant P is found with function sysic:
   % (see Section 3.8 for more details)
   %
   systemnames = 'G Wp Wu';
   inputvar = '[ r(1); u(1)]';
   outputvar = '[Wp; Wu; r-G]';
   input_to_G = '[u]';
   input_to_Wp = '[r-G]';
   input_to_Wu = '[u]';
   sysoutname = 'P';
   cleanupsysic = 'yes';
   sysic;
   %
   % Find H-infinity optimal controller:
   %
   nmeas=1; nu=1; gmn=0.5; gmx=20; tol=0.001;
   [khinf,ghinf,gopt] = hinfsyn(P,nmeas,nu,gmn,gmx,tol);

For this problem, we achieved an optimal H∞ norm of 1.37, so the weighted sensitivity requirements are not quite satisfied (see design 1 in Figure 2.27, where the curve for |S1| is slightly above that for 1/|wP1|). Nevertheless, the design seems good with ‖S‖∞ = MS = 1.30, ‖T‖∞ = MT = 1.0, PM = 71.2° and ωc = 7.22 rad/s, and the tracking response is very good, as shown by curve y1 in Figure 2.28(a). (The design is actually very similar to the loop-shaping design for references, K0, which was an inverse-based controller.) However, we see from curve y1 in Figure 2.28(b) that the disturbance response is very sluggish. If disturbance rejection is the main concern, then from our earlier discussion in Section 2.6.4 this motivates the need for a performance weight that specifies higher gains at low frequencies. We therefore try

   wP2(s) = (s/M^(1/2) + ωB*)² / (s + ωB* A^(1/2))²,   M = 1.5, ωB* = 10, A = 10⁻⁴      (2.81)

The inverse of this weight is shown in Figure 2.27, and is seen from the dashed line to cross 1 in magnitude at about the same frequency as weight wP1, but it specifies tighter control at lower frequencies. With the weight wP2, we get a design with an optimal H∞ norm of 2.21, yielding MS = 1.63, MT = 1.43, PM = 43.3° and ωc = 11.34 rad/s.

[Figure 2.27: Inverse of performance weight (dashed line) and resulting sensitivity function (solid line) for two H∞ designs (1 and 2) for the disturbance process.]

[Figure 2.28: Closed-loop step responses for two alternative H∞ designs (1 and 2) for the disturbance process: (a) tracking response; (b) disturbance response.]

The disturbance response is very good, whereas the tracking response has a somewhat high overshoot; see curve y2 in Figure 2.28(a). (The design is actually very similar to the loop-shaping design for disturbances, K3, as was discussed in Example 2.9.)

In conclusion, design 1 is best for reference tracking whereas design 2 is best for disturbance rejection. To get a design with both good tracking and good disturbance rejection we need a two degrees-of-freedom controller.

2.8 Conclusion

The main purpose of this chapter has been to present the classical ideas and techniques of feedback control. We have concentrated on SISO systems so that insights into the necessary design trade-offs, and the design approaches available, can be properly developed before MIMO systems are considered. We also introduced the H∞ problem based on weighted sensitivity, for which typical performance weights are given in (2.72) and (2.73).
. that is. The need for a careful analysis of the effect of uncertainty in MIMO systems is motivated by two examples. Directions are relevant for vectors and matrices. we introduce the reader to multiinput multioutput (MIMO) systems. Ý½ Ý¾ ÝÐ .1 Introduction We consider a multiinput multioutput (MIMO) plant with Ñ inputs and Ð outputs. then this will generally affect all the outputs. An exception to this is Bode’s stability condition which has no generalization in terms of singular values. We discuss the singular value decomposition (SVD). the basic transfer function model is Ý ´×µ vector. and we will see that most SISO results involving the absolute value (magnitude) may be generalized to multivariable systems by considering the maximum singular value. where Ý is an Ð ¢ ½ Thus. most of the ideas and techniques presented in the previous chapter on SISO systems may be extended to MIMO systems. If we make a change in the ﬁrst input. despite the complicating factor of directions. Ù is an Ñ ¢ ½ vector and ´×µ is an Ð ¢ Ñ transfer function matrix. Finally we describe a general control conﬁguration that can be used to formulate control problems.¿ ÁÆÌÊÇ Í ÌÁÇÆ ÌÇ ÅÍÄÌÁÎ ÊÁ Ä ÇÆÌÊÇÄ In this chapter. and multivariable righthalf plane (RHP) zeros. This is related to the fact that it is difﬁcult to ﬁnd a good measure of phase for MIMO transfer functions. Many of these important topics are considered again in greater detail later in the book. Ù ½ . However. and so on. there is interaction between the inputs and outputs. multivariable control. The singular value decomposition (SVD) provides a useful way of quantifying multivariable directionality. 3. but not for scalars. The main difference between a scalar (SISO) system and a MIMO system is the presence of directions in the latter. ´×µÙ´×µ. The chapter should be accessible to readers who have attended a classical SISO control course. Ù¾ only affects Ý¾ . A noninteracting plant would result if Ù ½ only affects Ý½ .
The chapter is organized as follows. We start by presenting some rules for determining multivariable transfer functions from block diagrams. Although most of the formulas for scalar systems apply, we must exercise some care since matrix multiplication is not commutative; that is, in general GK ≠ KG. Then we introduce the singular value decomposition and show how it may be used to study directions in multivariable systems. We also give a brief introduction to multivariable control and decoupling, and study two example plants, each 2 × 2, which demonstrate that the simple gain and phase margins used for SISO systems do not generalize easily to MIMO systems. We then consider a simple plant with a multivariable RHP-zero and show how the effect of this zero may be shifted from one output channel to another. After this we discuss robustness, and finally we consider a general control problem formulation.

At this point, you may find it useful to browse through Appendix A, where some important mathematical tools are described. Exercises to test your understanding of this mathematics are given at the end of this chapter.

3.2 Transfer functions for MIMO systems

[Figure 3.1: Block diagrams for the cascade rule and the feedback rule: (a) cascade system; (b) positive feedback system.]

The following three rules are useful when evaluating transfer functions for MIMO systems:

1. Cascade rule. For the cascade (series) interconnection of G1 and G2 in Figure 3.1(a), the overall transfer function matrix is G = G2 G1.

Remark. The order of the transfer function matrices in G = G2 G1 (from left to right) is the reverse of the order in which they appear in the block diagram of Figure 3.1(a) (from left to right). This has led some authors to use block diagrams in which the inputs enter at the right hand side. However, in this case the order of the transfer function blocks in a feedback path will be reversed compared with their order in the formula, so no fundamental benefit is obtained.

2. Feedback rule. With reference to the positive feedback system in Figure 3.1(b), we have v = (I − L)⁻¹ u, where L = G2 G1 is the transfer function around the loop.

3. Push-through rule. For matrices of appropriate dimensions

   G1 (I − G2 G1)⁻¹ = (I − G1 G2)⁻¹ G1      (3.1)

Proof: Equation (3.1) is verified by pre-multiplying both sides by (I − G1 G2) and post-multiplying both sides by (I − G2 G1).

Exercise 3.1 Derive the cascade and feedback rules.

The cascade and feedback rules can be combined into the following MIMO rule for evaluating closed-loop transfer functions from block diagrams.

MIMO Rule: Start from the output and write down the blocks as you meet them when moving backwards (against the signal flow), taking the most direct path towards the input. If you exit from a feedback loop then include a term (I − L)⁻¹ for positive feedback (or (I + L)⁻¹ for negative feedback), where L is the transfer function around that loop (evaluated against the signal flow starting at the point of exit from the loop). Care should be taken when applying this rule to systems with nested loops. For such systems it is probably safer to write down the signal equations and eliminate internal variables to get the transfer function of interest.

The rule is best understood by considering an example.

Example 3.1 The transfer function for the block diagram in Figure 3.2 is given by

   z = ( P11 + P12 K (I − P22 K)⁻¹ P21 ) w      (3.2)

[Figure 3.2: Block diagram corresponding to (3.2).]

To derive this from the MIMO rule above we start at the output z and move backwards towards w. There are two branches, one of which gives the term P11 directly. In the other branch we move backwards and meet P12 and then K. We then exit from a feedback loop and get a term (I − L)⁻¹ (positive feedback) with L = P22 K, and finally we meet P21.
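The push-through rule (3.1) holds also for non-square blocks, which is worth checking numerically. The following illustrative Python/NumPy sketch uses random matrices of compatible but different shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
G1 = rng.standard_normal((3, 2))   # 3x2 block
G2 = rng.standard_normal((2, 3))   # 2x3 block

I2, I3 = np.eye(2), np.eye(3)
lhs = G1 @ np.linalg.inv(I2 - G2 @ G1)   # G1 (I - G2 G1)^-1, inverse of a 2x2
rhs = np.linalg.inv(I3 - G1 @ G2) @ G1   # (I - G1 G2)^-1 G1, inverse of a 3x3
print(np.allclose(lhs, rhs))             # True
```

Note that the identity matrices on the two sides have different dimensions (2 × 2 versus 3 × 3), exactly as the "appropriate dimensions" caveat in the rule requires.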
Exercise 3.2 Use the MIMO rule to derive the transfer functions from u to y and from u to z in Figure 3.1(b). Use the push-through rule to rewrite the two transfer functions.

Exercise 3.3 Use the MIMO rule to show that (2.18) corresponds to the negative feedback system in Figure 2.4.

Negative feedback control systems

[Figure 3.3: Conventional negative feedback control system, with reference r, controller K, plant G, disturbances d1 and d2, and output y.]

For the negative feedback system in Figure 3.3, we define L to be the loop transfer function as seen when breaking the loop at the output of the plant. Thus, for the case where the loop consists of a plant G and a feedback controller K we have

   L = GK      (3.3)

The sensitivity and complementary sensitivity are then defined as

   S ≜ (I + L)⁻¹,   T ≜ I − S = L(I + L)⁻¹      (3.4)

In Figure 3.3, T is the transfer function from r to y, and S is the transfer function from d1 to y; also see equations (2.16) to (2.20), which apply to MIMO systems. S and T are sometimes called the output sensitivity and output complementary sensitivity, respectively, and to make this explicit one may use the notation LO ≜ L, SO ≜ S and TO ≜ T. This is to distinguish them from the corresponding transfer functions evaluated at the input to the plant. We define LI to be the loop transfer function as seen when breaking the loop at the input to the plant with negative feedback assumed. In Figure 3.3

   LI = KG      (3.5)

The input sensitivity and input complementary sensitivity functions are then defined as

   SI ≜ (I + LI)⁻¹,   TI ≜ I − SI = LI (I + LI)⁻¹      (3.6)

In Figure 3.3, TI is the transfer function from d2 to u. Of course, for SISO systems LI = L, SI = S, and TI = T.

Exercise 3.4 In Figure 3.3, what transfer function does SI represent? Evaluate the transfer functions from d1 and d2 to r − y.
The following relationships are useful:

   (I + L)⁻¹ + L(I + L)⁻¹ = S + T = I      (3.7)

   G(I + KG)⁻¹ = (I + GK)⁻¹ G      (3.8)

   GK(I + GK)⁻¹ = G(I + KG)⁻¹ K = (I + GK)⁻¹ GK      (3.9)

   T = L(I + L)⁻¹ = (I + L⁻¹)⁻¹      (3.10)

Note that the matrices G and K in (3.7)–(3.10) need not be square, whereas L = GK is square. (3.7) follows trivially by factorizing out the term (I + L)⁻¹ from the right. (3.8) says that G SI = S G and follows from the push-through rule. (3.9) also follows from the push-through rule. (3.10) can only be used to compute the state-space realization of T if that of L⁻¹ exists, so L must be semi-proper with D ≠ 0 (which is rarely the case in practice). On the other hand, since L is square, we can always compute the frequency response of L(jω)⁻¹ (except at frequencies where L(s) has jω-axis poles). To assist in remembering (3.7)–(3.10) note that G comes first (because the transfer function is evaluated at the output) and then G and K alternate in sequence. A given transfer matrix never occurs twice in sequence. For example, the closed-loop transfer function G(I + GK)⁻¹ does not exist (unless G is repeated in the block diagram, but then these G's would actually represent two different physical entities).

Remark 1 The above identities are clearly useful when deriving transfer functions analytically, but they are also useful for numerical calculations involving state-space realizations, e.g. L(s) = C(sI − A)⁻¹B + D. For example, assume we have been given a state-space realization for L = GK with n states (so A is an n × n matrix) and we want to find the state-space realization of T. Then we can first form S = (I + L)⁻¹ with n states, and then multiply it by L to obtain T = LS with 2n states. However, a minimal realization of T has only n states. This may be obtained numerically using model reduction, but it is preferable to find it directly using T = I − S, see (3.7).

Remark 2 Note also that the right identity in (3.10) can be derived from the identity M1⁻¹ M2⁻¹ = (M2 M1)⁻¹.

Remark 3 In Appendix A.6 we present some factorizations of the sensitivity function which will be useful in later applications. For example, (A.139) relates the sensitivity of a perturbed plant, S′ = (I + G′K)⁻¹, to that of the nominal plant, S = (I + GK)⁻¹. We have

   S′ = S (I + EO T)⁻¹,   EO ≜ (G′ − G) G⁻¹      (3.11)

where EO is an output multiplicative perturbation representing the difference between G and G′, and T is the nominal complementary sensitivity function.
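The identities relating S, T and their input counterparts can likewise be verified numerically at a single frequency. This is an illustrative Python/NumPy sketch with a random plant and controller (taken square here only for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2))     # plant evaluated at some frequency
K = rng.standard_normal((2, 2))     # controller evaluated at the same frequency
I = np.eye(2)

S = np.linalg.inv(I + G @ K)        # sensitivity, S = (I + GK)^-1
T = G @ K @ S                       # complementary sensitivity, T = GK(I + GK)^-1
SI = np.linalg.inv(I + K @ G)       # input sensitivity, SI = (I + KG)^-1

print(np.allclose(S + T, I))        # S + T = I
print(np.allclose(G @ SI, S @ G))   # G(I + KG)^-1 = (I + GK)^-1 G
```

Both checks print True: the first is the factorization of the identity, the second the push-through rule applied at the plant input.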
3.3 Multivariable frequency response analysis

The transfer function G(s) is a function of the Laplace variable s and can be used to represent a dynamic system. However, if we fix s = s0 then we may view G(s0) simply as a complex matrix, which can be analyzed using standard tools in matrix algebra. In particular, the choice s0 = jω is of interest since G(jω) represents the response to a sinusoidal signal of frequency ω.

3.3.1 Obtaining the frequency response from G(s)

[Figure 3.4: System G(s) with input d and output y.]

The frequency domain is ideal for studying directions in multivariable systems at any given frequency. Consider the system G(s) in Figure 3.4 with input d(s) and output y(s):

   y(s) = G(s) d(s)      (3.12)

(We here denote the input by d rather than by u to avoid confusion with the matrix U used below in the singular value decomposition.) In Section 2.1 we considered the sinusoidal response of scalar systems. These results may be directly generalized to multivariable systems by considering the elements gij of the matrix G. To be more specific, apply to input channel j a scalar sinusoidal signal given by

   dj(t) = dj0 sin(ωt + αj)      (3.13)

This input signal is persistent, that is, it has been applied since t = −∞. Then the corresponding persistent output signal in channel i is also a sinusoid with the same frequency

   yi(t) = yi0 sin(ωt + βi)      (3.14)

where the amplification (gain) and phase shift may be obtained from the complex number gij(jω) as follows:

   yi0 / dj0 = |gij(jω)|,   βi − αj = ∠gij(jω)      (3.15)

In phasor notation, see (2.7) and (2.9), we may compactly represent the sinusoidal time response described in (3.13)–(3.15) by

   yi(ω) = gij(jω) dj(ω)      (3.16)

where

   dj(ω) = dj0 e^(jαj),   yi(ω) = yi0 e^(jβi)      (3.17)

Here the use of ω (and not jω) as the argument of dj(ω) and yi(ω) implies that these are complex numbers, representing at each frequency ω the magnitude and phase of the sinusoidal signals in (3.13) and (3.14).

The overall response to simultaneous input signals of the same frequency in several input channels is, by the superposition principle for linear systems, equal to the sum of the individual responses, and we have from (3.16)

   yi(ω) = gi1(jω) d1(ω) + gi2(jω) d2(ω) + ··· = Σj gij(jω) dj(ω)      (3.18)

or in matrix form

   y(ω) = G(jω) d(ω)      (3.19)

where

   d(ω) = [d1(ω); d2(ω); ...; dm(ω)]  and  y(ω) = [y1(ω); y2(ω); ...; yl(ω)]      (3.20)

represent the vectors of sinusoidal input and output signals.

Example 3.2 Consider a 2 × 2 multivariable system where we simultaneously apply sinusoidal signals of the same frequency ω to the two input channels:

   d(t) = [d10 sin(ωt + α1); d20 sin(ωt + α2)]      (3.21)

The corresponding output signal is

   y(t) = [y10 sin(ωt + β1); y20 sin(ωt + β2)]      (3.22)

which can be computed by multiplying the complex matrix G(jω) by the complex vector d(ω):

   y(ω) = G(jω) d(ω),   y(ω) = [y10 e^(jβ1); y20 e^(jβ2)],   d(ω) = [d10 e^(jα1); d20 e^(jα2)]      (3.23)
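The phasor calculation y(ω) = G(jω) d(ω) is a single complex matrix–vector product. The following illustrative Python/NumPy sketch uses a real gain matrix and made-up input amplitudes and phases; the output amplitudes and phase shifts are then just the modulus and argument of the resulting complex vector:

```python
import numpy as np

G = np.array([[5.0, 4.0],
              [3.0, 2.0]])               # G(jw) at the frequency of interest (real here)

d10, a1 = 1.0, 0.0                        # amplitude and phase of input channel 1
d20, a2 = 1.0, np.pi                      # channel 2: same amplitude, 180 deg out of phase
d = np.array([d10 * np.exp(1j * a1),      # input phasor vector d(w)
              d20 * np.exp(1j * a2)])

y = G @ d                                 # output phasor vector y(w) = G(jw) d(w)
y0 = np.abs(y)                            # output sinusoid amplitudes y_i0
beta = np.angle(y)                        # output phase shifts beta_i
print(y0, beta)
```

With the two inputs in opposition, the output amplitudes collapse from the in-phase values [9, 5] down to [1, 1]: a first glimpse of the direction-dependence of MIMO gain discussed next.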
3.3.2 Directions in multivariable systems

For a SISO system, y = Gd, the gain at a given frequency is simply

   |y(ω)| / |d(ω)| = |G(jω) d(ω)| / |d(ω)| = |G(jω)|

The gain depends on the frequency ω, but since the system is linear it is independent of the input magnitude |d(ω)|.

Things are not quite as simple for MIMO systems, where the input and output signals are both vectors, and we need to "sum up" the magnitudes of the elements in each vector by use of some norm, as discussed in Appendix A.5.1. If we select the vector 2-norm, the usual measure of length, then at a given frequency ω the magnitude of the vector input signal is

   ‖d(ω)‖₂ = √( Σj |dj(ω)|² ) = √( d10² + d20² + ··· )      (3.24)

and the magnitude of the vector output signal is

   ‖y(ω)‖₂ = √( Σi |yi(ω)|² ) = √( y10² + y20² + ··· )      (3.25)

The gain of the system G(s) for a particular input signal d(ω) is then given by the ratio

   ‖y(ω)‖₂ / ‖d(ω)‖₂ = ‖G(jω) d(ω)‖₂ / ‖d(ω)‖₂ = √( y10² + y20² + ··· ) / √( d10² + d20² + ··· )      (3.26)

Again the gain depends on the frequency ω, and again it is independent of the input magnitude ‖d(ω)‖₂. However, for a MIMO system there are additional degrees of freedom and the gain depends also on the direction of the input d.

Example 3.3 For a system with two inputs, d = [d10; d20], the gain is in general different for the following five inputs:

   d1 = [1; 0], d2 = [0; 1], d3 = [0.707; 0.707], d4 = [0.707; −0.707], d5 = [0.6; −0.8]

(which all have the same magnitude ‖d‖₂ = 1 but are in different directions). For the 2 × 2 system

   G1 = [5 4; 3 2]      (3.29)

(a constant matrix) we compute for the five inputs dj the following output vectors:

   y1 = [5; 3], y2 = [4; 2], y3 = [6.36; 3.54], y4 = [0.707; 0.707], y5 = [−0.2; 0.2]

and the 2-norms of these five outputs (i.e. the gains for the five inputs) are

   ‖y1‖₂ = 5.83, ‖y2‖₂ = 4.47, ‖y3‖₂ = 7.30, ‖y4‖₂ = 1.00, ‖y5‖₂ = 0.28

This dependency of the gain on the input direction is illustrated graphically in Figure 3.5, where we have used the ratio d20/d10 as an independent variable to represent the input direction. We see that, depending on the ratio d20/d10, the gain varies between 0.27 and 7.34. These are the minimum and maximum singular values of G1, respectively.

[Figure 3.5: Gain ‖G1 d‖₂/‖d‖₂ as a function of d20/d10 for G1 in (3.29), varying between σ(G1) = 0.27 and σ̄(G1) = 7.34.]

The maximum gain as the direction of the input is varied is the maximum singular value of G,

   max_{d≠0} ‖Gd‖₂/‖d‖₂ = max_{‖d‖₂=1} ‖Gd‖₂ = σ̄(G)      (3.27)

whereas the minimum gain is the minimum singular value of G,

   min_{d≠0} ‖Gd‖₂/‖d‖₂ = min_{‖d‖₂=1} ‖Gd‖₂ = σ(G)      (3.28)

3.3.3 Eigenvalues are a poor measure of gain

Before discussing in more detail the singular value decomposition, we want to demonstrate that the magnitudes of the eigenvalues of a transfer function matrix, e.g. |λi(G(jω))|, do not provide a useful means of generalizing the SISO gain, |G(jω)|. First of all, eigenvalues can only be computed for square systems, and even then they can be very misleading. To see this, consider the system y = Gd with

   G = [0 100; 0 0]      (3.30)

which has both eigenvalues λi equal to zero. However, with an input vector d = [0 1]ᵀ we get an output vector y = [100 0]ᵀ. To conclude from the eigenvalues that the system gain is zero is clearly misleading.

The "problem" is that the eigenvalues measure the gain for the special case when the inputs and the outputs are in the same direction, namely in the direction of the eigenvectors. To see this, let ti be an eigenvector of G and consider an input d = ti. Then the output is y = G ti = λi ti, where λi is the corresponding eigenvalue. We get

   ‖y‖ / ‖d‖ = ‖λi ti‖ / ‖ti‖ = |λi|

so |λi| measures the gain in the direction ti. This may be useful for stability analysis, but not for performance.
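The point is easy to check numerically: the matrix in (3.30) has both eigenvalues at zero, yet its gain for the input [0, 1]ᵀ — and its maximum singular value — is 100. An illustrative Python/NumPy sketch:

```python
import numpy as np

G = np.array([[0.0, 100.0],
              [0.0,   0.0]])

eigs = np.linalg.eigvals(G)                        # both eigenvalues are 0
gain = np.linalg.norm(G @ np.array([0.0, 1.0]))    # output 2-norm for d = [0, 1]^T
sv = np.linalg.svd(G, compute_uv=False)            # singular values [100, 0]
print(eigs, gain, sv)
```

The input [0, 1]ᵀ and the resulting output [100, 0]ᵀ point in different directions, which is exactly the case the eigenvalues cannot capture, while the singular values do.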
To find useful generalizations of |G| for the case when G is a matrix, we need the concept of a matrix norm, denoted ‖G‖. Two important properties which must be satisfied for a matrix norm are the triangle inequality

   ‖G1 + G2‖ ≤ ‖G1‖ + ‖G2‖      (3.31)

and the multiplicative property

   ‖G1 G2‖ ≤ ‖G1‖ · ‖G2‖      (3.32)

As we may expect, the magnitude of the largest eigenvalue, ρ(G) ≜ |λmax(G)| (the spectral radius), does not satisfy the properties of a matrix norm; also see (A.115).

In Appendix A.5.2 we introduce several matrix norms, such as the Frobenius norm ‖G‖F, the sum norm ‖G‖sum, the maximum column sum ‖G‖i1, the maximum row sum ‖G‖i∞, and the maximum singular value ‖G‖i2 = σ̄(G) (the latter three norms are induced by a vector norm, e.g. this is the reason for the subscript i). We will use all of these norms in this book, each depending on the situation. However, in this chapter we will mainly use the induced 2-norm, σ̄(G). Notice that σ̄(G) = 100 for the matrix in (3.30).

Exercise 3.5 Compute the spectral radius and the five matrix norms mentioned above for the matrices in (3.29) and (3.30).

3.3.4 Singular value decomposition

The singular value decomposition (SVD) is defined in Appendix A.3. Here we are interested in its physical interpretation when applied to the frequency response of a MIMO system G(s) with m inputs and l outputs.

Consider a fixed frequency ω where G(jω) is a constant l × m complex matrix, and denote G(jω) by G for simplicity. Any matrix G may be decomposed into its singular value decomposition, and we write

   G = U Σ Vᴴ      (3.33)

where

Σ is an l × m matrix with k = min{l, m} non-negative singular values, σi, arranged in descending order along its main diagonal; the other entries are zero. The singular values are the positive square roots of the eigenvalues of GᴴG, where Gᴴ is the complex conjugate transpose of G:

   σi(G) = √( λi(GᴴG) )      (3.34)

U is an l × l unitary matrix of output singular vectors, ui,
V is an m × m unitary matrix of input singular vectors, vi.

In general, the singular values must be computed numerically. For 2 × 2 matrices, however, analytic expressions for the singular values are given in (A.36).

This is illustrated by the SVD of a real 2 × 2 matrix, which can always be written in the form

   G = [cos θ1  −sin θ1; sin θ1  cos θ1] [σ1 0; 0 σ2] [cos θ2  ∓sin θ2; ±sin θ2  ±cos θ2]ᵀ      (3.35)

where the angles θ1 and θ2 depend on the given matrix. From (3.35) we see that the matrices U and V involve rotations and that their columns are orthonormal.

Caution. It is standard notation to use the symbol U to denote the matrix of output singular vectors. This is unfortunate as it is also standard notation to use u (lower case) to represent the input signal. The reader should be careful not to confuse these two.

Input and output directions. The column vectors of U, denoted ui, represent the output directions of the plant. They are orthogonal and of unit length (orthonormal), that is

   ‖ui‖₂ = √( |ui1|² + |ui2|² + ··· + |uil|² ) = 1      (3.36)

   uiᴴ ui = 1,   uiᴴ uj = 0 for i ≠ j      (3.37)

Likewise, the column vectors of V, denoted vi, are orthogonal and of unit length, and represent the input directions. These input and output directions are related through the singular values. To see this, note that since V is unitary we have VᴴV = I, so (3.33) may be written as GV = UΣ, which for column i becomes

   G vi = σi ui      (3.38)

where vi and ui are vectors, whereas σi is a scalar. That is, if we consider an input in the direction vi, then the output is in the direction ui. Furthermore, since ‖vi‖₂ = 1 and ‖ui‖₂ = 1, we see that the i'th singular value σi gives directly the gain of the matrix G in this direction. In other words

   σi(G) = ‖G vi‖₂ = ‖G vi‖₂ / ‖vi‖₂      (3.39)

Some advantages of the SVD over the eigenvalue decomposition for analyzing gains and directionality of multivariable plants are:

1. The singular values give better information about the gains of the plant.
2. The plant directions obtained from the SVD are orthogonal.
3. The SVD also applies directly to non-square plants.
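These properties — orthonormal singular vectors, G vi = σi ui, and direction-dependent gain — can be confirmed in a few lines for the matrix G1 = [5 4; 3 2] from (3.29). An illustrative Python/NumPy sketch:

```python
import numpy as np

G1 = np.array([[5.0, 4.0],
               [3.0, 2.0]])

U, s, Vh = np.linalg.svd(G1)       # G1 = U @ diag(s) @ Vh
v1, v2 = Vh[0], Vh[1]              # input singular vectors (rows of Vh)
u1 = U[:, 0]                       # output direction paired with the strong input direction

print(s)                                     # ~ [7.343, 0.272]
print(np.allclose(G1 @ v1, s[0] * u1))       # G v1 = sigma1 u1 -> True
print(np.linalg.norm(G1 @ v2))               # gain in the weak direction, ~ 0.272
```

Note that NumPy (like MATLAB) may return the singular vectors with signs flipped relative to a hand calculation; as discussed in the text, the pair (ui, vi) is only unique up to a common scalar of magnitude 1.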
¼ Ð). As already stated. Consider again the system (3. we can always choose a nonzero input .29) with ½ The singular value decomposition of ¿ ¾ ¿¿ ¼ ¼ ¼¾ ¾ ßÞ (3.43) The vector Ú corresponds to the input direction with largest ampliﬁcation. The directions involving Ú and Ù are sometimes referred to as the “strongest”.3 continue.272 is for an input in the direction Ú ¼ ¼ Ú ¼ ¼ ¼ . it can be shown that the largest gain for any input direction is equal to the maximum singular value ´ µ ½´ µ Ñ Ü ¼ ¾ ¾ Ú½ ¾ Ú½ ¾ (3.5) until the “least important”. The smallest gain of ½ For a “fat” matrix Note that the directions as given by the singular vectors are not unique.42) Ù Ú½ ÚÙ Ù and Ú Ú Ú.343 is for an input in the direction 0. ¼ . The next most important directions are associated with Ú ¾ and Ù¾ . This conﬁrms ﬁndings on page 70.ÅÍÄÌÁÎ ÊÁ 3. “highgain” or “most important” directions.41) . for any vector we have that ´ µ Deﬁne Ù½ ¾ ¾ Ù Ú ´ µ (3. Ä Ã ÇÆÌÊÇÄ Maximum and minimum singular values. in the sense that the elements in each pair of vectors (Ù . Thus. The SVD also applies directly to nonsquare plants.44) ½ is ¦ ½ ¼ ¾ ¼ ¼ ¼ ¼ ¼ ¾ ßÞ ¼ ¼ ¼ ¼ ¼ ¼ ÎÀ ßÞ À Í The largest gain of 7.3. and so on (see Appendix A. and Ù is the corresponding output direction in which the inputs are most effective. Then it follows that Ù (3. Ú ) may be multiplied by a complex in the nullspace of with more inputs than outputs (Ñ such that . Example 3. “weak” or “lowgain” directions which are associated with Ú and Ù.40) and that the smallest gain for any input direction (excluding the “wasted” in the nullspace of when there are more inputs than outputs 1 ) is equal to the minimum singular value ´ µ where ´ µ ÑÒ ¼ ÑÒ Ð Ñ ¾ ¾ Ú ¾ Ú ¾ (3.
Note that the directions given by the singular vectors are not unique, in the sense that the elements in each pair of vectors ($u_i$, $v_i$) may be multiplied by a complex scalar $c$ of magnitude 1 ($|c| = 1$). For example, we may change the sign of the vector $\bar v$ (multiply by $c = -1$) provided we also change the sign of the vector $\bar u$. Indeed, if you use Matlab to compute the SVD of the matrix in (3.44) (g1=[5 4; 3 2]; [u,s,v]=svd(g1)), then you will probably find that the signs differ from those given above.

Since in (3.44) both inputs affect both outputs, we say that the system is interactive. Furthermore, some combinations of the inputs have a strong effect on the outputs, whereas other combinations have a weak effect. This may be quantified by the condition number, the ratio between the gains in the strong and weak directions, which for the system in (3.44) is $7.343/0.272 = 27.0$.

Example 3.4 Shopping cart. Consider a shopping cart (supermarket trolley) with fixed wheels which we may want to move in three directions: forwards, sideways and upwards. This is a simple illustrative example where we can easily figure out the principal directions from experience. The strongest direction, corresponding to the largest singular value, will clearly be the forwards direction. The next direction, corresponding to the second singular value, will be sideways. Finally, the most "difficult" or "weak" direction, corresponding to the smallest singular value, will be upwards (lifting up the cart).

For the shopping cart the gain depends strongly on the input direction, i.e. the plant is ill-conditioned. Control of ill-conditioned plants is sometimes difficult, and the control problem associated with the shopping cart can be described as follows. Assume we want to push the shopping cart sideways (maybe we are blocking someone). This is rather difficult (the plant has low gain in this direction), so a strong force is needed. However, if there is any uncertainty in our knowledge about the direction the cart is pointing, then some of our applied force will be directed forwards (where the plant gain is large) and the cart will suddenly move forward with an undesired large speed. We thus see that the control of an ill-conditioned plant may be especially difficult if there is input uncertainty which can cause the input signal to "spread" from one input direction to another. We will discuss this in more detail later.

Example 3.5 Distillation process. Consider the following steady-state model of a distillation column:

    G = \begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix}    (3.45)

The variables have been scaled as discussed in Section 1.4. Since the elements are much larger than 1 in magnitude, this suggests that there will be no problems with input constraints. However, this is somewhat misleading, as the gain in the low-gain direction (corresponding to the smallest singular value) is actually only just above 1. To see this, consider the SVD of $G$:

    G = \underbrace{\begin{bmatrix} 0.625 & -0.781 \\ 0.781 & 0.625 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 197.2 & 0 \\ 0 & 1.39 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.707 & -0.708 \\ -0.708 & -0.707 \end{bmatrix}^H}_{V^H}    (3.46)

From the first input singular vector, $\bar v = [\,0.707 \;\; -0.708\,]^T$, we see that the gain is 197.2 when we increase one input and decrease the other input by a similar amount. On the other hand, from the second input singular vector, $\underline v = [\,-0.708 \;\; -0.707\,]^T$, we see that if we increase both inputs by the same amount then the gain is only 1.39. The distillation process is therefore ill-conditioned, at least at steady-state, and the condition number is $197.2/1.39 = 141.7$.
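A quick numerical check of (3.46) and of the gains in the two input directions (a sketch assuming NumPy):

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])

U, sigma, Vt = np.linalg.svd(G)
print(np.round(sigma, 1))                 # [197.2   1.4]
print(round(sigma[0] / sigma[1], 1))      # 141.7  (condition number)

# Strong direction: equal-and-opposite inputs; weak direction: equal inputs
print(round(np.linalg.norm(G @ np.array([1, -1]) / np.sqrt(2)), 1))   # 197.2
print(round(np.linalg.norm(G @ np.array([1, 1]) / np.sqrt(2)), 1))    # 1.4
```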
The plant is thus difficult to control in the weak direction: it takes a large control action to move the compositions in different directions, that is, to make both products purer simultaneously. The physics of this example is discussed in more detail below, and later in this chapter we consider a simple controller design for this plant (see the motivating robustness example in Section 3.7.2).

Example 3.6 Physics of the distillation process. The model in (3.45) represents two-point (dual) composition control of a distillation column, where the top composition is to be controlled at $y_D = 0.99$ (output $y_1$) and the bottom composition at $x_B = 0.01$ (output $y_2$), using reflux L (input $u_1$) and boilup V (input $u_2$) as manipulated inputs (see Figure 10.8 on page 434). Note that we have here returned to the convention of using $u_1$ and $u_2$ to denote the manipulated inputs; the output singular vectors will be denoted by $\bar u$ and $\underline u$.

The 1,1-element of the gain matrix $G$ is 87.8. Thus an increase in $u_1$ by 1 (with $u_2$ constant) yields a large steady-state change in $y_1$ of 87.8; that is, the outputs are very sensitive to changes in $u_1$. Similarly, an increase in $u_2$ by 1 (with $u_1$ constant) yields $y_1 = -86.4$, again a very large change, but in the opposite direction of that for the increase in $u_1$. We therefore see that changes in $u_1$ and $u_2$ counteract each other, and if we increase $u_1$ and $u_2$ simultaneously by 1, then the overall steady-state change in $y_1$ is only $87.8 - 86.4 = 1.4$.

Physically, the reason for this small change is that the compositions in the distillation column are only weakly dependent on changes in the internal flows (i.e. simultaneous changes in $u_1$ and $u_2$, which increase the internal flows L and V). On the other hand, the distillation column is very sensitive to changes in the external flows (i.e. increasing $u_1 - u_2 = L - V$). This can be seen from the input singular vector $\bar v = [\,0.707 \;\; -0.708\,]^T$ associated with the largest singular value. The reason is that the external distillate flow $D$ (which varies as $V - L$) has to be about equal to the amount of light component in the feed, and even a small imbalance leads to large changes in the product compositions. This is a general property of distillation columns where both products are of high purity. Conversely, from the smallest singular value, $\underline\sigma(G) = 1.39$, obtained for inputs in the direction $\underline v = [\,-0.708 \;\; -0.707\,]^T$, we see that it takes a large control action to move the outputs in different directions, that is, to change $y_1 - y_2$. This makes sense from a physical point of view.

Non-square plants. The SVD is also useful for non-square plants. For example, consider a plant with 2 inputs and 3 outputs. In this case the third output singular vector, $u_3$, tells us in which output direction the plant cannot be controlled. Similarly, for a plant with more inputs than outputs, the additional input singular vectors tell us in which directions the input will have no effect.

Example 3.7 Consider a non-square system with 3 inputs and 2 outputs, $G_2$, with singular value decomposition $G_2 = U\Sigma V^H$. There are only $\min\{l, m\} = 2$ non-zero singular values, and from the definition of the minimum singular value we have $\underline\sigma(G_2) = \sigma_2(G_2)$. Note, however, that an input in the direction of the third input singular vector, $v_3$, is in the null space of $G_2$ and yields an output $y = G_2 v_3 = 0$.

Exercise 3.6 For a system with $m$ inputs and 1 output, what is the interpretation of the singular values and the associated input directions ($V$)? What is $U$ in this case?

For dynamic systems the singular values and their associated directions vary with frequency, and for control purposes it is usually the frequency range corresponding to the closed-loop bandwidth which is of main interest. The singular values are usually plotted as a function of frequency in a Bode magnitude plot with a log-scale for frequency and magnitude. Typical plots are shown in Figure 3.6.

Figure 3.6: Typical plots of singular values $\bar\sigma(G(j\omega))$ and $\underline\sigma(G(j\omega))$: (a) spinning satellite in (3.76); (b) distillation process in (3.81).

3.3.5 Singular values for performance

So far we have used the SVD primarily to gain insight into the directionality of MIMO systems. But the maximum singular value is also very useful in terms of frequency-domain performance and robustness. We here consider performance.

For SISO systems we earlier found that $|S(j\omega)|$, evaluated as a function of frequency, gives useful information about the effectiveness of feedback control: it is the gain from a sinusoidal reference input (or output disturbance) to the control error, $|e(\omega)| = |S(j\omega)|\,|r(\omega)|$.
For MIMO systems a useful generalization results if we consider the ratio $\|e(\omega)\|_2 / \|r(\omega)\|_2$, where $r$ is the vector of reference inputs, $e$ is the vector of control errors, and $\|\cdot\|_2$ is the vector 2-norm. As explained above, this gain depends on the direction of $r(\omega)$, and from (3.42) it is bounded by the maximum and minimum singular values of $S$:

    \underline\sigma(S(j\omega)) \le \frac{\|e(\omega)\|_2}{\|r(\omega)\|_2} \le \bar\sigma(S(j\omega))    (3.47)

In terms of performance, it is reasonable to require that this gain remains small for any direction of $r(\omega)$, including the "worst-case" direction, which gives a gain of $\bar\sigma(S(j\omega))$. Let $1/|w_P(j\omega)|$ (the inverse of the performance weight) represent the maximum allowed magnitude of $\|e\|_2 / \|r\|_2$ at each frequency. This results in the following performance requirement:

    \bar\sigma(S(j\omega)) < \frac{1}{|w_P(j\omega)|}, \ \forall\omega \iff \bar\sigma(w_P S) < 1, \ \forall\omega \iff \|w_P S\|_\infty < 1    (3.48)

where the $H_\infty$ norm (see also page 55) is defined as the peak of the maximum singular value of the frequency response,

    \|M(s)\|_\infty \triangleq \max_\omega \bar\sigma(M(j\omega))    (3.49)

Typical performance weights $w_P(s)$ are given in Section 2.7.2, which should be studied carefully.

The singular values of $S(j\omega)$ may be plotted as functions of frequency, as illustrated later in Figure 3.10(a). Typically, they are small at low frequencies where feedback is effective, and they approach 1 at high frequencies because any real system is strictly proper:

    \omega \to \infty: \quad L(j\omega) \to 0 \ \Rightarrow \ S(j\omega) \to I    (3.50)

The maximum singular value, $\bar\sigma(S(j\omega))$, usually has a peak larger than 1 around the crossover frequencies. This peak is undesirable, but it is unavoidable for real systems.

As for SISO systems, we define the bandwidth as the frequency up to which feedback is effective. For MIMO systems the bandwidth depends on direction, and we have a bandwidth region between a lower frequency where the maximum singular value, $\bar\sigma(S)$, reaches 0.7 (the low-gain or worst-case direction), and a higher frequency where the minimum singular value, $\underline\sigma(S)$, reaches 0.7 (the high-gain or best direction). If we want to associate a single bandwidth frequency with a multivariable system, then we consider the worst-case (low-gain) direction and define:

- Bandwidth, $\omega_B$: frequency where $\bar\sigma(S)$ crosses $1/\sqrt{2} \approx 0.7$ from below.

It is then understood that the bandwidth is at least $\omega_B$ for any direction of the input (reference or disturbance) signal. Since $S = (I + L)^{-1}$, (A.52) yields

    \underline\sigma(L) - 1 \le \frac{1}{\bar\sigma(S)} \le \underline\sigma(L) + 1    (3.51)

Thus at frequencies where feedback is effective (namely where $\underline\sigma(L) \gg 1$) we have $\bar\sigma(S) \approx 1/\underline\sigma(L)$, and at the bandwidth frequency (where $1/\bar\sigma(S(j\omega_B)) = \sqrt{2} = 1.41$) we have that $\underline\sigma(L(j\omega_B))$ is between 0.41 and 2.41. The bandwidth is therefore approximately where $\underline\sigma(L)$ crosses 1. Finally, at higher frequencies, where for any real system $\underline\sigma(L)$ (and $\bar\sigma(L)$) is small, we have that $\bar\sigma(S) \approx 1$.

3.4 Control of multivariable plants

Figure 3.7: One degree-of-freedom feedback control configuration

Consider the simple feedback system in Figure 3.7. A conceptually simple approach to multivariable control is given by a two-step procedure in which we first design a "compensator" to deal with the interactions in $G$, and then design a diagonal controller using methods similar to those for SISO systems. This approach is discussed below.

The most common approach is to use a pre-compensator, $W_1(s)$, which counteracts the interactions in the plant and results in a "new" shaped plant

    G_s(s) = G(s) W_1(s)    (3.52)

which is more diagonal and easier to control than the original plant $G(s)$. After finding a suitable $W_1(s)$, we can design a diagonal controller $K_s(s)$ for the shaped plant. The overall controller is then

    K(s) = W_1(s) K_s(s)    (3.53)

In many cases effective compensators may be derived on physical grounds and may include nonlinear elements such as ratios.

Remark 1 Some design approaches in this spirit are the Nyquist Array technique of Rosenbrock (1974) and the characteristic loci technique of MacFarlane and Kouvaritakis (1977).

Remark 2 The $H_\infty$ loop-shaping design procedure, described in detail in Section 9.4, is similar in that a pre-compensator is first chosen to yield a shaped plant, $G_s = G W_1$, and then a controller $K_s(s)$ is designed. The main difference is that in $H_\infty$ loop shaping, $K_s(s)$ is a full multivariable controller,
designed based on optimization (to optimize $H_\infty$ robust stability).

3.4.1 Decoupling

Decoupling control results when the compensator $W_1$ is chosen such that $G_s = G W_1$ in (3.52) is diagonal at a selected frequency. The following different cases are possible:

1. Dynamic decoupling: $G_s(s)$ is diagonal at all frequencies. For example, with $G_s(s) = I$ and a square plant, we get $W_1 = G^{-1}(s)$ (disregarding the possible problems involved in realizing $G^{-1}(s)$). If we then select $K_s(s) = l(s)I$, the overall controller is

    K(s) = K_{inv}(s) \triangleq l(s) G^{-1}(s)    (3.54)

We will later refer to (3.54) as an inverse-based controller. It results in a decoupled nominal system with identical loops, i.e. $L(s) = l(s)I$, $S(s) = \frac{1}{1+l(s)} I$ and $T(s) = \frac{l(s)}{1+l(s)} I$.

Remark. In some cases we may want to keep the diagonal elements in the shaped plant unchanged by selecting $W_1 = G^{-1} G_{diag}$. In other cases we may want the diagonal elements in $W_1$ to be 1. This may be obtained by selecting $W_1 = G^{-1} ((G^{-1})_{diag})^{-1}$, and the off-diagonal elements of $W_1$ are then called "decoupling elements".

2. Steady-state decoupling: $G_s(0)$ is diagonal. This may be obtained by selecting a constant pre-compensator $W_1 = G^{-1}(0)$ (and for a non-square plant we may use the pseudo-inverse, provided $G(0)$ has full row (output) rank).

3. Approximate decoupling at frequency $\omega_o$: $G_s(j\omega_o)$ is as diagonal as possible. This is usually obtained by choosing a constant pre-compensator $W_1 = G_o^{-1}$, where $G_o$ is a real approximation of $G(j\omega_o)$. $G_o$ may be obtained, for example, using the align algorithm of Kouvaritakis (1974). The bandwidth frequency is a good selection for $\omega_o$ because the effect on performance of reducing interaction is normally greatest at this frequency.
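Steady-state decoupling (case 2 above) is a one-line computation. A sketch (assuming NumPy), applied to the distillation model (3.45):

```python
import numpy as np

G0 = np.array([[87.8, -86.4],
               [108.2, -109.6]])    # steady-state gain G(0) from (3.45)

W1 = np.linalg.inv(G0)              # constant pre-compensator W1 = G^{-1}(0)
Gs0 = G0 @ W1                       # shaped plant Gs(0) = G(0) W1

print(np.allclose(Gs0, np.eye(2)))  # True: interactions removed at steady state
```

At other frequencies $G_s(j\omega) = G(j\omega) W_1$ is of course not diagonal, which is one reason decoupling based on a single frequency can behave poorly over the whole bandwidth.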
The idea of decoupling control is appealing, but there are several difficulties:

1. As one might expect, decoupling may be very sensitive to modelling errors and uncertainties. This is illustrated below in Section 3.7.2, page 93.
2. The requirement of decoupling and the use of an inverse-based controller may not be desirable for disturbance rejection. The reasons are similar to those given for SISO systems in Section 2.6.4, and are discussed further below; see (3.58).
3. If the plant has RHP-zeros, then the requirement of decoupling generally introduces extra RHP-zeros into the closed-loop system (see Section 6.5.1, page 221).

Even though decoupling controllers may not always be desirable in practice, they are of interest from a theoretical point of view. They also yield insights into the limitations imposed by the multivariable interactions on achievable performance. One popular design method, which essentially yields a decoupling controller, is the internal model control (IMC) approach (Morari and Zafiriou, 1989). Another common strategy, which avoids most of the problems just mentioned, is to use partial (one-way) decoupling, where $G_s(s)$ in (3.52) is upper or lower triangular.

3.4.2 Pre- and post-compensators and the SVD-controller

The above pre-compensator approach may be extended by introducing a post-compensator, $W_2(s)$, as shown in Figure 3.8.

Figure 3.8: Pre- and post-compensators, $W_1$ and $W_2$; $K_s$ is diagonal

One then designs a diagonal controller $K_s$ for the shaped plant $W_2 G W_1$. The overall controller is then

    K(s) = W_1 K_s W_2    (3.55)

The SVD-controller is a special case of a pre- and post-compensator design. Here

    W_1 = V_o \quad \text{and} \quad W_2 = U_o^T    (3.56)

where $V_o$ and $U_o$ are obtained from a singular value decomposition $G_o = U_o \Sigma_o V_o^T$, where $G_o$ is a real approximation of $G(j\omega_o)$ at a given frequency $\omega_o$ (often around the bandwidth). SVD-controllers are studied by Hung and MacFarlane (1982), and by Hovd et al. (1994), who found that the SVD-controller structure is optimal in some cases, e.g. for plants consisting of symmetrically interconnected subsystems. In summary, the SVD-controller provides a useful class of controllers. By selecting $K_s = l(s)\Sigma_o^{-1}$ a decoupling design is achieved, and selecting a diagonal $K_s$ with a low condition number ($\gamma(K_s)$ small) generally results in a robust controller (see Section 6.10).
3.4.3 Diagonal controller (decentralized control)

Another simple approach to multivariable controller design is to use a diagonal or block-diagonal controller $K(s)$. This is often referred to as decentralized control. Clearly, this works well if $G(s)$ is close to diagonal, because then the plant to be controlled is essentially a collection of independent sub-plants, and each element in $K(s)$ may be designed independently. However, if the off-diagonal elements in $G(s)$ are large, then the performance with decentralized diagonal control may be poor, because no attempt is made to counteract the interactions. Decentralized control is discussed in more detail in Chapter 10.

3.4.4 What is the shape of the "best" feedback controller?

Consider the problem of disturbance rejection. The closed-loop disturbance response is $y = S G_d d$. Suppose we have scaled the system (see Section 1.4) such that at each frequency the disturbances are of magnitude 1, $\|d\|_2 \le 1$, and our performance requirement is that $\|y\|_2 \le 1$. This is equivalent to requiring $\bar\sigma(S G_d) \le 1$. In many cases there is a trade-off between input usage and performance, such that the controller that minimizes the input magnitude is one that yields all singular values of $S G_d$ equal to 1, i.e. $\sigma_i(S G_d) = 1, \ \forall\omega$. This corresponds to

    S_{min} G_d = U_1    (3.57)

where $U_1(s)$ is some all-pass transfer function (which at each frequency has all its singular values equal to 1). The subscript min refers to the use of the smallest loop gain that satisfies the performance objective. For simplicity, we assume that $G_d$ is square, so that $U_1(j\omega)$ is a unitary matrix. At frequencies where feedback is effective we have $S = (I+L)^{-1} \approx L^{-1}$, and (3.57) then yields $L_{min} = G K_{min} \approx G_d U_1^{-1}$. In conclusion, the controller and loop shape with the minimum gain will often look like

    K_{min} \approx G^{-1} G_d U_2, \qquad L_{min} \approx G_d U_2    (3.58)

where $U_2 = U_1^{-1}$ is some all-pass transfer function matrix. This provides a generalization of $|K_{min}| \approx |G_d|/|G|$, which was derived in (2.58) for SISO systems; the summary following (2.58) on page 48 therefore also applies to MIMO systems.
We also note with interest that it is generally not possible to select a unitary matrix $U_2$ such that $L_{min} = G_d U_2$ is diagonal, so a decoupling design is generally not optimal for disturbance rejection. These insights can be used as a basis for a loop-shaping design; see more on $H_\infty$ loop-shaping in Chapter 9.

3.4.5 Multivariable controller synthesis

The above design methods are based on a two-step procedure in which we first design a pre-compensator (for decoupling control) or make an input-output pairing selection (for decentralized control), and then design a diagonal controller $K_s(s)$. Invariably this two-step procedure results in a suboptimal design. The alternative is to synthesize directly a multivariable controller $K(s)$ based on minimizing some objective function (norm). We here use the word synthesize rather than design to stress that this is a more formalized approach. Optimization in controller design became prominent in the 1960's with "optimal control theory", based on minimizing the expected value of the output variance in the face of stochastic disturbances. Later, other approaches and norms were introduced, such as $H_\infty$ optimal control.

3.4.6 Summary of mixed-sensitivity $H_\infty$ synthesis ($S/KS$)

We here provide a brief summary of one multivariable synthesis approach, namely the $S/KS$ (mixed-sensitivity) $H_\infty$ design method, which is used in later examples in this chapter. In the $S/KS$ problem, the objective is to minimize the $H_\infty$ norm of

    N = \begin{bmatrix} W_P S \\ W_u K S \end{bmatrix}    (3.59)

This problem was discussed earlier for SISO systems, and another look at Section 2.7.3 would be useful now. A sample MATLAB file is provided in Example 2.11, page 60. The following issues and guidelines are relevant when selecting the weights $W_P$ and $W_u$:

1. $KS$ is the transfer function from $r$ to $u$ in Figure 3.7, so for a system which has been scaled as in Section 1.4, a reasonable initial choice for the input weight is $W_u = I$.
2. $S$ is the transfer function from $r$ to $e = r - y$. A common choice for the performance weight is $W_P = \mathrm{diag}\{w_{Pi}\}$, with

    w_{Pi} = \frac{s/M_i + \omega_{Bi}^*}{s + \omega_{Bi}^* A_i}    (3.60)

(see also Figure 2.26 on page 58). Selecting $A_i \ll 1$ ensures approximate integral action with $S(0) \approx 0$. Often we select $M_i$ about 2 for all outputs, whereas $\omega_{Bi}^*$ may be different for each output. A large value of $\omega_{Bi}^*$ yields a faster response for output $i$.
3. To find a reasonable initial choice for the weight $W_P$, one can first obtain a controller with some other design method, plot the magnitude of the resulting diagonal elements of $S$ as a function of frequency, and select $w_{Pi}(s)$ as a rational approximation of $1/|S_{ii}|$.
4. For disturbance rejection, we may in some cases want a steeper slope for $w_{Pi}(s)$ at low frequencies than that given in (3.60); see, for example, the weight in (2.73). However, it may be better to consider the disturbances explicitly by considering the $H_\infty$ norm of

    N = \begin{bmatrix} W_P S & W_P S G_d \\ W_u K S & W_u K S G_d \end{bmatrix}    (3.61)

or equivalently

    N = \begin{bmatrix} W_P S W_d \\ W_u K S W_d \end{bmatrix} \quad \text{with} \quad W_d = \begin{bmatrix} I & G_d \end{bmatrix}    (3.62)

where $N$ represents the transfer function from $[\,r \;\; d\,]^T$ to the weighted $e$ and $u$. In some situations we may want to adjust $W_P$ or $G_d$ in order to better satisfy our original objectives. The helicopter case study in Section 12.2 illustrates this by introducing a scalar parameter $\alpha$ to adjust the magnitude of $G_d$.
5. $T$ is the transfer function from $-n$ to $y$. To reduce sensitivity to noise and uncertainty, we want $T$ small at high frequencies, and so we may want additional roll-off in $L$. This can be achieved in several ways. One approach is to add $W_T T$ to the stack for $N$ in (3.59), where $W_T = w_T I$ and $w_T$ is smaller than 1 at low frequencies and large at high frequencies. A more direct approach is to add high-frequency dynamics, $W_1(s)$, to the plant model to ensure that the resulting shaped plant, $G_s = G W_1$, rolls off with the desired slope. We then obtain an $H_\infty$ optimal controller $K_s$ for this shaped plant, and finally include $W_1(s)$ in the controller, $K = W_1 K_s$.

More details about $H_\infty$ design are given in Chapter 9.

3.5 Introduction to multivariable RHP-zeros

By means of an example, we now give the reader an appreciation of the fact that MIMO systems have zeros, even though their presence may not be obvious from the elements of $G(s)$. As for SISO systems, we find that RHP-zeros impose fundamental limitations on control.
The zeros $z$ of MIMO systems are defined as the values $s = z$ where $G(s)$ loses rank, and we can find the direction of a zero by looking at the direction in which the matrix $G(z)$ has zero gain. For square systems, we essentially have that the poles and zeros of $G(s)$ are the poles and zeros of $\det G(s)$. However, this crude method may fail in some cases, as it may incorrectly cancel poles and zeros at the same location but with different directions (see Sections 4.5 and 4.5.3 for more details).

Example 3.8 Consider the following plant:

    G(s) = \frac{1}{(0.2s + 1)(s + 1)} \begin{bmatrix} 1 & 1 \\ 1 + 2s & 2 \end{bmatrix}    (3.63)

The responses to a step in each individual input are shown in Figure 3.9(a) and (b). We see from (3.63) that the plant is interactive, but for these two inputs there is no inverse response to indicate the presence of a RHP-zero. Nevertheless, the plant does have a multivariable RHP-zero at $z = 0.5$; that is, $G(s)$ loses rank at $s = 0.5$, and $\det G(0.5) = 0$. The singular value decomposition of $G(0.5)$ is

    G(0.5) = \frac{1}{1.65} \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} = \underbrace{\begin{bmatrix} 0.45 & -0.89 \\ 0.89 & 0.45 \end{bmatrix}}_{U} \underbrace{\begin{bmatrix} 1.92 & 0 \\ 0 & 0 \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 0.71 & -0.71 \\ 0.71 & 0.71 \end{bmatrix}^H}_{V^H}    (3.64)

and we have, as expected, $\underline\sigma(G(0.5)) = 0$. The input and output directions corresponding to the RHP-zero are the second singular vectors, $v = [\,-0.71 \;\; 0.71\,]^T$ and $u = [\,-0.89 \;\; 0.45\,]^T$. Thus, the RHP-zero is associated with both inputs and with both outputs. The presence of the multivariable RHP-zero is also observed in the time response in Figure 3.9(c), which is for a simultaneous input change in opposite directions, $u = [\,1 \;\; -1\,]^T$: $y_2$ displays an inverse response, whereas $y_1$ happens to remain at zero for this particular input change.

Figure 3.9: Open-loop responses for $G(s)$ in (3.63): (a) step in $u_1$, $u = [\,1 \;\; 0\,]^T$; (b) step in $u_2$, $u = [\,0 \;\; 1\,]^T$; (c) combined step, $u = [\,1 \;\; -1\,]^T$

To see how the RHP-zero affects the closed-loop response, we design a controller which minimizes the $H_\infty$ norm of the weighted $S/KS$ matrix

    N = \begin{bmatrix} W_P S \\ W_u K S \end{bmatrix}    (3.65)

with weights

    W_u = I, \qquad W_P = \begin{bmatrix} w_{P1} & 0 \\ 0 & w_{P2} \end{bmatrix}, \qquad w_{Pi} = \frac{s/M_i + \omega_{Bi}^*}{s + \omega_{Bi}^* A_i}    (3.66)

The MATLAB file for the design is the same as in Table 2.3 on page 60, except that we now have a $2 \times 2$ system. Since there is a RHP-zero at $z = 0.5$, we expect that this will somehow limit the bandwidth of the closed-loop system.

Design 1. We weight the two outputs equally and select

    Design 1:  M_1 = M_2 = 1.5, \quad \omega_{B1}^* = \omega_{B2}^* = z/2 = 0.25

This yields an $H_\infty$ norm for $N$ of 2.80, and the resulting singular values of $S$ are shown by the solid lines in Figure 3.10(a). The closed-loop response to a reference change $r = [\,1 \;\; -1\,]^T$ is shown by the solid lines in Figure 3.10(b). We note that both outputs behave rather poorly and both display an inverse response.

Figure 3.10: Alternative designs for the $2 \times 2$ plant (3.63) with a RHP-zero: (a) singular values of $S$ for Designs 1 and 2; (b) responses to a reference change $r = [\,1 \;\; -1\,]^T$

Design 2. For MIMO plants, one can often move most of the deteriorating effect (e.g. inverse response) of a RHP-zero to a particular output channel. To illustrate this, we change the weight $w_{P2}$ so that more emphasis is placed on output 2, by increasing the bandwidth requirement in output channel 2 by a factor of 100:

    Design 2:  M_1 = M_2 = 1.5, \quad \omega_{B1}^* = 0.25, \quad \omega_{B2}^* = 25

This yields an $H_\infty$ norm for $N$ of 2.92. In this case we see from the dashed line in Figure 3.10(b) that the response for output 2 ($y_2$) is excellent, with no inverse response. However, this comes at the expense of output 1 ($y_1$), where the response is somewhat poorer than for Design 1.

Design 3. We can also interchange the weights $w_{P1}$ and $w_{P2}$ to stress output 1 rather than output 2. In this case (not shown) we get an excellent response in output 1 with no inverse response, but output 2 responds very poorly (much poorer than output 1 does for Design 2). Furthermore, the $H_\infty$ norm for $N$ is significantly larger than the value of 2.92 obtained for Design 2.

Thus, we see that it is easier, for this example, to get tight control of output 2 than of output 1. This may be expected from the output direction of the RHP-zero, $u = [\,-0.89 \;\; 0.45\,]^T$, which is mostly in the direction of output 1. We will discuss this in more detail in Section 6.5.1.
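The multivariable RHP-zero of the plant (3.63) can be verified numerically. Here $\det G(s)$ is proportional to $(1 - 2s)$, which vanishes at $z = 0.5$; a sketch (assuming NumPy) checks this and recovers the zero directions from the SVD of $G(0.5)$:

```python
import numpy as np

def G(s):
    # Plant (3.63): G(s) = 1/((0.2s+1)(s+1)) * [[1, 1], [1+2s, 2]]
    return np.array([[1.0, 1.0],
                     [1.0 + 2.0 * s, 2.0]]) / ((0.2 * s + 1.0) * (s + 1.0))

z = 0.5
Gz = G(z)
print(abs(np.linalg.det(Gz)) < 1e-12)   # True: G loses rank at s = z

U, sigma, Vt = np.linalg.svd(Gz)
print(np.round(sigma, 2))               # [1.92 0.  ]: smallest singular value is 0
u_z, v_z = U[:, 1], Vt[1]               # directions for the zero singular value
print(np.round(np.abs(u_z), 2))         # [0.89 0.45]: zero output direction, mostly output 1
print(np.linalg.norm(Gz @ v_z))         # ~0: input direction with zero gain
```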
Remark 1 We find from this example that we can direct the effect of the RHP-zero to either of the two outputs. This is typical of multivariable RHP-zeros, but there are cases where the RHP-zero is associated with a particular output channel and it is not possible to move its effect to another channel. The zero is then called a "pinned zero" (see Section 4.6).

Remark 2 It is observed from the plot of the singular values in Figure 3.10(a) that with Design 2 we were able to obtain a very large improvement in the "good" direction (corresponding to $\underline\sigma(S)$) at the expense of only a minor deterioration in the "bad" direction (corresponding to $\bar\sigma(S)$). Thus Design 1 demonstrates a shortcoming of the $H_\infty$ norm: only the worst direction (maximum singular value) contributes to the $H_\infty$ norm, and it may not always be easy to get a good trade-off between the various directions.
3.6 Condition number and RGA
Two measures used to quantify the degree of directionality and the level of (two-way) interaction in MIMO systems are the condition number and the relative gain array (RGA), respectively. We here define the two measures and present an overview of their practical use. We do not give detailed proofs, but refer to other places in the book for further details.
3.6.1 Condition number
We define the condition number of a matrix as the ratio between its maximum and minimum singular values,

    \gamma(G) \triangleq \frac{\bar\sigma(G)}{\underline\sigma(G)}    (3.67)

A matrix with a large condition number is said to be ill-conditioned. For a non-singular (square) matrix, $\underline\sigma(G) = 1/\bar\sigma(G^{-1})$, so $\gamma(G) = \bar\sigma(G)\,\bar\sigma(G^{-1})$. It then follows from (A.119) that the condition number is large if both $G$ and $G^{-1}$ have large elements.

The condition number depends strongly on the scaling of the inputs and outputs. To be more specific, if $D_1$ and $D_2$ are diagonal scaling matrices, then the condition numbers of the matrices $G$ and $D_1 G D_2$ may be arbitrarily far apart. In general, the matrix $G$ should be scaled on physical grounds, for example, by dividing each input and output by its largest expected or desired value, as discussed in Section 1.4.

One might also consider minimizing the condition number over all possible scalings. This results in the minimized or optimal condition number, which is defined by

    \gamma^*(G) = \min_{D_1, D_2} \gamma(D_1 G D_2)    (3.68)
and can be computed using (A.73). The condition number has been used as an inputoutput controllability measure, and in particular it has been postulated that a large condition number indicates sensitivity to uncertainty. This is not true in general, but the reverse holds; if the condition number is small, then the multivariable effects of uncertainty are not likely to be serious (see (6.72)). If the condition number is large (say, larger than 10), then this may indicate control problems:
1. A large condition number $\gamma(G) = \bar\sigma(G)/\underline\sigma(G)$ may be caused by a small value of $\underline\sigma(G)$, which is generally undesirable (on the other hand, a large value of $\bar\sigma(G)$ need not necessarily be a problem).
2. A large condition number may mean that the plant has a large minimized condition number, or equivalently, that it has large RGA-elements, which indicate fundamental control problems; see below.
3. A large condition number does imply that the system is sensitive to "unstructured" (full-block) input uncertainty (e.g. with an inverse-based controller; see (8.135)), but this kind of uncertainty often does not occur in practice. We therefore cannot generally conclude that a plant with a large condition number is sensitive to uncertainty; e.g. see the diagonal plant in Example 3.9.
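The strong scaling dependence of $\gamma(G)$ noted above is easy to demonstrate (a sketch assuming NumPy): rescaling one output of a perfectly conditioned plant makes the condition number arbitrarily large without changing the underlying interactions at all:

```python
import numpy as np

G = np.eye(2)                      # perfectly conditioned plant: gamma(G) = 1
D1 = np.diag([1000.0, 1.0])        # diagonal output scaling

print(np.linalg.cond(G))           # 1.0
print(np.linalg.cond(D1 @ G))      # 1000.0: gamma(D1 G D2) can be made arbitrarily large
```

This is precisely why the minimized condition number $\gamma^*(G)$ in (3.68), which is scaling-independent, is the more meaningful quantity.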
3.6.2 Relative Gain Array (RGA)
The relative gain array (RGA) of a non-singular square matrix $G$ is a square matrix defined as

    \mathrm{RGA}(G) = \Lambda(G) \triangleq G \times (G^{-1})^T    (3.69)

where $\times$ denotes element-by-element multiplication (the Hadamard or Schur product). For a $2 \times 2$ matrix with elements $g_{ij}$, the RGA is

    \Lambda(G) = \begin{bmatrix} \lambda_{11} & \lambda_{12} \\ \lambda_{21} & \lambda_{22} \end{bmatrix} = \begin{bmatrix} \lambda_{11} & 1 - \lambda_{11} \\ 1 - \lambda_{11} & \lambda_{11} \end{bmatrix}, \qquad \lambda_{11} = \frac{1}{1 - \frac{g_{12} g_{21}}{g_{11} g_{22}}}    (3.70)
Bristol (1966) originally introduced the RGA as a steady-state measure of interactions for decentralized control. Unfortunately, based on the original definition, many people have dismissed the RGA as being "only meaningful at $\omega = 0$". To the contrary, in most cases it is the value of the RGA at frequencies close to crossover which is most important.

The RGA has a number of interesting algebraic properties, of which the most important are (see Appendix A.4 for more details):

1. It is independent of input and output scaling.
2. Its rows and columns sum to one.
3. The sum-norm of the RGA, $\|\Lambda\|_{sum}$, is very close to the minimized condition number $\gamma^*$; see (A.78). This means that plants with large RGA-elements are always ill-conditioned (with a large value of $\gamma(G)$), but the reverse may not hold (i.e. a plant with a large $\gamma(G)$ may have small RGA-elements).
4. A relative change in an element of $G$ equal to the negative inverse of its corresponding RGA-element yields singularity.
5. The RGA is the identity matrix if $G$ is upper or lower triangular.

From the last property it follows that the RGA (or, more precisely, $\Lambda - I$) provides a measure of two-way interaction.

Example 3.9 Consider a diagonal plant, and compute the RGA and condition number:
    G = [100 0; 0 1];   Λ(G) = I;   γ(G) = σ̄(G)/σ̲(G) = 100/1 = 100;   γ*(G) = 1        (3.71)

Here the condition number is 100, which means that the plant gain depends strongly on the input direction. However, since the plant is diagonal there are no interactions, so Λ(G) = I and the minimized condition number γ*(G) = 1.
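The numbers in (3.71) are easy to reproduce; the sketch below also shows directly why γ*(G) = 1: scaling the outputs with D₁ = diag(1/100, 1) turns this plant into the identity, whose condition number is 1.

```python
import numpy as np

G = np.array([[100.0, 0.0], [0.0, 1.0]])
sv = np.linalg.svd(G, compute_uv=False)
gamma = sv[0] / sv[-1]                    # condition number = 100
L = G * np.linalg.inv(G).T                # RGA = I (diagonal plant, no interactions)

# Minimized condition number: one admissible scaling already achieves 1
D1 = np.diag([0.01, 1.0])
sv_scaled = np.linalg.svd(D1 @ G, compute_uv=False)
gamma_scaled = sv_scaled[0] / sv_scaled[-1]   # = 1, so gamma*(G) = 1
```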
Example 3.10 Consider a triangular plant G for which we get

    G = [1 2; 0 1];   G⁻¹ = [1 −2; 0 1];   Λ(G) = I;   γ(G) = 5.83;   γ*(G) = 1        (3.72)

Note that for a triangular matrix, the RGA is always the identity matrix and γ*(G) is always 1.
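A quick check of (3.72) (sketch): the RGA of the triangular plant is exactly the identity, while the ordinary condition number is about 5.83.

```python
import numpy as np

G = np.array([[1.0, 2.0], [0.0, 1.0]])    # triangular plant of (3.72)
L = G * np.linalg.inv(G).T                # RGA = I (algebraic property 5)
sv = np.linalg.svd(G, compute_uv=False)
gamma = sv[0] / sv[-1]                    # condition number, about 5.83
```

This illustrates property 3 above: a large γ(G) by itself (here caused only by scaling/directionality, not interaction) does not imply large RGA-elements.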
In addition to the algebraic properties listed above, the RGA has a surprising number of useful control properties:

1. The RGA is a good indicator of sensitivity to uncertainty:
(a) Uncertainty in the input channels (diagonal input uncertainty). Plants with large RGA-elements around the crossover frequency are fundamentally difficult to control because of sensitivity to input uncertainty (e.g. caused by uncertain or neglected actuator dynamics). In particular, decouplers or other inverse-based controllers should not be used for plants with large RGA-elements (see page 243).
(b) Element uncertainty. As implied by algebraic property no. 4 above, large RGA-elements imply sensitivity to element-by-element uncertainty. However, this kind of uncertainty may not occur in practice due to physical couplings between the transfer function elements. Therefore, diagonal input uncertainty (which is always present) is usually of more concern for plants with large RGA-elements.

Example 3.11 Consider again the distillation process for which we have at steady-state
    G = [87.8 −86.4; 108.2 −109.6];   G⁻¹ = [0.3994 −0.3149; 0.3943 −0.3200];   Λ(G) = [35.1 −34.1; −34.1 35.1]        (3.73)
In this case γ(G) = 197.2/1.391 = 141.7 is only slightly larger than γ*(G) = 138.268. The magnitude sum of the elements in the RGA-matrix is ‖Λ‖_sum = 138.275. This confirms (A.79) which states that, for 2×2 systems, ‖Λ(G)‖_sum ≈ γ*(G) when γ*(G) is large. The condition number is large, but since the minimum singular value σ̲(G) = 1.391 is larger than 1 this does not by itself imply a control problem. However, the large RGA-elements indicate control problems, and fundamental control problems are expected if an analysis shows that G(jω) has large RGA-elements also in the crossover frequency range. (Indeed, the idealized dynamic model (3.81) used below has large RGA-elements at all frequencies, and we will confirm in simulations that there is a strong sensitivity to input channel uncertainty with an inverse-based controller.)
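The quoted numbers can be confirmed directly (a sketch; γ*(G) itself requires an optimization over diagonal scalings, so here we only check the singular values, γ(G) and ‖Λ‖_sum):

```python
import numpy as np

G = np.array([[87.8, -86.4], [108.2, -109.6]])
sv = np.linalg.svd(G, compute_uv=False)
sigma_max, sigma_min = sv[0], sv[-1]      # about 197.2 and 1.391
gamma = sigma_max / sigma_min             # condition number, about 141.7
L = G * np.linalg.inv(G).T                # RGA
lam_sum = np.abs(L).sum()                 # ||Lambda||_sum, about 138.27
```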
2. RGA and RHP-zeros. If the sign of an RGA-element changes from s = 0 to s = ∞, then there is a RHP-zero in G or in some subsystem of G (see Theorem 10.5, page 449).
3. Non-square plants. The definition of the RGA may be generalized to non-square matrices by using the pseudo-inverse; see Appendix A.4.2. Extra inputs: If the sum of the elements in a column of the RGA is small (≪ 1), then one may consider deleting the corresponding input. Extra outputs: If all elements in a row of the RGA are small (≪ 1), then the corresponding output cannot be controlled (see Section 10.4).
4. Pairing and diagonal dominance. The RGA can be used as a measure of diagonal dominance (or more precisely, of whether the inputs or outputs can be scaled to obtain diagonal dominance), by the simple quantity

    RGA-number = ‖Λ(G) − I‖_sum        (3.74)
For decentralized control we prefer pairings for which the RGA-number at crossover frequencies is close to 0 (see pairing rule 1 on page 445). Similarly, for certain multivariable design methods, it is simpler to choose the weights and shape the plant if we first rearrange the inputs and outputs to make the plant diagonally dominant with a small RGA-number.
5. RGA and decentralized control.
(a) Integrity: For stable plants avoid input-output pairing on negative steady-state RGA-elements. Otherwise, if the sub-controllers are designed independently each with integral action, then the interactions will cause instability either when all of the loops are closed, or when the loop corresponding to the negative relative gain becomes inactive (e.g. because of saturation) (see Theorem 10.4, page 447).
(b) Stability: Prefer pairings corresponding to an RGA-number close to 0 at crossover frequencies (see page 445).

Example 3.12 Consider a plant
    G(s) = 1/(s+1) [s+1 s+4; 1 2]        (3.75)

We find that λ₁₁(∞) = 2 and λ₁₁(0) = −1 have different signs. Since none of the diagonal elements have RHP-zeros we conclude from Theorem 10.5 that G(s) must have a RHP-zero. This is indeed true, and G(s) has a zero at s = 2.
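A numerical check of this example (sketch): λ₁₁(s) = 2(s+1)/(s−2) changes sign between s = 0 and s = ∞, and det G(s) vanishes at the RHP-zero s = 2.

```python
import numpy as np

def G(s):
    """Plant (3.75), evaluated at a real s (not equal to -1)."""
    return np.array([[s + 1.0, s + 4.0], [1.0, 2.0]]) / (s + 1.0)

def lam11(s):
    Gs = G(s)
    return (Gs * np.linalg.inv(Gs).T)[0, 0]

l0 = lam11(0.0)        # = -1
linf = lam11(1e9)      # tends to 2 as s -> infinity
detG2 = np.linalg.det(G(2.0))   # = 0: the RHP-zero at s = 2
```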
For a detailed analysis of achievable performance of the plant (input-output controllability analysis), one must also consider the singular values, RGA and condition number as functions of frequency. In particular, the crossover frequency range is important. In addition, disturbances and the presence of unstable (RHP) plant poles and zeros must be considered. All these issues are discussed in much more detail in Chapters 5 and 6 where we discuss achievable performance and input-output controllability analysis for SISO and MIMO plants, respectively.
3.7 Introduction to MIMO robustness
To motivate the need for a deeper understanding of robustness, we present two examples which illustrate that MIMO systems can display a sensitivity to uncertainty not found in SISO systems. We focus our attention on diagonal input uncertainty, which is present in any real system and often limits achievable performance because it enters between the controller and the plant.
3.7.1 Motivating robustness example no. 1: Spinning Satellite
Consider the following plant (Doyle, 1986; Packard et al., 1993) which can itself be motivated by considering the angular velocity control of a satellite spinning about one of its principal axes:
    G(s) = 1/(s² + a²) [s − a²  a(s+1);  −a(s+1)  s − a²];   a = 10        (3.76)

A minimal, state-space realization, G = C(sI − A)⁻¹B + D, is

    [A B; C D] = [0 a 1 0; −a 0 0 1; 1 a 0 0; −a 1 0 0]        (3.77)

The plant has a pair of jω-axis poles at s = ±ja so it needs to be stabilized. Let us apply negative feedback and try the simple diagonal constant controller K = I. The complementary sensitivity function is

    T(s) = GK(I + GK)⁻¹ = 1/(s+1) [1 a; −a 1]        (3.78)
Nominal stability (NS). The closed-loop system has two poles at s = −1 and so it is stable. This can be verified by evaluating the closed-loop state matrix

    A_cl = A − BKC = [0 a; −a 0] − [1 0; 0 1][1 a; −a 1] = [−1 0; 0 −1]

(To derive A_cl use ẋ = Ax + Bu, y = Cx and u = −Ky.)

Nominal performance (NP). The singular values of L = GK = G are shown in Figure 3.6(a), page 77. We see that σ̲(L) ≈ 1 at low frequencies and starts dropping off at about ω = 10. Since σ̲(L) never exceeds 1, we do not have tight control in the low-gain direction for this plant (recall the discussion following (3.51)), so we expect poor closed-loop performance. This is confirmed by considering S and T. For example, at steady-state σ̄(T) = 10.05 and σ̄(S) = 10. Furthermore, the large off-diagonal elements in T(s) in (3.78) show that we have strong interactions in the closed-loop system. (For reference tracking, however, this may be counteracted by use of a two degrees-of-freedom controller.)

Robust stability (RS). Now let us consider stability robustness. In order to determine stability margins with respect to perturbations in each input channel, one may consider Figure 3.11 where we have broken the loop at the first input. The loop transfer function at this point (the transfer function from w₁ to z₁) is L₁(s) = 1/s (which can be derived from t₁₁(s) = 1/(1+s) = L₁(s)/(1+L₁(s))). This corresponds to an infinite gain margin and a phase margin of 90°. On breaking the loop at the second input we get the same result. This suggests good robustness properties irrespective of the value of a. However, the design is far from robust as a further analysis shows. Consider input gain uncertainty, and let ε₁ and ε₂ denote the relative error in the gain
[Figure 3.11: Checking stability margins "one-loop-at-a-time". The loop is broken at the first plant input; w₁ is injected and z₁ measured.]
in each input channel. Then

    u₁' = (1 + ε₁)u₁;   u₂' = (1 + ε₂)u₂        (3.79)
where u₁' and u₂' are the actual changes in the manipulated inputs, while u₁ and u₂ are the desired changes as computed by the controller. It is important to stress that this diagonal input uncertainty, which stems from our inability to know the exact values of the manipulated inputs, is always present. In terms of a state-space description, (3.79) may be represented by replacing B by

    B' = [1+ε₁ 0; 0 1+ε₂]

The corresponding closed-loop state matrix is

    A_cl' = A − B'KC = [0 a; −a 0] − [1+ε₁ 0; 0 1+ε₂][1 a; −a 1]        (3.80)

which has a characteristic polynomial given by

    det(sI − A_cl') = s² + a₁s + a₀;   a₁ = 2 + ε₁ + ε₂;   a₀ = 1 + ε₁ + ε₂ + (a² + 1)ε₁ε₂

The perturbed system is stable if and only if both the coefficients a₀ and a₁ are positive. We therefore see that the system is always stable if we consider uncertainty in only one channel at a time (at least as long as the channel gain is positive). More precisely, we have stability for (−1 < ε₁ < 1, ε₂ = 0) and (ε₁ = 0, −1 < ε₂ < 1). This confirms the infinite gain margin seen earlier. However, the system can only tolerate small simultaneous changes in the two channels. For example, let ε₁ = −ε₂; then the system is unstable (a₀ < 0) for

    |ε₁| > 1/√(a² + 1) ≈ 0.1
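The contrast between the one-loop-at-a-time margins and the simultaneous perturbation can be checked directly from the eigenvalues of the perturbed closed-loop state matrix (a sketch with a = 10 and K = I):

```python
import numpy as np

a = 10.0
A = np.array([[0.0, a], [-a, 0.0]])
C = np.array([[1.0, a], [-a, 1.0]])       # realization (3.77); B = I, D = 0, K = I

def max_re(eps1, eps2):
    """Largest real part of the eigenvalues of A_cl' = A - B'KC."""
    Bp = np.diag([1.0 + eps1, 1.0 + eps2])
    return np.linalg.eigvals(A - Bp @ C).real.max()

eps_crit = 1.0 / np.sqrt(a**2 + 1.0)      # about 0.0995

# One channel at a time: stable over the whole range -1 < eps < 1
stable_single = max_re(0.9, 0.0) < 0 and max_re(-0.9, 0.0) < 0
# Simultaneous eps1 = -eps2: unstable just beyond 1/sqrt(a^2+1)
stable_below = max_re(0.95 * eps_crit, -0.95 * eps_crit) < 0
unstable_above = max_re(1.05 * eps_crit, -1.05 * eps_crit) > 0
```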
In summary, we have found that checking single-loop margins is inadequate for MIMO problems. We have also observed that large values of σ̄(T) or σ̄(S) indicate robustness problems. We will return to this in Chapter 8, where we show that with input uncertainty of magnitude |εᵢ| < 1/σ̄(T), we are guaranteed robust stability (even for "full-block complex perturbations"). In the next example we find that there can be sensitivity to diagonal input uncertainty even in cases where σ̄(T) and σ̄(S) have no large peaks. This cannot happen for a diagonal controller, see (6.77), but it will happen if we use an inverse-based controller for a plant with large RGA-elements, see (6.78).
3.7.2 Motivating robustness example no. 2: Distillation Process
The following is an idealized dynamic model of a distillation column,

    G(s) = 1/(75s + 1) [87.8 −86.4; 108.2 −109.6]        (3.81)
(time is in minutes). The physics of this example was discussed in Example 3.6. The plant is ill-conditioned with condition number γ(G) = 141.7 at all frequencies. The plant is also strongly two-way interactive and the RGA-matrix at all frequencies is

    Λ(G) = [35.1 −34.1; −34.1 35.1]        (3.82)
The large elements in this matrix indicate that this process is fundamentally difﬁcult to control.
Remark. (3.81) is admittedly a very crude model of a real distillation column; there should be a high-order lag in the transfer function from input 1 to output 2 to represent the liquid flow down the column, and higher-order composition dynamics should also be included. Nevertheless, the model is simple and displays important features of distillation column behaviour. It should be noted that with a more detailed model, the RGA-elements would approach 1 at frequencies around 1 rad/min, indicating less of a control problem.
[Figure 3.12: Response with decoupling controller to filtered reference input r₁ = 1/(5s+1). The perturbed plant has 20% gain uncertainty as given by (3.85). Solid line: nominal plant; dashed line: perturbed plant. The outputs y₁ and y₂ are shown over 0–60 min.]
We consider the following inverse-based controller, which may also be looked upon as a steady-state decoupler with a PI controller:

    K_inv(s) = (k₁/s) G(s)⁻¹ = (k₁(1 + 75s)/s) [0.3994 −0.3149; 0.3943 −0.3200];   k₁ = 0.7        (3.83)

Nominal performance (NP). We have GK_inv = K_inv G = (0.7/s)I. With no model error this controller should counteract all the interactions in the plant and give rise to two decoupled first-order responses each with a time constant of 1/0.7 = 1.43 min. This is confirmed by the solid line in Figure 3.12 which shows the simulated response to a reference change in y₁. The responses are clearly acceptable, and we conclude that nominal performance (NP) is achieved with the decoupling controller.

Robust stability (RS). The resulting sensitivity and complementary sensitivity functions with this controller are

    S = S_I = s/(s + 0.7) I;   T = T_I = 1/(1.43s + 1) I        (3.84)
Thus, σ̄(S) and σ̄(T) are both less than 1 at all frequencies, so there are no peaks which would indicate robustness problems. We also find that this controller gives an infinite gain margin (GM) and a phase margin (PM) of 90° in each channel. Thus, use of the traditional margins and the peak values of S and T indicate no robustness problems. However, from the large RGA-elements there is cause for concern, and this is confirmed in the following.

We consider again the input gain uncertainty (3.79) as in the previous example, and we select ε₁ = 0.2 and ε₂ = −0.2. We then have

    u₁' = 1.2 u₁;   u₂' = 0.8 u₂        (3.85)
Note that the uncertainty is on the change in the inputs (flow rates), and not on their absolute values. A 20% error is typical for process control applications (see Remark 2 on page 302). The uncertainty in (3.85) does not by itself yield instability. This is verified by computing the closed-loop poles, which, assuming no cancellations, are solutions to det(I + L(s)) = det(I + L_I(s)) = 0 (see (4.102) and (A.12)). In our case
    L_I'(s) = K_inv G' = K_inv G [1+ε₁ 0; 0 1+ε₂] = (0.7/s) [1+ε₁ 0; 0 1+ε₂]        (3.86)

so the perturbed closed-loop poles are

    s₁ = −0.7(1 + ε₁);   s₂ = −0.7(1 + ε₂)
and we have closed-loop stability as long as the input gains 1 + ε₁ and 1 + ε₂ remain positive, so we can have up to 100% error in each input channel. We thus conclude that we have robust stability (RS) with respect to input gain errors for the decoupling controller.

Robust performance (RP). For SISO systems we generally have that nominal performance (NP) and robust stability (RS) imply robust performance (RP), but this is not the case for MIMO systems. This is clearly seen from the dotted lines in Figure 3.12 which show the closed-loop response of the perturbed system. It differs drastically from the nominal response represented by the solid line, and even though it is stable, the response is clearly not acceptable; it is no longer decoupled, and y₁(t) and y₂(t) reach a value of about 2.5 before settling at their desired values of 1 and 0. Thus RP is not achieved by the decoupling controller.
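The nominal and perturbed responses of Figure 3.12 are easy to reproduce with a simple Euler simulation of the plant (3.81) under the PI form of the inverse-based controller (3.83), with the filtered reference r₁ = 1/(5s+1) and the 20% input gain errors (3.85). This is a sketch in Python/NumPy (the book's own simulations use MATLAB); the step size and horizon are illustrative choices:

```python
import numpy as np

G0 = np.array([[87.8, -86.4], [108.2, -109.6]])   # steady-state gains of (3.81)
G0inv = np.linalg.inv(G0)
tau, k1 = 75.0, 0.7                               # plant lag [min], controller gain

def simulate(eps, T=60.0, dt=0.002):
    """y = G u'; u' = (I+E)u (3.79); u = Kinv(s)(r - y).
    Kinv(s) = k1(1+tau*s)/s * G0inv acts as PI: u = G0inv(kp*e + ki*int(e))."""
    E = np.diag(eps)
    kp, ki = k1 * tau, k1
    xp = np.zeros(2)          # plant states, y = xp
    xi = np.zeros(2)          # integrated error
    rf = 0.0                  # reference filter 1/(5s+1) on r1
    ymax = np.zeros(2)
    for _ in range(int(T / dt)):
        rf += dt * (1.0 - rf) / 5.0
        e = np.array([rf, 0.0]) - xp
        u = G0inv @ (kp * e + ki * xi)
        up = (np.eye(2) + E) @ u              # perturbed inputs
        xp += dt * (-xp + G0 @ up) / tau
        xi += dt * e
        ymax = np.maximum(ymax, xp)
    return ymax

peak_nom = simulate([0.0, 0.0])       # nominal: decoupled, no overshoot
peak_pert = simulate([0.2, -0.2])     # 20% input gain errors (3.85)
```

The perturbed peak of y₁ comes out at roughly 2.5, matching the figure, while the nominal response never exceeds its setpoint.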
Remark 1 There is a simple reason for the observed poor response to the reference change in y₁. To accomplish this change, which occurs mostly in the direction corresponding to the low plant gain, the inverse-based controller generates relatively large inputs u₁ and u₂, while trying to keep u₁ − u₂ very small. However, the input uncertainty makes this impossible – the result is an undesired large change in the actual value of u₁' − u₂', which subsequently results in large changes in y₁ and y₂ because of the large plant gain (σ̄(G) = 197.2) in this direction, as seen from (3.46).
Remark 2 The system remains stable for gain uncertainty up to 100% because the uncertainty occurs only at one side of the plant (at the input). If we also consider uncertainty at the output then we find that the decoupling controller yields instability for relatively small errors in the input and output gains. This is illustrated in Exercise 3.8 below.

Remark 3 It is also difficult to get a robust controller with other standard design techniques for this model. For example, an S/KS design as in (3.59) with W_P = w_P I (using M = 2 and ω_B* = 0.05 in the performance weight (3.60)) and W_u = I, yields a good nominal response (although not decoupled), but the system is very sensitive to input uncertainty, and the outputs go up to about 3.4 and settle very slowly when there is 20% input gain error.

Remark 4 Attempts to make the inverse-based controller robust using the second step of the Glover–McFarlane H∞ loop-shaping procedure are also unhelpful; see Exercise 3.9. This shows that robustness with respect to coprime factor uncertainty does not necessarily imply robustness with respect to input uncertainty. In any case, the solution is to avoid inverse-based controllers for a plant with large RGA-elements.
Exercise 3.7 Design an SVD controller K = W₁K_sW₂ for the distillation process in (3.81), i.e. select W₁ = V and W₂ = Uᵀ where U and V are given in (3.46). Select K_s in the form

    K_s = [c₁(75s+1)/s  0;  0  c₂(75s+1)/s]

and try the following values:
(a) c₁ = c₂ = 0.005;
(b) c₁ = 0.005, c₂ = 0.05;
(c) c₁ = 0.7/197.2 = 0.0036, c₂ = 0.7/1.391 = 0.504.
Simulate the closed-loop reference response with and without uncertainty. Designs (a) and (b) should be robust. Which has the best performance? Design (c) should give the response in Figure 3.12. In the simulations, include high-order plant dynamics by replacing G(s) by 1/(0.02s+1) G(s). What is the condition number of the controller in the three cases? Discuss the results. (See also the conclusion on page 243.)
Exercise 3.8 Consider again the distillation process (3.81) with the decoupling controller, but also include output gain uncertainty ε̂ᵢ. That is, let the perturbed loop transfer function be

    L'(s) = G'K_inv = (0.7/s) [1+ε̂₁ 0; 0 1+ε̂₂] G [1+ε₁ 0; 0 1+ε₂] G⁻¹ = (0.7/s) L₀        (3.87)

where L₀ is a constant matrix for the distillation model (3.81), since all elements in G share the same dynamics, G(s) = g(s)G₀. The closed-loop poles of the perturbed system are solutions to det(I + L'(s)) = 0, or equivalently

    det((s/k₁)I + L₀) = (s/k₁)² + tr(L₀)(s/k₁) + det(L₀) = 0        (3.88)

For k₁ > 0 we have from the Routh–Hurwitz stability condition that instability occurs if and only if the trace and/or the determinant of L₀ are negative.
Since det(L₀) > 0 for any gain error less than 100%, instability can only occur if tr(L₀) < 0. Evaluate tr(L₀) and show that, with gain errors of equal magnitude, the combination of errors which most easily yields instability is with ε̂₁ = −ε̂₂ = −ε₁ = ε₂ = ε. Use this to show that the perturbed system is unstable if

    ε > √(1/(2λ₁₁ − 1))        (3.89)

where λ₁₁ = g₁₁g₂₂/det(G) is the 1,1-element of the RGA of G. In our case λ₁₁ = 35.1 and we get instability for ε > 0.120. Check this numerically, e.g. using MATLAB.
Remark. The instability condition in (3.89) for simultaneous input and output gain uncertainty applies to the very special case of a 2×2 plant in which all elements share the same dynamics, G(s) = g(s)G₀, and an inverse-based controller, K(s) = (k₁/s)G(s)⁻¹.
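The numerical check suggested in Exercise 3.8 can be sketched as follows: evaluate tr(L₀) for the worst-case error combination and confirm that its sign flips at the threshold (3.89), ε ≈ 0.120.

```python
import numpy as np

G = np.array([[87.8, -86.4], [108.2, -109.6]])
Ginv = np.linalg.inv(G)
lam11 = (G * Ginv.T)[0, 0]                 # 1,1-element of the RGA, about 35.1

def trace_L0(eps):
    """tr(L0) for the worst combination eps1hat = -eps2hat = -eps1 = eps2 = eps."""
    Do = np.diag([1.0 + eps, 1.0 - eps])   # output gain errors
    Di = np.diag([1.0 - eps, 1.0 + eps])   # input gain errors
    return np.trace(Do @ G @ Di @ Ginv)

eps_crit = np.sqrt(1.0 / (2.0 * lam11 - 1.0))   # prediction of (3.89), about 0.120
```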
Exercise 3.9 Consider again the distillation process G(s) in (3.81). The response using the inverse-based controller K_inv in (3.83) was found to be sensitive to input gain errors. We want to see if the controller can be modified to yield a more robust system by using the Glover–McFarlane H∞ loop-shaping procedure. To this effect, let the shaped plant be G_s = GK_inv, i.e. W₁ = K_inv, and design an H∞ controller K_s for the shaped plant (see page 389 and Chapter 9), such that the overall controller becomes K = K_inv K_s. (You will find that γ_min = 1.41 which indicates good robustness with respect to coprime factor uncertainty, but the loop shape is almost unchanged and the system remains sensitive to input uncertainty.)
3.7.3 Robustness conclusions
From the two motivating examples above we found that multivariable plants can display a sensitivity to uncertainty (in this case input uncertainty) which is fundamentally different from what is possible in SISO systems. In the first example (spinning satellite), we had excellent stability margins (PM and GM) when considering one loop at a time, but small simultaneous input gain errors gave instability. This might have been expected from the peak values (H∞ norms) of S and T, defined as

    ‖T‖∞ = max_ω σ̄(T(jω));   ‖S‖∞ = max_ω σ̄(S(jω))        (3.90)

which were both large (about 10) for this example. In the second example (distillation process), we again had excellent stability margins (PM and GM), and the system was also robustly stable to errors (even simultaneous) of up to 100% in the input gains. However, in this case small input gain errors gave very poor output performance, so robust performance was not satisfied, and adding simultaneous output gain uncertainty resulted in instability (see Exercise 3.8). These problems with the decoupling controller might have been expected because the plant has large RGA-elements. For this second example the
H∞ norms of S and T were both about 1, so the absence of peaks in S and T does not guarantee robustness. Although sensitivity peaks, RGA-elements, etc. are useful indicators of robustness problems, they provide no exact answer to whether a given source of uncertainty will yield instability or poor performance. This motivates the need for better tools for analyzing the effects of model uncertainty. We want to avoid a trial-and-error procedure based on checking stability and performance for a large number of candidate plants. This is very time consuming, and in the end one does not know whether those plants are the limiting ones. What is desired, is a simple tool which is able to identify the worst-case plant. This will be the focus of Chapters 7 and 8 where we show how to represent model uncertainty in the H∞ framework, and introduce the structured singular value as our tool. The two motivating examples are studied in more detail in Example 8.10 and Section 8.11.3 where a μ-analysis predicts the robustness problems found above.

3.8 General control problem formulation

[Figure 3.13: General control configuration for the case with no model uncertainty. The generalized plant P has inputs w ((weighted) exogenous inputs) and u (control signals), and outputs z ((weighted) exogenous outputs) and v (sensed outputs); the generalized controller K maps v to u.]

In this section we consider a general method of formulating control problems introduced by Doyle (1983; 1984). The formulation makes use of the general control configuration in Figure 3.13, where P is the generalized plant and K is the generalized controller as explained in Table 1.1 on page 13. Note that positive feedback is used.

The overall control objective is to minimize some norm of the transfer function from w to z, for example, the H∞ norm. The controller design problem is then: Find a controller K which, based on the information in v, generates a control signal u which counteracts the influence of w on z, thereby minimizing the closed-loop norm from w to z.
The most important point of this section is to appreciate that almost any linear control problem can be formulated using the block diagram in Figure 3.13 (for the nominal case) or in Figure 3.21 (with model uncertainty).

Remark 1 The configuration in Figure 3.13 may at first glance seem restrictive. However, this is not the case, and we will demonstrate the generality of the setup with a few examples, including the design of observers (the estimation problem) and feedforward controllers.

Remark 2 We may generalize the control configuration still further by including diagnostics as additional outputs from the controller, giving the 4-parameter controller introduced by Nett (1986), but this is not considered in this book.

3.8.1 Obtaining the generalized plant P

The routines in MATLAB for synthesizing H∞ and H₂ optimal controllers assume that the problem is in the general form of Figure 3.13; that is, they assume that P is given. To derive P (and K) for a specific case we must first find a block diagram representation and identify the signals w, z, u and v. To construct P one should note that it is an open-loop system and remember to break all "loops" entering and exiting the controller K. Some examples are given below and further examples are given in Section 9.3 (Figures 9.9, 9.10, 9.11 and 9.12).

[Figure 3.14: One degree-of-freedom control configuration]

Example 3.13 One degree-of-freedom feedback control configuration. We want to find P for the conventional one degree-of-freedom control configuration in Figure 3.14. The first step is to identify the signals for the generalized plant:

    w = [w₁; w₂; w₃] = [d; r; n];   z = y − r;   v = r − y_m = r − y − n        (3.91)

With this choice of v, the controller only has information about the deviation r − y_m. Also note that z = y − r, which means that performance is specified in terms of the actual output y and not in terms of the measured output y_m. The block diagram in Figure 3.14 then yields

    z = y − r = Gu + d − r = Iw₁ − Iw₂ + 0w₃ + Gu
    v = r − y_m = r − Gu − d − n = −Iw₁ + Iw₂ − Iw₃ − Gu
and P, which represents the transfer function matrix from [w; u] to [z; v], is

    P = [I −I 0 G; −I I −I −G]        (3.92)

Note that P does not contain the controller. Alternatively, P can be obtained by inspection from the representation in Figure 3.15.

[Figure 3.15: Equivalent representation of Figure 3.14, where the error signal to be minimized is z = y − r and the input to the controller is v = r − y_m]

Remark. Obtaining the generalized plant P may seem tedious. However, when performing numerical calculations P can be generated using software. For example, in MATLAB we may use the simulink program, or we may use the sysic program in the μ-toolbox. The code in Table 3.1 generates the generalized plant P in (3.92) for Figure 3.14.

Table 3.1: MATLAB program to generate P in (3.92)

    % Uses the Mu-toolbox
    systemnames = 'G';                    % G is the SISO plant.
    inputvar = '[d(1);r(1);n(1);u(1)]';   % Consists of vectors w and u.
    input_to_G = '[u]';
    outputvar = '[G+d-r; r-G-d-n]';       % Consists of vectors z and v.
    sysoutname = 'P';
    sysic;

3.8.2 Controller design: Including weights in P

To get a meaningful controller synthesis problem, for example, in terms of the H∞ or H₂ norms, we generally have to include weights W_z and W_w in the generalized plant P, see Figure 3.16. That is, we consider the weighted or normalized exogenous inputs w (where W_w w consists of the "physical" signals entering the system: disturbances, references and noise), and the weighted or normalized controlled outputs z (which often consist of the weighted control error y − r and the weighted manipulated input u).
98) (3. Á È¾¾ ÏÙ Á ÏÌ ÏÈ (3.97) Þ Ú È½½ È¾½ È½½ Û · È½¾ Ù È¾½ Û · È¾¾ Ù The reader should become familiar with this notation.3 Partitioning the generalized plant È We often partition È as È È½½ È½¾ È¾½ È¾¾ (3.99) . Ù and Ú in the generalized (3.½¼¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¹ Û Ú¹ Ã ÏÙ ÏÌ ÏÈ ¹ ¹ ¹ Þ ¹  Õ Ù¹ ¹ + Õ¹ Õ+ Figure 3.8.e. For cases with one degreeoffreedom negative feedback control we have È ¾¾ . i. Þ . then È ¾¾ is an ÒÚ ¢ ÒÙ matrix.95) such that its parts are compatible with the signals control conﬁguration. if Ã is an Ò Ù ¢ ÒÚ matrix.96) (3.93) Û ÙÌ È to ¾ Þ ÚÌ is ¿ ¼ ÏÙ Á ¼ ÏÌ ÏÈ Á ÏÈ Á (3.14 we get ÏÈ Á ¼ ¼ È½¾ Note that È¾¾ has dimensions compatible with the controller. Û. In Example 3.94) 3.17: Block diagram corresponding to Þ so the generalized plant È from ÆÛ in (3.
ÁÆÌÊÇ Í ÌÁÇÆ ÌÇ ÅÍÄÌÁÎ ÊÁ Ä ÇÆÌÊÇÄ ½¼¿ 3. ﬁrst partition the generalized plant È as Ù ÃÚ Þ (3. ¼ ÏÙ Á ¼ · ÏÌ ÏÈ Á ÏÈ Ã ´Á · Ã µ ½ ´ Á µ ÏÙÃË ¿ ÏÌ Ì ÏÈ Ë .13 the term ´Á È¾¾ Ã µ ½ has a negative sign. To assist in remembering the sequence of È½¾ and È¾½ in (3.16 have the controller Þ ÆÛ (3. this is identical to Æ given in (3. the negative signs have no effect on the norm of Æ . combine this with the controller equation and eliminate Ù and Ú from equations (3.101) to yield where Æ is given by ÆÛ (3.98) and (3. Some properties of LFTs are given in Appendix A. Æ is obtained from Figure 3.4 Analysis: Closing the loop to get Æ Û ¹ Æ ¹Þ Figure 3. and we may absorb Ã into the interconnection structure and obtain the system Æ as shown in Figure 3.15 We want to derive LFTformula in (3. Since positive feedback is used in the general conﬁguration in Figure 3.102).7.93).95)(3. We get ¾ ¿ ¾ Æ for the partitioned ¿ È in (3.13 and 3. The reader is advised to become comfortable with the above manipulations before progressing much further. for analysis of closedloop performance the controller is given. However.101) given in (3. This is useful when synthesizing the controller.102) is also represented by the block diagram in Figure 3.102). notice that the ﬁrst (last) index in È½½ is the same as the ﬁrst (last) index in È½¾ Ã ´Á È¾¾ Ã µ ½ È¾½ .2. (3.100) where Æ is a function of Ã . Ì ÃË and Á Ì Ë .18: General block diagram for analysis with no uncertainty Ã as a separate block. Of course.97) and (3. The lower LFT in (3.8. Remark.99) using the ¾ Æ where we have made use of the identities Ë ´Á · Ã µ ½ . With the exception of the two negative signs.18 where The general feedback conﬁgurations in Figures 3. To ﬁnd Æ .97). Example 3. In words.96).102) Æ È½½ · È½¾ Ã ´Á È¾¾ Ã µ ½ È¾½ ¸ Ð ´È Ã µ Here Ð ´È Ã µ denotes a lower linear fractional transformation (LFT) of È with Ã as the parameter.13 by using Ã to close a lower feedback loop around È .
3(b).16 Consider the control system in Figure 3. Õ Ã Ö ¹ ÃÖ + ¹ ¹  Ã½ + ¹ Ù ¹ + ¾ Ã¾ Ý¾¹ Õ + +¹ ½ ½ ÕÝ¹ Figure 3. For example in the MATLAB Toolbox we can evaluate Æ Ð ´È Ã µ using the command N=starp(P. (ii) Let Þ using Æ Ý Ö Ö Ö Þ Ú (3. there is no control objective associated with it.5). directly from the block diagram and Ð ´È Ã µ.8. local feedback and two degreesoffreedom control Example 3. and one for a problem involving estimation.7. we now present two further examples: one in which we derive È for a problem involving feedforward control. Here starp denotes the matrix star product which generalizes the use of LFTs (see Appendix A.103) Ù ÝÑ Ò ÆÛ and derive Æ in two different ways. The control conﬁguration includes a two degreesoffreedom controller.13. Ý¾ is a secondary output (extra measurement). and we also measure the disturbance .5 Generalized plant È : Further examples To illustrate the generality of the conﬁguration in Figure 3.K). a feedforward controller and a local feedback controller based on the . Û ¾ ¿ 3. (i) Find È when Exercise 3.10 Consider the two degreesoffreedom feedback conﬁguration in Figure 1.½¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Again. it should be noted that deriving Æ from È is much simpler using available software. By secondary we mean that Ý¾ is of secondary importance for control. that is.19.19: System with feedforward. where Ý½ is the output we want to control.
To recast this into our standard configuration of Figure 3.13 we define

    w = [d; r];   z = y₁ − r;   v = [r; y₁; y₂; d]        (3.104)

Note that d and r are both inputs and outputs to P, and we have assumed a perfect measurement of the disturbance d. Since the controller has explicit information about r, we have a two degrees-of-freedom controller. The generalized controller K may be written in terms of the individual controller blocks in Figure 3.19, and by writing down the equations or by inspection from Figure 3.19 we get the corresponding generalized plant P in (3.105).

However, if we impose restrictions on the design such that, for example, K₂ or K₁ are designed "locally" (without considering the whole problem), then this will limit the achievable performance. For example, for a two degrees-of-freedom controller a common approach is to first design the feedback controller K_y for disturbance rejection (without considering reference tracking) and then design K_r for reference tracking. This will generally give some performance loss compared to a simultaneous design of K_y and K_r.

Exercise 3.10 Consider the two degrees-of-freedom feedback configuration in Figure 1.3(b). Derive the generalized controller K and the generalized plant P in this case, and partition P as in (3.96) and (3.97) to identify P₂₂.
The local feedback based on Ý¾ is often implemented in a cascade manner.105) By writing down the equations or by inspection from Figure 3.5. However. we see that a cascade implementation does not usually limit the achievable performance since. Since the controller has explicit information about Ö we have a two degreesoffreedom controller.106) Then partitioning È as in (3.
Ù Õ Ý¾ ¹ ¹ Kalman Filter Ý Ý¾ ¹ ¹ + ¹  Þ ¹ Ã ×Ø Ý ¹+ Ã ªª ª Ý¾ ¾ ½ Á × Ù ¹ ¹ +¹ + + Ü Õ ¹ ¹Ý Figure 3.17 the objective was to minimize Ý Ý). the output Ù from the generalized controller is the estimate of the plant output.20. Exercise 3. This problem may The objective is to design an estimator.13 with Û Ù Ù Ý Þ Ý Ý Ú Ý¾ Ù Note that Ù Ý .½¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Ý ×Ø .107) ¼ ¼ since the estimator problem does not involve feedback.2 the objective is to minimize Ü Ü (whereas in Example 3. Ã be written in the general framework of Figure 3. Show how the Kalman ﬁlter problem can be represented in the general conﬁguration of Figure 3. Ã Ã ×Ø and ¾ È We see that È¾¾ Á ¿ ¼ ¼ ¼ Á (3. In the Kalman ﬁlter problem studied in .12 State estimator (observer). One particular estimator Ã ×Ø is a Kalman Filter Section 9.13 and ﬁnd È . that is. such that the estimated output Ý Ã ×Ø Ù ¾ is as close as possible in some sense to the true output Ý . Furthermore. see Figure 3.20: Output estimation problem.
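The star-product evaluation mentioned above reduces, in the scalar case, to a one-line formula. The following Python sketch (hypothetical partition values; this is an illustration of the lower LFT, not MATLAB's starp itself) shows the computation:

```python
# A scalar sketch of what the star product computes for the lower loop:
# N = Fl(P, K) = P11 + P12*K*(I - P22*K)^(-1)*P21.
# The partition values below are hypothetical.

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT of a scalar partitioned P with controller K."""
    return P11 + P12 * K * P21 / (1.0 - P22 * K)

P11, P12, P21, P22 = 1.0, 2.0, 3.0, 0.5
K = 1.0
N = lft_lower(P11, P12, P21, P22, K)
print(N)  # 1 + 2*1*3/(1 - 0.5) = 13.0
```

For matrix-valued blocks the scalar division becomes a matrix inverse, which is why dedicated software is preferred in practice.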
3.8.6 Deriving P from N

For cases where N is given and we wish to find a P such that

  N = Fl(P,K) = P11 + P12 K (I - P22 K)^(-1) P21

it is usually best to work from a block diagram representation. This was illustrated above for the stacked N in (3.93). Alternatively, the following procedure may be useful:

1. Set K = 0 in N to obtain P11.
2. Define Q = N - P11 and rewrite Q such that each term has a common factor R = K(I - P22 K)^(-1) (this gives P22).
3. Since Q = P12 R P21, we can now usually obtain P12 and P21 by inspection.

Example 3.18 Weighted sensitivity. We will use the above procedure to derive P when N = wP S = wP (I + GK)^(-1), where wP is a scalar weight.

1. P11 = N(K = 0) = wP I.
2. Q = N - wP I = wP (S - I) = -wP GK(I + GK)^(-1), which has the common factor R = K(I + GK)^(-1), so P22 = -G.
3. Q = P12 R P21 with Q = -wP G R, so we may choose P12 = wP G and P21 = -I.

and we get

  P = [ wP I   wP G
        -I     -G  ]   (3.108)

Remark. When obtaining P from a given N, we have that P11 and P22 are unique, whereas from Step 3 in the above procedure we see that P12 and P21 are not unique. For instance, let alpha be a real scalar; then we may instead choose P12' = alpha P12 and P21' = (1/alpha) P21. For P in (3.108) this means that we may move the negative sign of the scalar wP between P21 and P12.

Exercise 3.13 Mixed sensitivity. Use the above procedure to derive the generalized plant P for the stacked N in (3.93).

3.8.7 Problems not covered by the general formulation

The above examples have demonstrated the generality of the control configuration in Figure 3.13. Nevertheless, there are some controller design problems which are not covered. Let N be some closed-loop transfer function whose norm we want to minimize. To use the general form we must first obtain a P such that N = Fl(P,K). However, this is not always possible, since there may not exist a block diagram representation for N. As a simple example, consider the stacked transfer function

  N = [ (I + GK)^(-1) ;  (I + KG)^(-1) ]   (3.109)

The transfer function (I + GK)^(-1) may be represented on a block diagram with the input and output signals after the plant, whereas (I + KG)^(-1) may be represented by another block diagram with input and output signals before the plant. However, in N there are no cross coupling terms between an input before the plant and an output after the plant (corresponding to G(I + KG)^(-1)), or between an input after the plant and an output before the plant (corresponding to K(I + GK)^(-1)), so N cannot be represented in block diagram form. Equivalently, if we apply the procedure in Section 3.8.6 to N in (3.109), we are not able to find solutions to P12 and P21 in Step 3.

Another stacked transfer function which cannot in general be represented in block diagram form is

  N = [ WP S ;  S WP ]   (3.110)

Exercise 3.14 Show that N in (3.110) can be represented in block diagram form if WP = wP I where wP is a scalar.

Remark. The case where N cannot be written as an LFT of K is a special case of the Hadamard weighted H-infinity problem studied by van Diggelen and Glover (1994a). Although the solution to this H-infinity problem remains intractable, van Diggelen and Glover (1994b) present a solution for a similar problem where the Frobenius norm is used instead of the singular value to "sum up the channels".

3.8.8 A general control configuration including model uncertainty

The general control configuration in Figure 3.13 may be extended to include model uncertainty as shown by the block diagram in Figure 3.21. Here the matrix Delta is a block-diagonal matrix that includes all possible perturbations (representing uncertainty) to the system. It is usually normalized in such a way that ||Delta||_inf <= 1.

[Figure 3.21: General control configuration for the case with model uncertainty.]
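As a quick numerical sanity check of the weighted-sensitivity construction (a sketch assuming the scalar partition P = [wP, wP*G; -1, -G] derived by the procedure; the numbers are hypothetical):

```python
# Scalar check that P = [wP, wP*G; -1, -G] reproduces N = wP*S = wP/(1 + G*K)
# when the lower loop is closed with K. Values of wP, G and K are hypothetical.

def fl(P11, P12, P21, P22, K):
    return P11 + P12 * K * P21 / (1.0 - P22 * K)

wP, G = 0.7, 4.0
for K in (0.1, 1.0, 10.0):
    N_lft = fl(wP, wP * G, -1.0, -G, K)
    N_direct = wP / (1.0 + G * K)
    assert abs(N_lft - N_direct) < 1e-12
print("P reproduces wP*S for all tested K")
```

The check also illustrates the non-uniqueness remark: replacing (P12, P21) = (wP*G, -1) by (-wP*G, 1) gives the same N.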
À Remark 2 In (3.111) To evaluate the perturbed (uncertain) transfer function from external inputs Û to external outputs Þ . although good practical approaches like Ã iteration to ﬁnd the “ optimal” controller are in use (see Section 8. This is discussed in more detail in Chapter 8.23: Rearranging a system with multiple perturbations into the Æ ¡structure The block diagram in Figure 3.7): Þ Ù ´Æ ¡µÛ Ù ´Æ ¡µ ¸ Æ¾¾ · Æ¾½ ¡´Á Æ½½ ¡µ ½ Æ½¾ (3. then the same lower LFT as found in (3. ¡ is square in which case Æ½½ is a square matrix .22 in terms of Æ (for analysis) by using Ã to close a lower loop around È . .21 in terms of È (for synthesis) may be transformed into the block diagram in Figure 3. the situation is better and with the ½ norm an assessment of robust performance involves computing the structured singular value.´ µ ´µ Figure 3.12).21 is still an unsolved problem. If we partition È to be compatible with the controller Ã .112) Æ has been partitioned to be compatible with ¡. Usually. resulting in an upper LFT (see Appendix A. that is Æ½½ has dimensions compatible with ¡.102) applies. For analysis (with a given controller). we use ¡ to close the upper loop around Æ (see Figure 3. and Æ Ð ´È Ãµ È½½ · È½¾ Ã ´Á È¾¾ Ã µ ½È¾½ (3.112) Remark 1 Controller synthesis based on Figure 3.22).
For the nominal case with no uncertainty we have Fu(N,Delta) = Fu(N,0) = N22, so N22 is the nominal transfer function from w to z.

Remark 3 Note that P and N here also include information about how the uncertainty affects the system, so they are not the same P and N as used earlier, for example in (3.102). Strictly speaking, we should have used another symbol for N and P in (3.111), but for notational simplicity we did not. Actually, the parts P22 and N22 of P and N in (3.111) (with uncertainty) are equal to the P and N in (3.102) (without uncertainty), as will be discussed in more detail in Chapters 7 and 8.

Remark 4 The fact that almost any control problem with uncertainty can be represented by Figure 3.21 may seem surprising, so some explanation is in order. First represent each source of uncertainty by a perturbation block, Delta_i, which is normalized such that ||Delta_i|| <= 1. These perturbations may result from parametric uncertainty, neglected dynamics, etc. Then "pull out" each of these blocks from the system so that an input and an output can be associated with each Delta_i, as shown in Figure 3.23(a). Finally, collect these perturbation blocks into a large block-diagonal matrix Delta having perturbation inputs and outputs, as shown in Figure 3.23(b). In Chapter 8 we discuss in detail how to obtain N and Delta.

3.9 Additional exercises

Most of these exercises are based on material presented in Appendix A. The exercises illustrate material which the reader should know before reading the subsequent chapters.

Exercise 3.15 Consider the performance specification ||wP S||_inf < 1. Suggest a rational transfer function weight wP(s) and sketch it as a function of frequency for the following two cases:
1. We desire no steady-state offset, a bandwidth better than 1 rad/s and a resonance peak (worst amplification caused by feedback) lower than 1.5.
2. We desire less than 1% steady-state offset, less than 10% error up to frequency 3 rad/s, a bandwidth better than 10 rad/s, and a resonance peak lower than 2.
Hint: See (2.72) and (2.73).

Exercise 3.16 By ||M||_inf one can mean either a spatial or temporal norm. Explain the difference between the two and illustrate it by computing the appropriate infinity norm for a constant matrix M1 and for the transfer function M2(s) = 3(s - 1)/((s + 1)(s + 2)).

Exercise 3.17 What is the relationship between the RGA-matrix and uncertainty in the individual elements? Illustrate this for perturbations in the 1,1-element of the matrix

  A = [ 10  9
         9  8 ]   (3.113)
Exercise 3.18 Assume that A is non-singular. (i) Formulate a condition in terms of the maximum singular value of E for the matrix A + E to remain non-singular. Apply this to A in (3.113) and (ii) find an E of minimum magnitude which makes A + E singular.

Exercise 3.19 Compute ||A||_i1, sigma_max(A) = ||A||_i2, ||A||_iinf, ||A||_F, ||A||_max and ||A||_sum for the following matrices and tabulate your results:

  A1 = I;  A2 = [1 0; 0 0];  A3 = [1 1; 1 1];  A4 = [1 1; 0 0];  A5 = [1 0; 1 0]

Show using the above matrices that the bounds relating sigma_max(A) to ||A||_F, ||A||_max, ||A||_i1, ||A||_iinf and ||A||_sum (see Appendix A) are tight; that is, we may have equality for 2x2 matrices (m = 2).

Exercise 3.20 Find example matrices to illustrate that the above bounds are also tight when A is a square m x m matrix with m > 2.

Exercise 3.21 Do the extreme singular values bound the magnitudes of the elements of a matrix? That is, is sigma_max(A) greater than the largest element (in magnitude), and is sigma_min(A) smaller than the smallest element? For a non-singular matrix, how is sigma_min(A) related to the largest element in A^(-1)?

Exercise 3.22 Consider a lower triangular m x m matrix A with a_ii = -1, a_ij = 1 for all i > j, and a_ij = 0 for all i < j. (a) What is det A? (b) What are the eigenvalues of A? (c) What is the RGA of A? (d) Let m = 4 and find an E with the smallest value of sigma_max(E) such that A + E is singular.

Exercise 3.23 Find two matrices A and B such that rho(A + B) > rho(A) + rho(B), which proves that the spectral radius does not satisfy the triangle inequality and is thus not a norm.

Exercise 3.24 Write T = GK(I + GK)^(-1) as an LFT of K, i.e. find P such that T = Fl(P,K).

Exercise 3.25 Write K as an LFT of T = GK(I + GK)^(-1), i.e. find J such that K = Fl(J,T).

Exercise 3.26 State-space descriptions may be represented as LFTs. To demonstrate this, find H such that Fl(H, (1/s) I) = C(sI - A)^(-1)B + D.

Exercise 3.27 Show that the set of all stabilizing controllers in (4.91) can be written as K = Fl(J,Q) and find J.
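Some of the norm bounds in Exercise 3.19 are easy to check numerically. A small Python sketch (the 2x2 matrix is chosen here for illustration; sigma_max is computed from the closed-form eigenvalues of A'A):

```python
import math

# Check sigma_max(A) <= ||A||_F <= sqrt(m)*sigma_max(A) and
# ||A||_max <= sigma_max(A) <= m*||A||_max on a 2x2 example.
A = [[1.0, 1.0], [1.0, 1.0]]
m = 2

fro = math.sqrt(sum(a * a for row in A for a in row))   # Frobenius norm
amax = max(abs(a) for row in A for a in row)            # largest element

# Largest singular value from eigenvalues of A^T A (2x2 closed form)
g11 = A[0][0] ** 2 + A[1][0] ** 2
g22 = A[0][1] ** 2 + A[1][1] ** 2
g12 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
tr, det = g11 + g22, g11 * g22 - g12 * g12
sigma_max = math.sqrt((tr + math.sqrt(tr * tr - 4 * det)) / 2)

print(sigma_max, fro, amax)  # 2.0 2.0 1.0
assert sigma_max <= fro <= math.sqrt(m) * sigma_max
assert amax <= sigma_max <= m * amax
```

Note that this particular rank-one matrix attains equality in sigma_max(A) = ||A||_F and in sigma_max(A) = m ||A||_max, illustrating tightness.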
Exercise 3.28 In (3.11) we stated that the sensitivity of a perturbed plant, S' = (I + G'K)^(-1), is related to that of the nominal plant, S = (I + GK)^(-1), by

  S' = S (I + EO T)^(-1),  where EO = (G' - G) G^(-1)

This exercise deals with how the above result may be derived in a systematic (though cumbersome) manner using LFTs (see also Skogestad and Morari (1988a)).

a) First find F such that S' = (I + G'K)^(-1) = Fl(F,K), and find J such that K = Fl(J,T) (see Exercise 3.25).
b) Combine these LFTs to find S' = Fl(N,T). What is N in terms of G and G'? Note that since J11 = 0 we have from (A.156)

  N = [ F11        F12 J12
        J21 F21    J22 + J21 F22 J12 ]

c) Evaluate S' = Fl(N,T) and show that

  S' = I - G'G^(-1) T (I - (I - G'G^(-1)) T)^(-1)

d) Finally, show that this may be rewritten as S' = S(I + EO T)^(-1).

3.10 Conclusion

The main purpose of this chapter has been to give an overview of methods for analysis and design of multivariable control systems. In terms of analysis, we have shown how to evaluate MIMO transfer functions and how to use the singular value decomposition of the frequency-dependent plant transfer function matrix to provide insight into multivariable directionality. Other useful tools for analyzing directionality and interactions are the condition number and the RGA. Closed-loop performance may be analyzed in the frequency domain by evaluating the maximum singular value of the sensitivity function as a function of frequency. Multivariable RHP-zeros impose fundamental limitations on closed-loop performance, but for MIMO systems we can often direct the undesired effect of a RHP-zero to a subset of the outputs. MIMO systems are often more sensitive to uncertainty than SISO systems, and we demonstrated in two examples the possible sensitivity to input gain uncertainty.

In terms of controller design, we discussed some simple approaches such as decoupling and decentralized control. We also introduced a general control configuration in terms of the generalized plant P, which can be used as a basis for synthesizing multivariable controllers using a number of methods, including LQG, H2, H-infinity and mu-optimal control. These methods are discussed in much more detail in Chapters 8 and 9. In this chapter we have only discussed the H-infinity weighted sensitivity method.
g.1 System descriptions The most important property of a linear system (operator) is that it satisﬁes the superposition principle: Let ´Ùµ be a linear operator. let Ù ½ and Ù¾ be two independent variables (e. all of which are equivalent for systems that can be described by linear ordinary differential equations with constant coefﬁcients and which do not involve differentiation of the inputs (independent variables).1) We use in this book various representations of timeinvariant linear systems.1 Statespace representation Consider a system with Ñ inputs (vector Ù) and Ð outputs (vector Ý ) which has an internal description of Ò states (vector Ü). (1996). 4. The most important of these representations are discussed in this section.Ä Å ÆÌË Ç ÄÁÆ Ê Ë ËÌ Å ÌÀ ÇÊ The main objective of this chapter is to summarize important results from linear system theory The treatment is thorough. Linear statespace models may then be derived from the linearization of such models.1. 4. input signals). and let « ½ and «¾ be two real scalars. such as Kailath (1980) or Zhou et al. A natural way to represent many physical systems is by nonlinear statespace models of the form Ü ´Ü Ùµ Ý ´Ü Ùµ (4. In terms of deviation . then ´«½ Ù½ · «¾ Ù¾ µ «½ ´Ù½ µ · «¾ ´Ù¾ µ (4. for more details and background information if these results are new to them. but readers are encouraged to consult other books.2) where Ü Ü Ø and and are nonlinear functions.
variables (where x represents a deviation from some nominal value or trajectory, etc.) we have

  dx/dt(t) = A x(t) + B u(t)   (4.3)
  y(t) = C x(t) + D u(t)   (4.4)

where A, B, C and D are real matrices. If (4.3) is derived by linearizing (4.2), then A and B are the partial derivatives of f with respect to x and u (see Section 1.5 for an example of such a derivation). A is sometimes called the state matrix. These equations provide a convenient means of describing the dynamic behaviour of proper, rational, linear systems, and give rise to the shorthand notation

  G =s [ A  B
         C  D ]   (4.5)

which is frequently used to describe a state-space model of a system G.

Note that the representation in (4.3)-(4.4) is not a unique description of the input-output behaviour of a linear system. First, there exist realizations with the same input-output behaviour, but with additional unobservable and/or uncontrollable states (modes). Second, even for a minimal realization (a realization with the fewest number of states and consequently no unobservable or uncontrollable modes) there are an infinite number of possibilities. To see this, let S be an invertible constant matrix and introduce the new states xt = Sx, i.e. x = S^(-1)xt. Then an equivalent state-space realization (i.e. one with the same input-output behaviour) in terms of these new states ("coordinates") is

  At = S A S^(-1),  Bt = S B,  Ct = C S^(-1),  Dt = D

The most common realizations are given by a few canonical forms, such as the Jordan (diagonalized) canonical form, the observability canonical form, etc.

Given the linear dynamical system in (4.3) with an initial state condition x(t0) and an input u(t), the dynamical system response x(t) for t >= t0 can be determined from

  x(t) = e^(A(t - t0)) x(t0) + Integral[t0..t] e^(A(t - tau)) B u(tau) dtau   (4.6)

where the matrix exponential is

  e^(At) = I + Sum[k>=1] (At)^k / k! = Sum[i=1..n] t_i e^(lambda_i t) q_i^H   (4.7)

The latter dyadic expansion, involving the right (t_i) and left (q_i) eigenvectors of A, applies for cases with distinct eigenvalues lambda_i(A). We will refer to the term t_i e^(lambda_i t) q_i^H as the mode associated with the eigenvalue lambda_i(A). For a diagonalized realization (where we select S such that S A S^(-1) is a diagonal matrix) the modes are decoupled.

Remark. For a system with disturbances d and measurement noise n the state-space model is written as dx/dt = Ax + Bu + E d, y = Cx + Du + F d + n. Note that the symbol n is used to represent both the noise signal and the number of states.

Remark. The more general descriptor representation

  E dx/dt(t) = A x(t) + B u(t)   (4.9)

is used for cases where the matrix E is singular; it then includes implicit algebraic relations between the states. If E is non-singular, then (4.9) is a special case of (4.3).

4.1.2 Impulse response representation

The impulse response matrix is

  g(t) = 0 for t < 0;  g(t) = C e^(At) B + D delta(t) for t >= 0   (4.11)

where delta(t) is the unit impulse (delta) function. The ij'th element of the impulse response matrix, g_ij(t), represents the response y_i(t) to an impulse u_j(t) = delta(t) for a system with a zero initial state. With initial state x(0) = 0, the dynamic response to an arbitrary input u(t) (which is zero for t < 0) may from (4.6) be written as

  y(t) = g(t) * u(t) = Integral[0..t] g(t - tau) u(tau) dtau   (4.12)

where * denotes the convolution operator.

4.1.3 Transfer function representation - Laplace transforms

The transfer function representation is unique and is very useful for directly obtaining insight into the properties of a system. It is defined as the Laplace transform of the impulse response

  G(s) = Integral[0..inf] g(t) e^(-st) dt   (4.13)

Alternatively, we may start from the state-space description. With the assumption of a zero initial state, x(t=0) = 0, the Laplace transforms of (4.3) and (4.4) become (1)

  s x(s) = A x(s) + B u(s)  =>  x(s) = (sI - A)^(-1) B u(s)   (4.14)

(1) We make the usual abuse of notation and let x(s) denote the Laplace transform of x(t).
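The frequency response follows directly from the resolvent formula above. For a one-state example (parameters chosen here for illustration), G(s) = C(sI - A)^(-1)B + D can be evaluated at s = j*omega with plain complex arithmetic:

```python
# One-state example: A = -2, B = C = 1, D = 0, so G(s) = 1/(s + 2).
# For a single state, (sI - A)^(-1) is simply 1/(s - a).
a, b, c, d = -2.0, 1.0, 1.0, 0.0

def G(s):
    return c * b / (s - a) + d

print(abs(G(0j)))   # steady-state gain |G(0)| = 1/2
print(abs(G(2j)))   # |G(j2)| = 1/sqrt(8), i.e. gain at omega = 2 rad/s
```

For matrix A the scalar reciprocal becomes a complex linear solve, but the structure of the computation is the same.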
and

  y(s) = C x(s) + D u(s)  =>  y(s) = (C(sI - A)^(-1)B + D) u(s) = G(s) u(s)

where G(s) = C(sI - A)^(-1)B + D is the transfer function matrix. Equivalently,

  G(s) = (1/det(sI - A)) C adj(sI - A) B + D   (4.15)

where det(sI - A) = Prod[i] (s - lambda_i(A)) is the pole polynomial. For cases where the eigenvalues of A are distinct, we may use the dyadic expansion of A given in (A.22) and derive

  G(s) = Sum[i=1..n] (C t_i q_i^H B)/(s - lambda_i) + D   (4.16)

When disturbances are treated separately, the corresponding disturbance transfer function is

  Gd(s) = C(sI - A)^(-1) E + F   (4.17)

Note that any system written in the state-space form of (4.3) and (4.4) has a transfer function, but the opposite is not true. For example, time delays and improper systems can be represented by Laplace transforms, but do not have a state-space representation. On the other hand, the state-space representation yields an internal description of the system which may be useful if the model is derived from physical principles. It is also more suitable for numerical calculations.

4.1.4 Frequency response

An important advantage of transfer functions is that the frequency response (Fourier transform) is directly obtained from the Laplace transform by setting s = j*omega in G(s). For more details on the frequency response, the reader is referred to Sections 2.1 and 3.3.

4.1.5 Coprime factorization

Another useful way of representing systems is the coprime factorization, which may be used both in state-space and transfer function form. In the latter case a right coprime factorization of G is

  G(s) = Nr(s) Mr^(-1)(s)   (4.18)

where Nr(s) and Mr(s) are stable coprime transfer functions. The stability implies that Nr(s) should contain all the RHP-zeros of G(s), and Mr(s) should contain as RHP-zeros all the RHP-poles of G(s). The coprimeness implies that there should be no common RHP-zeros in Nr and Mr which result in pole-zero cancellations when forming Nr Mr^(-1). Mathematically, coprimeness means that there exist stable Ur(s) and Vr(s) such that the following Bezout identity is satisfied

  Ur Nr + Vr Mr = I   (4.19)

Similarly, a left coprime factorization of G is

  G(s) = Ml^(-1)(s) Nl(s)   (4.20)

Here Nl and Ml are stable and coprime; that is, there exist stable Ul(s) and Vl(s) such that the following Bezout identity is satisfied

  Nl Ul + Ml Vl = I   (4.21)

For a scalar system, the left and right coprime factorizations are identical, G = N M^(-1) = M^(-1) N. Two stable scalar transfer functions, N(s) and M(s), are coprime if and only if they have no common RHP-zeros. In this case we can always find stable U and V such that NU + MV = 1.

Example 4.1 Consider the scalar system

  G(s) = (s - 1)(s + 2) / ((s - 3)(s + 4))   (4.22)

To obtain a coprime factorization, we first make all the RHP-poles of G zeros of M, and all the RHP-zeros of G zeros of N. We then allocate the poles of N and M so that N and M are proper and the identity G = N M^(-1) holds. Usually, we select N and M to have the same poles as each other and the same order as G(s). We then have that

  N(s) = (s - 1)(s + 2)/(s^2 + a1 s + a2),  M(s) = (s - 3)(s + 4)/(s^2 + a1 s + a2)   (4.23)

is a coprime factorization of (4.22) for any a1, a2 > 0 (any stable denominator of degree two).

From the above example, we see that the coprime factorization is not unique. Now introduce the operator M* defined as M*(s) = M^T(-s) (which for s = j*omega is the same as the complex conjugate transpose M^H). Then G(s) = Nr(s) Mr^(-1)(s) is called a normalized right coprime factorization if

  Mr* Mr + Nr* Nr = I   (4.24)

In this case [Nr; Mr] satisfies ([Nr; Mr])* [Nr; Mr] = I and is called an inner transfer function. The normalized left coprime factorization G(s) = Ml^(-1)(s) Nl(s) is defined similarly, requiring that

  Ml Ml* + Nl Nl* = I   (4.25)

In this case [Ml  Nl] is co-inner, which means [Ml  Nl] ([Ml  Nl])* = I. The normalized coprime factorizations are unique to within a right (left) multiplication by a unitary matrix.

Exercise 4.1 We want to find the normalized coprime factorization for the scalar system in (4.22). Let N and M be as given in (4.23), and substitute them into the normalization requirement (4.24). Show that, after some algebra and comparing of terms, one obtains the coefficients a1 and a2.

If G has a minimal state-space realization G =s [A B; C D], then a minimal state-space realization of a normalized left coprime factorization is given (Vidyasagar, 1988) by

  [ Nl(s)  Ml(s) ] =s [ A + HC | B + HD   H
                        R^(-1/2)C | R^(-1/2)D   R^(-1/2) ]   (4.26)

where

  H = -(B D^T + Z C^T) R^(-1),  R = I + D D^T

and the matrix Z is the unique positive definite solution to the algebraic Riccati equation

  (A - B S^(-1) D^T C) Z + Z (A - B S^(-1) D^T C)^T - Z C^T R^(-1) C Z + B S^(-1) B^T = 0

where S = I + D^T D. Notice that the formulas simplify considerably for a strictly proper plant, i.e. when D = 0.

To derive normalized coprime factorizations by hand, as in the above exercise, is in general difficult. Numerically, however, one can easily find a state-space realization, for example using the MATLAB file in Table 4.1.

Exercise 4.2 Verify numerically (e.g. using the MATLAB file in Table 4.1 or the mu-toolbox command sncfbal) that the normalized coprime factors of G(s) in (4.22) are as given in Exercise 4.1.
Transfer functions are a good way of representing systems because they give more immediate insight into a system's behaviour. However, for numerical calculations a state-space realization is usually desired.

Table 4.1: MATLAB commands to generate a normalized coprime factorization

  % Uses the Mu toolbox
  % Find normalized coprime factors of system [a,b,c,d] using (4.26)
  S = eye(size(d'*d)) + d'*d;
  R = eye(size(d*d')) + d*d';
  A1 = a - b*inv(S)*d'*c;
  R1 = c'*inv(R)*c;
  Q1 = b*inv(S)*b';
  [z1,z2,fail,reig_min] = ric_schr([A1' -R1; -Q1 -A1]);
  Z = z2/z1;
  % Alternative: aresolv in Robust Control toolbox:
  % [z1,z2,eig,zerr,zwellposed,Z] = aresolv(A1',Q1,R1);
  H = -(b*d' + Z*c')*inv(R);
  A = a + H*c;
  C = inv(sqrtm(R))*c;
  Bn = b + H*d;  Dn = inv(sqrtm(R))*d;
  Bm = H;        Dm = inv(sqrtm(R));
  N = pck(A,Bn,C,Dn);
  M = pck(A,Bm,C,Dm);

4.1.6 More on state-space realizations

Inverse system. In some cases we may want to find a state-space description of the inverse of a system. For a square G(s) we have

  G^(-1) =s [ A - B D^(-1) C   B D^(-1)
              -D^(-1) C        D^(-1) ]   (4.27)

where D is assumed to be non-singular. For a non-square G(s) in which D has full row (or column) rank, a right (or left) inverse of G(s) can be found by replacing D^(-1) by the pseudo-inverse of D. For a strictly proper system with D = 0, one may obtain an approximate inverse by including a small additional feedthrough term D, preferably chosen on physical grounds. One should be careful, however, to select the signs of the terms in D such that one does not introduce RHP-zeros in G(s), because this will make G(s)^(-1) unstable.

Improper systems. Improper transfer functions, where the order of the s-polynomial in the numerator exceeds that of the denominator, cannot be represented in standard state-space form.
To approximate improper systems by state-space models, we can include some high-frequency dynamics which we know from physical considerations will have little significance.

Realization of SISO transfer functions. One way of obtaining a state-space realization from a SISO transfer function is given next. Consider a strictly proper transfer function (D = 0) of the form

  G(s) = (beta_{n-1} s^(n-1) + ... + beta_1 s + beta_0) / (s^n + a_{n-1} s^(n-1) + ... + a_1 s + a_0)   (4.28)

Then, since multiplication by s corresponds to differentiation in the time domain, (4.28) and the relationship y(s) = G(s) u(s) correspond to the following differential equation

  y^(n)(t) + a_{n-1} y^(n-1)(t) + ... + a_1 y'(t) + a_0 y(t) = beta_{n-1} u^(n-1)(t) + ... + beta_1 u'(t) + beta_0 u(t)

where y^(n-1)(t) and u^(n-1)(t) represent (n-1)'th order derivatives. We can further write this as

  y^(n) = (-a_{n-1} y^(n-1) + beta_{n-1} u^(n-1)) + ... + (-a_1 y' + beta_1 u') + (-a_0 y + beta_0 u)

Introducing new variables x1, x2, ..., xn, this gives the following state-space equations

  dx1/dt = -a_{n-1} x1 + x2 + beta_{n-1} u
  dx2/dt = -a_{n-2} x1 + x3 + beta_{n-2} u
  ...
  dx_{n-1}/dt = -a_1 x1 + xn + beta_1 u
  dxn/dt = -a_0 x1 + beta_0 u
  y = x1

corresponding to the realization

  A = [ -a_{n-1}  1  0  ...  0
        -a_{n-2}  0  1  ...  0
         ...
        -a_1      0  0  ...  1
        -a_0      0  0  ...  0 ],   B = [ beta_{n-1}; beta_{n-2}; ...; beta_1; beta_0 ],   C = [ 1  0  ...  0 ]   (4.29)

This is called the observer canonical form. Two advantages of this realization are that one can obtain the elements of the matrices directly from the transfer function, and that the output y is simply equal to the first state. Notice that if the transfer function is not strictly proper, then we must first bring out the constant term, i.e. write G(s) = G1(s) + D, and then find the realization of G1(s) using (4.29).

Example 4.2 To obtain the state-space realization, in observer canonical form, of the SISO transfer function G(s) = (s - a)/(s + a), we first bring out a constant term by division to get

  G(s) = (s - a)/(s + a) = -2a/(s + a) + 1

Thus D = 1. For the term -2a/(s + a) we get from (4.28) that beta_0 = -2a and a_0 = a, and (4.29) yields A = -a, B = -2a and C = 1.
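The observer canonical form (4.29) is easy to build mechanically from the coefficients. A Python sketch for n = 2 (illustrative coefficients), which also verifies that C(sI - A)^(-1)B reproduces the transfer function:

```python
# Observer canonical form (4.29) for n = 2: coefficients are illustrative.
a1, a0 = 3.0, 2.0   # denominator s^2 + a1*s + a0
b1, b0 = 1.0, 5.0   # numerator  b1*s + b0

A = [[-a1, 1.0], [-a0, 0.0]]
B = [b1, b0]
C = [1.0, 0.0]

def G_ss(s):
    # Solve (sI - A)x = B by Cramer's rule (2x2); y = C*x = x1 since C = [1 0]
    m11, m12 = s - A[0][0], -A[0][1]
    m21, m22 = -A[1][0], s - A[1][1]
    det = m11 * m22 - m12 * m21
    x1 = (B[0] * m22 - m12 * B[1]) / det
    return C[0] * x1

def G_tf(s):
    return (b1 * s + b0) / (s * s + a1 * s + a0)

for s in (1j, 2 + 0j, 0.5 + 0.5j):
    assert abs(G_ss(s) - G_tf(s)) < 1e-12
print("observer canonical form matches the transfer function")
```

Note how both advantages stated in the text appear directly: the matrix entries are the transfer function coefficients, and y equals the first state.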
Example 4.3 Consider an ideal PID controller

  K(s) = Kc (1 + 1/(tauI s) + tauD s) = Kc (tauI tauD s^2 + tauI s + 1)/(tauI s)   (4.30)

Since this involves differentiation of the input, it is an improper transfer function and cannot be written in state-space form. A proper PID controller may be obtained by letting the derivative action be effective over a limited frequency range. For example

  K(s) = Kc (1 + 1/(tauI s) + tauD s/(1 + eps tauD s))   (4.31)

where eps is typically 0.1 or less. This can now be realized in state-space form in an infinite number of ways. Four common forms are the diagonalized (Jordan canonical) form (4.33), the observability canonical form (4.34)-(4.35), the controllability canonical form, and the observer canonical form in (4.29) and (4.36); in each case the elements of A, B and C follow from Kc, tauI, tauD and eps. In all cases, the D-matrix, which represents the controller gain at high frequencies (s -> infinity), is a scalar given by

  D = Kc (1 + eps)/eps   (4.32)

On comparing these four realizations with the transfer function model in (4.31), it is clear that the transfer function offers more immediate insight; one can at least see that it is a PID controller.

Time delay. A time delay (or dead time) is an infinite-dimensional system and not representable as a rational transfer function. For a state-space realization it must therefore be approximated. An n'th order approximation of a time delay theta may be obtained by putting n first-order Pade approximations in series

  e^(-theta s) ~ (1 - (theta/2n) s)^n / (1 + (theta/2n) s)^n   (4.37)
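The quality of the series approximation (4.37) is easy to probe numerically. A Python sketch (theta, n and the test frequency are chosen here for illustration):

```python
import cmath

# n'th order approximation (4.37) of a delay theta:
# exp(-theta*s) ~ ((1 - theta*s/(2n)) / (1 + theta*s/(2n)))**n
def pade(s, theta, n):
    z = theta * s / (2 * n)
    return ((1 - z) / (1 + z)) ** n

theta, n, w = 1.0, 4, 1.0
approx = pade(1j * w, theta, n)
exact = cmath.exp(-1j * w * theta)

print(abs(approx))          # all-pass: magnitude is exactly 1 on the jw-axis
print(abs(approx - exact))  # small phase error at low frequency
```

Like the true delay, the approximation is all-pass (|.| = 1 at every frequency); the error is purely a phase error, which grows with omega*theta and shrinks as n increases.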
From (4. Õ À ÕÀ . Two of these are . the ’the output pole vector ÝÔ (4. Thus. 4.1 State controllability. The dynamical system Ü µ. is said to be state controllable if. Then the system ´ µ is state controllable if and only if ÙÔ ¼ In words. for any initial state equivalently the pair ´ Ü´¼µ Ü¼ . Otherwise the system is said to be state uncontrollable. if all its modes are controllable. any time Ø½ ¼ and any ﬁnal state Ü ½ . Thus we have: Let À Õ the ’th input Õ the corresponding left eigenvector. we have from (4. a system is state controllable if and only if all its input pole vectors are nonzero.2 State controllability and state observability For the case when has distinct eigenvalues. Remark. The system is (state) controllable be the ’th eigenvalue of .½¾¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Alternative (and possibly better) approximations are in use. ´×µ Ò Ø ÕÀ · ½ × ÙÔ Ò ÝÔ ÙÔ · ½ × (4. or Deﬁnition 4. there exists an input Ù´Øµ such that Ü´Ø½ µ Ü½ . but let us start by deﬁning state controllability. but the above approximation is often preferred because of its simplicity.38) we have that the ’th mode is uncontrollable if and only if the ’th input pole vector is zero – otherwise the mode is controllable. A mode is called uncontrollable if none of the inputs can excite the mode. and ÙÔ pole vector. Ü · Ù.40) indicates how much the ’th mode is observed in the outputs. This is explained in more detail below. There exists many other tests for state controllability. 1998) ¸ ÕÀ ¸ Ø (4.39) is an indication of how much the ’th mode is excited (and thus may be “controlled”) by the inputs. the pole vectors may be used for checking the state controllability and observability of a system.38) From this we see that the ’the input pole vector (Havre. Similarly.16) the following dyadic expansion of the transfer function matrix from inputs to outputs.
¾ and ¾ . From (4. The two input pole vectors are are Õ½ ÝÔ½ ÀÕ ½ ¼ ÝÔ¾ ÀÕ ¾ ½ and since ÝÔ½ is zero we have that the ﬁrst mode (eigenvalue) is not state controllable. For a stable system ( is stable) Ï ´ µ. the pair ´ µ is state controllable if and we only need to consider È only if the controllability Gramian ¸ ½ È ¸ ½ ¼) and thus has full rank Ò.Ä Å ÆÌË Ç ÄÁÆ 1. that is. Ï ´Ø µ ¸ Ø ¼ Therefore. the system ´ µ is state controllable if and only if the Gramian matrix Ï ´Øµ has full rank (and thus is positive deﬁnite) for any Ø ¼.44) ¼ Ì Ì (4. È may also be obtained as the solution is positive deﬁnite (È to the Lyapunov equation Ì È ·È Ì (4. The controllability matrix has rank ½ since it has two linearly dependent rows: ½ ½ 3.4 Consider a scalar system with two states and the following statespace realization ¾ ¾ ¼ ´×µ ½ ½ ´×Á µ ½ ½ ¼ ½ ×· ¼ The transfer function (minimal realization) is which has only one state. 2. 2. The eigenvalues of are ½ ¼ ¼ ¼ ¼ Ì and Õ¾ ¼ ½ Ì .6) one can verify that a particular input which achieves Ü´Ø½ µ Ü½ is (4. In fact. The system ´ Ê Ë ËÌ Å ÌÀ ÇÊ ½¾¿ µ is state controllable if and only if the controllability matrix ¾ ¡ ¡ ¡ Ò ½ ¸ (4.42) Ù´Øµ where Ï Ì Ì ´Ø½ Øµ Ï ´Ø½ µ ½ ´ Ø½ Ü¼ Ü½ µ Ì Ì ´Øµ is the Gramian matrix at time Ø. Here Ò is the number of states. This is veriﬁed by considering state controllability.43) Example 4. the ﬁrst state corresponding to the eigenvalue at 2 is not controllable.41) has rank Ò (full row rank). The controllability Gramian is also singular È ¼ ½¾ ¼ ½¾ ¼ ½¾ ¼ ½¾ . and the corresponding left eigenvectors 1.
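The checks of Example 4.4 can be reproduced with a few lines of Python (a sketch using the matrices from the example):

```python
# Example 4.4: A = [[-2,-2],[0,-4]], B = [1,1]. The controllability matrix
# [B AB] is singular and the input pole vector for the mode at -2 is zero.
A = [[-2.0, -2.0], [0.0, -4.0]]
B = [1.0, 1.0]

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]
ctrb_det = B[0] * AB[1] - B[1] * AB[0]   # det [B AB]
print(AB)        # [-4.0, -4.0]
print(ctrb_det)  # 0.0 -> rank 1, so not state controllable

q1 = [1.0, -1.0]  # left eigenvector of A for lambda1 = -2 (unnormalized)
q2 = [0.0, 1.0]   # left eigenvector for lambda2 = -4
up1 = B[0] * q1[0] + B[1] * q1[1]   # input pole vector for mode 1
up2 = B[0] * q2[0] + B[1] * q2[1]
print(up1, up2)  # 0.0 1.0 -> the first mode is uncontrollable
```

Both tests agree: the rank-deficient controllability matrix and the zero input pole vector identify the same uncontrollable mode.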
In words, if a system is state controllable we can, by use of its inputs u, bring it from any initial state to any final state within any given finite time. State controllability would therefore seem to be an important property for practical control, but it rarely is, for the following four reasons:

1. It says nothing about how the states behave at earlier and later times, e.g. it does not imply that one can hold (as t → ∞) the states at a given value.
2. The required inputs may be very large with sudden changes.
3. Some of the states may be of no practical importance.
4. The definition is an existence result which provides no degree of controllability (see Hankel singular values for this).

The first two objections are illustrated in the following example.

Example 4.5 State controllability of tanks in series. Consider a system with one input and four states arising from four first-order systems in series:

   G(s) = 1/(τs + 1)⁴

A physical example could be four identical tanks (e.g. bath tubs) in series where water flows from one tank to the next. Energy balances, assuming no heat loss, yield

   T₄ = 1/(τs + 1) T₃,   T₃ = 1/(τs + 1) T₂,   T₂ = 1/(τs + 1) T₁,   T₁ = 1/(τs + 1) T₀

where τ = 100 s is the residence time in each tank, T₀ is the inlet temperature, u = T₀ is the input, and the states x = [T₁ T₂ T₃ T₄]ᵀ are the four tank temperatures. A state-space realization is

   A = [ −0.01  0  0  0 ;  0.01  −0.01  0  0 ;  0  0.01  −0.01  0 ;  0  0  0.01  −0.01 ],   B = [ 0.01 ;  0 ;  0 ;  0 ]    (4.45)

The controllability matrix in (4.41) has full rank, so the system is state controllable and it must be possible to achieve at any given time any desired temperature in each of the four tanks simply by adjusting the inlet temperature. This sounds almost too good to be true, so let us consider a specific case. Assume that the system is initially at steady-state (all temperatures are zero), and that we want to achieve at t = 400 s the following temperatures: T₁(400) = 1, T₂(400) = −1, T₃(400) = 1 and T₄(400) = −1. The change in inlet temperature, T₀(t), required to achieve this was computed from (4.42) and is shown as a function of time in Figure 4.1(a). The corresponding tank temperatures are shown in Figure 4.1(b).

[Figure 4.1: State controllability of four first-order systems in series. (a) Input trajectory u = T₀(t) required to give the desired states at t = 400 s; (b) response of the states (tank temperatures) T₁, T₂, T₃ and T₄.]

Two things are worth noting:

1. The required change in inlet temperature is more than 100 times larger than the desired temperature changes in the tanks, and it also varies widely with time.
2. Although the states (tank temperatures Tᵢ) are indeed at their desired values of ±1 at t = 400 s, it is not possible to hold them at these values, since at steady-state all the temperatures must be equal (all states approach 0 in this case, since u = T₀ is reset to 0 at t = 400 s).

It is quite easy to explain the shape of the input T₀(t): the fourth tank is furthest away, and since we want its temperature to decrease (T₄(400) = −1) the inlet temperature is initially decreased. Then, since T₃(400) = 1 is positive, T₀ is increased; since T₂(400) = −1, it is subsequently decreased; and finally, it is increased to more than 100 to achieve T₁(400) = 1.

From the above example, we see clearly that the property of state controllability may not imply that the system is "controllable" in a practical sense. This is because state controllability is concerned only with the value of the states at discrete values of time (target hitting), while in most cases we want the outputs to remain close to some desired value (or trajectory) for all values of time. For this particular example we also know that it is very difficult to control the four temperatures independently. In Chapter 5, we introduce a more practical concept of controllability which we call "input–output controllability".

So now we know that state controllability does not imply that the system is controllable from a practical point of view. But what about the reverse: if we do not have state controllability, is this an indication that the system is not controllable in a practical sense? In other words, should we be concerned if a system is not state controllable? In many cases the answer is "no", since we may not be concerned with the behaviour of the uncontrollable states, which may be outside our system boundary or of no practical importance. If we are indeed concerned about these states, then they should be included in the output set y. State uncontrollability will then appear as a rank deficiency in the transfer function matrix G(s) (see functional controllability).
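The state controllability claim of Example 4.5 is easy to verify numerically; a sketch assuming the realization (4.45) with τ = 100 s:

```python
import numpy as np

tau = 100.0
# Four first-order tanks in series, realization (4.45) as recovered (an assumption)
A = np.diag([-1/tau] * 4) + np.diag([1/tau] * 3, k=-1)
B = np.zeros((4, 1))
B[0, 0] = 1/tau

# Controllability matrix [B, AB, A^2 B, A^3 B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
rank_ctrb = np.linalg.matrix_rank(ctrb)
# Full rank (4): state controllable, even though the example shows the
# system is far from "controllable" in any practical sense.
```

The full rank here is exactly why the definition promises target hitting at t = 400 s, while saying nothing about the wild input in Figure 4.1(a).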
So is the issue of state controllability of any value at all? Yes, because it tells us whether we have included some states in our model which we have no means of affecting. It also tells us when we can save on computer time by deleting uncontrollable states which have no effect on the output for a zero initial state. In summary, state controllability is a system-theoretical concept which is important when it comes to computations and realizations. However, its name is somewhat misleading, and most of the above discussion might have been avoided if only Kalman, who originally defined state controllability, had used a different terminology. For example, better terms might have been "point-wise controllability" or "state affectability", from which it would have been understood that although all the states could be individually affected, we might not be able to control them independently over a period of time.

Definition 4.2 State observability. The dynamical system ẋ = Ax + Bu, y = Cx + Du (or the pair (A, C)) is said to be state observable if, for any time t₁ > 0, the initial state x(0) = x₀ can be determined from the time history of the input u(t) and the output y(t) in the interval [0, t₁]. Otherwise the system, or (A, C), is said to be state unobservable.

Two other tests for state observability are:

1. The system (A, C) is state observable if and only if we have full column rank (rank n) of the observability matrix

   𝒪 ≜ [ C ;  CA ;  ... ;  CA^(n−1) ]    (4.46)

2. From (4.38) we have: let λᵢ be the i'th eigenvalue of A, tᵢ the corresponding eigenvector, A tᵢ = λᵢ tᵢ, and y_p,i ≜ C tᵢ the i'th output pole vector. Then the system (A, C) is state observable if and only if y_p,i ≠ 0 for all i. In words, a system is state observable if and only if all its output pole vectors are non-zero.

For a stable system we may also consider the observability Gramian

   Q ≜ ∫₀^∞ e^(Aᵀτ) Cᵀ C e^(Aτ) dτ    (4.47)

which must have full rank n (and thus be positive definite) for the system to be state observable.
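Both observability tests can be sketched numerically for the tanks of Example 4.5 with y = T₄ (realization (4.45) assumed, as before):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

tau = 100.0
A = np.diag([-1/tau] * 4) + np.diag([1/tau] * 3, k=-1)  # tanks model (4.45)
C = np.array([[0.0, 0.0, 0.0, 1.0]])  # measure only the last tank temperature

# Observability matrix [C; CA; CA^2; CA^3] -- full column rank means observable
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
rank_obsv = np.linalg.matrix_rank(obsv)

# Observability Gramian from A^T Q + Q A = -C^T C; positive definite iff observable
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
```

Both tests confirm that all four tank temperatures are, in theory, observable from the last one.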
Q can also be found as the solution to the following Lyapunov equation

   Aᵀ Q + Q A = −Cᵀ C    (4.48)
Example 4. For example. because we want to avoid the system “blowing up”). see Willems (1970). A mode is hidden if it is not state controllable or observable and thus does not appear in the minimal realization. A statespace realization ´ µ of ´×µ is said to be a minimal realization of ´×µ if has the smallest possible dimension (i. Then.Ä Å ÆÌË Ç ÄÁÆ Ê Ë ËÌ Å ÌÀ ÇÊ ½¾ A system is state observable if we can obtain the value of all individual states by measuring the output Ý ´Øµ over some time period. If we deﬁne Ý Ì (the temperature of the last ¼ ¼ ¼ ½ and we ﬁnd that the observability matrix has full column rank tank). . Ì ´¼µ Ì¼ ´Øµ Ù´Øµ is zero for Ø ¼. observability is important for measurement selection and when designing state estimators (observers). McMillan degree and hidden mode. for example Ì½ ´¼µ based on measurements of Ý ´Øµ Ì ´Øµ over some interval ¼ Ø½ . and we use the following deﬁnition: Deﬁnition 4. but this effect will die out if the uncontrollable states initial state is nonzero. for linear timeinvariant systems these differences have no practical signiﬁcance. although in theory all states are observable from the output. Ç Deﬁnition 4. and the inlet temperature in the tanks. 4. However. obtaining Ü´¼µ may require taking highorder derivatives of Ý ´Øµ which may be numerically poor and sensitive to noise.e.3 Minimal realization. However. and may be viewed as outside the system boundary.g. Since only controllable and observable states contribute to the inputoutput behaviour from Ù to Ý . e. This is illustrated in the following example. Ü´Ø are stable. from a practical point of view. Fortunately. then so all states are observable from Ý .5 (tanks in series) continued. Remark 2 Unobservable states have no effect on the outputs whatsoever. However. Remark 1 Note that uncontrollable states will contribute to the output response Ý ´Øµ if the ¼µ ¼. consider a case where the initial temperatures ½ . 
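The practical meaning of a hidden mode can be sketched numerically: the two-state realization from Example 4.4 (matrices assumed as recovered earlier) is not minimal, yet its input–output behaviour matches the one-state minimal realization G(s) = 1/(s + 4) exactly:

```python
import numpy as np
from scipy.signal import StateSpace, impulse

# Non-minimal two-state realization (Example 4.4, assumed matrices)
sys2 = StateSpace([[-2.0, -2.0], [0.0, -4.0]],
                  [[1.0], [1.0]],
                  [[1.0, 0.0]],
                  [[0.0]])

t = np.linspace(0.0, 3.0, 300)
_, y = impulse(sys2, T=t)

# The minimal realization G(s) = 1/(s+4) has impulse response e^{-4t}:
# the hidden (uncontrollable) mode at -2 never shows up in y.
max_err = np.max(np.abs(y - np.exp(-4.0 * t)))
```

This is the sense in which only controllable and observable states contribute to the input–output behaviour from u to y.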
even if a system is state observable it may not be observable in a practical sense. The smallest dimension is called the McMillan degree of ´×µ.4 A system is (internally) stable if none of its components contain hidden unstable modes and the injection of bounded external signals at any place in the system result in bounded output signals measured anywhere in the system. it follows that a statespace realization is minimal if and only if ´ µ is state controllable and ´ µ is state observable. and thus of no direct interest from a control point of view (unless the unobservable state is unstable. are nonzero (and unknown). the fewest number of states). it is clear that it is numerically very difﬁcult to backcalculate.3 Stability There are a number of ways in which stability may be deﬁned.
but we have no way of observing this from the outputs Ý ´Øµ.2. detectable and hidden unstable modes. A system is (state) detectable if all unstable modes are state observable. Note that if does not correspond to a minimal realization then the poles by this deﬁnition will . we here deﬁne the poles of a system in terms of the eigenvalues of the statespace matrix. characteristic polynomial ´×µ is deﬁned as ´×µ ¸ Ø´×Á µ ½ Thus the poles are the roots of the characteristic equation ´×µ ¸ Ø´×Á µ ¼ (4. 4. More generally.4 Poles For simplicity. In the book we always assume. Similarly. unless otherwise stated.1. A linear system ´ µ is detectable if and only if all output pole vectors Ý Ô associated with the unstable modes are nonzero. but require stability for signals injected or measured at any point of the system. If a system is not detectable.É pole or The (4. see deﬁned as the ﬁnite values × also Theorem 4. that our systems contain no hidden unstable modes. However.5 Stabilizable.6 Poles.2 below. Remark 2 Systems with hidden unstable modes must be avoided both in practice and in computations (since variables will eventually blow up on our computer if not on the factory ﬂoor).3)– ½ Ò of the matrix . The word internally is included in the deﬁnition to stress that we do not only require the response from one particular input to another particular output to be stable. A system with unstabilizable or undetectable modes is said to contain hidden unstable modes. The poles Ô of a system with statespace description (4.½¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ We here deﬁne a signal Ù´Øµ to be “bounded” if there exists a constant such that Ù´Øµ for all Ø. Deﬁnition 4. then there is a state within the system which will eventually grow out of bounds. this may require an unstable controller. A system is (state) stabilizable if all unstable modes are state controllable. see also the remark on page 185. that is.15) and Appendix A. recall (4. 
This is discussed in more detail for feedback systems in Section 4. Remark 1 Any unstable linear system can be stabilized by feedback control (at least in theory) provided the system contains no hidden unstable mode(s).7. Deﬁnition 4. the poles of ´×µ may be somewhat loosely Ô where ´Ôµ has a singularity (“is inﬁnite”). µ is stabilizable if and only if all input pole vectors Ù Ô A linear system ´ associated with the unstable modes are nonzero.4) are the eigenvalues ´ µ Ò ´× Ô µ.49) To see that this deﬁnition is reasonable. any instability in the components must be contained in their inputoutput behaviour. the components must contain no hidden unstable modes.
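The remark that a non-minimal realization returns extra poles is easy to check numerically (Example 4.4 matrices assumed):

```python
import numpy as np

A = np.array([[-2.0, -2.0], [0.0, -4.0]])  # assumed Example 4.4 realization
poles = np.sort(np.linalg.eigvals(A).real)
# poles = [-4, -2]: includes the uncontrollable mode at -2, although the
# minimal realization G(s) = 1/(s+4) has only the pole at -4.

# Stability check in the sense of this section: all poles in the open LHP
is_stable = bool(np.max(poles) < 0)
```

Here both eigenvalues are in the open LHP, so the hidden mode is stable and therefore harmless.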
1 A linear dynamic system Ü poles are in the open lefthalf plane (LHP). the output Ý ´Øµ grows unbounded as Ø ½ ½ ¦ ½ 4. 4. consider Ý Ù and assume ´×µ has imaginary poles × Ó . It also has the advantage of yielding only the poles corresponding to a minimal realization of the system. It then follows that only observable and controllable poles will appear in the pole polynomial. In the procedure deﬁned by the theorem we cancel common factors in the numerator and denominator of each minor.4. rise to stable modes where including integrators. Proof: From (4. Theorem 4.3 Poles from transfer functions The following theorem from MacFarlane and Karcanias (1976) allows us to obtain the poles directly from the transfer function matrix ´×µ and is also useful for hand calculations. Thus we can not compute the poles Example 4.4. 4.6 Consider the plant: ´×µ ´¿×·½µ¾ ´×·½µ . For example. A minor of a matrix is the determinant of the matrix obtained by deleting certain rows and/or columns of the matrix. A matrix with such a property is said to be “stable” or Hurwitz. Eigenvalues in the RHP with Ê ´ µ ¼ give rise to unstable containing a mode ´ µØ is unbounded as Ø .6) can be written as a sum of terms each ´ µØ .2 Poles from statespace realizations Poles are usually obtained numerically by computing the eigenvalues of the matrix. the poles determine stability: Ü · Ù is stable if and only if all the Theorem 4. × which has no statespace realization as it contains a delay and is also improper. Then with a bounded sinusoidal . that is.4 of stability.Ä Å ÆÌË Ç ÄÁÆ Ê Ë ËÌ Å ÌÀ ÇÊ ½¾ include the poles (eigenvalues) corresponding to uncontrollable and/or unobservable states.1 Poles and stability For linear systems. ¾ input. To get the fewest number of poles we should use a minimal realization of the system. We will use the notation Å Ö to denote the minor corresponding to the deletion of rows Ö and columns in ´×µ. 
Theorem 4.2 The pole polynomial φ(s), corresponding to a minimal realization of a system with transfer function G(s), is the least common denominator of all non-identically-zero minors of all orders of G(s).
For such a plant the poles must instead be obtained from the transfer function: as expected from (4.49), G(s) has a pole at each root of its denominator.

Example 4.7 Consider the square transfer function matrix

   G(s) = 1/((s + 1)(s + 2)) [ s − 1    s ;  −6    s − 2 ]    (4.50)

The minors of order 1 are the four elements, which all have (s + 1)(s + 2) in the denominator. The minor of order 2 is the determinant

   det G(s) = [(s − 1)(s − 2) + 6s] / ((s + 1)²(s + 2)²) = 1 / ((s + 1)(s + 2))    (4.51)

Note the pole-zero cancellation when evaluating the determinant. The least common denominator of all the minors is then

   φ(s) = (s + 1)(s + 2)    (4.52)

so a minimal realization of the system has two poles: one at s = −1 and one at s = −2.

Example 4.8 Consider the 2 × 3 system, with 3 inputs and 2 outputs,

   G(s) = 1/((s + 1)(s + 2)(s − 1)) [ (s − 1)(s + 2)    0    (s − 1)² ;  −(s + 1)(s + 2)    (s − 1)(s + 1)    (s − 1)(s + 1) ]    (4.53)

The minors of order 1 are the five non-zero elements (e.g. the element g₁₁(s), obtained by deleting row 2 and columns 2 and 3):

   1/(s + 1),   (s − 1)/((s + 1)(s + 2)),   −1/(s − 1),   1/(s + 2),   1/(s + 2)    (4.54)

The minor of order 2 corresponding to the deletion of column 2 is

   M₂ = [(s − 1)(s + 2)(s − 1)(s + 1) + (s + 1)(s + 2)(s − 1)²] / ((s + 1)(s + 2)(s − 1))² = 2 / ((s + 1)(s + 2))    (4.55)

The other two minors of order 2 are

   M₁ = −(s − 1) / ((s + 1)(s + 2)²),   M₃ = 1 / ((s + 1)(s + 2))    (4.56)

By considering all minors we find their least common denominator to be

   φ(s) = (s + 1)(s + 2)²(s − 1)    (4.57)

The system therefore has four poles: one at s = −1, one at s = 1 and two at s = −2.

From the above examples we see that the MIMO poles are essentially the poles of the elements. However, by looking only at the elements it is not possible to determine the multiplicity of the poles. For instance, let G₀(s) be a square m × m transfer function matrix with no pole at s = −a, and consider

   G(s) = 1/(s + a) G₀(s)    (4.58)
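Theorem 4.2 can be mechanized symbolically. A sketch (Python/sympy) with the Example 4.7 matrix as recovered here; the entries are an assumption where the original digits were lost:

```python
import sympy as sp
from functools import reduce

s = sp.symbols('s')
# Example 4.7 plant (as reconstructed): G(s) = 1/((s+1)(s+2)) [[s-1, s], [-6, s-2]]
G = sp.Matrix([[s - 1, s], [-6, s - 2]]) / ((s + 1)*(s + 2))

# Minors of all orders: the four elements (order 1) and the determinant (order 2)
minors = [sp.cancel(g) for g in G] + [sp.cancel(G.det())]

# Pole polynomial = least common denominator of all non-identically-zero minors
pole_poly = reduce(sp.lcm, [sp.denom(m) for m in minors])
# pole_poly = (s+1)(s+2): two poles, even though the raw determinant
# has ((s+1)(s+2))^2 in its denominator before cancellation.
```

Note that `sp.cancel` performs the pole-zero cancellation in the determinant automatically, which is exactly the step the example warns about.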
consider a ¾ ¢ ¾ plant in the form given by (4. The pole directions may also be deﬁned in terms of the transfer function matrix by evaluating ´×µ at the pole Ô and considering the directions of the resulting complex matrix ´Ô µ. ÝÔ ¾ ÝÔ where ÝÔ is computed from (4. ¼ so if ¼ has no zeros at × may have zeros at × . For example. where each of the three poles (in the common denominator) are repeated ﬁve times. A model reduction to obtain a minimal realization will subsequently yield a system with four poles as given in (4. ½ Speciﬁcally.52)). However. À Õ ¼ then the corresponding pole is not state Remark 1 As already mentioned. (1996.58). we must ﬁrst obtain a statespace realization of the system. .50) where Ø ¼ ´×µ has a zero at × ) or no pole at × (if all the elements of ¼ ´×µ have a zero at × ).61) where ÙÔ is the input pole direction.59) Ø ´ ´×µµ ½ ×· ¼ ´×µ ½ ´× · µÑ . Thus. It may have two poles at × (as for ´×µ in (3. we will get a system with 15 states.39) and (4. and if ÝÔ Zhou et al. to compute the poles of a transfer function ´×µ. 4.4 Pole vectors and directions In multivariable system poles have directions associated with them.57). then ´×µ has Ñ poles at × .Ä Å ÆÌË Ç ÄÁÆ How many poles at × Ê Ë ËÌ Å ÌÀ ÇÊ ½¿½ Ø does a minimal realization of ´×µ have? From (A. if ÙÔ Ø ¼ the corresponding pole is not state observable (see also controllable. As an example. Preferably this should be a minimal realization.81)).8 and then simply combine them to get an overall state space realization. Then ÙÔ is the ﬁrst column in Î (corresponding to the inﬁnite singular value). To quantify this we use the input and output pole vectors introduced in (4. and we may somewhat crudely write ´Ô µÙÔ ½ ¡ ÝÔ (4. p. and Ý Ô the ﬁrst column in Í . if we make individual realizations of the ﬁve nonzero elements in Example 4. The matrix is inﬁnite in the direction of the pole. As noted above.4. the poles are obtained numerically by computing the eigenvalues of the matrix.40): ÝÔ Ø ÙÔ ÀÕ (4. 
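As a numerical sketch of pole directions via the SVD, consider the 2 × 2 plant used in Example 4.11 below (entries as recovered here, so an assumption), which has a pole at p = −2:

```python
import numpy as np

def G(s):
    # Example 4.11 plant (entries assumed): pole at p = -2, zero at z = 4
    return np.array([[s - 1.0, 4.0],
                     [4.5, 2.0*(s - 1.0)]]) / (s + 2.0)

# Evaluate close to the pole: G(p + eps) is nearly rank 1 and "infinite"
eps = 1e-6
U, S, Vh = np.linalg.svd(G(-2.0 + eps))

u_p = Vh[0, :]   # input pole direction (largest singular value)
y_p = U[:, 0]    # output pole direction
# |u_p| ~ [0.6, 0.8], |y_p| ~ [0.55, 0.83]: the pole is slightly
# stronger in the second output.
```

The sign of an SVD direction is arbitrary, so only the magnitudes of the components carry information here.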
   det G(s) = (1/(s + a))^m det G₀(s)    (4.59)
Deﬁnition 4. we require that Þ is ﬁnite. 1998b). The zero polynomial is deﬁned as Þ ´×µ ½ ´× ﬁnite zeros of ´×µ. This deﬁnition of zeros is based on the transfer function matrix. loses The zeros are then the values × rank. resulting in zero output for some nonzero input.62) Þ for which the polynomial system matrix.5 Zeros Zeros of a system arise when competing effects. Numerically. it can be argued that zeros system the zeros Þ are the solutions to ´Þ µ are values of × at which ´×µ loses rank (from rank 1 to rank 0 for a SISO system). minimizes the input usage required for stabilization.5. We may sometimes use the term “multivariable zeros” to distinguish them from the zeros of the elements of the transfer function matrix. These zeros are sometimes called “transmission zeros”.½¿¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Remark 2 The pole vectors provide a very useful tool for selecting inputs and outputs for stabilization. The normal rank of ´×µ is deﬁned as the rank of ´×µ at all values of × except at a ﬁnite number of singularities (which are the zeros). First note that the statespace equations of a system may be written as Ü È ´×µ Ù Ý ¼ È ´×µ ×Á (4.64) Å Á ¼ ¼ ¼ Á . À À 4. internal to the system. this choice minimizes the lower bound on both the ¾ and ½ norms of the transfer function ÃË from measurement (output) noise to input (Havre. For a single unstable mode.1 Zeros from statespace realizations Zeros are usually computed from a statespace description of the system. È ´×µ. but we will simply call them “zeros”. 1998)(Havre and Skogestad. More precisely. In general. Þ is a zero of ´×µ if the rank of ´Þ ÉÒÞ of ´×µ. the zeros are found as nontrivial solutions (with ÙÞ ¼ and ÜÞ ¼) to the following problem ´ÞÁ Åµ ÜÞ ÙÞ ¼ (4. 1976). selecting the input corresponding to the largest element in ÙÔ and the output corresponding to the largest element in ÝÔ . corresponding to a minimal realization of a system. For a SISO ¼. 4. 
are such that the output is zero even when the inputs (and the states) are not themselves identically zero.7 Zeros. Þ µ where ÒÞ is the number of µ is less than the normal rank In this book we do not consider zeros at inﬁnity. This is the basis for the following deﬁnition of zeros for a multivariable system (MacFarlane and Karcanias.63) (4.
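A sketch of the generalized eigenvalue computation, using an assumed one-state realization of the 2 × 2 plant G(s) = [[s − 1, 4], [4.5, 2(s − 1)]]/(s + 2) that appears in the zero examples below (this plant has a zero at s = 4):

```python
import numpy as np
from scipy.linalg import eig

# Assumed one-state realization of G(s) = [[s-1, 4],[4.5, 2(s-1)]]/(s+2)
A = np.array([[-2.0]])
B = np.array([[1.0, -4.0/3.0]])
C = np.array([[-3.0], [4.5]])
D = np.array([[1.0, 0.0], [0.0, 2.0]])

# Pencil (z Ig - M) with M = [[A, B], [C, D]] and Ig = diag(I, 0)
M = np.block([[A, B], [C, D]])
Ig = np.zeros_like(M)
Ig[0, 0] = 1.0          # only the state part of the pencil carries z

eigvals = eig(M, Ig, right=False)
zeros = eigvals[np.abs(eigvals) < 1e6].real  # keep finite generalized eigenvalues
# zeros = [4.0]: the multivariable zero
```

The infinite generalized eigenvalues correspond to the zeros at infinity, which this book does not consider; only the finite ones are kept.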
.Ä Å ÆÌË Ç ÄÁÆ Ê Ë ËÌ Å ÌÀ ÇÊ ½¿¿ This is solved as a generalized eigenvalue problem – in the conventional eigenvalue problem Á . is the greatest common divisor of all the numerators of all orderÖ minors of ´×µ.2.5. nonsquare systems are less likely to have zeros than square systems.9 Consider the ¾ ¢ ¾ transfer function matrix ½ × ½ ´×µ ¾´× ½µ ×·¾ (4. Consider again the ¾ ¾ system in (4. The next two examples consider nonsquare systems. Thus. On the other hand. and we ﬁnd that the system has no multivariable zeros. Example 4.10 Consider the ½ ¢ ¢ ¾ system ´×µ × ½ × ¾ ×·½ ×·¾ (4. for a ¾ ¿ system to have a zero. In general. Theorem 4. Ø ´×µ ¾ × . provided that these minors have been adjusted in such a way as to have the pole polynomial ´×µ as their denominators. Thus the zero polynomial is given by the numerator of (4.50) where Ø ´×µ in (4.66) The normal rank of ´×µ is ½.3 The zero polynomial Þ ´×µ. we need all three columns in ´×µ to be linearly dependent. there must be a value of × for which the two columns in ´×µ are linearly dependent. the zero polynomial is Þ ´×µ × . corresponding to a minimal realization of the system. For instance. which is ½. and the minor of order ¾ is the determinant.51) already has ´×µ as its denominator. where Ö is the normal rank of ´×µ. we have Á 4.51). ´×µ has a single RHPzero at × ¾´× ½µ¾ ½ ´×·¾µ¾ This illustrates that in general multivariable zeros have no relationship with the zeros of the transfer function elements. the pole polynomial is ´×µ × · ¾ and therefore ×·¾ . ¢ ¢ The following is an example of a nonsquare system which does have a zero. for a square ¾ ¾ system to have a zero. From Theorem 4.2 Zeros from transfer functions The following theorem from MacFarlane and Karcanias (1976) is useful for hand calculating the zeros of a transfer function matrix ´×µ. ´×µ has no zeros. Example 4. Example 4. Note that we usually get additional zeros if the realization is not minimal. 
and since there is no value of s for which both elements become zero, G(s) has no zeros.
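Theorem 4.3 applied to Example 4.9 can be checked symbolically (plant entries as recovered, an assumption where digits were lost):

```python
import sympy as sp

s = sp.symbols('s')
# Example 4.9 plant (as reconstructed): G(s) = 1/(s+2) [[s-1, 4],[4.5, 2(s-1)]]
G = sp.Matrix([[s - 1, 4], [sp.Rational(9, 2), 2*(s - 1)]]) / (s + 2)

# Normal rank is 2, so the only order-2 minor is det G(s).
detG = sp.cancel(G.det())        # 2(s-4)/(s+2) after the pole-zero cancellation
zero_poly = sp.numer(detG)       # adjusted so the denominator is phi(s) = s+2
zeros = sp.solve(zero_poly, s)   # single RHP-zero at s = 4
```

Note that none of the individual elements has a zero at s = 4: the multivariable zero appears only in the determinant.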
67) The common factor for these minors is the zero polynomial Þ ´×µ has a single RHPzero located at × ½. or ¡ À ¼ ¡ ÝÞ . which has a and a LHPpole at Ô ¾. and adjust the minors of order ¾ in (4.½¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Example 4.55) and (4. We will use an SVD of ´Þ µ and ´Ôµ RHPzero at Þ to determine the zero and pole directions (but we stress that this is not a reliable method numerically).11 Zero and pole directions. ÝÞ .3 Zero directions In the following let × be a ﬁxed complex scalar and consider ´×µ as a complex matrix. Then ´×µ loses rank at × Þ . the output zero direction. ´× ½µ. An example was given earlier in (3. and there will exist nonzero vectors ÙÞ and ÝÞ such that ´Þ µÙÞ ¼ ÝÞ (4. Let example. given a statespace realization. Similarly. 4. ÝÞ may be obtained from the transposed statespace description. because ÝÞ gives information about which output (or combination of outputs) may be difﬁcult to control. Taking the Hermitian (conjugate transpose) of (4. is usually of more interest than ÙÞ . we can evaluate ´×µ ´×µ have a zero at × Þ .63). We get ¢ Å½ ´×µ ´× ½µ¾ Å¾ ´×µ ¾´× ½µ´× · ¾µ Å¿ ´×µ ´× ½µ´× · ¾µ ´×µ ´×µ ´×µ (4. Thus. provided their directions are different.68) Here ÙÞ is deﬁned as the input zero direction. Remark. we may obtain ÙÞ and ÝÞ from an SVD of ´Þ µ Í ¦Î À and we have that ÙÞ is the last column in Î (corresponding to the zero singular value of ´Þ µ) and ÝÞ is the last column of Í . For ´×Á µ ½ · .56) so that their denominators are ´×µ ´× · ½µ´× · ¾µ¾ ´× ½µ.64). We usually normalize the direction vectors to have unit length.65).63).5.69). using Å Ì in (4.53). see (4. A better approach numerically. Consider the ¾ ¾ plant in (4. and ÝÞ is deﬁned as the output zero direction. is to obtain ÙÞ from a statespace description using the generalized eigenvalue problem in (4. ¡ ÙÀ ÙÞ Þ ½ À ÝÞ ÝÞ ½ From a practical point of view. ½ yields (4. 
To ﬁnd the zero direction consider ¢ ´Þ µ ´µ ½ ¿ ½ ¼ ¼ ¿ ¼ ¿ ¼ ¼½ ¼ ¼ ¼ ¼ ¼ ¼ ¼ À .68) yields ÙÀ À ´Þ µ Þ À ½ and ÝÞ ÝÞ Premultiplying by ÙÞ and postmuliplying by ÝÞ noting that ÙÀ ÙÞ Þ ÀÝ Þ ¼ ÙÞ .69) À ÝÞ ´Þ µ ¼ ¡ ÙÀ Þ In principle. the system We also see from the last example that a minimal realization of a MIMO system can have poles and zeros at the same value of ×. Consider again the ¾ ¿ system in (4.8 continued. Example 4.
Ä Å ÆÌË Ç ÄÁÆ we get ÙÞ Ê Ë ËÌ Å ÌÀ ÇÊ ½¿ ´Þ µ and The zero input and output directions are associated with the zero singular value of ¼ ¼ component in the ﬁrst output. and a set of initial conditions (states) ÜÞ . We see from ÝÞ that the zero has a slightly larger ´ Ô · ¯µ The SVD as ¯ ´ ¾ · ¯µ ½ ¼ ¼ ¯¾ ¼ ¿ ¼ ½ ¿ · ¯ ¯¾ ¾´ ¿ · ¯µ ¼ ¼ À (4. their directions are not. We note from ÝÔ that the pole has a ½ Remark. If Þ is a zero of ´×µ. this crude deﬁnition may fail in a few cases. However. 1976). 1970) ﬁrst deﬁned multivariable zeros using something similar to the SmithMcMillan form. 3. to determine the pole directions consider ¼ ¼ and ÝÞ ¼ ¿ ¼ . 2. If one does not have a minimal realization. then numerical computations (e. using MATLAB) may yield additional invariant zeros.71) has Ø ´×µ ½. the inputs and outputs need to be scaled properly before making any interpretations based on pole and zero directions. and we get ÙÔ slightly larger component in the second output.70) ¼ yields ´ ¾ · ¯µ ¿ ¼ ¼½ ¼ ¼ ¼ ¼ ¼ ¼ ¿ The pole input and output directions are associated with the largest singular value. where ÙÞ is a (complex) vector and ½· ´Øµ is a unit step. such that Ý ´Øµ ¼ for Ø ¼. Next. The zeros resulting from a minimal realization are sometimes called the transmission zeros. For instance. (1996). ¼ ¼ ¼ ¼ and ÝÔ . These cancel poles associated with uncontrollable or unobservable states and hence have limited practical signiﬁcance. 4. Thus. then there exists an input signal of the form ÙÞ ÞØ ½· ´Øµ. Poles and zeros are deﬁned in terms of the McMillan form in Zhou et al. the system ´×µ ´× · ¾µ ´× · ½µ ¼ ¼ ´× · ½µ ´× · ¾µ (4. although the system obviously has poles at (multivariable) zeros at ½ and ¾. We recommend that a minimal realization is found before computing the zeros.6 Some remarks on poles and zeros 1. when there is a zero and pole in different parts of the system which happen to cancel when forming Ø ´×µ. 
The presence of zeros implies blocking of certain input signals (MacFarlane and Karcanias. For square systems we essentially have that the poles and zeros of ´×µ are the poles and zeros of Ø ´×µ. For example. Rosenbrock (1966.g. It is important to note that although the locations of the poles and zeros are independent of input and output scalings. 4. These invariant zeros plus the transmission zeros are sometimes called the system zeros. The invariant zeros can be further subdivided into input and output decoupling zeros. ¼½ ¯¾ . ½ and ¾ and .
8. and the zeros of ´×µ are equal to the poles ½ ´×µ. We note that although the system has poles and zeros at the same locations (at ½ and ¾). series and parallel compensation on the zeros. but we may cancel pole from ½ ×· poles in by placing zeros in Ã (e. Thus. their directions are different and so they do not cancel or otherwise interact with each other. if Ý is a physical output.71) the pole at ½ has ½ ¼ Ì . including their output directions ÝÞ . For square systems with a nonsingular matrix. Zeros usually appear when there are fewer inputs or outputs than states.g.g. directions ÙÔ ÝÔ 6. (b) series compensation ( Ã . see Example 4.71) provides a good example for illustrating the importance of directions when discussing poles and zeros of multivariable systems. unstable plants may be characteristic polynomial Ð ´×µ stabilized by use of feedback control. This probably from Ý we can directly obtain Ü (e. Moving zeros.12. but we may cancel their effect by subtracting identical poles in Ã ½ ½ (e. (c) The only way to move zeros is by parallel compensation. . ×· Ã ×· ). the poles are determined by the corresponding closedloop negative feedback Ù Ø´×Á · Ã¼ µ. or when ´×Á µ ½ · with Ò states. See also Example 4. and is discussed in more detail in Section 6. and (c) parallel compensation cannot move the poles.½¿ 5. How are the poles affected by (a) feedback ( ´Á · Ã µ ½ ).5. whereas the zero at ½ has directions ÙÞ ÝÞ ¼ ½ Ì . provided ÝÞ has a nonzero element for this output. For a strictly proper plant ´×µ Ø´×Á µ. We then have Consider a square Ñ Ñ plant ´×µ for the number of (ﬁnite) zeros of ´×µ (Maciejowski. but cancellations are not possible for RHPzeros due to internal stability (see Section 4. ´×Á µ ½ .73) 9.55) ¢ ¼ ¼ ¼ Ò ÖÒ ´ µ Ñ Ø ÑÓ×Ø Ò Ñ · Ö Ò ´ µ Þ ÖÓ× Ø ÑÓ×Ø Ò ¾Ñ · Ö Ò ´ µ Þ ÖÓ× Ü ØÐÝ Ò Ñ Þ ÖÓ× (4. In (4. However. (a) With feedback. if Á and ¼). If we apply constant gain by the characteristic polynomial ÓÐ ´×µ Ã¼ Ý . Ý ´ · Ã µÙ.g. 
This was illustrated by the example in Section 3.72) 7. can only be accomplished by adding an extra input (actuator). 11. that is. which. This means that the zeros in .5. feedforward control) and (c) parallel compensation ( · Ã )? The ½ ¾ moves the answer is that (a) feedback control moves the poles (e.7). (b) Series compensation can counter the effect of zeros in by placing poles in Ã to cancel them. if the inverse of ´Ôµ exists then it follows from the SVD that ½ ´ÔµÝÔ ¼ ¡ ÙÔ (4. There are no zeros if the outputs contain direct information about all the states. ×· Ã ×· ). even though ÝÞ is ﬁxed it is still possible with feedback control to move the deteriorating effect of a RHPzero to a given output channel. the openloop poles are determined 10. are unaffected by feedback. explains why zeros were given very little attention in the optimal control theory of the 1960’s which was based on state feedback.g.1. (b) series compensation cannot move the poles. the number of poles is the same as the number of zeros.13. 1989. Consider next the effect of feedback. ×· Ã to · ). ¼. Furthermore. Moving poles. the zeros of ´Á · Ã µ ½ are the zeros of plus the poles of Ã . p. and vice versa. ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ´×µ in (4.
12. Pinned zeros. A zero is pinned to a subset of the outputs if $y_z$ has one or more elements equal to zero. In most cases, pinned zeros have a scalar origin, and their effect cannot be moved freely to any output. For example, the effect of a measurement delay for output $y_1$ cannot be moved to output $y_2$. Similarly, a zero is pinned to certain inputs if $u_z$ has one or more elements equal to zero. An example is $G(s)$ in (4.71), where the zero at $s=2$ is pinned to input $u_1$ and to output $y_1$. Pinned zeros are quite common in practice.

13. Zeros of non-square systems. The existence of zeros for non-square systems is common in practice, in spite of what is sometimes claimed in the literature. As an example, consider a plant with three inputs and two outputs

$$G_1(s) = \begin{bmatrix} g_{11} & g_{12} & g_{13} \\ g_{21}(s-z) & g_{22}(s-z) & g_{23}(s-z) \end{bmatrix}$$

which has a zero at $s=z$ which is pinned to output $y_2$. This follows because the second row of $G_1(z)$ is equal to zero, so the rank of $G_1(z)$ is 1, which is less than the normal rank of $G_1(s)$, which is 2. On the other hand,

$$G_2(s) = \begin{bmatrix} g_{11}(s-z) & g_{12} & g_{13} \\ g_{21}(s-z) & g_{22} & g_{23} \end{bmatrix}$$

does not have a zero at $s=z$, since $G_2(z)$ has rank 2, which is equal to the normal rank of $G_2(s)$ (assuming that the last two columns of $G_2(s)$ have rank 2). In most cases, zeros of non-square systems appear if we have a zero pinned to the side of the plant with the fewest number of channels.

14. The concept of functional controllability, see page 218, is related to zeros. Loosely speaking, one can say that a system which is functionally uncontrollable has in a certain output direction "a zero for all values of $s$".

The control implications of RHP-zeros and RHP-poles are discussed for SISO systems on pages 173-187 and for MIMO systems on pages 220-223.

Example 4.12 Effect of feedback on poles and zeros. Consider a SISO negative feedback system with plant $G(s) = z(s)/\phi(s)$ and a constant gain controller, $K(s) = k$. The closed-loop response from reference $r$ to output $y$ is

$$T(s) = \frac{L(s)}{1+L(s)} = \frac{kG(s)}{1+kG(s)} = \frac{kz(s)}{\phi(s)+kz(s)} = k\,\frac{z_{cl}(s)}{\phi_{cl}(s)} \qquad (4.74)$$

Note the following:

1. The zero polynomial is $z_{cl}(s) = z(s)$, so the zero locations are unchanged by feedback.
2. The pole locations are changed by feedback. For example,

$$k \to 0 \;\Rightarrow\; \phi_{cl}(s) \to \phi(s) \qquad (4.75)$$
$$k \to \infty \;\Rightarrow\; \phi_{cl}(s) \to kz(s) \qquad (4.76)$$

That is, as we increase the feedback gain, the closed-loop poles move from the open-loop poles to the open-loop zeros. RHP-zeros therefore imply high-gain instability. These results are well known from a classical root locus analysis.

Example 4.13 We want to prove that $G(s) = C(sI-A)^{-1}B$ has no zeros if $D = 0$, $\mathrm{rank}(C) = n$ and $B$ has full column rank, where $n$ is the number of states. Solution: Consider the polynomial system matrix $P(s)$ in (4.62). The first $n$ columns of $P$ are independent because $C$ has rank $n$. The last $m$ columns are independent of $s$, since $D = 0$ and $B$ has full column rank and thus cannot have any columns equal to zero. Furthermore, the first $n$ and last $m$ columns are independent of each other. (We need $D = 0$ because if $D$ is nonzero then the first $n$ columns of $P$ may depend on the last $m$ columns for some value of $s$.) In conclusion, $P(s)$ always has rank $n+m$ and there are no zeros.

Exercise 4.3 (a) Consider a SISO system $G(s) = C(sI-A)^{-1}B + D$ with just one state, i.e. $A$ is a scalar. Find the zeros. Does $G(s)$ have any zeros for $D = 0$? (b) Do $GK$ and $KG$ have the same poles and zeros for a SISO system? For a MIMO system?

Exercise 4.4 Determine the poles and zeros of

$$G(s) = \begin{bmatrix} \frac{11s^3-18s^2-70s-50}{s(s+10)(s+1)(s-5)} & \frac{s+2}{(s+1)(s-5)} \\ \frac{5(s+2)}{(s+1)(s-5)} & \frac{5(s+2)}{(s+1)(s-5)} \end{bmatrix}$$

given that

$$\det G(s) = \frac{50(s+1)^2(s+2)(s-5)}{s(s+1)^2(s+10)(s-5)^2} = \frac{50(s^4-s^3-15s^2-23s-10)}{s(s+1)^2(s+10)(s-5)^2}$$

How many poles does $G(s)$ have?

Exercise 4.5 Given $y(s) = G(s)u(s)$ with $G(s) = \frac{1-s}{1+s}$, determine a state-space realization of $G(s)$ and then find the zeros of $G(s)$ using the generalized eigenvalue problem. What is the transfer function from $u(s)$ to $x(s)$, the single state of $G(s)$, and what are the zeros of this transfer function?

Exercise 4.6 Find the zeros for a $2\times 2$ plant with

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \quad B = I, \quad C = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}, \quad D = 0$$

Exercise 4.7 For what values of $c_1$ does the following plant have RHP-zeros?

$$A = \begin{bmatrix} 10 & 0 \\ 0 & -1 \end{bmatrix}, \quad B = I, \quad C = \begin{bmatrix} 10 & c_1 \\ 10 & 0 \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \qquad (4.77)$$

Exercise 4.8 Consider the plant in (4.77), but assume that both states are measured and used for feedback control, i.e. $y_m = x$ (but the controlled output is still $y = Cx + Du$). Can a RHP-zero in $G(s)$ give problems with stability in the feedback system? Can we achieve "perfect" control of $y$ in this case? (Answers: No and no.)
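The generalized eigenvalue computation asked for in Exercise 4.5 can be sketched numerically: the zeros are the finite generalized eigenvalues of the pencil $(M, N)$ with $M = \begin{bmatrix} A & B \\ C & D\end{bmatrix}$ and $N = \begin{bmatrix} I & 0 \\ 0 & 0\end{bmatrix}$. The following Python sketch (not from the book; the test plant $G(s)=(s-1)/(s+2)$, realized with $A=-2$, $B=1$, $C=-3$, $D=1$, is an assumed illustration):

```python
import numpy as np
from scipy.linalg import eig

def transmission_zeros(A, B, C, D):
    # Zeros are the finite z with det(M - z*N) = 0, where N is singular,
    # so we solve the generalized eigenvalue problem in homogeneous form
    # (alpha, beta) and keep the pairs with beta != 0 (finite eigenvalues).
    n, m = A.shape[0], B.shape[1]
    M = np.block([[A, B], [C, D]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((C.shape[0], n)), np.zeros(D.shape)]])
    alpha, beta = eig(M, N, right=False, homogeneous_eigvals=True)
    finite = np.abs(beta) > 1e-9
    return alpha[finite] / beta[finite]

# zero of G(s) = (s-1)/(s+2): expect z = 1
z = transmission_zeros(np.array([[-2.0]]), np.array([[1.0]]),
                       np.array([[-3.0]]), np.array([[1.0]]))
```

The same pencil works for MIMO square systems; for non-square plants the rank-based definition discussed above must be used instead.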
4.7 Internal stability of feedback systems

To test for closed-loop stability of a feedback system, it is usually enough to check just one closed-loop transfer function, e.g. $S = (I+GK)^{-1}$. However, this assumes that there are no internal RHP pole-zero cancellations between the controller and the plant. The point is best illustrated by an example.

Example 4.14 Consider the feedback system shown in Figure 4.2, where $G(s) = \frac{s-1}{s+1}$ and $K(s) = \frac{k}{s}\,\frac{s+1}{s-1}$. In forming the loop transfer function $L = GK$ we cancel the term $(s-1)$, a RHP pole-zero cancellation, to obtain

$$L = GK = \frac{k}{s}, \qquad S = (I+L)^{-1} = \frac{s}{s+k} \qquad (4.78)$$

$S(s)$ is stable, that is, the transfer function from $d_y$ to $y$ is stable. However, the transfer function from $d_y$ to $u$ is unstable:

$$u = -K(I+GK)^{-1}d_y = -\frac{k(s+1)}{(s-1)(s+k)}\,d_y \qquad (4.79)$$

Consequently, although the system appears to be stable when considering the output signal $y$, it is unstable when considering the "internal" signal $u$, so the system is (internally) unstable.

Figure 4.2: Internally unstable system

Remark. In practice, it is not possible to cancel exactly a plant zero or pole because of modelling errors, so in practice $L$ and $S$ will also be unstable. However, it is important to stress that even in the ideal case with a perfect RHP pole-zero cancellation, as in the above example, we would still get an internally unstable system. This is a subtle but important point. In this ideal case the state-space descriptions of $L$ and $S$ contain an unstable hidden mode corresponding to an unstabilizable or undetectable state.

From the above example, it is clear that to be rigorous we must consider internal stability of the feedback system. To this effect consider the system in Figure 4.3, where we inject and measure signals at both locations between the two components, $G$ and $K$. We get

$$u = (I+KG)^{-1}d_u - K(I+GK)^{-1}d_y \qquad (4.80)$$
$$y = G(I+KG)^{-1}d_u + (I+GK)^{-1}d_y \qquad (4.81)$$

Figure 4.3: Block diagram used to check internal stability of feedback system

The theorem below follows immediately:

Theorem 4.4 Assume that the components $G$ and $K$ contain no unstable hidden modes. Then the feedback system in Figure 4.3 is internally stable if and only if all four closed-loop transfer matrices in (4.80) and (4.81) are stable.

Proof: A proof is given by Zhou et al. (1996, p. 125). $\square$

If we disallow RHP pole-zero cancellations between system components, such as $G$ and $K$, then stability of one closed-loop transfer function implies stability of the others. This is stated in the following theorem.

Theorem 4.5 Assume there are no RHP pole-zero cancellations between $G(s)$ and $K(s)$, that is, all RHP-poles in $G(s)$ and $K(s)$ are contained in the minimal realizations of $GK$ and $KG$. Then the feedback system in Figure 4.3 is internally stable if and only if one of the four closed-loop transfer function matrices in (4.80) and (4.81) is stable.

Note how we define pole-zero cancellations in the above theorem. In this way, RHP pole-zero cancellations resulting from $G$ or $K$ not having full normal rank are also disallowed. For example, for $G = 1/(s-a)$ and $K = 0$ we get $KG = 0$, so the RHP-pole at $s = a$ has disappeared and there is effectively a RHP pole-zero cancellation. In this case we get $S(s) = 1$, which is stable, but internal stability is clearly not possible.

The following can be proved using the above theorem (recall Example 4.14): If there are RHP pole-zero cancellations between $G(s)$ and $K(s)$, i.e. if $GK$ and $KG$ do not both contain all the RHP-poles in $G$ and $K$, then the system in Figure 4.3 is internally unstable.
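The danger of checking only one closed-loop transfer function can be verified symbolically. The following is a minimal sketch (Python with sympy; not part of the book), using the RHP pole-zero cancelling pair $G = (s-1)/(s+1)$ and $K = k(s+1)/(s(s-1))$ with $k=2$:

```python
import sympy as sp

s = sp.symbols('s')
k = 2
G = (s - 1)/(s + 1)
K = k*(s + 1)/(s*(s - 1))      # cancels the plant's RHP-zero at s = 1

def rhp_poles(tf):
    # poles in the open right-half plane, after cancelling common factors
    num, den = sp.fraction(sp.cancel(tf))
    return [r for r in sp.Poly(den, s).all_roots() if sp.re(r) > 0]

L  = sp.cancel(G*K)            # = k/s: the cancellation hides s = 1
S  = sp.cancel(1/(1 + L))      # output sensitivity: looks stable
KS = sp.cancel(K*S)            # disturbance-to-input map: unstable
```

Here `rhp_poles(S)` is empty while `rhp_poles(KS)` contains the hidden pole at $s = 1$, confirming that the loop is internally unstable even though $S$ alone looks fine.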
4.7.1 Implications of the internal stability requirement

The requirement of internal stability in a feedback system leads to a number of interesting results, some of which are investigated below. Note in particular Exercise 4.12, where we discuss alternative ways of implementing a two degrees-of-freedom controller.

We first prove the following statements, which apply when the overall feedback system is internally stable (Youla et al., 1974):

1. If $G(s)$ has a RHP-zero at $z$, then $L = GK$, $T = GK(I+GK)^{-1}$, $SG = (I+GK)^{-1}G$, $L_I = KG$ and $T_I = KG(I+KG)^{-1}$ will each have a RHP-zero at $z$.

2. If $G(s)$ has a RHP-pole at $p$, then $L = GK$ and $L_I = KG$ also have a RHP-pole at $p$, while $S = (I+GK)^{-1}$, $KS = K(I+GK)^{-1}$ and $S_I = (I+KG)^{-1}$ have a RHP-zero at $p$.

Proof of 1: To achieve internal stability, RHP pole-zero cancellations between system components, such as $G$ and $K$, are not allowed. Thus $L = GK$ must have a RHP-zero when $G(s)$ has a RHP-zero at $z$. Now $S$ is stable and thus has no RHP-pole which can cancel the RHP-zero in $L$, and so $T = LS$ must have a RHP-zero at $z$. The remaining transfer functions follow similarly. $\square$

Proof of 2: Clearly, $L$ has a RHP-pole at $p$. Since $T$ is stable, it follows from $T = LS$ that $S$ must have a RHP-zero which exactly cancels the RHP-pole in $L$, etc. $\square$

We notice from this that a RHP pole-zero cancellation between two transfer functions, such as between $L$ and $S = (I+L)^{-1}$, does not necessarily imply internal instability. It is only between separate physical components (e.g. controller, plant) that RHP pole-zero cancellations are not allowed.

Exercise 4.9 Use (A.7) to show that the signal relationships (4.80) and (4.81) also may be written

$$\begin{bmatrix} u \\ y \end{bmatrix} = M(s)\begin{bmatrix} d_u \\ d_y \end{bmatrix}, \qquad M(s) = \begin{bmatrix} I & K \\ -G & I \end{bmatrix}^{-1} \qquad (4.82)$$

From this we get that the system in Figure 4.3 is internally stable if and only if $M(s)$ is stable.

Exercise 4.10 Interpolation constraints. Prove the following interpolation constraints, which apply for SISO feedback systems when the plant $G(s)$ has a RHP-zero $z$ or a RHP-pole $p$:

$$G(z) = 0 \;\Rightarrow\; L(z) = 0 \;\Leftrightarrow\; T(z) = 0,\; S(z) = 1 \qquad (4.83)$$
$$G(p)^{-1} = 0 \;\Rightarrow\; L(p) = \infty \;\Leftrightarrow\; T(p) = 1,\; S(p) = 0 \qquad (4.84)$$

Remark. A discussion of the significance of these interpolation constraints is relevant here. Recall that for "perfect control" we want $S \approx 0$ and $T \approx 1$. We note from (4.83) that a RHP-zero $z$ puts constraints on $S$ and $T$ which are incompatible with perfect control. On the other hand, the constraints (4.84) imposed by the RHP-pole are consistent with what we would like for perfect control. Thus the presence of RHP-poles mainly imposes problems when tight (high-gain) control is not possible. We discuss this in more detail in Chapters 5 and 6.

Exercise 4.11 Given the complementary sensitivity functions

$$T_1(s) = \frac{2s+1}{s^2+0.8s+1}, \qquad T_2(s) = \frac{-2s+1}{s^2+0.8s+1}$$

what can you say about possible RHP-poles or RHP-zeros in the corresponding loop transfer functions, $L_1(s)$ and $L_2(s)$?
The following exercise demonstrates another application of the internal stability requirement.

Exercise 4.12 Internal stability of two degrees-of-freedom control configurations. A two degrees-of-freedom controller allows one to improve performance by treating disturbance rejection and command tracking separately (at least to some degree). The general form shown in Figure 4.4(a) is usually preferred both for implementation and design. However, in some cases one may want to first design the pure feedback part of the controller, here denoted $K_y(s)$, for disturbance rejection, and then add a simple precompensator, $K_r(s)$, for command tracking. This approach is in general not optimal and may also yield problems when it comes to implementation, in particular if the feedback controller $K_y(s)$ contains RHP poles or zeros, which can happen. This implementation issue is dealt with in this exercise by considering the three possible schemes in Figures 4.4(b)-4.4(d). In all these schemes $K_r$ must clearly be stable.

Figure 4.4: Different forms of two degrees-of-freedom controller: (a) General form (b) Suitable when $K_y(s)$ has no RHP-zeros (c) Suitable when $K_y(s)$ is stable (no RHP-poles) (d) Suitable when $K_y(s) = K_1(s)K_2(s)$ where $K_1(s)$ contains no RHP-zeros and $K_2(s)$ no RHP-poles

1) Explain why the configuration in Figure 4.4(b) should not be used if $K_y$ contains RHP-zeros. (Hint: Avoid a RHP-zero between $r$ and $y$.)

2) Explain why the configuration in Figure 4.4(c) should not be used if $K_y$ contains RHP-poles. This implies that this configuration should not be used if we want integral action in $K_y$. (Hint: Avoid a RHP pole-zero cancellation between $r$ and $y$.)

3) Show that for a feedback controller $K_y$ the configuration in Figure 4.4(d) may be used, provided the RHP-poles (including integrators) of $K_y$ are contained in $K_1$ and the RHP-zeros in $K_2$. Discuss why one may often set $K_r = I$ in this case (to give a fourth possibility).

The requirement of internal stability also dictates that we must exercise care when we use a separate unstable disturbance model $G_d(s)$. To avoid this problem one should for state-space computations use a combined model for inputs and disturbances, i.e. write the model $y = Gu + G_d d$ in the form

$$y = \begin{bmatrix} G & G_d \end{bmatrix}\begin{bmatrix} u \\ d \end{bmatrix}$$

where $G$ and $G_d$ share the same states, see (4.14) and (4.17).

4.8 Stabilizing controllers

In this section, we introduce a parameterization, known as the Q-parameterization or Youla-parameterization (Youla et al., 1976), of all stabilizing controllers for a plant. By all stabilizing controllers we mean all controllers that yield internal stability of the closed-loop system. We first consider stable plants, for which the parameterization is easily derived, and then unstable plants, where we make use of the coprime factorization.

4.8.1 Stable plants

The following lemma forms the basis.

Lemma 4.6 For a stable plant $G(s)$ the negative feedback system in Figure 4.3 is internally stable if and only if $Q = K(I+GK)^{-1}$ is stable.

Proof: The four transfer functions in (4.80) and (4.81) are easily shown to be

$$K(I+GK)^{-1} = Q \qquad (4.85)$$
$$(I+GK)^{-1} = I - GQ \qquad (4.86)$$
$$(I+KG)^{-1} = I - QG \qquad (4.87)$$
$$G(I+KG)^{-1} = G(I - QG) \qquad (4.88)$$

which are clearly all stable if $G$ and $Q$ are stable. Thus, with $G$ stable the system is internally stable if and only if $Q$ is stable. $\square$

As proposed by Zames (1981), by solving (4.85) with respect to the controller $K$, we find that a parameterization of all stabilizing negative feedback controllers for the stable plant $G(s)$ is given by

$$K = (I - QG)^{-1}Q = Q(I - GQ)^{-1} \qquad (4.89)$$

where the "parameter" $Q$ is any stable transfer function matrix.

Remark 1 If only proper controllers are allowed, then $Q$ must be proper since the term $(I - QG)^{-1}$ is semi-proper.

Remark 2 We have shown that by varying $Q$ freely (but stably) we will always have internal stability, and thus avoid internal RHP pole-zero cancellations between $K$ and $G$. This means that although $Q$ may generate unstable controllers $K$, there is no danger of getting a RHP-pole in $K$ that cancels a RHP-zero in $G$.

The parameterization in (4.89) is identical to the internal model control (IMC) parameterization (Morari and Zafiriou, 1989) of stabilizing controllers. It may be derived directly from the IMC structure given in Figure 4.5. The idea behind the IMC structure is that the "controller" $Q$ can be designed in an open-loop fashion, since the feedback signal only contains information about the difference between the actual output and the output predicted from the model.

Figure 4.5: The internal model control (IMC) structure

Exercise 4.13 Show that the IMC structure in Figure 4.5 is internally unstable if either $Q$ or $G$ is unstable.

Exercise 4.14 Show that testing internal stability of the IMC structure is equivalent to testing for stability of the four closed-loop transfer functions in (4.85)-(4.88).

Exercise 4.15 Given a stable controller $K$. What set of plants can be stabilized by this controller? (Hint: Interchange the roles of plant and controller.)
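The defining property of the Q-parameterization, namely that any stable $Q$ yields an internally stable loop with $S = I - GQ$, can be checked symbolically. A small sketch (Python with sympy; not from the book, and the particular stable $G$ and $Q$ are arbitrary assumed examples):

```python
import sympy as sp

s = sp.symbols('s')
G = 1/(s + 1)                    # any stable plant (assumption of Lemma 4.6)
Q = (s + 3)/((s + 2)*(s + 4))    # any stable "parameter" Q

K = sp.cancel(Q/(1 - G*Q))       # Q-parameterized controller, eq. (4.89)
S = sp.cancel(1/(1 + G*K))       # sensitivity of the resulting loop

# S = 1 - G*Q, i.e. affine in Q, and its poles are the poles of G and Q
residual = sp.simplify(S - (1 - G*Q))
den = sp.fraction(S)[1]
poles = sp.Poly(den, s).all_roots()
```

Here `residual` is identically zero and every element of `poles` has negative real part, so the closed loop is stable for this (and any other) stable choice of $Q$.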
The Q-parameterization may be very useful for controller synthesis. First, the search over all stabilizing $K$'s (e.g. $K$'s for which $S = (I+GK)^{-1}$ must be stable) is replaced by a search over stable $Q$'s. Second, all closed-loop transfer functions ($S$, $T$, etc.) will be in the form $H_1 + H_2QH_3$, so they are affine in $Q$ (a function $f(x)$ is affine in $x$ if $f(x) = ax + b$, and is linear in $x$ if $f(x) = ax$). This further simplifies the optimization problem.

4.8.2 Unstable plants

For an unstable plant $G(s)$, consider its left coprime factorization

$$G(s) = M_l^{-1}N_l \qquad (4.90)$$

A parameterization of all stabilizing negative feedback controllers for the plant $G(s)$ is then (Vidyasagar, 1985)

$$K(s) = (V_r - QN_l)^{-1}(U_r + QM_l) \qquad (4.91)$$

where $V_r$ and $U_r$ satisfy the Bezout identity (4.19) for the right coprime factorization, and $Q(s)$ is any stable transfer function satisfying the technical condition $\det(V_r(\infty) - Q(\infty)N_l(\infty)) \neq 0$.

Remark 1 With $Q = 0$ we have $K_0 = V_r^{-1}U_r$, so $V_r$ and $U_r$ can alternatively be obtained from a left coprime factorization of some initial stabilizing controller $K_0$.

Remark 2 For a stable plant, we may write $G(s) = N_l(s)$ corresponding to $M_l = I$. In this case $K_0 = 0$ is a stabilizing controller, so we may from (4.19) select $U_r = 0$ and $V_r = I$, and (4.91) yields $K = (I - QG)^{-1}Q$ as found before in (4.89).

Remark 3 We can also formulate the parameterization of all stabilizing controllers in state-space form; see page 312 in Zhou et al. (1996) for details.

4.9 Stability analysis in the frequency domain

As noted above, the stability of a linear system is equivalent to the system having no poles in the closed right-half plane (RHP). This test may be used for any system, be it open-loop or closed-loop. In this section we will study the use of frequency-domain techniques to derive information about closed-loop stability from the open-loop transfer matrix $L(j\omega)$. This provides a direct generalization of Nyquist's stability test for SISO systems. Note that when we talk about eigenvalues in this section, we refer to the eigenvalues of a complex matrix, usually of $L(j\omega) = GK(j\omega)$, and not those of the state matrix $A$.

4.9.1 Open- and closed-loop characteristic polynomials

We first derive some preliminary results involving the determinant of the return difference operator $I + L$. Consider the feedback system shown in Figure 4.6, where $L(s)$ is the loop transfer function matrix.
Figure 4.6: Negative feedback system

Stability of the open-loop system is determined by the poles of $L(s)$. If $L(s)$ has a state-space realization $(A_{ol}, B_{ol}, C_{ol}, D_{ol})$, that is,

$$L(s) = C_{ol}(sI - A_{ol})^{-1}B_{ol} + D_{ol} \qquad (4.92)$$

then the poles of $L(s)$ are the roots of the open-loop characteristic polynomial

$$\phi_{ol}(s) = \det(sI - A_{ol}) \qquad (4.93)$$

Assume there are no RHP pole-zero cancellations between $G(s)$ and $K(s)$. Then from Theorem 4.5 internal stability of the closed-loop system is equivalent to the stability of $S(s) = (I + L(s))^{-1}$. The state matrix of $S(s)$ is given (assuming $L(s)$ is well-posed, i.e. $D_{ol} + I$ is invertible) by

$$A_{cl} = A_{ol} - B_{ol}(I + D_{ol})^{-1}C_{ol} \qquad (4.94)$$

This equation may be derived by writing down the state-space equations for the transfer function from $r$ to $y$ in Figure 4.6,

$$\dot{x} = A_{ol}x + B_{ol}(r - y) \qquad (4.95)$$
$$y = C_{ol}x + D_{ol}(r - y) \qquad (4.96)$$

and using (4.96) to eliminate $y$ from (4.95). The closed-loop characteristic polynomial is thus given by

$$\phi_{cl}(s) \triangleq \det(sI - A_{cl}) = \det(sI - A_{ol} + B_{ol}(I + D_{ol})^{-1}C_{ol}) \qquad (4.97)$$

Relationship between characteristic polynomials

The above identities may be used to express the determinant of the return difference operator, $I + L$, in terms of $\phi_{cl}(s)$ and $\phi_{ol}(s)$. From (4.92) we get

$$\det(I + L(s)) = \det(I + C_{ol}(sI - A_{ol})^{-1}B_{ol} + D_{ol}) \qquad (4.98)$$

Schur's formula (A.14) then yields (with $A_{11} = sI - A_{ol}$, $A_{12} = -B_{ol}$, $A_{21} = C_{ol}$, $A_{22} = I + D_{ol}$)

$$\det(I + L(s)) = c \cdot \frac{\phi_{cl}(s)}{\phi_{ol}(s)} \qquad (4.99)$$

where $c = \det(I + D_{ol})$ is a constant which is of no significance when evaluating the poles. Note that $\phi_{cl}(s)$ and $\phi_{ol}(s)$ are polynomials in $s$ which have zeros only, whereas $\det(I + L(s))$ is a transfer function with both poles and zeros.
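Identity (4.99) is easy to verify numerically. A sketch (Python with numpy; not from the book), for an assumed diagonal $2\times 2$ loop with $D_{ol} = 0$, so that $c = \det(I + D_{ol}) = 1$, evaluated at an arbitrary test point $s_0$:

```python
import numpy as np

# state-space data for L(s) = C (sI - A)^{-1} B  (D = 0, hence c = 1)
A = np.diag([-1.0, -2.0])
B = np.eye(2)
C = np.eye(2)

A_cl = A - B @ C               # closed-loop state matrix, eq. (4.94) with D = 0

s0 = 0.7 + 1.3j                # arbitrary test point (not a pole of L)
I = np.eye(2)
L0 = C @ np.linalg.solve(s0*I - A, B)

lhs = np.linalg.det(I + L0)                                   # det(I + L(s0))
rhs = np.linalg.det(s0*I - A_cl) / np.linalg.det(s0*I - A)    # phi_cl / phi_ol
```

For this example $\det(I+L) = \frac{(s+2)(s+3)}{(s+1)(s+2)}$, and `lhs` and `rhs` agree to machine precision at any non-pole test point.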
Example 4.15 We will re-derive expression (4.99) for SISO systems. Let $L(s) = kz(s)/\phi_{ol}(s)$. The sensitivity function is given by

$$S(s) = \frac{1}{1 + L(s)} = \frac{\phi_{ol}(s)}{kz(s) + \phi_{ol}(s)} \qquad (4.100)$$

and the denominator is

$$\phi_{cl}(s) = kz(s) + \phi_{ol}(s) = \phi_{ol}(s)\left(1 + \frac{kz(s)}{\phi_{ol}(s)}\right) = \phi_{ol}(s)(1 + L(s)) \qquad (4.101)$$

which is the same as $\phi_{cl}(s)$ in (4.99) (except for the constant $c$, which is necessary to make the leading coefficient of $\phi_{cl}(s)$ equal to 1, as required by its definition).

Remark 1 One may be surprised to see from (4.100) that the zero polynomial of $S(s)$ is equal to the open-loop pole polynomial, $\phi_{ol}(s)$, but this is indeed correct. On the other hand, note from (4.74) that the zero polynomial of $T(s) = L(s)/(1 + L(s))$ is equal to $kz(s)$, the open-loop zero polynomial.

Remark 2 From (4.99), for the case when there are no cancellations between $\phi_{ol}(s)$ and $\phi_{cl}(s)$, we have that the closed-loop poles are the solutions to

$$\det(I + L(s)) = 0 \qquad (4.102)$$
4.9.2 MIMO Nyquist stability criteria
We will consider the negative feedback system of Figure 4.6, and assume there are no internal RHP pole-zero cancellations in the loop transfer function matrix $L(s)$, i.e. $L(s)$ contains no unstable hidden modes. Expression (4.99) for $\det(I + L(s))$ then enables a straightforward generalization of Nyquist's stability condition to multivariable systems.

Theorem 4.7 Generalized (MIMO) Nyquist theorem. Let $P_{ol}$ denote the number of open-loop unstable poles in $L(s)$. The closed-loop system with loop transfer function $L(s)$ and negative feedback is stable if and only if the Nyquist plot of $\det(I + L(s))$ i) makes $P_{ol}$ anti-clockwise encirclements of the origin, and ii) does not pass through the origin.

The theorem is proved below, but let us first make some important remarks.

Remark 1 By "Nyquist plot of $\det(I + L(s))$" we mean "the image of $\det(I + L(s))$ as $s$ goes clockwise around the Nyquist contour". The Nyquist contour includes the entire $j\omega$-axis ($s = j\omega$) and an infinite semicircle into the right-half plane, as illustrated in Figure 4.7. The contour must also avoid locations where $L(s)$ has $j\omega$-axis poles by making small indentations (semicircles) around these points.

Remark 2 In the following we define, for practical reasons, unstable poles or RHP-poles as poles in the open RHP, excluding the $j\omega$-axis. In this case the Nyquist contour should make a small semicircular indentation into the RHP at locations where $L(s)$ has $j\omega$-axis poles, thereby avoiding the extra count of encirclements due to $j\omega$-axis poles.
Figure 4.7: Nyquist contour for a system with no open-loop $j\omega$-axis poles
Remark 3 Another practical way of avoiding the indentation is to shift all $j\omega$-axis poles slightly into the LHP, for example, by replacing the integrator $1/s$ by $1/(s+\epsilon)$, where $\epsilon$ is a small positive number.

Remark 4 We see that for stability $\det(I + L(j\omega))$ should make no encirclements of the origin if $L(s)$ is open-loop stable, and should make $P_{ol}$ anti-clockwise encirclements if $L(s)$ is unstable. If this condition is not satisfied, then the number of closed-loop unstable poles of $(I + L(s))^{-1}$ is $P_{cl} = N + P_{ol}$, where $N$ is the number of clockwise encirclements of the origin by the Nyquist plot of $\det(I + L(j\omega))$.

Remark 5 For any real system, $L(s)$ is proper, and so to plot $\det(I + L(s))$ as $s$ traverses the Nyquist contour we need only consider $s = j\omega$ along the imaginary axis. This follows since $\lim_{s\to\infty} L(s) = D_{ol}$ is finite, and therefore for $s = \infty$ the Nyquist plot of $\det(I + L(s))$ converges to $\det(I + D_{ol})$, which is on the real axis.

Remark 6 In many cases $L(s)$ contains integrators, so for $\omega = 0$ the plot of $\det(I + L(j\omega))$ may "start" from $\pm\infty$. A typical plot for positive frequencies is shown in Figure 4.8 for the system

$$L = GK, \quad G = \frac{3(-2s+1)}{(5s+1)(10s+1)}, \quad K = 1.14\,\frac{12.7s+1}{12.7s} \qquad (4.103)$$

Note that the solid and dashed curves (positive and negative frequencies) need to be connected as $\omega$ approaches 0, so there is also a large (infinite) semicircle (not shown) corresponding to the indentation of the contour into the RHP at $s = 0$ (the indentation is to avoid the integrator in $L(s)$). To find which way the large semicircle goes, one can use the rule (based on conformal mapping arguments) that a right-angled turn in the contour will result in a right-angled turn in the Nyquist plot. It then follows for the example in (4.103) that there will be an infinite semicircle into the RHP. There are therefore no encirclements of the origin. Since there are no open-loop unstable poles ($j\omega$-axis poles are excluded in the counting), $P_{ol} = 0$, and we conclude that the closed-loop system is stable.

Proof of Theorem 4.7: The proof makes use of the following result from complex variable theory (Churchill et al., 1974):
Figure 4.8: Typical Nyquist plot of $\det(I + L(j\omega))$
denote a
Lemma 4.8 Argument Principle. Consider a (transfer) function closed contour in the complex plane. Assume that: 1. 2. 3.
´×µ is analytic along , that is, ´×µ has no poles on ´×µ has zeros inside . ´×µ has È poles inside .
.
Then the image ´×µ as the complex argument × traverses the contour È clockwise encirclements of the origin. direction will make
once in a clockwise
Let
Æ ´ ´×µ µ denote the number of clockwise encirclements of the point by the image ´×µ as × traverses the contour clockwise. Then a restatement of Lemma 4.8 is Æ ´¼ ´×µ µ È
(4.104)
Ð ´×µ Ø´Á · Ä´×µµ We now recall (4.99) and apply Lemma 4.8 to the function ´×µ ÓÐ ´×µ selecting to be the Nyquist contour. We assume Ø´Á · ÓÐ µ ¼ since otherwise goes along the axis and around the feedback system would be illposed. The contour ¼) by the entire RHP, but avoids openloop poles of Ä´×µ on the axis (where ÓÐ ´ µ making small semicircles into the RHP. This is needed to make ´×µ analytic along . We ÈÓÐ poles and È Ð zeros inside . Here È Ð denotes the then have that ´×µ has È number of unstable closedloop poles (in the open RHP). (4.104) then gives
Æ ´¼ Ø´Á · Ä´×µµ µ È Ð ÈÓÐ
(4.105)
Since the system is stable if and only if È Ð ¼, condition i) of Theorem 4.7 follows. However, Ø´Á · Ä´×µµ, and hence Ð ´×µ has we have not yet considered the possibility that ´×µ
½¼
zeros on the avoid this, 4.7 follows.
ÅÍÄÌÁÎ ÊÁ
Ä
Ã ÇÆÌÊÇÄ
Ø´Á · Ä´ µµ must not be zero for any value of
contour itself, which will also correspond to a closedloop unstable pole. To and condition ii) in Theorem
¾
Example 4.16 SISO stability conditions. Consider an open-loop stable SISO system. In this case, the Nyquist stability condition states that for closed-loop stability the Nyquist plot of $1 + L(s)$ should not encircle the origin. This is equivalent to the Nyquist plot of $L(j\omega)$ not encircling the point $-1$ in the complex plane.
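The encirclement count in Theorem 4.7 can be evaluated numerically by unwrapping the phase of $\det(I + L(j\omega))$ along a frequency grid approximating the Nyquist contour. A sketch (Python with numpy; not from the book, and the stable $2\times 2$ loop below is an assumed illustration with $P_{ol} = 0$):

```python
import numpy as np

def L(s):
    # a stable 2x2 loop transfer matrix (hypothetical example, P_ol = 0)
    return np.array([[1.0/(s + 1), 0.5/(s + 2)],
                     [0.2/(s + 1), 1.0/(s + 3)]])

w = np.linspace(-100.0, 100.0, 40001)     # grid approximating the jw-axis
f = np.array([np.linalg.det(np.eye(2) + L(1j*wk)) for wk in w])

# clockwise encirclements of the origin: minus the net phase increase
phase = np.unwrap(np.angle(f))
N = round((phase[0] - phase[-1]) / (2*np.pi))

P_ol = 0
P_cl = N + P_ol          # Remark 4: number of closed-loop RHP poles
```

Since $L$ is proper and stable, the infinite semicircle maps to the single point $\det(I + D_{ol})$ (Remark 5), so the $j\omega$-axis sweep captures the whole contour; here `N = 0` and the closed loop is stable.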
4.9.3 Eigenvalue loci
The eigenvalue loci (sometimes called characteristic loci) are defined as the eigenvalues of the frequency response of the open-loop transfer function, $\lambda_i(L(j\omega))$. They partly provide a generalization of the Nyquist plot of $L(j\omega)$ from SISO to MIMO systems, and with them gain and phase margins can be defined as in the classical sense. However, these margins are not too useful, as they only indicate stability with respect to a simultaneous parameter change in all of the loops. Therefore, although characteristic loci were well researched in the 1970s and greatly influenced the British developments in multivariable control, e.g. see Postlethwaite and MacFarlane (1979), they will not be considered further in this book.
4.9.4 Small gain theorem
The small gain theorem is a very general result which we will find useful in the book. We first present a generalized version of it in terms of the spectral radius, $\rho(L(j\omega))$, which at each frequency is defined as the maximum eigenvalue magnitude

$$\rho(L(j\omega)) \triangleq \max_i |\lambda_i(L(j\omega))| \qquad (4.106)$$

Theorem 4.9 Spectral radius stability condition. Consider a system with a stable loop transfer function $L(s)$. Then the closed-loop system is stable if

$$\rho(L(j\omega)) < 1 \quad \forall\omega \qquad (4.107)$$

Proof: The generalized Nyquist theorem (Theorem 4.7) says that if $L(s)$ is stable, then the closed-loop system is stable if and only if the Nyquist plot of $\det(I + L(s))$ does not encircle the origin. To prove condition (4.107) we will prove the "reverse", that is, if the system is unstable and therefore $\det(I + L(s))$ does encircle the origin, then there is an eigenvalue $\lambda_i(L(j\omega))$ which is larger than 1 in magnitude at some frequency. If $\det(I + L(s))$ does encircle the origin, then there must exist a gain $\epsilon \in (0, 1]$ and a frequency $\omega_0$ such that

$$\det(I + \epsilon L(j\omega_0)) = 0 \qquad (4.108)$$
This is easily seen by geometric arguments, since $\det(I + \epsilon L(j\omega_0)) = 1$ for $\epsilon = 0$. (4.108) is equivalent to (see eigenvalue properties in Appendix A.2.1)

$$\prod_i \lambda_i(I + \epsilon L(j\omega_0)) = 0 \qquad (4.109)$$
$$\Leftrightarrow\; 1 + \epsilon\lambda_i(L(j\omega_0)) = 0 \;\text{ for some } i \qquad (4.110)$$
$$\Leftrightarrow\; \lambda_i(L(j\omega_0)) = -1/\epsilon \;\text{ for some } i \qquad (4.111)$$
$$\Rightarrow\; |\lambda_i(L(j\omega_0))| \geq 1 \;\text{ for some } i \qquad (4.112)$$
$$\Leftrightarrow\; \rho(L(j\omega_0)) \geq 1 \qquad (4.113)$$

$\square$

Theorem 4.9 is quite intuitive, as it simply says that if the system gain is less than 1 in all directions (all eigenvalues) and for all frequencies ($\forall\omega$), then all signal deviations will eventually die out, and the system is stable. In general, the spectral radius theorem is conservative because phase information is not considered. For SISO systems $\rho(L(j\omega)) = |L(j\omega)|$, and consequently the above stability condition requires that $|L(j\omega)| < 1$ for all frequencies. This is clearly conservative, since from the Nyquist stability condition for a stable $L(s)$ we need only require $|L(j\omega)| < 1$ at frequencies where the phase of $L(j\omega)$ is $-180° \pm n\cdot 360°$. As an example, let $L = k/(s + \epsilon)$. Since the phase never reaches $-180°$, the system is closed-loop stable for any value of $k > 0$. However, to satisfy (4.107) we need $k < \epsilon$, which for a small value of $\epsilon$ is very conservative indeed.

Remark. Later we will consider cases where the phase of $L$ is allowed to vary freely, in which case Theorem 4.9 is not conservative. Actually, a clever use of the above theorem is the main idea behind most of the conditions for robust stability and robust performance presented later in this book.

The small gain theorem below follows directly from Theorem 4.9 if we consider a matrix norm satisfying $\|AB\| \leq \|A\|\cdot\|B\|$, since at any frequency we then have $\rho(L) \leq \|L\|$ (see (A.116)).
Theorem 4.10 Small Gain Theorem. Consider a system with a stable loop transfer function $L(s)$. Then the closed-loop system is stable if

$$\|L(j\omega)\| < 1 \quad \forall\omega \qquad (4.114)$$

where $\|\cdot\|$ denotes any matrix norm satisfying $\|AB\| \leq \|A\|\cdot\|B\|$.

Remark 1 This result is only a special case of a more general small gain theorem, which also applies to many nonlinear systems (Desoer and Vidyasagar, 1975).

Remark 2 The small gain theorem does not consider phase information, and is therefore independent of the sign of the feedback.

Remark 3 Any induced norm can be used, for example, the singular value, $\bar{\sigma}(L)$.

Remark 4 The small gain theorem can be extended to include more than one block in the loop, e.g. $L = L_1L_2$. In this case we get from (A.97) that the system is stable if $\|L_1\|\cdot\|L_2\| < 1 \;\forall\omega$.

Remark 5 The small gain theorem is generally more conservative than the spectral radius condition in Theorem 4.9. Therefore, the arguments on conservatism made following Theorem 4.9 also apply to Theorem 4.10.
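The relation $\rho(L) \leq \bar{\sigma}(L)$, which underlies the step from Theorem 4.9 to Theorem 4.10, is easy to check on a frequency grid. A sketch (Python with numpy; not from the book, and the stable $2\times 2$ loop is an assumed example):

```python
import numpy as np

def L(s):
    # hypothetical stable 2x2 loop used to compare the two conditions
    return np.array([[0.4/(s + 1), 0.3/(s + 2)],
                     [0.2/(s + 1), 0.4/(s + 3)]])

w = np.logspace(-2, 2, 400)
# spectral radius: max eigenvalue magnitude at each frequency, eq. (4.106)
rho = max(np.abs(np.linalg.eigvals(L(1j*wk))).max() for wk in w)
# maximum singular value (induced 2-norm) at each frequency
sigma = max(np.linalg.svd(L(1j*wk), compute_uv=False)[0] for wk in w)
```

Here `rho <= sigma` at every frequency, so whenever the small gain condition (4.114) with $\|\cdot\| = \bar{\sigma}(\cdot)$ holds (as it does for this example), the less conservative spectral radius condition (4.107) holds as well.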
4.10 System norms
Figure 4.9: System $G$

Consider the system in Figure 4.9, with a stable transfer function matrix $G(s)$ and impulse response matrix $g(t)$. To evaluate the performance we ask the question: given information about the allowed input signals $w(t)$, how large can the outputs $z(t)$ become? To answer this, we must evaluate the relevant system norm. We will here evaluate the output signal in terms of the usual 2-norm,

$$\|z(t)\|_2 = \sqrt{\int_{-\infty}^{\infty} \sum_i |z_i(\tau)|^2\, d\tau} \qquad (4.115)$$

and consider three different choices for the inputs:

1. $w(t)$ is a series of unit impulses.
2. $w(t)$ is any signal satisfying $\|w(t)\|_2 = 1$.
3. $w(t)$ is any signal satisfying $\|w(t)\|_2 = 1$, but $w(t) = 0$ for $t \geq 0$, and we only measure $z(t)$ for $t \geq 0$.

The relevant system norms in the three cases are the $H_2$, $H_\infty$, and Hankel norms, respectively. The $H_2$ and $H_\infty$ norms also have other interpretations, as discussed below. We introduced the $H_2$ and $H_\infty$ norms in Section 2.7, where we also discussed the terminology. In Appendix A.5.7 we present a more detailed interpretation and comparison of these and other norms.
4.10.1 $H_2$ norm

Consider a strictly proper system $G(s)$, i.e. $D = 0$ in a state-space realization. For the $H_2$ norm we use the Frobenius norm spatially (for the matrix) and integrate over frequency, i.e.

$$\|G(s)\|_2 \triangleq \sqrt{\frac{1}{2\pi}\int_{-\infty}^{\infty} \underbrace{\mathrm{tr}\big(G(j\omega)^H G(j\omega)\big)}_{=\sum_{ij}|g_{ij}(j\omega)|^2 = \|G(j\omega)\|_F^2}\, d\omega} \qquad (4.116)$$

We see that $G(s)$ must be strictly proper, otherwise the $H_2$ norm is infinite. The $H_2$ norm can also be given another interpretation. By Parseval's theorem, (4.116) is equal to the $H_2$ norm of the impulse response

$$\|G(s)\|_2 = \|g(t)\|_2 \triangleq \sqrt{\int_0^\infty \underbrace{\mathrm{tr}\big(g^T(\tau)g(\tau)\big)}_{=\sum_{ij}|g_{ij}(\tau)|^2 = \|g(\tau)\|_F^2}\, d\tau} \qquad (4.117)$$

Remark 1 Note that $G(s)$ and $g(t)$ are dynamic systems, while $G(j\omega)$ and $g(\tau)$ are constant matrices (for a given value of $\omega$ or $\tau$).

Remark 2 We can change the order of integration and summation in (4.117) to get

$$\|G(s)\|_2 = \|g(t)\|_2 = \sqrt{\sum_{ij}\int_0^\infty |g_{ij}(\tau)|^2\, d\tau} \qquad (4.118)$$

where $g_{ij}(t)$ is the $ij$'th element of the impulse response matrix, $g(t)$. From this we see that the $H_2$ norm can be interpreted as the 2-norm output resulting from applying unit impulses $\delta_j(t)$ to each input, one after another (allowing the output to settle to zero before applying an impulse to the next input). This is more clearly seen by writing $\|G(s)\|_2^2 = \sum_{i=1}^{m}\|z_i(t)\|_2^2$, where $z_i(t)$ is the output vector resulting from applying a unit impulse $\delta_i(t)$ to the $i$'th input.

In summary, we have the following deterministic performance interpretation of the $H_2$ norm:

$$\|G(s)\|_2 = \max_{w(t)\, =\, \text{unit impulses}} \|z(t)\|_2 \qquad (4.119)$$

The $H_2$ norm can also be given a stochastic interpretation (see page 371) in terms of the quadratic criterion in optimal control (LQG), where we measure the expected root mean square (rms) value of the output in response to white noise excitation.

For numerical computations of the $H_2$ norm, consider the state-space realization $G(s) = C(sI - A)^{-1}B$. By substituting (4.10) into (4.117) we find

$$\|G(s)\|_2 = \sqrt{\mathrm{tr}(B^T Q B)} \quad\text{or}\quad \|G(s)\|_2 = \sqrt{\mathrm{tr}(C P C^T)} \qquad (4.120)$$

where $Q$ and $P$ are the observability and controllability Gramians, respectively, obtained as solutions to the Lyapunov equations (4.48) and (4.44).
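The Gramian formula (4.120) translates directly into a few lines of code. A sketch (Python with scipy; not from the book), checked against the analytic value $\|1/(s+a)\|_2 = \sqrt{1/(2a)}$ for the first-order plant used in Example 4.17 below:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    # controllability Gramian P solves  A P + P A^T + B B^T = 0,
    # then ||G||_2 = sqrt(tr(C P C^T)), eq. (4.120)
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# G(s) = 1/(s + a):  A = [-a], B = [1], C = [1];  ||G||_2 = sqrt(1/(2a))
a = 2.0
A, B, C = np.array([[-a]]), np.array([[1.0]]), np.array([[1.0]])
h2 = h2_norm(A, B, C)
```

For $a = 2$ the Gramian is $P = 1/(2a) = 0.25$ and `h2` equals $0.5 = \sqrt{1/(2a)}$, matching (4.126).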
4.10.2
À½ norm
À
Consider a proper linear stable system ´×µ (i.e. ¼ is allowed). For the ½ norm we use the singular value (induced 2norm) spatially (for the matrix) and pick out the peak value as a function of frequency ´×µ ½ Ñ Ü ´ ´ µµ (4.121)
¸
In terms of performance we see from (4.121) that the H_\infty norm is the peak of the transfer function "magnitude", and by introducing weights, the H_\infty norm can be interpreted as the magnitude of some closed-loop transfer function relative to a specified upper bound. This leads to specifying performance in terms of weighted sensitivity, mixed sensitivity, and so on.

However, the H_\infty norm also has several time domain performance interpretations. First, as discussed in Section 3.3.5, it is the worst-case steady-state gain for sinusoidal inputs at any frequency. Furthermore, from Tables A.1 and A.2 in the Appendix we see that the H_\infty norm is equal to the induced (worst-case) 2-norm in the time domain:

\|G(s)\|_\infty = \max_{w(t) \neq 0} \frac{\|z(t)\|_2}{\|w(t)\|_2} = \max_{\|w(t)\|_2 = 1} \|z(t)\|_2    (4.122)
This is a fortunate fact from functional analysis which is proved, for example, in Desoer and Vidyasagar (1975). In essence, (4.122) arises because the worst input signal w(t) is a sinusoid with frequency \omega^* and a direction which gives \bar{\sigma}(G(j\omega^*)) as the maximum gain. The H_\infty norm is also equal to the induced power (rms) norm, and also has an interpretation as an induced norm in terms of the expected values of stochastic signals. All these various interpretations make the H_\infty norm useful in engineering applications.

The H_\infty norm is usually computed numerically from a state-space realization as the smallest value of \gamma such that the Hamiltonian matrix H has no eigenvalues on the imaginary axis, where

H = \begin{bmatrix} A + B R^{-1} D^T C & B R^{-1} B^T \\ -C^T (I + D R^{-1} D^T) C & -(A + B R^{-1} D^T C)^T \end{bmatrix}    (4.123)

and R = \gamma^2 I - D^T D; see Zhou et al. (1996, p. 115). This is an iterative procedure, where one may start with a large value of \gamma and reduce it until imaginary eigenvalues for H appear.
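This gamma-iteration can be illustrated on the simplest possible case. The sketch below (not from the book) bisects on gamma for G(s) = 1/(s+a), where the 2x2 Hamiltonian can be handled in closed form:

```python
import math

# Not from the book: gamma-iteration of (4.123) for G(s) = 1/(s+a).
# With A = -a, B = C = 1, D = 0 we get R = gamma^2 and
#     H = [[-a, 1/gamma^2], [-1, a]]
# whose eigenvalues are +/- sqrt(a^2 - 1/gamma^2); they lie on the
# imaginary axis exactly when gamma < 1/a.
def hinf_norm_first_order(a, tol=1e-10):
    def has_imag_eigs(gamma):
        return a * a - 1.0 / (gamma * gamma) < 0.0

    lo, hi = tol, 1.0e6        # lo: gamma too small, hi: gamma large enough
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if has_imag_eigs(mid):
            lo = mid           # imaginary eigenvalues appeared: increase gamma
        else:
            hi = mid           # no imaginary eigenvalues: try a smaller gamma
    return hi

print(hinf_norm_first_order(0.5))   # -> about 2.0, i.e. 1/a
```

The bisection converges to 1/a, which is the peak of |G(jw)| found by direct frequency-domain calculation in Example 4.17.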
4.10.3 Difference between the H_2 and H_\infty norms

To understand the difference between the H_2 and H_\infty norms, note that from (A.126) we can write the Frobenius norm in terms of singular values. We then have

\|G(s)\|_2 = \sqrt{ \frac{1}{2\pi} \int_{-\infty}^{\infty} \sum_i \sigma_i^2 (G(j\omega)) \, d\omega }    (4.124)
From this we see that minimizing the H_\infty norm corresponds to minimizing the peak of the largest singular value ("worst direction, worst frequency"), whereas minimizing the H_2 norm corresponds to minimizing the sum of the squares of all the singular values over all frequencies ("average direction, average frequency"). In summary, we have:
ELEMENTS OF LINEAR SYSTEM THEORY
- H_\infty: "push down peak of largest singular value".
- H_2: "push down whole thing" (all singular values over all frequencies).
Example 4.17 We will compute the H_\infty and H_2 norms for the following SISO plant:

G(s) = \frac{1}{s+a}    (4.125)

The H_2 norm is

\|G(s)\|_2 = \left( \frac{1}{2\pi} \int_{-\infty}^{\infty} \underbrace{|G(j\omega)|^2}_{\frac{1}{\omega^2 + a^2}} \, d\omega \right)^{\frac{1}{2}} = \left( \frac{1}{2\pi} \left[ \frac{1}{a} \tan^{-1}\!\frac{\omega}{a} \right]_{-\infty}^{\infty} \right)^{\frac{1}{2}} = \sqrt{\frac{1}{2a}}    (4.126)
To check Parseval's theorem we consider the impulse response

g(t) = \mathcal{L}^{-1}\!\left\{ \frac{1}{s+a} \right\} = e^{-at}, \quad t \geq 0    (4.127)

and we get

\|g(t)\|_2 = \sqrt{ \int_0^\infty (e^{-at})^2 \, dt } = \sqrt{\frac{1}{2a}}    (4.128)

as expected.
The H_\infty norm is

\|G(s)\|_\infty = \max_\omega |G(j\omega)| = \max_\omega \frac{1}{(\omega^2 + a^2)^{1/2}} = \frac{1}{a}    (4.129)
For interest, we also compute the 1-norm of the impulse response (which is equal to the induced infinity-norm in the time domain):

\|g(t)\|_1 = \int_0^\infty \underbrace{e^{-at}}_{|g(t)|} \, dt = \frac{1}{a}    (4.130)

In general, it can be shown that \|G(s)\|_\infty \leq \|g(t)\|_1, and this example illustrates that we may have equality.
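The three quantities computed in Example 4.17 are easy to confirm numerically. The following sketch (not from the book) uses simple quadrature and grid searches for a = 2, where all three norms equal 0.5:

```python
import math

# Not from the book: numerical checks of (4.126), (4.129) and (4.130) for a = 2
a = 2.0
G = lambda w: 1.0 / math.sqrt(w * w + a * a)     # |G(jw)| for G(s) = 1/(s+a)

# H2 norm by quadrature of (1/(2*pi)) * integral of |G(jw)|^2 dw
N, W = 100000, 1000.0
dw = 2.0 * W / N
h2_sq = sum(G(-W + k * dw) ** 2 * dw for k in range(N)) / (2.0 * math.pi)
print(math.sqrt(h2_sq), math.sqrt(1.0 / (2.0 * a)))   # both about 0.5

# H-infinity norm: peak of |G(jw)| over the grid (attained at w = 0)
hinf = max(G(-W + k * dw) for k in range(N))
print(hinf, 1.0 / a)                                  # both 0.5

# 1-norm of the impulse response g(t) = exp(-a*t)
dt = 1e-4
g1 = sum(math.exp(-a * k * dt) * dt for k in range(200000))
print(g1, 1.0 / a)                                    # both about 0.5
```

The small discrepancies come only from truncating and discretizing the integrals.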
Example 4.18 There exists no general relationship between the H_2 and H_\infty norms. As an example consider the two systems

f_1(s) = \frac{1}{\epsilon s + 1}, \qquad f_2(s) = \frac{\epsilon s}{s^2 + \epsilon s + 1}    (4.131)

and let \epsilon \to 0. Then we have for f_1 that the H_\infty norm is 1 and the H_2 norm is infinite. For f_2 the H_\infty norm is again 1, but now the H_2 norm is zero.
Why is the H_\infty norm so popular? In robust control we use the H_\infty norm mainly because it is convenient for representing unstructured model uncertainty, and because it satisfies the multiplicative property (A.97):

\|G_1(s) G_2(s)\|_\infty \leq \|G_1(s)\|_\infty \cdot \|G_2(s)\|_\infty    (4.132)
This follows from (4.122), which shows that the H_\infty norm is an induced norm.

What is wrong with the H_2 norm? The H_2 norm has a number of good mathematical and numerical properties, and its minimization has important engineering implications. However, the H_2 norm is not an induced norm and does not satisfy the multiplicative property. This implies that we cannot, by evaluating the H_2 norm of the individual components, say anything about how their series (cascade) interconnection will behave.
Example 4.19 Consider again G(s) = 1/(s+a) in (4.125), for which we found \|G(s)\|_2 = \sqrt{1/(2a)}. Now consider the H_2 norm of G(s)G(s):

\|G(s)G(s)\|_2 = \sqrt{ \int_0^\infty \Big( \underbrace{\mathcal{L}^{-1}\big\{ \tfrac{1}{(s+a)^2} \big\}}_{t e^{-at}} \Big)^2 dt } = \sqrt{\frac{1}{4a^3}} = \sqrt{\frac{1}{a}} \, \|G(s)\|_2 \cdot \|G(s)\|_2    (4.133)

and we find, for a \leq 1, that

\|G(s)G(s)\|_2 \geq \|G(s)\|_2 \cdot \|G(s)\|_2

which does not satisfy the multiplicative property (A.97). On the other hand, the H_\infty norm does satisfy the multiplicative property, and for this specific example we have equality with \|G(s)G(s)\|_\infty = 1/a^2 = \|G(s)\|_\infty \cdot \|G(s)\|_\infty.
4.10.4 Hankel norm

In the following discussion, we aim at developing an understanding of the Hankel norm. The Hankel norm of a stable system G(s) is obtained when one applies an input w(t) up to t = 0 and measures the output z(t) for t > 0, and selects w(t) to maximize the ratio of the 2-norms of these two signals:

\|G(s)\|_H \triangleq \max_{w(t)} \sqrt{ \frac{ \int_0^\infty \|z(\tau)\|_2^2 \, d\tau }{ \int_{-\infty}^0 \|w(\tau)\|_2^2 \, d\tau } }    (4.134)
The Hankel norm is a kind of induced norm from past inputs to future outputs. Its definition is analogous to trying to pump a swing with limited input energy such that the subsequent length of jump is maximized, as illustrated in Figure 4.10. It may be shown that the Hankel norm is equal to

\|G(s)\|_H = \sqrt{\rho(PQ)}    (4.135)

where \rho is the spectral radius (maximum eigenvalue), P is the controllability Gramian defined in (4.43) and Q the observability Gramian defined in (4.47). The name "Hankel" is used because the matrix PQ has the special structure of a Hankel matrix (which has identical elements along the "wrong-way" diagonals). The corresponding Hankel singular values are the positive square roots of the eigenvalues of PQ,

\sigma_i = \sqrt{\lambda_i(PQ)}    (4.136)
Figure 4.10: Pumping a swing: illustration of Hankel norm. The input is applied for t <= 0 and the jump starts at t = 0.

The Hankel and H_\infty norms are closely related and we have (Zhou et al., 1996, p. 111)

\|G(s)\|_H \leq \|G(s)\|_\infty \leq 2 \sum_{i=1}^{n} \sigma_i    (4.137)

Thus, the Hankel norm is always smaller than (or equal to) the H_\infty norm. This is reasonable by comparing the definitions in (4.122) and (4.134), since the inputs only affect the outputs through the states at t = 0.

Model reduction. Consider the following problem: given a state-space description G(s) of a system, find a model G^a(s) with fewer states such that the input-output behaviour (from w to z) is changed as little as possible. Based on the discussion above it seems reasonable to make use of the Hankel norm. For model reduction, we usually start with a realization of G which is internally balanced, that is, such that Q = P = \Sigma, where \Sigma is the matrix of Hankel singular values. We may then discard states (or rather combinations of states corresponding to certain subspaces) corresponding to the smallest Hankel singular values. The change in H_\infty norm caused by deleting states in G(s) is less than twice the sum of the discarded Hankel singular values:

\|G(s) - G^a_k(s)\|_\infty \leq 2 (\sigma_{k+1} + \sigma_{k+2} + \cdots)    (4.138)

where G^a_k(s) denotes a truncated or residualized balanced realization with k states; see Chapter 11. The method of Hankel norm minimization gives a somewhat improved error bound, where we are guaranteed that \|G(s) - G^a_k(s)\|_\infty is less than the sum of the discarded Hankel singular values. This and other methods for model reduction are discussed in detail in Chapter 11, where a number of examples can be found.

Example 4.20 We want to compute analytically the various system norms for G(s) = 1/(s+a) using state-space methods. A state-space realization is A = -a, B = 1, C = 1, D = 0.
4.120) the À¾ norm is then ´×µ ¾ ´À µ Ô ØÖ´ Ì É µ Ô ½¾ The eigenvalues of the Hamiltonian matrix À in (4. Similarly. The topics are standard and the treatment is complete for the purposes of this book. The controllability Gramian È is obtained from the Lyapunov equation Ì ¸ È È ½.135) the Hankel norm is ´×µ À ´È Éµ ½¾ These results agree with the frequencydomain calculations in Example 4.123) are ½ ¾ ½ ´×µ ½ Ô Ô ¦ ¾ ½ ¾ We ﬁnd that À has no imaginary eigenvalues for ½ ½ .11 Conclusion This chapter has covered the following important elements of linear system theory: system descriptions. [sysb. state controllability and observability.19 and 4.hsig]=sysbal(sys).16 Let ¼ and ¯ ¼ ¼¼¼½ and check numerically the results in Examples 4. the MATLAB toolbox commands h2norm.17. poles and zeros. for example. stability and stabilization. max(hsig). From (4. and system norms. hinfnorm.17.18. so È ½ ¾ .20 using.½ ½ and È ·È Ì Gramian is É ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¼. so The Hankel matrix is È É ½ ¾ and from (4. and for the Hankel norm. 4. Exercise 4. 4. . the observability ½ ¾ .
5 LIMITATIONS ON PERFORMANCE IN SISO SYSTEMS

In this chapter, we discuss the fundamental limitations on performance in SISO systems. We summarize these limitations in the form of a procedure for input-output controllability analysis, which is then applied to a series of examples. Input-output controllability of a plant is the ability to achieve acceptable control performance. Proper scaling of the input, output and disturbance variables prior to this analysis is critical.

5.1 Input-Output Controllability

In university courses on control, methods for controller design and stability analysis are usually emphasized. However, in practice the following three questions are often more important:

I. How well can the plant be controlled? Before starting any controller design one should first determine how easy the plant actually is to control. Is it a difficult control problem? Indeed, does there even exist a controller which meets the required performance objectives?

II. What control structure should be used? By this we mean what variables should we measure and control, which variables should we manipulate, and how are these variables best paired together?

In other textbooks one can find qualitative rules for these problems. For example, in Seborg et al. (1989), in a chapter called "The art of process control", the following rules are given:

1. Control the outputs that are not self-regulating.
2. Control the outputs that have favourable dynamic and static characteristics, i.e. for each output, there should exist an input which has a significant, direct and rapid effect on it.
3. Select the inputs that have large effects on the outputs.
4. Select the inputs that rapidly affect the controlled variables.
These rules are reasonable, but what is "self-regulating", "large", "rapid" and "direct"? A major objective of this chapter is to quantify these terms.

III. How might the process be changed to improve control? For example, to reduce the effects of a disturbance one may in process control consider changing the size of a buffer tank, or in automotive control one might decide to change the properties of a spring. In other situations, the speed of response of a measurement device might be an important factor in achieving acceptable control.

The above three questions are each related to the inherent control characteristics of the process itself. We will introduce the term input-output controllability to capture these characteristics, as described in the following definition.

Definition 5.1 (Input-output) controllability is the ability to achieve acceptable control performance; that is, to keep the outputs (y) within specified bounds or displacements from their references (r), in spite of unknown but bounded variations, such as disturbances (d) and plant changes (including uncertainty), using available inputs (u) and available measurements (y_m or d_m).

In summary, a plant is controllable if there exists a controller (connecting plant measurements and plant inputs) that yields acceptable performance for all expected plant variations. Thus, controllability is independent of the controller, and is a property of the plant (or process) alone. It can only be affected by changing the plant itself, that is, by (plant) design changes. These may include:

- changing the apparatus itself, e.g. type, size, etc.
- relocating sensors and actuators
- adding new equipment to dampen disturbances
- adding extra sensors
- adding extra actuators
- changing the control objectives
- changing the configuration of the lower layers of control already in place

Whether or not the last two actions are design modifications is arguable, but at least they address important issues which are relevant before the controller is designed.

Early work on input-output controllability analysis includes that of Ziegler and Nichols (1943), and Morari (1983) who made use of the concept of "perfect control". Important ideas on performance limitations are also found in Bode (1945), Horowitz (1963), Frank (1968a; 1968b), Rosenbrock (1970), Kwakernaak and Sivan (1972), Horowitz and Shaked (1975), Zames (1981), Doyle and Stein (1981), Francis and Zames (1984), Boyd and Desoer (1985), Kwakernaak (1985), Freudenberg and Looze (1985; 1988), Engell (1988), Morari and Zafiriou (1989), Boyd and Barratt (1991), and Chen (1995). We also refer the reader to two IFAC workshops on Interactions between process design and process control (Perkins, 1992; Zafiriou, 1994).
5.1.1 Input-output controllability analysis

Input-output controllability analysis is applied to a plant to find out what control performance can be expected. Another term for input-output controllability analysis is performance targeting.

Surprisingly, given the plethora of mathematical methods available for control system design, the methods available for controllability analysis are largely qualitative. In most cases the "simulation approach" is used, i.e. performance is assessed by exhaustive simulations. However, this requires a specific controller design and specific values of disturbances and setpoint changes. Consequently, with this approach, one can never know if the result is a fundamental property of the plant, or if it depends on the specific controller designed, the disturbances or the setpoints.

A rigorous approach to controllability analysis would be to formulate mathematically the control objectives, the class of disturbances, the model uncertainty, etc., and then to synthesize controllers to see whether the objectives can be met. With model uncertainty this involves designing a mu-optimal controller (see Chapter 8). However, in practice such an approach is difficult and time consuming, especially if there are a large number of candidate measurements or actuators; see Chapter 10. More desirable, is to have a few simple tools which can be used to get a rough idea of how easy the plant is to control, i.e. to determine whether or not a plant is controllable, without performing a detailed controller design. The main objective of this chapter is to derive such controllability tools based on appropriately scaled models of G(s) and G_d(s).

An apparent shortcoming of the controllability analysis presented in this book is that all the tools are linear. This may seem restrictive, but usually it is not. In fact, one of the most important nonlinearities, namely that associated with input constraints, can be handled quite well with a linear analysis. Also, to deal with slowly varying changes one may perform a controllability analysis at several selected operating points. Nonlinear simulations to validate the linear controllability analysis are of course still recommended. Experience from a large number of case studies confirms that the linear measures are often very good.

5.1.2 Scaling and performance

The above definition of controllability does not specify the allowed bounds for the displacements or the expected variations in the disturbance; that is, no definition of the desired performance is included. Throughout this chapter and the next, when we discuss controllability, we will assume that the variables and models have been scaled as outlined in Section 1.4, so that the requirement for acceptable performance is:

- For any reference r(t) between -R and R and any disturbance d(t) between -1 and 1, to keep the output y(t) within the range r(t) - 1 to r(t) + 1 (at least most of the time), using an input u(t) within the range -1 to 1.

We will interpret this definition from a frequency-by-frequency sinusoidal point of view, i.e. d(t) = sin(omega t), and so on. With e = y - r we then have:
- For any disturbance |d(omega)| <= 1 and any reference |r(omega)| <= R(omega), the performance requirement is to keep at each frequency the control error |e(omega)| <= 1, using an input |u(omega)| <= 1.

It is impossible to track very fast reference changes, so we will assume that R(omega) is frequency-dependent; for simplicity, we assume that R(omega) is R (a constant) up to the frequency omega_r and is 0 above that frequency.

It could also be argued that the magnitude of the sinusoidal disturbances should approach zero at high frequencies. While this may be true, we really only care about frequencies within the bandwidth of the system, and in most cases it is reasonable to assume that the plant experiences sinusoidal disturbances of constant magnitude up to this frequency. Similarly, it might also be argued that the allowed control error should be frequency dependent. For example, we may require no steady-state offset, i.e. e should be zero at low frequencies. However, including frequency variations is not recommended when doing a preliminary analysis (however, one may take such considerations into account when interpreting the results).

Recall that with r = R \tilde{r} (see Section 1.4) the control error may be written as

e = y - r = G u + G_d d - R \tilde{r}    (5.1)

where R is the magnitude of the reference and |\tilde{r}(\omega)| \leq 1 and |d(\omega)| \leq 1 are unknown signals. We will use (5.1) to unify our treatment of disturbances and references. Specifically, we will derive results for disturbances, which can then be applied directly to the references by replacing G_d by -R.

5.1.3 Remarks on the term controllability

The above definition of (input-output) controllability is in tune with most engineers' intuitive feeling about what the term means, and was also how the term was used historically in the control literature. For example, Ziegler and Nichols (1943) defined controllability as "the ability of the process to achieve and maintain the desired equilibrium value". Unfortunately, in the 60's "controllability" became synonymous with the rather narrow concept of "state controllability" introduced by Kalman, and the term is still used in this restrictive manner by the system theory community. State controllability is the ability to bring a system from a given initial state to any final state within a finite time. However, as shown in Example 4.5, this gives no regard to the quality of the response between and after these two states, and the required inputs may be excessive. The concept of state controllability is important for realizations and numerical calculations, but as long as we know that all the unstable modes are both controllable and observable, it usually has little practical significance. For example, Rosenbrock (1970, p. 177) notes that "most industrial plants are controlled quite satisfactorily though they are not [state] controllable". And conversely, there are many systems, like the tanks in series example, which are state controllable, but which are not input-output controllable. To avoid any confusion between practical controllability and Kalman's state controllability, Morari (1983) introduced the term dynamic resilience. However, this term does not capture the fact that it is related to control, so instead we prefer the term input-output controllability, or simply controllability when it is clear we are not referring to state controllability.
Where are we heading? In this chapter we will discuss a number of results related to achievable performance. Many of the results can be formulated as upper and lower bounds on the bandwidth of the system. As noted in Section 2.4.5, there are several definitions of bandwidth in terms of the transfer functions S, L and T, but since we are looking for approximate bounds we will not be too concerned with these differences. The main results are summarized at the end of the chapter in terms of eight controllability rules.

5.2 Perfect control and plant inversion

A good way of obtaining insight into the inherent limitations on performance originating in the plant itself, is to consider the inputs needed to achieve perfect control (Morari, 1983). Let the plant model be

y = G u + G_d d    (5.2)

"Perfect control" (which, of course, cannot be realized in practice) is achieved when the output is identically equal to the reference, i.e. y = r. To find the corresponding plant input, set y = r and solve for u in (5.2):

u = G^{-1} r - G^{-1} G_d d    (5.3)

When d is measurable, (5.3) represents a perfect feedforward controller.

When feedback control u = K(r - y) is used, we have from (2.20) that

u = K S r - K S G_d d

or, since the complementary sensitivity function is T = G K S,

u = G^{-1} T r - G^{-1} T G_d d    (5.4)

We see that at frequencies where feedback is effective and T \approx I (these arguments also apply to MIMO systems, and this is the reason why we here choose to use matrix notation), the input generated by feedback in (5.4) is the same as the perfect control input in (5.3). That is, high gain feedback generates an inverse of G even though the controller K may be very simple.

An important lesson therefore is that perfect control requires the controller to somehow generate an inverse of G. From this we get that perfect control cannot be achieved if

- G contains RHP-zeros (since then G^{-1} is unstable)
- G contains time delay (since then G^{-1} contains a non-causal prediction)
- G has more poles than zeros (since then G^{-1} is unrealizable)

In addition, for feedforward control we have that perfect control cannot be achieved if

- G is uncertain (since then G^{-1} cannot be obtained exactly)

The last restriction may be overcome by high gain feedback, but we know that we cannot have high gain feedback at all frequencies.

The required input in (5.3) must not exceed the maximum physically allowed value. Therefore, perfect control cannot be achieved if
- |G^{-1} G_d| is large
- |G^{-1} R| is large

where "large" with our scaled models means larger than 1.

There are also other situations which make control difficult, such as:

- G is unstable
- |G_d| is large

If the plant is unstable, then unless feedback control is applied to stabilize the system, the outputs will "take off", and eventually hit physical constraints. Similarly, if |G_d| is large, then without control a disturbance will cause the outputs to move far away from their desired values. So in both cases control is required, and problems occur if this demand for control is somehow in conflict with the other factors mentioned above which also make control difficult.

We have assumed perfect measurements in the discussion so far, but in practice, noise and uncertainty associated with the measurements of disturbances and outputs will present additional problems for feedforward and feedback control.

5.3 Constraints on S and T

In this section, we present some fundamental algebraic and analytic constraints which apply to the sensitivity S and complementary sensitivity T.

5.3.1 S plus T is one

From the definitions S = (I + L)^{-1} and T = L(I + L)^{-1} we derive

S + T = I    (5.5)

(or S + T = 1 for a SISO system). Ideally, we want S small to obtain the benefits of feedback (small control error for commands and disturbances), and T small to avoid sensitivity to noise, which is one of the disadvantages of feedback. Unfortunately, these requirements are not simultaneously possible at any frequency, as is clear from (5.5). Specifically, (5.5) implies that at any frequency either |S(j\omega)| or |T(j\omega)| must be larger than or equal to 0.5.

5.3.2 The waterbed effects (sensitivity integrals)

A typical sensitivity function is shown by the solid line in Figure 5.1. We note that |S| has a peak value greater than 1. In this section, we will show that this peak is unavoidable in practice. Two formulas are given, in the form of theorems, which essentially say that if we push the sensitivity down at some frequencies, then it will have to increase at others. The effect is similar to sitting on a waterbed: pushing it down at one point, which reduces the water level locally, will result in an increased level somewhere else on the bed. In general, a trade-off between sensitivity reduction and sensitivity increase must be performed whenever:
1. L(s) has at least two more poles than zeros (first waterbed formula), or
2. L(s) has a RHP-zero (second waterbed formula).

Figure 5.1: Plot of typical sensitivity, |S|, with upper bound 1/|w_P|.

Pole excess of two: First waterbed formula

To motivate the first waterbed formula, consider the open-loop transfer function L(s) = 1/(s(s+1)), whose Nyquist plot is shown in Figure 5.2. There exists a frequency range over which the Nyquist plot of L(j\omega) is inside the unit circle centred on the point -1, such that |1 + L|, which is the distance between L and -1, is less than one, and thus |S| = 1/|1 + L| is greater than one.

Figure 5.2: |S| > 1 whenever the Nyquist plot of L is inside the circle centred on -1.

In practice, L(s) will have at least two more poles than zeros (at least at sufficiently high frequency, e.g. due to actuator and measurement dynamics), so there will always exist a frequency range over which |S| is greater than one. This behaviour may be quantified by the following theorem, of which the stable case is a classical result due to Bode.

Theorem 5.1 Bode Sensitivity Integral (First waterbed formula). Suppose that the open-loop transfer function L(s) is rational and has at least two more poles than zeros (relative degree of two or more). Suppose also that L(s) has N_p RHP-poles at locations p_i. Then for closed-loop stability the sensitivity function must satisfy

\int_0^\infty \ln |S(j\omega)| \, d\omega = \pi \cdot \sum_{i=1}^{N_p} \mathrm{Re}(p_i)    (5.6)

where Re(p_i) denotes the real part of p_i.

Proof: See Doyle et al. (1992, p. 100) or Zhou et al. (1996). The generalization of Bode's criterion to unstable plants is due to Freudenberg and Looze (1985; 1988).

For a graphical interpretation of (5.6), note that the magnitude scale is logarithmic whereas the frequency scale is linear.

Stable plant. For a stable plant we must have

\int_0^\infty \ln |S(j\omega)| \, d\omega = 0    (5.7)

and the area of sensitivity reduction (ln|S| negative) must equal the area of sensitivity increase (ln|S| positive). In this respect, the benefits and costs of feedback are balanced exactly, as in the waterbed analogy. From this we expect that an increase in the bandwidth (|S| smaller than 1 over a larger frequency range) must come at the expense of a larger peak in |S|.

Unstable plant. The presence of unstable poles usually increases the peak of the sensitivity. Specifically, the area of sensitivity increase (|S| > 1) exceeds that of sensitivity reduction by an amount proportional to the sum of the distances from the unstable poles to the left-half plane, as seen from the positive contribution \pi \sum_i \mathrm{Re}(p_i) in (5.6). This is plausible, since we might expect to have to pay a price for stabilizing the system.

Remark. Although this is true in most practical cases, the effect may not be so striking in some cases; imagine a waterbed of infinite size. The increase in area may come over an infinite frequency range: consider |S(j\omega)| = 1 + \delta for \omega \in [\omega_1, \omega_2], where \delta is arbitrarily small (small peak); then we can choose \omega_1 arbitrarily large (high bandwidth) simply by selecting the interval [\omega_1, \omega_2] to be sufficiently large. However, in practice the frequency response of L has to roll off at high frequencies, so \omega_2 is limited, and (5.6) and (5.7) do impose real design limitations.

RHP-zeros: Second waterbed formula

For plants with RHP-zeros the sensitivity function must satisfy an additional integral relationship, which has stronger implications for the peak of S. Before stating the result, let us illustrate why the presence of a RHP-zero implies that the peak of S must exceed one. First, consider the non-minimum phase loop transfer function L(s) = (k/(s+1)) ((1-s)/(1+s)) and its minimum phase counterpart L_m(s) = k/(s+1). From Figure 5.3 we see that the additional phase lag contributed by the RHP-zero and the extra pole causes the Nyquist plot to penetrate the unit circle, and hence causes the sensitivity function to be larger than one.

Figure 5.3: Additional phase lag contributed by RHP-zero causes |S| > 1.

As a further example, consider Figure 5.4, which shows the magnitude of the sensitivity function for the following loop transfer function:

L(s) = k \, \frac{2-s}{s(2+s)}, \qquad k = 0.1, \; 0.5, \; 1.0, \; 2.0    (5.8)

The plant has a RHP-zero z = 2, and we see that an increase in the controller gain k, corresponding to a higher bandwidth, results in a larger peak for S. For k = 2 the closed-loop system becomes unstable, with two poles on the imaginary axis, and the peak of S is infinite.

Figure 5.4: Effect of increased controller gain on |S| for the system with RHP-zero at z = 2, L(s) = k(2-s)/(s(2+s)).

Theorem 5.2 Weighted sensitivity integral (Second waterbed formula). Suppose that L(s) has a single real RHP-zero z or a complex conjugate pair of zeros z = x \pm jy, and has N_p RHP-poles, p_i. Let \bar{p}_i denote the complex conjugate of p_i. Then for closed-loop stability the sensitivity function must satisfy

\int_0^\infty \ln |S(j\omega)| \cdot w(z, \omega) \, d\omega = \pi \cdot \ln \prod_{i=1}^{N_p} \left| \frac{p_i + z}{\bar{p}_i - z} \right|    (5.9)

where if the zero is real

w(z, \omega) = \frac{2z}{z^2 + \omega^2} = \frac{2}{z} \cdot \frac{1}{1 + (\omega/z)^2}    (5.10)

and if the zero pair is complex (z = x \pm jy)

w(z, \omega) = \frac{x}{x^2 + (y - \omega)^2} + \frac{x}{x^2 + (y + \omega)^2}    (5.11)

Proof: See Freudenberg and Looze (1985; 1988).

Note that when there is a RHP-pole close to the RHP-zero (p \to z) then |p + z|/|\bar{p} - z| \to \infty. This is not surprising, as such plants are in practice impossible to stabilize.

The weight w(z, \omega) effectively "cuts off" the contribution from ln|S| to the sensitivity integral at frequencies \omega > z. Thus, for a stable plant where |S| is reasonably close to 1 at high frequencies, we have approximately

\int_0^z \ln |S(j\omega)| \, d\omega \approx 0    (5.12)

This is similar to Bode's sensitivity integral relationship in (5.7), except that the trade-off between |S| less than 1 and |S| larger than 1 is done over a limited frequency range. Thus, in this case the waterbed is finite, and a large peak for |S| is unavoidable if we try to push down |S| at low frequencies. This is illustrated by the example in Figure 5.4 and further by the example in Figure 5.5.

Exercise 5.1 Kalman inequality. The Kalman inequality for optimal state feedback, which says that |S| <= 1 at all frequencies, does not conflict with the above sensitivity integrals. Explain why. (Solution: 1. Optimal control with state feedback yields a loop transfer function with a pole-zero excess of one, so (5.6) does not apply. 2. There are no RHP-zeros when all states are measured, so (5.9) does not apply.) Explain why this also applies to unstable plants.
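The equality in (5.7) can be checked by direct quadrature. The sketch below (not from the book) integrates ln|S| for the stable loop L(s) = 1/(s(s+1)) used above, which has the required pole excess of two:

```python
import math

# Not from the book: a numerical check of the Bode integral (5.7) for the
# stable loop L(s) = 1/(s*(s+1)), which has pole excess two.
# S(s) = 1/(1+L) = s*(s+1)/(s^2 + s + 1), so:
def ln_abs_S(w):
    num = w * w * (w * w + 1.0)            # |jw*(jw+1)|^2
    den = (1.0 - w * w) ** 2 + w * w       # |(jw)^2 + jw + 1|^2
    return 0.5 * math.log(num / den)

# midpoint rule; the ln|S| ~ ln(w) singularity at w = 0 is integrable
dw, W = 0.002, 500.0
integral = sum(ln_abs_S((k + 0.5) * dw) * dw for k in range(int(W / dw)))
print(integral)   # close to 0: the areas of ln|S| below and above 0 cancel
```

The small residual comes from truncating the integral at a finite frequency; pushing W higher drives it further toward zero.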
Figure 5.5: Sensitivity S = 1/(1+L) for L_1 = 2/(s(s+1)) (solid line) and L_2 (with a RHP-zero at z) (dashed line). In both cases the areas of |S| below and above 1 (dotted line) are equal, but for L_2 this must happen at frequencies below z, and the peak of S must be higher.

The two sensitivity integrals (waterbed formulas) presented above are interesting and provide valuable insights, but for a quantitative analysis of achievable performance they are less useful. Fortunately, we can derive explicit bounds on the weighted peaks of S and T, which are more useful for analyzing the effects of RHP-zeros and RHP-poles. The basis for these bounds is the interpolation constraints, which we discuss first.

5.3.3 Interpolation constraints

If p is a RHP-pole of the loop transfer function L(s) then

T(p) = 1, \quad S(p) = 0    (5.13)

Similarly, if z is a RHP-zero of L(s) then

S(z) = 1, \quad T(z) = 0    (5.14)

These interpolation constraints follow from the requirement of internal stability, as shown in (4.83) and (4.84). The conditions clearly restrict the allowable S and T and prove very useful in the next subsection.

5.3.4 Sensitivity peaks

In Theorem 5.2, we found that a RHP-zero implies that a peak in S is inevitable, and that the peak will increase if we reduce |S| at other frequencies. Here we derive explicit lower bounds on the weighted sensitivity and weighted complementary sensitivity. The results are based on the interpolation constraints S(z) = 1 and T(p) = 1 given above. In addition, we make use of the maximum modulus principle for complex analytic functions (e.g. see the maximum principle in Churchill et al., 1974), which for our purposes can be stated as follows:

Maximum modulus principle. Suppose f(s) is stable (i.e. f(s) is analytic in the complex RHP). Then the maximum value of |f(s)| for s in the right-half plane is attained on the region's boundary, i.e. somewhere along the j\omega-axis. Hence, we have for a stable f(s)

\|f(j\omega)\|_\infty = \max_\omega |f(j\omega)| \geq |f(s_0)| \quad \forall s_0 \in \mathrm{RHP}    (5.15)

Remark. (5.15) can be understood by imagining a 3-D plot of |f(s)| as a function of the complex variable s. In such a plot |f(s)| has "peaks" at its poles and "valleys" at its zeros. Thus, if f(s) has no poles (peaks) in the RHP, we find that |f(s)| slopes downwards from the LHP and into the RHP.

To derive the results below we first consider f(s) = w_P(s) S(s) (weighted sensitivity), and then f(s) = w_T(s) T(s) (weighted complementary sensitivity). The weights are included to make the results more general, but if required we may of course select w_P(s) = w_T(s) = 1. The bound for S was originally derived by Zames (1981). The bounds are given in the following theorem:

Theorem 5.3 Weighted sensitivity peaks. For closed-loop stability the weighted sensitivity function must satisfy for each RHP-zero z

\|w_P S\|_\infty \geq |w_P(z)| \cdot \prod_{i=1}^{N_p} \left| \frac{z + \bar{p}_i}{z - p_i} \right|    (5.16)

where p_i denote the N_p RHP-poles of G. If G has no RHP-poles then the bound is simply

\|w_P S\|_\infty \geq |w_P(z)|    (5.17)

Similarly, the weighted complementary sensitivity function must satisfy for each RHP-pole p

\|w_T T\|_\infty \geq |w_T(p)| \cdot \prod_{j=1}^{N_z} \left| \frac{\bar{z}_j + p}{z_j - p} \right|    (5.18)

where z_j denote the N_z RHP-zeros of G. If G has no RHP-zeros then the bound is simply

\|w_T T\|_\infty \geq |w_T(p)|    (5.19)
Proof of (5.16): Consider a RHP-zero located at $z$. The basis for the proof is a "trick" whereby we first factor out the RHP-zeros of $S$ into an all-pass part (with magnitude 1 at all points on the $j\omega$-axis). Since $L$ has RHP-poles at $p_i$, $S(s)$ has RHP-zeros at $p_i$, and we may write
$$S = S_a S_m, \qquad S_a(s) = \prod_i \frac{s - p_i}{s + \bar p_i} \qquad (5.20)$$
Here $S_m$ is the "minimum-phase version" of $S$, with all RHP-zeros mirrored into the LHP, and $S_a(s)$ is all-pass with $|S_a(j\omega)| = 1$ at all frequencies. (Remark: there is a technical problem here with $j\omega$-axis poles; these must first be moved slightly into the RHP.) The weight $w_P(s)$ is, as usual, assumed to be stable and minimum phase. We then have $\|w_P S\|_\infty = \max_\omega |w_P S_m(j\omega)|$, and from the maximum modulus principle
$$\|w_P S\|_\infty = \max_\omega |w_P S_m(j\omega)| \ge |w_P(z)\,S_m(z)|$$
where, since $S(z) = 1$,
$$S_m(z) = S(z)\,S_a(z)^{-1} = S_a(z)^{-1} = \prod_i \frac{z + \bar p_i}{z - p_i}$$
This proves (5.16). The proof of (5.18) is similar; see also the proof of the generalized bound (5.44). $\Box$

If we select $w_P = w_T = 1$, we derive from (5.16) and (5.18) the following bounds on the peaks of $S$ and $T$:
$$\|S\|_\infty \ge \max_z \prod_{i=1}^{N_p} \frac{|z + \bar p_i|}{|z - p_i|}, \qquad \|T\|_\infty \ge \max_p \prod_{j=1}^{N_z} \frac{|\bar z_j + p|}{|z_j - p|} \qquad (5.21)$$
This shows that large peaks for $S$ and $T$ are unavoidable if we have a RHP-zero and a RHP-pole located close to each other. This is illustrated by examples in Section 5.9.

5.4 Ideal ISE optimal control

Another good way of obtaining insight into performance limitations is to consider an "ideal" controller which is integral square error (ISE) optimal. That is, for a given command $r(t)$ (which is zero for $t < 0$), the "ideal" controller is the one that generates the plant input $u(t)$ (zero for $t < 0$) which minimizes
$$\mathrm{ISE} = \int_0^\infty |y(t) - r(t)|^2 \, dt \qquad (5.22)$$
This controller is "ideal" in the sense that it may not be realizable in practice, because the cost function includes no penalty on the input $u(t)$. This particular problem is considered in detail by Frank (1968a; 1968b) and by Morari and Zafiriou (1989), who show that for stable plants with RHP-zeros at $z_j$ (real or complex) and a time delay $\theta$, the "ideal" response $y = Tr$ when $r(t)$ is a unit step is given by
$$T(s) = \prod_j \frac{-s + z_j}{s + \bar z_j}\, e^{-\theta s} \qquad (5.23)$$
This problem has also been studied by Qiu and Davison (1993), who consider "cheap" LQR control. In (5.23), $\bar z_j$ denotes the complex conjugate of $z_j$. The optimal ISE-values for three simple stable plants are:

$G_1$ with a time delay $\theta$: $\mathrm{ISE} = \theta$
$G_2$ with a real RHP-zero $z$: $\mathrm{ISE} = 2/z$
$G_3$ with complex RHP-zeros $z = x \pm jy$: $\mathrm{ISE} = 4x/(x^2 + y^2)$

This quantifies nicely the limitations imposed by non-minimum phase behaviour.

Remark 1 The result in (5.23) is derived by considering an "open-loop" optimization problem, and it applies to feedforward as well as feedback control.

Remark 2 The ideal $T(s)$ is "all-pass", with $|T(j\omega)| = 1$ at all frequencies. In the feedback case, the ideal sensitivity function therefore satisfies $|S(j\omega)| = |T(j\omega)|/|L(j\omega)| = 1/|L(j\omega)|$ at all frequencies.

Remark 3 If $r(t)$ is not a step, then other expressions for the ideal $T$, rather than that in (5.23), are derived; see Morari and Zafiriou (1989) for details.

5.5 Limitations imposed by time delays

Consider a plant $G(s)$ that contains a time delay $e^{-\theta s}$ (and no RHP-zeros). Even the "ideal" controller cannot remove this delay: from (5.23), the "ideal" complementary sensitivity function is $T = e^{-\theta s}$, so following a step change in the reference we have to wait a time $\theta$ until perfect control is achieved. Note that this conclusion applies to feedforward as well as feedback control. The corresponding "ideal" sensitivity function is
$$S = 1 - e^{-\theta s} \qquad (5.24)$$
The magnitude $|S|$ is plotted in Figure 5.6.

Figure 5.6: "Ideal" sensitivity function (5.24) for a plant with delay $\theta$; magnitude versus frequency $\times$ delay.

At low frequencies we have $e^{-\theta s} \approx 1 - \theta s$ (by a Taylor series expansion of the exponential), so $S \approx \theta s$, and the low-frequency asymptote of $|S(j\omega)|$ crosses 1 at a frequency of about $1/\theta$.
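The crossover of the "ideal" delay sensitivity (5.24) is easy to check numerically, since $|1 - e^{-j\omega\theta}| = 2|\sin(\omega\theta/2)|$, which equals 1 exactly at $\omega\theta = \pi/3$. A minimal sketch (the delay value is illustrative):

```python
import numpy as np

theta = 0.5  # illustrative delay [s]
S = lambda w: 1 - np.exp(-1j * w * theta)   # ideal sensitivity (5.24)

# |S| crosses 1 exactly at w = pi/(3*theta), about 1.05/theta
w_cross = np.pi / (3 * theta)
assert abs(abs(S(w_cross)) - 1.0) < 1e-12

# the low-frequency asymptote |S| ~ w*theta is accurate well below 1/theta
assert abs(abs(S(0.01 / theta)) - 0.01) < 1e-4
```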
The exact frequency where $|S(j\omega)|$ in Figure 5.6 crosses 1 is $\frac{\pi}{3}\cdot\frac{1}{\theta} \approx 1.05/\theta$. Since in this case $|S| = 1/|L|$, we also have that $1/\theta$ is approximately equal to the gain crossover frequency $\omega_c$ for $L$. In practice, the "ideal" controller cannot be realized, so we expect this value to provide an approximate upper bound on $\omega_c$, namely
$$\omega_c < 1/\theta \qquad (5.25)$$
This approximate bound is the same as that derived in Section 2.6.2 by considering the limitations imposed on a loop-shaping design by a time delay $\theta$.

5.6 Limitations imposed by RHP-zeros

We will here consider plants with a zero $z$ in the closed right-half plane (and no pure time delay). In the following, we attempt to build up insight into the performance limitations imposed by RHP-zeros, using a number of different results in both the time and frequency domains.

5.6.1 Inverse response

For a stable plant with $n_z$ real RHP-zeros, it may be proven (Holt and Morari, 1985b; Rosenbrock, 1970) that the output in response to a step change in the input will cross zero (its original value) $n_z$ times; that is, we have inverse response behaviour. A typical response for the case with one RHP-zero is shown in Figure 2.14 (page 38): the output initially decreases before increasing to its positive steady-state value. With two real RHP-zeros, the output will initially increase, then decrease below its original value, and finally increase to its positive steady-state value. In practice, RHP-zeros typically appear when we have competing effects of slow and fast dynamics; for example, a plant of the form
$$G(s) = \frac{1}{s+1} - \frac{2}{s+10} = \frac{-s+8}{(s+1)(s+10)}$$
has a real RHP-zero (here at $z = 8$) caused by the difference between a slow and a fast response. We may also have complex zeros, and since these always occur in complex conjugate pairs we write them $z = x \pm jy$, where $x \ge 0$ for RHP-zeros.

5.6.2 High-gain instability

It is well known from classical root-locus analysis that, as the feedback gain increases towards infinity, the closed-loop poles migrate to the positions of the open-loop zeros; also see (4.76). Thus, the presence of RHP-zeros implies high-gain instability.

5.6.3 Bandwidth limitation I

For a step change in the reference, we have from (5.23) that the "ideal" ISE-optimal complementary sensitivity function $T$ is all-pass, and for a single real RHP-zero the "ideal"
sensitivity function is
$$S = 1 - T = \frac{2s}{s+z}, \qquad T = \frac{-s+z}{s+z} \qquad (5.26)$$
The Bode magnitude plot of $|S(j\omega)|$ is shown in Figure 5.7(a). The low-frequency asymptote of $|S(j\omega)|$ crosses 1 at the frequency $z/2$. In practice, the "ideal" ISE-optimal controller cannot be realized, and we derive (for a real RHP-zero) the approximate requirement
$$\omega_c < z/2 \qquad (5.27)$$
which we also derived on page 45 using loop-shaping arguments. The bound (5.27) is consistent with the bound $\omega_c < 1/\theta$ in (5.25) for a time delay. This is seen from the Padé approximation of a delay, $e^{-\theta s} \approx (1 - \frac{\theta}{2}s)/(1 + \frac{\theta}{2}s)$, which has a RHP-zero at $z = 2/\theta$.

For a complex pair of RHP-zeros, $z = x \pm jy$, we get from (5.23) the "ideal" sensitivity function
$$S = \frac{4xs}{(s + x + jy)(s + x - jy)} \qquad (5.28)$$

Figure 5.7: "Ideal" sensitivity functions for plants with RHP-zeros: (a) real RHP-zero, with low-frequency asymptote crossing 1 at $z/2$; (b) complex pair of RHP-zeros $z = x \pm jy$, shown for $y/x = 0.1, 1, 10$ and $50$.
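The $z/2$ crossover of the "ideal" sensitivity (5.26) can be confirmed in a couple of lines; the zero location here is an illustrative choice:

```python
z = 3.0  # illustrative real RHP-zero
S = lambda s: 2 * s / (s + z)   # ideal sensitivity (5.26)

# the low-frequency asymptote |S| ~ 2*w/z crosses 1 at w = z/2
w = z / 2
assert abs(2 * w / z - 1.0) < 1e-12

# the exact |S| at that frequency is 1/sqrt(1.25), a little below 1
val = abs(S(1j * w))
assert 0.8 < val < 1.0
```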
In Figure 5.7(b) we plot $|S|$ in (5.28) for $y/x$ equal to 0.1, 1, 10 and 50. An analysis of (5.28) and the figure yields the following approximate bounds:
$$\omega_c < |z|/4 \quad \mathrm{Re}(z) \gg \mathrm{Im}(z) \;\; (y/x \ll 1)$$
$$\omega_c < |z|/2.8 \quad \mathrm{Re}(z) \approx \mathrm{Im}(z) \;\; (y/x \approx 1)$$
$$\omega_c < |z| \quad \mathrm{Re}(z) \ll \mathrm{Im}(z) \;\; (y/x \gg 1) \qquad (5.29)$$
In summary, RHP-zeros located close to the origin (with $|z|$ small) are bad for control, and for a given $|z|$ it is worse for them to be located closer to the real axis than to the imaginary axis.

Remark. We notice from (5.28) that the resonance peak of $|S|$ at $\omega \approx y$ becomes increasingly "thin" as the zero approaches the imaginary axis ($x \to 0$). Indeed, for a zero located on the imaginary axis ($x = 0$), the ideal sensitivity function is zero at all frequencies: the integral under the curve for $|S(j\omega)|^2$ approaches zero, as does the ideal ISE-value in response to a step in the reference, $\mathrm{ISE} = 4x/(x^2 + y^2)$; see Section 5.4. This indicates that purely imaginary zeros do not always impose limitations. This is also confirmed by the flexible structure in Example 2.1, for which the response to an input disturbance is satisfactory even though the plant has a pair of imaginary zeros. However, the flexible structure is a rather special case, where the plant also has imaginary poles which counteract most of the effect of the imaginary zeros. In other cases, the presence of imaginary zeros may limit achievable performance, for example in the presence of uncertainty, which makes it difficult to place poles in the controller to counteract the zeros.

5.6.4 Bandwidth limitation II

Another way of deriving a bandwidth limitation is to use the interpolation constraint $S(z) = 1$ and consider the bound (5.17) on weighted sensitivity in Theorem 5.3. The idea is to select a form for the performance weight $w_P(s)$, and then to derive a bound for the "bandwidth parameter" in the weight. As usual, we select $1/|w_P|$ as an upper bound on the sensitivity function, that is, we require
$$|S(j\omega)| < 1/|w_P(j\omega)| \;\;\forall\omega \quad \Longleftrightarrow \quad \|w_P S\|_\infty < 1 \qquad (5.30)$$
However, from (5.17) we have $\|w_P S\|_\infty \ge |w_P(z)S(z)| = |w_P(z)|$, so to be able to satisfy (5.30) we must at least require that the weight satisfies
$$|w_P(z)| < 1 \qquad (5.31)$$
(We say "at least" because condition (5.17) is an inequality, not an equality.) We will now use (5.31) to gain insight into the limitations imposed by RHP-zeros, first by considering (A) a weight that requires good performance at low frequencies, and then by (B) considering a weight that requires good performance at high frequencies.
A. Performance at low frequencies

Consider the following performance weight:
$$w_P(s) = \frac{s/M + \omega_B^*}{s + \omega_B^* A} \qquad (5.32)$$
From (5.30), it specifies a minimum bandwidth $\omega_B^*$ (actually, $\omega_B^*$ is the frequency where the straight-line approximation of the weight crosses 1), a maximum peak of $|S|$ less than $M$, a steady-state offset less than $A < 1$, and, at frequencies lower than the bandwidth, the sensitivity is required to improve by at least 20 dB/decade (i.e. $|S|$ has slope 1 or larger on a log-log plot). If the plant has a RHP-zero at $s = z$, then from (5.31) we must require
$$|w_P(z)| = \left|\frac{z/M + \omega_B^*}{z + \omega_B^* A}\right| < 1 \qquad (5.33)$$

Real zero. Consider the case when $z$ is real. Then all variables are real and positive, and with $A = 0$ (no steady-state offset) (5.33) is equivalent to
$$\omega_B^* < z\,(1 - 1/M) \qquad (5.34)$$
For example, with $M = 2$ we require $\omega_B^* < 0.5z$, which is consistent with the requirement $\omega_c < z/2$ in (5.27).

Imaginary zero. For a RHP-zero on the imaginary axis, $z = j|z|$, a similar derivation with $A = 0$ yields
$$\omega_B^* < |z|\sqrt{1 - 1/M^2} \qquad (5.35)$$
For example, with $M = 2$ we require $\omega_B^* < 0.86|z|$, which is very similar to the requirement $\omega_c < |z|$ given in (5.29).

The next two exercises show that the bound on $\omega_B^*$ does not depend much on the slope of the weight at low frequencies, or on how the weight behaves at high frequencies.

Exercise 5.2 Consider the weight in (5.36), which is a second-order variant of (5.32). Plot the weight for $M = 2$, and derive an upper bound on $\omega_B^*$ for the case when the plant has a RHP-zero at $z$.

Exercise 5.3 Consider the weight $w_P(s) = \frac{1}{M} + \left(\frac{\omega_B^*}{s}\right)^n$, which requires $|S|$ to have a slope of $n$ at low frequencies and requires its low-frequency asymptote to cross 1 at the frequency $\omega_B^*$. Note that $n = 1$ yields the weight (5.32) with $A = 0$. Derive an upper bound on $\omega_B^*$ when the plant has a RHP-zero at $z$, and show that the bound approaches $\omega_B^* < z$ as $n \to \infty$.
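The algebra behind (5.34) and (5.35) is easy to verify numerically; in this sketch the zero location is an illustrative value:

```python
import numpy as np

def wP(s, M=2.0, wB=1.0, A=0.0):
    # performance weight (5.32)
    return (s / M + wB) / (s + wB * A)

z = 2.0  # illustrative real RHP-zero
# real zero, A = 0, M = 2: feasibility |wP(z)| <= 1 requires wB <= z*(1 - 1/M)
wB_max = z * (1 - 1 / 2.0)
assert abs(wP(z, wB=wB_max)) <= 1.0 + 1e-12
assert abs(wP(z, wB=1.2 * wB_max)) > 1.0      # a faster weight is infeasible

# purely imaginary zero: bound becomes wB <= |z|*sqrt(1 - 1/M^2), about 0.86|z|
zi = 2.0j
wB_max_i = abs(zi) * np.sqrt(1 - 1 / 4.0)
assert abs(wP(zi, wB=wB_max_i)) <= 1.0 + 1e-9
```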
We have so far implicitly assumed that we want tight control at low frequencies, and we have shown that the presence of a RHP-zero then imposes an upper bound on the achievable bandwidth.

Remark. The result in Exercise 5.3 for large $n$ is a bit surprising: it says that the bound $\omega_B^* < z$ is independent of the required slope $n$ at low frequencies, and is also independent of $M$. This is surprising since, from Bode's integral relationship (5.6), we expect to pay something for having the sensitivity smaller at low frequencies, so we would expect the bound on $\omega_B^*$ to be smaller for larger $n$. The explanation is that condition (5.31) is a necessary condition on the weight (i.e. the weight must at least satisfy it), but since it is not sufficient it can be optimistic; apparently, it is optimistic for $n$ large. For the simple weight (5.32), however, condition (5.31) is not very optimistic (as is confirmed by other results).

However, if we instead want tight control at high frequencies (and have no requirements at low frequencies), then a RHP-zero imposes a lower bound on the bandwidth. This is discussed next.

B. Performance at high frequencies

Here we consider a case where we want tight control at high frequencies, by use of the performance weight
$$w_P(s) = \frac{1}{M} + \frac{s}{\omega_B^H} \qquad (5.37)$$
This requires tight control ($|S(j\omega)| < 1$) at frequencies higher than $\omega_B^H$, whereas the only requirement at low frequencies is that the peak of $|S|$ is less than $M$. Admittedly, the weight (5.37) is unrealistic in that it requires $|S| \to 0$ at high frequencies, but this does not affect the result, as is confirmed in Exercise 5.5 where a more realistic weight is studied. To satisfy $\|w_P S\|_\infty < 1$ we must, as before, at least require $|w_P(z)| < 1$, and for a real RHP-zero $z$ we derive for the weight in (5.37)
$$\omega_B^H > z\,\frac{1}{1 - 1/M} \qquad (5.38)$$
For example, with $M = 2$ the requirement is $\omega_B^H > 2z$. Thus, we can only achieve tight control at frequencies beyond the frequency of the RHP-zero.

Exercise 5.4 Draw an asymptotic magnitude Bode plot of $w_P(s)$ in (5.37).

Exercise 5.5 Consider the case of a plant with a RHP-zero where we want to limit the sensitivity function over some intermediate frequency range. To this effect, let
$$w_P(s) = \frac{(1000s/\omega^* + 1/M)\left(s/(M\omega^*) + 1\right)}{(10s/\omega^* + 1)(100s/\omega^* + 1)} \qquad (5.39)$$
This weight is equal to $1/M$ at low and high frequencies, has a maximum value of about 10 at intermediate frequencies, and its asymptote crosses 1 at the frequencies $\omega^*/1000$ and $\omega^*$; thus we require "tight" control, $|S| < 1$, in the frequency range between $\omega_L^* = \omega^*/1000$ and $\omega_H^* = \omega^*$.
a) Make a sketch of $1/|w_P|$ (which provides an upper bound on $|S|$).
b) Show that the RHP-zero cannot be in the frequency range where we require tight control, and that we can achieve tight control either at frequencies below about $z/2$ (the usual case) or above about $2z$. To see this, select $M = 2$ and evaluate $|w_P(z)|$ for various values of $\omega^* = 0.1, 1, 10, 100, 1000, 2000, 10000$. You will find that $|w_P(z)| < 1$ only for small $\omega^*$ (corresponding to the requirement $\omega_H^* < z/2$) and for large $\omega^*$, e.g. $\omega^* = 2000$ (corresponding to the requirement $\omega_L^* > 2z$).
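The high-frequency bound (5.38) follows from one line of algebra, and can be checked directly; the zero location below is illustrative:

```python
# weight (5.37): wP(s) = 1/M + s/wBH; feasibility requires |wP(z)| < 1
wP = lambda s, M, wBH: 1 / M + s / wBH

z, M = 3.0, 2.0
wBH_min = z / (1 - 1 / M)       # = 2z for M = 2, matching (5.38)
assert abs(wBH_min - 2 * z) < 1e-12
assert abs(wP(z, M, wBH_min)) <= 1.0 + 1e-12   # boundary of feasibility
assert abs(wP(z, M, 0.8 * wBH_min)) > 1.0      # demanding control too close
                                                # to the zero is infeasible
```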
5.6.5 RHP-zero: limitations at low or high frequencies

Based on (5.34) and (5.38), we see that a RHP-zero will pose control limitations either at low or high frequencies. In most cases we desire tight control at low frequencies, and with a RHP-zero this may be achieved at frequencies lower than about $z/2$. However, if we do not need tight control at low frequencies, then we may usually reverse the sign of the controller gain, and instead achieve tight control at frequencies higher than about $2z$. Normally, we want tight control at low frequencies, and the sign of the controller is based on the steady-state gain of the plant. However, if we instead want tight control at high frequencies (and have no requirements at low frequencies), then we base the controller design on the plant's initial response, where the gain is reversed because of the inverse response. The reversal of the sign in the controller is probably best understood by considering the inverse-response behaviour of a plant with a RHP-zero.

Example 5.1 To illustrate this, consider in Figures 5.8 and 5.9 the use of negative and positive feedback for the plant
$$G(s) = \frac{-s+z}{s+z}, \qquad z = 1 \qquad (5.40)$$
Note that $G(s) \approx 1$ at low frequencies ($\omega < z$), whereas $G(s) \approx -1$ at high frequencies ($\omega > z$). The negative plant gain in the latter case explains why we then use positive feedback in order to achieve tight control at high frequencies. In both cases, we show the sensitivity function and the time response to a step change in the reference, using
1. PI control with negative feedback (Figure 5.8), $K_1(s) = K_c\,\frac{s+1}{s}$ (with a small additional lag for properness);
2. derivative control with positive feedback (Figure 5.9), $K_2(s) = K_c\,\frac{s}{(0.05s+1)(0.02s+1)}$.

Figure 5.8: Control of the plant (5.40) with RHP-zero at $z = 1$ using negative feedback, for $K_c = 0.2, 0.5, 0.8$: (a) sensitivity function; (b) response to step in reference.

Figure 5.9: Control of the plant (5.40) with RHP-zero at $z = 1$ using positive feedback, for $K_c = 0.2, 0.5, 0.8$: (a) sensitivity function; (b) response to step in reference.

Note that the time scales for the two simulations are different. For positive feedback, the step change in the reference only has a duration of 0.1 s. This is because we cannot track references over longer times, since the RHP-zero then causes the output to start drifting away (as can be seen in Figure 5.9(b)). In both cases, good transient control is possible.

An important case, where we can only achieve tight control at high frequencies, is characterized by plants with a zero at the origin, for example $G(s) = \frac{s}{\tau s + 1}$. In this case, good transient control is possible, but the control has no effect at steady-state. The only way to achieve tight control at low frequencies is then to use an additional actuator (input), as is often done in practice.

Remark (short-term control). In this book, we generally assume that the system behaviour as $t \to \infty$ is important. This is not true in some cases, because the system may only be under closed-loop control for a finite time $t_f$. In that case, the presence of a "slow" RHP-zero (with $|z|$ small) may not be significant provided $t_f \ll 1/z$. For example, in Figure 5.9(b) the RHP-zero at $z = 1$ [rad/s] is insignificant if the total control time is $t_f \approx 0.01$ [s]. As an example of short-term control, consider treating a patient with some medication. Let $u$ be the dosage of medication and $y$ the condition of the patient. With most medications, we find that in the short-term the treatment has a positive effect, whereas in the long-term the treatment has a negative effect (due to side effects, which may eventually lead to death). However, this inverse-response behaviour (characteristic of a plant with a RHP-zero) may be largely neglected during limited treatment, although one may find that the dosage has to be increased during the treatment to maintain the desired effect.

Exercise 5.6 (a) Plot the plant input $u(t)$ corresponding to Figure 5.9 and discuss in light of the above remark. (b) In the simulations in Figures 5.8 and 5.9, we use simple PI- and derivative controllers. Try to improve the response, for example by letting the weight have a steeper slope at the crossover near the RHP-zero. As an alternative, use the $S/KS$ method in (3.59) to synthesize $\mathcal{H}_\infty$ controllers for both the negative and positive feedback cases. Use performance weights in the form given by (5.32) and (5.37), respectively, and $W_u = 1$ (for the weight on $KS$). You will find that the time response is quite similar to that in Figure 5.9 with $K_c = 0.5$.

5.6.6 LHP-zeros

Zeros in the left-half plane, usually corresponding to "overshoots" in the time response, do not present a fundamental limitation on control, but in practice a LHP-zero located close to the origin may cause problems. First, one may encounter problems with input constraints at low frequencies (because the steady-state gain is small). Second, a simple controller can probably not be used: for example, a simple PID controller as in (5.66) contains no adjustable poles that can be used to counteract the effect of a LHP-zero. Also, for uncertain plants, zeros can cross from the LHP into the RHP, both through zero (which is worst if we want tight control at low frequencies) and through infinity. We discuss this in Chapter 7.

5.6.7 RHP-zeros and non-causal controllers

Perfect control can actually be achieved for a plant with a time delay or RHP-zero if we use a non-causal controller¹, i.e. a controller which uses information about the future. Such a controller may be used if we have knowledge about future changes in $r(t)$ or $d(t)$. This may be relevant for certain servo problems, e.g. in robotics and for product changeovers in chemical plants. A brief discussion is given here, but non-causal controllers are not considered in the rest of the book, since our focus is on feedback control.

Time delay. For a delay $e^{-\theta s}$, we may achieve perfect control with a non-causal feedforward controller $K_r = e^{\theta s}$ (a prediction).

¹ A system is causal if its outputs depend only on past inputs, and non-causal if its outputs also depend on future inputs.
For example, if we know that we should be at work at 8:00, and we know that it takes 30 minutes to get there, then we make a prediction and leave home at 7:30. (We don't wait until 8:00, when we are suddenly told, by the appearance of a step change in our reference position, that we should be at work.)

RHP-zero. Future knowledge can also be used to give perfect control in the presence of a RHP-zero. As an example, consider a plant with a real RHP-zero,
$$G(s) = \frac{-s+z}{s+z}, \qquad z > 0 \qquad (5.41)$$
and a desired reference change
$$r(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0 \end{cases}$$
With a feedforward controller $K_r$, the response from $r$ to $y$ is $y = G(s)K_r(s)\,r$. In theory, we may achieve perfect control ($y(t) = r(t)$) with the following two controllers (see e.g. Eaton and Rawlings (1992)):

1. A causal, unstable feedforward controller
$$K_r(s) = \frac{s+z}{-s+z}$$
For a step in $r$ from 0 to 1 at $t = 0$, this controller generates the input signal
$$u(t) = \begin{cases} 0, & t < 0 \\ 1 - 2e^{zt}, & t \ge 0 \end{cases}$$
Since the controller cancels the RHP-zero of the plant, it yields an internally unstable system.

2. A stable, non-causal feedforward controller that assumes that the future setpoint change is known. This controller cannot be represented in the usual transfer function form, but it generates the input
$$u(t) = \begin{cases} 2e^{zt}, & t < 0 \\ 1, & t \ge 0 \end{cases}$$
Note that, for perfect control, the non-causal controller needs to start changing the input already at $t = -\infty$; for practical reasons, the simulation was started at $t = -5$, where $u(t) = 2e^{-5z} \approx 0.013$ (for $z = 1$).

These input signals $u(t)$ and the corresponding outputs $y(t)$ are shown in Figure 5.10 for $z = 1$.

Figure 5.10: Feedforward control of a plant with a RHP-zero: inputs (left) and outputs (right) for the unstable controller (top), the non-causal controller (middle) and the stable causal controller (bottom).

The first option, the unstable controller, is not acceptable, as it yields an internally unstable system in which $u(t)$ goes to infinity as $t$ increases. (An exception may be if we want to control the system only over a limited time $t_f$: the upper plots of Figure 5.10 show that the internally unstable controller may, over some finite time, eliminate the effect of the RHP-zero; see page 179.) The second option, the non-causal controller, is usually not possible, because future setpoint changes are unknown; however, if we do have such information, it is certainly beneficial for plants with RHP-zeros. In most cases, we therefore have to accept the poor performance resulting from the RHP-zero and use a stable causal controller. The ideal causal feedforward controller, in terms of minimizing the ISE ($\mathcal{H}_2$ norm) of $y(t) - r(t)$ for the plant in (5.41), is $K_r = 1$; the corresponding plant input and output responses are shown in the lower plots of Figure 5.10.

5.8 Limitations imposed by unstable (RHP) poles

We here consider the limitations imposed when the plant has an unstable (RHP) pole at $s = p$; for example, the plant $G(s) = 1/(s-3)$ has a RHP-pole with $p = 3$. For unstable plants, we need feedback for stabilization. More precisely, we need to manipulate the plant inputs $u$, and this imposes the following two limitations.

RHP-pole Limitation 1 (input usage). The presence of an unstable pole $p$ requires for internal stability $T(p) = 1$, and the transfer function $KS$ from plant outputs to plant inputs must always satisfy (Havre and Skogestad, 1997; 2001)
$$\|KS\|_\infty \ge |G_s(p)^{-1}| \qquad (5.42)$$
where
$$G_s(s) = G(s)\prod_i \frac{s + p_i}{s - p_i} \qquad (5.43)$$
is the "stable version" of $G$, with its RHP-poles mirrored into the LHP.
To prove and generalize (5.42), the following result is useful (Havre and Skogestad, 1997; 2001):

Theorem 5.4 Let $VT$ be a (weighted) closed-loop transfer function, where $T$ is the complementary sensitivity. Then, for closed-loop stability, we must require for each RHP-pole $p$ of $G$
$$\|VT\|_\infty \ge |V_{ms}(p)| \prod_{j=1}^{N_z} \frac{|\bar z_j + p|}{|z_j - p|} \qquad (5.44)$$
where $z_j$ denote the (possible) $N_z$ RHP-zeros of $G$, and where $V_{ms}$ is the "minimum-phase and stable version" of $V$, with its (possible) RHP-poles and RHP-zeros mirrored into the LHP. If $G$ has no RHP-zeros, the bound is simply $\|VT\|_\infty \ge |V_{ms}(p)|$. The bound (5.44) is tight (equality) for the case when $G$ has only one RHP-pole.

To prove (5.42), we make use of the identity $KS = G^{-1}T$. Applying (5.44) with $V = G^{-1}$ then gives the generalization
$$\|KS\|_\infty \ge |G_{ms}(p)^{-1}| \prod_{j=1}^{N_z} \frac{|\bar z_j + p|}{|z_j - p|}$$
where $G_{ms}$ is the minimum-phase and stable version of $G$. If $G$ has no RHP-zeros, this is simply (5.42). For example, for the plant $G(s) = 1/(s-3)$ we have $G_s(s) = 1/(s+3)$, and the lower bound on the peak of $|KS|$ is $|G_s(3)^{-1}| = 3 + 3 = 6$.

Example 5.2 Consider the plant
$$G(s) = \frac{0.5}{(10s+1)(s-3)(0.2s+1)}$$
which has a RHP-pole at $p = 3$. Here
$$G_s(s) = \frac{0.5}{(10s+1)(s+3)(0.2s+1)}$$
and (5.42) gives
$$\|KS\|_\infty \ge |G_s(3)^{-1}| = \frac{31 \cdot 6 \cdot 1.6}{0.5} \approx 595$$
so any stabilizing controller must generate large input signals.

In summary, RHP-poles mainly impose limitations on the plant inputs, whereas RHP-zeros impose limitations on the plant outputs. Since $u = -KS(G_d\,d + n)$, the bounds above imply that stabilization may be impossible in the presence of disturbances $d$ or measurement noise $n$.
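The simple input-usage bound quoted above for $G(s) = 1/(s-3)$ can be reproduced in a few lines:

```python
# For G(s) = 1/(s-3), the "stable version" is Gs(s) = 1/(s+3), so any
# stabilizing controller must satisfy ||KS||_inf >= 1/|Gs(p)| at p = 3.
p = 3.0
Gs = lambda s: 1.0 / (s + 3.0)
bound = 1.0 / abs(Gs(p))
assert bound == 6.0   # = p + 3 = 3 + 3
```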
This is because the required inputs $u$ may be outside the saturation limits; when the inputs saturate, the system is practically open-loop, and stabilization is impossible.

Proof of Limitation 1 (input usage): We first prove the generalized bound (5.44) (Havre and Skogestad, 1997; 2001). Since $G$ has RHP-zeros at $z_j$, internal stability requires that $T$ also has RHP-zeros at $z_j$. We may therefore factor out an all-pass part and write $T = T_a T_m$ with
$$T_a(s) = \prod_j \frac{s - z_j}{s + \bar z_j}$$
where $T_m$ is minimum phase and $|T_a(j\omega)| = 1$ at all frequencies. Next, note that $\|VT\|_\infty = \|V_{ms}T_m\|_\infty$, since replacing $V$ and $T_a$ by $V_{ms}$ and 1 does not change the magnitude on the $j\omega$-axis. From the maximum modulus principle, and using the interpolation constraint $T(p) = 1$,
$$\|VT\|_\infty = \|V_{ms}T_m\|_\infty \ge |V_{ms}(p)\,T_m(p)| = |V_{ms}(p)|\,|T(p)|\,|T_a(p)^{-1}| = |V_{ms}(p)| \prod_j \frac{|\bar z_j + p|}{|z_j - p|}$$
which proves (5.44). $\Box$

RHP-pole Limitation 2 (bandwidth). We need to react sufficiently fast: for a real RHP-pole $p$, we must require that the closed-loop bandwidth is larger than approximately $2p$. Whereas the presence of RHP-zeros usually places an upper bound on the allowed bandwidth, the presence of RHP-poles generally imposes a lower bound. We derive this from the bound $\|w_T T\|_\infty \ge |w_T(p)|$ in (5.19).

Proof of Limitation 2 (bandwidth): The requirements on $T$ are shown graphically in Figure 5.
11.

Figure 5.11: Typical complementary sensitivity $|T|$ with upper bound $1/|w_T|$.

We start by selecting a weight $w_T(s)$ such that $1/|w_T|$ is a reasonable upper bound on the complementary sensitivity function, that is
$$|T(j\omega)| < 1/|w_T(j\omega)| \;\;\forall\omega \quad \Longleftrightarrow \quad \|w_T T\|_\infty < 1$$
To satisfy this we must, since from (5.19) $\|w_T T\|_\infty \ge |w_T(p)|$, at least require that the weight satisfies $|w_T(p)| < 1$. Now consider the weight
$$w_T(s) = \frac{s}{\omega_{BT}^*} + \frac{1}{M_T} \qquad (5.45)$$
which requires $T$ (like $|L|$) to have a roll-off rate of at least 1 at high frequencies (which must be satisfied for any real system), requires $|T|$ to be less than $M_T$ at low frequencies, and requires $|T|$ to drop below 1 at the frequency $\omega_{BT}^*$. For a real RHP-pole at $s = p$, the condition $|w_T(p)| < 1$ yields
$$\omega_{BT}^* > p\,\frac{M_T}{M_T - 1} \qquad (5.46)$$
With $M_T = 2$ (a reasonable robustness requirement) this gives
$$\omega_{BT}^* > 2p \qquad (5.47)$$
which proves the above bandwidth requirement. Thus, the presence of the RHP-pole puts a lower limit on the bandwidth in terms of $T$: we cannot let the system roll off at frequencies lower than about $2p$. $\Box$
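The feasibility boundary in (5.46)-(5.47) can be confirmed directly; the pole location here is illustrative:

```python
# weight (5.45): wT(s) = s/wBT + 1/MT; stabilization requires |wT(p)| < 1
MT = 2.0
p = 3.0  # illustrative real RHP-pole
wT = lambda s, wBT: s / wBT + 1 / MT

wBT_min = p * MT / (MT - 1)     # = 2p for MT = 2, as in (5.47)
assert wBT_min == 2 * p
assert abs(wT(p, wBT_min)) <= 1.0 + 1e-12   # boundary of feasibility
assert abs(wT(p, 0.5 * wBT_min)) > 1.0      # too low a bandwidth is infeasible
```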
Exercise 5.7 Derive the corresponding bound on $\omega_{BT}^*$ for a purely complex (imaginary) RHP-pole, and show that with $M_T = 2$ it becomes $\omega_{BT}^* > 1.15|p|$.

5.9 Combined unstable (RHP) poles and zeros

Stabilization. In theory, any linear plant may be stabilized irrespective of the location of its RHP-poles and RHP-zeros, provided the plant does not contain unstable hidden modes. However, this may require an unstable controller, and for practical purposes it is sometimes desirable that the controller itself is stable. If such a controller exists, the plant is said to be strongly stabilizable. For example, the plant $G(s) = (s-1)/(s-2)$, with a RHP-zero at $z = 1$ and a RHP-pole at $p = 2$, is stabilized by a stable constant gain controller with $-2 < K < -1$; note, however, that this plant is not strictly proper.
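The constant-gain claim for $G(s) = (s-1)/(s-2)$ follows from the characteristic equation $(s-2) + K(s-1) = 0$, whose single root is $s = (2+K)/(1+K)$; a short check:

```python
def closed_loop_pole(K):
    # 1 + K*(s-1)/(s-2) = 0  =>  s*(1+K) = 2+K  =>  s = (2+K)/(1+K)
    return (2 + K) / (1 + K)

assert closed_loop_pole(-1.5) < 0   # stable for K in (-2, -1)
assert closed_loop_pole(-0.5) > 0   # unstable outside that interval
assert closed_loop_pole(-3.0) > 0   # unstable outside that interval
```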
It has been proved by Youla et al. (1974) that a strictly proper SISO plant is strongly stabilizable by a proper controller if and only if every real RHP-zero in $G(s)$ lies to the left of an even number (including zero) of real RHP-poles in $G(s)$. Note that the presence of any complex RHP-poles or complex RHP-zeros does not affect this result. We then have:

A strictly proper plant with a single real RHP-zero $z$ and a single real RHP-pole $p$ can be stabilized by a stable proper controller if and only if $z > p$.

In summary, the presence of RHP-zeros (or time delays) makes stabilization more difficult: in words, "the system may go unstable before we have time to react".

Above we derived, for a real RHP-zero, the approximate bound $\omega_c < z/2$ (with $M_S = 2$), and, for a real RHP-pole, the approximate bound $\omega_c > 2p$ (with $M_T = 2$). This indicates that, for a system with a single real RHP-pole and a single real RHP-zero, we must approximately require $z > 4p$ in order to get acceptable performance and robustness. The following example, for a plant with $z = 4p$, shows that we can indeed get acceptable performance when the RHP-pole and zero are located this close.

Example 5.3 $\mathcal{H}_\infty$ design for a plant with RHP-pole and RHP-zero. We want to design an $\mathcal{H}_\infty$ controller for a plant with $z = 4$ and $p = 1$:
$$G(s) = \frac{-s+4}{(s-1)(0.1s+1)} \qquad (5.48)$$
We use the $S/KS$ design method as in Example 2.11, with input weight $W_u = 1$ and performance weight (5.32) with $A = 0$, $M = 2$ and $\omega_B^* = 1$. The software gives a stable and minimum-phase controller.
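The rules of thumb for this pole-zero pair are easy to check numerically; this uses the example's values $z = 4$, $p = 1$ and the peak bound quoted in (5.49) below in the text:

```python
z, p = 4.0, 1.0
# rule of thumb: acceptable performance roughly requires z > 4p (borderline here)
assert z >= 4 * p
# unavoidable sensitivity peak from the interpolation bounds, cf. (5.49)
peak = abs(z + p) / abs(z - p)
assert abs(peak - 5 / 3) < 1e-12    # about 1.67
```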
The corresponding sensitivity and complementary sensitivity functions, and the time response to a unit step reference change, are shown in Figure 5.12.

Figure 5.12: $\mathcal{H}_\infty$ design for a plant with RHP-zero at $z = 4$ and RHP-pole at $p = 1$: (a) $|S|$ and $|T|$; (b) output $y(t)$ and input $u(t)$ for a step in the reference.

The time response is good, taking into account the closeness of the RHP-pole and zero.

Sensitivity peaks. In Theorem 5.3 we derived lower bounds on the weighted sensitivity and complementary sensitivity. For a plant with a single real RHP-pole $p$ and a single real RHP-zero $z$, we always have
$$\|S\|_\infty \ge \frac{|z+p|}{|z-p|}, \qquad \|T\|_\infty \ge \frac{|z+p|}{|z-p|} \qquad (5.49)$$

Example 5.4 Consider the plant in (5.48), with $z = 4$ and $p = 1$. It follows from (5.49) that, for any controller, we must at least have $\|S\|_\infty \ge 5/3 \approx 1.67$ and $\|T\|_\infty \ge 1.67$. The actual peak values for the above $S/KS$ design are somewhat larger.

Example 5.5 Balancing a rod. This example is taken from Doyle et al. (1992). Consider the problem of balancing a rod in the palm of one's hand, by small hand movements, based on observing the rod either at its far end (output $y_1$) or at the end in one's hand (output $y_2$). The objective is to keep the rod upright. Here $l$ [m] is the length of the rod, $m$ [kg] its mass, $M$ [kg] the mass of the hand, and $g$ [$\approx 10$ m/s$^2$] the acceleration due to gravity. The linearized transfer functions for the two cases are
$$G_1(s) = \frac{g}{s^2\,(Mls^2 - (M+m)g)}, \qquad G_2(s) = \frac{(ls^2 - g)(M+m)}{s^2\,(Mls^2 - (M+m)g)}$$
In both cases, the plant has three unstable poles: two at the origin and one at
$$p = \sqrt{\frac{(M+m)\,g}{Ml}}$$
A short rod with a large mass gives a large value of $p$, and this in turn means that the system is more difficult to stabilize. For example, with $M \gg m$ and $l = 1$ [m] we get $p \approx 3.1$ [rad/s], and from (5.47) we desire a bandwidth of about $2p \approx 6$ [rad/s] (corresponding to a response time of about 0.16 [s]).
If one is measuring y1 (looking at the far end of the rod), then achieving this bandwidth is the main requirement. However, if one tries to balance the rod by looking at one's hand (y2), there is also a RHP-zero at z = sqrt(g/l). If the mass of the rod is small (m/M is small), then p is close to z and stabilization is in practice impossible with any controller. Even with a large mass, stabilization is very difficult, because p > z, whereas we would normally prefer to have the RHP-zero far from the origin and the RHP-pole close to the origin (z > p). So although in theory the rod may be stabilized by looking at one's hand (G2), it seems doubtful that this is possible for a human. To quantify these problems, use (5.49): we obtain

||S||∞ ≥ (z + p)/(p − z) = (1 + sqrt((M + m)/M)) / (sqrt((M + m)/M) − 1)

and the same lower bound on ||T||∞. Consider a light-weight rod with m/M = 0.1, for which we expect stabilization to be difficult. We get ||S||∞ ≥ 42 and ||T||∞ ≥ 42, so poor control performance is inevitable if we try to balance the rod by looking at our hand (y2). The difference between the two cases, measuring y1 and measuring y2, highlights the importance of sensor location on the achievable performance of control.

5.10 Performance requirements imposed by disturbances and commands

The question we here want to answer is: how fast must the control system be in order to reject disturbances and track commands of a given magnitude? We find that some plants have better "built-in" disturbance rejection capabilities than others. This may be analyzed directly by considering the appropriately scaled disturbance model, Gd(s). Similarly, for tracking we may consider the magnitude R of the reference change.

Disturbance rejection. Consider a single disturbance d, and assume that the reference is constant, i.e. r = 0. Without control the steady-state sinusoidal response is e(ω) = Gd(jω) d(ω). If the variables have been scaled as outlined in Section 1.4, then the performance objective is that at each frequency |e(ω)| < 1, and the worst-case disturbance at any frequency is d(t) = sin(ωt), i.e. |d(ω)| = 1. From this we can immediately conclude that

• no control is needed if |Gd(jω)| < 1 at all frequencies (in which case the plant is said to be "self-regulated");
• if |Gd(jω)| > 1 at some frequency, then we need control (feedforward or feedback).

In the following, we consider feedback control, in which case we have e(s) = S(s) Gd(s) d(s). The performance requirement |e(ω)| < 1 for any |d(ω)| ≤ 1 is satisfied if and only if

|S Gd(jω)| < 1  ∀ω    (5.50)

⇔   |S(jω)| < 1/|Gd(jω)|  ∀ω    (5.51)
A plant with a small |Gd| or a small ωd is preferable, since the need for feedback control is then less; also, given a feedback controller (which fixes S), the effect of disturbances on the output is then less. A typical plot of the resulting bound on |S|,

|S(jω)| ≤ 1/|Gd(jω)|  ∀ω    (5.52)

is shown in Figure 5.13 (dotted line). From (5.52) we also get that the frequency ωd where |Gd| crosses 1 from above yields a lower bound on the bandwidth:

ωB > ωd,   where ωd is defined by |Gd(jωd)| = 1    (5.53)

The actual bandwidth requirement imposed by disturbances may be higher than ωd if |Gd(jω)| drops with a slope steeper than −1 (on a log-log plot) just before the frequency ωd. The reason for this is that we must, in addition to satisfying (5.52), also ensure stability with reasonable margins, and, as discussed in Section 2.6.2, we cannot let the slope of |L(jω)| around crossover be much steeper than −1.

[Figure 5.13: Typical performance requirement on |S| imposed by disturbance rejection.]

Example 5.6 Assume that the disturbance model is Gd(s) = kd/(1 + τd s), where kd = 10 and τd = 100 [seconds]. Scaling has been applied to Gd, so this means that without feedback the effect of disturbances on the outputs at low frequencies is kd = 10 times larger than we desire. Thus feedback is required, and since |Gd| crosses 1 at a frequency ωd ≈ kd/τd = 0.1 rad/s, the minimum bandwidth requirement for disturbance rejection is ωd ≈ 0.1 [rad/s].

If the plant has a RHP-zero at s = z, which fixes S(z) = 1, then using (5.17) we have the following necessary condition for satisfying ||S Gd||∞ < 1:

|Gd(z)| < 1    (5.54)

Remark. An example in which Gd(s) is of high order is given later in Section 5.16.3 for a neutralization process. There we actually overcome the limitation on the slope of |L(jω)| around crossover by using local feedback loops in series.
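The crossover frequency ωd of Example 5.6 can be confirmed by a simple one-dimensional search; a small sketch (structure ours):

```python
import math

kd, tau_d = 10.0, 100.0          # Gd(s) = kd/(1 + tau_d*s), Example 5.6

def gd_mag(w):
    return kd / math.sqrt(1.0 + (tau_d * w) ** 2)

# Bisection for the frequency where |Gd(jw)| crosses 1 from above
lo, hi = 1e-4, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gd_mag(mid) > 1.0:
        lo = mid
    else:
        hi = mid
wd = 0.5 * (lo + hi)
print(round(wd, 4))   # 0.0995, i.e. close to kd/tau_d = 0.1 rad/s
```

The exact crossing is at sqrt(kd² − 1)/τd, which the asymptotic value kd/τd approximates well whenever kd ≫ 1.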
Although each local loop in such a series arrangement has a slope of about −1 around its own crossover, the overall loop transfer function L(s) = L1(s)L2(s)···Ln(s) has a slope of about −n. This is a case where stability is determined by each I + Li separately, but the benefits of feedback are determined by 1 + L (also see Horowitz (1991, p. 284), who refers to lectures by Bode).

Command tracking. Assume there are no disturbances, i.e. d = 0, and consider a reference change r(t) = R r~(t) = R sin(ωt). Since e = Gu + Gd d − R r~, the same performance requirement as found for disturbances applies to command tracking, with Gd replaced by −R. Thus, for acceptable control (|e(ω)| < 1) we must have

|S(jω)| R < 1  ∀ω ≤ ωr    (5.55)

where ωr is the frequency up to which performance tracking is required.

Remark. The bandwidth requirement imposed by (5.55) depends on how sharply |S(jω)| increases in the frequency range from ωr (where |S| < 1/R) to ωB (where |S| ≈ 1). If |S| increases with a slope of 1, the approximate bandwidth requirement becomes ωB > R ωr, and if |S| increases with a slope of 2 it becomes ωB > sqrt(R) ωr.

5.11 Limitations imposed by input constraints

In all physical systems there are limits to the changes that can be made to the manipulated variables. In this section, we assume that the model has been scaled as outlined in Section 1.4, so that at any time we must have |u(t)| ≤ 1. The question we want to answer is: can the expected disturbances be rejected, and can we track the reference changes, while maintaining |u(t)| ≤ 1? We consider separately the two cases of perfect control (e = 0) and acceptable control (|e| ≤ 1). The worst-case disturbance at each frequency is |d(ω)| = 1, and the worst-case reference is r = R r~ with |r~(ω)| = 1. These results apply to both feedback and feedforward control. At the end of the section we consider the additional problems encountered for unstable plants (where feedback control is required).

Remark 1 We use a frequency-by-frequency analysis and assume that at each frequency |d(ω)| ≤ 1 (or |r~(ω)| ≤ 1).

Remark 2 Note that rate limitations, |du/dt| ≤ 1, may also be handled by our analysis. This is done by considering du/dt as the plant input, by including a term 1/s in the plant model G(s); see the example for more details.

Remark 3 Below we require |u| < 1 rather than |u| ≤ 1. This has no practical effect, and is used to simplify the presentation.
5.11.1 Inputs for perfect control

From (5.3), the input required to achieve perfect control (e = 0) is

u = G⁻¹ r − G⁻¹ Gd d    (5.56)

Disturbance rejection. With r = 0 and |d(ω)| = 1, the requirement |u(ω)| < 1 is equivalent to

|G⁻¹(jω) Gd(jω)| < 1  ∀ω    (5.57)

In other words, to achieve perfect control and avoid input saturation we need |G| ≥ |Gd| at all frequencies. (However, as is discussed below, we do not really need control at frequencies where |Gd| < 1.)

Command tracking. Next let d = 0, and consider the worst-case reference command, which is |r(ω)| = R at all frequencies up to ωr. To keep the inputs within their constraints we must then require from (5.56) that

|G⁻¹(jω)| R < 1  ∀ω ≤ ωr    (5.58)

In other words, to avoid input saturation we need |G| ≥ R at all frequencies where perfect command tracking is required.

Example 5.7 Consider a process with

G(s) = 40 / ((5s + 1)(2.5s + 1)),   Gd(s) = 3(50s + 1) / ((10s + 1)(s + 1))

From Figure 5.14 we see that |G| < |Gd| for frequencies above ω1, so condition (5.57) is not satisfied there. However, |Gd| drops below 1 at the frequency ωd, and for frequencies higher than ωd we do not really need control. Thus, in practice, we expect that disturbances in the frequency range between ω1 and ωd may cause input saturation.

[Figure 5.14: Input saturation is expected for disturbances at intermediate frequencies, from ω1 to ωd.]
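The saturation window of Example 5.7 can be located by scanning |G| and |Gd| over frequency. A sketch, using the transfer functions as given in the example above (treat the coefficients as illustrative):

```python
import math

def mag_G(w):
    # |G(jw)| for G(s) = 40/((5s+1)(2.5s+1))
    return 40.0 / (math.hypot(1.0, 5.0 * w) * math.hypot(1.0, 2.5 * w))

def mag_Gd(w):
    # |Gd(jw)| for Gd(s) = 3(50s+1)/((10s+1)(s+1))
    return 3.0 * math.hypot(1.0, 50.0 * w) / (
        math.hypot(1.0, 10.0 * w) * math.hypot(1.0, w))

# Input saturation is expected where |Gd| > 1 while |G| < |Gd|
for w in [0.01, 0.1, 1.0, 10.0, 100.0]:
    G, Gd = mag_G(w), mag_Gd(w)
    flag = "saturation expected" if (Gd > 1.0 and G < Gd) else "ok"
    print(f"w={w:6.2f}  |G|={G:8.3f}  |Gd|={Gd:7.3f}  {flag}")
```

The scan flags only the intermediate frequencies, in line with the ω1-to-ωd window described in the text.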
5.11.2 Inputs for acceptable control

Above, for simplicity, we assumed perfect control. However, perfect control is never really required, especially not at high frequencies, and the input magnitude required for acceptable control (|e(ω)| < 1) is somewhat smaller. For disturbance rejection we must then require

|G(jω)| > |Gd(jω)| − 1   at frequencies where |Gd(jω)| > 1    (5.59)

Proof: Consider a "worst-case" disturbance with |d(ω)| = 1. The control error is e = y = Gu + Gd d. Thus, at frequencies where |Gd(jω)| > 1 the smallest input needed to reduce the error to |e(ω)| = 1 is found when u(ω) is chosen such that the complex vectors Gu and Gd d have opposite directions: then 1 = |e| = |Gd| − |Gu|, and the result follows by requiring |u| < 1.

Similarly, to achieve acceptable control for command tracking we must require

|G(jω)| > R − 1  ∀ω ≤ ωr    (5.60)

In summary, if we want "acceptable control" (|e| < 1) rather than "perfect control" (e = 0), then |Gd| and R in (5.57) and (5.58) are replaced by |Gd| − 1 and R − 1, respectively. The differences are clearly small at frequencies where |Gd| and R are much larger than 1.

The requirements (5.59) and (5.60) are restrictions imposed on the plant design in order to avoid input constraints, and they apply to any controller (feedback or feedforward control). If these bounds are violated at some frequency, then performance will not be satisfactory (i.e. |e(ω)| > 1) for a worst-case disturbance or reference occurring at this frequency.

5.11.3 Unstable plant and input constraints

Feedback control is required to stabilize an unstable plant. However, input constraints combined with large disturbances may make stabilization difficult. Specifically, from (5.43) we must, for an unstable plant with a real RHP-pole at s = p, require

|Gs(p)| > |Gd,ms(p)|    (5.61)

Otherwise, the input will saturate when there is a sinusoidal disturbance d(t) = sin(ωt), and we may not be able to stabilize the plant. Note that this bound must be satisfied even when |Gd| < 1: we do not then "need" control for disturbance rejection, but feedback control is still required for stabilization.

Example 5.8 Consider

G(s) = 5 / ((10s + 1)(s − 1))    (5.62)

and Gd(s) = kd / ((s + 1)(0.2s + 1)) with kd ≤ 1, where the performance objective is |e(ω)| < 1. Since |Gd(jω)| < 1 at all frequencies, we do not really need control for disturbance rejection; but the plant has a RHP-pole at p = 1, and feedback control is required for stabilization. We have |G| > 1 for frequencies lower than about 0.5 [rad/s], so we do not expect problems with input constraints at low frequencies. To check the stabilization requirement we compute

|Gs(1)| = | 5/((10s + 1)(s + 1)) |_{s=1} = 5/22 ≈ 0.23,   |Gd,ms(1)| = | kd/((s + 1)(0.2s + 1)) |_{s=1} = kd/2.4 ≈ 0.42 kd

and the requirement |Gs(p)| > |Gd,ms(p)| gives kd < 0.54 to avoid input saturation for sinusoidal disturbances of unit magnitude. (This value is confirmed by the exact bound in (5.61).)

To check this for a particular case, we select kd = 0.5 and use the controller

K(s) = 0.04 (10s + 1)² / (s (0.1s + 1)²)    (5.63)

which, without constraints, yields a stable closed-loop system with a gain crossover frequency ωc just above 1 [rad/s]. The closed-loop response to a unit step disturbance occurring after 1 second is shown in Figure 5.15(b). The stable closed-loop response obtained when there is no input constraint is shown by the dashed line; note that the input signal exceeds 1 in magnitude for a short time. When u is constrained to lie within the interval [−1, 1], we find indeed that the system is unstable (solid lines).

[Figure 5.15: Instability caused by input saturation for an unstable plant. (a) |G| and |Gd| against frequency [rad/s]; (b) response y(t), u(t) to a step in the disturbance with kd = 0.5: unconstrained (dashed) and constrained (solid).]

Reference changes can also drive the system into input saturation and instability. But in contrast to disturbance changes and measurement noise, one then has the option to use a two degrees-of-freedom controller to filter the reference signal and thus reduce the magnitude of the manipulated input.
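The bound on kd in Example 5.8 follows from simple arithmetic at s = p; a quick numerical check (variable names ours):

```python
# Check of the stabilization bound |Gs(p)| > |Gd,ms(p)| for Example 5.8,
# with p = 1. Gs is the stable (mirrored) version of G, and Gd,ms the
# minimum-phase stable version of Gd.

p = 1.0
Gs_at_p = 5.0 / ((10.0 * p + 1.0) * (p + 1.0))        # 5/22 ~ 0.23
Gdms_at_p_per_kd = 1.0 / ((p + 1.0) * (0.2 * p + 1.0))  # 1/2.4, per unit kd

kd_max = Gs_at_p / Gdms_at_p_per_kd
print(round(kd_max, 3))   # 0.545: kd must stay below about 0.54
```

With kd = 0.5 the bound is only just satisfied, which is why a brief input excursion beyond 1 already appears in the unconstrained simulation.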
5.12 Limitations imposed by phase lag

We already know that phase lag from RHP-zeros and time delays is a fundamental problem, but are there any limitations imposed by the phase lag resulting from minimum-phase elements? The answer is both no and yes: No, there are no fundamental limitations; but Yes, in most practical cases a large phase lag at high frequencies poses a problem (independent of K), even when input saturation is not an issue.

As an example, consider a minimum-phase plant of the form

G(s) = k / ((1 + τ1 s)(1 + τ2 s)(1 + τ3 s) ···) = k / ∏i (1 + τi s)    (5.64)

where the number of lags n is three or larger. At high frequencies the gain drops sharply with frequency, |G(jω)| ≈ (k / ∏i τi) ω⁻ⁿ, and the phase approaches ∠G(jω) → −n · 90°.

Define ωu as the frequency where the phase lag in the plant G is −180°:

∠G(jωu) ≜ −180°

Note that ωu depends only on the plant model. In general, ω180 (the frequency at which the phase lag around the loop L is −180°) is not directly related to the phase lag in the plant, but in most practical cases there is a close relationship. For example, with a proportional controller we have ω180 = ωu, and with a PI controller ω180 < ωu. Thus, with these two simple controllers, a phase lag in the plant does pose a fundamental limitation:

Stability bound for P- or PI-control:   ωc < ωu    (5.65)

This is because for stability we need a positive phase margin, i.e. the phase of L must be larger than −180° at the gain crossover frequency ωc. Note that (5.65) is a strict bound to get stability; for acceptable performance (phase and gain margin) we typically need ωc less than about

0.5 ωu    (5.66)

If we want to extend the gain crossover frequency ωc beyond ωu, we must place zeros in the controller (e.g. "derivative action") to provide phase lead which counteracts the negative phase in the plant. The ideal PID controller has a maximum phase lead of 90° at high frequencies, but in practice the maximum phase lead is smaller than 90°. For example, an industrial cascade PID controller typically has derivative action over only one decade,

K(s) = Kc · (τI s + 1)/(τI s) · (τD s + 1)/(0.1 τD s + 1)

and the maximum phase lead is 55° (which is the maximum phase lead of the term (τD s + 1)/(0.1 τD s + 1)). This is also a reasonable value for the phase margin, so for performance we approximately require

Practical performance bound (PID control):   ωc < ωu    (5.67)

We stress again that plant phase lag does not pose a fundamental limitation if a more complex controller is used. In principle, if the model is known exactly and there are no RHP-zeros or time delays, one may extend ωc to infinite frequency: simply invert the plant model by placing zeros in the controller at the plant poles.
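The ultimate frequency ωu in the bounds (5.65) to (5.67) is determined by the plant phase alone and is easily located numerically. A sketch, for an assumed third-order plant of the form (5.64) (the time constants are illustrative choices of ours, not from the text):

```python
import math

taus = [1.0, 0.2, 0.05]   # three lags -> phase approaches -270 deg

def phase(w):
    # phase of G(jw) = k / prod_i (1 + tau_i * j * w), with k > 0
    return -sum(math.atan(t * w) for t in taus)

lo, hi = 1e-3, 1e3        # bracket: phase(lo) > -pi > phase(hi)
for _ in range(80):
    mid = math.sqrt(lo * hi)   # bisect on a logarithmic scale
    if phase(mid) > -math.pi:
        lo = mid
    else:
        hi = mid
wu = math.sqrt(lo * hi)
print(f"wu = {wu:.1f} rad/s")  # P/PI designs need wc < wu; per (5.67)
                               # a practical PID also gives roughly wc < wu
```

Note that ωu sits well above the slowest lag 1/τ1 = 1 rad/s but far below the fastest dynamics, which is typical for plants with several spread-out lags.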
The controller can then be rolled off at high frequencies, beyond the dynamics of the plant. However, in many practical cases the bound (5.67) applies, because we may want to use a simple controller, and also because uncertainty about the plant model often makes it difficult to place controller zeros which counteract the plant poles at high frequencies.

Remark. The relative order (relative degree) of the plant is sometimes used as an input-output controllability measure (e.g. Daoutidis and Kravaris, 1992). The relative order may be defined also for nonlinear plants, and for linear plants it corresponds to the pole excess of G(s). For a minimum-phase plant, the phase lag at infinite frequency is the relative order times −90°. Of course, we want the inputs to directly affect the outputs, so we want the relative order to be small. However, the practical usefulness of the relative order is rather limited, since it only gives information at infinite frequency; the phase lag of G(s) as a function of frequency, including the value of ωu, provides much more information.

5.13 Limitations imposed by uncertainty

The presence of uncertainty requires us to use feedback control rather than just feedforward control. The main objective of this section is to gain more insight into this statement. A further discussion is given in Section 6.10, where we consider MIMO systems.

5.13.1 Feedforward control

Consider a plant with the nominal model y = Gu + Gd d. Assume that G(s) is minimum phase and stable, and assume there are no problems with input saturation. Then perfect control, e = y − r = 0, can be achieved with feedforward control.
Specifically, perfect control is obtained using a perfect feedforward controller, which generates the control inputs

u = G⁻¹ r − G⁻¹ Gd d    (5.68)

Now consider applying this perfect controller to the actual plant with model

y' = G' u + G'd d    (5.69)

After substituting (5.68) into (5.69), we find that the actual control error with the "perfect" feedforward controller is

e' = y' − r = (G'/G − 1) r + G'd (1 − (G'/G)/(G'd/Gd)) d    (5.70)

Here G'/G − 1 is the relative error in G, and the term in parentheses multiplying d involves the relative error in G/Gd. Thus, with feedforward control the model error propagates directly to the control error. From (5.70) we see that to achieve |e'| < 1 for |d| = 1 we must require that the relative model error in G/Gd is less than 1/|G'd|.
This requirement is clearly very difficult to satisfy at frequencies where |G'd| is much larger than 1, and this motivates the need for feedback control.

5.13.2 Feedback control

With feedback control the closed-loop response with no model error is y − r = S(Gd d − r), where S = (I + GK)⁻¹ is the sensitivity function. With model error we get

y' − r = S'(G'd d − r)    (5.71)

where S' = (I + G'K)⁻¹ can be written (see (A.139)) as

S' = S · 1/(1 + E T),   E = (G' − G)/G    (5.72)

Here E is the relative error for G, and T is the complementary sensitivity function. From (5.71) and (5.72) we see that the control error is only weakly affected by model error at frequencies where feedback is effective (where |S| ≪ 1 and T ≈ 1). For example, if we have integral action in the feedback loop, and if the feedback system with model error is stable, then S(0) = S'(0) = 0, and the steady-state control error is zero even with model error.

Example 5.9 Consider a case with G(s) = 300/(10s + 1) and Gd(s) = 100/(10s + 1). Nominally, the perfect feedforward controller

u = −G⁻¹ Gd d = −(1/3) d

gives perfect control, y = 0. Now apply this feedforward controller to the actual process, where the gains have changed by 10%:

G'(s) = 330/(10s + 1),   G'd(s) = 90/(10s + 1)

From (5.70) the response in this case is

y' = G'd (1 − (G'/G)(Gd/G'd)) d = (90/(10s + 1))(1 − 1.222) d = −(20/(10s + 1)) d

Thus, for a step disturbance of magnitude 1, the output y' will approach −20 (much larger than 1 in magnitude), and feedback control is required to keep the output less than 1. (Feedback will hardly be affected by the above model error, as discussed above.) The minimum bandwidth requirement with feedback only is ωd ≈ 100/10 = 10 [rad/s]; with feedforward also in use it is reduced to about 20/10 = 2 [rad/s].

Note that if the uncertainty is sufficiently large, such that the relative error in G/Gd is larger than 1, then feedforward control may actually make control worse. This may quite easily happen in practice. For example, if the gain in G is increased by 50% and the gain in Gd is reduced by 50%, the relative error in G/Gd is 1 − 1.5/0.5 = −2, which is larger than 1 in magnitude.
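The numbers in Example 5.9 can be reproduced with a few lines evaluated at steady-state (a sketch; s = 0 suffices here since all lags are equal):

```python
# Example 5.9 at steady-state: nominal G = 300/(10s+1), Gd = 100/(10s+1),
# so the "perfect" feedforward law is u = -(1/3) d.
G0, Gd0 = 300.0, 100.0
u_per_d = -Gd0 / G0                 # feedforward gain, u = -(1/3) d

# Actual gains, each off by 10% in opposite directions
G_act, Gd_act = 330.0, 90.0

y_per_d = G_act * u_per_d + Gd_act  # steady-state output per unit step d
print(round(y_per_d, 6))            # -20.0: |y| -> 20, far above 1
```

The same 10% errors leave the feedback loop almost unaffected, which is exactly the contrast the section is drawing.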
that is. for practical designs the bounds will need to be satisﬁed to get acceptable performance. the coupling between the uncertain parameters. 5.73) constant. where Example 5. We see that to keep ´ Ù µ constant we want that an increase in increases ´ Ù µ . are independent of the speciﬁc controller.e. which determines the highfrequency gain. see (2. uncertainty in the crossover frequency region can result in poor performance and even instability.½ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Uncertainty at crossover. Most practical controllers behave as a constant gain ÃÓ in ÃÓ ´ ½ ¼ where ½ ¼ the crossover region.73) we see. where ½ ¼ is the frequency where Ä is ½ ¼Æ . We use the term “(inputoutput) controllability” since the bounds depend on the plant only. For example. and may yield instability. is unchanged. are approximate (i.14 Summary: Controllability analysis with feedback control We will now summarize the results of this chapter by a set of “controllability rules”. The above example illustrates the importance of taking into account the structure of the uncertainty. This rule is useful. they may be off by a factor of 2 or thereabout). Here Ù is ´ Ùµ ½ ¼Æ . × ´½ · ×µ. and are uncertain in the sense that they may vary with operating then Ù ´ ¾µ and we derive conditions. in the parameters is often coupled. A robustness analysis which assumes the uncertain parameters to be uncorrelated is generally conservative. see also Section 5.10 Consider a stable ﬁrstorder delay process. for example. From (5.12). when evaluating the effect of parametric uncertainty. for example. This observation yields the following approximate the frequency where rule: ¯ Uncertainty which keeps ´ Uncertainty which increases Ù µ approximately constant will not change the gain margin. ´ Ù µ may yield instability. but this may not affect stability if the ratio . Although feedback control counteracts the effect of uncertainty at frequencies where the loop gain is large. 
In another case the steadystate gain may change with operating point. although some of the expressions. This is illustrated in the following example. This is further discussed in Chapters 7 and 8. This may be analyzed by considering the effect of the uncertainty on the gain margin. ´×µ the parameters . However. . as seen from the derivations. so Ä´ ½ ¼ µ Ù (since the phase lag of the controller is approximately zero at this frequency.33). GM ½ Ä´ ½ ¼ µ . If we assume ´ Ùµ ¾ (5. for example. the uncertainty may be approximately constant. Except for Rule 7. However. the ratio in which case an increase in may not affect stability. all requirements are fundamental.
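The rule above can be illustrated numerically: two parameter sets with the same kθ/τ give (nearly) the same gain margin. A sketch, assuming a unit-gain controller so that GM = 1/|G(jω180)| (structure ours):

```python
import math

# G(s) = k e^{-theta s}/(1 + tau s); the gain margin is approximately
# fixed by the ratio k*theta/tau, per (5.73).
def gain_margin(k, tau, theta):
    # locate w180 where the phase -theta*w - atan(tau*w) reaches -pi
    lo, hi = 1e-6, 1e4
    for _ in range(200):
        w = math.sqrt(lo * hi)
        if -theta * w - math.atan(tau * w) > -math.pi:
            lo = w
        else:
            hi = w
    w180 = math.sqrt(lo * hi)
    mag = k / math.hypot(1.0, tau * w180)
    return 1.0 / mag

print(round(gain_margin(k=1.0, tau=10.0, theta=1.0), 2))  # ~16
print(round(gain_margin(k=2.0, tau=40.0, theta=2.0), 2))  # ~16, same k*theta/tau
```

The small residual difference between the two margins reflects that (5.73) is an asymptotic approximation for τ ≫ θ.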
[Figure 5.16: Feedback control system: reference r enters a summing junction together with the measurement ym; the controller K drives the plant G; the disturbance d acts through Gd on the output y; the measurement is ym = Gm y.]

Consider the control system in Figure 5.16, for the case when all blocks are scalar. The model is

y = G(s) u + Gd(s) d,   ym = Gm(s) y    (5.74)

Here Gm(s) denotes the measurement transfer function, and we assume Gm(0) = 1 (perfect steady-state measurement). The variables d, u, y and r are assumed to have been scaled as outlined in Section 1.4, and therefore G(s) and Gd(s) are the scaled transfer functions. Let ωc denote the gain crossover frequency, defined as the frequency where |L(jω)| crosses 1 from above, and let ωd denote the frequency at which |Gd(jω)| first crosses 1 from above. The following rules apply (Skogestad, 1996):

Rule 1. Speed of response to reject disturbances. We approximately require ωc > ωd. More specifically, with feedback control we require |S(jω)| ≤ |1/Gd(jω)| ∀ω. (See (5.51) and (5.53).)

Rule 2. Speed of response to track reference changes. We require |S(jω)| ≤ 1/R up to the frequency ωr where tracking is required. (See (5.55).)

Rule 3. Input constraints arising from disturbances. For acceptable control (|e| < 1) we require |G(jω)| > |Gd(jω)| − 1 at frequencies where |Gd(jω)| > 1. For perfect control (e = 0) the requirement is |G(jω)| > |Gd(jω)|. (See (5.57) and (5.59).)

Rule 4. Input constraints arising from setpoints. We require |G(jω)| > R − 1 up to the frequency ωr where tracking is required. (See (5.60).)

Rule 5. Time delay θ in G(s)Gm(s). We approximately require ωc < 1/θ. (See (5.25).)

Rule 6. Tight control at low frequencies with a RHP-zero z in G(s)Gm(s). For a real RHP-zero we require ωc < z/2, and for an imaginary RHP-zero we approximately require ωc < 0.86|z|. (See (5.27) and (5.29).)

Remark. Strictly speaking, a RHP-zero only makes it impossible to have tight control in the frequency range close to the location of the RHP-zero. Therefore, if we do not need tight control at low frequencies, then we may reverse the sign of the controller gain and instead achieve tight control at higher frequencies; in this case we must for a RHP-zero approximately require ωc > 2z. A special case is for plants with a zero at the origin: here we can achieve good transient control even though the control has no effect at steady-state.

Rule 7. Phase lag constraint. We require in most practical cases (e.g. with PID control): ωc < ωu. Here the ultimate frequency ωu is where ∠G Gm(jωu) = −180°. (See (5.67).) Since time delays (Rule 5) and RHP-zeros (Rule 6) also contribute to the phase lag, one may in most practical cases combine Rules 5, 6 and 7 into the single rule ωc < ωu (Rule 7).

Rule 8. Real open-loop unstable pole in G(s) at s = p. We need high feedback gains to stabilize the system, and we approximately require ωc > 2p. (See (5.47).) In addition, for unstable plants we need |Gs(p)| > |Gd,ms(p)|; otherwise, the input may saturate when there are disturbances, and the plant cannot be stabilized. (See (5.61).)

Most of the rules are illustrated graphically in Figure 5.17.

[Figure 5.17: Illustration of controllability requirements on |L(jω)|. Low-frequency requirements: control needed to reject disturbances (ωc > ωd) and to stabilize the plant (ωc > 2p). Margins for stability and performance: M1 = margin to stay within constraints, |u| < 1; M2 = margin for performance, |e| < 1; M3 = margin because of RHP-pole, ωc > 2p; M4 = margin because of RHP-zero, ωc < z/2; M5 = margin because of phase lag, ωc < ωu; M6 = margin because of delay, ωc < 1/θ.]
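The bandwidth-type rules (1, 2, 5, 6, 7 and 8) lend themselves to a mechanical feasibility check; Rules 3 and 4 are gain conditions and are not encoded below. A sketch (the signature and names are ours):

```python
# Sketch: collect the lower and upper bounds on the gain crossover wc
# implied by Rules 1, 2, 5, 6, 7 and 8, and test whether they can be
# met simultaneously. Frequencies in rad/s.
def bandwidth_rules(wd, wr=0.0, theta=0.0, z=None, wu=None, p=None):
    lower = {"Rule 1 (reject disturbances)": wd}
    if wr:
        lower["Rule 2 (track references)"] = wr
    if p is not None:
        lower["Rule 8 (stabilize RHP-pole)"] = 2.0 * p
    upper = {}
    if theta:
        upper["Rule 5 (time delay)"] = 1.0 / theta
    if z is not None:
        upper["Rule 6 (real RHP-zero)"] = z / 2.0
    if wu is not None:
        upper["Rule 7 (phase lag)"] = wu
    feasible = (not upper) or max(lower.values()) < min(upper.values())
    return lower, upper, feasible

low, up, ok = bandwidth_rules(wd=0.1, theta=1.0, z=3.0)
print(ok)   # True: wc can lie between 0.1 and min(1.0, 1.5)
low, up, ok = bandwidth_rules(wd=2.0, theta=1.0)
print(ok)   # False: disturbances demand wc > 2 but the delay caps wc < 1
```

Since several of the underlying expressions are only accurate to within a factor of about 2, a marginally feasible result should be read as "barely controllable" rather than as a firm pass.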
We have not formulated a rule to guard against model uncertainty. This is because, as given in (5.71) and (5.72), uncertainty has only a minor effect on feedback performance for SISO systems, except at frequencies where the relative uncertainty approaches 100%, in which case we obviously have to detune the system. Also, since 100% uncertainty at a given frequency allows for the presence of a RHP-zero on the imaginary axis at this frequency (G(jω) = 0), this case is already covered by Rule 6.

The rules are necessary conditions ("minimum requirements") to achieve acceptable control performance. They are not sufficient, since among other things we have only considered one effect at a time.

The rules quantify the qualitative rules given in the introduction. For example, the rule "Control outputs that are not self-regulating" may be quantified as: "Control outputs y for which |Gd(jω)| > 1 at some frequency" (Rule 1). The rule "Select inputs that have a large effect on the outputs" may be quantified as: "In terms of scaled variables, we must have |G| > |Gd| − 1 at frequencies where |Gd| > 1 (Rule 3), and we must have |G| > R − 1 at frequencies where setpoint tracking is desired (Rule 4)". Another important insight from the above rules is that a larger disturbance, or a smaller specification on the control error, requires faster response (higher bandwidth).

In summary, Rules 1, 2 and 8 tell us that we need high feedback gain ("fast control") in order to reject disturbances, to track setpoints and to stabilize the plant. On the other hand, Rules 5, 6 and 7 tell us that we must use low feedback gains in the frequency range where there are RHP-zeros or delays, or where the plant has a lot of phase lag; as found above, we approximately require ωc < 1/θ (Rule 5), ωc < z/2 (Rule 6) and ωc < ωu (Rule 7). We have formulated these requirements for high gain and for low gain as bandwidth requirements. If they somehow are in conflict, then the plant is not controllable, and the only remedy is to introduce design modifications to the plant.

Sometimes the problem is that the disturbances are so large that we hit input saturation, or that the required bandwidth is not achievable. To avoid the latter problem, we must at least require that the effect of the disturbance is less than 1 (in terms of scaled variables) at frequencies beyond the bandwidth:

(Rule 1)   |Gd(jω)| < 1  ∀ω ≥ ωc    (5.75)

Condition (5.75) may be used, as in the example of Section 5.16.3, to determine the size of equipment. Finally, if a plant is not controllable using feedback control, it is usually not controllable with feedforward control.
5.15 Summary: Controllability analysis with feedforward control

The above controllability rules apply to feedback control, but we find that essentially the same conclusions apply to feedforward control, when relevant. A major difference, as shown below, is that a delay in Gd(s) is an advantage for feedforward control ("it gives the feedforward controller more time to make the right action"). Also, a RHP-zero in Gd(s) is an advantage for feedforward control if G(s) has a RHP-zero at the same location.

Rules 3 and 4 on input constraints apply directly to feedforward control, but Rule 8 does not apply, since unstable plants can only be stabilized by feedback control. The remaining rules, in terms of performance and "bandwidth", do not apply directly to feedforward control.

Perfect control. Controllability can be analyzed by considering the feasibility of achieving perfect control. The disturbance response with feedforward control u = Kd(s) dm, where dm = Gmd(s) d is the measured disturbance and Gd is the scaled disturbance model, becomes

e = Ĝd d,   Ĝd = G Kd Gmd + Gd    (5.76)

(Reference tracking can be analyzed similarly by setting Gmd = 1 and Gd = −R.) Perfect control (e = 0) is achieved with the controller

Kd,perfect = −G⁻¹ Gd Gmd⁻¹    (5.77)

This assumes that Kd,perfect is stable and causal (no prediction), and so it should have no RHP-poles arising from RHP-zeros in G, and no (positive) prediction terms. From this we find that a delay (or RHP-zero) in Gd(s) is an advantage if it cancels a delay (or RHP-zero) in G(s)Gmd(s).

Ideal control. If perfect control is not possible, then one may analyze controllability by considering an "ideal" feedforward controller, Kd,ideal, which is (5.77) modified to be stable and causal (no prediction). The controller is ideal in that it assumes we have a perfect model. The resulting disturbance response,

e = Ĝd d  with  Ĝd = G Kd,ideal Gmd + Gd    (5.78)

then quantifies the achievable feedforward performance. An example is given below in (5.86) and (5.87) for a first-order delay process.

Model uncertainty. As discussed in Section 5.13, model uncertainty is a more serious problem for feedforward than for feedback control, because there is no correction from the output measurement. From (5.70) we have that the plant is not controllable with feedforward control if the relative model error for G/Gd at any frequency exceeds 1/|Gd|. For example, if |Gd(jω)| = 10, then the error in G/Gd must not exceed 10% at this frequency. In practice, this means that feedforward control has to be combined with feedback control if the output is sensitive to the disturbance (i.e. if |Gd| is much larger than 1 at some frequency).

Combined feedback and feedforward control. To analyze controllability in this case, we may assume that the feedforward controller Kd has already been designed. Then from (5.76) the controllability of the remaining feedback problem can be analyzed using the rules in Section 5.14, where Gd(s) is replaced by Ĝd(s) = G Kd Gmd + Gd. However, one must beware that the feedforward control may be very sensitive to model error, so the benefits of feedforward may be less in practice.

Conclusion. From (5.78) we see that the primary potential benefit of feedforward control is to reduce the effect of the disturbance, and make |Ĝd| less than 1, at frequencies where feedback control is not effective due to, for example, a delay or a large phase lag in G(s)Gm(s).
5.16 Applications of controllability analysis

5.16.1 First-order delay process

Problem statement. Consider disturbance rejection for the following process:

G(s) = k e^(−θs) / (1 + τs),   Gd(s) = kd e^(−θd s) / (1 + τd s)    (5.79)

In addition there are measurement delays θm for the output and θmd for the disturbance. All parameters have been appropriately scaled such that at each frequency |d| < 1, |u| < 1, and we want |e| < 1. Assume kd > 1 (otherwise no control is needed). Treat separately the two cases of (i) feedback control only and (ii) feedforward control only, and carry out the following:

a) For each of the eight parameters in this model, explain qualitatively what value you would choose from a controllability point of view (with descriptions such as large, small, value has no effect).

b) Give quantitative relationships between the parameters which should be satisfied to achieve controllability.

Assume that appropriate scaling has been applied in such a way that the disturbance is less than 1 in magnitude, and that the input and the output are required to be less than 1 in magnitude.

Solution. (a) Qualitative. We want the input to have a "large, direct and fast effect" on the output, while we want the disturbance to have a "small, indirect and slow effect". By "direct" we mean without any delay or inverse response. This leads to the following conclusion. For both feedback and feedforward control we want k large and τ and θ small, and we want kd small and τd large. For feedforward control we also want θd large (we then have more time to react), but for feedback control the value of θd does not matter: it translates the response in time, but otherwise has no effect. Finally, we want θm small for feedback control (θm is not used for feedforward control), and we want θmd small for feedforward control (θmd is not used for feedback control).

(b) Quantitative. To stay within the input constraints (|u| < 1), we must from Rule 3 require, for both feedback and feedforward control,

|G(jω)| ≥ |Gd(jω)|  for frequencies ω < ωd    (5.80)

(i) First consider feedback control. From Rule 1 we need for acceptable performance (|e| < 1) with disturbances

ωc > ωd ≈ kd/τd    (5.81)

On the other hand, from Rule 5 we require for stability and performance

ωc < 1/θtot    (5.82)

where θtot = θ + θm is the total delay around the loop. The combination of (5.81) and (5.82) yields the following requirement for controllability:

Feedback:   θ + θm < τd/kd    (5.83)
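Condition (5.83) is straightforward to evaluate; a sketch with illustrative parameter values (not from the text):

```python
# Check of the feedback controllability condition (5.83),
# theta + theta_m < tau_d/kd, phrased via the two bounds it combines:
# wd ~ kd/tau_d (Rule 1) and wc < 1/(theta + theta_m) (Rule 5).
def fb_controllable(theta, theta_m, kd, tau_d):
    wd = kd / tau_d                   # required bandwidth (disturbances)
    wc_max = 1.0 / (theta + theta_m)  # achievable bandwidth (total delay)
    return wd < wc_max                # same as theta + theta_m < tau_d/kd

print(fb_controllable(theta=1.0, theta_m=0.5, kd=10.0, tau_d=100.0))  # True
print(fb_controllable(theta=8.0, theta_m=4.0, kd=10.0, tau_d=100.0))  # False
```

Writing the check in terms of ωd and 1/θtot makes it easy to see which side, disturbance size or loop delay, is the binding constraint.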
To stay within the input constraints (|u| < 1), we must from Rule 4 require

|G(jω)| ≥ |G_d(jω)| at frequencies where |G_d(jω)| > 1   (5.80)

This applies to both feedback and feedforward control.
(ii) For feedforward control, any delay for the disturbance itself yields a smaller "net delay", so to achieve |e| < 1 we need "only" require

Feedforward:   θ + θ_md - θ_d < τ_d/k_d   (5.84)

Proof of (5.84): Introduce Δθ = θ + θ_md - θ_d, and consider first the case Δθ ≤ 0 (so (5.84) is clearly satisfied). In this case perfect control is possible using the controller (5.77), so we can even achieve e = 0. Next consider Δθ > 0. Perfect control is not possible, so instead we use the "ideal" controller obtained by deleting the prediction e^{Δθ s} from (5.77):

K_d,ideal = -(k_d/k) (1 + τs)/(1 + τ_d s)   (5.85)

From (5.76) the response with this controller is

y = (G K_d,ideal G_md + G_d) d = (k_d/(1 + τ_d s)) (1 - e^{-Δθ s}) e^{-θ_d s} d   (5.86)

To achieve |y| < 1 for |d| = 1 we must require

k_d |1 - e^{-jωΔθ}| < |1 + jωτ_d| at all frequencies   (5.87)

which is equivalent to (5.84) (using asymptotic values and 1 - e^{-x} ≈ x for small x).

5.16.2 Application: Room heating

Consider the problem of maintaining a room at constant temperature, as discussed in Section 1.5; see Figure 1.2. Let y be the room temperature, u the heat input and d the outdoor temperature. Feedback control should be used. A critical part of controllability analysis is scaling, and a model in terms of scaled variables was derived in (1.26):

G(s) = 20/(1000s + 1),   G_d(s) = 10/(1000s + 1)   (5.88)

The frequency responses of |G| and |G_d| are shown in Figure 5.18. Let the measurement delay for temperature (y) be θ_m = 100 s.

1. Disturbances. Is the plant controllable with respect to disturbances? From Rule 1 feedback control is necessary up to the frequency ω_d = 10/1000 = 0.01 rad/s, where |G_d| crosses 1 in magnitude. This is exactly the same frequency as the upper bound given by the delay, 1/θ = 0.01 rad/s. We therefore conclude that the system is barely controllable for this disturbance. From Rule 3, no problems with input constraints are expected since |G| > |G_d| at all frequencies.

2. Setpoints. Is the plant controllable with respect to setpoint changes of magnitude R = 3 (±3 K) when the desired response time for setpoint changes is τ_r = 1000 s (17 min)?
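The two bounds for the disturbance case can be checked numerically. The following is a minimal sketch in Python; the model (5.88) and the 100 s measurement delay are taken from the text, while the frequency grid is an arbitrary choice:

```python
import numpy as np

w = np.logspace(-5, -1, 400)            # frequency [rad/s]
G  = 20 / np.abs(1j * w * 1000 + 1)     # |G(jw)| for G(s) = 20/(1000s+1)
Gd = 10 / np.abs(1j * w * 1000 + 1)     # |Gd(jw)| for Gd(s) = 10/(1000s+1)

# Rule 1: feedback is needed up to wd, where |Gd| crosses 1 in magnitude
wd = w[np.argmin(np.abs(Gd - 1))]       # ~ 10/1000 = 0.01 rad/s

# Rule 5: the measurement delay theta_m = 100 s limits the bandwidth to about
w_delay = 1 / 100.0                     # 0.01 rad/s, the same frequency

assert abs(wd - 0.01) < 0.002           # barely controllable: the bounds coincide
assert np.all(G >= Gd)                  # no input-constraint problems expected
```

The two frequencies coincide, which is exactly the "barely controllable" conclusion drawn above.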
[Figure 5.18: Frequency responses for the room heating example.]

[Figure 5.19: PID feedback control of the room heating example: (a) step disturbance in outdoor temperature; (b) setpoint change 3/(150s + 1).]
Solution. 1. Disturbances. These conclusions are supported by the closed-loop simulation in Figure 5.19(a) for a unit step disturbance (corresponding to a sudden 10 K increase in the outdoor temperature) using a PID controller of the form in (5.66) with K_c = 0.4 (scaled variables), τ_I = 200 s and τ_D = 60 s. The output error exceeds its allowed value of 1 for a very short time after about 100 s, but then returns quite quickly to zero. The input goes down to about -0.8 and thus remains within its allowed bound of ±1.

2. Setpoints. The plant is controllable with respect to the desired setpoint changes. First, the delay is 100 s which is much smaller than the desired response time of 1000 s, and thus poses no problem. Second, |G(jω)| ≥ R = 3 up to about ω_1 = 0.007 [rad/s], which is seven times higher than the required ω_r = 1/τ_r = 0.001 [rad/s]. This means that input constraints pose no problem. In fact, we should be able to achieve response times of about 1/ω_1 = 150 s without reaching the input constraints. This is confirmed by the simulation in Figure 5.19(b) for a desired setpoint change 3/(150s + 1) using the same PID controller as above.

5.16.3 Application: Neutralization process

[Figure 5.20: Neutralization process with one mixing tank.]

The following application is interesting in that it shows how the controllability analysis tools may assist the engineer in redesigning the process to make it controllable.

Problem statement. We want to use feedback control to keep the pH in the product stream ("salt water") in the range 7 ± 1 (output y) by manipulating the amount of base, q_B (input u), in spite of variations in the flow of acid, q_A (disturbance d). Consider the process in Figure 5.20, where a strong acid with pH = -1 (yes, a negative pH is possible; it corresponds to c_H+ = 10 mol/l) is neutralized by a strong base (pH = 15) in a mixing tank with volume V = 10 m³. The delay in the pH measurement is θ_m = 10 s.

Intuitively, one might expect that the main control problem is to adjust the base accurately by means of a very accurate valve. However, as we will see, this "feedforward" way of thinking is misleading, and the main hurdle to good control is the need for very fast response times.
To achieve the desired product with pH = 7 ± 1, one must exactly balance the inflow of acid (the disturbance) by the addition of base (the manipulated input).
We take the controlled output to be the excess of acid, c = c_H+ - c_OH- [mol/l]. In terms of this variable the control objective is to keep |c - c*| ≤ c_max = 10^{-6} mol/l, and the plant is a simple mixing process modelled by

(d/dt)(V c) = q_A c_A + q_B c_B - q c   (5.89)

The nominal values for the acid and base flows are q_A* = q_B* = 0.005 [m³/s], resulting in a product flow q* = 0.01 [m³/s] = 10 [l/s]. Here superscript * denotes the steady-state value. Divide each variable by its maximum deviation to get the following scaled variables:

y = c/10^{-6},   u = (q_B - q_B*)/(0.5 q_B*),   d = (q_A - q_A*)/(0.5 q_A*)   (5.90)

Then the appropriately scaled linear model for one tank becomes

G_d(s) = k_d/(1 + τ_h s),   G(s) = 2k_d/(1 + τ_h s),   k_d = 2.5·10^6   (5.91)

where τ_h = V/q = 1000 s is the residence time for the liquid in the tank. Note that the steady-state gain in terms of scaled variables is more than a million, so the output is extremely sensitive to both the input and the disturbance. The reason for this high gain is the much higher concentration in the two feed streams, compared to that desired in the product stream. The question is: can acceptable control be achieved?

[Figure 5.21: Frequency responses for the neutralization process with one mixing tank.]

Controllability analysis. The frequency responses of G_d(s) and G(s) are shown graphically in Figure 5.21. From Rule 2, input constraints do not pose a problem since |G| = 2|G_d| at all frequencies. The main control problem is the high disturbance sensitivity, and from (5.81) (Rule 1) we find the frequency up to which feedback is needed:

ω_d ≈ k_d/τ_h = 2500 rad/s   (5.92)

This requires a response time of 1/2500 = 0.4 milliseconds, which is clearly impossible in a process control application, and is in any case much less than the measurement delay of 10 s.
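The required bandwidth in (5.92) follows directly from the scaled model. A small numerical sketch, with k_d and τ_h taken from (5.91):

```python
import numpy as np

kd, tau_h = 2.5e6, 1000.0        # scaled disturbance gain and residence time [s]

# Rule 1: frequency where |Gd(jw)| = kd/|1 + jw*tau_h| drops to 1
wd = np.sqrt(kd**2 - 1) / tau_h  # ~ kd/tau_h = 2500 rad/s
t_required = 1 / wd              # ~ 0.4 ms required response time

assert abs(wd - 2500) < 1        # feedback needed up to 2500 rad/s
assert t_required < 1e-3         # far faster than the 10 s measurement delay
```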
[Figure 5.22: Neutralization process with two tanks and one controller.]

Design change: multiple tanks. The only way to improve controllability is to modify the process. This is done in practice by performing the neutralization in several steps, as illustrated in Figure 5.22 for the case of two tanks. This is similar to playing golf, where it is often necessary to use several strokes to get to the hole. With n equal mixing tanks in series, the transfer function for the effect of the disturbance becomes

G_d(s) = k_d h_n(s),   h_n(s) = 1/((τ_h/n)s + 1)^n   (5.93)

where k_d = 2.5·10^6 is the gain for the mixing process, h_n(s) is the transfer function of the mixing tanks, and τ_h = V_tot/q is the total residence time. The magnitude of h_n(s) as a function of frequency is shown in Figure 5.23 for one to four equal tanks in series.

[Figure 5.23: Frequency responses for n tanks in series with the same total residence time τ_h; h_n(s) = 1/((τ_h/n)s + 1)^n, n = 1, 2, 3, 4.]

From controllability Rules 1 and 5, we must at least require for acceptable disturbance rejection that

|G_d(jω_θ)| ≤ 1,   ω_θ = 1/θ   (5.94)

where θ is the delay in the feedback loop.
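The benefit of dividing a fixed total volume over several tanks is seen directly from |h_n(jω)|. A sketch; the numerical values of τ_h and the test frequency are illustrative:

```python
import numpy as np

def h_n_mag(w, tau_h, n):
    """|h_n(jw)| for n equal tanks in series with total residence time tau_h."""
    return 1.0 / (1.0 + (w * tau_h / n) ** 2) ** (n / 2.0)

tau_h, w_theta = 1000.0, 0.1    # total residence time [s]; w = 1/theta [rad/s]
att = [h_n_mag(w_theta, tau_h, n) for n in (1, 2, 3, 4)]

# At frequencies well above 1/tau_h, more tanks give much better attenuation:
assert all(att[i] > att[i + 1] for i in range(3))
```

With these numbers the attenuation improves by roughly two orders of magnitude in going from one tank to three, which is why the required total volume in the table below drops so quickly with n.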
16 0. the control system in Figure 5. The condition which formed the basis for redesigning the process. With ÎØÓØ Õ we obtain the following minimum value for the total volume for Ò equal tanks in series ¡ ÎØÓØ Õ Ò ´ µ¾ Ò ½ Õ (5. approximately (see Section 2.70 Volume each tank [Ñ¿ ] 250000 158 13. With ½¼ s we then ﬁnd that the following designs have the same controllability with respect to disturbance rejection: No.662 m¿ . and this may be difﬁcult to achieve since ´×µ ´×µ is of high order. one purpose of the mixing tanks Ò ´×µ is to ½¼ µ at the frequency ´ ¼ ½ reduce the effect of the disturbance by a factor ´ ¾ [rad/s]). or approximately Ä . from Rule that we have Ë ½ .6. However. The solution is to install a local feedback control system on each tank and to add base in each tank as shown in Figure 5. This is another plant design change since it requires an additional measurement and actuator for each tank.90 1. Consider the case of Ò tanks in series.96 5.94).95) where Õ ¼ ¼½ m¿ /s. whereas for stability and performance the slope of Ä at crossover should not be steeper than ½.9 9. at frequencies lower than 1 we also require that Ë Û . We are not quite ﬁnished yet.51 6.ÄÁÅÁÌ ÌÁÇÆË ÇÆ È Ê ÇÊÅ Æ ÁÆ ËÁËÇ Ë ËÌ ÅË ¾¼ where is the delay in the feedback loop.24. measurements and control. Thus. we would probably select a design with 3 or 4 tanks for this example. Thus. Ò ´ µ ½ . The minimum total volume is obtained with 18 tanks of about 203 litres each — giving a total volume of 3. Control system design.22 with a single feedback controller will not achieve the desired performance. of tanks Ò 1 2 3 4 5 6 7 Total volume ÎØÓØ [Ñ¿ ] 250000 316 40.7 15. may be optimistic because it only ensures ½ at the crossover frequency . which results in a large negative phase for Ä.2). With Ò controllers the overall closedloop response from a disturbance into the ﬁrst tank to the pH in the last tank becomes Ý Ò ´ ½ µ ½ ½·Ä Ä Ä¸ Ò ½ Ä (5.98 1.6 3.e. ´ µ ½ in (5. i. However. 
The problem is that this requires |L| to drop steeply with frequency, which results in a large negative phase for L, whereas for stability and performance the slope of |L| at crossover should not be steeper than -1, approximately (see Section 2.6.2). Thus, the control system in Figure 5.22 with a single feedback controller will not achieve the desired performance. The solution is to install a local feedback control system on each tank and to add base in each tank, as shown in Figure 5.24. This is another plant design change, since it requires an additional measurement and actuator for each tank. With n controllers, the overall closed-loop response from a disturbance into the first tank to the pH in the last tank becomes

y = G_d ( ∏_{i=1}^{n} 1/(1 + L_i) ) d ≈ (G_d / L^n) d   (5.96)
where L_i = L denotes the loop transfer function in each local loop, and the approximation applies at low frequencies where feedback is effective, i.e. |L| ≫ 1.

[Figure 5.24: Neutralization process with two tanks and two controllers.]

In this case, we can design each local loop L_i(s) with a slope of -1 and bandwidth ω_c ≈ ω_θ, such that the overall loop transfer function L^n has slope -n and achieves |L^n| > |G_d| at all frequencies lower than ω_θ (the sizes of the tanks are selected as before such that |G_d(jω_θ)| ≤ 1). It seems unlikely that any other control strategy can achieve a sufficiently high roll-off for |L|.

In summary, this application has shown how a simple controllability analysis may be used to make decisions on both the appropriate size of the equipment, and the selection of actuators and measurements for control. Our conclusions are in agreement with what is used in industry. Importantly, we arrived at these conclusions without having to design any controllers or perform any simulations. Of course, as a final test, the conclusions from the controllability analysis should be verified by simulations using a nonlinear model. Note also that our analysis confirms the usual recommendation of adding base gradually and having one pH-controller for each tank (McMillan, 1984, p. 208).

Exercise 5.8 Comparison of local feedback and cascade control. Explain why a cascade control system with two measurements (pH in each tank) and only one manipulated input (the base flow into the first tank) will not achieve as good performance as the control system in Figure 5.24, where we use local feedback with two manipulated inputs (one for each tank).

The following exercise further considers the use of buffer tanks for reducing quality (concentration, temperature) disturbances in chemical processes.

Exercise 5.9 (a) The effect of a concentration disturbance must be reduced by a factor of 100 at the frequency 0.5 rad/min. The disturbances should be dampened by use of buffer tanks, and the objective is to minimize the total volume. How many tanks in series should one have? What is the total residence time?
(b) The feed to a distillation column has large variations in concentration, and the use of one buffer tank is suggested to dampen these. The effect of the feed concentration d on the product composition y is given by (scaled variables, time in minutes)

G_d(s) = e^{-s}/(3s)

That is, after a step in d the output y will, after an initial delay of 1 min, increase in a ramp-like fashion and reach its maximum allowed value (which is 1) after another 3 minutes. Feedback control should be used, and there is an additional measurement delay of … minutes. What should be the residence time in the tank?
(c) Show that in terms of minimizing the total volume for buffer tanks in series, it is optimal to have buffer tanks of equal size.
(d) Is there any reason to have buffer tanks in parallel (they must not be of equal size, because then one may simply combine them)?
(e) What about parallel pipes in series (pure delay)? Explain why this is most likely not a good solution.

Buffer tanks are also used in chemical processes to dampen liquid flowrate disturbances (or gas pressure disturbances). This is the topic of the following exercise.

Exercise 5.10 Let d = q_in [m³/s] denote a flowrate which acts as a disturbance to the process. We add a buffer tank (with liquid volume V [m³]), and use a "slow" level controller K such that the outflow q_out (the "new" disturbance d_2) is smoother than the inflow q_in (the "original" disturbance). The idea is to temporarily increase or decrease the liquid volume in the tank to avoid sudden changes in q_out. Note that the steady-state value of q_out must equal that of q_in. A material balance yields V(s) = (1/s)(q_in(s) - q_out(s)), and with a level controller q_out(s) = K(s)V(s) we find that

q_out(s) = [ K(s)/(s + K(s)) ] q_in(s),   h(s) ≜ K(s)/(s + K(s))   (5.97)

The design of a buffer tank for a flowrate disturbance then consists of two steps:
1. Design the level controller K(s) such that h(s) has the desired shape (e.g. determined by a controllability analysis of how d_2 = q_out affects the remaining process; note that we must always have h(0) = 1).
2. Design the size of the tank (determine its volume V_max) such that the tank does not overflow or go empty for the expected disturbances in q_in.

Problem statement.
(a) Assume the inflow varies in the range q_in* ± 100%, where q_in* is the nominal value, and apply this stepwise procedure to two cases:
(i) The desired transfer function is h(s) = 1/(τs + 1).
(ii) The desired transfer function is h(s) = 1/(τ_2 s + 1)².
(b) Explain why it is usually not recommended to have integral action in K(s).
(c) In case (ii) one could alternatively use two tanks in series with controllers designed as in (i). Explain why this is most likely not a good solution. (Solution: The required total volume is the same,
but the cost of two smaller tanks is larger than one large tank). (c) In case (ii) one could alternatively use two tanks in series with controllers designed as in (i). What should be the residence time in the tank? (c) Show that in terms of minimizing the total volume for buffer tanks in series. determined by a controllability analysis of how ¾ affects the remaining process. increase in a ramplike fashion and reach its maximum allowed value (which is ½) after another ¿ minutes.10 Let ½ process. Design the level controller Ã ´×µ such that ´×µ has the desired shape (e.97) The design of a buffer tank for a ﬂowrate disturbance then consists of two steps: 1.
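The two desired shapes in part (a) can be checked against (5.97) numerically. A minimal sketch; the value τ = 100 s is an assumed illustration, and the candidate controllers K_1 and K_2 are obtained by solving h = K/(s + K) for K:

```python
import numpy as np

tau = 100.0                       # desired smoothing time constant [s] (assumed)
s = 1j * np.logspace(-4, 0, 200)  # frequencies [rad/s]

# Case (i): a pure gain K = 1/tau gives h(s) = K/(s + K) = 1/(tau*s + 1)
K1 = 1.0 / tau
h1 = K1 / (s + K1)
assert np.allclose(h1, 1 / (tau * s + 1))

# Case (ii): K(s) = 1/(tau^2*s + 2*tau) gives h(s) = 1/(tau*s + 1)^2.
# Note that K has no integral action (K(0) = 1/(2*tau) is finite), cf. part (b).
K2 = 1.0 / (tau**2 * s + 2 * tau)
h2 = K2 / (s + K2)
assert np.allclose(h2, 1 / (tau * s + 1) ** 2)
```

In both cases h(0) = 1, so the outflow matches the inflow at steady state, as required.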
5.16.4 Additional exercises

Exercise 5.11 What information about a plant is important for controller design, and in particular, in which frequency range is it important to know the model well? To answer this problem, you may think about the following subproblems:
(a) Explain what information about the plant is used for Ziegler-Nichols tuning of a SISO PID-controller.
(b) Is the steady-state plant gain G(0) important for controller design? (As an example, consider the plant G(s) = 1/(s + a) with |a| ≤ 1, and design a P-controller K(s) = K_c such that ω_c = 100. How does the controller design and the closed-loop response depend on the steady-state gain G(0) = 1/a?)

Exercise 5.12 Let G(s) = G_0(s)H(s), where H(s) = K_1/(τ_1 s + 1) and G_0(s) = K_2 e^{-θs}/((30s + 1)(Ts + 1)), and let G_d(s) = e^{-2s}. The measurement device for the output has transfer function G_m(s) = e^{-0.2s}. The nominal parameter values are K_1 = K_2 = …, τ_1 = 1 [s], θ = … [s] and T = 2 [s]. The unit for time is seconds.
(a) Assume all variables have been appropriately scaled. Is the plant input-output controllable?
(b) What is the effect on controllability of changing one model parameter at a time in the following ways:
1. K_1 is reduced to 0.025.
2. τ_1 is reduced to 0.1 [s].
3. θ is reduced to 2 [s].
4. T is increased to 30 [s].
5. K_2 is reduced to … .

Exercise 5.13 A heat exchanger is used to exchange heat between two streams: a coolant with flowrate q (1 ± 1 kg/s) is used to cool a hot stream with inlet temperature T_0 (100 ± 10 °C) to the outlet temperature T (which should be 60 ± 10 °C). The main disturbance is on T_0. The measurement delay for T is 3 s. The following model in terms of deviation variables is derived from heat balances:

T(s) = (…/((60s + 1)(12s + 1))) q(s) + (0.6(20s + 1)/((60s + 1)(12s + 1))) T_0(s)   (5.98)

where T and T_0 are in °C, q is in kg/s, and the unit for time is seconds. Derive the scaled model. Is the plant controllable with feedback control? (Solution: The delay poses no problem (performance), but the effect of the disturbance is a bit too large at high frequencies (input saturation), so the plant is not controllable.)

5.17 Conclusion

The chapter has presented a frequency domain controllability analysis for scalar systems, applicable to both feedback and feedforward control. We summarized our findings in terms
of eight controllability rules; see page 197. These rules are necessary conditions ("minimum requirements") to achieve acceptable control performance. They are not sufficient, since among other things they only consider one effect at a time. The key steps in the analysis are to consider disturbances and to scale the variables properly. The method has been applied to a pH neutralization process, and it is found that the heuristic design rules given in the literature follow directly. The tools presented in this chapter may also be used to study the effectiveness of adding extra manipulated inputs or extra measurements (cascade control). They may also be generalized to multivariable plants, where directionality becomes a further crucial consideration. Interestingly, a direct generalization to decentralized control of multivariable plants is rather straightforward and involves the CLDG and the PRGA; see page 453 in Chapter 10.
6 LIMITATIONS ON PERFORMANCE IN MIMO SYSTEMS

In this chapter, we generalize the results of Chapter 5 to MIMO systems. We first discuss fundamental limitations on the sensitivity and complementary sensitivity functions imposed by the presence of RHP-zeros and RHP-poles. We then consider separately the issues of functional controllability, RHP-zeros, RHP-poles, delays, disturbances, input constraints and uncertainty. Finally, we summarize the main steps in a procedure for analyzing the input-output controllability of MIMO plants.

6.1 Introduction

In a MIMO system, RHP-zeros, RHP-poles and disturbances each have directions associated with them. This makes it more difficult to consider their effects separately, as we did in the SISO case, but we will nevertheless see that most of the SISO results may be generalized. We will quantify the directionality of the various effects in G and G_d by their output directions:

- y_z: output direction of a RHP-zero, G(z)u_z = 0 · y_z; see (4.68)
- y_p: output direction of a RHP-pole, G(p_i)u_{p_i} = ∞ · y_{p_i}; see (4.61)
- y_d: output direction of a disturbance; see (6.30)
- u_i: i'th output direction (singular vector) of the plant; see (3.38). (Note that u_i here is the i'th output singular vector, and not the i'th input.)

All these are l × 1 vectors, where l is the number of outputs. y_z and y_p are fixed complex vectors, while y_d(s) and u_i(s) are frequency-dependent (s may here be viewed as a
generalized complex frequency). The vectors are here normalized such that they have Euclidean length 1:

||y_z||_2 = 1,  ||y_p||_2 = 1,  ||y_d(s)||_2 = 1,  ||u_i(s)||_2 = 1

We may also consider the associated input directions of G, e.g. u_z and u_p. However, these directions are usually of less interest, since we are primarily concerned with the performance at the output of the plant.

The angles between the various output directions can be quantified using their inner products: |y_z^H y_p|, |y_z^H y_d|, etc. The inner product gives a number between 0 and 1, and from this we can define the angle in the first quadrant. For example, the output angle between a pole and a zero is

φ = cos^{-1} |y_z^H y_p|

We assume throughout this chapter that the models have been scaled as outlined in Section 1.4. The scaling procedure is the same as that for SISO systems, except that the scaling factors D_u, D_d, D_r and D_e are diagonal matrices with elements equal to the maximum change in each variable u_j, d_k, r_j and e_i. The control error in terms of scaled variables is then

e = y - r = G u + G_d d - R r~

where at each frequency we have ||u(ω)||_max ≤ 1, ||d(ω)||_max ≤ 1 and ||r~(ω)||_max ≤ 1, and the control objective is to achieve ||e(ω)||_max < 1. This leads to necessary conditions for acceptable performance, which involve the elements of different matrices rather than matrix norms.

Remark 1 Here || · ||_max is the vector infinity-norm, that is, the largest element in the vector.

Remark 2 As for SISO systems, we see that reference changes may be analyzed as a special case of disturbances by replacing G_d by -R.

Remark 3 Whether various disturbances and reference changes should be considered separately or simultaneously is a matter of design philosophy. In this chapter, we mainly consider their effects separately, on the grounds that it is unlikely for several disturbances to attain their worst values simultaneously.

6.2 Constraints on S and T

6.2.1 S plus T is the identity matrix

From the identity S + T = I and the triangle inequality for singular values, we get

|1 - σ̄(S)| ≤ σ̄(T) ≤ 1 + σ̄(S)   (6.1)
|1 - σ̄(T)| ≤ σ̄(S) ≤ 1 + σ̄(T)   (6.2)

This shows that we cannot have both S and T small simultaneously, and that σ̄(S) is large if and only if σ̄(T) is large.
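The scaling step described above is mechanical once the maximum changes are chosen. The following sketch illustrates it; the numerical values are purely illustrative and not from the text:

```python
import numpy as np

# Unscaled steady-state model y = G u + Gd d (illustrative 2x2 example)
G_unscaled  = np.array([[87.8, -86.4],
                        [108.2, -109.6]])
Gd_unscaled = np.array([[7.88],
                        [11.72]])

De = np.diag([1.0, 1.0])   # maximum allowed control error in each output
Du = np.diag([1.0, 1.0])   # maximum allowed change in each input
Dd = np.diag([0.3])        # maximum expected disturbance

# Scaled model: G = De^-1 G Du and Gd = De^-1 Gd Dd, so that the scaled
# variables satisfy |u_j| <= 1, |d_k| <= 1 and the target |e_i| < 1
G  = np.linalg.inv(De) @ G_unscaled  @ Du
Gd = np.linalg.inv(De) @ Gd_unscaled @ Dd

assert np.allclose(Gd, Gd_unscaled * 0.3)
```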
This norm is sometimes denoted || · ||_∞, but that notation is not used here, to avoid confusing it with the H∞ norm of a transfer function (where the ∞ denotes the maximum over frequency rather than the maximum over the elements of the vector).
Although these relationships are interesting, it seems difficult to derive any concrete bounds on achievable performance from them.
6.2.2 Sensitivity integrals

For SISO systems we presented several integral constraints on sensitivity (the waterbed effects). These may be generalized to MIMO systems by using the determinant or the singular values of S; see Boyd and Barratt (1991) and Freudenberg and Looze (1988). For example, the generalization of the Bode sensitivity integral may be written

∫_0^∞ ln |det S(jω)| dω = Σ_j ∫_0^∞ ln σ_j(S(jω)) dω = π · Σ_{i=1}^{N_p} Re(p_i)   (6.3)

For a stable L(s), the integrals are zero. Similar constraints apply to L_I, S_I and T_I. Other generalizations are also available, see Zhou et al. (1996), but these are in terms of the input zero and pole directions, u_z and u_p.

6.2.3 Interpolation constraints

RHP-zero. If G(s) has a RHP-zero at z with output direction y_z, then for internal stability of the feedback system the following interpolation constraints must apply:

y_z^H T(z) = 0;   y_z^H S(z) = y_z^H   (6.4)

In words, (6.4) says that T must have a RHP-zero in the same direction as G, and that S(z) has an eigenvalue of 1 corresponding to the left eigenvector y_z.

Proof of (6.4): From (4.68) there exists an output direction y_z such that y_z^H G(z) = 0. For internal stability, the controller cannot cancel the RHP-zero, and it follows that L = GK has a RHP-zero in the same direction, i.e. y_z^H L(z) = 0. Now S = (I + L)^{-1} is stable and thus has no RHP-pole at s = z. It then follows from T = LS that y_z^H T(z) = 0, and from y_z^H (I - T(z)) = y_z^H S(z) that y_z^H S(z) = y_z^H.

RHP-pole. If G(s) has a RHP-pole at p with output direction y_p, then for internal stability the following interpolation constraints apply:

S(p) y_p = 0;   T(p) y_p = y_p   (6.5)

Proof of (6.5): The square matrix L(p) has a RHP-pole at s = p, and if we assume that L(s) has no RHP-zeros at s = p, then L^{-1}(p) exists and from (4.72) there exists an output pole direction y_p such that

L^{-1}(p) y_p = 0   (6.6)

Since T is stable, it has no RHP-pole at s = p, so T(p) is finite. It then follows from S = T L^{-1} that S(p) y_p = T(p) L^{-1}(p) y_p = 0.
6.2.4 Sensitivity peaks

Based on the above interpolation constraints, we here derive lower bounds on the weighted sensitivity functions. The bounds are direct generalizations of those found for SISO systems. The bound on weighted sensitivity with no RHP-poles was first derived by Zames (1981). The generalized result below was derived by Havre and Skogestad (1998a).

We first need to introduce some notation. Consider a plant G(s) with N_z RHP-zeros z_j with output directions y_{z,j}, and N_p RHP-poles p_i with output directions y_{p,i}, and factorize G(s) in terms of (output) Blaschke products as follows:

G(s) = B_p^{-1}(s) G_s(s),   G(s) = B_z(s) G_m(s)   (6.7)

where G_s is the stable and G_m the minimum-phase version of G, and B_p(s) and B_z(s) are stable all-pass transfer matrices (all singular values are 1 for s = jω) containing the RHP-poles and RHP-zeros, respectively. B_p(s) is obtained by factorizing to the output one RHP-pole at a time, starting with G(s) = B_{p1}^{-1}(s) G_{p1}(s), where

B_{p1}(s) = I + (2 Re(p_1)/(s - p_1)) ŷ_{p1} ŷ_{p1}^H

and ŷ_{p1} = y_{p1} is the output pole direction for p_1. This procedure may be continued to factor out p_2 from G_{p1}(s), where ŷ_{p2} is the output pole direction of G_{p1} (which need not coincide with y_{p2}, the pole direction of G), and so on. A similar procedure may be used for the RHP-zeros. In total,

B_p(s) = ∏_{i=1}^{N_p} ( I + (2 Re(p_i)/(s - p_i)) ŷ_{p_i} ŷ_{p_i}^H ),   B_z(s) = ∏_{j=1}^{N_z} ( I + (2 Re(z_j)/(s - z_j)) ŷ_{z_j} ŷ_{z_j}^H )   (6.8)

Remark. Note that the Blaschke notation is reversed compared to that given in the first edition (1996); this is to get consistency with the literature in general. State-space realizations are provided by Zhou et al. (1996, p. 145); note that these realizations may be complex.

With this factorization we have the following theorem.

Theorem 6.1 MIMO sensitivity peaks. Suppose that G(s) has N_z RHP-zeros z_j with output directions y_{z,j}, and N_p RHP-poles p_i with output directions y_{p,i}. Define the all-pass transfer matrices given in (6.8) and compute the real constants

c_{1j} = || y_{z,j}^H B_p(z_j) ||_2 ≥ 1,   c_{2i} = || B_z(p_i) y_{p,i} ||_2 ≥ 1   (6.9)

Let w_P(s) be a stable weight. Then, for closed-loop stability the weighted sensitivity function must satisfy for each RHP-zero z_j

|| w_P S ||_∞ ≥ c_{1j} |w_P(z_j)|   (6.10)

Here c_{1j} = 1 if G has no RHP-poles. Let w_T(s) be a stable weight. Then, for closed-loop stability the weighted complementary sensitivity function must satisfy for each RHP-pole p_i

|| w_T T ||_∞ ≥ c_{2i} |w_T(p_i)|   (6.11)

Here c_{2i} = 1 if G has no RHP-zeros.

The results show that a peak on σ̄(S) larger than 1 is unavoidable if the plant has a RHP-zero, and that a peak on σ̄(T) larger than 1 is unavoidable if the plant has a RHP-pole. In particular, the peaks may be large if the plant has both RHP-zeros and RHP-poles.

Proof of (6.10): Consider a RHP-zero z with direction y_z (the subscript j is omitted). Since S must have RHP-zeros at the plant poles p_i, we may factorize S = S_m B_p^{-1}(s) and introduce the scalar function f(s) = y_z^H w_P(s) S_m(s) y, which is analytic (stable) in the RHP. Here y is a vector of unit length which can be chosen freely. Since B_p^{-1}(s) is all-pass, the maximum modulus principle gives

|| w_P S ||_∞ = || w_P S_m ||_∞ ≥ |f(z)| = |w_P(z)| · |y_z^H S_m(z) y|   (6.12)

The interpolation constraint (6.4) gives y_z^H S(z) = y_z^H, so

y_z^H S_m(z) = y_z^H S(z) B_p(z) = y_z^H B_p(z)   (6.13)

We finally select y such that the lower bound is as large as possible, and derive c_1 = || y_z^H B_p(z) ||_2. The proof of (6.11) in terms of c_2 is similar.

To prove that c_1 ≥ 1, we follow Chen (1995) and introduce the matrix V whose columns together with y_p form an orthonormal basis for C^l, so that I = y_p y_p^H + V V^H. Each factor in B_p may then be written

I + (2 Re(p)/(s - p)) y_p y_p^H = ((s + p̄)/(s - p)) y_p y_p^H + V V^H = [y_p V] diag( (s + p̄)/(s - p), I ) [y_p V]^H

Thus all singular values of each factor evaluated at s = z are 1, except for one, which is |z + p̄|/|z - p| ≥ 1 (since z and p are both in the RHP). So B_p(z) is greater than or equal to 1 in all directions, and hence c_1 ≥ 1.

Lower bounds on ||S||_∞ and ||T||_∞. From Theorem 6.1 we get, by selecting w_P(s) = 1 and w_T(s) = 1,

||S||_∞ ≥ max_{zeros z_j} c_{1j},   ||T||_∞ ≥ max_{poles p_i} c_{2i}   (6.14)

A weaker version of this result was first proved by Boyd and Desoer (1985).

One RHP-pole and one RHP-zero. For the case with one RHP-zero z and one RHP-pole p, we derive from (6.14):

||S||_∞ ≥ c,   ||T||_∞ ≥ c,   c = √( sin²φ + (|z + p̄|²/|z - p|²) cos²φ )   (6.15)

where φ = cos^{-1} |y_z^H y_p| is the angle between the output directions of the pole and zero. This follows from (6.13), since the projection of y_z onto y_p has magnitude cos φ, and the projection onto the remaining subspace is sin φ. We see that if the pole and zero are aligned in the same direction, such that y_z = y_p and φ = 0, then (6.15) simplifies to give the SISO conditions in (5.16) and (5.18) with c = |z + p̄|/|z - p|, and there is an additional penalty for having both a RHP-pole and a RHP-zero. Conversely, if the pole and zero are orthogonal to each other, then φ = 90° and c = 1, and there is no additional penalty for having both a RHP-pole and a RHP-zero. An alternative proof of (6.15) is given by Chen (1995), who presents a slightly improved bound (but his additional factor Q(z) is rather difficult to evaluate).
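The bound (6.15) is easy to evaluate numerically. A sketch; the pole/zero locations and the output directions below are illustrative values, not from the text:

```python
import numpy as np

def peak_bound(z, p, yz, yp):
    """Lower bound c in (6.15) on ||S||inf and ||T||inf for one real RHP-zero z
    and one real RHP-pole p, with unit output directions yz and yp."""
    phi = np.arccos(np.clip(abs(np.vdot(yz, yp)), 0.0, 1.0))
    return np.sqrt(np.sin(phi)**2 + (abs(z + p)**2 / abs(z - p)**2) * np.cos(phi)**2)

z, p = 2.0, 3.0
yz = np.array([1.0, 0.0])

c_aligned = peak_bound(z, p, yz, np.array([1.0, 0.0]))  # phi = 0: c = |z+p|/|z-p| = 5
c_orth    = peak_bound(z, p, yz, np.array([0.0, 1.0]))  # phi = 90 deg: c = 1

assert abs(c_aligned - 5.0) < 1e-9
assert abs(c_orth - 1.0) < 1e-9
```

The example shows the directionality effect clearly: the same pole/zero pair forces a sensitivity peak of at least 5 when the directions are aligned, but no extra peak at all when they are orthogonal.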
Later in this chapter we discuss the implications of these results and provide some examples.

6.3 Functional controllability

Consider a plant G(s) with l outputs, and let r denote the normal rank of G(s). In order to control all outputs independently we must require r = l, that is, the plant must be functionally controllable. The normal rank of G(s) is the rank of G(s) at all values of s, except at a finite number of singularities (which are the zeros of G(s)). We will use the following definition:
Definition 6.1 Functional controllability. An m-input l-output system G(s) is functionally controllable if the normal rank of G(s), denoted r, is equal to the number of outputs (r = l), that is, if G(s) has full row rank. A system is functionally uncontrollable if r < l.

This term was introduced by Rosenbrock (1970, p. 70) for square systems, and related concepts are "right invertibility" and "output realizability". A typical example of a functionally uncontrollable plant is when none of the inputs u_j affect a particular output y_i, which would be the case if one of the rows in G(s) was identically zero. Another example is when there are fewer inputs than outputs, m < l.

Remark 1 The only example of a SISO system which is functionally uncontrollable is the system G(s) = 0.

Remark 2 A plant is functionally uncontrollable if and only if σ_l(G(jω)) = 0 at all frequencies. As a measure of how close a plant is to being functionally uncontrollable, we may therefore consider the minimum singular value σ_l(G(jω)).

Remark 3 In most cases functional uncontrollability is a structural property of the system; that is, it does not depend on specific parameter values, and it may often be evaluated from cause-and-effect graphs.

Remark 4 For strictly proper systems, G(s) = C(sI - A)^{-1}B, we have that G(s) is functionally uncontrollable if rank(B) < l (the system is input deficient), or if rank(C) < l (the system is output deficient), or if rank(sI - A) < l (fewer states than outputs). This follows since the rank of a product of matrices is less than or equal to the minimum rank of the individual matrices.

If the plant is not functionally controllable, i.e. r < l, then there are l - r output directions, denoted y_0, which cannot be affected. These directions will vary with frequency, and we have (analogous to the concept of a zero direction)

y_0^H(jω) G(jω) = 0   (6.16)

From an SVD of G(jω) = U Σ V^H, the uncontrollable output directions y_0(jω) are the last l - r columns of U(jω). By analyzing these directions, an engineer can then decide on whether it is acceptable to keep certain output combinations uncontrolled, or if additional actuators are needed to increase the rank of G(s).
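The uncontrollable output directions in (6.16) can be computed directly from an SVD. A sketch, using the plant of Example 6.1 below; the rank tolerance is an assumed numerical choice:

```python
import numpy as np

def G(s):
    # Plant of Example 6.1: column 2 is two times column 1, so normal rank r = 1 < l = 2
    return np.array([[1/(s+1), 2/(s+1)],
                     [2/(s+2), 4/(s+2)]])

def uncontrollable_dirs(Gw):
    U, sv, Vh = np.linalg.svd(Gw)
    r = int(np.sum(sv > 1e-9 * sv[0]))   # numerical rank at this frequency
    return U[:, r:]                      # last l-r columns of U: y0 with y0^H G = 0

y0_low  = uncontrollable_dirs(G(0.0))    # ~ (1, -1)/sqrt(2)
y0_high = uncontrollable_dirs(G(1e6j))   # ~ (2, -1)/sqrt(5)

assert np.allclose(np.abs(y0_low.ravel()),  [1/np.sqrt(2), 1/np.sqrt(2)])
assert np.allclose(np.abs(y0_high.ravel()), [2/np.sqrt(5), 1/np.sqrt(5)], atol=1e-4)
```

Note that the direction changes with frequency, as stated above, even though the rank deficiency itself is structural.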
Example 6.1 The following plant is singular and thus not functionally controllable:

    G(s) = [ 1/(s+1)   2/(s+1) ]
           [ 1/(s+2)   2/(s+2) ]

This is easily seen since column 2 of G(s) is two times column 1. The uncontrollable output directions at low and high frequencies are, respectively,

    y_0(0) = (1/sqrt(5)) [ 1, -2 ]^T,    y_0(inf) = (1/sqrt(2)) [ 1, -1 ]^T

6.4 Limitations imposed by time delays

Time delays pose limitations also in MIMO systems. Specifically, let theta_ij denote the time delay in the ij'th element of G(s). Then a lower bound on the time delay for output i is given by the smallest delay in row i of G(s), that is,

    theta_i_min = min_j theta_ij

This bound is obvious since theta_i_min is the minimum time for any input to affect output i, and theta_i_min can be regarded as a delay pinned to output i. Holt and Morari (1985a) have derived additional bounds, but their usefulness is sometimes limited since they assume a decoupled closed-loop response (which is usually not desirable in terms of overall performance) and also assume infinite power in the inputs.

For MIMO systems we have the surprising result that an increased time delay may sometimes improve the achievable performance. As a simple example, consider the plant

    G(s) = [ e^{-theta*s}   1 ]                                        (6.17)
           [ 1              1 ]

With theta = 0, the plant is singular (not functionally controllable), and controlling the two outputs independently is clearly impossible. On the other hand, for theta > 0, effective feedback control is possible, provided the bandwidth is larger than about 1/theta. That is, for this example, control is easier the larger theta is. The reason is that the presence of the delay decouples the initial (high-frequency) response, so we can obtain tight control if the controller reacts within this initial time period. To illustrate this, we may compute the magnitude of the RGA (or the condition number) of G as a function of frequency, and note that it is infinite at low frequencies, but drops to about 1 at frequencies above about 1/theta.

Exercise 6.1 To further illustrate the above arguments, compute the sensitivity function S for the plant (6.17) using a simple diagonal controller K = (k/s) I. Use the approximation e^{-theta*s} ~ 1 - theta*s to show that at low frequencies the elements of S(s) are of magnitude 1/(k*theta + 2). How large must k be to have acceptable performance (less than 10% offset at low frequencies)? What is the corresponding bandwidth? (Answer: Need k*theta >= 8. The bandwidth is approximately equal to k.)
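The RGA claim is easy to verify numerically. The following sketch (our own, with theta = 1 assumed) evaluates the magnitude of the 1,1 RGA element of the plant (6.17) at a low frequency and at the frequency 1/theta:

```python
import numpy as np

def G(w, theta=1.0):
    # Plant (6.17) evaluated at s = jw
    return np.array([[np.exp(-1j*w*theta), 1],
                     [1, 1]])

def rga(A):
    # RGA(A) = A x (A^{-1})^T  (element-by-element product)
    return A * np.linalg.inv(A).T

low  = abs(rga(G(0.01))[0, 0])   # frequency well below 1/theta
high = abs(rga(G(1.0))[0, 0])    # frequency around 1/theta

print(low, high)   # large at low frequency, about 1 at w = 1/theta
```

With theta = 1 the low-frequency value is about 100, confirming that the plant is close to singular there, while at w = 1/theta the RGA magnitude has dropped to about 1.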
Remark 1 The observant reader may have noticed that G(s) in (6.17) is singular at s = 0 (even with theta nonzero) and thus has a zero at s = 0. Thus, a controller with integral action, which cancels this zero, yields an internally unstable system (e.g. the transfer function KS contains an integrator). This means that, although the conclusion that the time delay helps is correct, the derivations given in Exercise 6.1 are not strictly correct. To "fix" the results, we may assume that the plant is only going to be controlled over a limited time, so that internal instability is not an issue. Alternatively, we may assume, for example, that e^{-theta*s} is replaced by 0.99 e^{-theta*s}, so that the plant is not singular at steady-state (but it is close to singular).

Exercise 6.2 Repeat Exercise 6.1 numerically, with e^{-theta*s} replaced by the Pade approximation (1 - (theta/2n)s)^n / (1 + (theta/2n)s)^n (where n is the order of the Pade approximation), and plot the elements of S(jw) as functions of frequency for a few values of theta (e.g. theta = 0.1 and 1).

6.5 Limitations imposed by RHP-zeros

RHP-zeros are common in many practical multivariable problems. The limitations they impose are similar to those for SISO systems, although often not quite so serious, since they apply only in particular directions.

The limitations of a RHP-zero located at z may be derived from the bound

    ||w_P S(s)||_inf = max_w |w_P(jw)| sigma_max(S(jw)) >= |w_P(z)|    (6.18)

where w_P(s) is a scalar weight. All the results derived in Section 5.6.4 for SISO systems therefore generalize if we consider the "worst" direction corresponding to the maximum singular value, sigma_max(S). For instance, by selecting the weight w_P(s) such that we require tight control at low frequencies and a peak for sigma_max(S) less than 2, we derive from (5.34) that the bandwidth (in the "worst" direction) must for a real RHP-zero satisfy w_B* < z/2. Alternatively, if we require tight control at high frequencies, then we must from (5.38) satisfy w_B* > 2z.

Remark 1 The use of a scalar weight w_P(s) in (6.18) is somewhat restrictive. However, the assumption is less restrictive if one follows the scaling procedure in Section 1.4 and scales all outputs by their allowed variations, such that their magnitudes are of approximately equal importance.

Remark 2 Note that condition (6.18) involves the maximum singular value (which is associated with the "worst" direction), and therefore the RHP-zero may not be a limitation in other directions.

For ideal ISE optimal control (the "cheap" LQR problem), the SISO result ISE = 2/z from Section 5.4 can be generalized: Qiu and Davison (1993) show for a MIMO plant with RHP-zeros at z_i that the ideal ISE value for a step disturbance or reference is directly related to sum_i 2/z_i. Thus, as for SISO systems, RHP-zeros close to the origin imply poor control performance.

Furthermore, as for SISO systems, we may to some extent choose the worst direction. This is discussed next.
6.5.1 Moving the effect of a RHP-zero to a specific output

In MIMO systems, one can often move the deteriorating effect of a RHP-zero to a given output, which may be less important to control well. This is possible because, although the interpolation constraint y_z^H T(z) = 0 in (6.19) imposes a certain relationship between the elements within each column of T(s), the columns of T(s) may still be selected independently. Let us first consider an example to motivate the results that follow. Most of the results in this section are from Holt and Morari (1985b), where further extensions can also be found.

Example 3.8 continued. Consider the plant

    G(s) = 1/((0.2s+1)(s+1)) [ 1       1 ]
                             [ 1+2s    2 ]

which has a RHP-zero at s = z = 0.5. This is the same plant considered on page 85, where we performed some H-infinity controller designs. The output zero direction satisfies y_z^H G(z) = 0, and we find

    y_z = (1/sqrt(5)) [ 2, -1 ]^T

Any allowable T(s) must satisfy the interpolation constraint y_z^H T(z) = 0 in (6.19), and this imposes the following relationships between the column elements of T(s):

    2 t_11(z) - t_21(z) = 0,    2 t_12(z) - t_22(z) = 0              (6.20)

We will consider reference tracking, y = Tr, and examine three possible choices for T: T_0 diagonal (a decoupled design), T_1 with output 1 perfectly controlled, and T_2 with output 2 perfectly controlled. In all three cases we require perfect tracking at steady-state, i.e. T(0) = I. Of course, we cannot achieve perfect control in practice, but we make the assumption to simplify our argument.

A decoupled design has t_12(s) = t_21(s) = 0, and to satisfy (6.20) we then need t_11(z) = 0 and t_22(z) = 0, so the RHP-zero must be contained in both diagonal elements. One possible choice, which also satisfies T(0) = I, is

    T_0(s) = [ (-s+z)/(s+z)       0       ]
             [      0        (-s+z)/(s+z) ]

For the two designs with one output perfectly controlled we choose

    T_1(s) = [ 1                 0           ]    T_2(s) = [ (-s+z)/(s+z)   beta_2 s/(s+z) ]
             [ beta_1 s/(s+z)  (-s+z)/(s+z)  ]             [      0               1        ]

The basis for the last two selections is as follows: the diagonal element must contain the RHP-zero to satisfy (6.20), and the off-diagonal element must have an s term in the numerator to give T(0) = I. To satisfy (6.20), we must then require for the two designs

    beta_1 = 4,    beta_2 = 1

The RHP-zero has no effect on output 1 for design T_1(s), and no effect on output 2 for design T_2(s), but we then have to accept some interaction. We note that the magnitude of the interaction, as expressed by beta_k, is largest for the case where output 1 is perfectly controlled. This is reasonable, since the zero output direction y_z ~ [0.89, -0.45]^T is mainly in the direction of output 1, so we have to "pay more" to push its effect to output 2.

We see from the above example that, by requiring a decoupled response from r to y, we have to accept that the multivariable RHP-zero appears as a RHP-zero in each of the diagonal elements of T(s): T_0(s) has two RHP-zeros, whereas G(s) has only one RHP-zero at s = z. Thus, requiring a decoupled response generally leads to the introduction of additional RHP-zeros in T(s) which are not present in the plant G(s). We also see that we can move the effect of the RHP-zero to a particular output, but we then have to accept some interaction. This was also observed in the controller designs in Chapter 3 (see Figure 3.10 on page 86). This is stated more exactly in the following theorem.

Theorem 6.2 Assume that G(s) is square, functionally controllable and stable, and has a single RHP-zero at s = z and no RHP-pole at s = z. Then, if the k'th element of the output zero direction is nonzero, i.e. y_zk != 0, it is possible to obtain "perfect" control on all outputs j != k, with the remaining output exhibiting no steady-state offset. Specifically, T can be chosen of the form

    t_jj(s) = 1 and t_ji(s) = 0 for all rows j != k,
    row k:  t_kj(s) = beta_j s/(s+z) for j != k,  t_kk(s) = (-s+z)/(s+z)   (6.21)

where

    beta_j = -2 y_zj / y_zk,   j != k                                  (6.22)

Proof: It is clear that (6.21) satisfies the interpolation constraint y_z^H T(z) = 0 together with T(0) = I; see also Holt and Morari (1985b).

The effect of moving completely the effect of a RHP-zero to output k is quantified by (6.22). We see that if the zero is not "naturally" aligned with this output, i.e. if |y_zk| is much smaller than 1, then the interactions will be significant, in terms of yielding some |beta_j| much larger than 1 in magnitude. In particular, we cannot move the effect of a RHP-zero to an output corresponding to a zero element in y_z, which occurs frequently if we have a RHP-zero pinned to a subset of the outputs.

Exercise 6.3 Consider the plant

    G(s) = [ 1        alpha     ]                                      (6.23)
           [ alpha   1/(s+1)    ]
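As a numerical check of the interpolation constraints (6.20) and of (6.22), the sketch below (our own) verifies that the designs T_1 and T_2 of the example above, with beta_1 = 4 and beta_2 = 1, satisfy y_z^H T(z) = 0 for z = 0.5 and y_z = (1/sqrt(5))[2, -1]^T, and that these beta values agree with beta_j = -2 y_zj / y_zk:

```python
import numpy as np

z  = 0.5
yz = np.array([2.0, -1.0]) / np.sqrt(5)   # output zero direction

def T1(s, b1=4.0):   # output 1 perfectly controlled
    return np.array([[1, 0],
                     [b1*s/(s + z), (-s + z)/(s + z)]])

def T2(s, b2=1.0):   # output 2 perfectly controlled
    return np.array([[(-s + z)/(s + z), b2*s/(s + z)],
                     [0, 1]])

# Interpolation constraint y_z^H T(z) = 0
c1 = np.linalg.norm(yz @ T1(z))
c2 = np.linalg.norm(yz @ T2(z))

# beta_j = -2 y_zj / y_zk from (6.22)
b1 = -2*yz[0]/yz[1]   # zero moved to output 2 (k = 2) -> 4
b2 = -2*yz[1]/yz[0]   # zero moved to output 1 (k = 1) -> 1
print(c1, c2, b1, b2)
```

Both constraint residuals are zero to machine precision, and the formula reproduces beta_1 = 4 and beta_2 = 1, i.e. moving the zero away from output 1 (where y_z is largest) costs the most interaction.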
(a) Find the zero and its output direction. (Answer: z = 1/alpha^2 - 1 and y_z = (1/sqrt(alpha^2+1)) [alpha, -1]^T.)
(b) Which values of alpha yield a RHP-zero, and which of these values is best/worst in terms of achievable performance? (Answer: We have a RHP-zero for 0 < |alpha| < 1. Best for alpha = 0, with the zero at infinity; if control at steady-state is required, then worst for alpha close to 1, with the zero at s = 0.)
(c) Suppose alpha = 0.1. Which output is the most difficult to control? Illustrate your conclusion using Theorem 6.2. (Answer: Output 2 is the most difficult, since the zero is mainly in that direction; we get strong interaction with beta = 20 if we want to control y_2 perfectly.)

Exercise 6.4 Repeat the above exercise for the plant

    G(s) = 1/(s+1) [ s - alpha        1       ]                        (6.24)
                   [ (alpha+2)^2   s - alpha  ]

6.6 Limitations imposed by unstable (RHP) poles

For unstable plants we need feedback for stabilization. More precisely, the presence of an unstable pole p requires for internal stability T(p) y_p = y_p, where y_p is the output pole direction. As for SISO systems, this imposes the following two limitations:

RHP-pole Limitation 1 (input usage). The transfer function KS from plant outputs to plant inputs must satisfy, for any RHP-pole p (Havre and Skogestad, 1997; Havre and Skogestad, 2001),

    ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2                            (6.25)

where u_p is the input pole direction, and G_s is the "stable version" of G, with the RHP-poles of G mirrored into the LHP. This bound is tight, in the sense that there always exists a controller (possibly improper) which achieves the bound.

RHP-pole Limitation 2 (bandwidth). From the bound ||w_T(s) T(s)||_inf >= |w_T(p)| in (6.11), we find that a RHP-pole p imposes restrictions on sigma_max(T) which are identical to those derived on |T| for SISO systems in Section 5.8.2. Thus, we need to react sufficiently fast, and we must require that sigma_max(T(jw)) is about 1 or larger up to the frequency 2|p|, approximately. Note that (6.25) directly generalizes the SISO bound (5.42).

Example 6.2 Consider the following multivariable plant with z = 2.5 and p = 2:

    G(s) = 1/(0.1s+1) [ (s+z)/(s-p)        0        ]                  (6.26)
                      [      0        (s+z)/(s+p)   ]

The plant has a RHP-pole at p = 2 (plus a LHP-zero at s = -2.5, which poses no limitation).
The corresponding input and output pole directions are

    u_p = [ 1, 0 ]^T,    y_p = [ 1, 0 ]^T

The RHP-pole p can be factorized out of G(s) as

    G(s) = B_p^{-1}(s) G_s(s),    B_p(s) = [ (s-p)/(s+p)   0 ]
                                           [      0        1 ]

where the all-pass Blaschke factor B_p(s) contains the RHP-pole, and the stable version G_s(s) is obtained from G(s) by replacing (s - p) with (s + p). Consider input usage in terms of KS. From (6.25) we must have

    ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2 = (2p)(0.1p + 1)/(p + z) = (4)(1.2)/4.5 ~ 1.07

Havre (1998) presents more details, including state-space realizations for controllers that achieve the bound.

6.7 RHP-poles combined with RHP-zeros

For SISO systems we found that performance is poor if the plant has a RHP-pole located close to a RHP-zero. This is also the case in MIMO systems, provided that the directions coincide. For example, for a MIMO plant with a single RHP-zero z and a single RHP-pole p, we derive from (6.14) and (6.15)

    ||S||_inf >= c,   ||T||_inf >= c,
    c = sqrt( sin^2(phi) + (|z + p|^2 / |z - p|^2) cos^2(phi) )        (6.27)

where phi = cos^{-1} |y_z^H y_p| is the angle between the output directions of the RHP-zero and the RHP-pole. We next consider an example which demonstrates the importance of the directions, as expressed by the angle phi.

Example 6.3 Consider the plant

    G_alpha(s) = [ 1/(s-p)    0    ] U_alpha [ (s-z)/(0.1s+1)        0         ]
                 [   0     1/(s+p) ]         [       0         (s+z)/(0.1s+1)  ]

    U_alpha = [ cos(alpha)  -sin(alpha) ],   z = 2,  p = 3             (6.28)
              [ sin(alpha)   cos(alpha) ]

which has, for all values of alpha, a RHP-zero at z = 2 and a RHP-pole at p = 3. For alpha = 0 deg the rotation matrix U_alpha = I, and the plant consists of two decoupled subsystems

    G_0(s) = [ (s-z)/((0.1s+1)(s-p))            0              ]
             [          0              (s+z)/((0.1s+1)(s+p))   ]

Here the subsystem g_11 has both a RHP-pole and a RHP-zero, and closed-loop performance is expected to be poor. On the other hand, there are no particular control problems related to the subsystem g_22. Next, consider alpha = 90 deg, for which we have

    U_90 = [ 0  -1 ],   G_90(s) = [          0              -(s+z)/((0.1s+1)(s-p)) ]
           [ 1   0 ]              [ (s-z)/((0.1s+1)(s+p))              0            ]

and we again have two decoupled subsystems, but this time in the off-diagonal elements. The main difference, however, is that there is now no interaction between the RHP-pole and the RHP-zero, so we expect this plant to be easier to control.
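To make the input-usage bound (6.25) concrete, the sketch below (our own) evaluates it for the decoupled alpha = 0 deg plant of Example 6.3, where the RHP-pole p = 3 sits in channel 1, so u_p = y_p = [1, 0]^T, and the stable version G_s is obtained by replacing (s - p) with (s + p):

```python
import numpy as np

z, p = 2.0, 3.0

def G_s(s):
    # Stable (mirrored) version of the alpha = 0 plant of Example 6.3:
    # the unstable factor 1/(s - p) in channel 1 is replaced by 1/(s + p)
    return np.array([[(s - z)/((0.1*s + 1)*(s + p)), 0],
                     [0, (s + z)/((0.1*s + 1)*(s + p))]])

up = np.array([1.0, 0.0])   # input pole direction

# Lower bound (6.25): ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2
bound = np.linalg.norm(up @ np.linalg.inv(G_s(p)))
print(bound)   # (2p)(0.1p + 1)/(p - z) = 7.8
```

The bound is large here (7.8) because the RHP-zero z = 2 sits close to the mirrored pole in the same channel, so stabilizing this plant requires large inputs; for the 90 deg rotation, where pole and zero are in different channels, the corresponding bound is much smaller.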
89 2.15 1. Æ RHPzero output the and we ﬁnd ÝÔ ¼Æ to ¼ ½ Ì for « ¼ . its output direction is ﬁxed ½ ¼ Ì for all values of «. Response to step in reference Ö ½ ½ Ì with ½ controller for four different values of . Since in (6.ÄÁÅÁÌ ÌÁÇÆË ÁÆ ÅÁÅÇ Ë ËÌ ÅË ¾¾ and we again have two decoupled subsystems.98 1. but this time in the offdiagonal elements. however. and there will be some interaction between the RHPpole and RHPzero.76 3. but and « are not equal.00 7. This is seen from the follwoing Table below. where we also give in (6. À The Table also shows the values of Ë½ and Ì ½ obtained by an À½ optimal Ë ÃË .0 7.40 9.60 2. the angle direction changes from ½ ¼ Ì for « between the pole and zero direction also varies between ¼Æ and ¼Æ .31 1. The main difference. Solid line: Ý½ . so we expect this plant to be easier to control.01 2 1 0 −1 −2 0 1 2 ¼Æ 2 1 0 −1 ¼ Æ 3 4 5 −2 0 1 2 3 4 5 Time 2 1 0 −1 −2 0 1 2 ¿ Æ Time 2 1 0 −1 ¼Æ 3 4 5 −2 0 1 2 3 4 5 Time Time Figure 6. Thus.28) the RHPpole is located at the output of the plant.59 ¼ ½ 1. Dashed line: Ý¾ .55 ½ ¼ ¼ ¼ ¿¿ ¼ Æ ¿¼Æ ¼ ¼ ½½ ¿ Æ ¼Æ ¼Æ ¼Æ 1.53 1.59 1. « ¼Æ ¿¼Æ ¼Æ and ¼Æ : « ÝÞ À Ó× ½ ÝÞ ÝÔ Ë ½ Ì ½ ´Ë ÃË µ ¼Æ ¼Æ 5.27). On the other hand.0 1.1: MIMO plant with angle between RHPpole and RHPzero. For intermediate values of « we do not have decoupled subsystems. for four rotation angles. is that there is no interaction between the RHPpole and RHPzero in this case.60 2.
The H-infinity design for each alpha used the weights

    W_u = I,   W_P = ((s/M + w_B*)/s) I,   M = 2,   w_B* = 0.5         (6.29)

The weight W_P indicates that we require ||S||_inf less than 2, and require tight control up to a frequency of about w_B* = 0.5 rad/s. The minimum H-infinity norm for the overall S/KS problem is given by the value gamma_min in the table above. The corresponding responses to a step change in the reference, r = [1, 1]^T, are shown in Figure 6.1. Several things about the example are worth noting:

1. We see from the simulation for alpha = 0 deg in Figure 6.1 that the response for y_1 is very poor. This is as expected, because of the closeness of the RHP-pole and zero (z = 2, p = 3).
2. For alpha = 90 deg the RHP-pole and RHP-zero do not interact. From the simulation we see that y_1 (solid line) has an overshoot due to the RHP-pole, whereas y_2 (dashed line) has an inverse response due to the RHP-zero.
3. The bounds in (6.27) are tight, since there is only one RHP-zero and one RHP-pole. This can be confirmed numerically by selecting W_u = 0.01 I, w_B* = 0.01 and M = 1 (so that W_u and w_B* are small, and the main objective is to minimize the peak of sigma_max(S)): the resulting H-infinity designs for the four angles then yield values of ||S||_inf which are very close to c.
4. For alpha = 0 deg we have c = 5.0, so ||S||_inf >= 5 and ||T||_inf >= 5, and it is clearly impossible to get ||S||_inf less than 2, as required by the performance weight W_P.
5. The angle phi between the pole and zero is quite different from the rotation angle alpha at intermediate values between 0 deg and 90 deg. This is because of the influence of the RHP-pole in output 1, which yields a strong gain in this direction and thus tends to push the zero direction towards output 2.
6. The H-infinity optimal controller is unstable for alpha = 0 deg. This is not altogether surprising, because for alpha = 0 deg the plant becomes two SISO systems, one of which needs an unstable controller to stabilize it, since p > z (see Section 5.9).

In conclusion, pole and zero directions provide useful information about a plant. However, the output pole and zero directions do depend on the relative scaling of the outputs, which must therefore be done appropriately prior to any analysis.
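The entries y_z, phi and c in the table can be reproduced with a few lines of numpy (our own sketch). The zero direction is extracted from an SVD of G_alpha(z):

```python
import numpy as np

z, p = 2.0, 3.0

def G_alpha(s, a):
    # Plant (6.28): pole block * rotation * zero block
    Ua = np.array([[np.cos(a), -np.sin(a)],
                   [np.sin(a),  np.cos(a)]])
    pole = np.diag([1/(s - p), 1/(s + p)])
    zero = np.diag([(s - z)/(0.1*s + 1), (s + z)/(0.1*s + 1)])
    return pole @ Ua @ zero

yp = np.array([1.0, 0.0])      # output pole direction, fixed for all alpha
rows = []
for a_deg in [0, 30, 60, 90]:
    U, sv, Vh = np.linalg.svd(G_alpha(z, np.radians(a_deg)))
    yz = U[:, -1]              # output zero direction (left null vector of G(z))
    cphi = min(1.0, abs(yz @ yp))
    phi = np.degrees(np.arccos(cphi))
    # lower bound c from (6.27)
    c = np.sqrt(1 - cphi**2 + (abs(z + p)**2/abs(z - p)**2)*cphi**2)
    rows.append((a_deg, round(float(phi), 1), round(float(c), 2)))

print(rows)
```

This reproduces phi = 0, 70.9, 83.4 and 90 deg and c = 5.0, 1.89, 1.15 and 1.0 for the four rotation angles.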
Exercise 6.5 Consider the plant in (6.26), but with the zero moved to z = 2, so that the plant now also has a RHP-zero. Compute the lower bounds on ||S||_inf and ||KS||_inf.

6.8 Performance requirements imposed by disturbances

For SISO systems we found that large and "fast" disturbances require tight control and a large bandwidth. The same results apply to MIMO systems, but again the issue of directions is important.
Definition 6.2 Disturbance direction. Consider a single (scalar) disturbance d and let the vector g_d represent its effect on the outputs, y = g_d d. The disturbance direction is defined as

    y_d = g_d / ||g_d||_2                                              (6.30)

The associated disturbance condition number is defined as

    gamma_d(G) = sigma_max(G) || G^+ y_d ||_2                          (6.31)

Here G^+ is the pseudo-inverse, which equals G^{-1} for a non-singular G.

Remark. We here use the lower-case g_d (rather than G_d) to show that we consider a single disturbance. For a plant with many disturbances, g_d is a column of the matrix G_d.

The disturbance condition number provides a measure of how a disturbance is aligned with the plant. It may vary between 1 (if the disturbance is in the "good" direction, y_d = u_1, the output direction in which the plant has its largest gain) and the condition number gamma(G) = sigma_max(G)/sigma_min(G) (if it is in the "bad" direction, y_d = u_l, the output direction in which the plant has its smallest gain).

In the following, let r = 0, and assume that the disturbance has been scaled such that at each frequency the worst-case disturbance may be selected as |d(w)| = 1. Also assume that the outputs have been scaled such that the performance objective is that, at each frequency, the 2-norm of the control error should be less than 1, i.e. ||e(w)||_2 < 1. With feedback control, e = S g_d d, and the performance objective is then satisfied if

    || S g_d ||_2 < 1  for all w   <=>   || S g_d ||_inf < 1           (6.32)

Since || S g_d ||_2 = ||g_d||_2 || S y_d ||_2, condition (6.32) shows that S must be less than 1/||g_d||_2 only in the direction of y_d.

For SISO systems we used the corresponding condition to derive tight bounds on the sensitivity function and the loop gain, |S| < 1/|g_d| and |1 + L| > |g_d|. A similar derivation is complicated for MIMO systems because of directions. However, since g_d is a vector, we have from (3.42)

    sigma_min(S) ||g_d||_2 <= || S g_d ||_2 <= sigma_max(S) ||g_d||_2   (6.33)

and, using

    sigma_max(S) = 1/sigma_min(I + L),   sigma_min(S) = 1/sigma_max(I + L)   (6.34)

we therefore have:

- For acceptable performance (|| S g_d ||_2 < 1) we must at least require that sigma_max(I + L) is larger than ||g_d||_2, and we may have to require that sigma_min(I + L) is larger than ||g_d||_2.

Plant with RHP-zero. If G(s) has a RHP-zero at s = z, then the performance may be poor if the disturbance is aligned with the output direction of this zero. To see this, use y_z^H S(z) = y_z^H and apply the maximum modulus principle to f(s) = y_z^H S g_d to get

    || S g_d ||_inf >= | y_z^H g_d(z) |                                (6.35)
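Definition 6.2 translates directly into a few lines of code. The sketch below (our own) computes gamma_d for the two disturbances of the steady-state distillation model used later in Example 6.5, reproducing the values 11.75 and 1.48 quoted there:

```python
import numpy as np

G  = 0.5*np.array([[87.8, -86.4],
                   [108.2, -109.6]])
Gd = np.array([[7.88, 8.81],
               [11.72, 11.19]])

def dist_cond_number(G, gd):
    yd = gd / np.linalg.norm(gd)                 # disturbance direction (6.30)
    # gamma_d(G) = sigma_max(G) * ||G^+ y_d||_2  (6.31); G^+ = inv(G) here
    return np.linalg.norm(G, 2) * np.linalg.norm(np.linalg.inv(G) @ yd)

g1 = dist_cond_number(G, Gd[:, 0])   # disturbance 1: unfavourable direction
g2 = dist_cond_number(G, Gd[:, 1])   # disturbance 2: well aligned with the plant
print(g1, g2)
```

Both values lie between 1 and the plant condition number gamma(G) ~ 142, as they must; the large value for disturbance 1 warns that it has a significant component in the plant's low-gain direction.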
To satisfy || S g_d ||_inf < 1, we must then for a given disturbance at least require

    | y_z^H g_d(z) | < 1                                               (6.36)

where y_z is the output direction of the RHP-zero. This provides a generalization of the SISO condition |g_d(z)| < 1 in (5.53). For combined disturbances, the corresponding condition is || y_z^H G_d(z) ||_2 < 1.

Exercise. Show that the disturbance condition number (6.31) may be interpreted as the ratio between the actual input needed for disturbance rejection and the input that would be needed if the same disturbance were aligned with the "best" plant direction.

Example 6.4 Consider the following plant and disturbance models:

    G(s) = 1/(s+2) [ s-1      4     ],   g_d(s) = k_d/(s+2) [ 1 ]       (6.37)
                   [ 4.5   2(s-1)   ]                       [ 1 ]

G(s) has a RHP-zero at z = 4, and in Example 4.11 on page 134 we have already computed the zero direction, y_z = [0.83, -0.55]^T. From this we get

    y_z^H g_d(z) = [ 0.83  -0.55 ] (k_d/6) [ 1, 1 ]^T ~ 0.05 k_d

and from (6.36) we conclude that the plant is not input-output controllable if |k_d| is larger than about 20. We cannot really conclude that the plant is controllable for smaller |k_d|, since (6.36) is only a necessary (and not sufficient) condition for acceptable performance, and there may also be other factors that determine controllability, such as input constraints, which are discussed next.

6.9 Limitations imposed by input constraints

Constraints on the manipulated variables can limit the ability to reject disturbances and track references. As was done for SISO plants in Chapter 5, we will first consider the case of perfect control (e = 0) and then the case of acceptable control (||e|| <= 1). The results in this section apply to both feedback and feedforward control. We derive the results for disturbances; the corresponding results for reference tracking are obtained by replacing G_d by -R.

Remark. The scaling procedure presented in Section 1.4 leads naturally to the vector max-norm as the way to measure signals and performance. However, for mathematical convenience we will also consider the vector 2-norm (Euclidean norm). Fortunately, in most cases the difference between these two norms is of little practical significance: for an m x 1 vector a we have ||a||_max <= ||a||_2 <= sqrt(m) ||a||_max (see (A.94)), so the two norms are at most a factor sqrt(m) apart, and we will neglect this difference in the following.
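The numbers in Example 6.4 can be checked numerically. The sketch below (our own; the disturbance gain k_d = 1 is assumed) recovers the zero direction from an SVD of G(z) and evaluates |y_z^H g_d(z)|:

```python
import numpy as np

z = 4.0   # RHP-zero of the plant in (6.37)

def G(s):
    return (1/(s + 2))*np.array([[s - 1, 4],
                                 [4.5, 2*(s - 1)]])

def gd(s, kd=1.0):
    return (kd/(s + 2))*np.array([1.0, 1.0])

U, sv, Vh = np.linalg.svd(G(z))
yz = U[:, -1]                  # output zero direction, ~[0.83, -0.55] up to sign

proj = abs(yz.conj() @ gd(z))  # |y_z^H g_d(z)| per unit k_d
print(np.round(yz, 2), proj)
```

The projection is about 0.046 per unit k_d, so condition (6.36) fails once |k_d| exceeds roughly 20, in agreement with the conclusion of the example.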
6.9.1 Inputs for perfect control

We here consider the question: can the disturbances be rejected perfectly while maintaining ||u|| <= 1? To answer this, we must quantify the set of possible disturbances and the set of allowed input signals. We will consider both the max-norm and the 2-norm.

Max-norm and square plant. The vector max-norm (largest element) is the most natural choice when considering input saturation, and is also the most natural in terms of our scaling procedure. Consider a square plant. The input needed for perfect disturbance rejection is u = -G^{-1} G_d d (as for SISO systems). Consider a single disturbance (g_d is a vector). Then the worst-case disturbance is |d(w)| = 1, and input saturation is avoided (||u||_max <= 1) if all elements in the vector G^{-1} g_d are less than 1 in magnitude:

    || G^{-1} g_d ||_max <= 1,  for all w                              (6.38)

For simultaneous disturbances (G_d is a matrix) the corresponding requirement is

    || G^{-1} G_d ||_imax <= 1,  for all w                             (6.39)

where ||.||_imax is the induced max-norm (maximum row sum; see (A.105)). However, it is usually recommended in a preliminary analysis to consider one disturbance at a time, for example by plotting, as a function of frequency, the individual elements of the matrix G^{-1} G_d. This yields more information about which particular input is most likely to saturate and which disturbance is the most problematic.

For combined reference changes, ||r~(w)|| <= 1, the corresponding condition for perfect tracking becomes

    || G^{-1} R ||_imax <= 1,  for all w <= w_r                        (6.40)

where w_r is the frequency up to which reference tracking is required. Usually R is diagonal with all elements larger than 1.

Two-norm. Assume that G has full row rank, so that the outputs can be perfectly controlled. Then the smallest inputs (in terms of the 2-norm) needed for perfect disturbance rejection are

    u = -G^+ g_d d,   G^+ = G^H (G G^H)^{-1}

where G^+ is the pseudo-inverse from (A.63). With a single disturbance, input saturation is avoided (||u(w)||_2 <= 1) if

    || G^+ g_d ||_2 <= 1,  for all w                                   (6.41)

With combined disturbances, the corresponding requirement is sigma_max(G^+ G_d) <= 1, i.e. the induced 2-norm must be less than 1 (see (A.106)). Similarly, for combined reference changes, ||r~(w)||_2 <= 1, the condition for perfect control becomes sigma_max(G^+ R) <= 1, or equivalently sigma_min(R^{-1} G) >= 1, for all w <= w_r.
6.9.2 Inputs for acceptable control

Let r = 0 and consider the response e = G u + g_d d to a disturbance d. The question we want to answer in this subsection is: is it possible to achieve ||e|| < 1 for any |d| <= 1 using inputs with ||u|| <= 1? We here use the max-norm, ||.||_max (the vector infinity-norm), for the vector signals.

We consider this problem frequency-by-frequency. This means that we neglect the issue of causality, which is important for plants with RHP-zeros and time delays. The resulting conditions are therefore only necessary (i.e. minimum requirements) for achieving ||e||_max <= 1. To simplify the problem, and also to provide more insight, we consider one disturbance at a time.

Exact conditions. Mathematically, the problem can be formulated in several different ways, e.g. in terms of the minimum achievable control error or the minimum required input. We here use the latter approach: for the worst-case disturbance, |d| = 1, the problem at each frequency is to compute

    U_min := min_u ||u||_max  subject to  || G u + g_d ||_max <= 1     (6.42)

A necessary condition for avoiding input saturation (for each disturbance) is then

    U_min <= 1,  for all w                                             (6.43)

If G and g_d are real (i.e. at steady-state), then (6.42) can be formulated as a linear programming (LP) problem, and in the general case as a convex optimization problem; see Skogestad and Wolff (1992).

For SISO systems we have an analytical solution of (6.42); from the proof of (5.59) we get U_min = (|g_d| - 1)/|G|. A necessary condition for avoiding input saturation (for each disturbance) is then

    SISO:  |G| >= |g_d| - 1  at frequencies where |g_d| > 1            (6.44)

We would like to generalize this result to MIMO systems. Unfortunately, we do not have an exact analytical result, but a nice approximate generalization, (6.47) below, is available.
Approximate conditions in terms of the SVD. At each frequency, the singular value decomposition of the plant (possibly non-square) is G = U Sigma V^H. Introduce the rotated control error and rotated input

    e^ = U^H e,   u^ = V^H u                                           (6.45)

and assume that the max-norm is approximately unchanged by these rotations,

    ||e^||_max ~ ||e||_max,   ||u^||_max ~ ||u||_max                   (6.46)

From (A.94), the error introduced by this approximation is at most a factor sqrt(m), where m is the number of elements in the vector (it would be an equality for the 2-norm; see (A.124)). We then find that each singular value of G must approximately satisfy

    MIMO:  sigma_i(G) >= |u_i^H g_d| - 1  at frequencies where |u_i^H g_d| > 1   (6.47)

where u_i is the i'th output singular vector of G, and |u_i^H g_d| may be interpreted as the projection of g_d onto u_i. Condition (6.47) is a necessary condition for achieving acceptable control (||e||_max < 1) with ||u||_max <= 1, for a single disturbance. Note that u_i (a vector) is the i'th column of U, whereas u^_i (a scalar) is the i'th rotated plant input.

Proof of (6.47): Let r = 0 and consider the response to a single disturbance with d = 1. We have

    e^ = U^H e = U^H (G u + g_d) = Sigma u^ + U^H g_d                  (6.48)

where the last equality follows since U^H G = Sigma V^H. In each rotated direction i this reads e^_i = sigma_i(G) u^_i + u_i^H g_d, which is a scalar problem similar to that for the SISO case in (5.59). If |u_i^H g_d| <= 1, we may simply set u^_i = 0 and achieve |e^_i| <= 1. Otherwise, the smallest input is obtained when sigma_i(G) u^_i is "lined up" to counteract u_i^H g_d, which gives |u^_i| = (|u_i^H g_d| - 1)/sigma_i(G), and (6.47) follows by requiring |u^_i| <= 1.

Several disturbances. For combined disturbances, one requires the i'th row sum of U^H G_d to be less than sigma_i(G) + 1 (at frequencies where the i'th row sum is larger than 1). However, usually we derive more insight by considering one disturbance at a time.

Reference commands. Similar results are derived for references by replacing G_d by -R.

Based on (6.47) we can find out:

1. For which disturbances and at which frequencies input constraints may cause problems. This may give ideas on which disturbances should be reduced, for example by redesign.
2. In which directions i the plant gain is too small. By looking at the corresponding input singular vector, v_i, one can determine which actuators should be redesigned (to get more power in certain directions), and by looking at the corresponding output singular vector, u_i, one can determine on which outputs we may have to reduce our performance requirements.
Example 6.5 Distillation process. Consider a 2 x 2 plant with two disturbances, whose appropriately scaled steady-state model is

    G = 0.5 [ 87.8    -86.4  ],   G_d = [ 7.88    8.81  ]              (6.50)
            [ 108.2  -109.6  ]         [ 11.72   11.19  ]

This is a model of a distillation column, with product compositions as outputs, reflux and boilup as inputs, and feed rate (20% change) and feed composition (20% change) as disturbances. The elements in G are scaled by a factor 0.5 compared to the model used in Chapter 3, because the allowed input changes are here a factor 2 smaller. From an SVD of G we have sigma_max(G) = 98.6 and sigma_min(G) = 0.70, and the disturbance condition numbers for the two disturbances are gamma_d1(G) = 11.75 and gamma_d2(G) = 1.48, respectively. The latter indicates that the direction of disturbance 1 is less favourable than that of disturbance 2. Some immediate observations:

1. The elements in G_d are larger than 1 in magnitude, so control is needed to reject the disturbances.
2. Since sigma_min(G) = 0.70, we are able to perfectly track reference changes of magnitude up to about 0.70 (in terms of the 2-norm) without reaching input constraints.
3. The elements in G are much larger than those in G_d, which suggests that there should be no problems with input constraints.
4. The condition number gamma(G) = sigma_max(G)/sigma_min(G) = 141.7 is large, but this does not by itself imply control problems. In this case, the large value of gamma_d1(G) is not caused by a small sigma_min(G) (which would be a problem), but rather by a large sigma_max(G).

We will now analyze whether the disturbance rejection requirements will cause input saturation, by considering separately the two cases of perfect control (e = 0) and acceptable control (||e||_max <= 1).

1. Perfect control. The inputs needed for perfect disturbance rejection are u = -G^{-1} G_d d, where

    G^{-1} G_d = [ -1.09   -0.009 ]
                 [ -1.29   -0.213 ]

We note that perfect rejection of disturbance 1 (d_1 = 1) requires an input with elements larger than 1 in magnitude, u = [1.09, 1.29]^T, so perfect control of disturbance 1 is not possible without violating input constraints. On the other hand, perfect rejection of disturbance 2 is possible, as it requires a much smaller input, u = [0.009, 0.213]^T in magnitude.

2. Acceptable control (approximate result). We will use the approximate requirement (6.47) to evaluate the inputs needed for acceptable control. We have

    | U^H G_d | = [ 14.1   14.2 ],   sigma_1(G) = 98.6,  sigma_2(G) = 0.70
                  [ 1.17   0.11 ]

and the magnitude of each element in the i'th row of U^H G_d should be less than sigma_i(G) + 1 to avoid input constraints. In the high-gain direction this is easily satisfied, since 14.1 and 14.2 are both much less than sigma_1(G) + 1 ~ 99.6; the required input in this direction is only about u^_1 = (14.1 - 1)/98.6 = 0.13 for both disturbances, which is much less than 1. The requirement is also satisfied in the low-gain direction, since 1.17 and 0.11 are both less than sigma_2(G) + 1 = 1.70, but we note that the margin is relatively small for disturbance 1: in the low-gain direction, disturbance 1 requires an input of magnitude about u^_2 = (1.17 - 1)/0.70 = 0.24, whereas disturbance 2 requires no control in this direction (as its effect, 0.11, is less than 1). Thus, for disturbance 1, input constraints may be an issue after all.
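The input-constraint analysis above is reproduced by the following sketch (our own), which computes G^{-1} G_d for perfect control and the projections U^H G_d against sigma_i + 1 for acceptable control:

```python
import numpy as np

G  = 0.5*np.array([[87.8, -86.4],
                   [108.2, -109.6]])
Gd = np.array([[7.88, 8.81],
               [11.72, 11.19]])

# Perfect control: u = -G^{-1} Gd d; elements larger than 1 mean
# input saturation, cf. (6.38)
perfect = np.linalg.inv(G) @ Gd

# Acceptable control: compare |u_i^H g_d| with sigma_i + 1, cf. (6.47)
U, sv, Vh = np.linalg.svd(G)
proj = np.abs(U.T @ Gd)

print(np.round(np.abs(perfect), 2))   # column 1 has elements > 1
print(np.round(proj, 2), np.round(sv + 1, 2))
```

The first column of |G^{-1} G_d| exceeds 1 (perfect rejection of disturbance 1 saturates the inputs), while the projections 1.17 and 0.11 in the low-gain row show why disturbance 1 is the borderline case for acceptable control.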
3. Acceptable control (exact numerical result). The exact values of the minimum inputs needed to achieve acceptable control, obtained by solving (6.42) numerically, are well below 1 for both disturbances, which confirms that input saturation is not a problem. Again we find disturbance 1 to be more difficult, but the difference between the two disturbances is much smaller than with perfect control. This seems inconsistent with the approximate results above, which gave the same required input, 0.13, for both disturbances in the high-gain direction. However, the results are consistent: for both disturbances, control is mainly needed in the high-gain direction, since in the low-gain direction we only need to reject the small fraction of disturbance 1 by which its projection, 1.17, exceeds 1 (about 15%, (1.17 - 1)/1.17).

From the example we conclude that it is difficult to judge, simply by looking at the magnitudes of the elements in G_d in (6.50), whether a disturbance is difficult to reject or not: from the column vectors of G_d it would appear that the two disturbances have almost identical effects, whereas we found that disturbance 1 may be much more difficult to reject, because it has a component of 1.17 in the low-gain direction of the plant, which is about 10 times larger than the value of 0.11 for disturbance 2. We also found that the results based on perfect control are misleading, as acceptable control is indeed possible. On the other hand, this changes drastically if disturbance 1 is larger, since then a much larger fraction of it must be rejected in the low-gain direction. The discussion also illustrates an advantage of the approximate analytical method in (6.47) over the exact numerical method: we can easily see whether we are close to a borderline value where control may be needed in some direction, whereas no such "warning" is provided by the exact numerical method. (Here, the approximate results indicated that some control was needed for disturbance 1 in the low-gain direction, since 1.17 is just above 1.) The fact that disturbance 1 is more difficult is confirmed in Section 10.10 on page 455, where we also present closed-loop responses.

Exercise 6.6 Consider again the plant in (6.37). Let k_d = 5, and compute, as a function of frequency, the inputs G^{-1} g_d required for perfect control. You will find that both inputs are about 2 in magnitude at low frequency. Next, evaluate G(jw) = U Sigma V^H, compute U^H g_d as a function of frequency, and compare its elements with sigma_i(G(jw)) + 1 to see whether "acceptable control" (||e||_max <= 1) is possible.

6.9.3 Unstable plant and input constraints

Active use of the inputs is needed to stabilize an unstable plant. The input usage due to disturbances and measurement noise is u = -KS (G_d d + n), where KS is the transfer function from the plant outputs to the plant inputs, and from (6.25) we must require ||KS||_inf >= || u_p^H G_s(p)^{-1} ||_2 for any RHP-pole p. So, if the inputs and disturbances have been appropriately scaled,
6.10 Limitations imposed by uncertainty

The presence of uncertainty requires the use of feedback, rather than simply feedforward control, to get acceptable performance. Sensitivity reduction with respect to uncertainty is achieved with high-gain feedback, but for any real system we have a crossover frequency range where the loop gain has to drop below 1, and the presence of uncertainty in this frequency range may result in poor performance or even instability. These issues are the same for SISO and MIMO systems. However, with MIMO systems there is an additional problem in that there is also uncertainty associated with the plant directionality. The main objective of this section is to introduce some simple tools, like the RGA and the condition number, which are useful in picking out plants for which one might expect sensitivity to directional uncertainty.

Remark. In Chapter 8, we discuss more exact methods for analyzing performance with almost any kind of uncertainty and a given controller. This involves analyzing robust performance by use of the structured singular value. In this section, however, the treatment is kept at a more elementary level, and we are looking for results which depend on the plant only.

6.10.1 Input and output uncertainty

In practice, the difference between the true perturbed plant G' and the plant model G is caused by a number of different sources. In this section, we focus on input uncertainty and output uncertainty. In multiplicative (relative) form, the output and input uncertainties (as in Figure 6.2) are given by

   Output uncertainty:  G' = (I + E_O)G   or   E_O = (G' - G)G^-1      (6.51)
   Input uncertainty:   G' = G(I + E_I)   or   E_I = G^-1(G' - G)      (6.52)

These forms of uncertainty may seem similar, but we will show that their implications for control may be very different. If all the elements in the matrices E_I or E_O are non-zero, then we have full-block ("unstructured") uncertainty. However, in many cases the source of uncertainty is in the individual input or output channels, and we have that E_I or E_O are diagonal matrices. For example,

   Diagonal input uncertainty:  E_I = diag{eps_1, eps_2, ..., eps_n}   (6.53)

where eps_i is the relative uncertainty in input channel i. Typically, the magnitude of eps_i is 0.1 or larger. It is important to stress that this diagonal input uncertainty is always present in real systems.
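The relations (6.51)-(6.52) are easy to check numerically. The sketch below (plain numpy, with arbitrary illustrative 2x2 gain matrices that are not from the text) recovers E_O and E_I from a perturbed plant and confirms that both multiplicative forms reproduce G':

```python
import numpy as np

# Nominal and perturbed steady-state gain matrices (illustrative numbers only).
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Gp = np.array([[2.2, 0.9],          # G' = true (perturbed) plant
               [1.1, 3.3]])

# Multiplicative output and input uncertainty, cf. (6.51)-(6.52).
EO = (Gp - G) @ np.linalg.inv(G)    # E_O = (G' - G) G^{-1}
EI = np.linalg.inv(G) @ (Gp - G)    # E_I = G^{-1} (G' - G)

# Both forms reproduce the perturbed plant exactly.
assert np.allclose((np.eye(2) + EO) @ G, Gp)
assert np.allclose(G @ (np.eye(2) + EI), Gp)
```

Note that even though G' - G is the same in both cases, E_O and E_I are in general different matrices, which is the root of the different implications discussed next.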
[Figure 6.2: Plant with multiplicative input and output uncertainty]

6.10.2 Effect of uncertainty on feedforward control

Consider a feedforward controller u = K_r r for the case with no disturbances (d = 0). We assume that the plant is invertible, so that we can select K_r = G^-1 and achieve perfect control for the nominal case with no uncertainty: y = G K_r r = r, that is, e = y - r = 0. However, for the actual plant G' (with uncertainty) the actual control error becomes e' = y' - r = G' G^-1 r - r, and we get for the two sources of uncertainty

   Output uncertainty:  e' = E_O r             (6.54)
   Input uncertainty:   e' = G E_I G^-1 r      (6.55)

For output uncertainty, (6.54) is identical to the result in (5.70) for SISO systems: the worst-case relative control error ||e'||_2 / ||r||_2 is equal to the magnitude of the relative output uncertainty sigma_bar(E_O). However, for input uncertainty the sensitivity may be much larger, because the elements in the matrix G E_I G^-1 can be much larger than the elements in E_I. In particular, for diagonal input uncertainty the diagonal elements of G E_I G^-1 are directly related to the RGA, see (A.80):

   Diagonal input uncertainty:  [G E_I G^-1]_ii = sum_{j=1..n} lambda_ij(G) eps_j    (6.56)

The RGA-matrix is scaling independent, which makes the use of condition (6.56) attractive. Since diagonal input uncertainty is always present, we can conclude:

- If the plant has large RGA elements within the frequency range where effective control is desired, then it is not possible to achieve good reference tracking with feedforward control, because of strong sensitivity to diagonal input uncertainty.

The reverse statement is not true; that is, if the RGA has small elements we cannot conclude that the sensitivity to input uncertainty is small. This is seen from the following expression for the 2x2 case:

   G E_I G^-1 = [ lambda_11 eps_1 + lambda_12 eps_2      -lambda_11 (g12/g22)(eps_1 - eps_2)
                  lambda_11 (g21/g11)(eps_1 - eps_2)      lambda_21 eps_1 + lambda_22 eps_2 ]    (6.57)

For example, consider a triangular plant with g12 = 0. In this case the RGA is Lambda = I, so the diagonal elements of G E_I G^-1 are eps_1 and eps_2, but from (6.57) the 2,1-element of G E_I G^-1 may be large if g21/g11 is large.
6.10.3 Uncertainty and the benefits of feedback

To illustrate the benefits of feedback control in reducing the sensitivity to uncertainty, we consider the effect of output uncertainty on reference tracking. As a basis for comparison, we first consider feedforward control.

Feedforward control. Let the nominal transfer function with feedforward control be y = T_r r, where T_r = G K_r and K_r denotes the feedforward controller. Ideally, T_r = I. With model error, T_r' = G' K_r, and the change in response is y' - y = (T_r' - T_r)r, where

   T_r' - T_r = (G' - G)G^-1 T_r = E_O T_r      (6.58)

Thus, y' - y = E_O T_r r, and with feedforward control the relative control error caused by the uncertainty is equal to the relative output uncertainty.

Feedback control. With one degree-of-freedom feedback control the nominal transfer function is y = T r, where T = L(I + L)^-1 is the complementary sensitivity function. Ideally, T = I. The change in response with model error is y' - y = (T' - T)r, where from (A.144)

   T' - T = S' E_O T      (6.59)

Thus, y' - y = S' E_O T r, and we see that with feedback control the effect of the uncertainty is reduced by a factor S' compared with that of feedforward control. Thus, feedback control is much less sensitive to uncertainty than feedforward control at frequencies where feedback is effective and the elements in S' are small. However, the opposite may be true in the crossover frequency range, where S' may have elements larger than 1; see Section 6.10.4.

Remark 1 For square plants, E_O = (G' - G)G^-1, and (6.59) provides a generalization of Bode's differential relationship (2.23) for SISO systems. To see this, consider a SISO system, write Delta G = G' - G and Delta T = T' - T, so that (6.59) becomes

   Delta T / T = S' Delta G / G      (6.60)

and let Delta G -> 0. Then S' -> S, and we recover the differential relationship

   dT / T = S dG / G      (6.61)

Remark 2 Alternative expressions showing the benefits of feedback control are derived by introducing the inverse output multiplicative uncertainty G' = (I - E_iO)^-1 G (Horowitz and Shaked, 1975). We then get

   Feedforward control:  T_r' - T_r = E_iO T_r'      (6.62)
   Feedback control:     T' - T = S E_iO T'          (6.63)

Remark 3 Another form of (6.59) is (Zames, 1981)

   T' - T = S'(L' - L)S      (6.64)

Conclusion. From (6.59) and (6.64) we see that with feedback control T' - T is small at frequencies where feedback is effective (i.e. S and S' are small). This is usually at low frequencies. At higher frequencies we have for real systems that L is small, so T is small, and again T' - T is small. Thus with feedback, uncertainty only has a significant effect in the crossover region where S and T both have norms around 1.

6.10.4 Uncertainty and the sensitivity peak

We demonstrated above how feedback may reduce the effect of uncertainty, but we also pointed out that uncertainty may pose limitations on achievable performance, especially at crossover frequencies. The objective in the following is to investigate how the magnitude of the sensitivity, sigma_bar(S'), is affected by the multiplicative output and input uncertainty in (6.51) and (6.52). We assume that G and G' are stable. We also assume closed-loop stability, so that both S and S' are stable; we then get that (I + E_O T)^-1 and (I + E_I T_I)^-1 are stable.

We will derive upper bounds on sigma_bar(S') which involve the plant and controller condition numbers

   gamma(G) = sigma_bar(G)/sigma(G),   gamma(K) = sigma_bar(K)/sigma(K)      (6.65)

and the following minimized condition numbers of the plant and the controller

   gamma*_I(G) = min_{D_I} gamma(G D_I),   gamma*_O(K) = min_{D_O} gamma(D_O K)      (6.66)

where D_I and D_O are diagonal scaling matrices. These minimized condition numbers may be computed using (A.74) and (A.75). We first state some upper bounds on sigma_bar(S'); these are based on the identities (6.67)-(6.69) below and singular value inequalities. Similarly, we state a lower bound on sigma_bar(S') for an inverse-based controller in terms of the RGA-matrix of the plant.
The following factorizations of S' in terms of the nominal sensitivity S form the basis for the development:

   Output uncertainty:  S' = S(I + E_O T)^-1                                          (6.67)
   Input uncertainty:   S' = S(I + G E_I G^-1 T)^-1 = S G(I + E_I T_I)^-1 G^-1       (6.68)
   Input uncertainty:   S' = (I + T K^-1 E_I K)^-1 S = K^-1 (I + T_I E_I)^-1 K S     (6.69)

In most cases we assume that the magnitude of the multiplicative (relative) uncertainty at each frequency can be bounded in terms of its singular value,

   sigma_bar(E_I) <= |w_I|,   sigma_bar(E_O) <= |w_O|      (6.70)

where w_I(s) and w_O(s) are scalar weights. Typically the uncertainty bound, |w_I| or |w_O|, is 0.2 at low frequencies and exceeds 1 at higher frequencies.

The upper bounds follow from (6.67)-(6.69) by singular value inequalities (see Appendix A) of the kind

   sigma_bar((I + E_I T_I)^-1) = 1/sigma(I + E_I T_I) <= 1/(1 - sigma_bar(E_I T_I)) <= 1/(1 - sigma_bar(E_I) sigma_bar(T_I)) <= 1/(1 - |w_I| sigma_bar(T_I))

Of course, these inequalities only apply if we assume sigma_bar(E_I T_I) < 1 and |w_I| sigma_bar(T_I) < 1. For simplicity, we will not state these assumptions each time.

Upper bound on sigma_bar(S') for output uncertainty. From (6.67) we derive

   sigma_bar(S') <= sigma_bar(S) sigma_bar((I + E_O T)^-1) <= sigma_bar(S) / (1 - |w_O| sigma_bar(T))      (6.71)

From (6.71) we see that output uncertainty, be it diagonal or full block, poses no particular problem when performance is measured at the plant output: if we have a reasonable margin to stability (||(I + E_O T)^-1||_inf is not too much larger than 1), then the nominal and perturbed sensitivities do not differ very much.

Upper bounds on sigma_bar(S') for input uncertainty. The sensitivity function can be much more sensitive to input uncertainty than to output uncertainty.

1. General case (full-block or diagonal input uncertainty and any controller). From (6.68) and (6.69) we derive

   sigma_bar(S') <= gamma(G) sigma_bar(S) sigma_bar((I + E_I T_I)^-1) <= gamma(G) sigma_bar(S) / (1 - |w_I| sigma_bar(T_I))      (6.72)
   sigma_bar(S') <= gamma(K) sigma_bar(S) sigma_bar((I + T_I E_I)^-1) <= gamma(K) sigma_bar(S) / (1 - |w_I| sigma_bar(T_I))      (6.73)

From (6.72) and (6.73) we have the important result that if we use a "round" controller, meaning that gamma(K) is close to 1, then the sensitivity function is not sensitive to input uncertainty. In many cases the bounds (6.72) and (6.73) are not very useful, because they yield unnecessarily large upper bounds; in particular, for an inverse-based controller they apply with gamma(K) = gamma(G), which may be large. To improve on this conservativeness, we present below some bounds for special cases, where we either restrict the uncertainty to be diagonal or restrict the controller to be of a particular form.

2. Diagonal uncertainty and decoupling controller. Consider a decoupling controller in the form K(s) = D(s)G^-1(s), where D(s) is a diagonal matrix. In this case K G = D(s) is diagonal, so T_I = KG(I + KG)^-1 is diagonal, and E_I T_I is diagonal. The second identity in (6.68) may then be written S' = S (G D_I)(I + E_I T_I)^-1 (G D_I)^-1, where the diagonal scaling D_I is freely chosen (it commutes with the diagonal matrix (I + E_I T_I)^-1), and we get

   sigma_bar(S') <= gamma*_I(G) sigma_bar(S) sigma_bar((I + E_I T_I)^-1) <= gamma*_I(G) sigma_bar(S) / (1 - |w_I| sigma_bar(T_I))      (6.74)
   sigma_bar(S') <= gamma*_O(K) sigma_bar(S) sigma_bar((I + T_I E_I)^-1) <= gamma*_O(K) sigma_bar(S) / (1 - |w_I| sigma_bar(T_I))      (6.75)

In particular, the bounds (6.74) and (6.75) apply to any decoupling controller of the form K = l(s)G^-1, which yields input-output decoupling with T_I = T = t I, where t = l/(1 + l).
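The factorizations underlying these bounds can be spot-checked numerically. The following sketch (random matrices standing in for frequency-response values at a single frequency; none of the numbers come from the text) verifies the output-uncertainty identity (6.67) and the input-uncertainty identity (6.68):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
G = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
I = np.eye(n)

S = np.linalg.inv(I + G @ K)             # nominal sensitivity
T = G @ K @ np.linalg.inv(I + G @ K)     # complementary sensitivity

# Output uncertainty: G' = (I + E_O) G, identity (6.67)
EO = 0.1 * rng.standard_normal((n, n))
Sp = np.linalg.inv(I + (I + EO) @ G @ K)
assert np.allclose(Sp, S @ np.linalg.inv(I + EO @ T))

# Input uncertainty: G' = G (I + E_I), first identity in (6.68)
EI = np.diag(0.1 * rng.standard_normal(n))
Sp = np.linalg.inv(I + G @ (I + EI) @ K)
assert np.allclose(Sp, S @ np.linalg.inv(I + G @ EI @ np.linalg.inv(G) @ T))
```

The same pattern extends to (6.69) when K is invertible.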
Remark. A diagonal controller has gamma*_O(K) = 1, so from (6.77) below we see that (6.75) applies to both a diagonal and a decoupling controller. Another bound which does apply to any controller is given in (6.77).

3. Diagonal uncertainty (any controller). From the first identity in (6.68) we get S' = S(I + (G D_I) E_I (G D_I)^-1 T)^-1, where the diagonal scaling D_I commutes with the diagonal matrix E_I, and we derive by singular value inequalities

   sigma_bar(S') <= sigma_bar(S) / (1 - gamma*_I(G) |w_I| sigma_bar(T))      (6.76)

Similarly, from (6.69),

   sigma_bar(S') <= sigma_bar(S) / (1 - gamma*_O(K) |w_I| sigma_bar(T))      (6.77)

Note that gamma*_O(K) = 1 for a diagonal controller, so (6.77) shows that diagonal input uncertainty does not pose much of a problem when we use decentralized control.

Lower bound on sigma_bar(S') for input uncertainty
Above we derived upper bounds on sigma_bar(S'); we will next derive a lower bound. A lower bound is more useful, because it allows us to make definite conclusions about when the plant is not input-output controllable.

Theorem 6.3 Input uncertainty and inverse-based control. Consider a controller K(s) = l(s)G^-1(s) which results in a nominally decoupled response with sensitivity S = s I and complementary sensitivity T = t I, where t(s) = 1 - s(s). Suppose the plant has diagonal input uncertainty of relative magnitude |w_I(jw)| in each input channel. Then there exists a combination of input uncertainties such that at each frequency

   sigma_bar(S') >= sigma_bar(S) (1 + (|w_I t| / (1 + |w_I t|)) ||Lambda(G)||_i,inf)      (6.78)

where ||Lambda(G)||_i,inf is the maximum row sum of the RGA of G and sigma_bar(S) = |s|.

The proof is given below. From (6.78) we see that with an inverse-based controller the worst-case sensitivity will be much larger than the nominal at frequencies where the plant has large RGA elements. At frequencies where control is effective (|s| is small and t is about 1) this implies that control is not as good as expected, but it may still be acceptable. However, at crossover frequencies, where |s| and |t| = |1 - s| are both close to 1, we find that sigma_bar(S') in (6.78) may become much larger than 1 if the plant has large RGA elements at these frequencies. The bound (6.78) applies to diagonal input uncertainty and therefore also to full-block input uncertainty (since it is a lower bound).

Worst-case uncertainty. It is useful to know which combinations of input errors give poor performance. For an inverse-based controller, a good indicator results if we consider G E_I G^-1, where E_I = diag{eps_j}. If all eps_j have the same magnitude |w_I|, then the largest possible magnitude of any diagonal element in G E_I G^-1 is given by |w_I| ||Lambda_i(G)||_1, the 1-norm of row i of the RGA. To obtain this value, one may select the phase of each eps_j such that the terms lambda_ij eps_j all align, where i denotes the row of Lambda(G) with the largest elements. Also, if Lambda(G) is real (e.g. at steady-state), the signs of the eps_j's should be the same as those in the row of Lambda(G) with the largest elements.
Proof of Theorem 6.3: (From Skogestad and Havre (1996) and Gjosaeter (1995)). Write the sensitivity function as

   S' = (I + G'K)^-1 = S G(I + E_I T_I)^-1 G^-1,   E_I = diag{eps_j}      (6.79)

where, for the decoupling controller, S = s I and T_I = t I. Since E_I T_I = diag{t eps_j} is a diagonal matrix, the diagonal elements of S' are given in terms of the RGA of the plant as

   s'_ii = s sum_{j=1..n} lambda_ij / (1 + t eps_j),   Lambda = G x (G^-1)^T      (6.80)

(Note that s here is a scalar sensitivity function and not the Laplace variable.) The singular value of a matrix is larger than the magnitude of any of its elements, so sigma_bar(S') >= max_i |s'_ii|, and the objective in the following is to choose a combination of input errors eps_j such that the worst-case |s'_ii| is as large as possible. Consider a given output i and write each term in the sum in (6.80) as

   lambda_ij / (1 + t eps_j) = lambda_ij (1 - t eps_j / (1 + t eps_j))      (6.81)

We choose all eps_j to have the same magnitude |w_I(jw)|, so we have eps_j = |w_I| e^{j phi_j}. We also assume that |t eps_j| < 1 at all frequencies, such that the phase of 1 + t eps_j lies between -90 deg and 90 deg. (This assumption is not included in the theorem since it is actually needed for robust stability: if it does not hold, we may have S' infinite for some allowed uncertainty, and (6.78) clearly holds.) It is then always possible to select phi_j (the phase of eps_j) such that the last term in (6.81) is real and negative, and we have at each frequency, with these choices for eps_j,

   |s'_ii / s| = |sum_j lambda_ij / (1 + t eps_j)|
              = 1 + sum_j |lambda_ij| |t eps_j| / |1 + t eps_j|
              >= 1 + (|w_I t| / (1 + |w_I t|)) sum_{j=1..n} |lambda_ij|      (6.82)

where the first equality makes use of the fact that the row elements of the RGA sum to 1 (sum_{j=1..n} lambda_ij = 1). The inequality follows since |eps_j| = |w_I| and |1 + t eps_j| <= 1 + |t eps_j| = 1 + |w_I t|. This derivation holds for any i (but only for one at a time), and (6.78) follows by selecting i to maximize sum_{j=1..n} |lambda_ij| (the maximum row sum of the RGA of G).
We next consider three examples. In the first two, we consider feedforward and feedback control of a plant with large RGA elements. In the third, we consider feedback control of a plant with a large condition number, but with small RGA elements. The first two are sensitive to diagonal input uncertainty, whereas the third is not.

Example 6.6 Feedforward control of distillation process. Consider the distillation process in (3.81),

   G(s) = (1/(75s+1)) [  87.8   -86.4
                        108.2  -109.6 ],   Lambda(G) = [  35.1  -34.1
                                                         -34.1   35.1 ]      (6.83)

With E_I = diag{eps_1, eps_2} we get for all frequencies

   G E_I G^-1 = [ 35.1 eps_1 - 34.1 eps_2     -27.7 (eps_1 - eps_2)
                  43.2 (eps_1 - eps_2)        -34.1 eps_1 + 35.1 eps_2 ]      (6.84)

We note, as expected from (6.56), that the RGA elements appear on the diagonal of the matrix G E_I G^-1. The elements in the matrix are largest when eps_1 and eps_2 have opposite signs. With a 20% error in each input channel we may select eps_1 = 0.2 and eps_2 = -0.2 and find

   G E_I G^-1 = [ 13.8  -11.1
                  17.3  -13.8 ]      (6.85)

Thus, with an "ideal" feedforward controller and 20% input uncertainty, we get from (6.55) that the relative tracking error at all frequencies, including steady-state, may exceed 1000%. This demonstrates the need for feedback control. However, applying feedback control is also difficult for this plant, as seen in Example 6.7.
Example 6.7 Feedback control of distillation process. Consider again the distillation process G(s) in (6.83), for which we have ||Lambda(G(jw))||_i,inf = 69.1 and gamma(G) = 141.7 at all frequencies.

1. Inverse-based feedback controller. Consider the controller K(s) = (0.7/s)G^-1(s), corresponding to the nominal sensitivity function

   S(s) = (s/(s + 0.7)) I

The nominal response is excellent, but we found from simulations in Figure 3.12 that the closed-loop response with 20% input gain uncertainty was extremely poor (we used eps_1 = 0.2 and eps_2 = -0.2). The poor response is easily explained from the lower RGA bound on sigma_bar(S') in (6.78). With the inverse-based controller we have l(s) = 0.7/s, which has a nominal phase margin of PM = 90 deg, so from (2.42) we have, at the crossover frequency w = 0.7 rad/min, that |s(jw)| = |t(jw)| = 1/sqrt(2) = 0.707. With |w_I| = 0.2 we then get from (6.78) that

   sigma_bar(S'(jw)) >= 0.707 (1 + (0.707 x 0.2)/(1 + 0.707 x 0.2) x 69.1) = 6.8      (6.86)

(This is close to the peak value of the bound (6.78), which occurs at a frequency of about 0.8 rad/min.) Thus, with 20% input uncertainty we may have ||S'||_inf >= 6.8, and this explains the observed poor closed-loop performance. For comparison, the actual worst-case peak value of sigma_bar(S') with the inverse-based controller is 14.5 (computed numerically using skewed-mu, as discussed below). This is close to the value obtained with the particular uncertainty E_I = diag{0.2, -0.2}, for which the peak occurs at a similar frequency. The difference between the values 14.5 and 6.8 illustrates that the bound in terms of the RGA is generally not tight, but it is nevertheless very useful.
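The number in (6.86), and the peak of the lower bound, can be reproduced by sweeping (6.78) over frequency for l(s) = 0.7/s and |w_I| = 0.2, with the RGA row sum computed from the gains in (6.83). A sketch (frequencies in rad/min):

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])
RGA = G * np.linalg.inv(G).T
row_sum = np.abs(RGA).sum(axis=1).max()     # ||Lambda(G)||_{i,inf} ~ 69.1

wI = 0.2
w = np.logspace(-2, 2, 2000)                # frequency grid [rad/min]
l = 0.7 / (1j * w)                          # inverse-based loop, l(s) = 0.7/s
s = 1.0 / (1.0 + l)                         # nominal sensitivity
t = 1.0 - s                                 # complementary sensitivity

# Lower bound (6.78) on sigma_bar(S') at each frequency:
bound = np.abs(s) * (1 + np.abs(wI * t) / (1 + np.abs(wI * t)) * row_sum)

assert 69.0 < row_sum < 69.3
assert 6.5 < bound.max() < 7.1              # peak of the bound is about 6.8
```

The peak of the curve sits near w = 0.8 rad/min, consistent with the discussion of (6.86).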
Next, we look at the upper bounds. Unfortunately, in this case gamma*_I(G) = gamma*_O(K) = 141.7, so the upper bounds on sigma_bar(S') in (6.74) and (6.75) are not very tight (they are of magnitude 141.7 at high frequencies).

2. Diagonal (decentralized) feedback controller. Consider the controller

   K(s) = (2.4 x 10^-2 (75s + 1)/s) diag{1, -1}

The peak value for the upper bound on sigma_bar(S') in (6.77) is 1.26, so we are guaranteed ||S'||_inf <= 1.26, even with 20% gain uncertainty. For comparison, the actual peak in the perturbed sensitivity function with E_I = diag{0.2, -0.2} is ||S'||_inf = 1.05. Of course, the problem with the simple diagonal controller is that (although it is robust) even the nominal response is poor.

The following example demonstrates that a large plant condition number, gamma(G), does not necessarily imply sensitivity to uncertainty, even with an inverse-based controller.
Example 6.8 Feedback control of distillation process, DV model. In this example we consider the following distillation model given by Skogestad et al. (1988) (it is the same system as studied above, but with the DV rather than the LV configuration for the lower control levels; see Example 10.5):

   G(s) = (1/(75s+1)) [  -87.8    1.4
                        -108.2   -1.4 ],   Lambda(G) = [ 0.45  0.55
                                                         0.55  0.45 ]      (6.87)

We have that ||Lambda(G(jw))||_i,inf = 1, gamma(G) = 70.8 and gamma*_I(G) = 1.11 at all frequencies. Since both the RGA elements and gamma*_I(G) are small, we do not expect problems with input uncertainty and an inverse-based controller. Consider an inverse-based controller K_inv(s) = (0.7/s)G^-1(s), which yields gamma(K) = gamma(G) and gamma*_O(K) = gamma*_I(G). In Figure 6.3, we show the lower bound (6.78) given in terms of ||Lambda||_i,inf and the two upper bounds (6.74) and (6.76) given in terms of gamma*_I(G), for two different uncertainty weights w_I. From these curves we see that the bounds are close, and we conclude that the plant in (6.87) is robust against input uncertainty even with a decoupling controller.
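The claim that the DV model combines a large condition number with a small RGA can be checked directly with numpy (gamma*_I requires a search over diagonal scalings, so only gamma(G) and the RGA row sum are computed in this sketch):

```python
import numpy as np

G = np.array([[-87.8, 1.4],
              [-108.2, -1.4]])        # steady-state gains of the DV model (6.87)

cond = np.linalg.cond(G)              # plant condition number gamma(G)
RGA = G * np.linalg.inv(G).T
row_sum = np.abs(RGA).sum(axis=1).max()

assert 70 < cond < 72                 # gamma(G) ~ 70.8: large
assert np.isclose(row_sum, 1.0)       # ||Lambda||_{i,inf} = 1: small RGA
```

So a large gamma(G) alone is not a reliable warning sign; it is the combination with large RGA elements (as in the LV model) that signals trouble.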
[Figure 6.3: Bounds on the sensitivity function for the distillation column with the DV configuration: lower bound L1 from (6.78), upper bounds U1 from (6.76) and U2 from (6.74). Magnitude versus frequency [rad/min]. Panel (a): uncertainty weight w_I(s) = 0.2; panel (b): w_I(s) = 0.2 (5s+1)/(0.5s+1).]

Remark. Relationship with the structured singular value: skewed-mu. To analyze exactly the worst-case sensitivity with a given uncertainty weight w_I, we may compute skewed-mu (mu_s). With reference to Section 8.11, this involves computing mu_Dhat(N) with Dhat = diag{Delta_I, Delta_P} and

   N = [ w_I T_I         w_I K S
         (1/mu_s) S G    (1/mu_s) S ]

and varying mu_s until mu(N) = 1. The worst-case performance at a given frequency is then sigma_bar(S'(jw)) = mu_s(N).

Example 6.9 Consider the plant

   G(s) = [ 1   100
            0     1 ]

for which at all frequencies Lambda(G) = I, gamma(G) is about 10^4 and gamma*_I(G) = 200. The RGA matrix is the identity, but since g12/g22 = 100 is large, we expect from (6.57) that this plant will be sensitive to diagonal input uncertainty if we use inverse-based feedback control, K = (k/s)G^-1. This is confirmed if we compute the worst-case sensitivity function S' for G' = G(I + w_I Delta_I), where Delta_I is diagonal and |w_I| = 0.2. We find by computing skewed-mu, mu_s(N), that the peak of sigma_bar(S') is ||S'||_inf of about 20. Note that the peak is independent of the controller gain k in this case, since G(s) is a constant matrix. Also note that with full-block ("unstructured") input uncertainty (Delta_I a full matrix) the worst-case sensitivity is much larger, ||S'||_inf of about 1021.
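For the plant of Example 6.9 the RGA gives no warning, while (6.57) does: the off-diagonal term of G E_I G^-1 is 100(eps_2 - eps_1), which is large for differential input errors. A quick check:

```python
import numpy as np

G = np.array([[1.0, 100.0],
              [0.0, 1.0]])            # plant of Example 6.9

RGA = G * np.linalg.inv(G).T
assert np.allclose(RGA, np.eye(2))    # RGA = I: no warning from the RGA

assert np.linalg.cond(G) > 9.9e3      # but gamma(G) is about 10^4

# Diagonal 20% input errors with opposite signs:
EI = np.diag([0.2, -0.2])
M = G @ EI @ np.linalg.inv(G)         # G E_I G^{-1}
# Off-diagonal element = 100*(eps2 - eps1) = -40: huge feedforward error
assert np.isclose(M[0, 1], -40.0)
```

This is the situation of conclusion 4 below: small RGA combined with a large minimized input condition number, where the bounds of this section are inconclusive but problems should still be expected.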
Conclusions on input uncertainty and feedback control

The following statements apply to the frequency range around crossover. By "small" we mean about 2 or smaller. By "large" we mean about 10 or larger.

1. Condition number gamma(G) or gamma(K) small: robust performance to both diagonal and full-block input uncertainty; see (6.72) and (6.73).
2. Minimized condition numbers gamma*_I(G) or gamma*_O(K) small: robust performance to diagonal input uncertainty; see (6.76) and (6.77). Note that a diagonal controller always has gamma*_O(K) = 1.
3. RGA(G) has large elements: an inverse-based controller is not robust to diagonal input uncertainty; see (6.78). Since diagonal input uncertainty is unavoidable in practice, the rule is never to use a decoupling controller for a plant with large RGA elements. Furthermore, a diagonal controller will most likely yield poor nominal performance for a plant with large RGA elements, so we conclude that plants with large RGA elements are fundamentally difficult to control.
4. gamma*_I(G) is large while at the same time the RGA has small elements: we cannot make any definite conclusion about the sensitivity to input uncertainty based on the bounds in this section. However, as found in Example 6.9, we may expect there to be problems.
6.10.5 Element-by-element uncertainty

Consider any complex matrix G and let lambda_ij denote the ij'th element in the RGA-matrix of G. The following result holds (Yu and Luyben, 1987; Hovd and Skogestad, 1992):

Theorem 6.4 The (complex) matrix G becomes singular if we make a relative change -1/lambda_ij in its ij'th element, that is, if a single element in G is perturbed from g_ij to g_p,ij = g_ij (1 - 1/lambda_ij).

The theorem is proved in Appendix A.4. Thus, the RGA-matrix is a direct measure of sensitivity to element-by-element uncertainty, and matrices with large RGA values become singular for small relative errors in the elements.

Example 6.10 The matrix G in (6.83) is non-singular. The 1,2-element of the RGA is lambda_12(G) = -34.1. Thus, the matrix G becomes singular if g12 = -86.4 is perturbed to

   g_p,12 = -86.4 (1 - 1/(-34.1)) = -88.9      (6.88)

The above result is an important algebraic property of the RGA, but it also has important implications for improved control:

1) Identification. Models of multivariable plants, G(s), are often obtained by identifying one element at a time, for example using step responses. From Theorem 6.4 it is clear that this simple identification procedure will most likely give meaningless results (e.g. the wrong sign of the steady-state RGA) if there are large RGA elements within the bandwidth where the model is intended to be used.

2) RHP-zeros. Consider a plant with transfer function matrix G(s). If the relative uncertainty in an element at a given frequency is larger than |1/lambda_ij(jw)|, then the plant may be singular at this frequency, implying that the uncertainty allows for a RHP-zero on the jw-axis. This is of course detrimental to performance, both in terms of feedforward and feedback control.

Remark. Theorem 6.4 seems to "prove" that plants with large RGA elements are fundamentally difficult to control. However, although the statement may be true (see the conclusions on page 243), we cannot draw this conclusion from Theorem 6.4. This is because the assumption of element-by-element uncertainty is often unrealistic from a physical point of view, since the elements are usually coupled in some way. For example, this is the case for the distillation column process, where we know that the elements are coupled due to an underlying physical constraint in such a way that the model (6.83) never becomes singular, even for large changes in the transfer function matrix elements.
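Theorem 6.4 is easy to verify numerically for the distillation gains in (6.83): perturbing g12 by the relative change -1/lambda_12 makes the matrix exactly singular, even though g12 itself only moves from -86.4 to about -88.9. A sketch:

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])       # gains from (6.83)
RGA = G * np.linalg.inv(G).T
lam12 = RGA[0, 1]                     # lambda_12 ~ -34.1

Gp = G.copy()
Gp[0, 1] = G[0, 1] * (1 - 1 / lam12)  # perturb the 1,2-element per Theorem 6.4

# A ~3% relative change in one element makes G singular:
assert np.isclose(Gp[0, 1], -88.9, atol=0.1)
assert abs(np.linalg.det(Gp)) < 1e-8 * abs(np.linalg.det(G))
```

The same one-liner check works for any element: the relative perturbation that destroys invertibility is always -1/lambda_ij.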
6.10.6 Steady-state condition for integral control

Feedback control reduces the sensitivity to model uncertainty at frequencies where the loop gains are large. With integral action in the controller we can achieve zero steady-state control error, even with large model errors, provided the sign of the plant, as expressed by det G(0), does not change. The statement applies for stable plants, or more generally for cases where the number of unstable poles in the plant does not change. The conditions are stated more exactly in the following theorem.

Theorem 6.5 Let the number of open-loop unstable poles (excluding poles at s = 0) of G(s)K(s) and G'(s)K(s) be P and P', respectively. Assume that the controller K is such that GK has integral action in all channels, and that the transfer functions GK and G'K are strictly proper. Then if

   det G'(0) / det G(0) < 0   for P - P' even, including zero, or
   det G'(0) / det G(0) > 0   for P - P' odd      (6.89)

at least one of the following instabilities will occur: a) the negative feedback closed-loop system with loop gain GK is unstable; b) the negative feedback closed-loop system with loop gain G'K is unstable.

Proof: For stability of both (I + GK)^-1 and (I + G'K)^-1 we have from Lemma A.5 in Appendix A.6.3 that det(I + E_O T(s)) needs to encircle the origin P - P' times as s traverses the Nyquist contour. Here T(0) = I because of the requirement for integral action in all channels of GK. Also, since GK and G'K are strictly proper, E_O T is strictly proper, and hence E_O(s)T(s) -> 0 as s -> infinity. Thus, the map of det(I + E_O T(s)) starts at det G'(0)/det G(0) (for s = 0) and ends at 1 (for s = infinity). A more careful analysis of the Nyquist plot of det(I + E_O T(s)) reveals that the number of encirclements of the origin will be even for det G'(0)/det G(0) > 0, and odd for det G'(0)/det G(0) < 0. Thus, if this parity (odd or even) does not match that of P - P', we will get instability, and the theorem follows.
Example 6.11 Suppose the true model of a plant is given by G(s) in (6.83), and that by careful identification we obtain the model

   G_1(s) = (1/(75s+1)) [  87   -88
                          109  -108 ]

At first glance, the identified model seems very good, but it is actually useless for control purposes, since det G_1(0) has the wrong sign: det G(0) < 0 whereas det G_1(0) > 0. (Also the RGA elements have the wrong sign; the 1,1-element in the RGA of G_1 is -47.9 instead of +35.1.) From Theorem 6.5 we then get that any controller with integral action designed based on the model G_1 will yield instability when applied to the plant G.
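The determinant test of Theorem 6.5 applied to Example 6.11 can be scripted directly (steady-state gains only; the common factor 1/(75s+1) does not affect the signs):

```python
import numpy as np

G = np.array([[87.8, -86.4],
              [108.2, -109.6]])       # true plant, from (6.83)
G1 = np.array([[87.0, -88.0],
               [109.0, -108.0]])      # identified model of Example 6.11

dG, dG1 = np.linalg.det(G), np.linalg.det(G1)
# det G(0) and det G1(0) have opposite signs -> (6.89) predicts instability
assert dG < 0 < dG1

# The RGA signs are also wrong: 1,1-element ~ -47.9 instead of ~ +35.1
lam11_true = (G * np.linalg.inv(G).T)[0, 0]
lam11_model = (G1 * np.linalg.inv(G1).T)[0, 0]
assert abs(lam11_true - 35.1) < 0.1
assert abs(lam11_model + 47.9) < 0.1
```

Despite element-wise errors of only about 1-2%, the determinant (a difference of two nearly equal large products) flips sign, which is exactly why element-by-element identification is dangerous for plants with large RGA elements.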
6.11 MIMO input-output controllability

We now summarize the main findings of this chapter in an analysis procedure for input-output controllability of a MIMO plant. The presence of directions in MIMO systems makes it more difficult to give a precise description of the procedure in terms of a set of rules, as was done in the SISO case.
6.11.1 Controllability analysis procedure
The following procedure assumes that we have made a decision on the plant inputs and plant outputs (manipulations and measurements), and we want to analyze the model G to find out what control performance can be expected. The procedure can also be used to assist in control structure design (the selection of inputs, outputs and control configuration), but it must then be repeated for each G corresponding to each candidate set of inputs and outputs. In some cases the number of possibilities is so large that such an approach becomes prohibitive, and some pre-screening is required, for example based on physical insight or by analyzing the "large" model, G_all, with all the candidate inputs and outputs included. This is briefly discussed in Section 10.4. A typical MIMO controllability analysis may proceed as follows:

1. Scale all variables (inputs u, outputs y, disturbances d, references r) to obtain a scaled model, y = G(s)u + G_d(s)d, r = R r~; see Section 1.4.
2. Obtain a minimal realization.
3. Check functional controllability. To be able to control the outputs independently, we first need at least as many inputs u as outputs y. Second, we need the rank of G(s) to be equal to the number of outputs, l, i.e. the minimum singular value of G(jw), sigma(G) = sigma_l(G), should be non-zero (except at possible jw-axis zeros). If the plant is not functionally controllable, then compute the output direction in which the plant has no gain, see (6.16), to obtain insight into the source of the problem.
4. Compute the poles. For RHP (unstable) poles, obtain their locations and associated directions; see (6.5). "Fast" RHP-poles far from the origin are bad.
5. Compute the zeros. For RHP-zeros, obtain their locations and associated directions. Look for zeros pinned into certain outputs. "Small" RHP-zeros (close to the origin) are bad if tight performance at low frequencies is desired.
6. Obtain the frequency response G(jw) and compute the RGA-matrix, Lambda = G x (G^+)^T. Plants with large RGA elements at crossover frequencies are difficult to control and should be avoided. For more details about the use of the RGA see Section 3.6, page 87.
7. From now on scaling is critical. Compute the singular values of G(jw) and plot them as a function of frequency. Also consider the associated input and output singular vectors.
ÄÁÅÁÌ ÌÁÇÆË ÁÆ ÅÁÅÇ Ë ËÌ ÅË
¾
8. The minimum singular value, ´ ´ µµ, is a particularly useful controllability measure. It should generally be as large as possible at frequencies where control ½ then we cannot at frequency make independent is needed. If ´ ´ µµ output changes of unit magnitude by using inputs of unit magnitude. . At frequencies where 9. For disturbances, consider the elements of the matrix one or more elements is larger than 1, we need control. We get more information by considering one disturbance at a time (the columns of ). We must require for each disturbance that Ë is less than ½ ¾ in the disturbance direction Ý , i.e. ËÝ ¾ ½ ; see (6.33). Thus, we must at least require ´Ë µ ½ ¾ ¾ and we may have to require ´Ë µ ½ ¾ ; see (6.34).
Remark. If feedforward control is already used, then one may instead analyze G_d^(s) = G K_d G_md + G_d, where K_d denotes the feedforward controller; see (5.78).
10. Disturbances and input saturation.
First step. Consider the input magnitudes needed for perfect control by computing the elements in the matrix G†G_d. If all elements are less than 1 at all frequencies, then input saturation is not expected to be a problem. If some elements of G†G_d are larger than 1, then perfect control (e = 0) cannot be achieved at this frequency, but "acceptable" control (||e||_2 < 1) may be possible, and this may be tested in the second step.
Second step. Check condition (6.47); that is, consider the elements of U^H G_d and make sure that the elements in the i'th row are smaller than sigma_i(G) + 1 at all frequencies.
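Both steps of this saturation check can be sketched as follows, for an invented square plant and a single disturbance column at one frequency (for a square plant the pseudo-inverse G† reduces to G^-1):

```python
import numpy as np

G  = np.array([[2.0, 1.0],
               [1.0, 3.0]])   # hypothetical scaled plant at one frequency
gd = np.array([0.8, 2.0])     # one scaled disturbance column

# First step: inputs for perfect rejection, u = G^-1 g_d, must stay within +/- 1.
u_perfect = np.linalg.solve(G, gd)
first_ok = bool(np.all(np.abs(u_perfect) <= 1))

# Second step (cf. (6.47)): i'th row of |U^H g_d| against sigma_i(G) + 1.
U, s, Vh = np.linalg.svd(G)
second_ok = bool(np.all(np.abs(U.conj().T @ gd) <= s + 1))
```

For this data the perfect-control inputs are (0.08, 0.64), well inside the unit bounds, so saturation is not expected to be a problem.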
11. Are the requirements compatible? Look at disturbances, RHP-poles and RHP-zeros and their associated locations and directions. For example, we must require for each disturbance and each RHP-zero that |y_z^H g_d(z)| <= 1; see (6.35). For combined RHP-zeros and RHP-poles see (6.10) and (6.11).

12. Uncertainty. If the condition number gamma(G) is small, then we expect no particular problems with uncertainty. If the RGA elements are large, we expect strong sensitivity to uncertainty. For a more detailed analysis see the conclusion on page 243.

13. If decentralized control (a diagonal controller) is of interest, see the summary on page 453.

14. The use of the condition number and the RGA are summarized separately in Section 3.6, page 87.
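Step 11 needs the output direction y_z of a RHP-zero; numerically it is the left singular vector of G(z) belonging to the (near-)zero singular value. A sketch, using a 2 x 2 plant of the form appearing in Example 6.12 below, which has a RHP-zero at s = 1:

```python
import numpy as np

def G(s):
    # 2x2 plant whose numerator determinant is s - 1, i.e. a RHP-zero at s = 1
    return np.array([[s + 1.0, s + 3.0],
                     [1.0,     2.0    ]]) / (s + 2.0)**2

z = 1.0
U, s, Vh = np.linalg.svd(G(z))
yz = U[:, -1]                     # output zero direction: G(z) loses rank along it
gamma0 = np.linalg.cond(G(0.0))   # condition number sigma_max/sigma_min at s = 0
```

Given yz, the compatibility test |y_z^H g_d(z)| <= 1 of step 11 can be evaluated directly for each disturbance column g_d.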
A controllability analysis may also be used to obtain initial performance weights for controller design. After a controller design, one may analyze the controller by plotting, for example, its elements, singular values, RGA and condition number as functions of frequency.
6.11.2 Plant design changes
If a plant is not input-output controllable, then it must somehow be modified. Some possible modifications are listed below.

Controlled outputs. Identify the output(s) which cannot be controlled satisfactorily. Can the specifications for these be relaxed?

Manipulated inputs. If input constraints are encountered, then consider replacing or moving actuators. For example, this could mean replacing a control valve with a larger one, or moving it closer to the controlled output. If there are RHP-zeros which cause control problems, then the zeros may often be eliminated by adding another input (possibly resulting in a non-square plant). This may not be possible if the zero is pinned to a particular output.

Extra measurements. If the effect of disturbances, or uncertainty, is large, and the dynamics of the plant are such that acceptable control cannot be achieved, then consider adding "fast local loops" based on extra measurements which are located close to the inputs and disturbances; see Section 10.8.3 and the example on page 207.

Disturbances. If the effect of disturbances is too large, then see whether the disturbance itself may be reduced. This may involve adding extra equipment to dampen the disturbances, such as a buffer tank in a chemical process or a spring in a mechanical system. In other cases this may involve improving or changing the control of another part of the system; e.g. we may have a disturbance which is actually the manipulated input for another part of the system.

Plant dynamics and time delays. In most cases, controllability is improved by making the plant dynamics faster and by reducing time delays. An exception to this is a strongly interactive plant, where an increased dynamic lag or time delay may be helpful if it somehow "delays" the effect of the interactions; see (6.17). Another, more obvious, exception is feedforward control of a measured disturbance, where a delay in the disturbance's effect on the outputs is an advantage.
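To make the "buffer tank" remedy concrete: a well-mixed tank with residence time tau acts roughly as a first-order filter h(s) = 1/(tau*s + 1) between an upstream composition or temperature disturbance and the downstream process, so fast disturbances are attenuated by about 1/(w*tau). A small numeric sketch (tau here is an arbitrary illustrative value, not from the text):

```python
import numpy as np

tau = 100.0                               # illustrative residence time, seconds
w = np.array([1e-3, 1e-2, 1e-1, 1.0])     # disturbance frequencies, rad/s
atten = 1.0 / np.abs(1j * w * tau + 1.0)  # |h(jw)| for h(s) = 1/(tau*s + 1)
# slow disturbances pass through almost unchanged; fast ones are damped
# roughly as 1/(w*tau), which is why a larger tank helps
```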
Example 6.12 Removing zeros by adding inputs. Consider a stable 2 x 2 plant

G1(s) = 1/(s + 2)^2 [ s + 1   s + 3 ]
                    [   1       2   ]

which has a RHP-zero at s = 1 that limits the achievable performance. The zero is not pinned to a particular output, so it will most likely disappear if we add a third manipulated input. Suppose the new plant is

G2(s) = 1/(s + 2)^2 [ s + 1   s + 3   s + 6 ]
                    [   1       2       3   ]

which indeed has no zeros. It is interesting to note that each of the three 2 x 2 subplants of G2(s) has a RHP-zero (located at s = 1, s = 1.5 and s = 3, respectively).
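The zeros quoted in the example can be checked by hand or numerically: a zero of the 2 x 2 subplant formed from columns i and j is a root of the determinant of the corresponding numerator matrix (the common denominator (s + 2)^2 drops out). A sketch:

```python
import numpy as np

# Column i of G2 contributes numerator n_i(s) in row 1 and constant c_i in row 2.
n = {1: np.poly1d([1, 1]), 2: np.poly1d([1, 3]), 3: np.poly1d([1, 6])}  # s+1, s+3, s+6
c = {1: 1.0, 2: 2.0, 3: 3.0}

def subplant_zero(i, j):
    det = n[i] * c[j] - n[j] * c[i]   # numerator determinant of columns (i, j)
    return det.roots[0]               # first-order polynomial: a single zero
```

The three subplant zeros come out at 1, 1.5 and 3, while the full 2 x 3 plant only has a zero where the whole matrix loses rank, which never happens here.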
Remark. Adding outputs. It has also been argued that it is possible to remove multivariable zeros by adding extra outputs. To some extent this is correct: for example, it is well known that there are no zeros if we use all the states as outputs; see Example 4.13. However, to control all the states independently we need as many inputs as there are states. Thus, by adding outputs to remove the zeros, we generally get a plant which is not functionally controllable, so it does not really help.
6.11.3 Additional exercises
The reader will be better prepared for some of these exercises following an initial reading of Chapter 10 on decentralized control. In all cases the variables are assumed to be scaled as outlined in Section 1.4. Exercise 6.7 Analyze input-output controllability for
´×µ
½ ½ ¼×¼½×·½ ·¼ ½ ×¾ · ½¼¼ ×·½
½ ½
Compute the zeros and poles, plot the RGA as a function of frequency, etc.
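The kind of frequency sweep this exercise asks for can be organized as below. The plant here is a stand-in (the exercise's own transfer matrix should be substituted); the point is the loop that evaluates the RGA on a frequency grid:

```python
import numpy as np

def G(s):
    # stand-in 2x2 plant; replace with the exercise's transfer matrix
    return np.array([[1/(s + 1),   2/(10*s + 1)],
                     [3/(s + 2),   1/(s + 1)]])

freqs = np.logspace(-2, 2, 50)
rga11 = []
for w in freqs:
    Gjw = G(1j * w)
    Lam = Gjw * np.linalg.inv(Gjw).T   # RGA at this frequency
    rga11.append(abs(Lam[0, 0]))       # |lambda_11(jw)|, to be plotted vs frequency
```

Plotting rga11 against freqs then shows directly whether the RGA elements become large near the expected crossover region.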
Exercise 6.8 Analyze input-output controllability for

G(s) = 1/((ts + 1)(ts + 1 + 2a)) [ ts + 1 + a       a      ]
                                 [      a       ts + 1 + a ]

where t = 100; consider two cases: (a) a = 20, and (b) a = 2.

Remark. This is a simple "two-mixing-tank" model of a heat exchanger where u = (T1_in, T2_in)^T, y = (T1_out, T2_out)^T and a is the number of heat transfer units.
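For intuition on the two cases, the steady-state RGA can be computed directly. The symmetric gain matrix below is the assumed form of the model at s = 0 (the entries should be checked against the exercise's own statement):

```python
import numpy as np

def G0(alpha):
    # assumed steady-state gain of the two-tank heat-exchanger model (s = 0)
    return np.array([[1 + alpha, alpha],
                     [alpha, 1 + alpha]]) / (1 + 2 * alpha)

def rga11(alpha):
    G = G0(alpha)
    return (G * np.linalg.inv(G).T)[0, 0]   # equals (1 + alpha)^2 / (1 + 2*alpha)
```

With this form, alpha = 20 gives a much larger 1,1-RGA element than alpha = 2, i.e. far stronger two-way interaction, suggesting case (b) is the easier plant to control.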
Exercise 6.9 Let
½¼ ¼ ¼ ½
Á
½¼ ½ ½ ½¼ ¼
¼ ¼ ¼ ½
(a) Perform a controllability analysis of G(s).
(b) Let x' = Ax + Bu + Ed and consider a unit disturbance d = (z1, z2)^T. Which direction (value of z1/z2) gives a disturbance that is most difficult to reject (consider both RHP-zeros and input saturation)?
(c) Discuss decentralized control of the plant. How would you pair the variables?
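Whatever the exercise's exact matrices, the analysis starts from the steady-state gains G(0) = -C A^-1 B and G_d(0) = -C A^-1 E. A sketch with purely illustrative data (stand-ins, not the exercise's matrices):

```python
import numpy as np

# illustrative state-space data; substitute the exercise's own A, B, C, E
A = np.array([[-10.0, 0.0],
              [0.0,  -1.0]])
B = np.eye(2)
C = np.array([[10.0, 1.1],
              [10.0, 0.0]])
E = np.array([[1.0],
              [1.0]])

def gain_at(s, Bmat):
    # C (sI - A)^-1 Bmat evaluated at a single point s
    return C @ np.linalg.solve(s * np.eye(2) - A, Bmat)

G0  = gain_at(0.0, B)   # plant steady-state gain
Gd0 = gain_at(0.0, E)   # disturbance steady-state gain
```

The same helper evaluated along s = jw gives the frequency responses needed for the singular-value and RGA plots of part (a).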
Exercise 6.10 Consider the following two plants. Do you expect any control problems? Could decentralized or inverse-based control be used? What pairing would you use for decentralized control?
´×µ
½ × ½ × ½ ¾ ´× · ½µ´× · ¾¼µ ¾ × ¾¼
½ ´×¾ · ¼ ½µ  ½ ¼ ½´× ½µ  ½¼´× · ¼ ½µ  × ´× · ¼ ½µ  ×

Exercise 6.11 Order the following three plants, G1(s), G2(s) and G3(s), in terms of their expected ease of controllability:

½¼¼ ½¼¼ ½¼¼    ½¼¼ × × ½¼¼ ½¼¼ ½¼¼ × ½¼¼ ½¼¼

Remember to also consider the sensitivity to input gain uncertainty.

Exercise 6.12 Analyze input-output controllability for

G(s) = ¾´ ×·½µ ¼¼¼× ´ ¼¼¼×·½µ´¾×·½µ ½¼¼×·½ ¿ ¿ ×·½ ×·½ ×·½ ½¼ ×·½

Exercise 6.13 Analyze input-output controllability for

G(s) = [ 100  102 ]      G_d(s) = ½¼ ¾ ×·½ ½ ×·½ ½
       [ 100  100 ]

Which disturbance is the worst? Consider also the input magnitudes.

Exercise 6.14 (a) Analyze input-output controllability for the following three plants, each of which has 2 inputs and 1 output: G(s) = (g1(s)  g2(s)), with

(i) g1(s) = g2(s) = (s - 2)/(s + 2)
(ii), (iii)  ½ ´×µ × ¾ ×·¾  ¾ ´×µ × ¾ ½ ×·¾ ½ × ¾¼ ×·¾¼

(b) Design controllers and perform closed-loop simulations of reference tracking to complement your analysis.

Exercise 6.15 Find the poles and zeros and analyze input-output controllability for

G(s) = ´× µ · ´½ ×µ ½ × ´½ ×µ · ½ ×

Here c is a constant, e.g. c = 1.

Remark. A similar model form is encountered for distillation columns controlled with the DB-configuration. In that case the physical reason for the model being singular at steady-state is that the sum of the two manipulated inputs is fixed at steady-state.

Exercise 6.16 Controllability of an FCC process. Consider the following 3 x 3 plant

(y1 y2 y3)^T = G(s) (u1 u2 u3)^T

G(s) = 1/((18.8s + 1)(75.8s + 1)) x
  [ 16.8(920s^2 + 32.4s + 1)   30.5(52.1s + 1)              4.30(7.28s + 1)  ]
  [ -16.7(75.5s + 1)           31.0(75.8s + 1)(1.58s + 1)   -1.41(74.6s + 1) ]
  [ 1.27(-36.7s + 1)           54.1(57.3s + 1)              5.40             ]
Consider three options for the controlled outputs:

y1 = (y1, y2)^T    y2 = (y2, y3)^T    y3 = (y1, y2, y3)^T

In all three cases the inputs are u1 and u2. Assume that the third input is a disturbance (d = u3).

(a) Based on the zeros of the three 2 x 2 plants, G1(s), G2(s) and G3(s), which choice of outputs do you prefer? Which seems to be the worst? It may be useful to know that the zero polynomials

¡ ½¼ × · ¿ ¾ ¡ ½¼ ¡ ½¼ ¡ ½¼ × ¡ ½¼ ×¿ · ¿ ×¿ ½ ¼ ×¿ ¡ ½¼ ¡ ½¼ ¡ ½¼ ×¾ · ½ ¾¾ ¡ ½¼ × · ½ ¼¿ ¡ ½¼¿ ×¾ ½ ¡ ½¼ × ¿ ¡ ½¼¾ ×¾ · ¿ ¡ ½¼¿ × · ½ ¼ ¡ ½¼¾

have the following roots:

¼ ¼ ¼ ¼ ¾ ¼ ¼ ½ ¼ ¼½¿¾ ¼ ¿¼¿ ¼ ¼ ¿¾ ¼ ¼½¿¾ ¼ ½ ¼ ¼ ¿¾ ¼ ¼¾¼¼ ¼ ¼½¿¾

(b) For the preferred choice of outputs in (a) do a more detailed analysis of the expected control performance (compute poles and zeros, sketch RGA11, comment on possible problems with input constraints (assume the inputs and outputs have been properly scaled), discuss the effect of the disturbance, etc.). What type of controller would you use? What pairing would you use for decentralized control?

(c) Discuss why the 3 x 3 plant may be difficult to control.

Remark. This is actually a model of a fluid catalytic cracking (FCC) reactor, where the inputs u represent the circulation, airflow and feed composition, and y = (T1, Tcy, Trg)^T represents three temperatures. G1(s) is called the Hicks control structure and G3(s) the conventional structure. Acceptable control of this 3 x 3 plant can be achieved with partial control of two outputs, with input 3 in manual (not used); that is, we have a 2 x 2 control problem. More details are found in Hovd and Skogestad (1993).

6.12 Conclusion

We have found that most of the insights into the performance limitations of SISO systems developed in Chapter 5 carry over to MIMO systems. For RHP-zeros, RHP-poles and disturbances, the issue of directions usually makes the limitations less severe for MIMO than for SISO systems. However, the situation is usually the opposite with model uncertainty, because for MIMO systems there is also uncertainty associated with plant directionality. This is an issue which is unique to MIMO systems. We summarized on page 246 the main steps involved in an analysis of input-output controllability of MIMO plants.
7. determine whether the performance speciﬁcations are met for all plants in the uncertainty set. may yield better performance. such as optimizing some average performance or using adaptive control. 2. we show how to represent uncertainty by real or complex perturbations and we analyze robust stability (RS) and robust performance (RP) for SISO systems using elementary methods. Check Robust performance (RP): if RS is satisﬁed.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ÇÊ ËÁËÇ Ë ËÌ ÅË In this chapter. In particular. This approach may not always achieve optimal performance. Chapter 8 is devoted to a more general treatment. The key idea in the À ½ robust control paradigm we use is to check whether the design speciﬁcations are satisﬁed even for the “worstcase” uncertainty. 3. Our approach is then as follows: 1.1 Introduction to robustness A control system is robust if it is insensitive to differences between the actual system and the model of the system which was used to design the controller. Check Robust stability (RS): determine whether the system remains stable for all plants in the uncertainty set. Nevertheless. Determine the uncertainty set: ﬁnd a mathematical representation of the model uncertainty (“clarify what we know about what we don’t know”). the linear uncertainty descriptions presented in this book are very useful in many practical situations. other approaches. These differences are referred to as model/plant mismatch or simply model uncertainty. if the worstcase plant rarely or never occurs. .
We adopt the following notation: Sometimes Ô is used rather than ¥ to denote the uncertainty set. we mean robustness with respect to model uncertainty. and there will be an inﬁnite number of possible plants Ô in the set ¥. and let the perturbed For example.g. physical constraints. Furthermore. in ÛÈ . when we refer to robustness in this book. this is the only way of handling model uncertainty within the socalled LQG approach to optimal control (see Chapter 9). Another strategy for dealing with model uncertainty is to approximate its effect on the feedback system by adding ﬁctitious disturbances or noise. which denotes performance. This corresponds to a continuous description of the model uncertainty. etc. consider a plant with a nominal model Ý · where represents additive model uncertainty. and let ¡ denote a normalized perturbation with À ½ norm less than ½. changes in control objectives. the opening and closing of loops. Signal uncertainty ( ¾ Ù· . ¥ – a set of possible perturbed plant models. We let denote a perturbation which is not normalized. Other considerations include sensor and actuator failures. However. For example. This is easily illustrated for linear systems where the addition of disturbances does not affect system stability. where Ý is different from what we ideally expect (namely 1. The subscript Ô stands for perturbed or possible or ¥ (pick your choice). sometimes denoted the “uncertainty set”.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ It should also be appreciated that model uncertainty is not the only concern when it comes to robustness. whereas model uncertainty combined with feedback may easily create instability. e. and assume that a ﬁxed (linear) controller is used. Uncertainty in the model ( 2. This should not be confused with the subscript capital È . To account for model uncertainty we will assume that the dynamic behaviour of a plant is described not by a single linear time invariant model but by a set ¥ of possible linear time invariant models. 
Ô ´×µ ¾ ¥ and ¼ ´×µ ¾ ¥ – particular perturbed plant models.1) ÔÙ · Ù) for two reasons: ½ Ù) ) . the numerical design algorithms themselves may not be robust. We will use a “normbounded uncertainty description” where the set ¥ is generated by allowing À½ normbounded stable perturbations to the nominal plant ´×µ. ´×µ ¾ ¥ – nominal plant model (with no uncertainty). then robustness problems may also be caused by the mathematical objective function not properly describing the real control problem. Then the output plant model be Ô of the perturbed plant is Ý Ù· ½· ¾ (7. whereas ¼ always refers to a speciﬁc uncertain plant. if a control design is based on an optimization. Remark. the answer is no. Also. Is this an acceptable strategy? In general.
. in reality Û Ù · ¾ . 3. Here the model is in error because of missing dynamics. We will next discuss some sources of model uncertainty and outline how to represent these mathematically. usually at high frequencies. Measurement devices have imperfections. In this case one may include uncertainty to allow for controller order reduction and implementation inaccuracies.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ¾ In LQG control we set Û ½ · ¾ where Û is assumed to be an independent variable such as white noise. Then in the design problem we may make Û large by selecting appropriate weighting functions. either through deliberate neglect or because of a lack of understanding of the physical process. Even when a very detailed model is available we may choose to work with a simpler (loworder) nominal model and represent the neglected dynamics as “uncertainty”. but some of the parameters are uncertain. The parameters in the linear model may vary due to nonlinearities or changes in the operating conditions. Any model of a real system will contain this source of uncertainty. This may even give rise to uncertainty on the manipulated inputs. Parametric uncertainty. the controller implemented may differ from the one obtained by solving the synthesis problem. since the actual input is often measured and adjusted in a cascade manner. 5. 2. the closedloop system ´Á · ´ · µÃ µ ½ may ¼. 6. but its presence will never cause instability. 4. Finally. The various sources of model uncertainty mentioned above may be grouped into two main classes: 1. and the uncertainty will always exceed 100% at some frequency. For example. At high frequencies even the structure and the model order is unknown. this is often the case with valves where a ﬂow controller is often used.2 Representing uncertainty Uncertainty in the plant model may have several origins: 1. Speciﬁcally. so Û depends on the signal Ù and this may cause instability in the presence of feedback when Ù depends on Ý . 7. 
Here the structure of the model (including the order) is known. There are always parameters in the linear model which are only known approximately or are simply in error. However. it may be important to explicitly take into account be unstable for some model uncertainty when studying feedback control. 2. Neglected and unmodelled dynamics uncertainty. In conclusion. In other cases limited valve resolution may cause input uncertainty.
1. we will deal mainly with this class of perturbations.2) which may be represented by the block diagram in Figure 7. Some examples of allowable ¡ Á ´×µ’s with À½ norm less than one. In this chapter. Here ¡ Á ´×µ is any stable transfer function which at each frequency is less than or equal to one in magnitude. but it appears that the frequency domain is particularly well suited for this class. That is. For completeness one may consider a third class of uncertainty (which is really a combination of the other two): 3. Ö « ´«Ñ Ü «Ñ Òµ ´«Ñ Ü · «Ñ Òµ is the relative uncertainty in the parameter.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ Parametric uncertainty will be quantiﬁed by assuming that each uncertain parameter is bounded within some region « Ñ Ò «Ñ Ü . In most cases we prefer to lump the uncertainty into a multiplicative uncertainty of the form ¥Á Ô ´×µ ´×µ´½ · ÛÁ ´×µ¡Á ´×µµ ¡Á ´ µ ßÞ ¡Á ½ ½ ½ (7. ¹ ÛÁ ¹ ¡Á Ô Õ ¹+ ¹ + Figure 7.1: Plant with multiplicative uncertainty ¹ The frequency domain is also well suited for describing lumped uncertainty. and ¡ is any real scalar satisfying ¡ ½. ¡Á ½ ½. Neglected and unmodelled dynamics uncertainty is somewhat less precise and thus more difﬁcult to quantify. are × Þ ×·Þ ½ ×·½ ½ ´ × · ½µ¿ ×¾ · ¼ ½× · ½ ¼½ . we have parameter sets of the form «Ô «´½ · Ö« ¡µ where « is the mean parameter value. This leads to complex perturbations which we normalize such that ¡ ½ ½. Here the uncertainty description represents one or several sources of parametric and/or unmodelled dynamics uncertainty combined into a single lumped perturbation of a chosen structure. Lumped uncertainty.
which is better suited for representing pole uncertainty.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ¾ Remark 1 The stability requirement on ¡Á ´×µ may be removed if one instead assumes that the number of RHP poles in ´×µ and Ô ´×µ remains unchanged. A problem with the multimodel approach is that it is not clear how to pick the set of models such that they represent the limiting (“worstcase”) plants. The multimodel approach can also be used when there is unmodelled dynamics uncertainty. normal) distribution of the parameters. Performance may be measured in terms of the worstcase or some average of these models’ responses. however. used in this book. However. This approach is well suited for parametric uncertainty as it eases the burden of the engineer in representing the uncertainty. In particular. there are many ways to deﬁne uncertainty. To summarize. Analogously. is the inverse multiplicative uncertainty ¥Á Ô ´×µ ´×µ´½ · Û Á ´×µ¡ Á ´×µµ ½ ¡ Á´ µ ½ (7. and of these the À ½ frequencydomain approach. there are several ways to handle parametric uncertainty. and it is very well suited when it comes to making simple yet realistic lumped uncertainty descriptions. Weinmann (1991) gives a good overview. However. Parametric uncertainty is sometimes called structured uncertainty as it models the uncertainty in a structured manner. This stochastic uncertainty is. from stochastic uncertainty to differential sensitivity (local robustness) and multimodels. Remark 2 The subscript Á denotes “input”. lumped dynamics uncertainty is sometimes called unstructured uncertainty. Another approach is the multimodel approach in which one considers a ﬁnite set of alternative models. may not be the best or the simplest.3) Even with a stable ¡ Á ´×µ this form allows for uncertainty in the location of an unstable pole. the frequencydomain is excellent for describing neglected or unknown dynamics. Remark. difﬁcult to analyze exactly. .g.and righthalf planes. 
since ´½ · ÛÁ ¡Á µ ´½ · ÛÇ ¡Ç µ Û Ø ¡Á ´×µ ¡Ç ´×µ Ò ÛÁ ´×µ ÛÇ ´×µ Another uncertainty form. one should be careful about using these terms because there can be several levels of structure. in order to simplify the stability proofs we will in this book assume that the perturbations are stable. One approach for parametric uncertainty is to assume a probabilistic (e. but it can handle most situations as we will see. and to consider the “average” response. and it also allows for poles crossing between the left. but for SISO systems it doesn’t matter whether we consider the perturbation at the input or output of the plant. Alternative approaches for describing uncertainty and the resulting performance may be considered. especially for MIMO systems. In addition.
however. We see that the uncertainty in (7. at least if we restrict the perturbations ¡ to be real.5) where Ö is the relative magnitude of the gain uncertainty and is the average gain. the model set (7.3 Parametric uncertainty In spite of what is sometimes claimed.6) Ö . Let the set of possible plants be Ô ´×µ where Ô is an uncertain gain and Ô ¼ ´×µ ÑÒ Ö Ô ÑÜ ¾ (7. Consider a set of plants. ½ ´× · Ôµ.¾ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 7.6) can also handle cases where the gain changes sign ( Ñ Ò Ñ Ü ¼) corresponding to Ö ½.6) where ¡ is a real scalar and ´×µ is the nominal plant. with an uncertain time constant. Note that it does not make physical sense ¼ corresponds to a pole at inﬁnity in the RHP.7) be ´½ · Ö ¡µ.8) which is in the inverse multiplicative form of (7.3). parametric uncertainty may also be represented in the À ½ framework. To represent cases in which a pole may cross between the half planes.4) may be rewritten as multiplicative uncertainty Ô ´×µ ßÞ ¡ ½ (7. Example 7. Here we provide just two simple examples. This is discussed in more detail in Section 7.2) with a constant multiplicative weight ÛÁ ´×µ ¼ and description in (7.1 Gain uncertainty. (7. at least is impossible to get any beneﬁt from control for a plant where we can have Ô with a linear controller. given by Ô ´×µ By writing Ô rewritten as Ô× · ½ ½ ¼ ´×µ ÑÒ Ô ÑÜ (7.7. Example 7.5) with ¡ ½.85).7) can Ô ´×µ ¼ ½ · × · Ö ×¡ ½ ¼ ½ · × ½ · Û Á ´×µ¡ ßÞ ´×µ Û Á ´×µ Ö × ½· × (7. By writing Ô ´½ · Ö ¡µ ¸ ÑÒ· Ñ Ü ¾ ¼ ´×µ´½ · Ö ¡µ ´×µ ¸ ´ Ñ Ü Ñ Òµ (7. similar to (7.2 Time constant uncertainty. The usefulness of this is rather limited. . one should instead consider parametric uncertainty in the pole itself. since it ¼. as described in (7. because a value Ô and the corresponding plant would be impossible to stabilize. The uncertainty is in the form of (7. for Ô to change sign.4) ¼ ´×µ is a transfer function with no uncertainty.
Real perturbations are required. This is of course conservative as it introduces possible plants that are not present in the original set. 3. It usually requires a large effort to model parametric uncertainty. Typically.. even though the underlying assumptions about the model and the parameters may be much less exact.. Ô How is it possible that we can reduce conservatism by lumping together several real perturbations? This will become clearer from the examples in the next section. 2. However. The exact model structure is required and so unmodelled dynamics cannot be dealt with. is the raison d’ etre for À½ feedback optiˆ mization. A parametric uncertainty model is somewhat deceiving in the sense that it provides a very detailed and accurate description. ½ a complex perturbation with ¡´ µ ½. we may simply replace the real perturbation. In fact. for if disturbances and plant models are clearly parameterized then À½ methods seem to offer no clear advantages over more conventional statespace and parametric methods. especially when it comes to controller synthesis. 4. However. ¡ ½ by For example. parametric uncertainty is often represented by complex perturbations. 7. perturbation is used.4 Representing uncertainty in the frequency domain In terms of quantifying unmodelled dynamics uncertainty the frequencydomain approach (À ½ ) does not seem to have much competition (when compared with other norms). a complex multiplicative ´Á · ÛÁ ¡µ. e. then the conservatism is often reduced by lumping these perturbations into a single complex perturbation. but simply stated the answer is that with several uncertain parameters the true uncertainty region is often quite “diskshaped”. if there are several real perturbations. parametric uncertainty is often avoided for the following reasons: 1. which are more difﬁcult to deal with mathematically and numerically. 
Owen and Zames (1992) make the following observation: The design of feedback controllers in the presence of nonparametric and unstructured uncertainty . Therefore.ÍÆ ÊÌ ÁÆÌ Æ ÊÇ ÍËÌÆ ËË ¾ As shown by the above examples one can represent parametric uncertainty in the À½ framework. .g. and may be more accurately represented by a single complex perturbation.
2.g.9) to an uncertainty region description as in Figure 7. Remark 2 Exact methods do exist (using complex region mapping. resulting in a (complex) additive uncertainty description as discussed next.2: Uncertainty regions of the Nyquist plot at given frequencies.9) Remark 1 There is no conservatism introduced in the ﬁrst step when we go from a parametric uncertainty description as in (7. consider in Figure 7. we derive in this and the next chapter necessary and sufﬁcient frequencybyfrequency conditions for robust stability based on uncertainty regions. e.¾¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 7. Thus. as already mentioned these . We therefore approximate such complex regions as discs (circles) as shown in Figure 7. from one corner of a region to another).9).g. ÁÑ ½ ¾ ¼ ¼½ Ê ¼¼ ¼ ¼¾ Figure 7. For example.2. However. a region of complex numbers Ô ´ µ is generated by varying the three parameters in the ranges given by (7. they allow for “jumps” in Ô ´ µ from one frequency to the next (e.4.2 seem to allow for more uncertainty. and are cumbersome to deal with in the context of control system design. the only conservatism is in the second step where we approximate the original uncertainty region by a larger discshaped region as shown in Figure 7.3.9) Step 1.2 the Nyquist plots (or regions) generated by the following set of plants Ô ´×µ ×·½ × ¾ ¿ (7.3. At each frequency. these uncertainty regions have complicated shapes and complex mathematical descriptions. This is somewhat surprising since the uncertainty regions in Figure 7. Step 2. Nevertheless. Data from (7. see Laughlin et al. In general. (1986)) which avoid the second conservative step.1 Uncertainty regions To illustrate how parametric uncertainty translates into frequency domain uncertainty. see Figure 7.
ÍÆ
ÊÌ ÁÆÌ
Æ ÊÇ ÍËÌÆ ËË
¾½
½
+
¾
¿
½
¾
Figure 7.3: Disc approximation (solid line) of the original uncertainty region (dashed line). Plot corresponds to ¼ ¾ in Figure 7.2 methods are rather complex, and although they may be used in analysis, at least for simple systems, they are not really suitable for controller synthesis and will not be pursued further in this book. Remark 3 From Figure 7.3 we see that the radius of the disc may be reduced by moving the center (selecting another nominal model). This is discussed in Section 7.4.4.
7.4.2 Representing uncertainty regions by complex perturbations
We will use discshaped regions to represent uncertainty regions as illustrated by the Nyquist plots in Figures 7.3 and 7.4. These discshaped regions may be generated by additive complex normbounded perturbations (additive uncertainty) around a nominal plant
¥
Ô ´×µ
´×µ · Û ´×µ¡ ´×µ
¡ ´ µ
½
(7.10)
where ¡ ´×µ is any stable transfer function which at each frequency is no larger than one in magnitude. How is this possible? If we consider all possible ¡ ’s, then at each frequency ¡ ´ µ “generates” a discshaped region with radius 1 centred at 0, so ´ µ · Û ´ µ¡ ´ µ generates at each frequency a discshaped region of radius Û ´ µ centred at ´ µ as shown in Figure 7.4. In most cases Û the case).
´×µ is a rational transfer function (although this need not always be
¾¾
ÅÍÄÌÁÎ ÊÁ
ÁÑ
Ä
Ã ÇÆÌÊÇÄ
+
Ê
Û
+
´
µ
´
µ
+
Figure 7.4: Discshaped uncertainty regions generated by complex additive uncertainty,
Ô
·Û ¡
One may also view Û ´×µ as a weight which is introduced in order to normalize the perturbation to be less than 1 in magnitude at each frequency. Thus only the magnitude of the weight matters, and in order to avoid unnecessary problems we always choose Û ´×µ to be stable and minimum phase (this applies to all weights used in this book).
ÁÑ
(centre)
Û
·
Ê
Figure 7.5: The set of possible plants includes the origin at frequencies where ´ µ , or equivalently ÛÁ ´ µ ½
Û ´ µ
The diskshaped regions may alternatively be represented by a multiplicative uncertainty description as in (7.2),
¥Á
Ô ´×µ
´×µ´½ · ÛÁ ´×µ¡Á ´×µµ
¡Á ´ µ
½
(7.11)
By comparing (7.10) and (7.11) we see that for SISO systems the additive and
ÍÆ
ÊÌ ÁÆÌ
Æ ÊÇ ÍËÌÆ ËË
¾¿
multiplicative uncertainty descriptions are equivalent if at each frequency
ÛÁ ´ µ
Û ´ µ
´ µ
(7.12)
However, multiplicative (relative) weights are often preferred because their ½ the numerical value is more informative. At frequencies where Û Á ´ µ uncertainty exceeds 100% and the Nyquist curve may pass through the origin. This follows since, as illustrated in Figure 7.5, the radius of the discs in the Nyquist plot, Û ´ µ ´ µÛÁ ´ µ , then exceeds the distance from ´ µ to the origin. At these frequencies we do not know the phase of the plant, and we allow for zeros crossing from the left to the righthalf plane. To see this, consider a frequency ¼ where ÛÁ ´ ¼ µ ½. Then there exists a ¡Á ½ such that Ô ´ ¼ µ ¼ in (7.11), that is, there exists a possible plant with zeros at × ¦ ¼ . For this plant at frequency ¼ the input has no effect on the output, so control has no effect. It then follows that tight control is not possible at frequencies where Û Á ´ µ ½ (this condition is derived more rigorously in (7.33)).
7.4.3 Obtaining the weight for complex uncertainty
Consider a set ¥ of possible plants resulting, for example, from parametric uncertainty as in (7.9). We now want to describe this set of plants by a single (lumped) complex perturbation, ¡ or ¡Á . This complex (diskshaped) uncertainty description may be generated as follows: 1. Select a nominal model ´×µ. 2. Additive uncertainty. At each frequency ﬁnd the smallest radius includes all the possible plants ¥:
Ð ´ µ
which
Ñ Ü Ô´ µ ´ µ (7.13) È ¾¥ If we want a rational transfer function weight, Û ´×µ (which may not be the case
Ð ´ µ
if we only want to do analysis), then it must be chosen to cover the set, so
Û ´ µ
Ð ´ µ
(7.14)
Usually Û ´×µ is of low order to simplify the controller design. Furthermore, an objective of frequencydomain uncertainty is usually to represent uncertainty in a simple straightforward manner. 3. Multiplicative (relative) uncertainty. This is often the preferred uncertainty form, and we have ¬ ¬ ¬ Ô´ µ ´ µ ¬ ¬ ¬ ÐÁ ´ µ Ñ Ü ¬ (7.15) ¬
Ô ¾¥
´ µ
and with a rational weight
ÛÁ ´ µ
ÐÁ ´ µ
(7.16)
¾
ÅÍÄÌÁÎ ÊÁ
Ä
Ã ÇÆÌÊÇÄ
Example 7.3 Multiplicative weight for parametric uncertainty. Consider again the set
of plants with parametric uncertainty given in (7.9)
¥
Ô ´×µ
×·½
×
¾
¿
(7.17)
We want to represent this set using multiplicative uncertainty with a rational weight ÛÁ ´×µ. To simplify subsequent controller design we select a delayfree nominal model
´×µ
×·½
¾ ¾ ×·½
(7.18)
To obtain ÐÁ ´ µ in (7.15) we consider three values (¾, ¾ and ¿) for each of the three ). (This is not, in general, guaranteed to yield the worst case as the worst parameters ( µ case may be at the interior of the intervals.) The corresponding relative errors ´ Ô are shown as functions of frequency for the ¿¿ ¾ resulting Ô ’s in Figure 7.6. The curve
Magnitude
10
0
10
−1
10
−2
10
−1
Frequency
10
0
10
1
Figure 7.6: Relative errors for 27 combinations of k, θ and τ with delay-free nominal plant (dotted lines). Solid line: first-order weight |w_I1| in (7.19). Dashed line: third-order weight |w_I| in (7.20)

The curve for l_I(ω) must at each frequency lie above all the dotted lines, and we find that l_I(ω) is 0.2 at low frequencies and 2.5 at high frequencies. To derive w_I(s) we first try a simple first-order weight that matches this limiting behaviour:

   w_I1(s) = (Ts + 0.2) / ((T/2.5)s + 1),   T = 4    (7.19)

As seen from the solid line in Figure 7.6, this weight gives a good fit of l_I(ω), except around ω = 1, where |w_I1(jω)| is slightly too small, and so this weight does not include all possible plants. To change this so that |w_I(jω)| ≥ l_I(ω) at all frequencies, we can multiply w_I1 by a correction factor to lift the gain slightly at ω ≈ 1. The following works well:
   w_I(s) = w_I1(s) · (s² + 1.6s + 1) / (s² + 1.4s + 1)    (7.20)
as is seen from the dashed line in Figure 7.6. The magnitude of the weight crosses 1 at about ω = 0.26. This seems reasonable since we have neglected the delay in our nominal model, which by itself yields 100% uncertainty at a frequency of about 1/θ_max = 0.33 (see Figure 7.8(a) below).
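The covering property of the two weights can be checked numerically; the sketch below (under the same plant set and weight formulas as above) confirms that the first-order weight (7.19) falls short of l_I near ω = 1, while the correction factor in (7.20) lifts the gain there by about 14%.

```python
import numpy as np

# Compare the first-order weight (7.19), T = 4, and the corrected third-order
# weight (7.20) against the relative errors of the 27 sampled plants of (7.17).

w = np.logspace(-2, 2, 400)
s = 1j * w
G = 2.5 / (2.5 * s + 1)

vals = [2.0, 2.5, 3.0]
l_I = np.zeros_like(w)
for k in vals:
    for th in vals:
        for ta in vals:
            Gp = k / (ta * s + 1) * np.exp(-th * s)
            l_I = np.maximum(l_I, np.abs((Gp - G) / G))

T = 4.0
wI1 = (T * s + 0.2) / ((T / 2.5) * s + 1)
wI = wI1 * (s**2 + 1.6 * s + 1) / (s**2 + 1.4 * s + 1)

print(np.max(l_I / np.abs(wI1)))  # exceeds 1: first-order weight is too small near w = 1
print(np.max(l_I / np.abs(wI)))   # close to 1: the corrected weight nearly covers l_I
```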
An uncertainty description for the same parametric uncertainty, but with a mean-value nominal model (with delay), is given in Exercise 7.8. Parametric gain and delay uncertainty (without time constant uncertainty) is discussed further on page 268.
Remark. Pole uncertainty. In the example we represented pole (time constant) uncertainty by a multiplicative perturbation, Δ_I. We may even do this for unstable plants, provided the poles do not shift between the half planes and one allows Δ_I(s) to be unstable. However, if the pole uncertainty is large, and in particular if poles can cross from the LHP to the RHP, then one should use an inverse ("feedback") uncertainty representation as in (7.3).
7.4.4 Choice of nominal model
With parametric uncertainty represented as complex perturbations there are three main options for the choice of nominal model:

1. A simplified model, e.g. a low-order, delay-free model.
2. A model of mean parameter values, G(s) = Ḡ(s).
3. The central plant obtained from a Nyquist plot (yielding the smallest discs).

Option 1 usually yields the largest uncertainty region, but the model is simple and this facilitates controller design in later stages. Option 2 is probably the most straightforward choice. Option 3 yields the smallest region, but in this case a significant effort may be required to obtain the nominal model, which is usually not a rational transfer function, and a rational approximation could be of very high order.

Example 7.4 Consider again the uncertainty set (7.17) used in Example 7.3. The nominal models selected for options 1 and 2 are

   G₁(s) = 2.5/(2.5s + 1),   G₂(s) = 2.5/(2.5s + 1) · e^{−2.5s}
For option 3 the nominal model is not rational. The Nyquist plots of the three resulting discs at frequency ω = 0.5 are shown in Figure 7.7.

Remark. A similar example was studied by Wang et al. (1994), who obtained the best controller designs with option 1, although the uncertainty region is clearly much larger in this case. The reason for this is that the "worst-case region" in the Nyquist plot in Figure 7.7 corresponds quite closely to those plants with the most negative phase (at coordinates about (−1, −1.5)). Thus, the additional plants included in the largest region (option 1) are
Figure 7.7: Nyquist plot of G_p(jω) at frequency ω = 0.5 (dashed region) showing complex disc approximations using three options for the nominal model: 1. Simplified nominal model with no time delay. 2. Mean parameter values. 3. Nominal model corresponding to the smallest radius
generally easier to control and do not really matter when evaluating the worstcase plant with respect to stability or performance. In conclusion, at least for SISO plants, we ﬁnd that for plants with an uncertain time delay, it is simplest and sometimes best (!) to use a delayfree nominal model, and to represent the nominal delay as additional uncertainty.
The choice of nominal model is only an issue since we are lumping several sources of parametric uncertainty into a single complex perturbation. Of course, if we use a parametric uncertainty description, based on multiple real perturbations, then we should always use the mean parameter values in the nominal model.
7.4.5 Neglected dynamics represented as uncertainty
We saw above that one advantage of frequency-domain uncertainty descriptions is that one can choose to work with a simple nominal model, and represent neglected dynamics as uncertainty. We will now consider this in a little more detail. Consider a set of plants

   G_p(s) = G₀(s) f(s)

where G₀(s) is fixed (and certain). We want to neglect the term f(s) (which may be fixed or may be an uncertain set Π_f), and represent G_p by multiplicative uncertainty with a nominal model G = G₀. From (7.15) we get that the magnitude of the relative uncertainty caused by neglecting the dynamics in f(s) is

   l_I(ω) = max_{f(s) ∈ Π_f} |f(jω) − 1|    (7.21)
Three examples illustrate the procedure.

1. Neglected delay. Let f(s) = e^{−θ_p s}, where 0 ≤ θ_p ≤ θ_max. We want to represent G_p(s) = G₀(s)e^{−θ_p s} by a delay-free plant G₀(s) and multiplicative uncertainty. Let us first consider the maximum delay, for which the relative error |1 − e^{−jωθ_max}| is shown as a function of frequency in Figure 7.8(a). The relative uncertainty crosses 1 in magnitude at about frequency 1/θ_max, reaches 2 at frequency π/θ_max (since at this frequency e^{−jωθ_max} = −1), and oscillates between 0 and 2 at higher frequencies (which corresponds to the Nyquist plot of e^{−jωθ_max} going around and around the unit circle). Similar curves are generated for smaller values of the delay, and they also oscillate between 0 and 2, but at higher frequencies. It then follows that if we consider all θ ∈ [0, θ_max], then the relative error bound is 2 at frequencies above π/θ_max, and we have

   l_I(ω) = |1 − e^{−jωθ_max}|  for ω < π/θ_max;   l_I(ω) = 2  for ω ≥ π/θ_max    (7.22)

Rational approximations of (7.22) are given in (7.26) and (7.27) with r_k = 0.
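The behaviour described for the neglected-delay bound (7.22) is easy to verify numerically; a minimal sketch:

```python
import numpy as np

# The relative error |1 - exp(-j*w*theta_max)| from (7.22): it is close to 1
# near w = 1/theta_max, equals exactly 2 at w = pi/theta_max, and never
# exceeds 2 at any frequency.

theta_max = 1.0

def rel_err(w):
    return abs(1 - np.exp(-1j * w * theta_max))

# |1 - e^{-j}| = 2*sin(0.5) ~ 0.959, so the curve crosses 1 just above 1/theta_max
print(round(rel_err(1.0 / theta_max), 3))    # 0.959
print(round(rel_err(np.pi / theta_max), 1))  # 2.0
```

The identity |1 − e^{−jωθ}| = 2|sin(ωθ/2)| explains both the crossing near ω = 1/θ_max and the oscillation between 0 and 2 at higher frequencies.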
Figure 7.8: Multiplicative uncertainty resulting from neglected dynamics: (a) time delay; (b) first-order lag
2. Neglected lag. Let f(s) = 1/(τ_p s + 1), where 0 ≤ τ_p ≤ τ_max. In this case the resulting l_I(ω), which is shown in Figure 7.8(b), can be represented by a rational transfer function with |w_I(jω)| = l_I(ω), where

   w_I(s) = 1 − 1/(τ_max s + 1) = τ_max s / (τ_max s + 1)

This weight approaches 1 at high frequency, and the low-frequency asymptote crosses 1 at frequency 1/τ_max.
3. Multiplicative weight for gain and delay uncertainty. Consider the following set of plants

   G_p(s) = k_p e^{−θ_p s} G₀(s),   k_p ∈ [k_min, k_max],   θ_p ∈ [θ_min, θ_max]    (7.23)

which we want to represent by multiplicative uncertainty and a delay-free nominal model, G(s) = k̄G₀(s), where k̄ = (k_min + k_max)/2 and r_k = (k_max − k_min)/(k_max + k_min) is the relative uncertainty in the gain. Lundström (1994) derived the following exact expression for the relative uncertainty weight:

   l_I(ω) = sqrt( r_k² + 2(1 + r_k)(1 − cos(θ_max ω)) )  for ω < π/θ_max;
   l_I(ω) = 2 + r_k  for ω ≥ π/θ_max    (7.24)

This bound is irrational. To derive a rational weight we first approximate the delay by a first-order Padé approximation to get

   (1 + r_k)e^{−θ_max s} − 1 ≈ (1 + r_k)·(1 − (θ_max/2)s)/(1 + (θ_max/2)s) − 1
                             = (r_k − (1 + r_k/2)θ_max s) / ((θ_max/2)s + 1)    (7.25)

Since only the magnitude matters, this may be represented by the following first-order weight:

   w_I(s) = ((1 + r_k/2)θ_max s + r_k) / ((θ_max/2)s + 1)    (7.26)

However, as seen from Figure 7.9 by comparing the dotted line (representing |w_I|)
Figure 7.9: Multiplicative weight for gain and delay uncertainty in (7.23): exact bound l_I (solid line) and first-order weight |w_I| in (7.26) (dotted line)
with the solid line (representing l_I), this weight w_I is somewhat optimistic (too small), especially around frequencies 1/θ_max. To make sure that |w_I(jω)| ≥ l_I(ω) at all frequencies, we apply a correction factor and get a third-order weight:
   w_I(s) = ((1 + r_k/2)θ_max s + r_k) / ((θ_max/2)s + 1)
            · ((θ_max/2.363)² s² + 2·0.838·(θ_max/2.363)s + 1) / ((θ_max/2.363)² s² + 2·0.685·(θ_max/2.363)s + 1)    (7.27)
The improved weight w_I(s) in (7.27) is not shown in Figure 7.9, but it would be almost indistinguishable from the exact bound given by the solid curve. In practical applications, it is suggested that one starts with a simple weight as in (7.26), and if it later appears important to eke out a little extra performance, then one can try a higher-order weight as in (7.27).

Example 7.5 Consider the set G_p(s) = k_p e^{−θ_p s} G₀(s) with 2 ≤ k_p ≤ 3 and 2 ≤ θ_p ≤ 3. We approximate this with a nominal delay-free plant G = 2.5G₀ (so k̄ = 2.5, r_k = 0.2 and θ_max = 3) and relative uncertainty. The simple first-order weight in (7.26), w_I(s) = (3.3s + 0.2)/(1.5s + 1), is somewhat optimistic. To cover all the uncertainty we may use (7.27):

   w_I(s) = (3.3s + 0.2)/(1.5s + 1) · (1.612s² + 2.128s + 1)/(1.612s² + 1.739s + 1)
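A quick numerical comparison (a sketch, using the Example's values r_k = 0.2 and θ_max = 3) shows that the first-order weight matches the exact bound (7.24) at both frequency extremes but dips below it at intermediate frequencies, which is why the correction factor is needed:

```python
import numpy as np

# Exact relative-uncertainty bound (7.24) vs the first-order weight (7.26),
# w_I(s) = (3.3 s + 0.2)/(1.5 s + 1), for r_k = 0.2, theta_max = 3.

r_k, th = 0.2, 3.0

def l_I(w):
    if w < np.pi / th:
        return np.sqrt(r_k**2 + 2 * (1 + r_k) * (1 - np.cos(th * w)))
    return 2 + r_k

def wI_mag(w):
    s = 1j * w
    return abs((3.3 * s + 0.2) / (1.5 * s + 1))

print(round(l_I(1e-6), 3), round(wI_mag(1e-6), 3))  # both ~0.2 at low frequency
print(round(l_I(10.0), 1), round(wI_mag(1e6), 1))   # both 2.2 at high frequency
```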
7.4.6 Unmodelled dynamics uncertainty
Although we have spent a considerable amount of time on modelling uncertainty and deriving weights, we have not yet addressed the most important reason for using frequency-domain (H∞) uncertainty descriptions and complex perturbations, namely the incorporation of unmodelled dynamics. Of course, unmodelled dynamics is close to neglected dynamics, which we have just discussed, but it is not quite the same. In unmodelled dynamics we also include unknown dynamics of unknown or even infinite order. To represent unmodelled dynamics we usually use a simple multiplicative weight of the form

   w_I(s) = (τs + r₀) / ((τ/r∞)s + 1)    (7.28)

where r₀ is the relative uncertainty at steady state, 1/τ is (approximately) the frequency at which the relative uncertainty reaches 100%, and r∞ is the magnitude of the weight at high frequency (typically, r∞ ≥ 2). Based on the above examples and discussions it is hoped that the reader has now accumulated the necessary insight to select reasonable values for the parameters r₀, r∞ and τ for a specific application. The following exercise provides further support and gives a good review of the main ideas.

Exercise 7.1 Suppose that the nominal model of a plant is

   G(s) = 1/(s + 1)

and the uncertainty in the model is parameterized by multiplicative uncertainty with the weight

   w_I(s) = (0.125s + 0.25) / ((0.125/4)s + 1)

Call the resulting set Π. Now find the extreme parameter values in each of the plants (a)-(g) below so that each plant belongs to the set Π. All parameters are assumed to be positive. One
approach is to plot l_I(ω) = |(G_p − G)/G| (e.g. for θ = 0.1, 0.5, 1, etc.) and adjust the parameter in question until l_I(ω) just touches |w_I(jω)|.
(a) Neglected delay: Find the largest θ for G_p = G e^{−θs} (Answer: 0.13).

(b) Neglected lag: Find the largest τ for G_p = G · 1/(τs + 1) (Answer: 0.15).

(c) Uncertain pole: Find the range of a for G_p = 1/(s + a) (Answer: 0.8 to 1.33).

(d) Uncertain pole (time constant form): Find the range of T for G_p = 1/(Ts + 1) (Answer: 0.7 to 1.5).

(e) Neglected resonance: Find the range of ζ for G_p = G · 1/((s/70)² + 2ζ(s/70) + 1) (Answer: 0.02 to 0.8).

(f) Neglected dynamics: Find the largest integer m for G_p = G · (1/(0.01s + 1))^m (Answer: 13).

(g) Neglected RHP-zero: Find the largest T_z for G_p = G · (−T_z s + 1)/(T_z s + 1) (Answer: 0.07).

These results imply that a control system which meets given stability and performance requirements for all plants in Π is also guaranteed to satisfy the same requirements for the above plants G_p.

(h) Repeat the above with a new nominal plant G(s) = 1/(s − 1) (and with everything else the same, except G_p = 1/(Ts − 1) in (d)). (Answer: same as above.)
Exercise 7.2 Repeat Exercise 7.1 with a new weight,

   w_I(s) = (s + 0.3) / ((1/3)s + 1)
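The numerical approach suggested above (adjust a parameter until l_I just touches |w_I|) can be automated by bisection. The sketch below does this for case (a), neglected delay, where l_I(ω) = |e^{−jωθ} − 1| does not depend on G; the weight used is an illustrative instance of (7.28) (with r∞ > 2, which is needed for a delay to be coverable at all), not necessarily the one in the exercise.

```python
import numpy as np

# Bisection on the delay theta so that l_I(w) = |e^{-j w theta} - 1| just
# touches |w_I(jw)|. The weight parameters below are illustrative assumptions.

def wI_mag(w, r0=0.25, rinf=4.0, tau=0.125):
    s = 1j * w
    return np.abs((tau * s + r0) / ((tau / rinf) * s + 1))

def worst_ratio(theta, w=np.logspace(-2, 3, 2000)):
    l_I = np.abs(np.exp(-1j * w * theta) - 1)
    return np.max(l_I / wI_mag(w))

def largest_theta(lo=1e-4, hi=10.0, tol=1e-6):
    """Largest theta with max_w l_I/|w_I| <= 1 (the ratio grows with theta)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_ratio(mid) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo

theta_star = largest_theta()
print(theta_star)
```

The same scheme works for the other cases by replacing l_I with the appropriate relative error, provided the worst-case ratio is monotone in the parameter being adjusted.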
We end this section with a couple of remarks on uncertainty modelling:

1. We can usually get away with just one source of complex uncertainty for SISO systems.
2. With an H∞ uncertainty description, it is possible to represent time delays (corresponding to an infinite-dimensional plant) and unmodelled dynamics of infinite order, using a nominal model and associated weights of finite order.
7.5 SISO Robust stability
We have so far discussed how to represent the uncertainty mathematically. In this section, we derive conditions which will ensure that the system remains stable for all perturbations in the uncertainty set, and then in the subsequent section we study robust performance.
Figure 7.10: Feedback system with multiplicative uncertainty
7.5.1 RS with multiplicative uncertainty
We want to determine the stability of the uncertain feedback system in Figure 7.10 when there is multiplicative (relative) uncertainty of magnitude |w_I(jω)|. With uncertainty the loop transfer function becomes

   L_p = G_p K = GK(1 + w_I Δ_I) = L + w_I L Δ_I,   |Δ_I(jω)| ≤ 1, ∀ω    (7.29)
As always, we assume (by design) stability of the nominal closed-loop system (i.e. with Δ_I = 0). For simplicity, we also assume that the loop transfer function L_p is stable. We now use the Nyquist stability condition to test for robust stability of the closed-loop system. We have

   RS  ⟺  system stable ∀L_p  ⟺  L_p should not encircle the point −1, ∀L_p    (7.30)
Figure 7.11: Nyquist plot of L_p for robust stability
1. Graphical derivation of RS-condition. Consider the Nyquist plot of L_p as shown in Figure 7.11. Convince yourself that |1 + L| is the distance from the point −1 to the centre of the disc representing L_p, and that |w_I L| is the radius of the disc. Encirclements are avoided if none of the discs cover −1, and we get from Figure 7.11:

   RS  ⟺  |w_I L| < |1 + L|, ∀ω    (7.31)
       ⟺  |w_I L| / |1 + L| < 1, ∀ω    (7.32)
       ⟺  |w_I T| < 1, ∀ω    (7.33)

Thus, the requirement of robust stability for the case with multiplicative uncertainty gives an upper bound on the complementary sensitivity:

   RS  ⟺  |T| < 1/|w_I|, ∀ω    (7.34)

We see that we have to detune the system (i.e. make T small) at frequencies where the relative uncertainty |w_I| exceeds 1 in magnitude. Condition (7.34) is exact (necessary and sufficient) provided there exist uncertain plants such that at each frequency all perturbations satisfying |Δ(jω)| ≤ 1 are possible. If this is not the case, then (7.34) is only sufficient for RS; this is the case, for example, if the perturbation is restricted to be real, as for the parametric gain uncertainty in (7.6). Note that for SISO systems w_I = w_O and T_I = T = GK(1 + GK)^{-1}, so the condition could equivalently be written in terms of |w_I T_I| or |w_O T|.

Example 7.6 Consider the following nominal plant and PI-controller:

   G(s) = 3(−2s + 1) / ((5s + 1)(10s + 1)),   K(s) = K_c (12.7s + 1)/(12.7s)

Recall that this is the inverse response process from Chapter 2. Initially, we select K_c = K_c1 = 1.13 as suggested by the Ziegler-Nichols tuning rule. It results in a nominally stable closed-loop system. Suppose that one "extreme" uncertain plant is G'(s) = 4(−3s + 1)/(4s + 1)². For this plant the relative error |(G' − G)/G| is 0.33 at low frequencies; it is 1 at about 0.1 rad/s, and it is 5.25 at high frequencies. Based on this and (7.28) we choose the following uncertainty weight

   w_I(s) = (10s + 0.33) / ((10/5.25)s + 1)

which closely matches this relative error. We now want to evaluate whether the system remains stable for all possible plants as given by G_p = G(1 + w_I Δ_I), where |Δ_I(jω)| ≤ 1. This is not the case, as seen from Figure 7.12, where the magnitude of the nominal complementary sensitivity function T₁ = GK₁(1 + GK₁)^{-1} exceeds the bound 1/|w_I| from about 0.1 to 1 rad/s, so (7.34) is not satisfied. To achieve robust stability we need to reduce the controller gain. By trial and error we find that reducing the gain to K_c2 = 0.31 just achieves RS, as is seen from the curve for T₂ = GK₂(1 + GK₂)^{-1} in Figure 7.12.

Figure 7.12: Checking robust stability with multiplicative uncertainty: 1/|w_I| together with |T₁| (not RS) and |T₂| (RS)

Remark. With K_c2 = 0.31 there exists an allowed Δ_I and a corresponding G_p = G(1 + w_I Δ_I) that yields T_2p = G_p K₂/(1 + G_p K₂) on the limit of instability. However, for the specific "extreme" plant G'(s) we find, as expected, that the closed-loop system is unstable with K_c1 = 1.13, but with K_c2 = 0.31 the system is stable with reasonable margins (and not at the limit of instability as one might have expected); we can in fact increase the gain by almost a factor of two before we get instability with this particular plant. This illustrates that, for a restricted perturbation, condition (7.34) is only a sufficient condition for stability, and a violation of this bound does not imply instability for a specific plant G'.

2. Algebraic derivation of RS-condition. Since L_p is assumed stable, and the nominal closed-loop is stable, the nominal loop transfer function L(jω) does not encircle −1. Therefore, since the set of plants is norm-bounded, it follows that if some L_p1 in the uncertainty set encircles −1, then there must be another L_p2 in the uncertainty set which goes exactly through −1 at some frequency. Thus,

   RS  ⟺  |1 + L_p| ≠ 0, ∀L_p, ∀ω    (7.35)
       ⟺  |1 + L_p| > 0, ∀L_p, ∀ω    (7.36)
       ⟺  |1 + L + w_I L Δ_I| > 0, ∀ω, ∀|Δ_I| ≤ 1    (7.37)

At each frequency the last condition is most easily violated (the worst case) when the complex number Δ_I(jω) is selected with |Δ_I(jω)| = 1 and with phase such that the terms (1 + L) and w_I L Δ_I have opposite signs (point in opposite directions). Thus

   RS  ⟺  |1 + L| − |w_I L| > 0, ∀ω  ⟺  |w_I T| < 1, ∀ω    (7.38)

and we have rederived (7.33).

Remark. Unstable plants. The stability condition (7.33) also applies to the case when L and L_p are unstable, as long as the number of RHP-poles remains the same for each plant in the uncertainty set, so we must make sure that the perturbation does not change the number of encirclements.

3. MΔ-structure derivation of RS-condition. This derivation is a preview of a general analysis presented in the next chapter. The reader should not be too concerned if he or she does not fully understand the details at this point. Notice that the only source of instability in Figure 7.10 is the new feedback loop created by Δ_I. If the nominal (Δ_I = 0) feedback system is stable, then the stability of the system in Figure 7.10 is equivalent to stability of the system in Figure 7.13, where Δ = Δ_I and

   M = w_I K(1 + GK)^{-1} G = w_I T    (7.39)

is the transfer function from the output of Δ_I to the input of Δ_I.

Figure 7.13: MΔ-structure

We now apply the Nyquist stability condition to the system in Figure 7.13. The argument goes as follows: the nominal closed-loop system is assumed stable, so the Nyquist stability condition determines RS if and only if the "loop transfer function" MΔ does not encircle −1 for all Δ. Thus,

   RS  ⟺  |1 + MΔ| ≠ 0, ∀ω, ∀|Δ| ≤ 1    (7.40)

The last condition is most easily violated (the worst case) when Δ is selected at each frequency such that |Δ| = 1 and the terms MΔ and 1 have opposite signs (point in opposite directions). We therefore get

   RS  ⟺  1 − |M(jω)| > 0, ∀ω    (7.41)
       ⟺  |M(jω)| < 1, ∀ω    (7.42)

which is the same as (7.33) and (7.38) since M = w_I T. The MΔ-structure provides a very general way of handling robust stability, and we will discuss it at length in the next chapter, where we will see that (7.42) is essentially a clever application of the small gain theorem in which we avoid the usual conservatism, since any phase in MΔ is allowed. We assume that Δ and M = w_I T are stable; the former implies that G and G_p must have the same unstable poles,
the latter is equivalent to assuming nominal stability of the closedloop system.
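The multiplicative-uncertainty RS test |w_I T| < 1 is straightforward to evaluate numerically. The sketch below assumes the inverse-response example data discussed in this section (the plant G(s) = 3(−2s+1)/((5s+1)(10s+1)), PI controller K(s) = K_c(12.7s+1)/(12.7s) and weight w_I(s) = (10s+0.33)/((10/5.25)s+1)); it is an illustration, not the book's own computation.

```python
import numpy as np

# Robust stability check for multiplicative uncertainty: RS iff
# max_w |w_I T| < 1, with T = L/(1+L) and L = G*K.

w = np.logspace(-3, 2, 2000)
s = 1j * w
G = 3 * (-2 * s + 1) / ((5 * s + 1) * (10 * s + 1))
wI = (10 * s + 0.33) / ((10 / 5.25) * s + 1)

def peak_wIT(Kc):
    K = Kc * (12.7 * s + 1) / (12.7 * s)
    L = G * K
    T = L / (1 + L)
    return np.max(np.abs(wI * T))

print(peak_wIT(1.13) > 1)        # True: the Ziegler-Nichols gain violates RS
print(round(peak_wIT(0.31), 1))  # about 1.0: the detuned gain just achieves RS
```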
7.5.2 Comparison with gain margin

By what factor, k_max, can we multiply the loop gain, L₀ = G₀K, before we get instability? In other words, given

   L_p = k_p L₀(s),   k_p ∈ [1, k_max]    (7.43)

find the largest value of k_max such that the closed-loop system is stable.

1. Real perturbation. The exact value of k_max, which is obtained with Δ real, is the gain margin (GM) from classical control. We have (recall (2.34))

   k_max,1 = GM = 1/|L₀(jω₁₈₀)|    (7.44)

where ω₁₈₀ is the frequency where ∠L₀(jω₁₈₀) = −180°.

2. Complex perturbation. Alternatively, represent the gain uncertainty as complex multiplicative uncertainty,

   L_p = k_p L₀ = k̄L₀(1 + r_k Δ),   |Δ| ≤ 1    (7.45)

where

   k̄ = (k_max + 1)/2,   r_k = (k_max − 1)/(k_max + 1)    (7.46)

Note that the nominal L = k̄L₀ is not fixed, but depends on k_max. The robust stability condition |w_I T| < 1 (which is derived for complex Δ) with w_I = r_k then gives

   r_k |k̄L₀ / (1 + k̄L₀)| < 1, ∀ω    (7.47)

Here both r_k and k̄ depend on k_max, so (7.47) must be solved iteratively to find the largest permissible gain factor, k_max,2. Condition (7.47) would be exact if Δ were complex, but since it is not, we expect k_max,2 to be somewhat smaller than GM. This illustrates the conservatism involved in replacing a real perturbation by a complex one.

Example 7.7 To check this numerically, consider a specific system (a first-order plant with delay). From (7.44) one obtains k_max,1 = GM at the frequency ω₁₈₀, while iterative solution of (7.47) yields a value of k_max,2 which, as expected, is less than GM.

Exercise 7.3 Represent the gain uncertainty in (7.43) as multiplicative complex uncertainty with nominal model G = G₀ (rather than k̄G₀ used above).

(a) Find w_I and use the RS-condition |w_I T| < 1 to find k_max,3. Note that no iteration is needed in this case, since the nominal model, and thus T = T₀, is independent of k_max.

(b) One expects k_max,3 to be even more conservative than k_max,2, since this uncertainty description is not even tight when Δ is real. Show that this is indeed the case using the numerical values from Example 7.7.
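The real-versus-complex comparison can be reproduced numerically; the loop below is an illustrative choice (not the book's numbers), with the gain margin from (7.44) and the iterative solution of the complex-uncertainty test (7.47) by bisection:

```python
import numpy as np

# Compare k_max from a real gain perturbation (GM) with the value obtained
# from the complex multiplicative test (7.47), for an illustrative loop
# L0(s) = exp(-0.5 s)/(s + 1).

w = np.logspace(-2, 2, 20000)
s = 1j * w
L0 = np.exp(-0.5 * s) / (s + 1)

# (7.44): GM = 1/|L0(j w180)| at the first -180 degree crossing
phase = np.unwrap(np.angle(L0))
i180 = np.argmax(phase <= -np.pi)
GM = 1 / abs(L0[i180])

def rs_ok(kmax):
    kbar = (kmax + 1) / 2
    rk = (kmax - 1) / (kmax + 1)
    T = kbar * L0 / (1 + kbar * L0)
    return np.max(rk * np.abs(T)) < 1   # condition (7.47)

lo, hi = 1.0, 20.0
for _ in range(60):                     # bisection on k_max
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rs_ok(mid) else (lo, mid)
kmax2 = lo

print(kmax2 < GM)  # True: the complex perturbation is conservative
```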
7.5.3 RS with inverse multiplicative uncertainty

Figure 7.14: Feedback system with inverse multiplicative uncertainty

We will derive a corresponding RS-condition for a feedback system with inverse multiplicative uncertainty (see Figure 7.14), in which

   G_p = G(1 + w_iI(s)Δ_iI)^{-1}    (7.48)

Algebraic derivation. Assume for simplicity that the loop transfer function L_p is stable, and assume stability of the nominal closed-loop system. Robust stability is then guaranteed if encirclements by L_p(jω) of the point −1 are avoided, and since L_p is in a norm-bounded set we have

   RS  ⟺  |1 + L_p| > 0, ∀L_p, ∀ω    (7.49)
       ⟺  |1 + L(1 + w_iI Δ_iI)^{-1}| > 0, ∀|Δ_iI| ≤ 1, ∀ω    (7.50)
       ⟺  |1 + w_iI Δ_iI + L| > 0, ∀|Δ_iI| ≤ 1, ∀ω    (7.51)

The last condition is most easily violated (the worst case) when Δ_iI is selected at each frequency such that |Δ_iI| = 1 and the terms 1 + L and w_iI Δ_iI have opposite signs (point in opposite directions). Thus

   RS  ⟺  |1 + L| − |w_iI| > 0, ∀ω    (7.52)
       ⟺  |w_iI S| < 1, ∀ω    (7.53)

Remark. In this derivation we have assumed that L_p is stable, but this is not necessary, as one may show by deriving the condition using the MΔ-structure. Actually, the RS-condition (7.53) applies even when the number of RHP-poles of G_p can change.

Control implications. From (7.53) we find that the requirement of robust stability for the case with inverse multiplicative uncertainty gives an upper bound on the sensitivity:

   RS  ⟺  |S| < 1/|w_iI|, ∀ω    (7.54)
We see that we need tight control and have to make |S| small at frequencies where the uncertainty is large and |w_iI| exceeds 1 in magnitude. This may be somewhat surprising, since we intuitively expect to have to detune the system (and make S ≈ 1) when we have uncertainty, while this condition tells us to do the opposite. The reason is that this uncertainty represents pole uncertainty, and at frequencies where |w_iI| exceeds 1 we allow for poles crossing from the left to the right-half plane (G_p becoming unstable), and we then know that we need feedback (|S| < 1) in order to stabilize the system. However, |S| < 1 may not always be possible. In particular, assume that the plant has a RHP-zero at s = z. Then we have the interpolation constraint S(z) = 1, and we must as a prerequisite for RS require that |w_iI(z)| ≤ 1 (recall the maximum modulus theorem, see (5.15)). Thus, we cannot have large pole uncertainty with |w_iI(jω)| > 1 (and hence the possibility of instability) at frequencies where the plant has a RHP-zero.

7.6 SISO Robust performance

7.6.1 SISO nominal performance in the Nyquist plot

Consider performance in terms of the weighted sensitivity function, as discussed in Section 2.7.2. The condition for nominal performance (NP) is then

   NP  ⟺  |w_P S| < 1, ∀ω  ⟺  |w_P| < |1 + L|, ∀ω    (7.55)

Now |1 + L| represents at each frequency the distance of L(jω) from the point −1 in the Nyquist plot, so L(jω) must be at least a distance of |w_P(jω)| from −1. Thus, for NP, L(jω) must stay outside a disc of radius |w_P(jω)| centred on −1. This is illustrated graphically in Figure 7.15.

7.6.2 Robust performance

For robust performance we require the performance condition (7.55) to be satisfied for all possible plants, that is, including the worst-case uncertainty:

   RP  ⟺  |w_P S_p| < 1, ∀S_p, ∀ω    (7.56)
       ⟺  |w_P| < |1 + L_p|, ∀L_p, ∀ω    (7.57)

where L_p = G_p K. This corresponds to requiring |y/d| < 1, ∀Δ_I, in Figure 7.16, where we consider multiplicative uncertainty, and the set of possible loop transfer functions is

   L_p = G_p K = L(1 + w_I Δ_I) = L + w_I L Δ_I    (7.58)
This is consistent with the results we obtained in Section 5.55) to be satisﬁed for all possible plants.6.
Figure 7.15: Nyquist plot illustration of nominal performance condition |w_P| < |1 + L|

1. Graphical derivation of RP-condition. Condition (7.57) is illustrated graphically by the Nyquist plot in Figure 7.17. For RP we must require that all possible L_p(jω) stay outside a disc of radius |w_P(jω)| centred on −1. Since L_p at each frequency stays within a disc of radius |w_I L| centred on L, we see from Figure 7.17 that the condition for RP is that the two discs, with radii |w_P| and |w_I L|, do not overlap. Since their centres are located a distance |1 + L| apart, the RP-condition becomes

   RP  ⟺  |w_P| + |w_I L| < |1 + L|, ∀ω    (7.59)
       ⟺  |w_P (1 + L)^{-1}| + |w_I L(1 + L)^{-1}| < 1, ∀ω    (7.60)

or, in other words,

   RP  ⟺  max_ω (|w_P S| + |w_I T|) < 1    (7.61)

2. Algebraic derivation of RP-condition. From the definition in (7.56) we have that RP is satisfied if the worst-case (maximum) weighted sensitivity at each frequency is less than 1, that is,

   RP  ⟺  max_{S_p} |w_P S_p| < 1, ∀ω    (7.62)

Figure 7.16: Diagram for robust performance with multiplicative uncertainty
Figure 7.17: Nyquist plot illustration of robust performance condition |w_P| < |1 + L_p|

(Strictly speaking, max_ω in (7.61) and (7.62) should be replaced by sup_ω, the supremum.) The perturbed sensitivity is S_p = (I + L_p)^{-1} = 1/(1 + L + w_I L Δ_I), and the worst case (maximum) is obtained at each frequency by selecting |Δ_I| = 1 such that the terms (1 + L) and w_I L Δ_I (which are complex numbers) point in opposite directions. We get

   max_{S_p} |w_P S_p| = |w_P| / (|1 + L| − |w_I L|) = |w_P S| / (1 − |w_I T|)    (7.63)

and by substituting (7.63) into (7.62) we rederive the RP-condition in (7.61).

Remarks on RP-condition (7.61).

1. The RP-condition (7.61) for this problem is closely approximated by the following mixed sensitivity H∞ condition:

   ‖ [w_P S; w_I T] ‖_∞ = max_ω sqrt(|w_P S|² + |w_I T|²) < 1    (7.64)

To be more precise, we find from (A.95) that condition (7.64) is within a factor of at most √2 of condition (7.61). This means that for SISO systems we can closely approximate the RP-condition in terms of an H∞ problem, so there is little need to make use of the structured singular value. However, we will see in the next chapter that the situation can be very different for MIMO systems.

2. The RP-condition (7.61) can be used to derive bounds on the loop shape |L|. At a given frequency we have that |w_P S| + |w_I T| < 1 (RP) is satisfied if

   |L| > (1 + |w_P|) / (1 − |w_I|)   (at frequencies where |w_I| < 1)    (7.65)

or if

   |L| < (1 − |w_P|) / (1 + |w_I|)   (at frequencies where |w_P| < 1)    (7.66)
Conditions (7.65) and (7.66) may be combined over different frequency ranges. Condition (7.65) is most useful at low frequencies, where generally |w_I| < 1 and |w_P| > 1 (tight performance requirement) and we need |L| large, whereas condition (7.66) is most useful at high frequencies, where generally |w_I| > 1 (more than 100% uncertainty) and |w_P| < 1, and we need |L| small. The bounds (7.65) and (7.66) are sufficient; necessary bounds may in the general case be obtained numerically from μ-conditions, as outlined in Remark 13 on page 320. This is discussed by Braatz et al. (1996), who derive bounds also in terms of |S| and |T|.

3. The term |w_P S| + |w_I T| in (7.61) is the structured singular value (μ) for RP for this particular problem, see (8.128). We will discuss μ in much more detail in the next chapter.

4. The structured singular value μ is not equal to the worst-case weighted sensitivity, max_{S_p} |w_P S_p|, given in (7.63) (although many people seem to think it is). The worst-case weighted sensitivity is equal to skewed-μ (μˢ) with fixed uncertainty. In summary, we have for this particular robust performance problem:

   μ = |w_P S| + |w_I T|,   μˢ = |w_P S| / (1 − |w_I T|)    (7.67)

Note that μ and μˢ are closely related, since μ < 1 if and only if μˢ < 1.

Exercise 7.4 Derive the loop-shaping bounds in (7.65) and (7.66), which are sufficient for |w_P S| + |w_I T| < 1 (RP). Hint: Start from the RP-condition in the form |w_P| + |w_I L| < |1 + L| and use the facts that |1 + L| ≥ |L| − 1 and |1 + L| ≥ 1 − |L|.

Exercise 7.5 Also derive the following necessary bounds for RP (which must be satisfied):

   |L| > (|w_P| − 1) / (1 − |w_I|)   (for |w_P| > 1 and |w_I| < 1)
   |L| < (1 − |w_P|) / (|w_I| − 1)   (for |w_P| < 1 and |w_I| > 1)

Hint: Use |1 + L| ≤ 1 + |L|.

Example 7.8 Robust performance problem. Consider robust performance of the SISO system in Figure 7.18, for which

   RP  ⟺  |y/d| < 1, ∀|Δ_u| ≤ 1, ∀ω

with

   w_P(s) = 0.25 + 0.1/s,   w_u(s) = r_u s/(s + 1)    (7.68)

(a) Derive a condition for robust performance (RP).

(b) For what values of r_u is it impossible to satisfy the robust performance condition?

(c) Let r_u = 0.5. Consider two cases for the nominal loop transfer function: 1) GK₁(s) = 0.5/s, and 2) GK₂(s) = (0.5/s)·(1 − s)/(1 + s). For each system, sketch the magnitudes of S and its performance bound as a function of frequency. Does each system satisfy robust performance?
Figure 7.18: Diagram for robust performance in Example 7.8

Solution. (a) The requirement for RP is |w_P S_p| < 1, ∀S_p, where the possible sensitivities are given by

   S_p = 1/(1 + GK + w_u Δ_u) = S/(1 + w_u Δ_u S)    (7.69)

The condition for RP then becomes

   RP  ⟺  |w_P S / (1 + w_u Δ_u S)| < 1, ∀ω, ∀|Δ_u| ≤ 1    (7.70)

A simple analysis shows that the worst case corresponds to selecting Δ_u with magnitude 1 such that the term w_u Δ_u S is purely real and negative, and hence we have

   RP  ⟺  |w_P S| < 1 − |w_u S|, ∀ω    (7.71)
       ⟺  |w_P S| + |w_u S| < 1, ∀ω    (7.72)
       ⟺  |S(jω)| < 1/(|w_P(jω)| + |w_u(jω)|), ∀ω    (7.73)

(b) Since any real system is strictly proper we have |S| → 1 as ω → ∞, and therefore we must require |w_P(jω)| + |w_u(jω)| < 1 as ω → ∞. With the weights in (7.68) this is equivalent to r_u + 0.25 < 1, so RP cannot be satisfied if r_u ≥ 0.75. Thus, we must at least require r_u < 0.75 for RP.

(c) Design S₁ yields RP, while S₂ does not. This is seen by checking the RP-condition (7.73) graphically as shown in Figure 7.19: |S₁| has a peak of 1, while |S₂| has a peak of about 2.45.
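Part (c) can be checked numerically; the sketch below assumes the weights w_P = 0.25 + 0.1/s and w_u = 0.5 s/(s + 1) (i.e. r_u = 0.5) and evaluates the RP measure |S|(|w_P| + |w_u|), which must stay below 1 at all frequencies by (7.73):

```python
import numpy as np

# RP check for the two loop shapes of Example 7.8 via condition (7.73).

w = np.logspace(-3, 3, 3000)
s = 1j * w
wP = 0.25 + 0.1 / s
wu = 0.5 * s / (s + 1)

def rp_measure(L):
    S = 1 / (1 + L)
    return np.max(np.abs(S) * (np.abs(wP) + np.abs(wu)))

L1 = 0.5 / s
L2 = 0.5 / s * (1 - s) / (1 + s)

print(rp_measure(L1) < 1)  # True: design 1 satisfies RP
print(rp_measure(L2) > 1)  # True: design 2 violates RP (|S2| peaks near 2.45)
```

The RHP-zero in GK₂ forces the large sensitivity peak, which is what breaks the robust performance condition.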
7.6.3 The relationship between NP, RS and RP

Consider a SISO system with multiplicative uncertainty, and assume that the closed-loop is nominally stable (NS). The conditions for nominal performance (NP), robust stability (RS) and robust performance (RP) can then be summarized as follows:

   NP  ⟺  |w_P S| < 1, ∀ω    (7.74)
   RS  ⟺  |w_I T| < 1, ∀ω    (7.75)
   RP  ⟺  |w_P S| + |w_I T| < 1, ∀ω    (7.76)
From this we see that a prerequisite for RP is that we satisfy NP and RS. In addition, to satisfy RS we generally want T small, whereas to satisfy NP we generally want S small. However, we cannot make both S and T small at the same frequency because of the identity S + T = 1. This has implications for RP, since |w_P S| + |w_I T| ≥ min{|w_P|, |w_I|}(|S| + |T|), where |S| + |T| ≥ |S + T| = 1, and we derive at each frequency

   |w_P S| + |w_I T| ≥ min{|w_P|, |w_I|}    (7.77)

We conclude that we cannot have both |w_P| > 1 (i.e. good performance) and |w_I| > 1 (i.e. more than 100% uncertainty) at the same frequency. One explanation for this is that at frequencies where |w_I| > 1 the uncertainty will allow for RHP-zeros, and we know that we cannot have tight performance in the presence of RHP-zeros.

On the other hand, if we satisfy both RS and NP, then we have at each frequency

   |w_P S| + |w_I T| ≤ 2 max{|w_P S|, |w_I T|} < 2    (7.78)

It then follows that, within a factor of at most 2, we will automatically get RP when the subobjectives of NP and RS are satisfied. This applies in general, both for SISO and MIMO systems and for any uncertainty. Thus, RP is not a "big issue" for SISO systems, and this is probably the main reason why there is little discussion about robust performance in the classical control literature. However, as we will see in the next chapter, for MIMO systems we may get very poor RP even though the subobjectives of NP and RS are individually satisfied.

7.6.4 The similarity between RS and RP

Robust performance may be viewed as a special case of robust stability (with multiple perturbations). To see this, consider the following two cases as illustrated
Figure 7.20: (a) Robust performance with multiplicative uncertainty. (b) Robust stability with combined multiplicative and inverse multiplicative uncertainty

As usual, the uncertain perturbations are normalized such that ||Δ1||∞ ≤ 1 and ||Δ2||∞ ≤ 1. Since we use the H∞ norm to define both uncertainty and performance, and since the weights in Figures 7.20(a) and (b) are the same, the tests for RP in case (a) and for RS in case (b) are identical. This may be argued from the block diagrams, or by simply evaluating the conditions for the two cases as shown below.

(a) The condition for RP with multiplicative uncertainty was derived in (7.61), but with w1 replaced by wP and with w2 replaced by wI. We found

  RP ⟺ |w1 S| + |w2 T| < 1, ∀ω   (7.79)

(b) We will now derive the RS-condition for the case where Lp is stable (this assumption may be relaxed if the more general M Δ-structure is used, see (8.127)). We want the system to be closed-loop stable for all possible Δ1 and Δ2. RS is equivalent to avoiding encirclements of −1 by the Nyquist plot of Lp. That is, the distance between Lp and −1 must be larger than zero, and therefore

  RS ⟺ |1 + Lp| > 0, ∀Lp, ∀ω   (7.80)
For the combined uncertainty in Figure 7.20(b) we have Lp = L(1 + w2 Δ2)(1 − w1 Δ1)^(-1), and we get

  RS ⟺ |1 + L(1 + w2 Δ2)(1 − w1 Δ1)^(-1)| > 0, ∀Δ1, Δ2   (7.81)
     ⟺ |1 + L + L w2 Δ2 − w1 Δ1| > 0, ∀Δ1, Δ2             (7.82)

Here the worst case is obtained when we choose Δ1 and Δ2 with magnitudes 1 such that the terms L w2 Δ2 and w1 Δ1 point in the opposite direction of the term 1 + L. We get

  RS ⟺ |1 + L| − |L w2| − |w1| > 0   (7.83)
     ⟺ |w1 S| + |w2 T| < 1            (7.84)

which is the same condition as found for RP.

7.7 Examples of parametric uncertainty

We now provide some further examples of how to represent parametric uncertainty.

7.7.1 Parametric pole uncertainty

Consider uncertainty in the parameter a in a state-space model, y' = a y + b u, corresponding to the uncertain transfer function Gp(s) = b/(s − a). More generally, consider the following set of plants:

  Gp(s) = G0(s) / (s − ap),  amin ≤ ap ≤ amax   (7.85)

If amin and amax have different signs then the plant can change from stable to unstable with the pole crossing through the origin (which happens in some applications). This set of plants can be written as

  Gp(s) = G0(s) / (s − a(1 + ra Δ)),  |Δ| ≤ 1   (7.86)

which can be exactly described by inverse multiplicative uncertainty as in (7.53) with nominal model G(s) = G0(s)/(s − a) and

  wiI(s) = ra a / (s − a)   (7.87)

The perturbation Δ must be real to exactly represent the parametric uncertainty. The magnitude of the weight wiI(jω) is equal to ra at low frequencies. If ra is larger than 1 then the plant can be both stable and unstable. As seen from the RS-condition for inverse multiplicative uncertainty, a value of |wiI| larger than 1 means that |S| must be less than 1 at the same frequency, which is consistent with the fact that we need feedback (|S| small) to stabilize an unstable plant.
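The claim that (7.86)-(7.87) describe the uncertain-pole set exactly can be verified pointwise in s. A small sketch, with arbitrary test values for a and ra, and G0 = 1:

```python
# Verify that Gp = G0/(s - a(1 + ra*delta)) equals the inverse multiplicative
# form G*(1 - wiI*delta)^(-1), with G = G0/(s-a) and wiI = ra*a/(s-a).
# G0 = 1; the values of a, ra and the test points s are arbitrary.
a, ra = -2.0, 0.4

for delta in (-1.0, -0.3, 0.7, 1.0):
    for s in (0.5j, 1 + 2j, 10j):
        gp_direct = 1.0 / (s - a * (1 + ra * delta))
        G = 1.0 / (s - a)
        wiI = ra * a / (s - a)
        gp_inv_mult = G / (1 - wiI * delta)
        assert abs(gp_direct - gp_inv_mult) < 1e-12

print("inverse multiplicative form matches the uncertain-pole set")
```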
It is also interesting to consider another form of pole uncertainty, namely that associated with the time constant:

  Gp(s) = G0(s) / (1 + τp s),  τmin ≤ τp ≤ τmax   (7.88)

This results in uncertainty in the pole location, but the set of plants is entirely different from that in (7.85). The reason is that in (7.85) the uncertainty affects the model at low frequency, whereas in (7.88) the uncertainty affects the model at high frequency. The corresponding uncertainty weight, derived as in (7.87), is

  wiI(s) = rτ τ s / (1 + τ s)   (7.89)

This weight is zero at ω = 0 and approaches rτ at high frequency.

7.7.2 Parametric zero uncertainty

Consider zero uncertainty in the "time constant" form, as in

  Gp(s) = (1 + τp s) G0(s),  τmin ≤ τp ≤ τmax   (7.90)

where the remaining dynamics G0(s) are as usual assumed to have no uncertainty. The set of plants in (7.90) may be written as multiplicative (relative) uncertainty with

  wI(s) = rτ τ s / (1 + τ s)   (7.91)

The magnitude |wI(jω)| is small at low frequencies, and approaches rτ (the relative uncertainty in τ) at high frequencies. For cases with rτ > 1 we allow the zero to cross from the LHP to the RHP (through infinity). For example, let −1 ≤ τp ≤ 3. Then the possible zeros zp = −1/τp cross from the LHP to the RHP through infinity: zp ≤ −1/3 (in LHP) and zp ≥ 1 (in RHP).

Consider the following alternative form of parametric zero uncertainty:

  Gp(s) = (s + zp) G0(s),  zmin ≤ zp ≤ zmax   (7.92)

which caters for zeros crossing from the LHP to the RHP through the origin (corresponding to a sign change in the steady-state gain).

Exercise 7.6 Parametric zero uncertainty in zero form. Consider the zero uncertainty in (7.92). Show that the resulting multiplicative weight is wI(s) = rz z/(s + z), and explain why the set of plants given by (7.92) is entirely different from that with the zero uncertainty in "time constant" form in (7.90). This weight is rz at ω = 0 and approaches zero at high frequencies. Explain what the implications are for control if rz > 1.

Remark. Both of the two zero uncertainty forms, (7.90) and (7.92), can occur in practice. An example of the zero uncertainty form in (7.92), which allows for changes in the steady-state gain, is given in Example 7.10.
7.7.3 Parametric state-space uncertainty

The above parametric uncertainty descriptions are mainly included to gain insight. A general procedure for handling parametric uncertainty, more suited for numerical calculations, is given by Packard (1988). Consider an uncertain state-space model,

  x' = Ap x + Bp u   (7.93)
  y  = Cp x + Dp u   (7.94)

or equivalently

  Gp(s) = Cp (sI − Ap)^(-1) Bp + Dp   (7.95)

Assume that the underlying cause for the uncertainty is uncertainty in some real parameters δ1, δ2, ... (these could be temperature, mass, volume, etc.), and assume in the simplest case that the state-space matrices depend linearly on these parameters, i.e.

  Ap = A + Σi δi Ai,  Bp = B + Σi δi Bi,  Cp = C + Σi δi Ci,  Dp = D + Σi δi Di   (7.96)

where A, B, C and D model the nominal system. This description has multiple perturbations, so it cannot be represented by a single perturbation, but it should be fairly clear that we can separate out the perturbations affecting A, B, C and D, and then collect them in a large diagonal matrix Δ with the real δi's along its diagonal. Some of the δi's may have to be repeated. For example, for the uncertainty in A we may write

  Ap = A + Σi δi Ai = A + W2 Δ W1   (7.97)

where Δ is diagonal with the δi's along its diagonal. This is in the form of an inverse additive perturbation (see Figure 8.5(d)). Introduce Φ(s) ≜ (sI − A)^(-1), and we get

  (sI − Ap)^(-1) = (sI − A − W2 Δ W1)^(-1) = Φ(s) (I − W2 Δ W1 Φ(s))^(-1)   (7.98)

This is illustrated in the block diagram of Figure 7.21.

Example 7.9 Suppose Ap is a function of two parameters, δ1 (−1 ≤ δ1 ≤ 1) and δ2 (−1 ≤ δ2 ≤ 1), which enter the matrix linearly, Ap = A + δ1 A1 + δ2 A2, where A1 has rank 1 and A2 has rank 2. Collecting the perturbations as in (7.97) gives Ap = A + W2 Δ W1 with Δ = diag{δ1, δ2, δ2} (7.99)-(7.101).
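The rank-based factorization behind (7.97) can be illustrated numerically. The matrices A1 and A2 below are made-up examples with ranks 1 and 2, not the matrices of Example 7.9; each Ai is factored into thin factors via an SVD:

```python
import numpy as np

# Sketch of Ap - A = delta1*A1 + delta2*A2 rewritten as W2*Delta*W1, where
# each delta_i is repeated rank(A_i) times in Delta. A1 and A2 are
# hypothetical matrices chosen to have ranks 1 and 2 respectively.
A1 = np.array([[1.0, 2.0], [2.0, 4.0]])    # rank 1 -> delta1 appears once
A2 = np.array([[1.0, 0.0], [0.0, 3.0]])    # rank 2 -> delta2 repeated twice

def thin_factor(A):
    """Factor A = U V with U: n x rank(A), V: rank(A) x n."""
    U, svals, Vh = np.linalg.svd(A)
    r = int(np.sum(svals > 1e-12))
    return U[:, :r] * svals[:r], Vh[:r, :]

U1, V1 = thin_factor(A1)
U2, V2 = thin_factor(A2)
W2 = np.hstack([U1, U2])                   # 2 x 3
W1 = np.vstack([V1, V2])                   # 3 x 2

delta1, delta2 = 0.3, -0.8
Delta = np.diag([delta1, delta2, delta2])  # delta2 repeated rank(A2) times
assert np.allclose(W2 @ Delta @ W1, delta1 * A1 + delta2 * A2)
print("W2*Delta*W1 reproduces delta1*A1 + delta2*A2")
```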
Figure 7.21: Uncertainty in state space

Note that δ1 appears only once in Δ, whereas δ2 needs to be repeated. This is related to the ranks of the matrices A1 (which has rank 1) and A2 (which has rank 2). It can be shown that the minimum number of repetitions of each δi in the overall Δ-matrix is equal to the rank of the corresponding matrix Ai (Packard, 1988; Zhou et al., 1996). Additional repetitions of the parameters δi may be necessary if we also have uncertainty in B, C and D.

This example illustrates how most forms of parametric uncertainty can be represented in terms of the Δ-representation using linear fractional transformations (LFTs). Also note that seemingly nonlinear parameter dependencies, for example δ1², δ1 δ2 or 1/δ1, may be rewritten in our standard linear block diagram form by repeating the perturbation. This is illustrated next by an example.

Example 7.10 Parametric uncertainty and repeated perturbations. Consider the following state-space description of a SISO plant(1), obtained from component balances over the reactor volume V (7.102), followed by linearization and introduction of deviation variables at the steady-state operating point (with input u = q, the feed flowrate):

  x1' = −(1 + k) x1 + b1(k) u
  x2' = k x1 − (1 + k) x2 + b2(k) u,   y = x2   (7.103)

where the single uncertain physical parameter k, and hence also the input coefficients b1 and b2, depend on the operating point.

(1) This is actually a simple model of a chemical reactor (CSTR) where u is the feed flowrate, x1 is the concentration of reactant A, y = x2 is the concentration of intermediate product B, and q* is the steady-state value of the feed flowrate. The steady-state values depend on the operating point; it is given that at steady-state the feed of A is fixed (physically, we may have an upstream mixing tank where a fixed amount of A is fed in).
The transfer function for this plant is of the form

  Gp(s) = b2(k) (s + z(k)) / (s + 1 + k)²   (7.104)

Note that the parameter k enters into the plant model in several places, and we will need to use repeated perturbations in order to rearrange the model to fit our standard formulation with the uncertainty represented as a block-diagonal Δ-matrix. Assume that

  k = k̄ (1 + rk δ),  |δ| ≤ 1   (7.105)

where k̄ is the nominal value and rk the relative uncertainty, so the above description generates a set of possible plants.

Let us first consider the input gain uncertainty for state 1, that is, the variations in b1(k). Even though b1 is a nonlinear (rational) function of δ, it has a block-diagram representation and may thus be written as a scalar linear fractional transformation (LFT),

  b1(δ) = Fu(N, δ) = n22 + n12 δ (1 − n11 δ)^(-1) n21   (7.106)

with the constants n11, n22 and the product n12 n21 determined by k̄ and rk (7.107).

Next consider the pole uncertainty caused by variations in the A-matrix, Ap = A + δ A1, where A1 has rank 2 (7.108), together with the input uncertainty in the B-matrix, which may be written as Bp = B + δ B1 (7.109). Consequently, we may pull out the perturbations and collect them in a 3 × 3 diagonal Δ-block with the scalar perturbation δ repeated three times, Δ = diag{δ, δ, δ}.

For our specific example with uncertainty in both A and B, the plant uncertainty may be represented as shown in Figure 7.22, where K(s) is a scalar controller, and we may then obtain the interconnection matrix P by rearranging the block diagram of Figure 7.22 to fit the general control configuration of Chapter 3. It is rather tedious to do this by hand, but it is relatively straightforward with the appropriate software tools.

Remark. The above example is included in order to show that quite complex uncertainty representations can be captured by the general framework of block-diagonal perturbations. It is not suggested, however, that such a complicated description should be used in practice for this example. A little more analysis will show why.
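The scalar LFT construction in (7.106) works for any rational dependence of the form (b0 + b1 δ)/(c0 + c1 δ). A sketch with arbitrary coefficients, not the values of Example 7.10:

```python
# Write gp(delta) = (b0 + b1*delta)/(c0 + c1*delta) as the scalar LFT
# (7.106): n22 + n12*delta*(1 - n11*delta)^(-1)*n21. Coefficients are
# hypothetical test values; only the product n12*n21 matters.
b0, b1, c0, c1 = 1.0, -0.4, 1.0, 0.2

n22 = b0 / c0
n11 = -c1 / c0
n12n21 = (b1 * c0 - b0 * c1) / c0**2

for delta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    direct = (b0 + b1 * delta) / (c0 + c1 * delta)
    lft = n22 + n12n21 * delta / (1 - n11 * delta)
    assert abs(direct - lft) < 1e-12

print("scalar LFT matches the rational parameter dependence")
```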
Figure 7.22: Block diagram of parametric uncertainty

We note from (7.104) that the plant has a RHP-zero for small values of k, and that the steady-state gain is zero at the value of k for which the zero z(k) passes through the origin. In any case, we know that because of the RHP-zero crossing through the origin, the performance at low frequencies will be very poor for this plant. From a practical point of view the pole uncertainty therefore seems to be of very little importance, and we may use a simplified uncertainty description with zero uncertainty only, of the form Gp(s) = b (s + zp)/(s + 1 + k̄)², where the range of zp includes the origin (7.110)-(7.111).

7.8 Additional exercises

Exercise 7.7 Consider a "true" plant

  G'(s) = 3 e^(-0.1s) / ((2s + 1)(0.1s + 1)²)

(a) Derive and sketch the additive uncertainty weight when the nominal model is G(s) = 3/(2s + 1).
(b) Derive the corresponding robust stability condition.
(c) Apply this test for the controller K(s) = k/s and find the values of k that yield stability. Is this condition tight?

Exercise 7.8 Uncertainty weight for a first-order model with delay. Laughlin et al. (1987) considered the following parametric uncertainty description:

  Gp(s) = kp e^(-θp s) / (τp s + 1),  kp ∈ [kmin, kmax],  θp ∈ [θmin, θmax],  τp ∈ [τmin, τmax]   (7.112)
where all parameters are assumed positive. They chose the mean parameter values, k̄ = (kmin + kmax)/2 and similarly for θ̄ and τ̄, giving the nominal model

  G(s) = k̄ e^(-θ̄ s) / (τ̄ s + 1)   (7.113)

and suggested use of a stable and minimum-phase multiplicative uncertainty weight wIL(s), constructed from the extreme values kmax, kmin, τmin, τmax and the delay range (7.114)-(7.115). Note that this weight cannot be compared with (7.19) or (7.20) since the nominal plant is different.

(a) Show that the resulting stable and minimum-phase weight corresponding to the uncertainty description in (7.112) is the wIL(s) in (7.114).
(b) Plot the magnitude of wIL as a function of frequency. Find the frequency where the weight crosses 1 in magnitude, and compare this with 1/θmax. Comment on your answer.
(c) Find lI(ω) using (7.19) or (7.20) and compare with |wIL|. Does the weight include all possible plants? (Answer: No, not quite around the frequency 1/θmax.)

Exercise 7.9 Consider again the system in Figure 7.18 and the uncertainty description in Example 7.8. What kind of uncertainty might wu and Δu represent?

Exercise 7.10 Neglected dynamics. Assume we have derived the following detailed model,

  Gdetail(s) = 3(-0.5s + 1) / ((2s + 1)(0.1s + 1)²)   (7.116)

and we want to use the simplified nominal model G(s) = 3/(2s + 1) with multiplicative uncertainty. Plot lI(ω) and approximate it by a rational transfer function wI(s).

Exercise 7.11 Parametric gain uncertainty. We showed in Example 7.1 how to represent scalar parametric gain uncertainty

  Gp(s) = kp G0(s),  kmin ≤ kp ≤ kmax   (7.117)

as multiplicative uncertainty Gp(s) = G(s)(1 + rk ΔI), with nominal model G(s) = k̄ G0(s), mean gain k̄ = (kmin + kmax)/2, and relative uncertainty rk = (kmax − kmin)/(kmax + kmin). Here ΔI is a real scalar, −1 ≤ ΔI ≤ 1. Alternatively, we can represent the gain uncertainty as inverse multiplicative uncertainty:

  Gp(s) = G(s)(1 + wiI(s) ΔiI)^(-1),  −1 ≤ ΔiI ≤ 1   (7.118)

with wiI = rk and nominal model

  G(s) = (2 kmin kmax / (kmax + kmin)) G0(s)   (7.119)
(a) Derive (7.118) and (7.119). (Hint: The gain variation in (7.117) can be written exactly as kp = k̂ (1 + rk Δ)^(-1), where k̂ is the gain in (7.119).)
(b) Show that the form in (7.118) does not allow for kp = 0.
(c) Discuss why (b) may be a possible advantage.

Exercise 7.12 The model of an industrial robot arm is as follows:

  G(s) = 250(a s² + 0.0001 s + 100) / (s(a s² + 0.0001(500a + 1)s + 100(500a + 1)))

where 0.0002 ≤ a ≤ 0.002. Sketch the Bode plot for the two extreme values of a. What kind of control performance do you expect? Discuss how you may best represent this uncertainty.

7.9 Conclusion

In this chapter we have shown how model uncertainty for SISO systems can be represented in the frequency domain using complex norm-bounded perturbations, Δ(s), with ||Δ||∞ ≤ 1. At the end of the chapter we also discussed how to represent parametric uncertainty using real perturbations.

We showed that the requirement of robust stability for the case of multiplicative complex uncertainty imposes an upper bound on the allowed complementary sensitivity, |wI T| < 1, ∀ω. Similarly, inverse multiplicative uncertainty imposes an upper bound on the sensitivity, |wiI S| < 1, ∀ω. We also derived a condition for robust performance with multiplicative uncertainty, |wP S| + |wI T| < 1, ∀ω.

The approach in this chapter was rather elementary, and to extend the results to MIMO systems and to more complex uncertainty descriptions we need to make use of the structured singular value, μ. This is the theme of the next chapter, where we find that |wI T| and |wiI S| are the structured singular values for evaluating robust stability for the two sources of uncertainty in question, whereas |wP S| + |wI T| is the structured singular value for evaluating robust performance with multiplicative uncertainty.
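The two gain-uncertainty parametrizations of Exercise 7.11 are easy to confirm numerically. A sketch, with an arbitrary positive gain range as the only assumption:

```python
# Check the multiplicative form kp = kbar*(1 + rk*delta) and the inverse
# form kp = khat*(1 + rk*delta)^(-1) against the range [kmin, kmax].
# The gain range is an arbitrary test choice.
kmin, kmax = 2.0, 3.0

kbar = (kmax + kmin) / 2                    # arithmetic mean gain
khat = 2 * kmin * kmax / (kmax + kmin)      # "harmonic mean" gain, (7.119)
rk = (kmax - kmin) / (kmax + kmin)          # relative uncertainty

# Multiplicative form hits the endpoints at delta = -1 and +1:
assert abs(kbar * (1 - rk) - kmin) < 1e-12
assert abs(kbar * (1 + rk) - kmax) < 1e-12
# Inverse form hits the same endpoints (with delta = +1 and -1):
assert abs(khat / (1 + rk) - kmin) < 1e-12
assert abs(khat / (1 - rk) - kmax) < 1e-12
# The inverse form cannot produce kp = 0 when rk < 1 (part (b)):
assert khat / (1 + rk) > 0

print("both parametrizations cover [kmin, kmax] exactly")
```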
8 ROBUST STABILITY AND PERFORMANCE ANALYSIS

The objective of this chapter is to present a general method for analyzing robust stability and robust performance of MIMO systems with multiple perturbations. Our main analysis tool will be the structured singular value, μ. We also show how the "optimal" robust controller, in terms of minimizing μ, can be designed using DK-iteration. This involves solving a sequence of scaled H∞ problems.

8.1 General control configuration with uncertainty

For useful notation and an introduction to model uncertainty the reader is referred to Sections 7.1 and 7.2. The starting point for our robustness analysis is a system representation in which the uncertain perturbations are "pulled out" into a block-diagonal matrix,

  Δ = diag{Δi} = diag{Δ1, Δ2, ...}   (8.1)

where each Δi represents a specific source of uncertainty, e.g. input uncertainty ΔI, or parametric uncertainty δi, where δi is real.

If we also pull out the controller K, we get the generalized plant P, as shown in Figure 8.1. This form is useful for controller synthesis. Alternatively, if the controller is given and we want to analyze the uncertain system, we use the N Δ-structure in Figure 8.2. To illustrate the main idea, consider Figure 8.4, where it is shown how to pull out the perturbation blocks to form Δ and the nominal system N. In Section 3.8 we discussed how to find P and N for cases without uncertainty. The procedure with uncertainty is similar and is demonstrated by examples below.
Figure 8.1: General control configuration (for controller synthesis)

Figure 8.2: N Δ-structure for robust performance analysis

Figure 8.3: M Δ-structure for robust stability analysis

N is related to P and K by a lower LFT,

  N = Fl(P, K) ≜ P11 + P12 K (I − P22 K)^(-1) P21   (8.2)

Similarly, as shown in (3.112), the uncertain closed-loop transfer function from w to z, z = F w, is related to N and Δ by an upper LFT,

  F = Fu(N, Δ) ≜ N22 + N21 Δ (I − N11 Δ)^(-1) N12   (8.3)

To analyze robust stability of F, we can then rearrange the system into the
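At a fixed frequency, (8.2) and (8.3) are plain matrix operations. A minimal sketch, restricted for brevity to equal-size square blocks (general partitions need the block dimensions passed in explicitly):

```python
import numpy as np

# Frequency-by-frequency lower LFT (8.2) and upper LFT (8.3) for matrices
# partitioned into equal-size square blocks of size k.
def lower_lft(P, K):
    k = K.shape[0]
    P11, P12 = P[:-k, :-k], P[:-k, -k:]
    P21, P22 = P[-k:, :-k], P[-k:, -k:]
    return P11 + P12 @ K @ np.linalg.inv(np.eye(k) - P22 @ K) @ P21

def upper_lft(N, Delta):
    k = Delta.shape[0]
    N11, N12 = N[:k, :k], N[:k, k:]
    N21, N22 = N[k:, :k], N[k:, k:]
    return N22 + N21 @ Delta @ np.linalg.inv(np.eye(k) - N11 @ Delta) @ N12

# Sanity checks: K = 0 leaves P11, and Delta = 0 leaves the nominal N22.
P = np.arange(16.0).reshape(4, 4)
assert np.allclose(lower_lft(P, np.zeros((2, 2))), P[:2, :2])
assert np.allclose(upper_lft(P, np.zeros((2, 2))), P[2:, 2:])
print("LFT sanity checks passed")
```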
Figure 8.4: Rearranging an uncertain system into the N Δ-structure: (a) original system with multiple perturbations; (b) pulling out the perturbations
M Δ-structure of Figure 8.3, where M = N11 is the transfer function from the output to the input of the perturbations. In the robustness analysis, only the subset of perturbations which has the block-diagonal structure in (8.1) should be considered.

8.2 Representing uncertainty

As usual, each individual perturbation is assumed to be stable and is normalized,

  σ̄(Δi(jω)) ≤ 1, ∀ω   (8.4)

For a complex scalar perturbation we have |δi(jω)| ≤ 1, ∀ω, and for a real scalar perturbation −1 ≤ δi ≤ 1. Since from (A.47) the maximum singular value of a block-diagonal matrix is equal to the largest of the maximum singular values of the individual blocks, it then follows for Δ = diag{Δi} that

  σ̄(Δ(jω)) ≤ 1, ∀ω  ⟺  ||Δ||∞ ≤ 1   (8.5)

Note that Δ has structure, and therefore in the robustness analysis we do not want to allow all Δ such that (8.5) is satisfied. In some cases the blocks in Δ may be repeated or may be real; for example, repetition is often needed to handle parametric uncertainty, as shown in Section 7.7.3. The assumption of a stable Δ may be relaxed, but then the resulting robust stability and performance conditions will be harder to derive and more complex to state. Furthermore, if we use a suitable form for the uncertainty and allow for multiple perturbations, then we can always generate the desired class of plants with stable perturbations, so assuming Δ stable is not really a restriction.

8.2.1 Differences between SISO and MIMO systems

The main difference between SISO and MIMO systems is the concept of directions, which is only relevant in the latter. As a consequence, MIMO systems may experience much larger sensitivity to uncertainty than SISO systems. The following example illustrates that for MIMO systems it is sometimes critical to represent the coupling between uncertainty in different transfer function elements.

Example 8.1 Coupling between transfer function elements. Consider a distillation process where at steady-state

  G = [  87.8  −86.4 ]        RGA(G) = [  35.1  −34.1 ]
      [ 108.2 −109.6 ]                 [ −34.1   35.1 ]   (8.6)
From the large RGA-elements we know that G becomes singular for small relative changes in the individual elements; for example, from (6.88) we know that perturbing the 1,2-element by a relative change of 1/λ12 (from −86.4 to about −88.9) makes G singular. Since gain variations of this magnitude or more may occur during operation of the distillation process, this seems to indicate that independent control of both outputs is impossible. However, this conclusion is incorrect since, for a distillation process, an uncertainty description with independently varying elements does not represent the actual uncertainty: the transfer function elements are coupled due to underlying physical constraints (e.g. the material balance). For the distillation process a more reasonable description of the gain uncertainty is a coupled form (Skogestad et al., 1988),

  Gp = G + w Δ E,  Δ = δ,  |δ| ≤ 1   (8.7)

where Δ = δ is a single scalar (real) perturbation, w is in this case a real constant, and the fixed matrix E expresses how the element gains vary together. For the numerical data above, Gp in (8.7) never becomes singular, irrespective of δ. (Note that this is not generally true for the uncertainty description given in (8.7).)

Exercise 8.1 The uncertain plant in (8.7) may be represented in the additive uncertainty form Gp = G + W2 Δ W1, where Δ = δ is a single scalar perturbation. Find W1 and W2.

8.2.2 Parametric uncertainty

The representation of parametric uncertainty, as discussed in Chapter 7 for SISO systems, carries straight over to MIMO systems. However,
the inclusion of parametric uncertainty may be more significant for MIMO plants because it offers a simple method of representing the coupling between uncertain transfer function elements. For example, the coupled gain uncertainty in (8.7) originated from a parametric uncertainty description of the distillation process.

8.2.3 Unstructured uncertainty

Unstructured perturbations are often used to get a simple uncertainty model. We here define unstructured uncertainty as the use of a "full" complex perturbation matrix Δ, usually with dimensions compatible with those of the plant, where at each frequency any Δ(jω) satisfying σ̄(Δ(jω)) ≤ 1 is allowed.

Six common forms of unstructured uncertainty are shown in Figure 8.5. In Figure 8.5(a), (b) and (c) are shown three feedforward forms: additive uncertainty, multiplicative input uncertainty and multiplicative output uncertainty:

  ΠA:  Gp = G + EA,        EA = wA ΔA   (8.8)
  ΠI:  Gp = G(I + EI),     EI = wI ΔI   (8.9)
  ΠO:  Gp = (I + EO) G,    EO = wO ΔO   (8.10)
Remark.2. but perturbation.12) (8. (b) Multiplicative input uncertainty.5(d). we may have ¡Á at the input and ¡Ç at the output. (e) Inverse multiplicative input uncertainty. (f) Inverse multiplicative output uncertainty In Figure 8. ¡ denotes the normalized perturbation and the “actual” Û¡ ¡Û. We have here used scalar weights Û. In practice.5: (a) Additive uncertainty. one can have several perturbations which themselves are unstructured. Ï ¾ ¡Ï½ where Ï½ and Ï¾ are given transfer function matrices. For example.13) The negative sign in front of the ’s does not really matter here since we assume that ¡ can have any sign. Another common form of unstructured uncertainty is coprime factor uncertainty discussed later in Section 8. (c) Multiplicative output uncertainty. so sometimes one may want to use matrix weights. inverse additive uncertainty. inverse multiplicative input uncertainty and inverse multiplicative output uncertainty: ¥ ¥Á ¥Ç Ô Ô Ô ´Á µ ½ ´Á Á µ ½ ´Á Ç µ ½ Û ¡ Á Û Á¡ Á Ç Û Ç¡ Ç (8. which may be combined .11) (8. (d) Inverse additive uncertainty.6. (e) and (f) are shown three feedback or inverse forms.¾ ÅÍÄÌÁÎ ÊÁ Ä Û Ã ÇÆÌÊÇÄ ¹Û ¹¡ ¡ Õ ¹ (a) ¹¹ + + ¹ + + ¹ (d) Õ¹ ¡Á ¹ ÛÁ ¹ ¡Á ÛÁ Õ (b) ¹¹ + + ¹ ¹ + + ¹ Õ (e) ¹ ¹ Û Ç ¹ ¡Ç ¹ ÛÇ ¡Ç Õ (c) ¹¹ + + ¹ ¹ + + Õ¹ (f) Figure 8.
These may be combined into a larger perturbation, Δ = diag{ΔI, ΔO}. However, this Δ is block-diagonal and is therefore no longer truly unstructured.

Lumping uncertainty into a single perturbation

For SISO systems we usually lump multiple sources of uncertainty into a single complex perturbation, often in multiplicative form. This may also be done for MIMO systems, but then it makes a difference whether the perturbation is at the input or the output. Since output uncertainty is frequently less restrictive than input uncertainty in terms of control performance (see Section 6.10), we first attempt to lump the uncertainty at the output. For example, a set of plants Π may be represented by multiplicative output uncertainty with a scalar weight wO(s) using

  Gp = (I + wO ΔO) G,  ||ΔO||∞ ≤ 1   (8.14)

where, similar to (7.15),

  lO(ω) = max over Gp ∈ Π of σ̄((Gp − G) G^(-1)),  |wO(jω)| ≥ lO(ω), ∀ω   (8.15)

(we can use the pseudo-inverse if G is singular). If the resulting uncertainty weight is reasonable (i.e. it must at least be less than 1 in the frequency range where we want control) and the subsequent analysis shows that robust stability and performance may be achieved, then this lumping of uncertainty at the output is fine. If this is not the case, then one may try to lump the uncertainty at the input instead, using multiplicative input uncertainty with a scalar weight,

  Gp = G(I + wI ΔI),  ||ΔI||∞ ≤ 1   (8.16)

where, similar to (7.15),

  lI(ω) = max over Gp ∈ Π of σ̄(G^(-1)(Gp − G)),  |wI(jω)| ≥ lI(ω), ∀ω   (8.17)

However, in many cases this approach of lumping uncertainty either at the output or the input does not work well. This is because one cannot in general shift a perturbation from one location in the plant (say the input) to another location (say the output) without introducing candidate plants which were not present in the original set. In particular, one should be careful when the plant is ill-conditioned. This is discussed next.

Moving uncertainty from the input to the output

For a scalar plant, we have Gp = G(1 + wI ΔI) = (1 + wO ΔO) G, and we may simply "move" the multiplicative uncertainty from the input to the output without changing the value of the weight, i.e. wI = wO.
However, for multivariable plants we usually need to multiply by the condition number γ(G), as is shown next. Suppose the true uncertainty is represented as unstructured input uncertainty (EI is a full matrix) in the form

  Gp = G(I + EI)   (8.18)

Then from (8.17) the magnitude of the multiplicative input uncertainty is

  lI(ω) = max over Gp ∈ Π of σ̄(G^(-1)(Gp − G)) = max σ̄(EI)   (8.19)

On the other hand, if we want to represent (8.18) as multiplicative output uncertainty, then from (8.15)

  lO(ω) = max over Gp ∈ Π of σ̄((Gp − G) G^(-1)) = max σ̄(G EI G^(-1))   (8.20)

which is much larger than lI(ω) if the condition number of the plant is large. In particular, if EI = wI ΔI where any ΔI(jω) with σ̄(ΔI(jω)) ≤ 1 is allowed, then at each frequency

  lO(ω) = |wI(jω)| max over ΔI of σ̄(G ΔI G^(-1)) = |wI(jω)| γ(G(jω))   (8.21)

where γ(G) = σ̄(G)/σmin(G) is the condition number.

Proof of (8.21): Write at each frequency G = U Σ V^H, so that G^(-1) = V Σ^(-1) U^H. The upper bound follows from σ̄(G ΔI G^(-1)) ≤ σ̄(G) σ̄(ΔI) σ̄(G^(-1)) = γ(G), and it is attained: selecting the admissible rank-one perturbation ΔI = v1 vn^H (which has maximum singular value 1) gives G ΔI G^(-1) = γ(G) u1 un^H, whose maximum singular value is γ(G).

Example 8.2 Assume the relative input uncertainty is 10%, i.e. wI = 0.1, and the condition number of the plant is 141.7. Then we must select lO = wO = 14.2 in order to represent this as multiplicative output uncertainty (this is larger than 1 and therefore not useful for controller design).

Also for diagonal input uncertainty (EI diagonal) we may have a similar situation: if the plant has large RGA-elements then the elements in G EI G^(-1) will be much larger than those of EI, making it impractical to move the uncertainty from the input to the output.

Example 8.3 Let Π be the set of plants generated by the additive uncertainty in (8.7) with w = 10 (corresponding to about 10% uncertainty in each element). Representing the worst plant G' in this set (corresponding to δ = 1) in terms of input uncertainty would require a relative uncertainty of more than 1400%. On the other hand, we can represent the same additive uncertainty as multiplicative output uncertainty (which is also generally preferable for a subsequent controller design) with lO = σ̄((G' − G) G^(-1)) = 0.10. Therefore output uncertainty works well for this particular example.
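The factor γ(G) in (8.21) can be confirmed numerically for the distillation gain matrix in (8.6); the rank-one ΔI below is one admissible worst-case perturbation (maximum singular value 1):

```python
import numpy as np

# Moving 10% unstructured input uncertainty to the output requires
# lO = |wI| * gamma(G), cf. (8.21). G is the steady-state distillation
# gain matrix from (8.6).
G = np.array([[87.8, -86.4], [108.2, -109.6]])
wI = 0.1

U, sv, Vh = np.linalg.svd(G)
gamma = sv[0] / sv[-1]                   # condition number, about 141.7

# Rank-one worst case: map the weakest output direction of G^-1 into the
# strongest input direction of G.
DeltaI = np.outer(Vh[0], Vh[-1])
lO = wI * np.linalg.norm(G @ DeltaI @ np.linalg.inv(G), 2)
print(round(gamma, 1), round(lO, 1))     # 141.7 14.2
```

This reproduces the numbers of Example 8.2: a modest 10% input uncertainty becomes a useless 1420% output uncertainty for this ill-conditioned plant.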
Conclusion. Ideally, we would like to lump several sources of uncertainty into a single perturbation to get a simple uncertainty description; often an unstructured multiplicative output perturbation is used. However, from the above discussion we have learnt that we should be careful about doing this, at least for plants with a large condition number. In such cases we may have to represent the uncertainty as it occurs physically (at the input, in the elements, etc.), thereby generating several perturbations. For uncertainty associated with unstable plant poles, we should use one of the inverse forms in Figure 8.5.

Exercise 8.2 A fairly general way of representing an uncertain plant Gp is in terms of a linear fractional transformation (LFT) of Δ, as shown in Figure 8.6. Here G = H22 is the nominal plant model, and

  Gp = Fu(H, Δ) = H22 + H21 Δ (I − H11 Δ)^(-1) H12   (8.23)

Obtain H for each of the six uncertainty forms in (8.8)-(8.13) using E = W2 Δ W1. (Hint for the inverse forms: (I − W1 Δ W2)^(-1) = I + W1 Δ (I − W2 W1 Δ)^(-1) W2.)

Figure 8.6: Uncertain plant, y = Gp u, represented by LFT

Exercise 8.3 Obtain H in Figure 8.6 for the uncertain plant in Figure 7.20(b).

Exercise 8.4 Obtain H in Figure 8.6 for the uncertain plant in Figure 7.22.

8.2.4 Diagonal uncertainty

By "diagonal uncertainty" we mean that the perturbation is a complex diagonal matrix,

  Δ(s) = diag{δi(s)},  |δi(jω)| ≤ 1, ∀ω, ∀i   (8.24)

Diagonal uncertainty usually arises from a consideration of uncertainty or neglected dynamics in the individual input channels (actuators) or in the individual output channels (sensors). To make this clearer, let us consider uncertainty in the input channels. As a simple example, consider a bathroom shower,
in which the input variables are the flows of hot and cold water. The uncertainty associated with each such input channel is typically at least 10% at steady-state (r0 ≥ 0.1), and it increases at higher frequencies to account for neglected or uncertain dynamics (typically, r∞ ≥ 2). Normally we would represent the uncertainty in each input or output channel using a simple weight in the form given in (7.27), namely

  w(s) = (τ s + r0) / ((τ / r∞) s + 1)   (8.28)

where r0 is the relative uncertainty at steady-state, 1/τ is (approximately) the frequency where the relative uncertainty reaches 100%, and r∞ is the magnitude of the weight at higher frequencies. This type of diagonal uncertainty is always present, and since it has a scalar origin it may be represented using the methods presented in Chapter 7. More generally, diagonal input uncertainty arises as follows.
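The behaviour of the weight (8.28) is easy to check by evaluating its magnitude at the frequency extremes. The numbers below (10% static error, 200% at high frequency, τ = 0.5) are illustrative choices:

```python
# Magnitude of the channel uncertainty weight (8.28):
# r0 at steady-state, r_inf at high frequency, crossing 1 near omega = 1/tau.
# r0, rinf and tau are arbitrary illustrative values.
r0, rinf, tau = 0.1, 2.0, 0.5

def w(s):
    return (tau * s + r0) / ((tau / rinf) * s + 1)

print(abs(w(0)))        # 0.1  (= r0 at steady-state)
print(abs(w(1e9j)))     # ~2.0 (= rinf at high frequency)
```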
With each input ui there is associated a separate physical system (amplifier, signal converter, actuator, valve, etc.) which, based on the controller output signal ui, generates a physical plant input mi,

  mi = hi(s) ui   (8.25)

The scalar transfer function hi(s) is often absorbed into the plant model G(s), but for representing the uncertainty it is important to notice that it originates at the input. We can represent this actuator uncertainty as multiplicative (relative) uncertainty given by

  hpi(s) = hi(s)(1 + wIi(s) δi(s)),  |δi(jω)| ≤ 1, ∀ω   (8.26)

which after combining all input channels results in diagonal input uncertainty for the plant,

  Gp(s) = G(I + WI ΔI),  ΔI = diag{δi},  WI = diag{wIi}   (8.27)

Remark 1 The diagonal uncertainty in (8.27) originates from independent scalar uncertainty in each input channel. If we choose to represent this as unstructured input uncertainty (ΔI a full matrix), then we must realize that this will introduce non-physical couplings at the input to the plant, resulting in a set of plants which is too large, and the resulting robustness analysis may be conservative (meaning that we may incorrectly conclude that the system may not meet its specifications).

Remark 2 The claim is often made that one can easily reduce the static input gain uncertainty to significantly less than 10%, but this is probably not true in most cases. A commonly suggested method to reduce the uncertainty is to measure the actual input (mi) and employ local feedback (cascade control) to readjust ui.
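The structure of (8.27) can be made concrete with a small numerical sketch; the plant and weight values below are arbitrary. Note that a diagonal ΔI only scales the gain of each input channel, without introducing coupling between channels:

```python
import numpy as np

# Diagonal input uncertainty (8.27): Gp = G(I + WI*DeltaI), DeltaI diagonal.
# G and the channel weights are hypothetical illustrative values.
G = np.array([[1.0, 0.5], [0.2, 2.0]])
WI = np.diag([0.1, 0.2])          # 10% and 20% channel uncertainty

gains = []
for d1 in (-1.0, 1.0):
    for d2 in (-1.0, 1.0):
        DeltaI = np.diag([d1, d2])
        Gp = G @ (np.eye(2) + WI @ DeltaI)
        gains.append(Gp[0, 0])    # gain from input 1 to output 1

# Channel 1 gains vary by +/-10% only, as the first weight prescribes:
assert abs(min(gains) - 0.9) < 1e-12
assert abs(max(gains) - 1.1) < 1e-12
print("channel 1 gain stays within +/-10%")
```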
One can then imagine measuring these flows and using cascade control so that each flow can be adjusted more accurately. However, even in this case there will be uncertainty related to the accuracy of each measurement. Note that it is not the absolute measurement error that yields problems, but rather the error in the sensitivity of the measurement with respect to changes (i.e. the "gain" of the sensor). For example, assume that the nominal flow in our shower is 1 l/min and we want to increase it to 1.1 l/min, that is, in terms of deviation variables we want $u' = 0.1$ [l/min]. Suppose the vendor guarantees that the measurement error is less than 1%. But, even with this small absolute error, the actual flow rate may have increased from 0.99 l/min (measured value of 1 l/min is 1% too high) to 1.11 l/min (measured value of 1.1 l/min is 1% too low), corresponding to a change $u' = 0.12$ [l/min], and an input gain uncertainty of 20%.

In conclusion, diagonal input uncertainty, as given in (8.27), should always be considered because:
1. It is always present, and a system which is sensitive to this uncertainty will not work in practice.
2. It often restricts achievable performance with multivariable control.

8.3 Obtaining P, N and M

We will now illustrate, by way of an example, how to obtain the interconnection matrices P, N and M in a given situation.

Figure 8.7: System with multiplicative input uncertainty and performance measured at the output

Example 8.4 System with input uncertainty. Consider a feedback system with multiplicative input uncertainty $\Delta_I$ as shown in Figure 8.7. Here $W_I$ is a normalization weight for the uncertainty and $W_P$ is a performance weight. We want to derive the generalized plant P in Figure 8.1 which has inputs $[\,u_\Delta\ \ w\ \ u\,]^T$ and outputs $[\,y_\Delta\ \ z\ \ v\,]^T$. By writing down the equations (e.g. see Example 3.13) or simply by inspecting Figure 8.7 (remember to break
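The 20% figure in the shower example is plain arithmetic and can be reproduced directly (the numbers below are exactly those of the text; the small discrepancy with the rounded 0.12 is only rounding):

```python
# Nominal flow 1 l/min, target 1.1 l/min: desired change in deviation variables.
u_wanted = 1.1 - 1.0  # 0.1 l/min

# Worst case with a 1% sensor gain error: the initial reading of 1 l/min is
# 1% too high (actual 0.99 l/min), while the final reading of 1.1 l/min is
# 1% too low (actual about 1.11 l/min).
actual_initial = 1.0 * 0.99
actual_final = 1.1 * 1.01
u_actual = actual_final - actual_initial       # ~0.121 l/min

gain_error = (u_actual - u_wanted) / u_wanted  # ~0.21, i.e. on the order of 20%
print(round(u_actual, 3), round(gain_error, 2))
```

A 1% absolute measurement error thus translates into roughly a 20% error in the *gain* from commanded to actual flow change, which is what matters for feedback.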
the loop before and after K) we get
$$ P = \begin{bmatrix} 0 & 0 & W_I \\ W_P G & W_P & W_P G \\ -G & -I & -G \end{bmatrix} \qquad (8.29) $$
It is recommended that the reader carefully derives P (as instructed in Exercise 8.5). Note that the transfer function from $u_\Delta$ to $y_\Delta$ (upper left element in P) is 0, because $u_\Delta$ has no direct effect on $y_\Delta$ (except through K). Next, partition P to be compatible with K, i.e.
$$ P_{11} = \begin{bmatrix} 0 & 0 \\ W_P G & W_P \end{bmatrix}, \qquad P_{12} = \begin{bmatrix} W_I \\ W_P G \end{bmatrix} \qquad (8.30) $$
$$ P_{21} = \begin{bmatrix} -G & -I \end{bmatrix}, \qquad P_{22} = -G \qquad (8.31) $$
and then find $N = F_l(P, K)$ using (8.2). We get (see Exercise 8.7)
$$ N = \begin{bmatrix} -W_I K G (I + KG)^{-1} & -W_I K (I + GK)^{-1} \\ W_P G (I + KG)^{-1} & W_P (I + GK)^{-1} \end{bmatrix} \qquad (8.32) $$
that is, $N_{11} = -W_I T_I$, $N_{12} = -W_I KS$, $N_{21} = W_P SG$ and $N_{22} = W_P S$.

Alternatively, we can derive N directly from Figure 8.7 by evaluating the closed-loop transfer function from inputs $[\,u_\Delta\ \ w\,]^T$ to outputs $[\,y_\Delta\ \ z\,]^T$ (without breaking the loop before and after K). For example, to derive $N_{12}$, which is the transfer function from w to $y_\Delta$, we start at the output ($y_\Delta$) and move backwards to the input (w) using the MIMO Rule in Section 3.2 (we first meet $W_I$, then K, and we then exit the feedback loop and get the term $(I + GK)^{-1}$).

The upper left block, $N_{11}$, in (8.32) is the transfer function from $u_\Delta$ to $y_\Delta$. This is the transfer function M needed in Figure 8.3 for evaluating robust stability. Thus, we have $M = W_I K G (I + KG)^{-1} = W_I T_I$ (the sign does not matter, as it may be absorbed into $\Delta$).

Exercise 8.5 Show in detail how P in (8.29) is derived.

Exercise 8.6 For the system in Figure 8.7 we see easily from the block diagram that the uncertain transfer function from w to z is $F = W_P (I + G(I + W_I \Delta_I)K)^{-1}$. Show that this uncertain transfer function from w to z is identical to $F_u(N, \Delta)$ evaluated using (8.2).

Exercise 8.7 Derive N in (8.32) from P in (8.29) using the lower LFT in (8.2). You will note that the algebra is quite tedious, and that it is much simpler to derive N directly from the block diagram as described above.

Exercise 8.8 Derive P and N for the case when the multiplicative uncertainty is at the output rather than the input.

Remark. Of course, deriving N from P is straightforward using available software. For example, in the MATLAB μ-toolbox we can evaluate $N = F_l(P, K)$ using the command N=starp(P,K), and with a specific $\Delta$ the perturbed transfer function $F_u(N, \Delta)$ from w to z is obtained with the command F=starp(delta,N).
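Outside MATLAB, the star products used above are easy to reproduce frequency by frequency. Below is a minimal numpy sketch (function names and the scalar test case are mine, not a library API) of the lower LFT $N = F_l(P,K) = P_{11} + P_{12}K(I - P_{22}K)^{-1}P_{21}$ and the upper LFT $F_u(N,\Delta) = N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12}$:

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """Lower LFT: close the bottom loop of P with K (cf. N = starp(P, K))."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

def lft_upper(N11, N12, N21, N22, Delta):
    """Upper LFT: close the top loop of N with Delta (cf. F = starp(delta, N))."""
    I = np.eye(N11.shape[0])
    return N22 + N21 @ Delta @ np.linalg.solve(I - N11 @ Delta, N12)

# Scalar sanity check: P11 = 1, P12 = 2, P21 = 3, P22 = 0.5, K = 1
N = lft_lower(np.array([[1.0]]), np.array([[2.0]]), np.array([[3.0]]),
              np.array([[0.5]]), np.array([[1.0]]))
print(N)  # 1 + 2*1*(1 - 0.5)^(-1)*3 = 13
```

Applied at each frequency to the blocks of (8.30)-(8.31), `lft_lower` reproduces (8.32) numerically.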
Exercise 8.9 Find P for the uncertain system in Figure 7.18.

Exercise 8.10 Find P for the uncertain plant $G_p$ in (8.23) when $w = r$ and $z = y - r$.

Exercise 8.11 Find the interconnection matrix N for the uncertain system in Figure 7.20(b).

Exercise 8.12 Find the transfer function $M = N_{11}$ for studying robust stability for the uncertain plant $G_p$ in (8.23). What is $\Delta$?

Exercise 8.13 MΔ-structure for combined input and output uncertainties. Consider the block diagram in Figure 8.8, where we have both input and output multiplicative uncertainty blocks. The set of possible plants is given by
$$ G_p = (I + W_{2O}\Delta_O W_{1O})\,G\,(I + W_{2I}\Delta_I W_{1I}) \qquad (8.33) $$
where $\|\Delta_I\|_\infty \le 1$ and $\|\Delta_O\|_\infty \le 1$. Collect the perturbations into $\Delta = \mathrm{diag}\{\Delta_I, \Delta_O\}$ and rearrange Figure 8.8 into the MΔ-structure in Figure 8.3. Show that
$$ M = \begin{bmatrix} W_{1I} & 0 \\ 0 & W_{1O} \end{bmatrix} \begin{bmatrix} T_I & KS \\ SG & T \end{bmatrix} \begin{bmatrix} W_{2I} & 0 \\ 0 & W_{2O} \end{bmatrix} \qquad (8.34) $$

Figure 8.8: System with input and output multiplicative uncertainty

Exercise 8.14 Find N for the uncertain system in Figure 7.18. What is M?

8.4 Definitions of robust stability and robust performance

We have discussed how to represent an uncertain set of plants in terms of the NΔ-structure in Figure 8.2. The next step is to check whether we have stability and acceptable performance for all plants in the set:
1. Robust stability (RS) analysis: with a given controller K we determine whether the system remains stable for all plants in the uncertainty set.
2. Robust performance (RP) analysis: if RS is satisfied, we determine how "large" the transfer function from exogenous inputs w to outputs z may be for all plants in the uncertainty set.

In Figure 8.2, w represents the exogenous inputs (normalized disturbances and references), and z the exogenous outputs (normalized errors). We have from (8.3) that $z = F_u(N, \Delta)\,w$, where
$$ F_u(N, \Delta) \triangleq N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12} \qquad (8.35) $$
Before proceeding, we need to define performance more precisely. We here use the $H_\infty$ norm to define performance, and require for RP that $\|F_u(N,\Delta)\|_\infty \le 1$ for all allowed $\Delta$'s. A typical choice is $F = w_P S_p$ (the weighted sensitivity function), where $w_P$ is the performance weight (capital P for performance) and $S_p$ represents the set of perturbed sensitivity functions (lower-case p for perturbed).

In terms of the NΔ-structure in Figure 8.2 our requirements for stability and performance can then be summarized as follows:
$$ \mathrm{NS} \iff N \text{ is internally stable} \qquad (8.36) $$
$$ \mathrm{NP} \iff \|N_{22}\|_\infty < 1; \ \text{and NS} \qquad (8.37) $$
$$ \mathrm{RS} \iff F = F_u(N,\Delta) \text{ is stable } \forall\Delta,\ \|\Delta\|_\infty \le 1; \ \text{and NS} \qquad (8.38) $$
$$ \mathrm{RP} \iff \|F\|_\infty < 1,\ \forall\Delta,\ \|\Delta\|_\infty \le 1; \ \text{and NS} \qquad (8.39) $$
These definitions of RS and RP are useful only if we can test them in an efficient manner, that is, without having to search through the infinite set of allowable perturbations $\Delta$. We will show how this can be done by introducing the structured singular value, μ, as our analysis tool. At the end of the chapter we also discuss how to synthesize controllers such that we have "optimal robust performance" by minimizing μ over the set of stabilizing controllers.

Remark 1 Important. As a prerequisite for nominal performance (NP), robust stability (RS) and robust performance (RP), we must first satisfy nominal stability (NS). This is because the frequency-by-frequency conditions can also be satisfied for unstable systems.

Remark 2 Convention for inequalities. In this book we use the convention that the perturbations are bounded such that they are less than or equal to one. This results in a stability condition with a strict inequality, for example, RS $\iff \|M\|_\infty < 1$ if $\|\Delta\|_\infty \le 1$. (We could alternatively have bounded the uncertainty with a strict inequality, $\|\Delta\|_\infty < 1$, yielding the equivalent condition RS $\iff \|M\|_\infty \le 1$.)

Remark 3 Allowed perturbations. For simplicity below we will use the shorthand notation
$$ \forall\Delta \quad \text{and} \quad \max_\Delta \qquad (8.40) $$
to mean "for all $\Delta$'s in the set of allowed perturbations" and "maximizing over all $\Delta$'s in the set of allowed perturbations". By allowed perturbations we mean that the $H_\infty$ norm of $\Delta$ is less than or equal to 1, $\|\Delta\|_\infty \le 1$, and that $\Delta$ has a specified block-diagonal structure where certain blocks may be restricted to be real. To be mathematically exact, we should replace $\Delta$ in (8.40) by $\Delta \in \mathbf{B\Delta}$, where $\mathbf{B\Delta} = \{\Delta \in \mathbf{\Delta} : \|\Delta\|_\infty \le 1\}$ is the set of unity norm-bounded perturbations with a given structure $\mathbf{\Delta}$. The allowed structure should also be defined, for example by
$$ \mathbf{\Delta} = \{\,\mathrm{diag}[\delta_1 I_{r_1}, \ldots, \delta_S I_{r_S}, \Delta_1, \ldots, \Delta_F] \ :\ \delta_i \in \mathbf{R},\ \Delta_j \in \mathbf{C}^{m_j \times m_j}\,\} $$
where in this case S denotes the number of real scalars (some of which may be repeated), and F the number of complex blocks. This gets rather involved. Fortunately, this amount of detail is rarely required as it is usually clear what we mean by "for all allowed perturbations" or "$\forall\Delta$".

8.5 Robust stability of the MΔ-structure

Consider the uncertain NΔ-system in Figure 8.2, for which the transfer function from w to z is, as in (8.2), given by
$$ F_u(N, \Delta) = N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1}N_{12} \qquad (8.41) $$
Suppose that the system is nominally stable (with $\Delta = 0$), that is, N is stable (which means that the whole of N, and not only $N_{22}$, must be stable). We also assume that $\Delta$ is stable. We then see directly from (8.41) that the only possible source of instability is the feedback term $(I - N_{11}\Delta)^{-1}$. Thus, when we have nominal stability (NS), the stability of the system in Figure 8.2 is equivalent to the stability of the MΔ-structure in Figure 8.3, where $M = N_{11}$.

We thus need to derive conditions for checking the stability of the MΔ-structure. The next theorem follows from the generalized Nyquist Theorem. It applies to $H_\infty$ norm-bounded $\Delta$-perturbations, but as can be seen from the statement it also applies to any other convex set of perturbations (e.g. sets with other structures or sets bounded by different norms).

Theorem 8.1 Determinant stability condition (real or complex perturbations). Assume that the nominal system $M(s)$ and the perturbations $\Delta(s)$ are stable. Consider the convex set of perturbations $\Delta$, such that if $\Delta'$ is an allowed perturbation then so is $c\Delta'$, where c is any real scalar such that $|c| \le 1$. Then the MΔ-system in Figure 8.3 is stable for all allowed perturbations (we have RS) if and only if the Nyquist plot of $\det(I - M\Delta(s))$ does not encircle the origin, $\forall\Delta$:
$$ \mathrm{RS} \iff \det(I - M\Delta(j\omega)) \ne 0, \quad \forall\omega,\ \forall\Delta \qquad (8.43) $$
$$ \phantom{\mathrm{RS}} \iff \lambda_i(M\Delta) \ne 1, \quad \forall i,\ \forall\omega,\ \forall\Delta \qquad (8.44) $$
with the encirclement condition itself labelled (8.42).

Proof: Condition (8.42) is simply the generalized Nyquist Theorem (page 147) applied to a positive feedback system with a stable loop transfer function $M\Delta$. The equivalence (8.42) $\Leftrightarrow$ (8.43) is proved by proving not(8.42) $\Rightarrow$ not(8.43) and not(8.43) $\Rightarrow$ not(8.42). not(8.43) $\Rightarrow$ not(8.42): this is obvious, since by "encirclement of the origin" we also include the origin itself. not(8.42) $\Rightarrow$ not(8.43): first note that with $\Delta = 0$, $\det(I - M\Delta) = 1$ at all frequencies. Assume there exists a perturbation $\Delta'$ such that the image of $\det(I - M\Delta'(s))$ encircles the origin as s traverses the Nyquist D-contour. Because the Nyquist contour and its map are closed, there then exists another perturbation in the set, $\Delta'' = \epsilon\Delta'$ with $\epsilon \in [0, 1]$, and a frequency $\omega'$ such that $\det(I - M\Delta''(j\omega')) = 0$. Finally, (8.44) is equivalent to (8.43) since $\det(I - A) = \prod_i \lambda_i(I - A)$ and $\lambda_i(I - A) = 1 - \lambda_i(A)$. $\Box$

The following is a special case of Theorem 8.1 which applies to complex perturbations.

Theorem 8.2 Spectral radius condition for complex perturbations. Assume that the nominal system $M(s)$ and the perturbations $\Delta(s)$ are stable. Consider the class of perturbations, $\Delta$, such that if $\Delta'$ is an allowed perturbation then so is $c\Delta'$, where c is any complex scalar such that $|c| \le 1$. Then the MΔ-system in Figure 8.3 is stable for all allowed perturbations (we have RS) if and only if
$$ \rho(M\Delta(j\omega)) < 1, \quad \forall\omega,\ \forall\Delta \qquad (8.45) $$
or, equivalently,
$$ \mathrm{RS} \iff \max_\Delta \rho(M\Delta(j\omega)) < 1, \quad \forall\omega \qquad (8.46) $$

Proof: (8.45) is proved by showing its equivalence with (8.43). (8.45) $\Rightarrow$ (8.43) is "obvious": it follows from the definition of the spectral radius (no eigenvalue of $M\Delta$ can equal 1), and this direction applies also to real $\Delta$'s. The converse is proved by contradiction: assume there exists a perturbation $\Delta'$ such that $\rho(M\Delta') \ge 1$ at some frequency. Then $|\lambda_i(M\Delta')| \ge 1$ for some eigenvalue i, and there always exists another perturbation in the set, $\Delta'' = c\Delta'$, where c is a complex scalar with $|c| \le 1$, such that $\lambda_i(M\Delta'') = 1$ (real and positive), and therefore $\det(I - M\Delta'') = \prod_i (1 - \lambda_i(M\Delta'')) = 0$. Finally, (8.46) is simply the definition of $\max_\Delta$. $\Box$

Remark 1 The proof of (8.45) relies on adjusting the phase of an eigenvalue using a complex scalar c, and thus requires the perturbation to be complex.
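Theorem 8.2 can be probed numerically: for a stable M with $\bar\sigma(M) < 1$, every norm-bounded $\Delta$ gives $\rho(M\Delta) < 1$ and $\det(I - M\Delta) \ne 0$. A small random-sampling sketch (the matrix M below is my own construction, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

M = 0.6 * np.array([[0.5, 0.4], [-0.2, 0.3]])  # sigma_bar(M) < 1 by construction

for _ in range(200):
    # Random complex full perturbation, scaled so that sigma_bar(Delta) <= 1
    D = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    Delta = D / max(np.linalg.norm(D, 2), 1.0)
    rho = max(abs(np.linalg.eigvals(M @ Delta)))
    assert rho < 1.0                                       # condition (8.45)
    assert abs(np.linalg.det(np.eye(2) - M @ Delta)) > 0.0  # condition (8.43)

print("no sampled perturbation destabilizes M")
```

This only samples the perturbation set, of course; the point of the sections that follow is to replace such sampling by exact tests.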
Remark 2 In words, Theorem 8.2 tells us that we have stability if and only if the spectral radius of $M\Delta$ is less than 1 at all frequencies and for all allowed perturbations. The main problem here is of course that we have to test the condition for an infinite set of $\Delta$'s, and this is difficult to check numerically.

Remark 3 Theorem 8.1, which applies to both real and complex perturbations, forms the basis for the general definition of the structured singular value in (8.76).

8.6 RS for complex unstructured uncertainty

In this section, we consider the special case where $\Delta(s)$ is allowed to be any (full) complex transfer function matrix satisfying $\|\Delta\|_\infty \le 1$. This is often referred to as unstructured uncertainty or as full-block complex perturbation uncertainty.

Lemma 8.3 Let $\Delta$ be the set of all complex matrices such that $\bar\sigma(\Delta) \le 1$. Then the following holds:
$$ \max_\Delta \rho(M\Delta) = \max_\Delta \bar\sigma(M\Delta) = \bar\sigma(M) \qquad (8.47) $$

Proof: In general, the spectral radius ($\rho$) provides a lower bound on the spectral norm ($\bar\sigma$) (see (A.116)), and we have
$$ \max_\Delta \rho(M\Delta) \le \max_\Delta \bar\sigma(M\Delta) \le \max_\Delta \bar\sigma(\Delta)\,\bar\sigma(M) = \bar\sigma(M) \qquad (8.48) $$
where the second inequality in (8.48) follows since $\bar\sigma(AB) \le \bar\sigma(A)\bar\sigma(B)$. To show that we actually have equality, we need to show that for any M there exists an allowed $\Delta'$ such that $\rho(M\Delta') = \bar\sigma(M)$. Such a $\Delta'$ does indeed exist if we allow $\Delta'$ to be a full matrix, so that all directions in $\Delta'$ are allowed: select $\Delta' = V U^H$, where U and V are the matrices of left and right singular vectors of $M = U\Sigma V^H$. Then $\bar\sigma(\Delta') = 1$ and
$$ \rho(M\Delta') = \rho(U\Sigma V^H V U^H) = \rho(U\Sigma U^H) = \rho(\Sigma) = \bar\sigma(M) $$
The second-to-last equality follows since $U^H = U^{-1}$ and the eigenvalues are invariant under similarity transformations. $\Box$

Lemma 8.3 together with Theorem 8.2 directly yield the following theorem:

Theorem 8.4 RS for unstructured ("full") perturbations. Assume that the nominal system $M(s)$ is stable (NS) and that the perturbations $\Delta(s)$ are stable. Then the MΔ-system in Figure 8.3 is stable for all perturbations $\Delta$ satisfying $\|\Delta\|_\infty \le 1$ (i.e. we have RS) if and only if
$$ \bar\sigma(M(j\omega)) < 1 \ \ \forall\omega \iff \|M\|_\infty < 1 \qquad (8.49) $$
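The construction $\Delta' = V U^H$ in the proof of Lemma 8.3 is easy to verify numerically: it is an allowed full perturbation ($\bar\sigma(\Delta') = 1$) for which the spectral radius of $M\Delta'$ reaches $\bar\sigma(M)$. A sketch with a random M (my own test case):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

U, S, Vh = np.linalg.svd(M)        # M = U * diag(S) * V^H
Delta0 = Vh.conj().T @ U.conj().T  # Delta' = V U^H, with sigma_bar(Delta') = 1

rho = max(abs(np.linalg.eigvals(M @ Delta0)))
print(rho, S[0])  # spectral radius of M*Delta' equals the largest singular value of M
```

Since $M\Delta' = U\Sigma U^H$ is similar to $\Sigma$, its eigenvalues are exactly the singular values of M, which is the "worst direction" argument of the proof.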
Remark 1 Condition (8.49) may be rewritten as
$$ \mathrm{RS} \iff \bar\sigma(M(j\omega))\,\bar\sigma(\Delta(j\omega)) < 1, \quad \forall\omega,\ \forall\Delta \qquad (8.50) $$
The sufficiency of (8.50) ($\Leftarrow$) also follows directly from the small gain theorem by choosing $L = M\Delta$. The small gain theorem applies to any operator norm satisfying $\|AB\| \le \|A\| \cdot \|B\|$.

Remark 2 An important reason for using the $H_\infty$ norm to describe model uncertainty is that the stability condition in (8.49) is both necessary and sufficient. In contrast, use of the $H_2$ norm yields neither necessary nor sufficient conditions for stability. We do not get sufficiency since the $H_2$ norm does not in general satisfy $\|AB\| \le \|A\| \cdot \|B\|$.

8.6.1 Application of the unstructured RS-condition

We will now present necessary and sufficient conditions for robust stability (RS) for each of the six single unstructured perturbations in Figure 8.5, with
$$ E = W_2 \Delta W_1, \qquad \|\Delta\|_\infty \le 1 \qquad (8.51) $$
To derive the matrix M we simply "isolate" the perturbation, and determine the transfer function matrix
$$ M = W_1 M_0 W_2 \qquad (8.52) $$
from the output to the input of the perturbation, where $M_0$ for each of the six cases becomes (disregarding some negative signs which do not affect the subsequent robustness condition):
$$ G_p = G + E_A: \quad M_0 = K(I + GK)^{-1} = KS \qquad (8.53) $$
$$ G_p = G(I + E_I): \quad M_0 = KG(I + KG)^{-1} = T_I \qquad (8.54) $$
$$ G_p = (I + E_O)G: \quad M_0 = GK(I + GK)^{-1} = T \qquad (8.55) $$
$$ G_p = G(I - E_{iA}G)^{-1}: \quad M_0 = (I + GK)^{-1}G = SG \qquad (8.56) $$
$$ G_p = G(I - E_{iI})^{-1}: \quad M_0 = (I + KG)^{-1} = S_I \qquad (8.57) $$
$$ G_p = (I - E_{iO})^{-1}G: \quad M_0 = (I + GK)^{-1} = S \qquad (8.58) $$
Note that the sign of $M_0$ does not matter as it may be absorbed into $\Delta$. Theorem 8.4 then yields
$$ \mathrm{RS} \iff \|W_1 M_0 W_2\|_\infty < 1 \qquad (8.59) $$
For instance, from (8.59) we get for multiplicative input uncertainty with a scalar weight:
$$ \mathrm{RS}\ \ \forall G_p = G(1 + w_I\Delta_I),\ \|\Delta_I\|_\infty \le 1 \iff \|w_I T_I\|_\infty < 1 \qquad (8.60) $$
Note that the SISO-condition (7.53) follows as a special case of (8.60). Similarly, (7.33) follows as a special case of the inverse multiplicative output uncertainty in (8.58):
$$ \mathrm{RS}\ \ \forall G_p = (I - w_{iO}\Delta_{iO})^{-1}G,\ \|\Delta_{iO}\|_\infty \le 1 \iff \|w_{iO} S\|_\infty < 1 \qquad (8.61) $$
In general, the unstructured uncertainty descriptions in terms of a single perturbation are not "tight" (in the sense that at each frequency all complex perturbations satisfying $\bar\sigma(\Delta(j\omega)) \le 1$ may not be possible in practice). Thus, the above RS-conditions are often conservative. In order to get tighter conditions we must use a tighter uncertainty description in terms of a block-diagonal $\Delta$.

8.6.2 RS for coprime factor uncertainty

Figure 8.9: Coprime uncertainty

Robust stability bounds in terms of the $H_\infty$ norm (RS $\iff \|M\|_\infty < 1$) are in general only tight when there is a single full perturbation block. An "exception" to this is when the uncertainty blocks enter or exit from the same location in the block diagram, because they can then be stacked on top of each other or side-by-side, in an overall $\Delta$ which is then a full matrix. If we norm-bound the combined (stacked) uncertainty, we then get a tight condition for RS in terms of $\|M\|_\infty$. One important uncertainty description that falls into this category is the coprime uncertainty description shown in Figure 8.9, for which the set of plants is
$$ G_p = (M_l + \Delta_M)^{-1}(N_l + \Delta_N), \qquad \|[\,\Delta_N\ \ \Delta_M\,]\|_\infty \le \epsilon \qquad (8.62) $$
where $G = M_l^{-1}N_l$ is a left coprime factorization of the nominal plant. This uncertainty description is surprisingly general, it allows both zeros and poles to cross into the right-half plane, and it has proved to be very useful in applications (McFarlane and Glover, 1990).
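Condition (8.60) is a frequency-by-frequency test and is easy to evaluate on a grid. For a scalar illustration (the plant, controller and weight below are my own choices, not from the text):

```python
import numpy as np

def L(s):   # loop transfer function, illustrative: G = 1/(s+1), K = 0.5
    return 0.5 / (s + 1)

def TI(s):  # input complementary sensitivity; for SISO, T_I = T = L/(1+L)
    return L(s) / (1 + L(s))

def wI(s):  # uncertainty weight of the form (8.28): r0 = 0.2, rinf = 2, tau = 1
    return (s + 0.2) / (0.5 * s + 1)

omega = np.logspace(-3, 3, 500)
peak = max(abs(wI(1j * w) * TI(1j * w)) for w in omega)
print(peak)  # the grid estimate of ||wI*TI||_inf
```

Since the peak is well below 1 here, (8.60) certifies robust stability for this multiplicative input uncertainty set.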
To test for robust stability we rearrange the system into the MΔ-structure in Figure 8.3, with
$$ \Delta = [\,\Delta_N\ \ \Delta_M\,], \qquad M = \begin{bmatrix} K \\ I \end{bmatrix}(I + GK)^{-1} M_l^{-1} \qquad (8.63) $$
We then get from Theorem 8.4:
$$ \mathrm{RS}\ \ \forall\,\|[\,\Delta_N\ \ \Delta_M\,]\|_\infty \le \epsilon \iff \|M\|_\infty < 1/\epsilon \qquad (8.64) $$
The above robust stability result is central to the $H_\infty$ loop-shaping design procedure discussed in Chapter 9. The coprime uncertainty description provides a good "generic" uncertainty description for cases where we do not use any specific a priori uncertainty information. Note that the uncertainty magnitude is $\epsilon$, so it is not normalized to be less than 1 in this case. This is because this uncertainty description is most often used in a controller design procedure where the objective is to maximize the magnitude of the uncertainty ($\epsilon$) such that RS is maintained. In any case, it is reasonable to use a normalized coprime factorization of the nominal plant.

Remark. In (8.62) we bound the combined (stacked) uncertainty, $\|[\,\Delta_N\ \ \Delta_M\,]\|_\infty \le \epsilon$, which is not quite the same as bounding the individual blocks, $\|\Delta_N\|_\infty \le \epsilon$ and $\|\Delta_M\|_\infty \le \epsilon$. However, from (A.45) we see that these two approaches differ at most by a factor of $\sqrt{2}$, so it is not an important issue from a practical point of view.

Exercise 8.15 Consider combined multiplicative and inverse multiplicative uncertainty at the output, $G_p = (I - \Delta_{iO}W_{iO})^{-1}(I + \Delta_O W_O)G$, where we choose to norm-bound the combined uncertainty, $\|[\,\Delta_O\ \ \Delta_{iO}\,]\|_\infty \le 1$. Make a block diagram of the uncertain plant, and derive a necessary and sufficient condition for robust stability of the closed-loop system.

8.7 RS with structured uncertainty: motivation

Consider now the presence of structured uncertainty, where $\Delta = \mathrm{diag}\{\Delta_i\}$ is block-diagonal. To test for robust stability we rearrange the system into the MΔ-structure, and we have from (8.49):
$$ \mathrm{RS \ if} \quad \bar\sigma(M(j\omega)) < 1, \quad \forall\omega \qquad (8.65) $$
We have here written "if" rather than "if and only if", since this condition is only necessary and sufficient when $\Delta$ has "no structure" (full-block uncertainty). The question is whether we can take advantage of the fact that $\Delta = \mathrm{diag}\{\Delta_i\}$ is structured to obtain an RS-condition which is tighter than (8.65). One idea is to make use of the fact that
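For a SISO plant, the $H_\infty$ norm in (8.64) can be computed on a frequency grid using magnitudes only, since for a normalized left coprime factorization $|M_l(j\omega)|^2 = 1/(1 + |G(j\omega)|^2)$ on the imaginary axis (the phase of $M_l$ does not affect the norm). The sketch below uses my own scalar example $G = 1/(s+1)$, $K = 1$ (not from the text):

```python
import numpy as np

def G(s):
    return 1.0 / (s + 1)

K = 1.0
omega = np.logspace(-3, 3, 2000)

norms = []
for w in omega:
    g = G(1j * w)
    S = 1.0 / (1.0 + g * K)              # sensitivity (I + GK)^{-1}
    col = np.sqrt(abs(K) ** 2 + 1.0)     # sigma_bar of the column [K; I]
    Ml_inv = np.sqrt(1.0 + abs(g) ** 2)  # |M_l^{-1}| for a normalized l.c.f.
    norms.append(col * abs(S) * Ml_inv)

hinf = max(norms)       # grid estimate of ||[K; I](I+GK)^{-1} M_l^{-1}||_inf
eps_max = 1.0 / hinf    # largest coprime uncertainty magnitude giving RS
print(hinf, eps_max)
```

For this example the norm tends to $\sqrt{2}$, so the closed loop tolerates normalized coprime factor uncertainty up to $\epsilon \approx 0.71$.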
stability must be independent of scaling. To this effect, introduce the block-diagonal scaling matrix
$$ D = \mathrm{diag}\{d_i I_i\} \qquad (8.66) $$
where $d_i$ is a scalar and $I_i$ is an identity matrix of the same dimension as the i'th perturbation block, $\Delta_i$. Now rescale the inputs and outputs to M and $\Delta$ by inserting the matrices D and $D^{-1}$ on both sides, as shown in Figure 8.10. This clearly has no effect on stability. Note that with the chosen form for the scalings we have for each perturbation block $\Delta_i = d_i \Delta_i d_i^{-1}$, that is, the uncertainty is unchanged and $D\Delta D^{-1} = \Delta$. This means that (8.65) must also apply if we replace M by $DMD^{-1}$ (see Figure 8.10), and we have
$$ \mathrm{RS \ if} \quad \bar\sigma(DMD^{-1}) < 1, \quad \forall\omega \qquad (8.67) $$
This applies for any D in (8.66), and therefore the "most improved" (least conservative) RS-condition is obtained by minimizing at each frequency the scaled singular value, and we have
$$ \mathrm{RS \ if} \quad \min_{D(\omega) \in \mathcal{D}} \bar\sigma\bigl(D(\omega)M(j\omega)D(\omega)^{-1}\bigr) < 1, \quad \forall\omega \qquad (8.68) $$
where $\mathcal{D}$ is the set of block-diagonal matrices whose structure is compatible to that of $\Delta$, i.e. $\Delta D = D\Delta$. We will return with more examples of this compatibility later.

Figure 8.10: Use of block-diagonal scalings, $\Delta D = D\Delta$

Note that when $\Delta$ is a full matrix, we must select $D = dI$, and we have $\bar\sigma(DMD^{-1}) = \bar\sigma(M)$, so as expected (8.68) is then identical to (8.65). However, when $\Delta$ has structure, we get more degrees of freedom in D, and $\bar\sigma(DMD^{-1})$ may be significantly smaller than $\bar\sigma(M)$.

Remark 1 Historically, the RS-condition in (8.68) directly motivated the introduction of the structured singular value, $\mu(M)$, discussed in detail in the next section.
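The effect of the scalings in (8.68) can be dramatic. For the (purely illustrative) triangular matrix below, $\bar\sigma(M)$ is about 100, yet a diagonal scaling $D = \mathrm{diag}(d, 1)$ compatible with a diagonal $\Delta$ brings the scaled norm down to essentially 1:

```python
import numpy as np

M = np.array([[1.0, 100.0],
              [0.0, 1.0]])

def scaled_norm(d):
    D = np.diag([d, 1.0])
    return np.linalg.norm(D @ M @ np.linalg.inv(D), 2)

print(np.linalg.norm(M, 2))  # ~100: the full-block test (8.65) is very conservative
best = min(scaled_norm(d) for d in np.logspace(-6, 0, 200))
print(best)                  # approaches 1 as d -> 0
```

So for a diagonal perturbation structure, the unscaled condition (8.65) would demand $\bar\sigma(M) < 1$ (hopeless here), while the scaled condition (8.68) only asks for a number barely above 1.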
Condition (8.68) is essentially a scaled version of the small gain theorem. Thus, a similar condition applies when we use other matrix norms. Any matrix norm may be used, for example the Frobenius norm, $\|M\|_F$, or any induced matrix norm such as $\|M\|_{i1}$ (maximum column sum), $\|M\|_{i\infty}$ (maximum row sum), or $\|M\|_{i2} = \bar\sigma(M)$, which is the one we will use. Although in some cases it may be convenient to use other norms, we usually prefer $\bar\sigma$ because for this norm we get a necessary and sufficient RS-condition. In fact, for block-diagonal complex perturbations we generally have that $\mu(M)$ is very close to $\min_D \bar\sigma(DMD^{-1})$.

8.8 The structured singular value

The structured singular value (denoted Mu, mu, SSV or μ) is a function which provides a generalization of the singular value, $\bar\sigma$, and the spectral radius, $\rho$. We will use μ to get necessary and sufficient conditions for robust stability, and also for robust performance. How is μ defined? A simple statement is:

Find the smallest structured $\Delta$ (measured in terms of $\bar\sigma(\Delta)$) which makes the matrix $I - M\Delta$ singular; then $\mu(M) = 1/\bar\sigma(\Delta)$.

Mathematically,
$$ \mu(M)^{-1} \triangleq \min_\Delta \{\, \bar\sigma(\Delta) \mid \det(I - M\Delta) = 0 \ \text{for structured } \Delta \,\} \qquad (8.69) $$
Clearly, $\mu(M)$ depends not only on M but also on the allowed structure for $\Delta$. This is sometimes shown explicitly by using the notation $\mu_\Delta(M)$.

Example 8.5 Full perturbation ($\Delta$ is unstructured). For the case where $\Delta$ is "unstructured" (a full matrix), the smallest $\Delta$ which yields singularity has $\bar\sigma(\Delta) = 1/\bar\sigma(M)$, and we get
$$ \mu(M) = \bar\sigma(M) \qquad (8.70) $$
A particular smallest $\Delta$ which achieves this is
$$ \Delta = \frac{1}{\bar\sigma(M)}\,v_1 u_1^H \qquad (8.71) $$
Consider
$$ M = \begin{bmatrix} 2 & 2 \\ -1 & -1 \end{bmatrix} \qquad (8.72) $$
for which $\bar\sigma(M) = 3.162$, so $\mu(M) = 3.162$ when $\Delta$ is a full matrix. From (8.71), a smallest full perturbation making $I - M\Delta$ singular is
$$ \Delta = \frac{1}{3.162}\,v_1 u_1^H = \begin{bmatrix} 0.200 & -0.100 \\ 0.200 & -0.100 \end{bmatrix}, \qquad \bar\sigma(\Delta) = 0.316 $$

Example 8.5 continued. Diagonal perturbation ($\Delta$ is structured). For the matrix M in (8.72), the smallest diagonal $\Delta$ which makes $\det(I - M\Delta) = 0$ is
$$ \Delta = \frac{1}{3}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (8.73) $$
with $\bar\sigma(\Delta) = 0.333$. Thus $\mu(M) = 3$ when $\Delta$ is a diagonal matrix. The above example shows that μ depends on the structure of $\Delta$: if we restrict $\Delta$ to be diagonal, then we need a larger perturbation to make $\det(I - M\Delta) = 0$.

The following example demonstrates that μ also depends on whether the perturbation is real or complex.

Example 8.6 μ of a scalar. If M is a scalar, then in most cases $\mu(M) = |M|$. For a scalar M and perturbation $\delta$ we have:
$$ \delta \ \text{complex:} \quad \mu(M) = |M| \qquad (8.74) $$
$$ \delta \ \text{real:} \quad \mu(M) = |M| \ \text{for real } M, \ 0 \ \text{otherwise} \qquad (8.75) $$
This follows since for complex $\delta$ we can select the phase of $\delta$ such that $M\delta$ is real, whereas singularity of $1 - M\delta$ requires $\delta = 1/M$, which is impossible when $\delta$ is real and M has an imaginary component, so in this case $\mu(M) = 0$.

The definition of μ in (8.69) involves varying $\bar\sigma(\Delta)$. However, we prefer to normalize $\Delta$ such that $\bar\sigma(\Delta) \le 1$. We can do this by scaling $\Delta$ by a factor $k_m$, and looking for the smallest $k_m$ which makes the matrix $I - k_m M\Delta$ singular; μ is then the reciprocal of this smallest $k_m$. This results in the following alternative definition of μ.

Definition 8.1 Structured singular value. Let M be a given complex matrix and let $\Delta = \mathrm{diag}\{\Delta_i\}$ denote a set of complex matrices with $\bar\sigma(\Delta) \le 1$ and with a given block-diagonal structure (in which some of the blocks may be repeated and some may be restricted to be real). The real non-negative function $\mu(M)$, called the structured singular value, is defined by
$$ \mu(M) \triangleq \frac{1}{\min\{\, k_m \mid \det(I - k_m M\Delta) = 0 \ \text{for structured } \Delta,\ \bar\sigma(\Delta) \le 1 \,\}} \qquad (8.76) $$
If no such structured $\Delta$ exists, then $\mu(M) = 0$.

A value of $\mu = 1$ means that there exists a perturbation with $\bar\sigma(\Delta) = 1$ which is just large enough to make $I - M\Delta$ singular. A larger value of μ is "bad", as it means that a smaller perturbation makes $I - M\Delta$ singular, whereas a smaller value of μ is "good".
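Example 8.5 and its continuation can be checked numerically: for $M = [\,2\ 2;\ -1\ -1\,]$, a full perturbation of size $1/\bar\sigma(M) = 0.316$ already makes $I - M\Delta$ singular, while the smallest diagonal perturbation, $\mathrm{diag}(1/3, -1/3)$, is larger:

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [-1.0, -1.0]])

# Full perturbation: mu = sigma_bar(M) = sqrt(10) ~ 3.162, cf. (8.70)-(8.72)
U, S, Vh = np.linalg.svd(M)
Delta_full = (1.0 / S[0]) * np.outer(Vh.conj().T[:, 0], U.conj().T[0, :])
print(np.linalg.det(np.eye(2) - M @ Delta_full))  # ~0: singular

# Diagonal perturbation: the smallest is diag(1/3, -1/3), so mu = 3, cf. (8.73)
Delta_diag = np.diag([1.0 / 3.0, -1.0 / 3.0])
print(np.linalg.det(np.eye(2) - M @ Delta_diag))  # ~0 as well
```

Restricting the structure from full to diagonal thus raises the required perturbation size from 0.316 to 0.333, i.e. lowers μ from 3.162 to 3.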
8.8.1 Remarks on the definition of μ

1. The structured singular value was introduced by Doyle (1982). At the same time (in fact, in the same issue of the same journal) Safonov (1982) introduced the Multivariable Stability Margin $k_m$ for a diagonally perturbed system as the inverse of μ, that is, $k_m(M) = 1/\mu(M)$. In many respects, this is a more natural definition of a robustness margin. However, $\mu(M)$ has a number of other advantages, such as providing a generalization of the spectral radius, $\rho(M)$, and the spectral norm, $\bar\sigma(M)$.

2. The $\Delta$ corresponding to the smallest $k_m$ in (8.76) will always have $\bar\sigma(\Delta) = 1$, since if $\det(I - k_m M\Delta') = 0$ for some $\Delta'$ with $\bar\sigma(\Delta') < 1$, then $1/k_m$ cannot be the structured singular value of M: there exists a smaller scalar $k_m' < k_m$ such that $\det(I - k_m' M\Delta) = 0$, where $\Delta = (1/\bar\sigma(\Delta'))\Delta'$ and $\bar\sigma(\Delta) = 1$.

3. Note that with $k_m = 0$ we obtain $I - k_m M\Delta = I$, which is clearly non-singular. Thus, one possible way to obtain μ numerically is to start with $k_m = 0$ and gradually increase $k_m$ until we first find an allowed $\Delta$ with $\bar\sigma(\Delta) = 1$ such that $I - k_m M\Delta$ is singular (this value of $k_m$ is then $1/\mu$).

4. The sequence of M and $\Delta$ in the definition of μ does not matter. This follows from the identity (A.12), which yields
$$ \det(I - k_m M\Delta) = \det(I - k_m \Delta M) \qquad (8.77) $$

5. In most cases M and $\Delta$ are square, but this need not be the case. If they are non-square, then we make use of (8.77) and work with either $M\Delta$ or $\Delta M$ (whichever has the lowest dimension).

8.8.2 Properties of μ for real and complex Δ

Two properties of μ which hold for both real and complex perturbations are:

1. $\mu(\alpha M) = |\alpha|\,\mu(M)$ for any real scalar $\alpha$.

2. Let $\Delta = \mathrm{diag}\{\Delta_1, \Delta_2\}$ be a block-diagonal perturbation (in which $\Delta_1$ and $\Delta_2$ may have additional structure), and let M be partitioned accordingly. Then
$$ \mu_\Delta(M) \ge \max\{\, \mu_{\Delta_1}(M_{11}),\ \mu_{\Delta_2}(M_{22}) \,\} \qquad (8.78) $$
Proof: Consider $\det(I - \frac{1}{\mu}M\Delta)$, where $\mu = \mu_\Delta(M)$, and use Schur's formula in (A.14) with $A_{11} = I - \frac{1}{\mu}M_{11}\Delta_1$ and $A_{22} = I - \frac{1}{\mu}M_{22}\Delta_2$. $\Box$
(8.78) simply says that robustness with respect to two perturbations taken together is at least as bad as for the worst perturbation considered alone. This agrees with our intuition that we cannot improve robust stability by including another uncertain perturbation.

The remainder of this section deals with the properties and computation of μ. Readers who are primarily interested in the practical use of μ may skip most of this material.
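The search over $k_m$ described in remark 3 can be carried out directly for the diagonal example above. Since M in (8.72) has rank one, $M\Delta$ has rank one and $\det(I - k_m M\Delta) = 1 - k_m\,\mathrm{tr}(M\Delta)$, so $I - k_m M\Delta$ first becomes singular at $k_m = 1/\mathrm{tr}(M\Delta)$; brute force over real diagonal perturbations then recovers $\mu = 3$:

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [-1.0, -1.0]])

# Real diagonal Delta = diag(d1, d2) with sigma_bar(Delta) = max(|d1|, |d2|) <= 1.
# det(I - km*M*Delta) = 1 - km*trace(M @ Delta) because M @ Delta has rank one,
# so the smallest admissible km over all Delta is 1/max(trace), i.e. mu = max(trace).
grid = np.linspace(-1.0, 1.0, 201)
mu = max(np.trace(M @ np.diag([d1, d2])) for d1 in grid for d2 in grid)
print(mu)  # 3.0, attained at Delta = diag(1, -1)
```

This matches Definition 8.1: the worst normalized diagonal perturbation is $\mathrm{diag}(1, -1)$, and scaling it by $k_m = 1/3$ gives exactly the perturbation $\mathrm{diag}(1/3, -1/3)$ of (8.73).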
e.79).79) and (8.82). The results are mainly based on the following result.46).80) Proof: Follows directly from (8.8.80) and (8. Lemma 8.3 for complex ¡ When all the blocks in ¡ are complex.g. ´«Å µ « ´Å µ for any (complex) scalar «.g.79) since there are no degreesoffreedom for the ¾ maximization. may be computed relatively easily.81). the upper bounds given below for complex perturbations. However. generally hold only for complex perturbations. whereas selecting ¡ full gives the most degreesoffreedom. which may be viewed as another deﬁnition of that applies for complex ¡ only.81) for complex perturbations is bounded by the spectral radius and the singular value (spectral norm): ´Å µ ´Å µ ´Å µ (8. For a full block complex perturbation we have from (8. 8. This follows because complex perturbations include ´Å µ real perturbations as a special case.87). ´Å µ in (8.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿½ In addition. e. For a repeated scalar complex perturbation we have ¡ ÆÁ ´Æ × ÓÑÔÐ Ü × Ð Öµ ´Å µ ´Å µ (8. 1. the lower bounds. 3. since selecting ¡ ÆÁ gives the fewest degreesoffreedom for the optimization in (8. and the equivalence between (8.82) This follows from (8. 2. . also hold for real or mixed ¡ ´Å µ real/complex perturbations ¡. Ñ Ò ¾ ´ Å ½ µ in (8. This is discussed below and in more detail in the survey paper by Packard and Doyle (1993).43) ¾ Properties of for complex perturbations Most of the properties below follow easily from (8.79).79) ´Å µ Ñ Ü¡ ´¡µ ½ ´Å ¡µ Proof: The lemma follows directly from the deﬁnition of and (8. ´Å µ ´Å µ (8.5 For complex perturbations ¡ with ´¡µ ½: (8.47): ¡ ÙÐÐ Ñ ØÖ Ü 4.
The second equality applies since ´ eigenvalue properties in the Appendix).87) This optimization is convex in (i. The result (8. Unfortunately.79). has only one minimum. the worst known example has an upper bound which is about 15% larger than (Balas et al. Then ´ÅÍ µ ´Å µ ´ÍÅ µ Å ¡¼ where ´¡¼ µ ´Í ¡µ (8. The key step is the third equality which applies ¾ only when ¡ ¡ . Then for complex ¡ µ ´ µ (by the The ﬁrst equality is (8.86) is motivated by combining (8. The fourth equality again follows from (8.79). satisfy ¡ ¡ ). Í with the ´Å µ Ñ ÜÍ ¾Í ´ÅÍ µ (8.86) is not convex and so it may be difﬁcult to use in calculating numerically. . Then it follows from (8. the global minimum) and it may be shown (Doyle. numerical evidence suggests that the bound is tight (within a few percent) for 4 blocks or more. Improved upper bound.e.83) Proof: Follows from (8. that is.84) and (8. 1993). Consider any unitary matrix Í with the same structure as ¡.85) ¡´ Å µ Ñ¡ Ü ´ Å ¡µ 7. It follows from a generalization of the maximum modulus theorem for rational ¾ functions.. the optimization in (8.¿½ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 5.79) by writing ÅÍ ¡ and so Í may always be absorbed into ¡. Deﬁne Í as the set of all unitary matrices same blockdiagonal structure as ¡. ´¡µ. ¡ ¡.84) ´ Åµ Proof: ´Å µ Ò ´ Å ½ µ ´ Åµ ´Å µ follows from Ñ¡ Ü ´Å ¡ µ Ñ¡ Ü ´Å ¡µ (8.82) to yield ´Å µ Ñ Ü ´ÅÍ µ Í ¾Í The surprise is that this is always an equality. Deﬁne to be the set of matrices which commute with ¡ (i. Consider any matrix which commutes with ¡. ¾ 6. Then ´Å µ ¡ ´Å µ (8.e. 1982) that the inequality is in fact an equality if there are 3 or fewer blocks in ¡.82) that ´Å µ ÑÒ ¾ ´ Å ½ µ (8. Improved lower bound.83) and (8. Furthermore. 8.86) Proof: The proof of this important result is given by Doyle (1982) and Packard and Doyle (1993).
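Both the bracketing (8.82) and the improved lower bound (8.86) can be checked on the running example: with diagonal $\Delta$, $\mu(M) = 3$ lies between $\rho(M) = 1$ and $\bar\sigma(M) = \sqrt{10} \approx 3.16$, and maximizing $\rho(MU)$ over diagonal unitary U (one scalar fixed to 1) attains exactly 3:

```python
import numpy as np

M = np.array([[2.0, 2.0],
              [-1.0, -1.0]])

rho = max(abs(np.linalg.eigvals(M)))   # spectral radius, lower bound in (8.82)
sigma = np.linalg.norm(M, 2)           # spectral norm, upper bound in (8.82)

# Lower bound (8.86): U = diag(1, exp(j*theta)), sweeping the free phase
thetas = np.linspace(0.0, 2.0 * np.pi, 721)
mu_lb = max(
    max(abs(np.linalg.eigvals(M @ np.diag([1.0, np.exp(1j * t)]))))
    for t in thetas
)
print(rho, mu_lb, sigma)  # 1.0 <= 3.0 <= 3.162...
```

Here $MU$ has eigenvalues 0 and $2 - e^{j\theta}$, so the maximum $|2 - e^{j\theta}| = 3$ is reached at $\theta = \pi$, reproducing $\mu = 3$ exactly, in line with the claim that (8.86) is an equality.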
µ when ¡ has a structure similar 11.86) by ﬁxing one of the corresponding unitary scalars in Í equal to 1.94) Note: (8. Some special cases of (8. Let us extend this set by ½ and with the same structure as ¡ .87). Proof: Let ¼ ½ Similarly. 1993). we may assume the blocks in to be À ¼. see also Exercise 8. When we maximize over ¡ then Î generates a certain set of matrices with ´Î µ ½.94) is stated incorrectly in Doyle (1982) since it is not speciﬁed that have the same structure as .e. 1988a). then one may without loss of generality set Ò ½. and Doyle.88) ½Á ¼ (8. One can always simplify the optimization in (8. ¡ must .20.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ’s which commute with ¡ are ¿½ Some examples of ¡ ¡ ¡ ¡ ÆÁ ÙÐÐ Ñ ØÖ Ü ÙÐÐ Ñ ØÖ Ü Á ¡½ ´ ÙÐÐµ ¼ ¼ ¡¾ ´ ÙÐÐµ ¡½ ´ ÙÐÐµ Æ¾ Á Æ¿ Æ (8. Use the fact that ´ µ Ñ Ü¡ ´¡ µ Ñ Ü¡ ´Î µ ´ µ where Î ¡ ´ µ. 9.91) In short.87) by ﬁxing one of the scalar blocks in equal to 1.92) ¡´ µ ´ µ ¡ ´ µ Ò and note that ´ Å ½ µ ´ ¼ Å ¼ ½ µ. one may simplify the optimization in (8.92): (a) If is a full matrix then the structure of ¡ is a full matrix. For example.90) ½Á ¾ ´ ÙÐÐµ ¿ (8. Without affecting the optimization in (8. ¡” Proof: The proof is from (Skogestad and Morari. and we simply µ ´ µ ´ µ (which is not a very exciting result since we always get ´ have ´ µ ´ µ ´ µ ´ µ). and “ Here the subscript “¡ ” denotes the structure of the matrix denotes the structure of ¡. This follows from Property 1 with ½.89) ¼ ¾Á (8. ¾ ¡´ µ ´ µ ¡´ µ ¡ (8. i. The following property is useful for ﬁnding ´ to that of or : (8. and for scalars ¼ (Packard Hermitian positive deﬁnite. for cases where ¡ has one or more scalar blocks. (b) If ¡ has the same structure as (e.93) . they are both diagonal) then ¡´ µ ´ µ ¡´ µ (8.g. let ½ ¾ Ò . 10. we see that the structures of ¡ and are “opposites”. We maximizing over all matrices Î with ´Î µ µ Ñ ÜÎ ´Î µ ´ µ ¾ then get ´ Î ´ µ ´ µ.
(c) If $\Delta = \delta I$ (i.e. $\Delta$ consists of repeated scalars), then $\mu_\Delta$ equals the spectral radius, and we get the inequality $\rho(AB) \le \bar\sigma(A)\,\mu_{\tilde\Delta}(B)$.

Remark. Above we have used $\min_D$. To be mathematically correct, we should have used $\inf_D$, because the set of allowed $D$'s is not bounded, and therefore the exact minimum may not be achieved (although we may get arbitrarily close). The use of $\max_\Delta$ (rather than $\sup_\Delta$) is mathematically correct since the set $\Delta$ is closed (with $\bar\sigma(\Delta) \le 1$).

12. A generalization of (8.92) and (8.93) bounds $\mu_\Delta$ of a product involving an arbitrary norm-bounded matrix $R$, $\bar\sigma(R) \le 1$, with structure $\Delta_R$, in terms of $\mu$ with respect to the enlarged structure $\hat\Delta = \mathrm{diag}(\Delta, \Delta_R)$; see (8.95) and (8.96).

13. The following is a further generalization of these bounds. Assume $M$ is an LFT of $R$:
$$M = N_{11} + N_{12} R (I - N_{22} R)^{-1} N_{21}$$
The problem is to find an upper bound on $R$, $\bar\sigma(R) \le c$, which guarantees that $\mu_\Delta(M) < 1$ when $\mu_\Delta(N_{11}) < 1$. Skogestad and Morari (1988a) show that the best upper bound is the $c$ which solves
$$\mu_{\hat\Delta} \begin{bmatrix} N_{11} & N_{12} \\ cN_{21} & cN_{22} \end{bmatrix} = 1, \qquad \hat\Delta = \mathrm{diag}(\Delta, \Delta_R) \qquad (8.97)$$
and $c$ is easily computed using skewed-$\mu$. Given a condition $\mu_{\hat\Delta}(N) < 1$ (for RS or RP), (8.97) may be used to derive a sufficient loop-shaping bound on a transfer function of interest; for example, $R$ may be $S$, $T$, $L$, $L^{-1}$ or $K$.

Example 8.7 Let $M$ and $\Delta$ be complex $2 \times 2$ matrices:
$$M = \begin{bmatrix} a & a \\ b & b \end{bmatrix}, \qquad \Delta = \mathrm{diag}(\delta_1, \delta_2)$$
Then
$$\mu(M) = \left\{ \begin{array}{ll} |a| + |b| & \mbox{for } \Delta = \mathrm{diag}(\delta_1, \delta_2) \\ \sqrt{2|a|^2 + 2|b|^2} & \mbox{for } \Delta \mbox{ a full matrix} \\ |a + b| & \mbox{for } \Delta = \delta I \end{array} \right. \qquad (8.98)$$
Proof: For $\Delta = \delta I$, $\mu(M) = \rho(M) = |a + b|$, since $M$ is singular and its nonzero eigenvalue is $\lambda = \mathrm{tr}(M) = a + b$. For $\Delta$ full, $\mu(M) = \bar\sigma(M) = \|M\|_F = \sqrt{2|a|^2 + 2|b|^2}$, since $M$ is singular and the nonzero singular value of a rank-one matrix equals its Frobenius norm, see (A.126).

For a diagonal $\Delta$, it is interesting to consider three different proofs of the result $\mu(M) = |a| + |b|$:
(a) A direct calculation based on the definition of $\mu$.
(b) Use of the lower "bound" $\mu(M) = \max_U \rho(MU)$ in (8.86) (which is always exact).
(c) Use of the upper bound $\min_D \bar\sigma(DMD^{-1})$ in (8.87) (which is exact here since $\Delta$ has only two "blocks").
We will use approach (a) here, and leave (b) and (c) for Exercise 8.16. We have
$$M\Delta = \begin{bmatrix} a\delta_1 & a\delta_2 \\ b\delta_1 & b\delta_2 \end{bmatrix}, \qquad \det(I - M\Delta) = 1 - a\delta_1 - b\delta_2 \qquad (8.99)$$
From (8.77), $\mu$ is the inverse of the smallest $\bar\sigma(\Delta)$ which makes this matrix singular. The smallest $\delta_1$ and $\delta_2$ are obtained when $|\delta_1| = |\delta_2| = 1/(|a| + |b|)$ and the phases of $\delta_1$ and $\delta_2$ are adjusted such that $a\delta_1 + b\delta_2 = 1$. We therefore get $\mu(M) = |a| + |b|$. $\Box$

Exercise 8.16
(b) For $M$ in (8.98) and a diagonal $\Delta$, show that $\mu(M) = |a| + |b|$ using the lower "bound" $\mu(M) = \max_U \rho(MU)$ (which is always exact). Hint: Use $U = \mathrm{diag}(e^{j\theta_1}, e^{j\theta_2})$ (the blocks in $U$ are unitary scalars, and we may fix one of them equal to 1).
(c) For $M$ in (8.98) and a diagonal $\Delta$, show that $\mu(M) = |a| + |b|$ using the upper bound $\min_D \bar\sigma(DMD^{-1})$ (which is exact in this case since $\Delta$ has two "blocks"). Solution: Use $D = \mathrm{diag}(d, 1)$. Since $DMD^{-1}$ is singular, we have from (A.126)
$$\bar\sigma(DMD^{-1}) = \|DMD^{-1}\|_F = \sqrt{|a|^2 + |a|^2 d^2 + |b|^2/d^2 + |b|^2} \qquad (8.100)$$
which we want to minimize with respect to $d$. The solution is $d^2 = |b|/|a|$, which gives $\bar\sigma(DMD^{-1}) = \sqrt{|a|^2 + |b|^2 + 2|a||b|} = |a| + |b|$.

Example 8.8 Let $M$ be a partitioned matrix with both diagonal blocks equal to zero:
$$M = \begin{bmatrix} 0 & A \\ B & 0 \end{bmatrix}$$
Then
$$\mu(M) = \left\{ \begin{array}{ll} \sqrt{\rho(AB)} & \mbox{for } \Delta = \delta I \\ \sqrt{\bar\sigma(A)\bar\sigma(B)} & \mbox{for } \Delta = \mathrm{diag}(\Delta_1, \Delta_2),\ \Delta_1, \Delta_2 \mbox{ full} \\ \max(\bar\sigma(A),\ \bar\sigma(B)) & \mbox{for } \Delta \mbox{ a full matrix} \end{array} \right. \qquad (8.101)$$
Proof: From the definition of eigenvalues and Schur's formula (A.14), we get $\lambda_i(M)^2 = \lambda_i(AB)$, and $\rho(M) = \sqrt{\rho(AB)}$ follows. $\bar\sigma(M) = \max(\bar\sigma(A), \bar\sigma(B))$ follows since $M^H M = \mathrm{diag}(B^H B,\ A^H A)$. $\mu(M) = \sqrt{\bar\sigma(A)\bar\sigma(B)}$ follows in a similar way using $\mu(M) = \max_\Delta \rho(M\Delta) = \max_{\Delta_1, \Delta_2} \sqrt{\rho(A\Delta_2 B \Delta_1)}$, and then realizing that we can always select $\Delta_1$ and $\Delta_2$ such that $\rho(A\Delta_2 B \Delta_1) = \bar\sigma(A)\bar\sigma(B)$ (recall (8.47)). $\Box$

Exercise 8.17 Let $c$ be a complex scalar and $\Delta = \mathrm{diag}(\delta_1, \delta_2)$. Show the corresponding identity (8.102). Does it also hold when $\Delta$ is a scalar times the identity, or when $\Delta$ is full? (Answers: Yes and No.)
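The closed-form expressions of Example 8.7 are easy to check numerically. The sketch below (NumPy, not from the book; $a$ and $b$ are arbitrary complex values) verifies $\rho(M) = |a+b|$ and $\bar\sigma(M) = \|M\|_F = \sqrt{2|a|^2 + 2|b|^2}$ for the singular $M$, and constructs the smallest diagonal perturbation from proof (a), whose phases make $a\delta_1 + b\delta_2 = 1$:

```python
import numpy as np

a, b = 1.0 + 2.0j, 3.0 - 1.0j            # arbitrary complex scalars
M = np.array([[a, a], [b, b]])           # the rank-one matrix of Example 8.7

rho  = np.abs(np.linalg.eigvals(M)).max()   # spectral radius: |a + b|
sig  = np.linalg.norm(M, 2)                 # largest singular value
frob = np.linalg.norm(M, 'fro')             # Frobenius norm; equals sig for rank one

# mu(M) for diagonal Delta is |a| + |b|: det(I - M Delta) = 1 - a d1 - b d2.
mu_diag = abs(a) + abs(b)
d1 = np.conj(a) / abs(a) / mu_diag          # phases chosen so a d1 = |a|/mu
d2 = np.conj(b) / abs(b) / mu_diag          # and b d2 = |b|/mu, summing to 1
det = np.linalg.det(np.eye(2) - M @ np.diag([d1, d2]))   # singular: det = 0
```

Both $\delta_i$ have magnitude $1/(|a|+|b|)$, confirming that no smaller diagonal perturbation can make $I - M\Delta$ singular.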
Exercise 8.18 Let $M$ be a complex $3 \times 3$ matrix and $\Delta = \mathrm{diag}(\delta_1, \delta_2, \delta_3)$. Generalize the diagonal-$\Delta$ result of Example 8.7 to this case.

Exercise 8.19 Show by a counterexample that equality need not hold in the bounds of Property 11. Under what conditions is $\mu_\Delta(AB) = \mu_\Delta(BA)$? (Hint: Recall (8.84).)

Exercise 8.20 Prove the claim in the Note following (8.94): if $\tilde\Delta$ does not have the same structure as $\Delta$, then (8.94) may fail.

Exercise 8.21 If (8.94) were true for any structure of $\Delta$, then it would imply $\rho(AB) \le \bar\sigma(A)\,\rho(B)$. Show by a counterexample that this is not true.

8.9 Robust stability with structured uncertainty

Consider stability of the $M\Delta$-structure in Figure 8.3 for the case where $\Delta$ is a set of norm-bounded block-diagonal perturbations. From the determinant stability condition in (8.43), which applies to both complex and real perturbations, we get
$$\mbox{RS} \iff \det(I - M\Delta(j\omega)) \ne 0,\ \forall\omega,\ \forall\Delta,\ \bar\sigma(\Delta(j\omega)) \le 1 \qquad (8.103)$$
A problem with (8.103) is that it is only a "yes/no" condition. To find the factor $k_m$ by which the system is robustly stable, we scale the uncertainty $\Delta$ by $k_m$, and look for the smallest $k_m$ which yields "borderline instability", namely
$$\det(I - k_m M\Delta) = 0 \qquad (8.104)$$
From the definition of $\mu$ in (8.76) this value is $k_m = 1/\mu(M)$, and we obtain the following necessary and sufficient condition for robust stability.

Theorem 8.6 RS for block-diagonal perturbations (real or complex). Assume that the nominal system $M$ and the perturbations $\Delta$ are stable. Then the $M\Delta$-system in Figure 8.3 is stable for all allowed perturbations with $\bar\sigma(\Delta) \le 1$, $\forall\omega$, if and only if
$$\mu(M(j\omega)) < 1, \quad \forall\omega \qquad (8.105)$$
Proof: $\mu(M) < 1$, $\forall\omega$ $\iff$ $k_m > 1$, $\forall\omega$. If $\mu(M) < 1$ at all frequencies, the required perturbation to make $\det(I - M\Delta) = 0$ is larger than 1, and the system is stable. If, on the other hand, $\mu(M) > 1$ at some frequency, then there does exist a perturbation with $\bar\sigma(\Delta) \le 1$ such that $\det(I - M\Delta) = 0$ at this frequency, and the system is unstable. $\Box$

One may argue whether Theorem 8.6 is really a theorem, or a restatement of the definition of $\mu$. In either case, we see from (8.105) that it is trivial to check for robust stability provided we can compute $\mu$. Condition (8.105) for robust stability may be rewritten as
$$\mbox{RS} \iff \mu(M(j\omega))\,\bar\sigma(\Delta(j\omega)) < 1, \quad \forall\omega \qquad (8.106)$$
which may be interpreted as a "generalized small gain theorem" that also takes into account the structure of $\Delta$.

Let us consider two examples that illustrate how we use $\mu$ to check for robust stability with structured uncertainty. In the first example, the structure of the uncertainty is important, and an analysis based on the $\mathcal{H}_\infty$ norm leads to the incorrect conclusion that the system is not robustly stable. In the second example the structure makes no difference.

Example 8.9 RS with diagonal input uncertainty. Consider robust stability of the feedback system in Figure 8.7 for the case when the multiplicative input uncertainty is diagonal. A nominal $2 \times 2$ plant and the controller (which represents PI-control of a distillation process using the DV-configuration) are given by
$$G(s) = \frac{1}{75s + 1} \begin{bmatrix} -87.8 & 1.4 \\ -108.2 & -1.4 \end{bmatrix}, \qquad K(s) = \frac{1 + 75s}{s} \begin{bmatrix} -0.0015 & 0 \\ 0 & -0.075 \end{bmatrix} \qquad (8.107)$$
(time in minutes). The controller results in a nominally stable system with acceptable performance. Assume there is complex multiplicative uncertainty in each manipulated input of magnitude
$$w_I(s) = \frac{s + 0.2}{0.5s + 1} \qquad (8.108)$$
This implies a relative uncertainty of up to 20% in the low frequency range, which increases at high frequencies, reaching a value of 1 (100% uncertainty) at about 1 rad/min. The increase with frequency allows for various neglected dynamics associated with the actuator and valve. The uncertainty may be represented as multiplicative input uncertainty as shown in Figure 8.7 with $W_I = w_I I$, where $w_I(s)$ is a scalar and $\Delta_I$ is a diagonal complex matrix. On rearranging the block diagram to match the $M\Delta$-structure in Figure 8.3, we get $M = w_I T_I$ (recall (8.32)), and the RS-condition $\mu(M) < 1$ in Theorem 8.6 yields
$$\mbox{RS} \iff \mu_{\Delta_I}(T_I) < \frac{1}{|w_I(j\omega)|}, \ \forall\omega; \qquad \Delta_I = \mathrm{diag}(\delta_1, \delta_2) \qquad (8.109)$$
This condition is shown graphically in Figure 8.11 and is seen to be satisfied at all frequencies, so the system is robustly stable.

[Figure 8.11: Magnitudes of $1/|w_I|$, $\mu_{\Delta_I}(T_I)$ and $\bar\sigma(T_I)$ versus frequency. Robust stability for diagonal input uncertainty is guaranteed since $\mu_{\Delta_I}(T_I) < 1/|w_I|$, $\forall\omega$.]

Also in Figure 8.11, $\bar\sigma(T_I)$ can be seen to be larger than $1/|w_I(j\omega)|$ over a wide frequency range. This shows that the system would be unstable for full-block input uncertainty ($\Delta_I$ full). However, full-block uncertainty is not reasonable for this plant, so the use of unstructured uncertainty and $\bar\sigma(T_I)$ is conservative in this case. This demonstrates the need for the structured singular value.

Exercise 8.22 Consider the same example and check for robust stability with full-block multiplicative output uncertainty of the same magnitude. (Solution: RS is satisfied.)

Example 8.10 RS of spinning satellite. Recall Motivating Example No. 1 from Section 3.7.1 with the plant $G(s)$ given in (3.76) and the controller $K = I$. We want to study how sensitive this design is to multiplicative input uncertainty, $\Delta_I = \mathrm{diag}(\delta_1, \delta_2)$. In this case $T_I = T$, so for RS there is no difference between multiplicative input and multiplicative output uncertainty. In Figure 8.12, we plot $\mu(T)$ as a function of frequency. We find for this case that $\mu(T) = \bar\sigma(T)$ irrespective of the structure of the complex multiplicative perturbation (full-block, diagonal or repeated complex scalar). At low frequencies $\mu(T)$ is about 10, so we can tolerate a complex perturbation $\Delta_I = \mathrm{diag}(\delta_1, \delta_2)$ of magnitude 0.1 before getting instability; that is, to guarantee RS we can at most tolerate 10% (complex) uncertainty at low frequencies. This is confirmed by considering the characteristic polynomial in (3.80), from which we see that the non-physical constant perturbations $\delta_1 = -\delta_2 = \pm 0.1$ yield instability. Since $\mu(T)$ crosses 1 at about 10 rad/s, we can tolerate more than 100% uncertainty at frequencies above 10 rad/s. This confirms the results from Section 3.7.1, where we found that the real perturbations $\delta_1 = 0.1$ and $\delta_2 = -0.1$ yield instability; thus, the use of complex rather than real perturbations is not conservative in this case, at least for $\Delta_I$ diagonal. However, with repeated scalar perturbations (i.e. the uncertainty in each channel is identical), there is a difference between real and complex perturbations: with repeated real perturbations, available software (e.g. using the command mu with blk = [-2 0] in the $\mu$-toolbox in MATLAB) yields a peak $\mu$-value of 1, whereas with complex repeated perturbations instability may occur with a (non-physical) complex $\delta_1 = \delta_2$ of magnitude 0.1.
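The margin interpretation $k_m = 1/\max_\omega \mu(M(j\omega))$ in Theorem 8.6 is easy to exercise in the scalar case, where $\mu(M(j\omega)) = |M(j\omega)|$. The following NumPy sketch (made-up first-order transfer function, not one of the book's examples) computes the peak over a frequency grid and then constructs the smallest destabilizing complex perturbation at the peak frequency:

```python
import numpy as np

# Scalar example M(s) = 0.6 (s + 0.2) / ((0.5 s + 1)(1.43 s + 1)) (made-up numbers).
# For a 1x1 block, mu(M(jw)) = |M(jw)|, so RS holds for all |delta| <= 1 iff
# max_w |M(jw)| < 1, and the stability margin is k_m = 1 / max_w |M(jw)|.
w = np.logspace(-3, 3, 10000)
s = 1j * w
M = 0.6 * (s + 0.2) / ((0.5 * s + 1) * (1.43 * s + 1))
absM = np.abs(M)
peak = absM.max()
km = 1.0 / peak                          # factor by which the uncertainty may grow

# Smallest destabilizing perturbation: at the peak frequency, choose the phase
# of delta so that k_m * M(jw*) * delta = 1, i.e. det(1 - k_m M delta) = 0.
istar = int(absM.argmax())
delta = np.conj(M[istar]) / absM[istar]  # unit magnitude, worst-case phase
closed = 1.0 - km * M[istar] * delta     # "borderline instability": equals 0
```

Here `peak` is well below 1, so the loop tolerates perturbations `km` times larger than the normalized set.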
[Figure 8.12: $\mu$-plot for robust stability of the spinning satellite: $\mu(T)$ versus frequency.]

8.9.1 What do $\mu \ne 1$ and skewed-$\mu$ mean?

A value of $\mu = 1.1$ for robust stability means that all the uncertainty blocks must be decreased in magnitude by a factor 1.1 in order to guarantee stability.

But if we want to keep some of the uncertainty blocks fixed, how large can one particular source of uncertainty be before we get instability? We define this value as $1/\mu^s$, where $\mu^s$ is called skewed-$\mu$. We may view $\mu^s(M)$ as a generalization of $\mu(M)$.

For example, let $\Delta = \mathrm{diag}(\Delta_1, \Delta_2)$ and assume we have fixed $\|\Delta_1\| \le 1$, and we want to find how large $\Delta_2$ can be before we get instability. The solution is to select
$$K_m = \begin{bmatrix} I & 0 \\ 0 & k_m I \end{bmatrix} \qquad (8.110)$$
and to look at each frequency for the smallest value of $k_m$ which makes $\det(I - K_m M \Delta) = 0$; skewed-$\mu$ is then defined by $\mu^s(M) \triangleq 1/k_m$.

Note that to compute skewed-$\mu$ we must first define which part of the perturbations is to be constant. $\mu^s(M)$ is always further from 1 than $\mu(M)$ is: $\mu^s \ge \mu$ for $\mu > 1$, $\mu^s \le \mu$ for $\mu < 1$, and $\mu^s = \mu$ for $\mu = 1$. In practice, with available software to compute $\mu$, we obtain $\mu^s$ by iterating on $k_m$ until $\mu(K_m M) = 1$, where $K_m$ may be as in (8.110). This iteration is straightforward since $\mu$ increases uniformly with $k_m$.
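The iteration on $k_m$ can be made concrete with the rank-one matrix of Example 8.7, for which $\mu$ is known in closed form. The sketch below (NumPy; the values of $a$ and $b$ are made up) fixes $|\delta_1| \le 1$, scales the second block by $k_m$, and bisects on $k_m$ until $\mu(K_m M) = 1$:

```python
import numpy as np

# Skewed-mu for M = [[a, a], [b, b]] with Delta = diag(delta1, delta2):
# mu(M) = |a| + |b| = 0.7 < 1 here, so RS holds with margin. Keep |delta1| <= 1
# fixed; with K_m = diag(1, k_m) we have mu(K_m M) = |a| + k_m |b|, since
# det(I - K_m M Delta) = 1 - a delta1 - k_m b delta2.
a, b = 0.4, 0.3
mu_of_KmM = lambda km: abs(a) + km * abs(b)

# Bisection on k_m (well posed: mu increases uniformly with k_m).
lo_k, hi_k = 0.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo_k + hi_k)
    if mu_of_KmM(mid) < 1.0:
        lo_k = mid
    else:
        hi_k = mid
km = 0.5 * (lo_k + hi_k)        # analytically (1 - |a|)/|b| = 2.0
mu_skewed = 1.0 / km            # = 0.5, further from 1 than mu = 0.7
```

As the text states, $\mu^s = 0.5$ is further from 1 than $\mu = 0.7$: the second block alone may grow to twice its normalized size before instability.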
8.10 Robust performance

Robust performance (RP) means that the performance objective is satisfied for all possible plants in the uncertainty set, even the worst-case plant. We showed in Chapter 7 that for a SISO system with an $\mathcal{H}_\infty$ performance objective, the RP-condition is identical to a RS-condition with an additional perturbation block. This also holds for MIMO systems, as illustrated by the stepwise derivation in Figure 8.13. Essentially, $\Delta_P$ is a fictitious uncertainty block representing the $\mathcal{H}_\infty$ performance specification.

8.10.1 Testing RP using $\mu$

To test for RP, we first "pull out" the uncertain perturbations and rearrange the uncertain system into the $N\Delta$-form of Figure 8.2. Our RP-requirement, as given in (8.39), is that the $\mathcal{H}_\infty$ norm of the transfer function $F = F_u(N, \Delta)$ remains less than 1 for all allowed perturbations. This may be tested exactly by computing $\mu(N)$, as stated in the following theorem.

Theorem 8.7 Robust performance. Rearrange the uncertain system into the $N\Delta$-structure of Figure 8.13. Assume nominal stability such that $N$ is (internally) stable. Then
$$\mbox{RP} \iff \|F\|_\infty = \|F_u(N, \Delta)\|_\infty < 1, \quad \forall \|\Delta\|_\infty \le 1 \qquad (8.111)$$
$$\phantom{\mbox{RP}} \iff \mu_{\hat\Delta}(N(j\omega)) < 1, \quad \forall\omega \qquad (8.112)$$
where $\mu$ is computed with respect to the structure
$$\hat\Delta = \begin{bmatrix} \Delta & 0 \\ 0 & \Delta_P \end{bmatrix} \qquad (8.113)$$
and $\Delta_P$ is a full complex perturbation with the same dimensions as $F^T$.

Below we prove the theorem in two alternative ways, but first a few remarks:
1. Condition (8.112) allows us to test if $\|F\|_\infty < 1$ for all possible $\Delta$'s without having to test each $\Delta$ individually. Essentially, $\mu$ is defined such that it directly addresses the worst case.
2. The $\mu$-condition for RP involves the enlarged perturbation $\hat\Delta = \mathrm{diag}(\Delta, \Delta_P)$. Here $\Delta$, which itself may be a block-diagonal matrix, represents the true uncertainty, whereas $\Delta_P$ is a full complex matrix stemming from the $\mathcal{H}_\infty$ norm performance specification. For the nominal system (with $\Delta = 0$) we get from (8.81) that $\bar\sigma(N_{22}) = \mu_{\Delta_P}(N_{22})$, and we see that $\Delta_P$ must be a full matrix.
3. Since $\hat\Delta$ always has structure, the use of the $\mathcal{H}_\infty$ norm, $\|N\|_\infty < 1$, is generally conservative for robust performance.
4. Note that NS (stability of $N$) is not guaranteed by (8.112) and must be tested separately. (Beware! It is a common mistake to get a design with apparently great RP, but which is not nominally stable and thus is actually robustly unstable.)
5. For a generalization of Theorem 8.7, see the main loop theorem of Packard and Doyle (1993); see also Zhou et al. (1996).

Block diagram proof of Theorem 8.7. In the following, let $F = F_u(N, \Delta)$ denote the perturbed closed-loop system for which we want to test RP. The theorem is proved by the equivalence between the various block diagrams in Figure 8.13.
Step A. This is simply the definition of RP: $\|F\|_\infty < 1$.
Step B (the key step, which the reader is advised to study carefully). Recall first from Theorem 8.4 that stability of the $M\Delta$-structure in Figure 8.3, where $\Delta$ is a full complex matrix, is equivalent to $\|M\|_\infty < 1$. From this theorem, the RP-condition $\|F\|_\infty < 1$ is equivalent to RS of the $F\Delta_P$-structure, where $\Delta_P$ is a full complex matrix.
Step C. Introduce $F = F_u(N, \Delta)$ from Figure 8.2.
Step D. Collect $\Delta$ and $\Delta_P$ into the block-diagonal matrix $\hat\Delta$. We then have that the original RP-problem is equivalent to RS of the $N\hat\Delta$-structure, which from Theorem 8.6 is equivalent to $\mu_{\hat\Delta}(N) < 1$. $\Box$

Algebraic proof of Theorem 8.7. The definition of $\mu$ gives at each frequency
$$\mu_{\hat\Delta}(N(j\omega)) < 1 \iff \det(I - N\hat\Delta(j\omega)) \ne 0, \quad \forall\hat\Delta,\ \bar\sigma(\hat\Delta(j\omega)) \le 1$$
By Schur's formula in (A.14) we have
$$\det(I - N\hat\Delta) = \det \begin{bmatrix} I - N_{11}\Delta & -N_{12}\Delta_P \\ -N_{21}\Delta & I - N_{22}\Delta_P \end{bmatrix}$$
$$= \det(I - N_{11}\Delta)\cdot\det\!\left(I - N_{22}\Delta_P - N_{21}\Delta(I - N_{11}\Delta)^{-1} N_{12}\Delta_P\right)$$
$$= \det(I - N_{11}\Delta)\cdot\det\!\left(I - \left(N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1} N_{12}\right)\Delta_P\right)$$
$$= \det(I - N_{11}\Delta)\cdot\det(I - F\Delta_P)$$

[Figure 8.13: RP as a special case of structured RS. Steps A to D show the equivalent block diagrams: $\|F\|_\infty < 1$ (RP) $\iff$ the $F\Delta_P$-loop is RS $\iff$ the loop of $N$ with $\mathrm{diag}(\Delta, \Delta_P)$ is RS $\iff$ $\mu_{\hat\Delta}(N) < 1$ (RS theorem).]
Since this expression should not be zero, both terms must be nonzero at each frequency, i.e.
$$\det(I - N_{11}\Delta) \ne 0,\ \forall\Delta \iff \mu_\Delta(N_{11}) < 1 \quad (\mbox{RS})$$
and, for all $\Delta$,
$$\det(I - F\Delta_P) \ne 0,\ \forall\Delta_P \iff \mu_{\Delta_P}(F) < 1 \iff \bar\sigma(F) < 1 \quad (\mbox{RP definition})$$
Theorem 8.7 is proved by reading the above lines in the opposite direction. $\Box$

From the definition of $\mu$ we also have at each frequency
$$\mu_{\hat\Delta}(N) \ge \max\left\{ \underbrace{\mu_\Delta(N_{11})}_{\mbox{RS}},\ \underbrace{\mu_{\Delta_P}(N_{22})}_{\mbox{NP}} \right\} \qquad (8.114)$$
which shows that RS and NP are automatically satisfied when RP ($\mu(N) < 1$) is satisfied.

8.10.2 Summary of conditions for NP, RS and RP

Rearrange the uncertain system into the $N\Delta$-structure of Figure 8.2, where the block-diagonal perturbations satisfy $\|\Delta\|_\infty \le 1$. Introduce
$$F = F_u(N, \Delta) = N_{22} + N_{21}\Delta(I - N_{11}\Delta)^{-1} N_{12}$$
and let the performance requirement (RP) be $\|F\|_\infty \le 1$ for all allowable perturbations. Then we have:
$$\mbox{NS} \iff N \mbox{ (internally) stable} \qquad (8.115)$$
$$\mbox{NP} \iff \bar\sigma(N_{22}) = \mu_{\Delta_P}(N_{22}) < 1,\ \forall\omega, \mbox{ and NS} \qquad (8.116)$$
$$\mbox{RS} \iff \mu_\Delta(N_{11}) < 1,\ \forall\omega, \mbox{ and NS} \qquad (8.117)$$
$$\mbox{RP} \iff \mu_{\hat\Delta}(N) < 1,\ \forall\omega,\ \hat\Delta = \mathrm{diag}(\Delta, \Delta_P), \mbox{ and NS} \qquad (8.118)$$
Here $\Delta$ is a block-diagonal matrix (its detailed structure depends on the uncertainty we are representing), whereas $\Delta_P$ always is a full complex matrix. Note that nominal stability (NS) must be tested separately in all cases.

Although the structured singular value is not a norm, it is sometimes convenient to refer to the peak $\mu$-value as the "$\Delta$-norm". For a stable rational transfer matrix $H(s)$, with an associated block structure $\Delta$, we therefore define
$$\|H(s)\|_\Delta \triangleq \max_\omega \mu_\Delta(H(j\omega))$$
For a nominally stable system we then have NP $\iff \|N_{22}\|_\infty < 1$, RS $\iff \|N_{11}\|_\Delta < 1$ and RP $\iff \|N\|_{\hat\Delta} < 1$.
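The ordering in (8.114) is easy to see numerically. In the sketch below (NumPy; the matrix $N$ is made up), both the uncertainty channel and the performance channel are scalar, so $\hat\Delta = \mathrm{diag}(\delta, \delta_P)$, the RS test is $|N_{11}| < 1$, the NP test is $|N_{22}| < 1$, and $\mu_{\hat\Delta}(N)$ is evaluated by the exact lower bound over diagonal unitary matrices:

```python
import numpy as np

# 2x2 N with one scalar uncertainty block and one scalar performance block.
N = np.array([[0.6 + 0.1j, 0.40],
              [0.35, 0.5 - 0.2j]])       # made-up interconnection matrix

mu_N = 0.0                               # lower bound max_U rho(N U), exact here
for phase in np.linspace(0.0, 2 * np.pi, 4000):
    U = np.diag([np.exp(1j * phase), 1.0])
    mu_N = max(mu_N, np.abs(np.linalg.eigvals(N @ U)).max())

rs = abs(N[0, 0])    # mu of the 1x1 uncertainty channel (RS test)
np_ = abs(N[1, 1])   # sigma_max of the performance channel (NP test)
# RP value mu_N dominates both, as in (8.114), and is itself below sigma_max(N)
```

So a design can pass RS and NP individually and still fail RP; only $\mu$ of the full $N$ settles the question.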
8.10.3 Worst-case performance and skewed-$\mu$

Assume we have a system for which the peak $\mu$-value for RP is 1.1. What does this mean? The definition of $\mu$ tells us that our RP-requirement would be satisfied exactly if we reduced both the performance requirement and the uncertainty by a factor of 1.1. So $\mu$ does not directly give us the worst-case performance, $\max_{\bar\sigma(\Delta) \le 1} \bar\sigma(F(\Delta))$, as one might have expected.

To find the worst-case weighted performance for a given uncertainty, one needs to keep the magnitude of the perturbations fixed ($\bar\sigma(\Delta) \le 1$); that is, we must compute the skewed-$\mu$ of $N$, as discussed in Section 8.9.1. At each frequency, skewed-$\mu$ is the value $\mu^s(N)$ which solves
$$\mu(K_m N) = 1, \qquad K_m = \begin{bmatrix} I & 0 \\ 0 & 1/\mu^s \end{bmatrix} \qquad (8.119)$$
To find $\mu^s$ numerically, we scale the performance part of $N$ by a factor $k_m = 1/\mu^s$ and iterate on $k_m$ until $\mu = 1$. We have, in this case,
$$\max_{\bar\sigma(\Delta) \le 1} \bar\sigma\big(F_u(N, \Delta)(j\omega)\big) = \mu^s(N(j\omega)) \qquad (8.120)$$
Note that $\mu$ underestimates how bad or good the actual worst-case performance is. This follows because $\mu^s(N)$ is always further from 1 than $\mu(N)$.

The corresponding worst-case perturbation may be obtained as follows: first compute the worst-case performance at each frequency using skewed-$\mu$. At the frequency where $\mu^s(N)$ has its peak, we may extract the corresponding worst-case perturbation generated by the software, and then find a stable, all-pass transfer function that matches this. In the MATLAB $\mu$-toolbox, the single command wcperf combines these three steps: [delwc,mulow,muup] = wcperf(N,blk,1).

8.11 Application: RP with input uncertainty

We will now consider in some detail the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity, as illustrated in Figure 8.14. The performance requirement is then
$$\mbox{RP} \iff \|w_P (I + G_p K)^{-1}\|_\infty < 1, \quad \forall G_p \qquad (8.121)$$
where the set of plants is given by
$$G_p = G(I + w_I \Delta_I), \qquad \|\Delta_I\|_\infty \le 1 \qquad (8.122)$$
[Figure 8.14: Robust performance of a system with input uncertainty: the weights $W_I$ and $W_P$, the perturbation $\Delta_I$, the plant $G$ and the controller $K$ are collected into the interconnection matrix $N$.]

Here $w_P(s)$ and $w_I(s)$ are scalar weights, so the performance objective is the same for all the outputs, and the uncertainty is the same for all inputs. We will mostly assume that $\Delta_I$ is diagonal, but we will also consider the case when $\Delta_I$ is a full matrix. This problem is excellent for illustrating the robustness analysis of uncertain multivariable systems.

In this section, we will:
1. Find the interconnection matrix $N$ for this problem.
2. Consider the SISO case, so that useful connections can be made with results from the previous chapter.
3. Consider a multivariable distillation process for which we have already seen from simulations in Chapter 3 that a decoupling controller is sensitive to small errors in the input gains. We will find that $\mu$ for RP is indeed much larger than 1 for this decoupling controller.
4. Find some simple bounds on $\mu$ for this problem and discuss the role of the condition number.
5. Make comparisons with the case where the uncertainty is located at the output.

It should be noted that although the problem setup in (8.121) and (8.122) is fine for analyzing a given controller, it is less suitable for controller synthesis; for example, the problem formulation does not penalize directly the outputs from the controller.
8.11.1 Interconnection matrix

On rearranging the system into the $N\Delta$-structure, as shown in Figure 8.14, we get, as in (8.32),
$$N = \begin{bmatrix} w_I T_I & w_I K S \\ w_P S G & w_P S \end{bmatrix} \qquad (8.123)$$
where $T_I = KG(I + KG)^{-1}$ and $S = (I + GK)^{-1}$, and for simplicity we have omitted the negative signs in the 1,1 and 1,2 blocks of $N$, since $\mu(N) = \mu(UN)$ with unitary $U = \mathrm{diag}(-I, I)$.

For a given controller $K$ we can now test for NS, NP, RS and RP using (8.115)-(8.118). Here $\Delta = \Delta_I$ may be a full or diagonal matrix (depending on the physical situation).

8.11.2 RP with input uncertainty for SISO system

For a SISO system, conditions (8.115)-(8.118) with $N$ as in (8.123) become
$$\mbox{NS} \iff S,\ SG,\ KS \mbox{ and } T_I \mbox{ are stable} \qquad (8.124)$$
$$\mbox{NP} \iff |w_P S| < 1,\ \forall\omega \qquad (8.125)$$
$$\mbox{RS} \iff |w_I T_I| < 1,\ \forall\omega \qquad (8.126)$$
$$\mbox{RP} \iff |w_P S| + |w_I T_I| < 1,\ \forall\omega \qquad (8.127)$$
For SISO systems $T_I = T$, and (8.127) follows since at each frequency $\mu(N) = |w_I T| + |w_P S|$ (the matrix $N$ in (8.123) then has rank one). Condition (8.127) is identical to (7.61), which was derived in Chapter 7 using a simple graphical argument based on the Nyquist plot of $L = GK$.

Robust performance optimization, in terms of weighted sensitivity with multiplicative uncertainty for a SISO system, thus involves minimizing the peak value of $\mu(N) = |w_I T| + |w_P S|$. This may be solved using DK-iteration, as outlined later in Section 8.12. A closely related problem, which is easier to solve both mathematically and numerically, is to minimize the peak value ($\mathcal{H}_\infty$ norm) of the mixed sensitivity matrix
$$N_{\rm mix} = \begin{bmatrix} w_P S \\ w_I T \end{bmatrix} \qquad (8.128)$$
At each frequency, $\mu(N) = |w_I T| + |w_P S|$ differs from
$$\bar\sigma(N_{\rm mix}) = \sqrt{|w_I T|^2 + |w_P S|^2} \qquad (8.129)$$
by at most a factor $\sqrt{2}$; recall (7.64). Thus, minimizing $\|N_{\rm mix}\|_\infty$ is close to optimizing robust performance in terms of $\mu(N)$.
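The rank-one identity $\mu(N) = |w_I T| + |w_P S|$ can be checked at a single frequency. The sketch below (NumPy; the plant, PI controller and weights are made-up scalars, not the book's example) builds $N$ as in (8.123) and evaluates the exact lower bound over diagonal unitary matrices:

```python
import numpy as np

# SISO check of mu(N) = |wI T| + |wP S| at one frequency.
s = 1j * 0.5                               # evaluate at w = 0.5
G = 3.0 / (2.0 * s + 1.0)                  # made-up first-order plant
K = (0.8 * s + 0.4) / s                    # made-up PI controller
L = G * K
S = 1.0 / (1.0 + L)
T = L / (1.0 + L)                          # = T_I for SISO
wI = (s + 0.2) / (0.5 * s + 1.0)
wP = (0.5 * s + 0.05) / s

N = np.array([[wI * T, wI * K * S],
              [wP * S * G, wP * S]])       # (8.123); negative signs omitted

mu_lb = 0.0                                # exact lower bound max_U rho(N U)
for phase in np.linspace(0.0, 2 * np.pi, 20000):
    U = np.diag([np.exp(1j * phase), 1.0])
    mu_lb = max(mu_lb, np.abs(np.linalg.eigvals(N @ U)).max())

target = abs(wI * T) + abs(wP * S)         # the claimed value of mu(N)
# N has rank one (its columns are proportional), so mu_lb matches target
```

Because $N$ has rank one, $\rho(NU)$ equals $|w_I T\,e^{j\theta} + w_P S|$, whose maximum over the phase is exactly $|w_I T| + |w_P S|$.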
8.11.3 Robust performance for $2 \times 2$ distillation process

Consider again the distillation process example from Chapter 3 (Motivating Example No. 2) and the corresponding inverse-based controller:
$$G(s) = \frac{1}{75s + 1} \begin{bmatrix} 87.8 & -86.4 \\ 108.2 & -109.6 \end{bmatrix}, \qquad K(s) = \frac{0.7}{s}\, G(s)^{-1} \qquad (8.130)$$
The controller provides a nominally decoupled system with $L = lI$, $S = \epsilon I$ and $T = tI$, where
$$l = \frac{0.7}{s}, \qquad \epsilon = \frac{1}{1 + l} = \frac{s}{s + 0.7}, \qquad t = 1 - \epsilon = \frac{0.7}{s + 0.7} = \frac{1}{1.43s + 1} \qquad (8.131)$$
We have used $\epsilon$ for the nominal sensitivity in each loop to distinguish it from the Laplace variable $s$. We use the following weights for uncertainty and performance:
$$w_I(s) = \frac{s + 0.2}{0.5s + 1}, \qquad w_P(s) = \frac{s/2 + 0.05}{s} \qquad (8.132)$$
With reference to (7.26) we see that the weight $w_I(s)$ may approximately represent a 20% gain error and a neglected time delay of 0.9 min; $|w_I(j\omega)|$ levels off at 2 (200% uncertainty) at high frequencies. With reference to (2.72) we see that the performance weight $w_P(s)$ specifies integral action, a closed-loop bandwidth of about 0.05 [rad/min] (which is relatively slow in the presence of an allowed time delay of 0.9 min) and a maximum peak for $\bar\sigma(S)$ of $M_s = 2$.

Recall from Figure 3.12 that this controller gave an excellent nominal response, but that the response with 20% gain uncertainty in each input channel was extremely poor. We will now confirm these findings by a $\mu$-analysis. To this effect we test for NS, NP, RS and RP. Note that $\Delta_I$ is a diagonal matrix in this example.

NS With $G$ and $K$ as given in (8.130), we find that $S$, $SG$, $KS$ and $T_I$ are stable, so the system is nominally stable.

NP With the decoupling controller we have
$$\bar\sigma(N_{22}) = \bar\sigma(w_P S) = \left| \frac{s/2 + 0.05}{s + 0.7} \right|_{s = j\omega}$$
and we see from the dash-dot line in Figure 8.15 that the NP-condition is easily satisfied: $\bar\sigma(w_P S)$ is small at low frequencies ($0.05/0.7 \approx 0.07$ at $\omega = 0$) and approaches $1/2 = 0.5$ at high frequencies.

[Figure 8.15: $\mu$-plots for the distillation process with the decoupling controller: curves for RP, RS and NP as functions of frequency.]

RS Since in this case $w_I T_I = w_I T$ is a scalar times the identity matrix, we have, independent of the structure of $\Delta_I$, that
$$\mu_{\Delta_I}(w_I T_I) = |w_I t| = \left| 0.2\, \frac{5s + 1}{(0.5s + 1)(1.43s + 1)} \right|_{s = j\omega}$$
and we see from the dashed line in Figure 8.15 that RS is easily satisfied. The peak value of $\mu_{\Delta_I}(M)$ over frequency is $\|M\|_{\Delta_I} = 0.53$. This means that we may increase the uncertainty by a factor of $1/0.53 = 1.89$ before the worst-case uncertainty yields instability; that is, we can tolerate about 38% gain uncertainty and a time delay of about 1.7 min before we get instability.

RP Although our system has good robustness margins (RS is easily satisfied) and excellent nominal performance, we know from the simulations in Figure 3.12 that the robust performance is poor. This is confirmed by the $\mu$-curve for RP in Figure 8.15, which was computed numerically using $\mu_{\hat\Delta}(N)$ with $N$ as in (8.123), $\hat\Delta = \mathrm{diag}(\Delta_I, \Delta_P)$ and $\Delta_I = \mathrm{diag}(\delta_1, \delta_2)$. The peak value is close to 6, meaning that even with 6 times less uncertainty, the weighted sensitivity will be about 6 times larger than what we require. The peak of the actual worst-case weighted sensitivity with uncertainty blocks of magnitude 1, which may be computed using skewed-$\mu$, is for comparison 44.93. The MATLAB $\mu$-toolbox commands used to generate Figure 8.15 are given in Table 8.1.

In general, $\mu$ with unstructured uncertainty ($\Delta_I$ full) is larger than with structured uncertainty ($\Delta_I$ diagonal). However, for our particular plant and controller in (8.130), it appears from numerical calculations, and by use of (8.135) below, that they are the same. Of course, this is not generally true, as is confirmed in the following exercise.

Exercise 8.23 Consider the plant $G_0(s)$ in (8.107), which is ill-conditioned with $\gamma(G_0) = 70.8$ at all frequencies (but note that the RGA-elements of $G_0$ are all about 0.5). With an inverse-based controller $K(s) = (0.7/s)\,G_0^{-1}(s)$, compute $\mu$ for RP with both diagonal and full-block input uncertainty, using the weights in (8.132). The value of $\mu$ is much smaller in the former case.

Table 8.1: MATLAB program for $\mu$-analysis (generates Figure 8.15)
We consider unstructured multiplicative input uncertainty (i. 8. i.133): Since ¡Á is a full matrix. From (8.46) ÑÒ Æ½½ ½ Æ¾½ ¢ Æ½¾ Æ¾¾ ÛÁ ÌÁ ½ ÛÈ Ë ÛÁ ÃË ÛÈ Ë ´ÛÁ Ì Á Á ½ £µ · ´ÛÈ Ë ¢ ½ Á £µ ½ µ · ´ÛÈ Ë µ ´ ½ Á µ ´ÛÁ ÌÁ µ ´Á ßÞ ßÞ ½· ´ ½ µ ½· ½ ´ µ Ô Õ and selecting ´ µ ´ ½ µ ´Æ µ Ô ´ µ gives ´ÛÁ ÌÁ µ · ´ÛÈ Ë µ ´½ · ´ µµ A similar derivation may be performed using Ë Ã ½ ÌÁ to derive the same expression ¾ but with ´Ã µ instead of ´ µ. . ¡ Á is a full matrix) and performance measured in terms of weighted sensitivity.87) yields ´Æ µ where from (A.123).132). compute for RP with both diagonal and fullinversebased controller Ã ´×µ × block input uncertainty using the weights in (8. The value of is much smaller in the former case.133) where is either the condition number of the plant or the controller (the smallest one should be used): ´ µ ÓÖ ´Ã µ (8.e. Let Æ be given as in (8. there is less sensitivity to uncertainty (but it may be difﬁcult to achieve NP in this case).11. since then ´Ã µ ´ µ is large.¿¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ ¼ ´×µ ½ . we would expect for RP to be large if we used an inversebased controller for a plant with a large condition number. Then Þ ÊÈ ß ¡ ´Æ µ Þ ÊË ß Þ ÆÈ ß Ô ´ÛÁ ÌÁ µ · ´ÛÈ Ë µ ´½ · µ (8. This is conﬁrmed by (8. one with ´Ã µ ½.133) we see that with a “round” controller. Any controller.4 Robust performance and the condition number In this subsection we consider the relationship between for RP and the condition number of the plant or of the controller. On the other hand.135) below. (8.134) Proof of (8.e.
We then have that the worstcase weighted sensitivity is equal to skewed.: Ñ Ü ´ÛÈ ËÔ µ ËÔ ´ÛÈ Ë ¼ µ × ´Æ µ .99). For more details see Zhou et al.135): The proof originates from Stein and Doyle (1991).15. we have at frequency ½ rad/min. Proof of (8.12 For the distillation column example studied above. denotes the ’th singular value of . we have ´ µ at all frequencies. (8.ÊÇ ÍËÌ ËÌ ÁÄÁÌ Æ È Ê ÇÊÅ Æ ¿¿ Example 8. 293295.11 For the distillation process studied above. ÛÈ ¯ ¼ ½ and ÛÁ Ø ¼ ¾. frequencies. (1996) pp.123): × ¡ ´Æ µ ÛÈ ¯ ¾ · ÛÁ Ø ¾ · ÛÈ ¯ ¡ ÛÁ Ø ´ µ · ½ ´ µ (8. for RP increases approximately Ô in proportion to ´ µ. and hence the desired selecting at each frequency ¾ result. Ô ´Ã µ ½½ Inversebased controller.135) where ¯ is the nominal sensitivity and ´ µ is the condition number of the plant.131)) and unstructured input uncertainty. and have used the fact that is unitary invariant. Suppose that at each frequency the worstcase sensitivity is ´Ë ¼ µ. and at frequency Û 1 rad/min the upper bound given by (8.87) with ´Æ µ ÑÒ ÛÁ ØÁ ÛÈ ¯´ Ô Ñ ÒÑ Ü Ñ ÒÑ Ü ÛÁ Ø´ µ ½ ÑÒ ÛÈ ¯Á ÛÁ Ø ÛÁ Ø´ µ ½ ÛÈ ¯´ µ ÛÈ ¯ µ ÛÁ ØÁ ÛÁ Ø´ ¦µ ½ ÛÈ ¯´ ¦µ ÛÈ ¯Á ÛÈ ¯ ¾ · ÛÁ Ø ¾ · ÛÈ ¯ ¾ · ÛÁ Ø´ µ ½ ¾ We have here used the SVD of Í ¦Î À at each frequency. The upper bound in Á Á yields (8. it is possible to derive an analytic expression for for RP with Æ as in (8.135) yields plot in Figure 8.133) becomes ´¼ ¾ · ¼ ½µ´½ · ½ ½ µ ½¿ ½. which illustrates that the bound in (8. see (8. We see that for plants with a large condition number.133) is generally not tight. With an inversebased controller (resulting in the nominal decoupled system (8. The expression is minimized by ÛÁ Ø ´ ÛÈ ¯ ´ µ ´ µµ. Example 8. and since ´ µ ½ ½ at all Ô ´Æ µ ¼ ½ · ¼ ¾ · ¿¼ ½ which agrees with the Worstcase performance (any controller) We next derive relationships between worstcase performance and the condition number. This is higher than the actual value of ´Æ µ which is .
Example 8.11 For the distillation process studied above, $\gamma(G) = \gamma(K) = 141.7$ at all frequencies, so $\sqrt{\gamma} = 11.9$, and at frequency 1 rad/min the upper bound given by (8.133) becomes $(0.52 + 0.41)(1 + 11.9) \approx 12$. This is higher than the actual value of $\mu(N)$, which illustrates that the bound in (8.133) is generally not tight.

With an inverse-based controller (resulting in the nominal decoupled system (8.131)) and unstructured input uncertainty, it is possible to derive an analytic expression for $\mu$ for RP with $N$ as in (8.123):
$$\mu_{\hat\Delta}(N) = \sqrt{|w_P\epsilon|^2 + |w_I t|^2 + |w_P\epsilon|\cdot|w_I t|\left(\gamma(G) + \frac{1}{\gamma(G)}\right)} \qquad (8.135)$$
where $\epsilon$ is the nominal sensitivity and $\gamma(G)$ is the condition number of the plant. We see that for plants with a large condition number, $\mu$ for RP increases approximately in proportion to $\sqrt{\gamma(G)}$.
Proof of (8.135): The proof originates from Stein and Doyle (1991). It uses the upper bound (8.87) with $D = \mathrm{diag}(dI, I)$, together with the SVD of $G = U\Sigma V^H$ at each frequency ($\bar\sigma$ is unitarily invariant); the resulting expression is minimized by an explicit choice of $d$. For more details, see Zhou et al. (1996), pp. 293-295. $\Box$

Example 8.12 For the distillation column example studied above, we have at frequency $\omega = 1$ rad/min $|w_P\epsilon| = 0.41$ and $|w_I t| = 0.52$, and since $\gamma(G) = 141.7$ at all frequencies, (8.135) yields $\mu(N) = \sqrt{0.41^2 + 0.52^2 + 0.41 \cdot 0.52 \cdot 141.7} \approx 5.5$, which agrees with the $\mu$-plot in Figure 8.15.

Worst-case performance (any controller). We next derive relationships between worst-case performance and the condition number. Suppose that at each frequency the worst-case sensitivity is $\bar\sigma(S')$, so that the worst-case weighted sensitivity equals skewed-$\mu$:
$$\max_{S_p} \bar\sigma(w_P S_p) = \bar\sigma(w_P S') = \mu^s(N) \qquad (8.136)$$
Then at each frequency
$$\bar\sigma(w_P S') \le \gamma\, \frac{\bar\sigma(w_P S)}{1 - \bar\sigma(w_I T_I)} \qquad (8.137)$$
A similar bound involving $\gamma(K)$ applies; as before, $\gamma$ denotes either the condition number of the plant or the controller (preferably the smallest). The bound (8.137) holds for any controller and for any structure of the uncertainty (including $\Delta_I$ unstructured).

Example 8.13 For the distillation process, the upper bound given by (8.137) at $\omega = 1$ rad/min is $141.7 \cdot 0.41/(1 - 0.52) = 121$. This is much higher than the actual peak value of $\max_{S_p}\bar\sigma(w_P S_p)$, which as found earlier is 44.93, and demonstrates that these bounds are not generally tight.

Remark 1 In Section 6.10.4 we derived tighter upper bounds for cases when $\Delta_I$ is restricted to be diagonal and when we have a decoupling controller.
Remark 2 Since $\mu \le \mu^s$, we may, from (8.137) and expressions similar to (6.76) and (6.77), derive the following sufficient (conservative) tests for RP ($\mu(N) < 1$) with unstructured input uncertainty (any controller):
$$\mbox{RP} \ \Longleftarrow\ \left(\bar\sigma(w_P S) + \bar\sigma(w_I T_I)\right)\left(1 + \sqrt{\gamma}\right) < 1$$
$$\mbox{RP} \ \Longleftarrow\ \bar\sigma(w_P S) + \gamma\,\bar\sigma(w_I T_I) < 1$$
$$\mbox{RP} \ \Longleftarrow\ \bar\sigma(w_P S) + \gamma\,\bar\sigma(w_I T) < 1$$
where $\gamma$ denotes the condition number of the plant or the controller (the smallest being the most useful).

8.11.5 Comparison with output uncertainty

Consider output multiplicative uncertainty of magnitude $w_O(j\omega)$. In this case we get the interconnection matrix
$$N = \begin{bmatrix} w_O T & w_O T \\ w_P S & w_P S \end{bmatrix} \qquad (8.138)$$
and for any structure of the uncertainty, $\mu(N)$ for RP is bounded as follows:
$$\bar\sigma \begin{bmatrix} w_O T \\ w_P S \end{bmatrix} \ \le\ \mu(N) \ \le\ \sqrt{2}\, \bar\sigma \begin{bmatrix} w_O T \\ w_P S \end{bmatrix} \qquad (8.139)$$
where $\bar\sigma(w_O T)$ governs RS and $\bar\sigma(w_P S)$ governs NP.
The bound (8.139) follows since the uncertainty and performance blocks both enter at the output (see Section 8.6), and since, from (A.45), the difference between bounding the combined perturbation [ΔO; ΔP] and bounding the individual perturbations ΔO and ΔP is at most a factor of √2. Thus,
in this case we "automatically" achieve RP (at least within a factor of √2) if we have satisfied separately the sub-objectives of NP and RS. This confirms our findings from Section 6.10.4 that multiplicative output uncertainty poses no particular problem for performance. It also implies that for practical purposes we may optimize robust performance with output uncertainty by minimizing the H∞ norm of the stacked matrix [wO T; wP S].

Exercise 8.24 Consider the RP-problem with weighted sensitivity and multiplicative output uncertainty. Derive the interconnection matrix N for 1) the conventional case with Δ = diag{ΔO, ΔP}, and 2) the stacked case when Δ is a full matrix. Use this to prove (8.139).

8.12 μ-synthesis and DK-iteration

The structured singular value μ is a very powerful tool for the analysis of robust performance with a given controller. However, one may also seek to find the controller that minimizes a given μ-condition: this is the μ-synthesis problem.

8.12.1 DK-iteration

At present there is no direct method to synthesize a μ-optimal controller. However, for complex perturbations a method known as DK-iteration is available. It combines H∞ synthesis and μ-analysis, and often yields good results. The starting point is the upper bound (8.87) on μ in terms of the scaled singular value,

    μ(N) ≤ min_{D ∈ 𝒟} σ̄( D N D⁻¹ )

The idea is to find the controller that minimizes the peak value over frequency of this upper bound, namely

    min_K ( min_{D ∈ 𝒟} ‖ D N(K) D⁻¹ ‖∞ )    (8.140)

by alternating between minimizing ‖D N(K) D⁻¹‖∞ with respect to either K or D (while holding the other fixed). To start the iterations, one selects an initial stable rational transfer matrix D(s) with appropriate structure. The identity matrix is often a good initial choice for D provided the system has been reasonably scaled for performance.
The DK-iteration then proceeds as follows:

1. K-step. Synthesize an H∞ controller for the scaled problem, min_K ‖D N(K) D⁻¹‖∞, with fixed D(s).
2. D-step. Find D(jω) to minimize at each frequency σ̄(D N D⁻¹(jω)) with fixed N.
3. Fit the magnitude of each element of D(jω) to a stable and minimum-phase transfer function D(s), preferably of low order, and go to Step 1.

The iteration may continue until satisfactory performance is achieved, ‖D N D⁻¹‖∞ < 1, or until the H∞ norm no longer decreases. One fundamental problem with this approach is that although each of the minimization steps (K-step and D-step) is convex, joint convexity is not guaranteed. Therefore, the iterations may converge to a local optimum. However, practical experience suggests that the method works well in most cases.

The order of the controller resulting from each iteration is equal to the number of states in the plant G(s), plus the number of states in the weights, plus twice the number of states in D(s). The true μ-optimal controller is not rational, and will thus be of infinite order; with a finite-order controller we will generally not be able (and it may not be desirable) to extend the flatness of the μ-curve to infinite frequencies.

The DK-iteration depends heavily on optimal solutions for Steps 1 and 2, and also on good fits in Step 3. One reason for preferring a low-order fit is that this reduces the order of the H∞ problem, which usually improves the numerical properties of the H∞ optimization (Step 1) and also yields a controller of lower order. In some cases the iterations converge slowly, and it may be difficult to judge whether they are converging or not. One may even experience the μ-value increasing. This may be caused by numerical problems or inaccuracies (e.g. the upper-bound μ-value in Step 2 being higher than the H∞ norm obtained in Step 1), or by a poor fit of the D-scales. In any case, if the iterations converge slowly,
then one may consider going back to the initial problem and rescaling the inputs and outputs.

In the K-step (Step 1), where the H∞ controller is synthesized, it is often desirable to use a slightly sub-optimal controller (e.g. with an H∞ norm, γ, 5% higher than the optimal value, γmin). This yields a blend of H∞ and H2 optimality, with a controller which usually has a steeper high-frequency roll-off than the H∞-optimal controller. Note also that the true μ-optimal controller would have a flat μ-curve (as a function of frequency), except at infinite frequency, where μ generally has to approach a fixed value independent of the controller (because L(j∞) = 0 for real systems); since we fit the D-scales with a finite-order D(s), we instead obtain a controller of finite (but often high) order.
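Step 2 of the iteration is, at each frequency, a small optimization over the D-scales. The following sketch (with a random complex matrix standing in for N(jω), and a crude grid search in place of the μ-toolbox's optimizer) shows that scaling can only improve on the unscaled bound σ̄(N):

```python
import numpy as np

rng = np.random.default_rng(1)
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # N(jw), one frequency

def scaled_sv(d1, d2):
    # Block structure as in the example below: two 1x1 uncertainty blocks
    # and one full 2x2 performance block, whose scale is fixed to 1.
    D = np.diag([d1, d2, 1.0, 1.0])
    return np.linalg.svd(D @ N @ np.linalg.inv(D), compute_uv=False)[0]

unscaled = scaled_sv(1.0, 1.0)       # plain sigma_bar(N)
grid = np.logspace(-2, 2, 41)
best = min(scaled_sv(d1, d2) for d1 in grid for d2 in grid)
print(unscaled, best)                # best <= unscaled, by construction
```

In practice this minimization is convex in log(d) and is solved to high accuracy at each frequency point before the fitting step.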
8.12.2 Adjusting the performance weight

Recall that if μ at a given frequency is different from 1, then the interpretation is that at this frequency we can tolerate 1/μ times more uncertainty and still satisfy our performance objective with a margin of 1/μ. In μ-synthesis, the designer will therefore usually adjust some parameter(s) in the performance or uncertainty weights until the peak μ-value is close to 1. Sometimes the uncertainty is fixed, and we effectively optimize worst-case performance by adjusting a parameter in the performance weight. For example, consider the performance weight

    wP(s) = ( s/M + ωB* ) / ( s + ωB* A )    (8.141)

where we want to keep M constant and find the highest achievable bandwidth frequency ωB*. The optimization problem becomes

    max ωB*   subject to   μ(N) < 1    (8.142)

where N, the interconnection matrix for the RP-problem, depends on ωB*. This may be implemented as an outer loop around the DK-iteration.

8.12.3 Fixed-structure controller

Sometimes it is desirable to find a low-order controller with a given structure. This may be achieved by numerical optimization where μ is minimized with respect to the controller parameters. The problem here is that the optimization is not generally convex in the parameters. Sometimes it helps to switch the optimization between minimizing the peak of μ (i.e. ‖μ‖∞) and minimizing the integral square deviation of μ away from k (i.e. ‖μ(jω) − k‖2), where usually k is close to 1. The latter is an attempt to "flatten out" the μ-curve.

8.12.4 Example: μ-synthesis with DK-iteration

We will now consider again the case of multiplicative input uncertainty with performance defined in terms of weighted sensitivity, as discussed in detail in Section 8.11. We noted there that this set-up is fine for analysis, but less suitable for controller synthesis, as it does not explicitly penalize the outputs from the controller. The resulting controller will therefore have very large gains at high frequencies and should not be used directly for implementation. In practice, one can add extra roll-off to the controller (which should work well because the system should be robust with respect to uncertain high-frequency dynamics), or one may consider a more complicated problem set-up (see Section 12.4). Nevertheless, we will use this problem as an example of μ-synthesis because of its simplicity.
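The asymptotes of a weight of the form (8.141) are easy to verify numerically; a sketch with illustrative parameter values (M = 2, ωB* = 0.05, A = 10⁻⁴, chosen for demonstration rather than taken from the example):

```python
import numpy as np

def wP(s, M=2.0, wB=0.05, A=1e-4):
    """Performance weight of the form (8.141):
    |wP| ~ 1/A at low frequency, ~ 1/M at high frequency,
    crossing magnitude 1 near the bandwidth frequency wB."""
    return (s / M + wB) / (s + wB * A)

lo = abs(wP(1e-8j))    # ~ 1/A = 1e4: near-integral action at low frequency
hi = abs(wP(1e4j))     # ~ 1/M = 0.5: allowed peak of sigma_bar(S) is M
print(lo, hi)
```

Increasing ωB* in the outer loop of (8.142) pushes the crossover of |wP| to higher frequencies, i.e. demands a faster closed loop.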
With this caution in mind, we proceed with the problem description. Again, we use the model of the simplified distillation process,

    G(s) = 1/(75s + 1) [ 87.8  −86.4 ; 108.2  −109.6 ]    (8.143)

The uncertainty weight wI I2 and performance weight wP I2 are given in (8.123), and are shown graphically in Figure 8.16. Notice that there is a frequency range (a "window") where both weights are less than one in magnitude.

Figure 8.16: Uncertainty weight |wI| and performance weight |wP|.

We will now use the DK-iteration in an attempt to obtain the μ-optimal controller for this example. The objective is to minimize the peak value of μΔ̃(N), Δ̃ = diag{ΔI, ΔP}, where N is given in (8.132). We consider diagonal input uncertainty (which is always present in any real problem), so ΔI is a 2×2 diagonal matrix, while ΔP is a full 2×2 matrix representing the performance specification. Note that we have only three complex uncertainty blocks, so μ(N) is equal to the upper bound min_D σ̄(DND⁻¹) in this case.

First, the generalized plant P, as given in (8.29), is constructed. It includes the plant model, the uncertainty weight and the performance weight, but not the controller, which is to be designed (note that N = Fl(P, K)). Then the block structure is defined; it consists of two 1×1 blocks to represent ΔI and a 2×2 block to represent ΔP. The scaling matrix D for DND⁻¹ then has the structure D = diag{d1, d2, d3 I2}, and we may set d3 = 1. As initial scalings we select d1 = d2 = 1. P is then scaled with the matrix diag{D, I2}, where I2 is associated with the inputs and outputs from the controller (we do not want to scale the controller). The appropriate commands for the MATLAB μ-toolbox are listed in Table 8.2.

Iteration No. 1. Step 1: With the initial scalings, the H∞ software produced a 6-state controller (2 states from the plant model and 2 from each of the weights)
Table 8.2: MATLAB program to perform DK-iteration

    % Uses the Mu toolbox
    G0 = [87.8 -86.4; 108.2 -109.6];            % Distillation process.
    dyn = nd2sys(1,[75 1]); Dyn = daug(dyn,dyn); G = mmult(Dyn,G0);
    %
    % Weights.
    wp = nd2sys([10 1],[10 1.e-5],0.5);         % Approximated integrator.
    wi = nd2sys([1 0.2],[0.5 1]);
    Wp = daug(wp,wp); Wi = daug(wi,wi);
    %
    % Generalized plant P.
    systemnames = 'G Wp Wi';
    inputvar = '[ydel(2); w(2); u(2)]';
    outputvar = '[Wi; Wp; -G-w]';
    input_to_G = '[u+ydel]';
    input_to_Wp = '[G+w]';
    input_to_Wi = '[u]';
    sysoutname = 'P'; cleanupsysic = 'yes'; sysic;
    %
    % Initialize.
    omega = logspace(-3,3,61);
    blk = [1 1; 1 1; 2 2];                      % Two 1x1 blocks and one 2x2 block.
    nmeas = 2; nu = 2; gamma = 2; gmin = 0.9; gmax = 1.05*gamma; tol = 0.01;
    d0 = 1; dsysl = daug(d0,d0,eye(2),eye(2)); dsysr = dsysl;
    %
    % START ITERATION.
    %
    % STEP 1: Find H-infinity optimal controller with given scalings:
    DPD = mmult(dsysl,P,minv(dsysr));
    [K,Nsc,gamma] = hinfsyn(DPD,nmeas,nu,gmin,gmax,tol);
    Nf = frsp(Nsc,omega);          % (Remark: without scaling: N = starp(P,K).)
    %
    % STEP 2: Compute mu using upper bound:
    [mubnds,rowd,sens,rowp,rowg] = mu(Nf,blk,'c');
    vplot('liv,m',mubnds);
    murp = pkvnorm(mubnds,inf)
    %
    % STEP 3: Fit resulting D-scales:
    [dsysl,dsysr] = musynflp(dsysl,rowd,sens,blk,nmeas,nu);   % Choose 4th order.
    % New version:
    %   [dsysL,dsysR] = msf(Nf,mubnds,rowd,sens,blk);
    %   dsysl = daug(dsysL,eye(2)); dsysr = daug(dsysR,eye(2));
    %
    % GOTO STEP 1 (unless satisfied with murp).
with an H∞ norm of 1.1823. Step 2: The μ upper bound gave the μ-curve shown as "Iter. 1" in Figure 8.17, corresponding to a peak value of μ = 1.1818. Step 3: The frequency-dependent scalings d1(ω) and d2(ω) from Step 2 were each fitted using a 4th-order transfer function; d1(ω) and the fitted 4th-order transfer function (dotted line) are shown in Figure 8.18. The fit is very good, so it is hard to distinguish the two curves. d2(ω) is not shown, because it was found that d1 ≈ d2 (indicating that the worst-case full-block ΔI is in fact diagonal).

Iteration No. 2. Step 1: With the 8-state scaling D1(s), the H∞ software gave a 22-state controller and ‖D1 N D1⁻¹‖∞ ≈ 1.0238. Step 2: This controller gave a peak μ-value of 1.024. Step 3: The resulting scalings D2 were only slightly changed from the previous iteration, as can be seen from the curve d1(ω) labelled "Iter. 2" in Figure 8.18.

Iteration No. 3. Step 1: With the scalings D2(s), the H∞ norm was only slightly reduced, from 1.024 to 1.019. Since the improvement was very small, and since the μ-value was very close to the desired value of 1, it was decided to stop the iterations. The resulting 22-state controller (denoted K3 in the following) gives a peak μ-value of 1.019.

Figure 8.17: Change in μ during the DK-iteration (Iter. 1, Iter. 2, Iter. 3 and optimal).

Analysis of the "μ-optimal" controller K3

The final μ-curves for NP, RS and RP with controller K3 are shown in Figure 8.19. The objectives of RS and NP are easily satisfied. The peak μ-value of 1.019 with controller K3 is only slightly above 1, so the performance specification σ̄(wP Sp) < 1 is almost satisfied for all possible plants. To confirm this, we considered the nominal plant and six perturbed plants G′i(s) = G(s) EIi(s)
where EI(s) = I + wI(s) ΔI is a diagonal transfer function matrix representing input uncertainty (with nominal EI = I).

Figure 8.18: Change in the D-scale d1 during the DK-iteration (initial, Iter. 1, Iter. 2, optimal, and the 4th-order fit).

Figure 8.19: μ-plots for NP, RS and RP with the "μ-optimal" controller K3.

Recall that the uncertainty weight is wI(s) = (s + 0.2)/(0.5s + 1), which is 0.2 in magnitude at low frequencies. Thus, the following input gain perturbations are allowable:

    EI1 = diag{1.2, 1.2},  EI2 = diag{0.8, 1.2},  EI3 = diag{1.2, 0.8},  EI4 = diag{0.8, 0.8}

These perturbations do not make use of the fact that |wI(jω)| increases with frequency. Two allowed dynamic perturbations for the diagonal elements in wI ΔI are

    ε̂1(s) = (s + 0.2)/(0.5s + 1),    ε̂2(s) = −(s + 0.2)/(0.5s + 1)
corresponding to diagonal elements in EI of f1(s) = 1 + ε̂1(s) and f2(s) = 1 + ε̂2(s), so let us also consider

    EI5 = diag{ f1(s), f2(s) },    EI6 = diag{ f2(s), f1(s) }

The maximum singular value of the sensitivity, σ̄(S′), is shown in Figure 8.20 for the nominal and the six perturbed plants. The sensitivity for the nominal plant is shown by the lower solid line, and those for the perturbed plants G′1 to G′6 by dotted lines. In all seven cases σ̄(S′) lies almost below the bound 1/|wP(jω)|, illustrating that RP is almost satisfied.

Figure 8.20: Perturbed sensitivity functions σ̄(S′) using the "μ-optimal" controller K3. Dotted lines: plants G′1 to G′6. Lower solid line: nominal plant. Upper solid line: worst-case plant.

Figure 8.21: Setpoint response for the "μ-optimal" controller K3. Solid line: nominal plant. Dashed line: uncertain plant G′3.

At low frequencies the worst case corresponds closely to a plant with
gains 1.2 and 0.8, such as G′3 = G EI3, which of the six plants considered seems to be the worst: it has σ̄(S′) close to the bound 1/|wP| at low frequencies, and has a peak of about 2.05 at 0.79 rad/min. However, the "worst-case" plant is not unique, and there are many plants which yield a similar worst-case performance.

Remarks on the μ-synthesis example.

1. By trial and error, and many long nights, Petter Lundström was able to reduce the peak μ-value for robust performance for this problem down to about μopt = 0.974 (Lundström, 1994). The resulting design produces the μ-curves labelled "optimal" in Figures 8.17 and 8.18. The corresponding controller, Kopt, may be synthesized using H∞ synthesis with the third-order D-scales given in (8.144) (with d3 = 1).

2. Note that the μ-"optimal" controller Kopt for this problem has an SVD form. That is, with the SVD G = U Σ V^H, we have Kopt = V Ks U^H, where Ks is a diagonal matrix. This arises because in this example U and V are constant matrices, but it does not hold in general. For more details see Hovd (1992) and Hovd et al. (1994).

3. For this particular plant it appears that the worst-case full-block input uncertainty is a diagonal perturbation, so we might as well have used a full matrix for ΔI.

4. The initial choice of scaling D = I gave a good design for this plant. This scaling worked well because the inputs and outputs had been scaled to be of unit magnitude.
To find the "true" worst-case performance and plant, we used the MATLAB μ-tools command wcperf, as explained in Section 8.10.3. This gives a worst-case performance of max_{Sp} ‖wP Sp‖∞ of about 1.04, and the sensitivity function for the corresponding worst-case plant, G′wc(s) = G(s)(I + wI(s) Δwc(s)), is shown by the upper solid line in Figure 8.20. It has a peak value of σ̄(S′) of about 2.02 (above the allowed bound of 2) at 1.6 rad/min, but at other frequencies it is likely that we could find plants which are more consistently "worse" than the one shown.

The time response of y1 and y2 to a filtered setpoint change in y1, r1 = 1/(5s + 1), is shown in Figure 8.21, both for the nominal case (solid line) and for 20% input gain uncertainty (dashed line) using the plant G′3 (which we know is one of the worst plants). The response is interactive, but shows no strong sensitivity to the uncertainty, and the response with uncertainty is much better than that with the inverse-based controller studied earlier in Chapter 3.

5. Finally, a numerical remark: the H∞ software may encounter numerical problems if P(s) has poles on the jω-axis. This is the reason why, in the MATLAB code, we moved the integrators (in the performance weights) slightly into the left-half plane.
Exercise 8.25 Explain why the μ-optimal value would be the same if we changed the time constant of 75 [min] in the model (8.143) to another value. (Note that the DK-iteration itself would be affected.)

For comparison, consider the original model in Skogestad et al. (1988), which was in terms of unscaled physical variables,

    G_unscaled(s) = 1/(75s + 1) [ 0.878  −0.864 ; 1.082  −1.096 ]    (8.145)

The model (8.145) has all its elements 100 times smaller than in the scaled model (8.143). Therefore, using this model should give the same optimal μ-value, but with controller gains 100 times larger. However, starting the DK-iteration with D = I works very poorly in this case: the first iteration yields a large H∞ norm (Step 1), resulting in a peak μ-value above 5 (Step 2), and subsequent iterations, with 3rd and 4th-order fits of the D-scales, reduce the peak μ-value only slowly: 2.92, 2.22, 1.87, 1.67, 1.58, 1.53, 1.49, 1.46, 1.44, 1.42, ... At this point (after 11 iterations) the μ-plot is fairly flat up to 10 [rad/min], and one may be tempted to stop the iterations. However, we are still far away from the optimal value, which we know is less than 1. This demonstrates the importance of good initial D-scales, which is related to scaling the plant model properly.

8.13 Further remarks on μ

8.13.1 Further justification for the upper bound on μ

For complex perturbations, the scaled singular value σ̄(DND⁻¹) is a tight upper bound on μ(N) in most cases, and minimizing ‖DND⁻¹‖∞ forms the basis for the DK-iteration. Minimizing the upper bound is, however, also of interest in its own right. The reason is that the use of μ assumes the uncertain perturbations to be time-invariant. It may be argued that such perturbations are unlikely in a practical situation; on the other hand,
it can be argued that slowly time-varying uncertainty is a more realistic class than constant perturbations. In this respect the upper bound has a direct interpretation: when all the uncertainty blocks are full and complex, minimizing ‖DND⁻¹‖∞ with constant D-scales (D not allowed to vary with frequency) provides a necessary and sufficient condition for robustness to arbitrarily fast time-varying linear uncertainty (Shamma, 1994), whereas the frequency-dependent upper bound provides a necessary and sufficient condition for robustness to arbitrarily slow time-varying linear uncertainty (Poolla and Tikku, 1995). Therefore, it may be better to minimize the upper bound ‖DND⁻¹‖∞ (with D of the appropriate class) than μ(N) itself. In addition, by considering how D(ω) varies with frequency, one can in some cases find bounds on the allowed time variations in the perturbations. In particular, we see that if we can get an acceptable controller design using constant D-scales, then we know that this controller will work very well even for rapid changes in the plant model.
8.13.2 Real perturbations and the mixed μ problem

We have not discussed in any detail the analysis and design problems which arise with real or, more importantly, mixed real and complex perturbations. The current algorithms, implemented in the MATLAB μ-toolbox, employ a generalization of the upper bound σ̄(DMD⁻¹): in addition to the D-matrices, which exploit the block-diagonal structure of the perturbations, there are G-matrices, which exploit the structure of the real perturbations. The algorithm in the μ-toolbox makes use of the following result from Young et al. (1992): if there exist a β > 0, a D and a G with the appropriate block-diagonal structure such that

    σ̄( (I + G²)^(−1/4) ( (1/β) D M D⁻¹ − jG ) (I + G²)^(−1/4) ) ≤ 1    (8.146)

then μ(M) ≤ β. For more details, the reader is referred to Young (1993). There is also a corresponding DGK-iteration procedure for synthesis (Young, 1994); its practical implementation is, however, difficult, and a very high-order fit may be required for the G-scales. An alternative approach, which involves solving a series of scaled DK-iterations, is given by Tøffner-Clausen et al. (1995).

There also exist a number of lower bounds for computing μ. Most of these involve generating a perturbation which makes I − MΔ singular.

8.13.3 Computational complexity

It has been established that the computation of μ has a combinatoric (non-polynomial, "NP-hard") growth in complexity with the number of parameters involved, even for purely complex perturbations (Toker and Ozbay, 1995). This does not mean, however, that practical algorithms are not possible, and we have described practical algorithms for computing upper bounds of μ for cases with complex, real or mixed real/complex perturbations. As mentioned, the upper bound σ̄(DMD⁻¹) for complex perturbations is generally tight, whereas the present upper bounds for mixed perturbations (see (8.146)) may be arbitrarily conservative.
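The tightness of the complex-μ upper bound can be seen in the simplest case of two 1×1 blocks: for a matrix M with zero diagonal and diagonal Δ, μ(M) = √(|m12 m21|), and the optimal D-scale is explicit, d = √(|m21/m12|). A quick numerical check (matrix values chosen for illustration):

```python
import numpy as np

M = np.array([[0.0, 10.0],
              [0.1,  0.0]])
sv = np.linalg.svd(M, compute_uv=False)[0]            # sigma_bar(M) = 10
d = np.sqrt(abs(M[1, 0] / M[0, 1]))                   # optimal scale = 0.1
D = np.diag([d, 1.0])
sv_scaled = np.linalg.svd(D @ M @ np.linalg.inv(D), compute_uv=False)[0]
print(sv, sv_scaled)   # 10.0 vs 1.0 = sqrt(|m12*m21|), the exact mu
```

Here the unstructured test σ̄(M) overestimates the structured robustness margin by a factor of ten, and the single optimal scaling closes the gap completely.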
The G-matrices (which should not be confused with the plant transfer function G(s)) have real diagonal elements at the locations where Δ is real, and have zeros elsewhere; see also the LMI formulation of the upper bound in (8.151) below. For more details, the reader is referred to Young (1993).
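These upper-bound computations reduce, at each frequency, to matrix inequalities of the kind formalized in (8.151) below. As a preview, the purely complex case can be sketched numerically: σ̄(DMD⁻¹) < β exactly when M^H (D^H D) M − β² (D^H D) is negative definite (matrix values illustrative):

```python
import numpy as np

def lmi_holds(M, D, beta):
    """True iff M^H (D^H D) M - beta^2 (D^H D) < 0,
    i.e. sigma_bar(D M D^-1) < beta."""
    Dh = D.conj().T @ D
    P = M.conj().T @ Dh @ M - beta**2 * Dh
    return bool(np.all(np.linalg.eigvalsh(P) < 0))

M = np.array([[0.0, 4.0], [1.0, 0.0]])
D = np.diag([0.5, 1.0])            # scaling for which sigma_bar(D M D^-1) = 2
print(lmi_holds(M, D, 2.1), lmi_holds(M, D, 1.9))   # True False
```

Iterating on β with such feasibility tests is exactly how the LMI form of the bound is used.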
8.13.4 The discrete case

It is also possible to use μ for analyzing robust performance of discrete-time systems (Packard and Doyle, 1993). First, note that the H∞ norm of a discrete transfer function N(z) = C(zI − A)⁻¹B + D is

    ‖N‖∞ ≜ max_{|z|=1} σ̄( C(zI − A)⁻¹B + D )

This follows since evaluation on the jω-axis in the continuous case is equivalent to evaluation on the unit circle (|z| = 1) in the discrete case. Second, note that N(z) may be written as an LFT in terms of 1/z,

    N(z) = C(zI − A)⁻¹B + D = Fu( H, (1/z)I ),    H = [ A  B ; C  D ]    (8.147)

Thus, by introducing δz = 1/z and Δz = δz I, we have from the main loop theorem of Packard and Doyle (1993) (which generalizes Theorem 8.7) that ‖N‖∞ ≤ 1 (NP) if and only if

    μΔ̂(H) ≤ 1,    Δ̂ = diag{ Δz, ΔP }    (8.148)

where Δz is a matrix of repeated complex scalars, representing the discrete "frequencies", and ΔP is a full complex matrix, representing the singular value performance specification. Condition (8.148) is also referred to as the state-space μ test. We see that the search over frequencies is avoided, but at the expense of a more complicated μ-calculation. The condition in (8.148) only considers nominal performance (NP), but the treatment can be generalized to consider RS and RP. Note that nominal stability (NS) then follows as a special case (and thus does not need to be tested separately), since μΔ̂(H) ≤ 1 implies ρ(A) < 1, which is the well-known stability condition for discrete systems.

Furthermore, since the state-space matrices are contained explicitly in H in (8.147), the discrete-time formulation is convenient if we want to consider parametric uncertainty in the state-space matrices; this results in real perturbations. However, the resulting μ-problem, which involves repeated complex perturbations (from the evaluation of z on the unit circle), a full-block complex perturbation (from the performance specification), and real perturbations (from the uncertainty), is difficult to solve numerically, both for analysis and in particular for synthesis. For this reason, the discrete-time formulation is little used in practical applications.
To be specific, consider a discrete-time system

    x_{k+1} = A x_k + B u_k,    y_k = C x_k + D u_k

The corresponding discrete transfer function matrix from u to y is N(z) = C(zI − A)⁻¹B + D.
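The definition of the discrete H∞ norm can be evaluated directly by sweeping the unit circle; a sketch for a first-order example (the system matrices are mine, chosen for illustration):

```python
import numpy as np

# N(z) = C (zI - A)^-1 B + D with A = 0.5, B = C = 1, D = 0, i.e. N(z) = 1/(z - 0.5).
A = np.array([[0.5]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])

peak = max(
    np.linalg.svd(C @ np.linalg.inv(z * np.eye(1) - A) @ B + D, compute_uv=False)[0]
    for z in np.exp(1j * np.linspace(0.0, np.pi, 2001)))
print(peak)   # 2.0: |z - 0.5| is smallest at z = 1, so the peak is 1/0.5
```

The state-space test (8.148) replaces exactly this frequency sweep by a single structured singular value computation on H.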
8.13.5 Relationship to linear matrix inequalities (LMIs)

An example of an LMI feasibility problem is the following: for given matrices P, Q, R and S, does there exist a matrix X (where X may be restricted to have a block-diagonal structure) such that the matrix inequality (8.149), whose left-hand side is affine in X, is satisfied? Such inequality conditions produce a solution set which is convex, and this convexity is what makes LMIs attractive computationally. Depending on the particular problem, the matrices P, Q, R and S may be functions of a real parameter β, and we may want to know, for example, what is the largest β for which there is no solution. It has been shown that a number of other control problems, including H2 and H∞ optimal control problems, can be reduced to solving LMIs. The reader is referred to Boyd et al. (1994) for more details.

The upper bound for μ can be rewritten as an LMI:

    σ̄(D M D⁻¹) ≤ β  ⟺  (D M D⁻¹)^H (D M D⁻¹) − β² I ≼ 0    (8.150)
                     ⟺  M^H D̂ M − β² D̂ ≼ 0,   D̂ = D^H D ≻ 0    (8.151)

For a fixed β this is an LMI in D̂; to compute the upper bound for μ based on this LMI we need to iterate on β. (Sometimes the non-strict inequality is replaced by a strict one.)

8.14 Conclusion

In this chapter and the last we have discussed how to represent uncertainty, and how to analyze its effect on stability (RS) and performance (RP), using the structured singular value μ as our main tool. To analyze robust stability of an uncertain system we make use of the MΔ-structure (Figure 8.3), where M represents the transfer function for the "new" feedback part generated by the uncertainty. From the small gain theorem,

    RS ⇐ σ̄(M) < 1, ∀ω    (8.152)

which is tight (necessary and sufficient) for the special case where, at each frequency, any complex Δ satisfying σ̄(Δ) ≤ 1 is allowed. More generally, the tight condition is

    RS ⇔ μ(M) < 1, ∀ω    (8.153)

where μ(M) is the structured singular value. The calculation of μ makes use of the fact that Δ has a given block-diagonal structure, where certain blocks may also be real (e.g. to handle parametric uncertainty).
We defined robust performance (RP) as ‖Fu(N, Δ)‖∞ ≤ 1 for all allowed Δ's. Since we used the H∞ norm in both the representation of uncertainty and the definition of performance, we found that RP could be viewed as a special case of RS, and we derived

    RP ⇔ μ(N) < 1, ∀ω    (8.154)

where μ is computed with respect to the block-diagonal structure diag{Δ, ΔP}. Here Δ represents the uncertainty, and ΔP is a fictitious full uncertainty block representing the H∞ performance bound.

It should be noted that there are two main approaches to getting a robust design:

1. We aim to make the system robust to some "general" class of uncertainty which we do not explicitly model. For SISO systems, the classical gain and phase margins and the peaks of S and T provide useful general robustness measures. For MIMO systems, normalized coprime factor uncertainty provides a good general class of uncertainty, and the associated Glover-McFarlane H∞ loop-shaping design procedure (see Chapter 10) has proved itself very useful in applications.

2. We explicitly model and quantify the uncertainty in the plant and aim to make the system robust to this specific uncertainty. This second approach has been the focus of the preceding two chapters. Potentially, it yields better designs, but it may require a much larger effort in terms of uncertainty modelling, especially if parametric uncertainty is considered. Analysis and, in particular, synthesis using μ can be very involved.

In applications, it is therefore recommended to start with the first approach, at least for design.
The robust stability and performance are then analyzed, in simulations and using the structured singular value, for example by considering first simple sources of uncertainty such as multiplicative input uncertainty. One then iterates between design and analysis until a satisfactory solution is obtained.

Practical μ-analysis

We end the chapter by providing a few recommendations on how to use the structured singular value μ in practice:

1. Because of the effort involved in deriving detailed uncertainty descriptions, and the subsequent complexity in synthesizing controllers, the rule is to "start simple" with a crude uncertainty description, and then to see whether the performance specifications can be met. Only if they cannot, should one consider more detailed uncertainty descriptions such as parametric uncertainty (with real perturbations).

2. The use of μ implies a worst-case analysis, so one should be careful about including too many sources of uncertainty, noise and disturbances; otherwise it becomes very unlikely for the worst case to occur, and the resulting analysis and design may be unnecessarily conservative.
3. There is always uncertainty with respect to the inputs and outputs, so it is generally "safe" to include diagonal input and output uncertainty. The relative (multiplicative) form is very convenient in this case.

4. μ is most commonly used for analysis. If μ is used for synthesis, then we recommend that you keep the uncertainty fixed and adjust the parameters in the performance weight until μ is close to 1.
9 CONTROLLER DESIGN

In this chapter, we present practical procedures for multivariable controller design which are relatively straightforward to apply and which, in our opinion, have an important role to play in industrial control.

For industrial systems which are either SISO or loosely coupled, the classical loop-shaping approach to control system design described in Section 2.6 has been successfully applied. But for truly multivariable systems it has only been in the last decade, or so, that reliable generalizations of this classical approach have emerged.

9.1 Trade-offs in MIMO feedback design

The shaping of multivariable transfer functions is based on the idea that a satisfactory definition of gain (range of gain) for a matrix transfer function is given by the singular values of the transfer function. By multivariable transfer function shaping, therefore, we mean the shaping of the singular values of appropriately specified transfer functions, such as the loop transfer function, or possibly one or more closed-loop transfer functions. This methodology for controller design is central to the practical procedures described in this chapter.

In February 1981,
the IEEE Transactions on Automatic Control published a Special Issue on Linear Multivariable Control Systems, the first six papers of which were on the use of singular values in the analysis and design of multivariable feedback systems. The paper by Doyle and Stein (1981) was particularly influential: it was primarily concerned with the fundamental question of how to achieve the benefits of feedback in the presence of unstructured uncertainty, and through the use of singular values it showed how the classical loop-shaping ideas of feedback design could be generalized to multivariable systems. To see how this was done, consider the one degree-of-freedom configuration shown in Figure 9.1. The plant G and controller K interconnection is driven by reference commands r, output disturbances d, and measurement noise n. y are the outputs to be controlled, and u are the control
signals. In terms of the sensitivity function S = (I + GK)⁻¹ and the closed-loop transfer function T = GK(I + GK)⁻¹ = I − S, we have the following important relationships:

Figure 9.1: One degree-of-freedom feedback configuration

    y(s) = T(s) r(s) + S(s) d(s) − T(s) n(s)    (9.1)
    u(s) = K(s) S(s) [ r(s) − n(s) − d(s) ]    (9.2)

These relationships determine several closed-loop objectives, in addition to the requirement that K stabilizes G, namely:

1. For disturbance rejection make σ̄(S) small.
2. For noise attenuation make σ̄(T) small.
3. For reference tracking make σ̄(T) ≈ σ̲(T) ≈ 1.
4. For control energy reduction make σ̄(KS) small.

If the unstructured uncertainty in the plant model is represented by an additive perturbation, i.e. Gp = G + Δ, then we have a further closed-loop objective:

5. For robust stability in the presence of an additive perturbation make σ̄(KS) small.

Alternatively, if the uncertainty is modelled by a multiplicative output perturbation such that Gp = (I + Δ)G, then we have:

6. For robust stability in the presence of a multiplicative output perturbation make σ̄(T) small.

The closed-loop requirements 1 to 6 cannot all be satisfied simultaneously. Feedback design is therefore a trade-off over frequency of conflicting objectives. This is not always as difficult as it sounds, because the frequency ranges over which the objectives are important can be quite different. For example, disturbance rejection is
For disturbance rejection make ´ Ã µ large. 5 and 6 are conditions which are valid frequencies.1 Show that the closedloop objectives 1 to 6 can be approximated by the openloop objectives 1 to 6 at the speciﬁed frequency ranges. but to . It also follows that at the bandwidth frequency (where ½ ´Ë ´ µµ Ô ¾ ½ ½) we have ´Ä´ µµ between 0. For robust stability to an additive perturbation make ´Ã µ small. 4. valid for ½. and for robust stability ´ Ã µ must be forced below a robustness boundary for all above . it is relatively easy to approximate the closedloop requirements by the following openloop objectives: 1. the openloop requirements 1 and 3 are valid and important at low . ¼ Ð and important at high frequencies. Exercise 9. For control energy reduction make ´Ã µ small. For robust stability to a multiplicative output perturbation make ´ Ã µ small. whereas at frequencies where we want low gains (at high frequencies) the “worstcase” direction is related to ´ Ã µ. valid for frequencies at which ´ Ã µ ½.2. That is. valid for frequencies at which ´ Ã µ ½. From Figure 9.41. However. while noise mitigation is often only relevant at higher frequencies.41 and 2. 4. over speciﬁed frequency ranges. from Ì Ä´Á · Äµ ½ it follows that ´Ì µ ´Äµ at frequencies where ´Äµ is small. it is the magnitude of the openloop transfer function Ä Ã which is shaped. From this we see that at frequencies where we want high gains (at low frequencies) the “worstcase” direction is related to ´ Ã µ. while 2. For reference tracking make ´ Ã µ large. 3.ÇÆÌÊÇÄÄ Ê ËÁ Æ ¿ typically a low frequency requirement. as illustrated in Figure 9. valid for frequencies at which ´ Ã µ ½. for good performance. ´ Ã µ must be made to lie above a performance boundary for all up to Ð . For noise attenuation make ´ Ã µ small. frequencies at which ´ Ã µ 6.2. recall from (3. Furthermore. whereas the above design requirements are all in terms of closedloop transfer functions. valid for frequencies at which ´ Ã µ ½. 
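These six objectives pull in opposite directions because S + T = I at every frequency. A minimal scalar sketch (plain Python; the integrator-like loop L(s) = 10/s is an illustrative stand-in for GK, not taken from the text) makes the frequency separation concrete:

```python
def loop(w):
    """Illustrative scalar loop L(jw) = 10/(jw); stands in for GK."""
    return 10.0 / (1j * w)

def sensitivities(w):
    """Return (S, T) = (1/(1+L), L/(1+L)) at frequency w."""
    L = loop(w)
    S = 1.0 / (1.0 + L)   # sensitivity
    T = L / (1.0 + L)     # complementary sensitivity
    return S, T

# Low frequency: |L| is large, so |S| is small (objectives 1 and 3 hold) ...
S_lo, T_lo = sensitivities(0.01)
# ... high frequency: |L| is small, so |T| is small (objectives 2 and 6 hold).
S_hi, T_hi = sensitivities(1000.0)
```

At w = 0.01 the sensitivity magnitude is about 0.001 (good disturbance rejection), while at w = 1000 the complementary sensitivity magnitude is about 0.01 (good noise attenuation): the same controller satisfies conflicting objectives only because they matter in different frequency ranges.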
The above approximations rely on standard singular value inequalities. Recall from (3.51) that

  sigma_min(L) - 1 <= 1/sigma_max(S) <= sigma_min(L) + 1        (9.3)

from which we see that sigma_max(S) is approximately 1/sigma_min(L) at frequencies where sigma_min(L) is much larger than 1. It also follows that at the bandwidth frequency (where 1/sigma_max(S(jwB)) = sqrt(2) = 1.41) we have sigma_min(L(jwB)) between 0.41 and 2.41. Furthermore, from T = L(I + L)^-1 it follows that sigma_max(T) is approximately sigma_max(L) at frequencies where sigma_max(L) is small.

Thus, it follows from Figure 9.2 that the control engineer must design K so that sigma_max(GK) and sigma_min(GK) avoid the shaded regions. That is, for good performance, sigma_min(GK) must be made to lie above a performance boundary for all w up to wl, and for robust stability sigma_max(GK) must be forced below a robustness boundary for all w above wh. To shape the singular values of GK by selecting K is a relatively easy task, but to do this in a way which also guarantees closed-loop stability is in general difficult.
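The bandwidth statement following (9.3) can be checked numerically in the SISO case. A minimal sketch (plain Python; the loop L(s) = 2/(s(s+1)) is an illustrative assumption): it locates the frequency where |S| = 1/sqrt(2) by bisection and confirms that the loop gain there lies between 0.41 and 2.41:

```python
import math

def L(w):
    """Illustrative loop transfer function L(jw) = 2/(jw (jw + 1))."""
    s = 1j * w
    return 2.0 / (s * (s + 1.0))

def S_mag(w):
    """|S(jw)| = 1/|1 + L(jw)| for the scalar loop."""
    return abs(1.0 / (1.0 + L(w)))

def bandwidth(lo=0.1, hi=2.0, tol=1e-9):
    """Bisect for the frequency where |S| crosses 1/sqrt(2)."""
    target = 1.0 / math.sqrt(2.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S_mag(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

wb = bandwidth()
gain_at_wb = abs(L(wb))   # must lie in [sqrt(2)-1, sqrt(2)+1], i.e. [0.41, 2.41]
```

Since |1 + L(jwB)| = sqrt(2) at the bandwidth, the triangle inequality alone pins |L(jwB)| into the interval [0.41, 2.41], whatever the particular loop shape.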
[Figure 9.2: Design trade-offs for the multivariable loop transfer function GK. At low frequencies (up to wl) sigma_min(GK) must lie above a performance boundary; at high frequencies (above wh) sigma_max(GK) must lie below a boundary for robust stability, noise attenuation and control energy reduction.]

Closed-loop stability cannot be determined from open-loop singular values. For SISO systems, it is clear from Bode's work (1945) that closed-loop stability is closely related to open-loop gain and phase near the crossover frequency wc, where |GK(jwc)| = 1. In particular, the roll-off rate from high to low gain at crossover is limited by phase requirements for stability, and in practice this corresponds to a roll-off rate less than 40 dB/decade; see Section 2.6. For MIMO systems a similar gain-phase relationship holds in the crossover frequency region, but this is in terms of the eigenvalues of GK and results in a limit on the roll-off rate of the magnitude of the eigenvalues of GK, not the singular values (Doyle and Stein, 1981), which implies that stability margins vary from one break point to another in a multivariable system. The stability constraint is therefore even more difficult to handle in multivariable loop-shaping than it is in classical loop-shaping. An immediate consequence of this is that there is a lower limit to the difference between wh and wl in Figure 9.2.

To overcome this difficulty Doyle and Stein (1981) proposed that the loop-shaping should be done with a controller that was already known to guarantee stability. They suggested that an LQG controller could be used in which the regulator part is designed using a "sensitivity recovery" procedure of Kwakernaak (1969) to give desirable properties (gain and phase margins) in GK. They also gave a dual "robustness recovery" procedure for designing the filter in an LQG controller to give desirable properties in KG. Recall that KG is not in general equal to GK. Both of these loop transfer recovery (LTR) procedures are discussed below after first describing traditional LQG control.
9.2 LQG control

Optimal control, building on the optimal filtering work of Wiener in the 1940's, reached maturity in the 1960's with what we now call linear quadratic Gaussian or LQG control. Its development coincided with large research programs and considerable funding in the United States and the former Soviet Union on space-related problems. These were problems, such as rocket manoeuvring with minimum fuel consumption, which could be well defined and easily formulated as optimizations. Aerospace engineers were particularly successful at applying LQG, but when other control engineers attempted to use LQG on everyday industrial problems a different story emerged. Accurate plant models were frequently not available, and the assumption of white noise disturbances was not always relevant or meaningful to practising control engineers. As a result, LQG designs were sometimes not robust enough to be used in practice.

In this section, we will describe the LQG problem and its solution, we will discuss its robustness properties, and we will describe procedures for improving robustness. Many text books consider this topic in far greater detail; we recommend Anderson and Moore (1989) and Kwakernaak and Sivan (1972).

9.2.1 Traditional LQG and LQR problems

In traditional LQG control, it is assumed that the plant dynamics are linear and known, and that the measurement noise and disturbance signals (process noise) are stochastic with known statistical properties. That is, we have a plant model

  dx/dt = A x + B u + wd        (9.4)
  y = C x + D u + wn            (9.5)

where we for simplicity set D = 0 (see the remark below). Here wd and wn are the disturbance (process noise) and measurement noise inputs respectively, which are usually assumed to be uncorrelated zero-mean Gaussian stochastic processes with constant power spectral density matrices W and V respectively. That is, wd and wn are white noise processes with covariances

  E{ wd(t) wd(tau)' } = W delta(t - tau)        (9.6)
  E{ wn(t) wn(tau)' } = V delta(t - tau)        (9.7)

and

  E{ wd(t) wn(tau)' } = 0,   E{ wn(t) wd(tau)' } = 0        (9.8)

where E is the expectation operator and delta(t - tau) is a delta function.
The LQG control problem is to find the optimal control u(t) which minimizes

  J = E{ lim_{T->inf} (1/T) int_0^T [ x'Qx + u'Ru ] dt }        (9.9)

where Q and R are appropriately chosen constant weighting matrices (design parameters) such that Q = Q' >= 0 and R = R' > 0. The name LQG arises from the use of a Linear model, an integral Quadratic cost function, and Gaussian white noise processes to model disturbance signals and noise.

The solution to the LQG problem, known as the Separation Theorem or Certainty Equivalence Principle, is surprisingly simple and elegant. It consists of first determining the optimal control for a deterministic linear quadratic regulator (LQR) problem: namely, the above LQG problem without wd and wn. It happens that the solution to this problem can be written in terms of the simple state feedback law

  u(t) = -Kr x(t)        (9.10)

where Kr is a constant matrix which is easy to compute and is clearly independent of W and V, the statistical properties of the plant noise. The next step is to find an optimal estimate xhat of the state x, so that E{ [x - xhat]'[x - xhat] } is minimized. The optimal state estimate is given by a Kalman filter and is independent of Q and R. The required solution to the LQG problem is then found by replacing x by xhat, to give u(t) = -Kr xhat(t). We therefore see that the LQG problem and its solution can be separated into two distinct parts, as illustrated in Figure 9.3.

[Figure 9.3: The Separation Theorem. The plant, driven by wd and wn, feeds the measurement y to a Kalman filter (a dynamic system) whose state estimate xhat is multiplied by the constant gain -Kr of a deterministic linear quadratic regulator to give u.]
[Figure 9.4: The LQG controller and noisy plant.]

We will now give the equations necessary to find the optimal state-feedback matrix Kr and the Kalman filter.

Optimal state feedback. The LQR problem, where all the states are known, is the deterministic initial value problem: given the system dx/dt = Ax + Bu with a non-zero initial state x(0), find the input signal u(t) which takes the system to the zero state (x = 0) in an optimal manner, i.e. by minimizing the deterministic cost

  Jr = int_0^inf ( x(t)'Q x(t) + u(t)'R u(t) ) dt        (9.11)

The optimal solution (for any initial state) is u(t) = -Kr x(t), where

  Kr = R^-1 B' X        (9.12)

and X = X' >= 0 is the unique positive-semidefinite solution of the algebraic Riccati equation

  A'X + XA - XBR^-1B'X + Q = 0        (9.13)

Kalman filter. The Kalman filter has the structure of an ordinary state estimator or observer, as shown in Figure 9.4, with

  dxhat/dt = A xhat + B u + Kf ( y - C xhat )        (9.14)
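The observer structure of (9.14) can be illustrated with a scalar simulation. In the sketch below (plain Python, forward-Euler integration; the plant numbers, the gain Kf = 5 and the input u = -2x are illustrative assumptions rather than optimal LQG choices, and the noise is set to zero) the estimation error obeys de/dt = (A - Kf C)e and dies out:

```python
A, B, C = 1.0, 1.0, 1.0   # unstable scalar plant dx/dt = x + u (illustrative)
Kf = 5.0                  # illustrative observer gain; A - Kf*C = -4 is stable

def simulate(T=5.0, dt=1e-3):
    """Euler-integrate plant and observer; return the final error |x - xhat|."""
    x, xhat = 1.0, 0.0    # true state starts at 1, estimate at 0
    for _ in range(int(T / dt)):
        u = -2.0 * x                               # some stabilizing input
        y = C * x                                  # noise-free measurement
        x_new = x + dt * (A * x + B * u)           # plant step
        xhat += dt * (A * xhat + B * u + Kf * (y - C * xhat))  # observer (9.14)
        x = x_new
    return abs(x - xhat)

final_error = simulate()
```

With A = 1 and Kf = 5 the error decays like exp(-4t), so after 5 seconds it is far below 1e-6 even though the plant itself is open-loop unstable.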
The optimal choice of Kf, which minimizes E{ [x - xhat]'[x - xhat] }, is given by

  Kf = Y C' V^-1        (9.15)

where Y = Y' >= 0 is the unique positive-semidefinite solution of the algebraic Riccati equation

  YA' + AY - YC'V^-1CY + W = 0        (9.16)

LQG: combined optimal state estimation and optimal state feedback. The LQG control problem is to minimize J in (9.9). The structure of the LQG controller is illustrated in Figure 9.4; its transfer function, from y to u (i.e. assuming positive feedback), is easily shown to be given by

  K_LQG(s) = [ A - B Kr - Kf C | Kf ; -Kr | 0 ]        (9.17)

(state-space realization with A-matrix A - BKr - KfC, B-matrix Kf, C-matrix -Kr and D-matrix 0). It has the same degree (number of poles) as the plant.

Remark 1. The optimal gain matrices Kf and Kr exist, and the LQG-controlled system is internally stable, provided the systems with state-space realizations (A, B, Q^{1/2}) and (A, W^{1/2}, C) are stabilizable and detectable.

Remark 2. If the plant model is bi-proper, with a non-zero D-term in (9.5), then the Kalman filter equation (9.14) has the extra term -Kf D u on the right-hand side, and the A-matrix of the LQG controller in (9.17) has the extra term +Kf D Kr.

Exercise 9.2 For the plant and LQG controller arrangement of Figure 9.4, show that the closed-loop dynamics are described by

  d/dt [ x ; x - xhat ] = [ A - BKr , BKr ; 0 , A - KfC ] [ x ; x - xhat ] + [ I , 0 ; I , -Kf ] [ wd ; wn ]

This shows that the closed-loop poles are simply the union of the poles of the deterministic LQR system (eigenvalues of A - BKr) and the poles of the Kalman filter (eigenvalues of A - KfC). It is exactly as we would have expected from the Separation Theorem.

For the LQG controller, as shown in Figure 9.4, it is not easy to see where to position the reference input r, and how integral action may be included, if desired. One strategy is illustrated in Figure 9.5: here the control error r - y is integrated and the regulator Kr is designed for the plant augmented with these integrated states.

Example 9.1 LQG design for inverse response process. We here design an LQG controller with integral action (see Figure 9.5) for the SISO inverse response process introduced in Chapter 2.
[Figure 9.5: LQG controller with integral action and reference input.]

Recall from (2.26) that the plant model has a RHP-zero and is given by

  G(s) = 3(-2s + 1) / ( (5s + 1)(10s + 1) )

1. The standard LQG design procedure does not give a controller with integral action, so we augment the plant G(s) with an integrator before designing the state feedback regulator. For the objective function J = int ( x'Qx + u'Ru ) dt we choose Q such that only the integrated states y - r are weighted, and we choose the input weight R = 1. (Only the ratio between Q and R matters, and reducing R yields a faster response.)

2. The Kalman filter is set up so that we do not estimate the integrated states. For the noise weights we select W = wd I (process noise directly on the states) with wd = 1, and we choose V = 1 (measurement noise). (Only the ratio between wd and V matters, and reducing V yields a faster response.)

The MATLAB file in Table 9.1 was used to design the LQG controller. The resulting closed-loop response is shown in Figure 9.6.

[Figure 9.6: LQG design for inverse response process. Closed-loop response y(t) and input u(t) for a unit step in the reference, 0 to 50 sec.]

The response is good and compares nicely with the loop-shaping design in Figure 2.17.

Exercise 9.3 Derive the equations used in the MATLAB file in Table 9.1.
Table 9.1: MATLAB commands to generate LQG controller in Example 9.1

  % Uses the Control toolbox
  num=3*[-2 1];                     % original plant
  den=conv([5 1],[10 1]);
  [a,b,c,d]=tf2ss(num,den);         % plant is (a,b,c,d)
  % Model dimensions:
  p=size(c,1);                      % no. of outputs (y)
  [n,m]=size(b);                    % no. of states and inputs (u)
  Onn=zeros(n,n); Onm=zeros(n,m);
  Omn=zeros(m,n); Omm=zeros(m,m);
  % 1) Design state feedback regulator
  A=[a Onm; -c Omm];                % augment plant with integrators
  B=[b; Omm];
  Q=[Onn Onm; Omn eye(m,m)];        % weight on integrated error
  R=eye(m);                         % input weight
  Kr=lqr(A,B,Q,R);                  % optimal state-feedback regulator
  Krp=Kr(1:m,1:n);                  % extract state feedback ...
  Kri=Kr(1:m,n+1:n+m);              % ... and integrator feedback
  % 2) Design Kalman filter
  Bnoise=eye(n);                    % process noise model; don't estimate integrator states
  W=eye(n);                         % process noise weight
  V=1*eye(m);                       % measurement noise weight
  Ke=lqe(a,Bnoise,c,W,V);           % Kalman filter gain
  % 3) Form controller from [r y]' to u
  Ac=[Omm Omn; -b*Kri a-b*Krp-Ke*c];   % integrators included
  Bc=[eye(m) -eye(m); Onm Ke];
  Cc=[-Kri -Krp];
  Dc=[Omm Omm];
  Klqg=ss(Ac,Bc,Cc,Dc);             % final 2-DOF controller, Kr and Kalman filter

9.2.2 Robustness properties

For an LQG-controlled system with a combined Kalman filter and LQR control law, there are no guaranteed stability margins. This was brought starkly to the attention of the control community by Doyle (1978), in a paper entitled "Guaranteed Margins for LQG Regulators" with a very compact abstract which simply states "There are none". He showed, by example, that there exist LQG combinations with arbitrarily small gain margins.

However, for an LQR-controlled system (i.e. assuming all the states are available and no stochastic inputs) it is well known (Kalman, 1964; Safonov and Athans, 1977) that, if the weight R is chosen to be diagonal, the sensitivity function S = ( I + Kr(sI - A)^-1 B )^-1 satisfies the Kalman inequality

  sigma_max( S(jw) ) <= 1,  for all w        (9.18)

From this it can be shown that the system will have a gain margin equal to infinity, a gain reduction margin equal to 0.5, and a (minimum) phase margin of 60 degrees in each plant input control channel. This means that in the LQR-controlled system u = -Kr x, a complex perturbation diag{ ki exp(j thetai) } can be introduced at the plant inputs without causing instability providing

(i) thetai = 0 and 0.5 <= ki <= infinity, or
(ii) ki = 1 and |thetai| <= 60 degrees,

for i = 1, ..., m, where m is the number of plant inputs. For a single-input plant, the above shows that the Nyquist diagram of the open-loop regulator transfer function Kr(sI - A)^-1 B will always lie outside the unit circle with centre -1. This was first shown by Kalman (1964), and is illustrated in Example 9.2 below.

Example 9.2 LQR design of a first-order process. Consider a first-order process G(s) = 1/(s - a) with the state-space realization

  dx/dt = a x + u,   y = x

so that the state is directly measured. For a non-zero initial state the cost function to be minimized is

  Jr = int_0^inf ( x^2 + R u^2 ) dt

The algebraic Riccati equation (9.13) becomes (with B = C = 1 and Q = 1)

  2aX - X^2/R + 1 = 0   =>   X = R ( a + sqrt(a^2 + 1/R) )   (taking X >= 0)

The optimal control is u = -Kr x with Kr = X/R = a + sqrt(a^2 + 1/R), and we get the closed-loop system dx/dt = (a - Kr)x, with closed-loop pole at

  s = -sqrt(a^2 + 1/R) < 0

Thus, the root locus for the optimal closed-loop pole with respect to R starts at s = -|a| for R = infinity (infinite weight on the input) and moves to -infinity along the real axis as R approaches zero. Note that the root locus is identical for stable (a < 0) and unstable (a > 0) plants G(s) with the same value of |a|. For a > 0 we see that the minimum input energy needed to stabilize the plant (corresponding to R = infinity) is obtained with the input u = -2a x, which moves the pole from s = a to its mirror image at s = -a. For R small ("cheap control") the gain crossover frequency wc of the loop transfer function L = Kr/(s - a) is given approximately by wc = 1/sqrt(R).
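The closed-form expressions in this example are easy to verify numerically. A small sketch (plain Python; it simply restates the scalar formulas derived above):

```python
import math

def lqr_scalar(a, R):
    """Scalar LQR for dx/dt = a x + u with cost int (x^2 + R u^2) dt.
    The Riccati equation (9.13) reduces to 2 a X - X^2/R + 1 = 0;
    take the non-negative root."""
    X = R * (a + math.sqrt(a * a + 1.0 / R))   # positive-semidefinite solution
    Kr = X / R                                  # optimal gain, u = -Kr x
    pole = a - Kr                               # closed-loop pole -sqrt(a^2 + 1/R)
    return Kr, pole

# Mirror-image property: for R -> infinity and a > 0, Kr -> 2a and pole -> -a.
Kr_exp, pole_exp = lqr_scalar(1.0, 1e8)
# Cheap control: for small R the pole magnitude grows like 1/sqrt(R).
Kr_cheap, pole_cheap = lqr_scalar(1.0, 1e-4)
```

Note the mirror-image property: lqr_scalar(1.0, 1e8) returns a gain near 2a = 2 and a pole near -a = -1, while for R = 1 the stable and unstable plants a = +1 and a = -1 give the same closed-loop pole -sqrt(2).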
For L = Kr(sI - A)^-1 B, the Kalman inequality (9.18) reads |S(jw)| = 1/|1 + L(jw)| <= 1 at all frequencies. Thus, the Nyquist plot of L(jw) avoids the unit disc centred on the critical point -1. This is obvious for the stable plant with a < 0, since Kr > 0 and the phase of L(jw) then varies from 0 degrees (at zero frequency) to -90 degrees (at infinite frequency). The surprise is that it is also true for the unstable plant with a > 0, even though the phase of L(jw) varies from -180 degrees to -90 degrees. Note also that L(jw) has a roll-off of 1 at high frequencies, which is a general property of LQR designs.

Consider now the Kalman filter shown earlier in Figure 9.4. Notice that it is itself a feedback system. Arguments dual to those employed for the LQR-controlled system can then be used to show that, if the power spectral density matrix V is chosen to be diagonal, then dual stability margins are obtained, as follows.
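The circle condition implied by (9.18) — the Nyquist plot of L staying outside the unit disc centred on -1 — can be spot-checked on a frequency grid. A minimal sketch (plain Python, scalar plant as in Example 9.2; the grid range is an arbitrary choice):

```python
import math

def lqr_gain(a, R):
    """Optimal scalar LQR gain from 2 a X - X^2/R + 1 = 0."""
    return a + math.sqrt(a * a + 1.0 / R)

def min_dist_to_minus_one(a, R, n=2000):
    """Smallest |1 + L(jw)| over a log-spaced grid, with L = Kr/(jw - a).
    The Kalman inequality predicts this never drops below 1."""
    Kr = lqr_gain(a, R)
    dmin = float("inf")
    for k in range(n):
        w = 10.0 ** (-3 + 6.0 * k / (n - 1))   # 1e-3 ... 1e3 rad/s
        L = Kr / (1j * w - a)
        dmin = min(dmin, abs(1.0 + L))
    return dmin
```

In fact |1 + L(jw)|^2 = (w^2 + (Kr - a)^2)/(w^2 + a^2) here, and Kr - a = sqrt(a^2 + 1/R) >= |a|, so the bound holds for the stable and the unstable plant alike.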
At the input to the Kalman gain matrix Kf there will be an infinite gain margin, a gain reduction margin of 0.5, and a minimum phase margin of 60 degrees. Consequently, for a single-output plant, the Nyquist diagram of the open-loop filter transfer function C(sI - A)^-1 Kf will lie outside the unit circle with centre at -1.

An LQR-controlled system has good stability margins at the plant inputs, and a Kalman filter has good stability margins at the inputs to Kf, so why are there no guarantees for LQG control? To answer this, consider the LQG controller arrangement shown in Figure 9.7.

[Figure 9.7: LQG-controlled plant, with labelled loop-breaking points 1 to 4 (1 at the plant input, 2 at the plant output, 3 and 4 inside the controller).]

The loop transfer functions associated with the labelled points 1 to 4 are respectively

  L1(s) = K_LQG(s) G(s)        (9.19)
  L2(s) = G(s) K_LQG(s)        (9.20)
  L3(s) = Kr Phi(s) B   (regulator transfer function)        (9.21)
  L4(s) = C Phi(s) Kf   (filter transfer function)           (9.22)

where K_LQG(s) is as in (9.17) and may be written K_LQG(s) = Kr [ Phi(s)^-1 + B Kr + Kf C ]^-1 Kf, with Phi(s) = (sI - A)^-1, and

  G(s) = C Phi(s) B        (9.23)

is the plant model.
Remark. The loop transfer functions L3(s) and L4(s) are surprisingly simple. For L3(s) the reason is that after opening the loop at point 3, the error dynamics (point 4) of the Kalman filter are not excited by the plant inputs; in fact they are uncontrollable from u.

Exercise 9.4 Derive the expressions for L1(s), L2(s), L3(s) and L4(s), and explain why L4(s) (like L3(s)) has such a simple form.

At points 3 and 4 we have the guaranteed robustness properties of the LQR system and the Kalman filter respectively. But at the actual input and output of the plant (points 1 and 2), where we are most interested in achieving good stability margins, we have complex transfer functions which in general give no guarantees of satisfactory robustness properties. Notice also that points 3 and 4 are effectively inside the LQG controller, which has to be implemented, most likely as software, and so we have good stability margins where they are not really needed and no guarantees where they are.

Fortunately, for a minimum-phase plant, procedures developed by Kwakernaak (1969) and Doyle and Stein (1979; 1981) show how, by a suitable choice of parameters, either L1(s) can be made to tend asymptotically to L3(s), or L2(s) can be made to approach L4(s). These procedures are considered next.

9.2.3 Loop transfer recovery (LTR) procedures

For full details of the recovery procedures, we refer the reader to the original communications (Kwakernaak, 1969; Doyle and Stein, 1979; 1981) or to the tutorial paper by Stein and Athans (1987). For a more recent appraisal of LTR, we recommend the Special Issue of the International Journal of Robust and Nonlinear Control, edited by Niemann and Stoustrup (1995). We will only give an outline of the major steps here, since we will argue later that the procedures are somewhat limited for practical control system design.

The LQG loop transfer function L2(s) can be made to approach C Phi(s) Kf, with its guaranteed stability margins, if Kr in the LQR problem is designed to be large using the "sensitivity recovery" procedure of Kwakernaak (1969). It is necessary to assume that the plant model G(s) is minimum phase and that it has at least as many inputs as outputs. Alternatively, the LQG loop transfer function L1(s) can be made to approach Kr Phi(s) B by designing Kf in the Kalman filter to be large using the "robustness recovery" procedure of Doyle and Stein (1979). Again, it is necessary to assume that the plant model G(s) is minimum phase, but this time it must have at least as many outputs as inputs.

The procedures are dual, and therefore we will only consider recovering robustness at the plant output. That is, we aim to make L2(s) = G(s) K_LQG(s) approximately equal to the Kalman filter transfer function C Phi(s) Kf.
First, we design a Kalman filter whose transfer function C Phi(s) Kf is desirable. This is done by choosing the power spectral density matrices W and V so that the minimum singular value of C Phi(s) Kf is large enough at low frequencies for good performance and its maximum singular value is small enough at high frequencies for robust stability, as discussed in Section 9.1. Notice that W and V are being used here as design parameters, and their associated stochastic processes are considered to be fictitious. In tuning W and V we should be careful to choose V as diagonal and W in a factored form involving a scaling matrix S, which can be used to balance, raise, or lower the singular values. When the singular values of C Phi(s) Kf are thought to be satisfactory, loop transfer recovery is achieved by designing Kr in an LQR problem with Q = C'C and R = rho I, where rho is a scalar. As rho tends to zero, G(s) K_LQG tends to the desired loop transfer function C Phi(s) Kf.

Much has been written on the use of LTR procedures in multivariable control system design. But as methods for multivariable loop-shaping they are limited in their applicability and sometimes difficult to use. Their main limitation is to minimum phase plants. This is because the recovery procedures work by cancelling the plant zeros, and a cancelled non-minimum phase zero would lead to instability. The cancellation of lightly damped zeros is also of concern because of undesirable oscillations at these modes during transients. A further disadvantage is that the limiting process (rho -> 0) which brings about full recovery also introduces high gains which may cause problems with unmodelled dynamics. Because of the above disadvantages, the recovery procedures are not usually taken to their limits (rho -> 0) to achieve full recovery; rather, a set of designs is obtained (for small rho) and an acceptable design is selected. The result is a somewhat ad hoc design procedure in which the singular values of a loop transfer function, G(s)K_LQG(s) or K_LQG(s)G(s), are indirectly shaped, in an iterative fashion. A more direct and intuitively appealing method for multivariable loop-shaping will be given in Section 9.4.
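The recovery mechanism can be watched in the scalar case. In the sketch below (plain Python; the plant a = 1 with B = C = 1, the fixed filter gain Kf = 3 and the evaluation frequency w = 1 are illustrative assumptions, and the scalar controller transfer function Kr Kf/(s - a + Kr + Kf) follows from the realization (9.17)) the gap between G K_LQG and the target filter loop shrinks as the LQR weight rho tends to zero:

```python
import math

a = 1.0   # scalar plant G(s) = 1/(s - a), B = C = 1 (illustrative)
Kf = 3.0  # fixed Kalman/filter gain defining the design target

def recovery_gap(rho, w=1.0):
    """|G K_LQG - C Phi Kf| at s = jw for LQR weights Q = 1, R = rho."""
    Kr = a + math.sqrt(a * a + 1.0 / rho)  # scalar LQR gain grows as rho -> 0
    s = 1j * w
    K_lqg = Kr * Kf / (s - a + Kr + Kf)    # scalar form of (9.17)
    L2 = (1.0 / (s - a)) * K_lqg           # loop seen at the plant output
    target = Kf / (s - a)                  # Kalman filter loop C Phi(s) Kf
    return abs(L2 - target)

gaps = [recovery_gap(rho) for rho in (1.0, 1e-2, 1e-4, 1e-6)]
```

Each hundred-fold reduction in rho shrinks the gap by roughly a factor of ten, at the price of an ever larger state-feedback gain Kr — the high-gain drawback noted above.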
9.3 H2 and Hinf control

Motivated by the shortcomings of LQG control, there was in the 1980's a significant shift towards Hinf optimization for robust control. This development originated from the influential work of Zames (1981), although an earlier use of Hinf optimization in an engineering context can be found in Helton (1976). Zames argued that the poor robustness properties of LQG could be attributed to the integral criterion in terms of the H2 norm, and he also criticized the representation of uncertain disturbances by white noise processes as often unrealistic. As the Hinf theory developed, however, the two approaches of H2 and Hinf control were seen to be more closely related than originally thought, particularly in the solution process; see for example Glover and Doyle (1988) and Doyle et al. (1989). In this section, we will begin with a general control problem formulation into which we can cast all H2 and Hinf optimizations of practical interest.
The general H2 and Hinf problems will be described along with some specific and typical control problems. It is not our intention to describe in detail the mathematical solutions, since efficient, commercial software for solving such problems is now so easily available. Rather, we seek to provide an understanding of some useful problem formulations, which might then be used by the reader, or modified to suit his or her application.

9.3.1 General control problem formulation

There are many ways in which feedback design problems can be cast as H2 and Hinf optimization problems. It is very useful therefore to have a standard problem formulation into which any particular problem may be manipulated. Such a general formulation is afforded by the general configuration shown in Figure 9.8 and discussed earlier in Chapter 3.

[Figure 9.8: General control configuration, with generalized plant P, controller K, exogenous inputs w, error signals z, control variables u and measured variables v.]

The system of Figure 9.8 is described by

  [ z ; v ] = P(s) [ w ; u ] = [ P11(s) P12(s) ; P21(s) P22(s) ] [ w ; u ]        (9.24)
  u = K(s) v        (9.25)

with a state-space realization of the generalized plant P given by

  P = [ A  | B1  B2  ]
      [ C1 | D11 D12 ]
      [ C2 | D21 D22 ]        (9.26)

The signals are: u the control variables, v the measured variables, w the exogenous signals such as disturbances wd and commands r, and z the so-called "error" signals which are to be minimized in some sense to meet the control objectives. As shown in (3.102), the closed-loop transfer function from w to z is given by the linear fractional transformation

  z = Fl(P, K) w        (9.27)

where

  Fl(P, K) = P11 + P12 K (I - P22 K)^-1 P21        (9.28)

H2 and Hinf control involve the minimization of the H2 and Hinf norms of Fl(P, K) respectively. We will consider each of them in turn.

First some remarks about the algorithms used to solve such problems. The most general, widely available and widely used algorithms for H2 and Hinf control problems are based on the state-space solutions in Glover and Doyle (1988) and Doyle et al. (1989). It is worth mentioning again that the similarities between H2 and Hinf theory are most clearly evident in the aforementioned algorithms. For example, both H2 and Hinf require the solutions to two Riccati equations, they both give controllers of state-dimension equal to that of the generalized plant P, and they both exhibit a separation structure in the controller already seen in LQG control. An algorithm for Hinf control problems is summarized in Section 9.3.4.

The following assumptions are typically made in H2 and Hinf problems:

(A1) (A, B2, C2) is stabilizable and detectable.
(A2) D12 and D21 have full rank.
(A3) [ A - jwI , B2 ; C1 , D12 ] has full column rank for all w.
(A4) [ A - jwI , B1 ; C2 , D21 ] has full row rank for all w.
(A5) D11 = 0 and D22 = 0.

Assumption (A1) is required for the existence of stabilizing controllers K, and assumption (A2) is sufficient to ensure the controllers are proper and hence realizable. Assumptions (A3) and (A4) ensure that the optimal controller does not try to cancel poles or zeros on the imaginary axis, which would result in closed-loop instability. Assumption (A5) is conventional in H2 control. D11 = 0 makes P11 strictly proper; recall that H2 is the set of strictly proper stable transfer functions. D22 = 0 makes P22 strictly proper and simplifies the formulas in the H2 algorithms. In Hinf, neither D11 = 0 nor D22 = 0 is required, but they do significantly simplify the algorithm formulas. If they are not zero, an equivalent Hinf problem can be constructed in which they are; see for example Safonov et al. (1989) and Green and Limebeer (1995). This can be achieved, without loss of generality, by a scaling of u and v and a unitary transformation of w and z; see for example Maciejowski (1989). In addition, for simplicity of exposition, the following additional assumptions are sometimes made:
(A6) D12 = [ 0 ; I ] and D21 = [ 0  I ].
(A7) D12' C1 = 0 and B1 D21' = 0.
(A8) (A, B1) is stabilizable and (A, C1) is detectable.

Assumption (A6) fixes a convenient normalization of D12 and D21. Assumption (A7) is common in H2 control, e.g. in LQG where there are no cross terms in the cost function (D12' C1 = 0), and the process noise and measurement noise are uncorrelated (B1 D21' = 0). Notice that if (A7) holds, then (A3) and (A4) may be replaced by (A8).

Whilst the above assumptions may appear daunting, most sensibly posed control problems will meet them. Therefore, if the software (e.g. mu-tools or the Robust Control toolbox of MATLAB) complains, then it probably means that your control problem is not well formulated and you should think again.

Lastly, it should be said that Hinf algorithms, in general, find a suboptimal controller. That is, for a specified gamma, a stabilizing controller is found for which ||Fl(P,K)||inf < gamma. If an optimal controller is required, then the algorithm can be used iteratively, reducing gamma until the minimum is reached within a given tolerance. This contrasts significantly with H2 theory, in which the optimal controller is unique and can be found from the solution of just two Riccati equations.

9.3.2 H2 optimal control

The standard H2 optimal control problem is to find a stabilizing controller K which minimizes

  ||F||2 = sqrt( (1/2 pi) int_{-inf}^{inf} tr[ F(jw) F(jw)^H ] dw ),   F = Fl(P, K)        (9.29)

For a particular problem, the generalized plant P will include the plant model, the interconnection structure, and the designer-specified weighting functions. As discussed in Section 4.10.1 and noted in Tables A.1 and A.2 on page 538, the H2 norm can be given different deterministic interpretations. It also has the following stochastic interpretation. Suppose in the general control configuration that the exogenous input w is white noise of unit intensity. That is,

  E{ w(t) w(tau)' } = I delta(t - tau)        (9.30)

The expected power in the error signal z is then given by

  E{ lim_{T->inf} (1/2T) int_{-T}^{T} z(t)' z(t) dt }        (9.31)
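The frequency-domain definition (9.29) can be cross-checked against a state-space (Gramian) computation in the scalar case. A sketch (plain Python; the stable system G(s) = bc/(s + alpha) and its numbers are illustrative assumptions):

```python
import math

# H2 norm of G(s) = b*c/(s + alpha), i.e. state-space A = -alpha, B = b, C = c:
#  (1) via the controllability Gramian P solving A P + P A' + B B' = 0,
#      giving ||G||_2^2 = C P C';
#  (2) by numerically integrating (1/2pi) * int |G(jw)|^2 dw.
alpha, b, c = 2.0, 1.0, 3.0

def h2_gramian():
    P = b * b / (2.0 * alpha)          # scalar Lyapunov solution
    return math.sqrt(c * c * P)

def h2_frequency(wmax=1e4, n=100000):
    total, dw = 0.0, wmax / n
    for k in range(n):
        w = (k + 0.5) * dw             # midpoint rule on [0, wmax]
        total += abs(b * c / (1j * w + alpha)) ** 2 * dw
    return math.sqrt(2.0 * total / (2.0 * math.pi))  # integrand is even in w

norm_exact = h2_gramian()
```

Both routes give ||G||2 = |bc|/sqrt(2 alpha) = 1.5 here; the Gramian route is exact, while the frequency integral converges as the grid is refined and the truncation frequency grows.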
which, by Parseval's Theorem, equals

  tr E{ z(t) z(t)^H } = (1/2 pi) int_{-inf}^{inf} tr[ F(jw) F(jw)^H ] dw = ||F||2^2 = ||Fl(P,K)||2^2        (9.32)

Thus, by minimizing the H2 norm, the output (or error) power of the generalized system, due to a unit intensity white noise input, is minimized; we are minimizing the root-mean-square (rms) value of z.

9.3.3 LQG: a special H2 optimal controller

An important special case of H2 optimal control is the LQG problem described in subsection 9.2.1. For the stochastic system

  dx/dt = A x + B u + wd
  y = C x + wn        (9.33)

where

  E{ [ wd(t) ; wn(t) ] [ wd(tau)'  wn(tau)' ] } = [ W 0 ; 0 V ] delta(t - tau)        (9.34)

the LQG problem is to find u = K(s) y such that

  J = E{ lim_{T->inf} (1/T) int_0^T [ x'Qx + u'Ru ] dt }        (9.35)

is minimized with Q = Q' >= 0 and R = R' > 0. This problem can be cast as an H2 optimization in the general framework in the following manner. Define an error signal z as

  z = [ Q^{1/2} 0 ; 0 R^{1/2} ] [ x ; u ]        (9.36)

and represent the stochastic inputs wd, wn as

  [ wd ; wn ] = [ W^{1/2} 0 ; 0 V^{1/2} ] w        (9.37)

where w is a white noise process of unit intensity. Then the LQG cost function is

  J = E{ lim_{T->inf} (1/T) int_0^T z(t)' z(t) dt } = ||Fl(P,K)||2^2        (9.38)

where

  z(s) = Fl(P, K) w(s)        (9.39)

and the generalized plant P is given by

  P = [ P11 P12 ; P21 P22 ]        (9.40)

  P = [ A       | W^{1/2}  0       | B       ]
      [ Q^{1/2} | 0        0       | 0       ]
      [ 0       | 0        0       | R^{1/2} ]
      [ C       | 0        V^{1/2} | 0       ]        (9.41)

The above formulation of the LQG problem is illustrated in the general setting in Figure 9.9. With the standard assumptions for the LQG problem, application of the general H2 formulas (Doyle et al., 1989) to this formulation gives the familiar LQG optimal controller as in (9.17).

[Figure 9.9: The LQG problem formulated in the general control configuration: the unit-intensity white noise w enters through W^{1/2} and V^{1/2}, the weighted states and inputs Q^{1/2}x and R^{1/2}u form z, and K feeds back the measurement y.]

9.3.4 Hinf optimal control

With reference to the general control configuration of Figure 9.8, the standard Hinf optimal control problem is to find all stabilizing controllers K which minimize

  ||Fl(P, K)||inf = max_w sigma_max( Fl(P, K)(jw) )        (9.42)

As discussed in Section 4.10.2, the Hinf norm has several interpretations in terms of performance. One is that it minimizes the peak of the maximum singular value of Fl(P(jw), K(jw)). It also has a time domain interpretation as the induced (worst-case) 2-norm. Let z = Fl(P, K) w; then

  ||Fl(P, K)||inf = max_{w(t) != 0} ||z(t)||2 / ||w(t)||2        (9.43)

where ||z(t)||2 = sqrt( int_0^inf sum_i |zi(t)|^2 dt ) is the 2-norm of the vector signal.
In practice, it is usually not necessary to obtain an optimal controller for the Hinf problem, and it is often computationally (and theoretically) simpler to design a suboptimal one (i.e. one close to the optimal ones in the sense of the Hinf norm). Let gamma_min be the minimum value of ||Fl(P,K)||inf over all stabilizing controllers K. Then the Hinf suboptimal control problem is: given a gamma > gamma_min, find all stabilizing controllers K such that ||Fl(P,K)||inf < gamma. This can be solved efficiently using the algorithm of Doyle et al. (1989), and by reducing gamma iteratively, an optimal solution is approached. The algorithm is summarized below with all the simplifying assumptions.

General Hinf algorithm. For the general control configuration of Figure 9.8 described by equations (9.24)-(9.26), with assumptions (A1) to (A8) in Section 9.3.1, there exists a stabilizing controller K(s) such that ||Fl(P,K)||inf < gamma if and only if

(i) Xinf >= 0 is a solution to the algebraic Riccati equation

  A'Xinf + Xinf A + C1'C1 + Xinf ( gamma^-2 B1 B1' - B2 B2' ) Xinf = 0        (9.44)

such that Re lambda_i[ A + ( gamma^-2 B1 B1' - B2 B2' ) Xinf ] < 0 for all i; and

(ii) Yinf >= 0 is a solution to the algebraic Riccati equation

  A Yinf + Yinf A' + B1 B1' + Yinf ( gamma^-2 C1'C1 - C2'C2 ) Yinf = 0        (9.45)

such that Re lambda_i[ A + Yinf ( gamma^-2 C1'C1 - C2'C2 ) ] < 0 for all i; and

(iii) rho( Xinf Yinf ) < gamma^2        (9.46)

All such controllers are then given by K = Fl(Kc, Q) where

  Kc(s) = [ Ainf | -Zinf Linf  Zinf B2 ; Finf | 0  I ; -C2 | I  0 ]        (9.47)

  Finf = -B2' Xinf,   Linf = -Yinf C2',   Zinf = ( I - gamma^-2 Yinf Xinf )^-1,
  Ainf = A + gamma^-2 B1 B1' Xinf + B2 Finf + Zinf Linf C2        (9.48)

and Q(s) is any stable proper transfer function such that ||Q||inf < gamma. For Q(s) = 0, we get

  K(s) = Kc11(s) = -Finf ( sI - Ainf )^-1 Zinf Linf        (9.49)

This is called the "central" controller and has the same number of states as the generalized plant P(s). The central controller can be separated into a state estimator (observer) of the form

  dxhat/dt = A xhat + B1 what_worst + B2 u + Zinf Linf ( C2 xhat - y ),   what_worst = gamma^-2 B1' Xinf xhat        (9.50)
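The iterative reduction of gamma mentioned above is, at heart, a bisection around a feasibility test. The sketch below (plain Python) keeps the bisection skeleton but replaces the Riccati/spectral-radius feasibility test by an illustrative stand-in — gamma is "feasible" exactly when it exceeds the Hinf norm of a fixed scalar transfer function — so only the iteration logic, not the controller synthesis, is being demonstrated:

```python
def gain(w):
    """|G(jw)| for the illustrative resonant system G(s) = 4/(s^2 + s + 4)."""
    s = 1j * w
    return abs(4.0 / (s * s + s + 4.0))

def hinf_norm(n=20000, wmax=100.0):
    """Peak gain over a frequency grid (stand-in for an exact norm computation)."""
    return max(gain(k * wmax / n) for k in range(1, n + 1))

def gamma_iteration(lo=0.0, hi=10.0, tol=1e-4):
    """Bisect until hi - lo < tol; 'hi' always remains a feasible gamma."""
    peak = hinf_norm()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid > peak:        # stand-in feasibility test: a controller "exists"
            hi = mid
        else:
            lo = mid
    return hi

gamma_min = gamma_iteration()
```

The returned gamma_min converges to just above the peak gain (about 2.07 for this resonant example, well above the DC gain of 1) — exactly the behaviour of the real gamma-iteration, where the feasibility test is conditions (i)-(iii) above.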
and a state feedback

   u = F∞ x̂                                                                 (9.51)

Upon comparing the observer in (9.50) with the Kalman filter in (9.17) we see that it contains an additional term B1 ŵworst, where ŵworst can be interpreted as an estimate of the worst-case disturbance (exogenous input). Note that for the special case of H∞ loop shaping this extra term is not present. This is discussed in Section 9.4.

γ-iteration. The above result provides a test for each value of γ to determine whether it is less than γmin or greater than γmin. Thus, if we desire a controller that achieves γmin, to within a specified tolerance, then we can perform a bisection on γ until its value is sufficiently accurate. Given all the assumptions (A1) to (A8), the above is the most simple form of the general H∞ algorithm. For the more general situation, where some of the assumptions are relaxed, the reader is referred to the original source (Glover and Doyle, 1988). In practice, we would expect a user to have access to commercial software such as MATLAB and its toolboxes.

A difficulty that sometimes arises with H∞ control is the selection of weights such that the H∞ optimal controller provides a good trade-off between conflicting objectives in various frequency ranges. Thus, for practical designs it is sometimes recommended to perform only a few iterations of the H∞ algorithm. The justification for this is that the initial design, after one iteration, is similar to an H2 design which does trade off over various frequency ranges. Therefore stopping the iterations before the optimal value is achieved gives the design an H2 flavour which may be desirable.

9.3.5 Mixed-sensitivity H∞ control

Mixed-sensitivity is the name given to transfer function shaping problems in which the sensitivity function S = (I + GK)^{-1} is shaped along with one or more other closed-loop transfer functions such as KS or the complementary sensitivity function T = I − S. In Section 2.7, by examining a typical one degree-of-freedom configuration, we saw quite clearly the importance of S, T, and KS.

Earlier in this chapter, we distinguished between two methodologies for H∞ controller design: the transfer function shaping approach and the signal-based approach. In the former, H∞ optimization is used to shape the singular values of specified closed-loop transfer functions over frequency. The maximum singular values are relatively easy to shape by forcing them to lie below user-defined bounds, thereby ensuring desirable bandwidths and roll-off rates. In the signal-based approach, we seek to minimize the energy in certain error signals given a set of exogenous input signals. The latter might include the outputs of perturbations representing uncertainty, as well as the usual disturbances, noise, and command signals. Both of these two approaches will be considered again in the remainder of this section. In each case we will examine a particular problem and formulate it in the general control configuration.
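The γ-iteration described above is a plain bisection on the scalar γ. As a sketch (my own illustration, not code from the text), the loop below assumes a feasibility test `feasible(g)` that returns True exactly when a stabilizing K with ||F_l(P,K)||∞ < g exists, i.e. when conditions (i)-(iii) of the algorithm hold; here the test is mocked by a simple threshold so the logic can be exercised:

```python
def bisect_gamma(feasible, gamma_lo, gamma_hi, tol=1e-4):
    """Bisect on gamma: feasible(g) is True iff a stabilizing K with
    ||F_l(P,K)||_inf < g exists (conditions (i)-(iii) of the algorithm)."""
    assert feasible(gamma_hi) and not feasible(gamma_lo)
    while gamma_hi - gamma_lo > tol:
        g = 0.5 * (gamma_lo + gamma_hi)
        if feasible(g):
            gamma_hi = g      # g > gamma_min: shrink the upper bound
        else:
            gamma_lo = g      # g < gamma_min: raise the lower bound
    return gamma_hi           # suboptimal gamma, within tol of gamma_min

# Mock feasibility test whose true gamma_min is 2.5 (illustration only):
g = bisect_gamma(lambda g: g > 2.5, 1.0, 10.0)
```

Each feasibility call in a real implementation would solve the two Riccati equations (9.44)-(9.45) and check the spectral radius condition (iii); stopping the bisection early is what gives the "H2 flavour" mentioned above.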
Suppose, therefore, that we have a regulation problem in which we want to reject a disturbance d entering at the plant output, and it is assumed that the measurement noise is relatively insignificant. Tracking is not an issue, and therefore for this problem it makes sense to shape the closed-loop transfer functions S and KS in a one degree-of-freedom setting. Recall that S is the transfer function between the disturbance and the output, and KS the transfer function between the disturbance and the control signals. It is important to include KS as a mechanism for limiting the size and bandwidth of the controller, and hence the control energy used. The size of KS is also important for robust stability with respect to uncertainty modelled as additive plant perturbations. In the presence of a non-minimum phase zero, the stability requirement will indirectly limit the controller gains.

The disturbance d is typically a low frequency signal, and therefore it will be successfully rejected if the maximum singular value of S is made small over the same low frequencies. To do this we could select a scalar low pass filter w1(s) with a bandwidth equal to that of the disturbance, and then find a stabilizing controller that minimizes ||w1 S||∞. This cost function alone is not very practical. It focuses on just one closed-loop transfer function and, for plants without right-half plane zeros, the optimal controller has infinite gains. It is far more useful in practice to minimize

   || [ w1 S ;  w2 KS ] ||∞                                                 (9.52)

where w2(s) is a scalar high pass filter with a crossover frequency approximately equal to that of the desired closed-loop bandwidth.

In general, the scalar weighting functions w1(s) and w2(s) can be replaced by matrices W1(s) and W2(s). This can be useful for systems with channels of quite different bandwidths, in which case diagonal weights are recommended, but anything more complicated is usually not worth the effort.

Remark. Note that we have here outlined an alternative way of selecting the weights from that used earlier. There W1 = WP was selected with a crossover frequency equal to that of the desired closed-loop bandwidth and W2 = Wu was selected as a constant, usually Wu = I.

To see how this mixed-sensitivity problem can be formulated in the general setting, we can imagine the disturbance d as a single exogenous input and define an error signal z = [z1^T  z2^T]^T, where z1 = W1 y and z2 = −W2 u, as illustrated in Figure 9.10. It is not difficult from Figure 9.10 to show that z1 = W1 S w and z2 = W2 KS w as required, and to determine the elements of the generalized plant P as

   P11 = [ W1 ; 0 ]      P12 = [ W1 G ; −W2 ]
   P21 = −I              P22 = −G                                           (9.53)
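The stacked cost (9.52) is easy to evaluate on a frequency grid. The sketch below uses illustrative SISO data of my own (not from the text): plant G = 1/(s+1), constant controller K = 10, and constant weights w1 = 1, w2 = 0.1. For a SISO loop the stacked transfer function has a single column, so its maximum singular value at each frequency is just the Euclidean norm of [w1 S; w2 KS]:

```python
import numpy as np

# Illustrative data (assumptions, not from the book):
# G = 1/(s+1), K = 10, weights w1 = 1, w2 = 0.1.
w = np.logspace(-2, 4, 2000)
s = 1j * w
G = 1.0 / (s + 1.0)
K = 10.0
S = 1.0 / (1.0 + G * K)           # sensitivity
KS = K * S                        # controller times sensitivity

# Stacked cost (9.52) at each frequency: Euclidean norm of [w1*S; w2*KS]
cost = np.sqrt(np.abs(1.0 * S) ** 2 + np.abs(0.1 * KS) ** 2)
hinf = cost.max()                 # grid estimate of the H-infinity norm
```

For this data the cost increases with frequency towards sqrt(1 + (0.1·10)²) = √2 ≈ 1.414, so the grid maximum sits just below √2; it is this peak that the mixed-sensitivity controller synthesis minimizes over all stabilizing K.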
[Figure 9.10: S/KS mixed-sensitivity optimization in standard form (regulation)]

where the partitioning is such that

   [ z1 ; z2 ; v ] = [ P11  P12 ; P21  P22 ] [ w ; u ]                      (9.54)

and

   F_l(P, K) = [ W1 S ; W2 KS ]                                             (9.55)

Another interpretation can be put on the S/KS mixed-sensitivity optimization as shown in the standard control configuration of Figure 9.11. Here we consider a tracking problem. The exogenous input is a reference command r, and the error signals are z1 = −W1 e = W1 (r − y) and z2 = W2 u. As in the regulation problem of Figure 9.10, we have in this tracking problem z1 = W1 S w and z2 = W2 KS w.

An example of the use of S/KS mixed sensitivity minimization is given in Chapter 12, where it is used to design a rotorcraft control law. In this helicopter problem, you will see that the exogenous input w is passed through a weight W3 before it impinges on the system. W3 is chosen to weight the input signal and not directly to shape S or KS. This signal-based approach to weight selection is the topic of the next subsection.

Another useful mixed sensitivity optimization problem, again in a one degree-of-freedom setting, is to find a stabilizing controller which minimizes

   || [ W1 S ; W2 T ] ||∞                                                   (9.56)
[Figure 9.11: S/KS mixed-sensitivity minimization in standard form (tracking)]

The ability to shape T is desirable for tracking problems and noise attenuation. It is also important for robust stability with respect to multiplicative perturbations at the plant output. The S/T mixed-sensitivity minimization problem can be put into the standard control configuration as shown in Figure 9.12.

[Figure 9.12: S/T mixed-sensitivity optimization in standard form]

The elements of the corresponding generalized plant P are

   P11 = [ W1 ; 0 ]      P12 = [ −W1 G ; W2 G ]
   P21 = I               P22 = −G                                           (9.57)
Exercise 9.5  For the cost function

   || [ W1 S ; W2 T ; W3 KS ] ||∞                                           (9.58)

formulate a standard problem, draw the corresponding control configuration and give expressions for the generalized plant P.

The shaping of closed-loop transfer functions as described above with the "stacked" cost functions becomes difficult with more than two functions. With two, the process is relatively easy. The bandwidth requirements on each are usually complementary, and simple, stable, low-pass and high-pass filters are sufficient to carry out the required shaping and trade-offs. We stress that the weights Wi in mixed-sensitivity H∞ optimal control must all be stable. If they are not, assumption (A1) in Section 9.3.1 is not satisfied, and the general H∞ algorithm is not applicable. Therefore if we wish, for example, to emphasize the minimization of S at low frequencies by weighting with a term including integral action, we would have to approximate 1/s by 1/(s + ε), where ε << 1. This is exactly what was done in the example designs of Chapter 2. Similarly, one might be interested in weighting KS with a non-proper weight to ensure that K is small outside the system bandwidth. But the standard assumptions preclude such a weight. The trick here is to replace a non-proper term such as (1 + τ1 s) by (1 + τ1 s)/(1 + τ2 s), where τ2 << τ1. A useful discussion of the tricks involved in using "unstable" and "non-proper" weights in H∞ control can be found in Meinsma (1995).

For more complex problems, a great deal of information might be given about several exogenous signals, in addition to a variety of signals to be minimized and classes of plant perturbations to be robust against. In such cases, the mixed-sensitivity approach is not general enough and we are forced to look at more advanced techniques such as the signal-based approach considered next.

9.3.6 Signal-based H∞ control

The signal-based approach to controller design is very general and is appropriate for multivariable problems in which several objectives must be taken into account simultaneously. In this approach, we define the plant and possibly the model uncertainty, we define the class of external signals affecting the system, and we define the norm of the error signals we want to keep small. The focus of attention has moved to the size of signals and away from the size and bandwidth of selected closed-loop transfer functions. Weights are used to describe the expected or known frequency content of exogenous signals and the desired frequency content of error signals. Weights are also used if a perturbation is used to model uncertainty, as in Figure 9.13.
Here G denotes the nominal model, W is a weighting function that captures the relative model fidelity over frequency, and Δ represents unmodelled dynamics, usually normalized via W so that ||Δ||∞ < 1; see Chapter 8 for more details.

[Figure 9.13: Multiplicative dynamic uncertainty model]

LQG control is a simple example of the signal-based approach, in which the exogenous signals are assumed to be stochastic (or alternatively impulses in a deterministic setting) and the error signals are measured in terms of the 2-norm. As we have already seen, the weights Q and R are constant, but LQG can be generalized to include frequency dependent weights on the signals, leading to what is sometimes called Wiener-Hopf design, or simply H2 control.

When we consider a system's response to persistent sinusoidal signals of varying frequency, or when we consider the induced 2-norm between the exogenous input signals and the error signals, we are required to minimize the H∞ norm. In the absence of model uncertainty, there does not appear to be an overwhelming case for using the H∞ norm rather than the more traditional H2 norm. However, when uncertainty is addressed, as it always should be, H∞ is clearly the more natural approach using component uncertainty models as in Figure 9.13. As in mixed-sensitivity H∞ control, the weights in signal-based H∞ control need to be stable and proper for the general H∞ algorithm to be applicable.

A typical problem using the signal-based approach to H∞ control is illustrated in the interconnection diagram of Figure 9.14.

[Figure 9.14: A signal-based H∞ control problem]
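The contrast between the H2 and H∞ norms mentioned above can be made concrete on a first-order example of my own (not from the text). For G(s) = 1/(s+1), the H∞ norm is the peak gain, ||G||∞ = 1 (attained at ω = 0), while the H2 norm is ||G||2 = (1/2π ∫ |G(jω)|² dω)^{1/2} = 1/√2:

```python
import numpy as np

# G(s) = 1/(s+1): compare the H2 and H-infinity norms numerically.
w = np.linspace(0.0, 1000.0, 1_000_001)    # grid on [0, 1000] rad/s
G2 = 1.0 / (1.0 + w ** 2)                  # |G(jw)|^2

hinf = np.sqrt(G2.max())                   # peak gain -> H-infinity norm
# ||G||_2^2 = (1/2pi) * int over (-inf,inf) = (1/pi) * int over [0,inf);
# trapezoidal rule on the truncated interval:
dw = np.diff(w)
h2 = np.sqrt((0.5 * (G2[:-1] + G2[1:]) * dw).sum() / np.pi)
```

The grid estimate of the H2 norm is accurate to about three decimals here (the tail beyond 1000 rad/s contributes negligibly); the two norms measure quite different things, which is why the choice between them hinges on the uncertainty description rather than on nominal performance alone.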
G and Gd are nominal models of the plant and disturbance dynamics, and K is the controller to be designed. The weights Wd, Wr and Wn may be constant or dynamic and describe the relative importance and/or frequency content of the disturbances, set points, and noise signals. The weight Wref is a desired closed-loop transfer function between the weighted set point rs and the actual output y. The weights We and Wu reflect the desired frequency content of the error (y − yref) and the control signals u, respectively. The problem can be cast as a standard H∞ optimization in the general control configuration by defining

   w = [ d ; r ; n ]      z = [ z1 ; z2 ]
   v = [ rs ; ym ]        u = u                                             (9.59)

in the general setting of Figure 9.8.

Suppose we now introduce a multiplicative dynamic uncertainty model at the input to the plant as shown in Figure 9.15. The problem we now want to solve is: find a stabilizing controller K such that the H∞ norm of the transfer function between w and z is less than 1 for all Δ, where ||Δ||∞ < 1. We have assumed in this statement that the signal weights have normalized the 2-norm of the exogenous input signals to unity. This problem is a non-standard H∞ optimization. It is a robust performance problem for which the μ-synthesis procedure, outlined in Chapter 8, can be applied. Mathematically, we require the structured singular value

   μ(M(jω)) < 1,  ∀ω                                                        (9.60)

[Figure 9.15: An H∞ robust performance problem]
where M is the transfer function matrix between

   [ d ; r ; n ; uΔ ]   and   [ z1 ; z2 ; yΔ ]                              (9.61)

and the associated block diagonal perturbation has 2 blocks: a fictitious performance block between [d^T r^T n^T]^T and [z1^T z2^T]^T, and an uncertainty block Δ between uΔ and yΔ.

Whilst the structured singular value is a useful analysis tool for assessing designs, μ-synthesis is sometimes difficult to use and often too complex for the practical problem at hand. In its full generality, the μ-synthesis problem is not yet solved mathematically; where solutions exist the controllers tend to be of very high order; the algorithms may not always converge; and design problems are sometimes difficult to formulate directly.

For many industrial control problems, a design procedure is required which offers more flexibility than mixed-sensitivity H∞ control, but is not as complicated as μ-synthesis. For simplicity, it should be based on classical loop-shaping ideas and it should not be limited in its applications like LTR procedures. In the next section, we present such a controller design procedure.

9.4 H∞ loop-shaping design

The loop-shaping design procedure described in this section is based on H∞ robust stabilization combined with classical loop shaping, as proposed by McFarlane and Glover (1990). It is essentially a two stage design process. First, the open-loop plant is augmented by pre- and post-compensators to give a desired shape to the singular values of the open-loop frequency response. Then the resulting shaped plant is robustly stabilized with respect to coprime factor uncertainty using H∞ optimization. An important advantage is that no problem-dependent uncertainty modelling, or weight selection, is required in this second step.

We will begin the section with a description of the H∞ robust stabilization problem (Glover and McFarlane, 1989). This is a particularly nice problem because it does not require γ-iteration for its solution, and explicit formulas for the corresponding controllers are available. The formulas are relatively simple and so will be presented in full. Following this, a step by step procedure for H∞ loop-shaping design is presented. This systematic procedure has its origin in the Ph.D. thesis of Hyde (1991) and has since been successfully applied to several industrial problems. The procedure synthesizes what is in effect a single degree-of-freedom controller. This can be a limitation if there are stringent requirements on command following.
However, the procedure can be extended by introducing a second degree-of-freedom in the controller and formulating a standard H∞ optimization problem which allows one to trade off robust stabilization against closed-loop model-matching, as shown by Limebeer et al. (1993). We will describe this two degrees-of-freedom extension and further show that such controllers have a special observer-based structure which can be taken advantage of in controller implementation.

9.4.1 Robust stabilization

For multivariable systems, classical gain and phase margins are unreliable indicators of robust stability when defined for each channel (or loop), taken one at a time, because simultaneous perturbations in more than one loop are not then catered for. It is now common practice, as seen in Chapter 8, to model uncertainty by stable norm-bounded dynamic (complex) matrix perturbations. With a single perturbation, the associated robustness test is in terms of the maximum singular value of a closed-loop transfer function. Use of a single stable perturbation, however, restricts the plant and perturbed plant models to either have the same number of unstable poles or the same number of unstable (RHP) zeros. More general perturbations are required to capture the uncertainty, but even these are limited.

To overcome this, two stable perturbations can be used, one on each of the factors in a coprime factorization of the plant. Although this uncertainty description seems unrealistic and less intuitive than the others, it is in fact quite general, and for our purposes it leads to a very useful H∞ robust stabilization problem.

We will consider the stabilization of a plant G which has a normalized left coprime factorization (as discussed in Section 4.1.5)

   G = M^{-1} N                                                             (9.62)

where we have dropped the subscript from M and N for simplicity. A perturbed plant model Gp can then be written as

   Gp = (M + ΔM)^{-1} (N + ΔN)                                              (9.63)

where ΔM, ΔN are stable unknown transfer functions which represent the uncertainty in the nominal plant model G. The objective of robust stabilization is to stabilize not only the nominal model G, but a family of perturbed plants defined by

   Gp = { (M + ΔM)^{-1} (N + ΔN)  :  || [ΔN  ΔM] ||∞ < ε }                  (9.64)
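To make the normalized coprime factorization concrete, consider the first-order example G(s) = 1/(s+1) (my own illustration, not from the text). One can verify by hand that N(s) = 1/(s + √2) and M(s) = (s + 1)/(s + √2) give G = M^{-1} N with the normalization |N(jω)|² + |M(jω)|² = 1 for all ω, which the sketch below checks numerically:

```python
import numpy as np

# G(s) = 1/(s+1) = M(s)^{-1} N(s), with, worked by hand,
#   N(s) = 1/(s + sqrt(2)),  M(s) = (s + 1)/(s + sqrt(2)).
c = np.sqrt(2.0)
w = np.logspace(-3, 3, 500)
s = 1j * w
N = 1.0 / (s + c)
M = (s + 1.0) / (s + c)

G = 1.0 / (s + 1.0)
ratio = N / M                            # M^{-1} N should recover G
norm2 = np.abs(N) ** 2 + np.abs(M) ** 2  # should equal 1 on the jw-axis
```

The common denominator s + √2 is exactly what the Riccati-based construction produces for this plant; perturbing N and M independently, as in (9.63), then perturbs the plant's zeros and poles respectively, which is why this description covers perturbed plants whose number of RHP poles or zeros differs from the nominal.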
[Figure 9.16: H∞ robust stabilization problem]

where ε > 0 is then the stability margin. To maximize this stability margin is the problem of robust stabilization of normalized coprime factor plant descriptions as introduced and solved by Glover and McFarlane (1989).

For the perturbed feedback system of Figure 9.16, the stability property is robust if and only if the nominal feedback system is stable and

   γ := || [ K ; I ] (I − GK)^{-1} M^{-1} ||∞  ≤  1/ε                       (9.65)

Notice that γ is the H∞ norm from φ to [u ; y], and (I − GK)^{-1} is the sensitivity function for this positive feedback arrangement.

The lowest achievable value of γ and the corresponding maximum stability margin ε are given by Glover and McFarlane (1989) as

   γmin = εmax^{-1} = { 1 − || [N  M] ||H² }^{-1/2} = ( 1 + ρ(XZ) )^{1/2}   (9.66)

where ||·||H denotes the Hankel norm, ρ denotes the spectral radius (maximum eigenvalue), and, for a minimal state-space realization (A, B, C, D) of G, Z is the unique positive definite solution to the algebraic Riccati equation

   (A − B S^{-1} D^T C) Z + Z (A − B S^{-1} D^T C)^T
        − Z C^T R^{-1} C Z + B S^{-1} B^T = 0                               (9.67)

where

   R = I + D D^T,    S = I + D^T D

and X is the unique positive definite solution of the following algebraic Riccati equation

   (A − B S^{-1} D^T C)^T X + X (A − B S^{-1} D^T C)
        − X B S^{-1} B^T X + C^T R^{-1} C = 0                               (9.68)
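Equations (9.66)-(9.68) are easy to check numerically. The sketch below (my own illustration, using SciPy's Riccati solver) evaluates γmin for the scalar plant G(s) = 1/(s+1), i.e. A = −1, B = C = 1, D = 0. By hand, both Riccati solutions are √2 − 1, so γmin = (1 + (√2−1)²)^{1/2} = (4 − 2√2)^{1/2} ≈ 1.08:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar plant G(s) = 1/(s+1): A = -1, B = C = 1, D = 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])

S = np.eye(1) + D.T @ D
R = np.eye(1) + D @ D.T
A1 = A - B @ np.linalg.solve(S, D.T @ C)     # A - B S^{-1} D^T C

# X: A1^T X + X A1 - X B S^{-1} B^T X + C^T R^{-1} C = 0   (eq. 9.68)
X = solve_continuous_are(A1, B, C.T @ np.linalg.solve(R, C), S)
# Z: A1 Z + Z A1^T - Z C^T R^{-1} C Z + B S^{-1} B^T = 0   (eq. 9.67, dual)
Z = solve_continuous_are(A1.T, C.T, B @ np.linalg.solve(S, B.T), R)

gamma_min = np.sqrt(1.0 + np.max(np.linalg.eigvals(X @ Z).real))
```

Note how the filter-type equation (9.67) is obtained from the same solver by transposing the data (the dual problem); this is the same "two Riccati equations, no γ-iteration" computation performed by the MATLAB function coprimeunc of Table 9.2.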
Notice that the formulas simplify considerably for a strictly proper plant, i.e. when D = 0.

A controller (the "central" controller in McFarlane and Glover (1990)) which guarantees that

   || [ K ; I ] (I − GK)^{-1} M^{-1} ||∞  ≤  γ                              (9.69)

for a specified γ > γmin, is given by

   K =s [ A + BF + γ² (L^T)^{-1} Z C^T (C + DF)    γ² (L^T)^{-1} Z C^T ]
        [ B^T X                                    −D^T               ]     (9.70)

   F = −S^{-1} (D^T C + B^T X)                                              (9.71)

   L = (1 − γ²) I + X Z                                                     (9.72)

It is important to emphasize that, since we can compute γmin from (9.66), we get an explicit solution by solving just two Riccati equations (aresolv) and avoid the γ-iteration needed to solve the general H∞ problem.

Remark 1  An example of the use of coprimeunc is given in Example 9.3 below.

Remark 2  Notice that, if γ = γmin in (9.70), then L = −ρ(XZ) I + XZ, which is singular, and thus (9.70) cannot be implemented. If for some unusual reason the truly optimal controller is required, then this problem can be resolved using a descriptor system approach, the details of which can be found in Safonov et al. (1989).

Remark 3  Alternatively, from Glover and McFarlane (1989), all controllers achieving γ = γmin are given by K = U V^{-1}, where U and V are stable, (U, V) is a right coprime factorization of K, and U, V satisfy

   || [ N* ; M* ] + [ V ; U ] ||∞ = || [N  M] ||H                           (9.73)

The determination of U and V is a Nehari extension problem: that is, a problem in which an unstable transfer function R(s) is approximated by a stable transfer function Q(s), such that ||R + Q||∞ is minimized, the minimum being ||R*||H. A solution to this problem is given in Glover (1984).

Exercise 9.6  Formulate the H∞ robust stabilization problem in the general control configuration of Figure 9.8, and determine a transfer function expression and a state-space realization for the generalized plant P.

9.4.2 A systematic H∞ loop-shaping design procedure

Robust stabilization alone is not much use in practice because the designer is not able to specify any performance requirements. To do this McFarlane and Glover (1990)
Table 9.2: MATLAB function to generate the H∞ controller in (9.70)

% Uses the Robust Control or Mu toolbox
function [Ac,Bc,Cc,Dc,gammin]=coprimeunc(a,b,c,d,gamrel)
%
% Finds the controller which optimally ''robustifies'' a given shaped plant
% in terms of tolerating maximum coprime uncertainty.
%
% INPUTS:
%   a,b,c,d: State-space description of (shaped) plant.
%   gamrel:  gamma used is gamrel*gammin (typical gamrel=1.1)
%
% OUTPUTS:
%   Ac,Bc,Cc,Dc: "Robustifying" controller (positive feedback).
%
S=eye(size(d'*d))+d'*d;
R=eye(size(d*d'))+d*d';
A1=a-b*inv(S)*d'*c;
R1=b*inv(S)*b'; Q1=c'*inv(R)*c;
[x1,x2,fail,reig_min] = ric_schr([A1 -R1; -Q1 -A1']); X = x2/x1;
[x1,x2,fail,reig_min] = ric_schr([A1' -Q1; -R1 -A1]); Z = x2/x1;
% Alt. Mu toolbox:
% [x1,x2,eig,xerr,wellposed,X] = aresolv(A1,Q1,R1);
% [x1,x2,eig,xerr,wellposed,Z] = aresolv(A1',R1,Q1);
%
% Optimal gamma:
gammin=sqrt(1+max(eig(X*Z)))
% Use higher gamma:
gam = gamrel*gammin;
L=(1-gam*gam)*eye(size(X*Z)) + X*Z;
F=-inv(S)*(d'*c+b'*X);
Ac=a+b*F+gam*gam*inv(L')*Z*c'*(c+d*F);
Bc=gam*gam*inv(L')*Z*c';
Cc=b'*X;
Dc=-d';
proposed pre- and post-compensating the plant to shape the open-loop singular values prior to robust stabilization of the "shaped" plant.

If W1 and W2 are the pre- and post-compensators respectively, then the shaped plant Gs is given by

   Gs = W2 G W1                                                             (9.74)

as shown in Figure 9.17. The controller Ks is synthesized by solving the robust stabilization problem of Section 9.4.1 for the shaped plant Gs with a normalized left coprime factorization Gs = Ms^{-1} Ns. The feedback controller for the plant G is then K = W1 Ks W2. The above procedure contains all the essential ingredients of classical loop shaping, and can easily be implemented using the formulas already presented and reliable algorithms in, for example, MATLAB. We will afterwards present a systematic procedure for selecting the weights W1 and W2, but we first present a simple SISO example.

[Figure 9.17: The shaped plant and controller]

Example 9.3  Glover-McFarlane H∞ loop shaping for the disturbance process. Consider the disturbance process in (2.54), which was studied in detail in Chapter 2:

   G(s) = 200 / ( (10s + 1)(0.05s + 1)² ),    Gd(s) = 100 / (10s + 1)       (9.75)

In Example 2.8 we argued that for acceptable disturbance rejection with minimum input usage, the loop shape ("shaped plant") Gs = G W1 should be similar to |Gd|; here W2 = 1 and we select W1 to get acceptable disturbance rejection. Then after neglecting the high-frequency dynamics in G(s) this yields an initial weight W1 = 0.5. To improve the performance at low frequencies we add integral action, and we also add a phase-advance term s + 2 to reduce the slope of L from −2 at lower frequencies to about −1 at crossover (the gain crossover frequency ωc for the final design should be about 10 rad/s). Finally, to make the response a little faster we multiply the gain by a factor 2 to get the weight

   W1 = (s + 2) / s                                                         (9.76)
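The claimed crossover of the shaped plant can be checked numerically; the sketch below (my own illustration) evaluates |Gs(jω)| = |G(jω) W1(jω)| on a dense grid and locates the frequency where it crosses 1:

```python
import numpy as np

w = np.logspace(-2, 3, 200_000)
s = 1j * w
G = 200.0 / ((10.0 * s + 1.0) * (0.05 * s + 1.0) ** 2)   # plant (9.75)
W1 = (s + 2.0) / s                                        # weight (9.76)
mag = np.abs(G * W1)                                      # shaped loop gain |Gs|

# |Gs| decreases monotonically through 1, so the point nearest unit gain
# is the gain crossover frequency:
wc = w[np.argmin(np.abs(mag - 1.0))]
```

The estimate lands near 13.7 rad/s, the crossover of the shaped plant quoted in the example below; the robust stabilization step then lowers it somewhat, towards the 10 rad/s target.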
This yields a shaped plant Gs = G W1 with a gain crossover frequency of 13.7 rad/s, and the magnitude of Gs(jω) is shown by the dashed line in Figure 9.18(a). However, the response with the "controller" K = W1 is too oscillatory; the response to a unit step in the disturbance is shown by the dashed line in Figure 9.18(b).

We now "robustify" this design so that the shaped plant tolerates as much H∞ coprime factor uncertainty as possible. This may be done with the MATLAB μ toolbox using the command ncfsyn, or with the MATLAB Robust Control toolbox using the function coprimeunc given in Table 9.2:

   [Ac,Bc,Cc,Dc,gammin]=coprimeunc(A,B,C,D,gamrel)

Here the shaped plant Gs = G W1 has state-space matrices A, B, C and D, and the function returns the "robustifying" positive feedback controller Ks with state-space matrices Ac, Bc, Cc and Dc. In general, Ks has the same number of poles (states) as Gs. The returned variable gammin (γmin) is the inverse of the magnitude of coprime uncertainty we can tolerate before we get instability. We want γmin (which is at least 1) as small as possible, and we usually require that γmin is less than 4, corresponding to 25% allowed coprime uncertainty. gamrel is the value of γ relative to γmin, and was in our case selected as 1.1.

By applying this to our example we get γmin ≈ 2.3 and an overall controller K = W1 Ks with 5 states (Gs, and thus Ks, has 4 states, and W1 has 1 state). The corresponding loop shape Gs Ks is shown by the solid line in Figure 9.18(a). We see that the change in the loop shape is small, and we note with interest that the slope around crossover is somewhat gentler. The gain crossover frequency ωc is reduced slightly from 13.7 to 10.3 rad/s. This translates into better margins: the low gain margin (GM) and the phase margin (PM) of only 13.2° for the initial design GW1 are both much improved. The corresponding disturbance response is shown by the solid line in Figure 9.18(b) and is seen to be much improved.

[Figure 9.18: Glover-McFarlane loop-shaping design for the disturbance process. (a) Loop shapes |L(jω)| (frequency [rad/s]); (b) disturbance response (time [sec]). Dashed line: initial "shaped" design; solid line: "robustified" design.]

Remark. The response with the controller K = W1 Ks is quite similar to that of the loop-shaping controller K3(s) designed in Chapter 2 (see curves L3 and y3 in Figure 2.21). The response for reference tracking with controller K = W1 Ks is not shown; as may be expected, it is also very similar to that with K3, but with a slightly smaller overshoot of 21%. To reduce this overshoot we would need to use a two degrees-of-freedom controller.
Exercise 9.7  Design an H∞ loop-shaping controller for the disturbance process in (9.75) using the weight W1 in (9.76), i.e. generate plots corresponding to those in Figure 9.18. Next, repeat the design with W1 = 2(s + 3)/s (which results in an initial GW1 that would yield closed-loop instability with K = W1). Compute the gain and phase margins and use (2.37) to compute the disturbance and reference responses. In both cases find the maximum delay that can be tolerated in the plant before instability arises.

Skill is required in the selection of the weights (pre- and post-compensators W1 and W2), but experience on real applications has shown that robust controllers can be designed with relatively little effort by following a few simple rules. An excellent illustration of this is given in the thesis of Hyde (1991), who worked with Glover on the robust control of VSTOL (vertical and/or short take-off and landing) aircraft. Their work culminated in a successful flight test of H∞ loop-shaping control laws implemented on a Harrier jump-jet research vehicle at the UK Defence Research Agency (DRA), Bedford, in 1993. The H∞ loop-shaping procedure has also been extensively studied and worked on by Postlethwaite and Walker (1992) in their work on advanced control of high performance helicopters. More recently, H∞ loop-shaping has been tested in flight on a Bell 205 fly-by-wire helicopter, also for the UK DRA at Bedford; see Postlethwaite et al. (1999) and Smerlas et al. (2001). This application is discussed in detail in the helicopter case study of Chapter 12.

Based on these, and other studies, it is recommended that the following systematic procedure is followed when using H∞ loop-shaping design:

1. Scale the plant outputs and inputs. This is very important for most design procedures and is sometimes forgotten. In general, scaling improves the conditioning of the design problem, it enables meaningful analysis to be made of the robustness properties of the feedback system in the frequency domain, and for loop-shaping it can simplify the selection of weights. There are a variety of methods available, including normalization with respect to the magnitude of the maximum or average value of the signal in question. Scaling with respect to maximum values is important if the controllability analysis of earlier chapters is to be used. However, if one is to go straight to a design the following variation has proved useful in practice:

   (a) The outputs are scaled such that equal magnitudes of cross-coupling into each of the outputs is equally undesirable.
   (b) Each input is scaled by a given percentage (say 10%) of its expected range of operation. That is, the inputs are scaled to reflect the relative actuator capabilities. An example of this type of scaling is given in the aero-engine case study of Chapter 12.
2. Order the inputs and outputs so that the plant is as diagonal as possible. The relative gain array can be useful here. The purpose of this pseudo-diagonalization is to ease the design of the pre- and post-compensators which, for simplicity, will be chosen to be diagonal.

Next, we discuss the selection of weights to obtain the shaped plant

   Gs = W2 G W1,   where  W1 = Wp Wa Wg                                     (9.77)

3. Select the elements of diagonal pre- and post-compensators Wp and W2 so that the singular values of W2 G Wp are desirable. This would normally mean high gain at low frequencies, roll-off rates of approximately 20 dB/decade (a slope of about −1) at the desired bandwidth(s), with higher rates at high frequencies. Some trial and error is involved here. W2 is usually chosen as a constant, reflecting the relative importance of the outputs to be controlled and the other measurements being fed back to the controller. For example, if there are feedback measurements of two outputs to be controlled and a velocity signal, then W2 might be chosen to be diag{1, 1, 0.1}, where 0.1 is in the velocity signal channel. Wp contains the dynamic shaping. Integral action, for low frequency performance; phase-advance for reducing the roll-off rates at crossover; and phase-lag to increase the roll-off rates at high frequencies should all be placed in Wp if desired. The weights should be chosen so that no unstable hidden modes are created in Gs.

4. Optional: Align the singular values at a desired bandwidth using a further constant weight Wa cascaded with Wp. This is effectively a constant decoupler and should not be used if the plant is ill-conditioned in terms of large RGA elements (see Chapter 6). The align algorithm of Kouvaritakis (1974), which has been implemented in the MATLAB Multivariable Frequency-Domain Toolbox, is recommended.

5. Optional: Introduce an additional gain matrix Wg cascaded with Wa to provide control over actuator usage. Wg is diagonal and is adjusted so that actuator rate limits are not exceeded for reference demands and typical disturbances on the scaled plant outputs. This requires some trial and error.

6. Robustly stabilize the shaped plant Gs = W2 G W1, where W1 = Wp Wa Wg, using the formulas of the previous section. First, calculate the maximum stability margin εmax = 1/γmin. If the margin is too small, εmax < 0.25, then go back to step 4 and modify the weights. Otherwise, select γ > γmin, by about 10%, and synthesize a suboptimal controller using equation (9.70). There is usually no advantage to be gained by using the optimal controller. When εmax > 0.25 (respectively γmin < 4) the design is usually successful. In this case, at least 25% coprime factor uncertainty is allowed, and we also find that the shape of the open-loop singular values will not have changed much after robust stabilization. A small value of εmax indicates that the chosen singular value loop-shapes are incompatible with robust stability requirements.
7. Analyze the design and, if all the specifications are not met, make further modifications to the weights.

8. Implement the controller. The configuration shown in Figure 9.19 has been found useful when compared with the conventional set-up in Figure 9.1. This is because the references do not directly excite the dynamics of Ks, which can result in large amounts of overshoot (classical derivative kick). The constant prefilter ensures a steady-state gain of 1 between r and y, assuming integral action in W1 or G.

[Figure 9.19: A practical implementation of the loop-shaping controller: the reference r enters through the constant prefilter Ks(0)W2(0), the loop is closed around Ks and W1, and W2 acts on the measured output y]

We will conclude this subsection with a summary of the advantages offered by the above H-infinity loop-shaping design procedure:

- It is relatively easy to use, being based on classical loop-shaping ideas.
- There exists a closed formula for the H-infinity optimal cost gmin, which in turn corresponds to a maximum stability margin emax = 1/gmin.
- No gamma-iteration is required in the solution.
- Except for special systems, ones with all-pass factors, there are no pole-zero cancellations between the plant and controller (Sefton and Glover, 1990; Tsai et al., 1992). Pole-zero cancellations are common in many H-infinity control problems and are a problem when the plant has lightly damped modes.

That the loop-shapes do not change much following robust stabilization if gamma is small (epsilon large) is justified theoretically in McFarlane and Glover (1990).

It has recently been shown (Glover et al., 2000) that the stability margin emax = 1/gmin, here defined in terms of coprime factor perturbations, can be interpreted in terms of simultaneous gain and phase margins in all the plant's inputs and outputs, when the H-infinity loop-shaping weights W1 and W2 are diagonal. The derivation of these margins is based on the gap metric (Georgiou and Smith, 1990) and the nu-gap metric (Vinnicombe, 1993) measures for uncertainty. A discussion of these measures lies outside the scope of this book, but the interested reader is referred to the excellent book on the subject by Vinnicombe (2001) and the paper by Glover et al. (2000).

Whilst it is highly desirable, from a computational point of view, to have exact solutions for H-infinity optimization problems, such problems are rare. We are fortunate that the above robust stabilization problem is also one of great practical significance.

Exercise 9.8 First a definition and some useful properties. Definition: A stable transfer function matrix H(s) is inner if H~H = I, and co-inner if HH~ = I, where the operator H~ is defined as H~(s) = H^T(-s).
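The definition is easy to check numerically on a frequency grid. A small sketch of our own, evaluating H~ as the conjugate transpose on the imaginary axis (valid for real-coefficient H):

```python
import numpy as np

def is_inner(H, ws, tol=1e-9):
    """Check H~(jw) H(jw) = I on a frequency grid, where H is a callable
    returning the frequency-response matrix at a complex point s."""
    for w in ws:
        Hw = H(1j * w)
        if not np.allclose(Hw.conj().T @ Hw, np.eye(Hw.shape[1]), atol=tol):
            return False
    return True

# The scalar all-pass H(s) = (s - 1)/(s + 1) is inner: |H(jw)| = 1 for all w
allpass = lambda s: np.array([[(s - 1) / (s + 1)]])
print(is_inner(allpass, [0.1, 1.0, 10.0]))                  # True
print(is_inner(lambda s: np.array([[1 / (s + 1)]]), [1.0])) # False (strictly proper)
```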
Properties: The H-infinity norm is invariant under right multiplication by a co-inner function and under left multiplication by an inner function.

Equipped with the above definition and properties, show for the shaped plant Gs = Ms^{-1} Ns that the matrix [Ns  Ms] is co-inner, and hence that the H-infinity loop-shaping cost function

    || [Ks ; I] (I - Gs Ks)^{-1} Ms^{-1} ||_inf        (9.78)

is equivalent to

    || [Ks Ss   Ks Ss Gs ; Ss   Ss Gs] ||_inf        (9.79)

where Ss = (I - Gs Ks)^{-1}. This shows that the problem of finding a stabilizing controller to minimize the 4-block cost function (9.79) has an exact solution.

9.4.3 Two degrees-of-freedom controllers

Many control design problems possess two degrees-of-freedom: on the one hand, measurement or feedback signals, and on the other, commands or references. Sometimes, one degree-of-freedom is left out of the design, and the controller is driven (for example) by an error signal, i.e. the difference between a command and the output. But in cases where stringent time-domain specifications are set on the output response, a one degree-of-freedom structure may not be sufficient. A general two degrees-of-freedom feedback control scheme is depicted in Figure 9.20. The commands and feedbacks enter the controller separately and are independently processed.

[Figure 9.20: General two degrees-of-freedom feedback control scheme]

The H-infinity loop-shaping design procedure of McFarlane and Glover is a one degree-of-freedom design, although as we showed in Figure 9.19 a simple constant prefilter can easily be implemented for steady-state accuracy. For many tracking problems, however, this will not be sufficient and a dynamic two degrees-of-freedom design is often required.
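The simple constant prefilter mentioned above amounts to inverting a steady-state gain, G(0) = D - C A^{-1} B for a state-space model. A minimal numpy sketch with made-up closed-loop matrices; only the DC-gain formula is standard, the numbers are ours:

```python
import numpy as np

def dc_gain(A, B, C, D):
    """Steady-state gain G(0) = D - C A^{-1} B of a state-space model
    (A must have no eigenvalue at the origin)."""
    return D - C @ np.linalg.solve(A, B)

# Illustrative stable closed-loop model from reference to output
A = np.array([[-2.0, 0.0], [1.0, -3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.eye(2)
D = np.zeros((2, 2))

T0 = dc_gain(A, B, C, D)
prefilter = np.linalg.inv(T0)   # constant prefilter giving unity gain at s = 0
print(np.allclose(dc_gain(A, B @ prefilter, C, D), np.eye(2)))  # True
```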
In Hoyle et al. (1991) and Limebeer et al. (1993) a two degrees-of-freedom extension of the Glover-McFarlane procedure was proposed to enhance the model-matching properties of the closed-loop. With this approach the feedback part of the controller is designed to meet robust stability and disturbance rejection requirements in a manner similar to the one degree-of-freedom loop-shaping design procedure, except that only a pre-compensator weight W is used. An additional prefilter part of the controller is then introduced to force the response of the closed-loop system to follow that of a specified model Tref, often called the reference model. It is assumed that the measured outputs and the outputs to be controlled are the same, although this assumption can be removed as shown later.

The design problem is to find the stabilizing controller K = [K1  K2] for the shaped plant Gs = G W1, with a normalized coprime factorization Gs = Ms^{-1} Ns, which minimizes the H-infinity norm of the transfer function between the signals [r^T  phi^T]^T and [us^T  ys^T  e^T]^T as defined in Figure 9.21. The problem is easily cast into the general control configuration and solved sub-optimally using standard methods and gamma-iteration. The control signal to the shaped plant, us, is given by

    us = [K1  K2] [beta ; y]        (9.80)

where K1 is the prefilter, K2 is the feedback controller, beta is the scaled reference, and y is the measured output. The purpose of the prefilter is to ensure that

    || (I - Gs K2)^{-1} Gs K1 - Tref ||_inf <= gamma rho^{-2}        (9.81)

Here Tref is the desired closed-loop transfer function selected by the designer to introduce time-domain specifications (desired response characteristics) into the design process, and rho is a scalar parameter that the designer can increase to place more emphasis on model-matching in the optimization at the expense of robustness. We will show this later. Both parts of the controller are synthesized by solving the design problem illustrated in Figure 9.21.

[Figure 9.21: Two degrees-of-freedom H-infinity loop-shaping design problem, with coprime factor perturbations Delta_Ns, Delta_Ms on the shaped plant and the model-matching error formed against the reference model Tref]
From Figure 9.21 and a little bit of algebra, we have that

    [us]   [ (I - K2 Gs)^{-1} K1                        K2 (I - Gs K2)^{-1} Ms^{-1} ]   [beta]
    [ys] = [ (I - Gs K2)^{-1} Gs K1                     (I - Gs K2)^{-1} Ms^{-1}    ] . [phi ]        (9.82)
    [e ]   [ rho^2 [(I - Gs K2)^{-1} Gs K1 - Tref]      rho (I - Gs K2)^{-1} Ms^{-1}]

In the optimization, the H-infinity norm of this block matrix transfer function is minimized. Notice that the (1,2) and (2,2) blocks taken together are associated with robust stabilization, and the (3,1) block corresponds to model-matching. In addition, the (1,1) and (2,1) blocks help to limit actuator usage, and the (3,2) block is linked to the performance of the loop. For rho = 0, the problem reverts to minimizing the H-infinity norm of the transfer function between phi and [us^T  ys^T]^T, namely the robust stabilization problem, and the two degrees-of-freedom controller reduces to an ordinary H-infinity loop-shaping controller. To put the two degrees-of-freedom design problem into the standard control configuration, we can define a generalized plant P, partitioned so that

    [us ; ys ; e ; beta ; y] = [P11  P12 ; P21  P22] [r ; phi ; us]        (9.83)-(9.84)

Further, suppose the shaped plant Gs and the desired stable closed-loop transfer function Tref have the following state-space realizations:

    Gs =s (As, Bs, Cs, Ds)        (9.85)
    Tref =s (Ar, Br, Cr, Dr)        (9.86)
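Once a problem is in the general control configuration, the closed loop is recovered by the lower linear fractional transformation Fl(P, K) = P11 + P12 K (I - P22 K)^{-1} P21. For constant (single-frequency) matrices this is a few lines of numpy; the scalar numbers below are arbitrary:

```python
import numpy as np

def lft_lower(P11, P12, P21, P22, K):
    """F_l(P, K) = P11 + P12 K (I - P22 K)^{-1} P21: the closed-loop map
    from w to z once K closes the lower loop of the generalized plant P."""
    I = np.eye(P22.shape[0])
    return P11 + P12 @ K @ np.linalg.solve(I - P22 @ K, P21)

# Scalar sanity check: P11 = 0, P12 = P21 = 1, P22 = g, K = k
# gives the familiar k/(1 - g k).
g, k = 0.5, 1.0
T = lft_lower(np.zeros((1, 1)), np.eye(1), np.eye(1),
              g * np.eye(1), k * np.eye(1))
print(T[0, 0])  # 1/(1 - 0.5) = 2.0
```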
Then P may be realized by a state-space model (9.87) whose state matrix is diag(As, Ar), with B, C and D blocks assembled from Bs, Br, (Bs Ds^T + Zs Cs^T) Rs^{-1/2}, rho Cs, rho^2 Cr, rho^2 Dr, Rs^{1/2} and identity blocks, partitioned conformally with the exogenous inputs (r, phi), the control us, the errors (us, ys, e) and the measurements (beta, y). Here Rs = I + Ds Ds^T and Ss = I + Ds^T Ds, as used in standard H-infinity algorithms (Doyle et al., 1989) to synthesize the controller, and Zs is the unique positive definite solution to the generalized Riccati equation (9.67) for Gs. MATLAB commands to synthesize the controller are given in Table 9.3.

Remark 1 We stress that we here aim to minimize the H-infinity norm of the entire transfer function in (9.82). An alternative problem would be to minimize the H-infinity norm from r to e subject to an upper bound on || [Delta_Ns  Delta_Ms] ||_inf. This problem would involve the structured singular value, and the optimal controller could be obtained from solving a series of H-infinity optimization problems using DK-iteration, see Section 8.12.

Remark 2 Extra measurements. In some cases, a designer has more plant outputs available as measurements than can (or even need) to be controlled. These extra measurements can often make the design problem easier (e.g. velocity feedback) and therefore when beneficial should be used by the feedback controller K2. This can be accommodated in the two degrees-of-freedom design procedure by introducing an output selection matrix Wo. This matrix selects from the output measurements y only those which are to be controlled and hence included in the model-matching part of the optimization. In Figure 9.21, Wo is introduced between y and the summing junction. In the optimization problem, only the equation for the error e is affected, and in the realization (9.87) for P one simply replaces Cs by Wo Cs and Rs^{1/2} by Wo Rs^{1/2} in the fifth row. For example, if there are four feedback measurements and only the first three are to be controlled, then

    Wo = [1 0 0 0 ; 0 1 0 0 ; 0 0 1 0]        (9.88)

Recall that Wo = I if there are no extra feedback measurements beyond those that are to be controlled.

Remark 3 Steady-state gain matching. The command signals r can be scaled by a constant matrix Wi to make the closed-loop transfer function from r to the controlled outputs Wo y match the desired model Tref exactly at steady-state. This is not guaranteed by the optimization, which aims to minimize the infinity-norm of the error. The required scaling is given by

    Wi = [Wo (I - Gs(0) K2(0))^{-1} Gs(0) K1(0)]^{-1} Tref(0)        (9.89)

The resulting controller is K = [K1 Wi  K2].
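The scaling in Remark 3 is a one-line computation once the DC gains are known. All matrices below are made-up placeholders for Gs(0), K1(0), K2(0) and Tref(0) of an actual design; only the formula is from the text:

```python
import numpy as np

# Placeholder static (s = 0) gains of the pieces entering the scaling formula
Gs0   = np.array([[2.0, 0.3], [0.1, 1.5]])
K10   = np.eye(2)
K20   = np.array([[-1.0, 0.0], [0.0, -0.8]])
Tref0 = np.eye(2)
Wo    = np.eye(2)               # all measured outputs are controlled here

# Wi = [Wo (I - Gs(0) K2(0))^{-1} Gs(0) K1(0)]^{-1} Tref(0)
M = Wo @ np.linalg.solve(np.eye(2) - Gs0 @ K20, Gs0 @ K10)
Wi = np.linalg.solve(M, Tref0)

# With r scaled by Wi, the DC gain from r to Wo y matches Tref(0) exactly
print(np.allclose(M @ Wi, Tref0))  # True
```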
Table 9.3: MATLAB commands to synthesize the H-infinity 2-DOF controller in (9.80)

% Uses MATLAB mu toolbox
%
% INPUTS:  Shaped plant Gs
%          Reference model Tref
%
% OUTPUT:  Two degrees-of-freedom controller K
%
% Coprime factorization of Gs
%
[As,Bs,Cs,Ds] = unpck(Gs);
[ns,ns] = size(As);
[ls,ms] = size(Ds);
Rs = eye(ls)+Ds*Ds';
Ss = eye(ms)+Ds'*Ds;
A1 = (As - Bs*inv(Ss)*Ds'*Cs);
R1 = Cs'*inv(Rs)*Cs;
Q1 = Bs*inv(Ss)*Bs';
[Z1, Z2, fail, reig_min] = ric_schr([A1' -R1; -Q1 -A1]); Zs = Z2/Z1;
% Alt. Robust Control toolbox:
%   [Z1, Z2, eig, zerr, wellposed, Zs] = aresolv(A1',Q1,R1);
%
% Choose rho=1 (designer's choice) and
% build the generalized plant P in (9.87)
%
rho = 1;
[Ar,Br,Cr,Dr] = unpck(Tref);
[nr,nr] = size(Ar);
[lr,mr] = size(Dr);
A = daug(As,Ar);
B1 = [zeros(ns,mr) ((Bs*Ds')+(Zs*Cs'))*inv(sqrt(Rs)); Br zeros(nr,ls)];
B2 = [Bs; zeros(nr,ms)];
B = [B1 B2];
C1 = [zeros(ms,ns+nr); rho*Cs zeros(ls,nr); rho*Cs -rho*rho*Cr];
C2 = [zeros(mr,ns+nr); Cs zeros(ls,nr)];
C = [C1; C2];
D11 = [zeros(ms,mr+ls); zeros(ls,mr) rho*sqrt(Rs); -rho*rho*Dr rho*sqrt(Rs)];
D12 = [eye(ms); rho*Ds; rho*Ds];
D21 = [rho*eye(mr) zeros(mr,ls); zeros(ls,mr) sqrt(Rs)];
D22 = [zeros(mr,ms); Ds];
D = [D11 D12; D21 D22];
P = pck(A,B,C,D);
% Alternative: Use sysic to generate P from Figure 9.21,
% but may get extra states, since states from Gs may enter twice.
%
% Gamma iterations to obtain the H-infinity controller
%
[l1,m1] = size(D21);
[l2,m2] = size(D12);
nmeas = l1; ncon = m2;
gmin = 1; gmax = 5; gtol = 0.01;
[K, Gnclp, gam] = hinfsyn(P, nmeas, ncon, gmin, gmax, gtol);
% Alt. Robust Control toolbox, use command: hinfopt
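For readers without the mu-toolbox, the ric_schr step above (the stabilizing Riccati solution obtained from a Hamiltonian matrix) can be mimicked in plain numpy via the stable invariant subspace. This eigenvector version is an illustrative stand-in of our own; production code should use a Schur-based solver:

```python
import numpy as np

def ric_eig(A, R, Q):
    """Stabilizing solution X of A'X + XA - XRX + Q = 0, computed from the
    stable invariant subspace of the Hamiltonian [A -R; -Q -A'].
    Illustrative only: no conditioning or symmetry safeguards."""
    n = A.shape[0]
    H = np.block([[A, -R], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # n eigenvectors with Re(lambda) < 0
    X1, X2 = stable[:n, :], stable[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

# Scalar check with a = r = q = 1: 2x - x^2 + 1 = 0, stabilizing root 1 + sqrt(2)
X = ric_eig(np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]))
print(round(X[0, 0], 4))  # 2.4142
```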
We will conclude this subsection with a summary of the main steps required to synthesize a two degrees-of-freedom H-infinity loop-shaping controller:

1. Design a one degree-of-freedom H-infinity loop-shaping controller using the procedure of subsection 9.4.2, but without a post-compensator weight W2. Hence W1.
2. Select a desired closed-loop transfer function Tref between the commands and controlled outputs.
3. Set the scalar parameter rho to a small value greater than 1; something in the range 1 to 3 will usually suffice.
4. For the shaped plant Gs = G W1, the desired response Tref, and the scalar parameter rho, solve the standard H-infinity optimization problem defined by P in (9.87) to a specified tolerance to get K = [K1  K2]. Remember to include Wo in the problem formulation if extra feedback measurements are to be used.
5. Replace the prefilter K1 by K1 Wi to give exact model-matching at steady-state.
6. Analyse and, if required, redesign making adjustments to rho and possibly W1 and Tref.

The final two degrees-of-freedom H-infinity loop-shaping controller is illustrated in Figure 9.22.

[Figure 9.22: Two degrees-of-freedom H-infinity loop-shaping controller: r passes through Wi and K1, is combined with the output of the feedback controller K2, and drives the plant through W1]

9.4.4 Observer-based structure for H-infinity loop-shaping controllers

H-infinity designs exhibit a separation structure in the controller. As seen from (9.50) and (9.51) the controller has an observer/state feedback structure, but the observer is non-standard, having a disturbance term (a "worst" disturbance) entering the observer state equations. For H-infinity loop-shaping controllers, whether of the one or two degrees-of-freedom variety, this extra term is not present. The clear structure of H-infinity loop-shaping controllers has several advantages:

- It is helpful in describing a controller's function, especially to one's managers or clients who may not be familiar with advanced control.
- It lends itself to implementation in a gain-scheduled scheme, as shown by Hyde and Glover (1993).
- It offers computational savings in digital implementations and some multi-mode switching schemes, as shown in Samar (1995).

We will present the controller equations, for both one and two degrees-of-freedom H-infinity loop-shaping designs. For simplicity we will assume the shaped plant is strictly proper, with a stabilizable and detectable state-space realization

    Gs =s (As, Bs, Cs, 0)        (9.90)

In which case, as shown in Sefton and Glover (1990), the single degree-of-freedom H-infinity loop-shaping controller can be realized as an observer for the shaped plant plus a state-feedback control law. The equations are

    dxhat_s/dt = As xhat_s + Hs (Cs xhat_s - ys) + Bs us        (9.91)
    us = Ks xhat_s        (9.92)

where xhat_s is the observer state, us and ys are respectively the input and output of the shaped plant, and

    Hs = -Zs Cs^T        (9.93)
    Ks = -Bs^T [I - gamma^{-2} I - gamma^{-2} Zs Xs]^{-1} Xs        (9.94)

where Zs and Xs are the appropriate solutions to the generalized algebraic Riccati equations for Gs given in (9.67) and (9.68). In Figure 9.23 an implementation of an observer-based H-infinity loop-shaping controller is shown in block diagram form. The same structure was used by Hyde and Glover (1993) in their VSTOL design, which was scheduled as a function of aircraft forward speed.

[Figure 9.23: An implementation of an H-infinity loop-shaping controller for use when gain scheduling against a variable v]

Walker (1996) has shown that the two degrees-of-freedom H-infinity loop-shaping controller also has an observer-based structure. He considers a stabilizable and detectable plant

    Gs =s (As, Bs, Cs, 0)        (9.95)

and a desired closed-loop transfer function

    Tref =s (Ar, Br, Cr, 0)        (9.96)

in which case the generalized plant P(s) in (9.87) simplifies to a realization (9.98) with state matrix diag(As, Ar), whose input, output and feed-through blocks are built from Bs, Br, rho Cs, rho^2 Cr, rho I and identity blocks, partitioned as P = [P11  P12 ; P21  P22] conformally with [w ; u] and [z ; v]. Walker then shows that a stabilizing controller K = [K1  K2] satisfying

    || Fl(P, K) ||_inf < gamma        (9.97)

exists if, and only if,

(i) gamma > sqrt(1 + rho^2), and
(ii) X_inf >= 0 is a solution to the algebraic Riccati equation

    A^T X_inf + X_inf A + C^T J C - (X_inf B + C^T J D)(D^T J D)^{-1}(B^T X_inf + D^T J C) = 0        (9.99)

such that

    Re lambda_i [A - B (D^T J D)^{-1}(D^T J C + B^T X_inf)] < 0, for all i        (9.100)

where

    J = [I_z  0 ; 0  -gamma^2 I_w]        (9.101)

and A, B, C, D denote the state-space matrices of the simplified P above, with I_z and I_w unit matrices of dimensions equal to those of the error signal z and the exogenous input w, respectively.
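The observer/state-feedback realization (9.91)-(9.92) is straightforward to simulate. The sketch below uses simple Euler integration with scalar placeholder gains; in a real design Hs and Ks come from the Riccati solutions (9.93)-(9.94):

```python
import numpy as np

def observer_controller_step(xhat, ys, As, Bs, Cs, Hs, Ks, dt):
    """One Euler step of the observer-based loop-shaping controller
        xhat' = As xhat + Hs (Cs xhat - ys) + Bs us,   us = Ks xhat.
    Hs and Ks here are placeholder numbers with the right shapes."""
    us = Ks @ xhat
    xhat_dot = As @ xhat + Hs @ (Cs @ xhat - ys) + Bs @ us
    return xhat + dt * xhat_dot, us

# Scalar shaped plant xs' = -xs + us, ys = xs, with stabilizing placeholder gains
As, Bs, Cs = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
Hs, Ks = np.array([[-2.0]]), np.array([[-0.5]])

xs, xhat = np.array([1.0]), np.array([0.0])
for _ in range(2000):                      # 10 s at dt = 0.005
    xhat, us = observer_controller_step(xhat, Cs @ xs, As, Bs, Cs, Hs, Ks, 0.005)
    xs = xs + 0.005 * (As @ xs + Bs @ us)  # plant update driven by us
print(abs(xs[0]) < 1e-2, abs(xhat[0] - xs[0]) < 1e-2)  # state and estimate converge
```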
Notice that this H-infinity controller depends on the solution to just one algebraic Riccati equation, not two. This is a characteristic of the two degrees-of-freedom H-infinity loop-shaping controller (Hoyle et al., 1991).

Walker further shows that if (i) and (ii) are satisfied, then a stabilizing controller K(s) satisfying || Fl(P, K) ||_inf < gamma has the following equations:

    dxhat_s/dt = As xhat_s + Hs (Cs xhat_s - ys) + Bs us        (9.102)
    dx_r/dt = Ar x_r + Br r        (9.103)
    us = -Bs^T X_inf11 xhat_s - Bs^T X_inf12 x_r        (9.104)

where Hs is as in (9.93), and X_inf11 and X_inf12 are elements of

    X_inf = [X_inf11  X_inf12 ; X_inf21  X_inf22]        (9.105)

which has been partitioned conformally with the states of the shaped plant and those of the reference model. The structure of this controller is shown in Figure 9.24, where the state-feedback gain matrices Fs and Fr are defined by

    Fs = Bs^T X_inf11,    Fr = Bs^T X_inf12        (9.106)-(9.107)

The controller consists of a state observer for the shaped plant Gs, a model of the desired closed-loop transfer function Tref (without Br), and a state-feedback control law that uses both the observer and reference-model states.

As in the one degree-of-freedom case, this observer-based structure is useful in gain-scheduling. Likewise, if the weight W1(s) is the same at all the design operating points, parts of the observer may not change. The reference-model part of the controller is also convenient because it is often the same at different design operating points and so may not need to be changed at all during a scheduled operation of the controller. Therefore, whilst the structure of the controller is comforting in the familiarity of its parts, it also has some significant advantages when it comes to implementation.

9.4.5 Implementation issues

Discrete-time controllers. For implementation purposes, discrete-time controllers are usually required. These can be obtained from a continuous-time design using a bilinear transformation from the s-domain to the z-domain, but there can be advantages in being able to design directly in discrete-time.
In Samar (1995) and Postlethwaite et al. (1995), observer-based state-space equations are derived directly in discrete-time for the two degrees-of-freedom H-infinity loop-shaping controller and successfully applied to an aero engine. This application was on a real engine, a Spey engine, which is a Rolls-Royce 2-spool reheated turbofan housed at the UK Defence Research Agency, Pyestock. As this was a real application, a number of important implementation issues needed to be addressed. Although these are outside the general scope of this book, they will be briefly mentioned now.

[Figure 9.24: Structure of the two degrees-of-freedom H-infinity loop-shaping controller: an observer for the shaped plant, a reference model driven by the commands, and state feedback from both sets of states]

Anti-windup. In H-infinity loop-shaping the pre-compensator weight W1 would normally include integral action in order to reject low-frequency disturbances acting on the system. However, in the case of actuator saturation the integrators continue to integrate their input and hence cause windup problems. An anti-windup scheme is therefore required on the weighting function W1. The approach we recommend is to implement the weight W1 in its self-conditioned or Hanus form. Let the weight W1 have a realization

    W1 =s (Aw, Bw, Cw, Dw)        (9.108)

and let u be the input to the plant actuators and us the input to the shaped plant. Then u = W1 us.
When implemented in Hanus form (Hanus et al., 1987), the expression for u becomes

    u =s [Aw - Bw Dw^{-1} Cw | 0   Bw Dw^{-1} ;  Cw | Dw   0] [us ; ua]        (9.109)

where ua is the actual plant input, that is the measurement at the output of the actuators, which therefore contains information about possible actuator saturation. The situation is illustrated in Figure 9.25, where the actuators are each modelled by a unit gain and a saturation.

[Figure 9.25: Self-conditioned weight W1]

The Hanus form prevents windup by keeping the states of W1 consistent with the actual plant input at all times. When there is no saturation, ua = u, the dynamics of W1 remain unaffected and (9.109) simplifies to (9.108). But when ua differs from u, the dynamics are inverted and driven by ua so that the states remain consistent with the actual plant input. Notice that such an implementation requires W1 to be invertible and minimum phase.

Exercise 9.9 Show that the Hanus form of the weight W1 in (9.109) simplifies to (9.108) when there is no saturation, i.e. when ua = u.

Bumpless transfer. In the aero-engine application, a multi-mode switched controller was designed. This consisted of three controllers, each designed for a different set of engine output variables, which were switched between depending on the most significant outputs at any given time. To ensure smooth transition from one controller to another - bumpless transfer - it was found useful to condition the reference models and the observers in each of the controllers. Thus when on-line, the observer state evolves according to an equation of the form (9.102), but when off-line the state equation becomes

    dxhat_s/dt = As xhat_s + Hs (Cs xhat_s - ys) + Bs uas        (9.110)

where uas is the actual input to the shaped plant governed by the on-line controller. The reference model with state feedback given by (9.103) and (9.104) is not invertible and therefore cannot be self-conditioned.
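The anti-windup effect of the self-conditioned form can be seen in a small simulation. The weight is a scalar PI-type W1(s) = k(s + a)/s, for which the Hanus state equation reduces to xw' = -a xw + a ua; all numbers are illustrative:

```python
import numpy as np

# Scalar PI-type weight W1(s) = k (s + a)/s:
#   naive:  xw' = a*k*us,            u = xw + k*us   (integrator winds up)
#   Hanus:  xw' = -a*xw + a*sat(u),  u = xw + k*us   (state tracks saturated input)
k, a, dt = 1.0, 2.0, 0.001
us = 2.0                              # demand large enough to saturate the actuator
sat = lambda u: np.clip(u, -1.0, 1.0)

x_naive = x_hanus = 0.0
for _ in range(5000):                 # 5 seconds of Euler integration
    x_naive += dt * (a * k * us)      # keeps integrating regardless of saturation
    u_h = x_hanus + k * us
    x_hanus += dt * (-a * x_hanus + a * sat(u_h))
print(round(x_naive, 2), round(x_hanus, 2))  # 20.0 1.0
```

The naive integrator state grows without bound while the actuator is saturated; the Hanus state settles near the saturated input, so recovery is immediate when the demand drops.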
In discrete-time, however, the optimal control also has a feed-through term from r, which gives a reference model that can be inverted. In the aero-engine example the reference models for the three controllers were therefore each conditioned so that the inputs to the shaped plant from the off-line controllers followed the actual shaped plant input uas given by the on-line controller.

Satisfactory solutions to implementation issues such as those discussed above are crucial if advanced control methods are to gain wider acceptance in industry. We have tried to demonstrate here that the observer-based structure of the H-infinity loop-shaping controller is helpful in this regard.

9.5 Conclusion

We have described several methods and techniques for controller design, but our emphasis has been on H-infinity loop shaping, which is easy to apply and in our experience works very well in practice. It combines classical loop-shaping ideas (familiar to most practising engineers) with an effective method for robustly stabilizing the feedback loop. After a design, the resulting controller should be analyzed with respect to robustness and tested by nonlinear simulation. For the former, we recommend analysis as discussed in Chapter 8, and if the design is not robust, then the weights will need modifying in a redesign.

An alternative to H-infinity loop shaping is a standard H-infinity design with a "stacked" cost function such as in S/KS mixed-sensitivity optimization. In this approach, H-infinity optimization is used to shape two or sometimes three closed-loop transfer functions. However, with more functions the shaping becomes increasingly difficult for the designer. In other design situations, such as unstable plants with multiple gain crossover frequencies, it may not be easy to decide on a desired loop shape. In which case, we would suggest doing an initial LQG design (with simple weights) and using the resulting loop shape as a reasonable one to aim for in H-infinity loop shaping.

In design situations where there are several performance objectives (e.g. on signals, model following and model uncertainty), it may be more appropriate to follow a signal-based H2 or H-infinity approach. But again the problem formulations become so complex that the designer has little direct influence on the design. Sometimes one might consider synthesizing an optimal controller, but this complexity is rarely necessary in practice. Moreover, one should be careful about combining controller synthesis and analysis into a single step, which can then turn out to be unsuitable in ways that were not foreseen. The following quote from Rosenbrock (1974) illustrates the dilemma:

    In synthesis the designer specifies in detail the properties which his system must have, to the point where there is only one possible solution. The act of specifying the requirements in detail implies the final solution, yet has to be done in ignorance of this solution.

Therefore, control system design usually proceeds iteratively through the steps of modelling, control structure design, controllability analysis, performance and robustness weights selection, controller synthesis, control system analysis and nonlinear simulation. Rosenbrock (1974) makes the following observation:

    Solutions are constrained by so many requirements that it is virtually impossible to list them all. The designer finds himself threading a maze of such requirements, attempting to reconcile conflicting demands of cost, performance, easy maintenance, and so on. A good design usually has strong aesthetic appeal to those who are competent in the subject.
10 CONTROL STRUCTURE DESIGN

Most (if not all) available control theories assume that a control structure is given at the outset. They therefore fail to answer some basic questions which a control engineer regularly meets in practice. Which variables should be controlled, which variables should be measured, which inputs should be manipulated, and which links should be made between them? The objective of this chapter is to describe the main issues involved in control structure design and to present some of the available quantitative methods.

10.1 Introduction

Control structure design was considered by Foss (1973) in his paper entitled “Critique of process control theory”, where he concluded by challenging the control theoreticians of the day to close the gap between theory and applications in this important area. Later Morari et al. (1980) presented an overview of control structure design, hierarchical control and multilevel optimization in their paper “Studies in the synthesis of control structure for chemical processes”, but the gap still remained, and still does to some extent today. Control structure design is clearly important in the chemical process industry because of the complexity of these plants, but the same issues are relevant in most other areas of control where we have large-scale systems. For example, in the late 1980s Carl Nett (Nett, 1989; Nett and Minto, 1989) gave a number of lectures based on his experience on aero-engine control at General Electric, under the title “A quantitative approach to the selection and partitioning of measurements and manipulations for the control of complex systems”. He noted that increases in controller complexity unnecessarily outpace increases in plant complexity, and that the objective should be to minimize control system complexity subject to the achievement of accuracy specifications in the face of uncertainty.
etc): What algorithm is used for Ã in Figure 10. see section 10. and stated that the controller design problem is to ¯ Find a controller Ã which based on the information in Ú . if we go back to Chapter 1 (page 1). 10. e. In this chapter we are concerned with the structural decisions (Steps 4.2 and 10.1. The control structure (or control strategy) refers to all structural decisions included in the design of a control system.1? The distinction between the words control structure and control conﬁguration may seem minor. 5. thereby minimizing the closedloop norm from Û to Þ .¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ . (weighted) exogenous inputs (weighted) exogenous outputs Û Ù ¹ ¹ È Ã ¹Þ Ú control signals sensed outputs Figure 10. the control conﬁguration .3): What are the variables Þ in Figure 10. On the other hand.1. then we see that this is only Step 7 in the overall process of designing a control system.8): What is the structure of Ã in Figure 10.4): What are the variable sets Ù and Ú in Figure 10. The selection of a controller type (control law speciﬁcation. The selection of a control conﬁguration (a structure interconnecting measurements/commands and manipulated variables.. minimize control system complexity subject to the achievement of accuracy speciﬁcations in the face of uncertainty. PIDcontroller. However. generates a control signal Ù which counteracts the inﬂuence of Û on Þ . see sections 10. see sections 10.6. how should we “pair” the variable sets Ù and Ú ? 4. but note that it is signiﬁcant within the context of this book. decoupler.g. that is. The selection of manipulations and measurements (sets of variables which can be manipulated and measured for control purposes.1? 3.. LQG. The selection of controlled outputs (a set of variables which are to be controlled to achieve a set of speciﬁc objectives. 6 and 7) associated with the following tasks of control structure design: 1.1: General control conﬁguration In Chapter 3.7 and 10.1? 
2.8 we considered the general control problem formulation in Figure 10.
Ideally, the tasks involved in designing a complete control system are performed sequentially: first a “top-down” selection of controlled outputs, measurements and inputs (with little regard to the configuration of the controller K), and then a “bottom-up” design of the control system (in which the selection of the control configuration is the most important decision). However, in practice the tasks are closely related in that one decision directly influences the others, so the procedure may involve iteration.

Some discussion on control structure design in the process industry is given by Morari (1982), Shinskey (1988), Stephanopoulos (1984) and Balchen and Mumme (1988). A review of control structure design in the chemical process industry (plantwide control) is given by Larsson and Skogestad (2000). A survey on control structure design is given by van de Wal and de Jager (1995). The reader is referred to Chapter 5 (page 160) for an overview of the literature on input-output controllability analysis.

The number of possible control structures shows a combinatorial growth, so for most systems a careful evaluation of all alternative control structures is impractical. Fortunately, simple controllability measures as presented in Chapters 5 and 6 may be used for quickly evaluating or screening alternative control structures. One important reason for decomposing the control system into a specific control configuration is that it may allow for simple tuning of the sub-controllers without the need for a detailed plant model describing the dynamics and interactions in the process. Multivariable centralized controllers may always outperform decomposed (decentralized) controllers, but this performance gain must be traded off against the cost of obtaining and maintaining a sufficiently detailed plant model.
Control configuration issues are discussed in more detail in Section 10.6. The selection of controlled outputs, manipulations and measurements (tasks 1 and 2 combined) is sometimes called input/output selection.

10.2 Optimization and control

In Sections 10.2 and 10.3 we are concerned with the selection of controlled variables (outputs). These are the variables z in Figure 10.1, but we will in these two sections call them y. The selection of controlled outputs involves selecting the variables y to be controlled at given reference values, y ~ r. Here the reference value r is set at some higher layer in the control hierarchy.
as is illustrated in Figure 10. Thus.2: Typical control system hierarchy in a chemical plant control hierarchy for a complete chemical plant.2 which shows a typical Scheduling (weeks) Sitewide optimization (day) © ª ¨¨ª¨ ¨ ª Control layer Local optimization (hour) Í ¢ · ¢ ¢ Supervisory control (minutes) Regulatory control (seconds) Ï Figure 10. the selection of controlled outputs (for the control layer) is usually intimately related to the hierarchical structuring of the control system which is often divided into two layers: ¯ ¯ optimization layer — computes the desired reference commands Ö (outside the scope of this book) control layer — implements these commands to achieve Ý Ö (the focus of this book). Here the control layer is subdivided into two layers: supervisory control (“advanced control”) and regulatory control (“base control”).¼ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ in the control hierarchy. We have also included a scheduling layer above the optimization. Additional layers are possible. the information ﬂow in such a control hierarchy is based on the higher layer . In general.
sending reference values (setpoints) to the layer below, and the lower layer reporting back any problems in achieving this. There is usually a time scale separation, with faster lower layers, as indicated in Figure 10.2. The optimization tends to be performed open-loop with limited use of feedback; the control layer, on the other hand, is mainly based on feedback information. The optimization is often based on nonlinear steady-state models, whereas we often use linear dynamic models in the control layer (as we do throughout the book).

From a theoretical point of view, the optimal coordination of the inputs, and thus the optimal performance, is obtained with a centralized optimizing controller, which combines the two layers of optimization and control; see Figure 10.3(c). All control actions in such an ideal control system would be perfectly coordinated, and the control system would use on-line dynamic optimization based on a nonlinear dynamic model of the complete plant, instead of, for example, infrequent steady-state optimization. However, this solution is normally not used, for a number of reasons, including the cost of modelling, the difficulty of controller design, maintenance and modification, robustness problems, operator acceptance, and the lack of computing power. Nevertheless, in the next section we provide some ideas on how to select objectives (controlled outputs) for the control layer, such that the overall goal is satisfied.

Remark 1 In accordance with Lunze (1992), we have purposely used the word "layer" rather than "level" for the hierarchical decomposition of the control system. The difference is that in a multilevel system all units contribute to satisfying the same goal, whereas in a multilayer system the different units have different objectives (which preferably contribute to the overall goal). Multilayer structures, although often used in practice, lack a formal analytical treatment, according to Lunze (1992).

Remark 2 The tasks within any layer can be performed by humans (e.g. manual control), and the interaction and task sharing between the automatic control system and the human operators are very important in most cases, as for an aircraft pilot. However, these issues are outside the scope of this book.
Multilevel systems have been studied in connection with the solution of optimization problems, e.g. by hierarchical decomposition and decentralization, and Mesarovic (1970) reviews some ideas related to on-line multilayer structures applied to large-scale industrial complexes.

This hierarchical structure means that the setpoints, as viewed from a given layer in the hierarchy, are updated only periodically. Between these updates, when the setpoints are constant, it is important that the system remains reasonably close to its optimum. This observation is the basis for Section 10.3, which deals with selecting outputs for the control layer. As noted above, we may also decompose the control layer itself, and from now on, when we talk about control configurations, hierarchical decomposition and decentralization, we generally refer to the control layer.
Figure 10.3: Alternative structures for optimization and control. (a) Open-loop optimization. (b) Closed-loop implementation with separate control layer. (c) Integrated optimization and control.

10.3 Selection of controlled outputs

A controlled output is an output variable (usually measured) with an associated control objective (usually a reference value). In many cases, it is clear from a physical understanding of the process what the controlled outputs should be. For example, if we consider heating or cooling a room, then we should select room temperature as the controlled output y. In other cases it is less obvious, because each control objective may not be associated with a measured output variable; such variables may not appear to be important in themselves.

Example 10.1 Cake baking. To get an idea of the issues involved in output selection, let us consider the process of baking a cake. The overall goal is to make a cake which is well baked inside and with a nice exterior. The manipulated input for achieving this is the heat input, u = Q (and we will assume that the duration of the baking is fixed). Now, if we had never baked a cake before, and if we were to construct the stove ourselves, we might consider directly manipulating the heat input to the stove, possibly with a wattmeter measurement. However, this open-loop implementation would not work well, as the optimal heat input depends strongly on the particular oven we use, and the operation is also sensitive to disturbances, for example from opening the oven door or whatever else might be in the oven. In short, the open-loop implementation is sensitive to uncertainty. An effective way of reducing the uncertainty is to use feedback. Therefore, in practice we look up the optimal oven temperature in a cook book, and use a closed-loop implementation where a thermostat is used to keep the temperature y at its predetermined value T. In the cake baking process we thus select the oven temperature as the controlled output y in the control layer. The (a) open-loop and (b) closed-loop implementations of the cake baking process are illustrated in Figure 10.3. In (b) the "optimizer" is the cook book, which has a pre-computed table of the optimal temperature profile; the reference value r for temperature is then sent down to the control layer, which consists of a simple feedback controller (the thermostat).

Recall that the title of this section is "selection of controlled outputs". It is interesting to note that controlling the oven temperature in itself has no direct relation to the overall goal of making a well-baked cake. So why do we select the oven temperature as a controlled output? We now want to outline an approach for answering questions of this kind. Two distinct questions arise:

1. What variables y should be selected as the controlled variables?
2. What is the optimal reference value (y_opt) for these variables?

The second problem is one of optimization and is extensively studied (but not in this book). Here we want to gain some insight into the first problem. We make the following assumptions:

(a) The overall goal can be quantified in terms of a scalar cost function J which we want to minimize.
(b) For a given disturbance d, there exists an optimal value u_opt(d), and a corresponding value y_opt(d), which minimizes the cost function J.
(c) The reference values r for the controlled outputs y should be constant, i.e. r should be independent of the disturbances d. Typically, some average value is selected.

For example, in the cake baking process we may assign to each cake a number P on a scale from 0 to 10, based on cake quality. A perfectly baked cake achieves P = 10, an acceptably baked cake achieves a somewhat lower value, and a completely burned cake may correspond to P = 1. In another case, P could be the operating profit. In both cases we can select J = -P, and the overall goal of the control system is then to minimize J.

The system behaviour is a function of the independent variables u and d, so we may write J = J(u, d). For a given disturbance d, the optimal value of the cost function is

  J_opt(d) ≜ J(u_opt(d), d) = min_u J(u, d)   (10.1)

In the following, we let y denote the selected controlled outputs in the control layer. Note that this may also include directly using the inputs (open-loop implementation) by selecting y = u. Ideally, we want u = u_opt(d); that is, the input u (generated by feedback to achieve y ≈ r) should be close to the optimal input u_opt(d). Note that we have assumed that r is independent of d. Obviously, this ideal will not be achieved in practice. What happens if u ≠ u_opt, for example due to a disturbance? We then have a loss, which can be quantified by L = J - J_opt. More precisely, consider the loss

  L = J(u, d) - J_opt(d)

where d is a fixed (generally non-zero) disturbance. A reasonable objective for selecting the controlled outputs y is to minimize some norm of the loss, for example the worst-case loss

  Worst-case loss:  Φ ≜ max_{d in D} [ J(u, d) - J_opt(d) ]   (10.2)

Here D is the set of possible disturbances. As "disturbances" we should also include changes in operating point and model uncertainty.

10.3.1 Selecting controlled outputs: Direct evaluation of cost

The "brute force" approach for selecting controlled variables is to evaluate the loss for alternative sets of controlled variables, assuming y = r + e where r is kept constant. Specifically, we evaluate directly the cost function J for various disturbances d and control errors e. This approach may be time consuming, because the solution of the nonlinear equations must be repeated for each candidate set of controlled outputs. If we can achieve an acceptable loss with constant references (setpoints) r, then this set of controlled variables is said to be self-optimizing.
The set of controlled outputs with the smallest worst-case or average value of J is then preferred. Here r is usually selected as the optimal value for the nominal disturbance, but this may not be the best choice, and its value may also be found by optimization ("optimal back-off").
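This "brute force" evaluation can be made concrete with a small sketch. The cost function and the two candidate controlled variables below are invented for illustration (they are not from the book): we take the scalar cost J(u, d) = (u - d)^2, whose optimum is u_opt(d) = d with J_opt(d) = 0, hold each candidate at its nominally optimal setpoint, and compare worst-case losses over a disturbance set D.

```python
# Illustrative sketch (not from the book): brute-force screening of two
# hypothetical candidate controlled variables for the toy cost
# J(u, d) = (u - d)^2, whose optimum is u_opt(d) = d with J_opt(d) = 0.
# Keeping a candidate variable at a constant setpoint r implicitly
# determines the input u; we then evaluate the worst-case loss over D.

def J(u, d):
    return (u - d) ** 2

# Candidate 1: control y = u itself (open-loop policy u = r).
# Candidate 2: control y = u - d (its optimal value is 0 for every d).
def u_from_cv1(r, d):
    return r                # y = u = r

def u_from_cv2(r, d):
    return r + d            # y = u - d = r  =>  u = r + d

disturbances = [-1.0, -0.5, 0.0, 0.5, 1.0]   # the assumed set D
results = {}
for name, policy in [("y = u", u_from_cv1), ("y = u - d", u_from_cv2)]:
    r = 0.0   # optimal setpoint for the nominal disturbance d = 0
    # Since J_opt(d) = 0 here, the loss is simply J itself.
    results[name] = max(J(policy(r, d), d) for d in disturbances)
    print(name, "worst-case loss:", results[name])
```

Here the second candidate happens to absorb the disturbance completely, so it is self-optimizing with zero loss; in realistic problems the loss is rarely exactly zero, and the same loop would instead rank the candidate sets by their worst-case or average loss.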
10.3.2 Selecting controlled outputs: Linear analysis

We here use a linear analysis of the loss function. This results in the useful minimum singular value rule. However, note that this is a local analysis, which may be misleading, for example if the optimum point of operation is close to infeasibility. The special case of measurement selection for indirect control is covered on page 439. We make the following additional assumptions:

(d) The cost function J is smooth, or more precisely twice differentiable.
(e) The optimization problem is unconstrained. If it is optimal to keep some variable at a constraint, then we assume that this is implemented and consider the remaining unconstrained problem.
(f) The dynamics of the problem can be neglected; that is, we consider the steady-state control and optimization.

For a fixed d, we may then express J(u, d) in terms of a Taylor series expansion in u around the optimal point:

  J(u, d) = J_opt(d) + (dJ/du)_opt^T (u - u_opt(d)) + (1/2) (u - u_opt(d))^T (d^2J/du^2)_opt (u - u_opt(d)) + ...   (10.3)

The first-order term is zero at the optimal point for an unconstrained problem, and we will neglect terms of third order and higher (which assumes that we are reasonably close to the optimum). Equation (10.3) thus quantifies how u - u_opt affects the cost function. Next, to study how this relates to output selection, we use a linearized model of the plant, y = G u + G_d d, which for a fixed d becomes y - y_opt = G (u - u_opt), where G is the steady-state gain matrix. If G is invertible, we then get

  u - u_opt = G^{-1} (y - y_opt)   (10.4)

(If G is not invertible, we may use the pseudo-inverse, which results in the smallest possible ||u - u_opt||_2 for a given y - y_opt.) Substituting (10.4) into (10.3) gives

  J - J_opt ≈ (1/2) (G^{-1}(y - y_opt))^T (d^2J/du^2)_opt (G^{-1}(y - y_opt))   (10.5)

where the term (d^2J/du^2)_opt is independent of y. Obviously, we would like to select the controlled outputs such that y - y_opt is zero. However, this is not possible in practice. To see this, write

  y - y_opt = (y - r) + (r - y_opt) = e + e_opt   (10.6)

First, we have an optimization error e_opt(d) ≜ r - y_opt(d), because the algorithm (e.g. a cook book) pre-computes a desired r which is different from the optimal y_opt(d). Second, we have a control error e = y - r, because the control layer is not perfect, for example due to poor control performance or an incorrect measurement or estimate (steady-state bias) of y. If the control itself is perfect, then e = n (the measurement noise). In most cases the errors e and e_opt(d) can be assumed independent.
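The local loss expression (10.5) can be checked numerically on a toy problem. The Hessian and gain matrix below are invented for the sketch; the cost is quadratic with minimum 0 at u_opt = (1, 1), so the direct loss J - J_opt and the formula (10.5) should agree exactly.

```python
# Illustrative numerical check (not from the book) of the loss expression
# (10.5) for a hypothetical quadratic cost with two inputs. All matrices
# are invented for the sketch; u_opt = (1, 1) and J_opt = 0 by construction.

H = [[2.0, 0.5],
     [0.5, 1.0]]   # assumed Hessian (d^2 J / d u^2)_opt
G = [[1.0, 2.0],
     [0.0, 1.0]]   # assumed steady-state gain matrix, y - y_opt = G (u - u_opt)

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def J(u):
    du = [u[0] - 1.0, u[1] - 1.0]
    Hdu = matvec(H, du)
    return 0.5 * (du[0]*Hdu[0] + du[1]*Hdu[1])

# A deviation in the controlled outputs, mapped back through G^{-1}, eq (10.4):
dy = [0.3, -0.2]
detG = G[0][0]*G[1][1] - G[0][1]*G[1][0]
Ginv = [[ G[1][1]/detG, -G[0][1]/detG],
        [-G[1][0]/detG,  G[0][0]/detG]]
du = matvec(Ginv, dy)

loss_direct = J([1.0 + du[0], 1.0 + du[1]])   # J - J_opt evaluated directly
Hdu = matvec(H, du)
loss_formula = 0.5 * (du[0]*Hdu[0] + du[1]*Hdu[1])   # (1/2)(G^{-1} dy)^T H (G^{-1} dy), eq (10.5)
print(round(loss_direct, 6), round(loss_formula, 6))
```

For a genuinely nonlinear cost the two numbers would agree only to second order near the optimum, which is exactly the local character of the analysis noted above.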
Note that σ_max(G^{-1}) = 1/σ_min(G), and so we want the smallest singular value of the steady-state gain matrix G to be large (but recall that singular values depend on scaling, as discussed below). In summary, from (10.5) and (10.6) we conclude that we should select the controlled outputs y such that:

1. G^{-1} is small (i.e. σ_min(G) is large); the choice of y should be such that the inputs have a large effect on y.
2. e_opt(d) = r - y_opt(d) is small; the choice of y should be such that its optimal value y_opt(d) depends only weakly on the disturbances and other changes.
3. e = y - r is small; the choice of y should be such that it is easy to keep the control error e small.

Moreover, we see from (10.5) that we should first scale the outputs such that the expected magnitude of y_i - y_opt,i is similar (e.g. 1) in magnitude for each output, and scale the inputs such that the effect of a given deviation u_j - u_opt,j on the cost function J is similar for each input (such that (d^2J/du^2)_opt is close to a constant times a unitary matrix). To use σ_min(G) to select controlled outputs, we must also assume:

(g) The "worst-case" combination of output deviations y - y_opt, corresponding to the direction of σ_min(G), can occur in practice; we must also assume that the variations in y_i - y_opt,i are uncorrelated.

The desire to have σ_min(G) large is consistent with our intuition that we should ensure that the controlled outputs are independent of each other.

Example 10.1 Cake baking, continued. Let us return to our initial question: why select the oven temperature as a controlled output? We have two alternatives: a closed-loop implementation with y = T (the oven temperature) and an open-loop implementation with y = u = Q (the heat input). From experience, we know that the optimal oven temperature T_opt is largely independent of disturbances and is almost the same for any oven. This means that we may always specify the same oven temperature, as obtained from the cook book. On the other hand, the optimal heat input Q_opt depends strongly on the heat loss, the size of the oven, and so on, and may vary between, say, 100 W and several thousand watts. A cook book would then need to list a different value of Q_r for each kind of oven, and would in addition need some correction factor depending on the room temperature, how often the oven door is opened, and so on. Therefore, we find that it is much easier to keep the optimization error e_opt = T_r - T_opt [deg C] small than to keep Q_r - Q_opt [W] small. Thus, the main reason for controlling the oven temperature is to minimize the optimization error.

Procedure. The use of the minimum singular value to select controlled outputs may be summarized in the following procedure:

1. From a (nonlinear) model, compute the optimal parameters (inputs and outputs) for various conditions (disturbances, operating points). (This yields a "look-up" table of optimal parameter values as a function of the operating conditions.)
2. From this data, obtain for each candidate output the variation in its optimal value, v_i = (y_opt,i,max - y_opt,i,min)/2.
3. Scale the candidate outputs such that, for each output, the sum of the magnitudes of v_i and the control error e_i (including measurement noise n_i) is similar in magnitude (e.g. |v_i| + |e_i| = 1).
4. Scale the inputs such that a unit deviation in each input from its optimal value has the same effect on the cost function J (i.e. such that (d^2J/du^2)_opt is close to a constant times a unitary matrix).
5. Select as candidates those sets of controlled outputs which correspond to a large value of σ_min(G). Here G is the transfer function for the effect of the scaled inputs on the scaled outputs. Note that the disturbances and measurement noise enter indirectly through the scaling of the outputs.

Remark. Our desire to have σ_min(G) large for output selection is not related to the desire to have σ_min(G) large to avoid input constraints, as discussed in Section 6.9. In particular, the scalings, and thus the matrix G, are different for the two cases.

10.3.3 Selection of controlled variables: Summary

Generally, the optimal values of all variables will change with time during operation (due to disturbances and other changes). For practical reasons, we have considered a hierarchical strategy where the optimization is performed only periodically. The question is then: which variables (controlled outputs) should be kept constant (between each optimization)? Essentially, we found that we should select variables y for which the variation in optimal value and control error is small compared to their controllable range (the range y may reach by varying the input u). We considered two approaches for selecting controlled outputs:

1. "Brute force" evaluation, to find the set with the smallest loss imposed by using constant values for the setpoints r.
2. Maximization of σ_min(G), where G is appropriately scaled (see the above procedure).

If the loss imposed by keeping constant setpoints is acceptable, then we have self-optimizing control.

Example. The aero-engine application in Chapter 12 provides a nice illustration of output selection. There the overall goal is to operate the engine optimally in terms of fuel consumption, while at the same time staying safely away from instability. The optimization layer is a look-up table, which gives the optimal parameters for the engine at various operating points. Since the engine at steady-state has three degrees of freedom, we need to specify three variables to keep the engine approximately at the optimal point, and five alternative sets of three outputs are considered. The outputs are scaled as outlined above, and a good output set is then one with a large value of σ_min(G), provided we can also achieve good dynamic control performance. The objective of the control layer is then to keep the controlled outputs at their reference values (which are computed by the optimization layer).
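Step 5 of the procedure can be sketched numerically. The two candidate (already scaled) gain matrices below are invented for illustration, and the helper computes the singular values of a 2x2 matrix from the eigenvalues of G^T G, so no linear-algebra library is needed.

```python
import math

# Illustrative sketch (not from the book): comparing sigma_min for two
# hypothetical candidate output sets of a plant with 2 inputs. G1 and G2
# are assumed steady-state gain matrices AFTER the scaling described above.

def singular_values_2x2(G):
    # Singular values are the square roots of the eigenvalues of G^T G.
    (g11, g12), (g21, g22) = G
    a = g11**2 + g21**2          # (G^T G)[0][0]
    c = g12**2 + g22**2          # (G^T G)[1][1]
    b = g11*g12 + g21*g22        # (G^T G)[0][1]
    disc = math.sqrt((a - c)**2 + 4.0*b*b)
    return (math.sqrt((a + c + disc)/2.0),
            math.sqrt(max((a + c - disc)/2.0, 0.0)))

G1 = [[10.0, 10.0],
      [10.0,  9.0]]   # nearly collinear rows -> small sigma_min
G2 = [[10.0, 10.0],
      [10.0, -10.0]]  # orthogonal rows -> large sigma_min

for name, G in [("G1", G1), ("G2", G2)]:
    smax, smin = singular_values_2x2(G)
    print(name, "sigma_min =", round(smin, 3))
# The rule prefers the candidate set with the larger sigma_min (here G2).
```

The first candidate has large individual gains but its outputs move together, so some output-deviation directions require very large inputs; the small sigma_min flags exactly this.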
Thus, the selection of controlled and measured outputs are two separate issues, although the two decisions are obviously closely related. The measurement selection problem is briefly discussed in the next section.

10.4 Selection of manipulations and measurements

We are here concerned with the variable sets u and v in Figure 10.1. Note that the measurements used by the controller (v) are in general different from the controlled variables (z), because 1) we may not be able to measure all the controlled variables, and 2) we may want to measure and control additional variables in order to

- stabilize the plant (or, more generally, change its dynamics), or
- improve local disturbance rejection.

The controlled outputs are often measured, but we may also estimate their values based on other measured variables, and we may use other measurements to improve the control of the controlled outputs, for example by use of cascade control.

Stabilization. We usually start the controller design by designing a (lower-layer) controller to stabilize the plant. The issue is then: which outputs (measurements) and inputs (manipulations) should be used for stabilization? We should clearly avoid saturation of the inputs, because this makes the system effectively open-loop, so that stabilization is impossible. A reasonable objective is therefore to minimize the required input usage of the stabilizing control system. It turns out that, for a single unstable mode, this is achieved by selecting the output (measurement) and input (manipulation) corresponding to the largest elements in the output and input pole vectors y_p and u_p, respectively (see remark on page 2) (Havre, 1998; Havre and Skogestad, 1998b). This choice maximizes the (state) controllability and observability of the unstable mode.

Local disturbance rejection. The selected manipulations should have a large effect on the controlled outputs, and should be located "close" (in terms of dynamic response) to the outputs and measurements. For measurements, the rule is generally to select those which have a strong relationship with the controlled outputs, or which may quickly detect a major disturbance, and which together with the manipulations can be used for local disturbance rejection. Then, in Section 10.5, we discuss the relative gain array of the "big" transfer matrix (with all candidate inputs and outputs included) as a useful screening tool for selecting controlled outputs.

For a more formal analysis, we may consider the model y_all = G_all u_all + G_d,all d, where

  y_all = all candidate outputs (measurements)
  u_all = all candidate inputs (manipulations)
The model for a particular combination of inputs and outputs is then y = G u + G_d d, where

  G = S_O G_all S_I,   G_d = S_O G_d,all   (10.7)

Here S_I is a non-square input "selection" matrix with a 1 and otherwise 0's in each column, and S_O is a non-square output "selection" matrix with a 1 and otherwise 0's in each row. For example, with S_O = I all outputs are selected, and with S_O = [0 I] output 1 has not been selected.

To evaluate the alternative combinations, one may perform an input-output controllability analysis, as outlined in Chapter 6, for each combination (e.g. consider the minimum singular value, RHP-zeros, interactions, etc.). At least this may be useful for eliminating some alternatives. A more involved approach, based on analyzing achievable robust performance by neglecting causality, is outlined by Lee et al. (1995). This approach is more involved both in terms of computation time and in the effort required to define the robust performance objective. An even more involved (and exact) approach would be to synthesize controllers for optimal robust performance for each candidate combination. However, the number of combinations has a combinatorial growth, so even a simple input-output controllability analysis becomes very time-consuming if there are many alternatives. For a plant where we want to select m from M candidate manipulations, and l from L candidate measurements, the number of possibilities is

  C(M, m) C(L, l) = M!/(m!(M-m)!) * L!/(l!(L-l)!)   (10.8)

A few examples: for m = l = 1 and M = L = 2 the number of possibilities is 4; for m = l = 2 and M = L = 4 it is 36; for m = l = 5 and M = L = 10 it is 63504; and for l = 5 and L = 100 (selecting 5 measurements out of 100 possible) there are 75287520 possible combinations. The number of possibilities is much larger if we consider all possible combinations with 1 to M inputs and 1 to L outputs. The number is then (Nett, 1989):

  sum over m = 1..M, l = 1..L of C(M, m) C(L, l)

For example, for M = L = 2 there are 4 + 2 + 2 + 1 = 9 candidates (4 structures with one input and one output, 2 structures with one input and two outputs, 2 structures with two inputs and one output, and 1 structure with two inputs and two outputs).

One way of avoiding this combinatorial problem is to base the selection directly on the "big" models G_all and G_d,all. For example, one may consider the singular value decomposition and relative gain array of G_all, as discussed in the next section. This rather crude analysis may be used, together with physical insight, rules of thumb and simple controllability measures, to perform a pre-screening in order to reduce the possibilities to a manageable number. These candidate combinations can then be analyzed more carefully.
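The counts quoted above can be reproduced in a few lines using only the standard library:

```python
from math import comb

# Checking the combinatorial counts quoted above: choosing m of M
# manipulations and l of L measurements gives comb(M, m) * comb(L, l)
# possibilities, as in (10.8).
def n_fixed(m, M, l, L):
    return comb(M, m) * comb(L, l)

# Nett (1989): all combinations with 1..M inputs and 1..L outputs.
def n_all(M, L):
    return sum(comb(M, m) * comb(L, l)
               for m in range(1, M + 1) for l in range(1, L + 1))

print(n_fixed(1, 2, 1, 2))    # 4
print(n_fixed(2, 4, 2, 4))    # 36
print(n_fixed(5, 10, 5, 10))  # 63504
print(comb(100, 5))           # 75287520 ways to pick 5 of 100 measurements
print(n_all(2, 2))            # 9 = 4 + 2 + 2 + 1
```

The double sum grows roughly as 2^M * 2^L, which is why an exhaustive controllability analysis over all structures quickly becomes hopeless for plants with many candidate signals.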
10.5 RGA for non-square plant

A simple but effective screening tool for selecting inputs and outputs, which avoids the combinatorial problem just mentioned, is the relative gain array (RGA) of the "big" transfer matrix G_all, with all candidate inputs and outputs included: Λ_all = G_all x (G_all†)^T, where † denotes the pseudo-inverse and x denotes element-by-element multiplication. Essentially, for the case of many candidate manipulations (inputs), one may consider not using those manipulations corresponding to columns in the RGA where the sum of the elements is much smaller than 1 (Cao, 1995). Similarly, for the case of many candidate measured outputs (or controlled outputs), one may consider not using those outputs corresponding to rows in the RGA where the sum of the elements is much smaller than 1.

To see this, write the singular value decomposition of G_all as

  G_all = U Σ V^H = U_r Σ_r V_r^H   (10.9)

where Σ_r consists only of the r = rank(G_all) non-zero singular values, U_r consists of the r first columns of U, and V_r consists of the r first columns of V. Thus, V_r consists of the input directions with a non-zero effect on the outputs, and U_r consists of the output directions we can affect (reach) by use of the inputs. Let e_j = [0 ... 0 1 0 ... 0]^T be a unit vector with a 1 in position j and 0's elsewhere; then the j'th input is u_j = e_j^T u. Define e_i in a similar way, such that the i'th output is y_i = e_i^T y. We then have that e_j^T V_r yields the projection of a unit input u_j onto the effective input space of G, and we follow Cao (1995) and define

  Projection for input j:  ||e_j^T V_r||_2   (10.10)

which is a number between 0 and 1. Similarly, e_i^T U_r yields the projection of a unit output y_i onto the effective (reachable) output space of G, and we define

  Projection for output i:  ||e_i^T U_r||_2   (10.11)

which is also a number between 0 and 1. The following theorem links the input and output (measurement) projections to the column and row sums of the RGA.

Theorem 10.1 (RGA and input and output projections.) The i'th row sum of the RGA is equal to the square of the i'th output projection, and the j'th column sum of the RGA is equal to the square of the j'th input projection, i.e.

  sum_{j=1}^{m} λ_ij = ||e_i^T U_r||_2^2,   sum_{i=1}^{l} λ_ij = ||e_j^T V_r||_2^2   (10.12)

For a square non-singular matrix G, both the row and column sums in (10.12) are 1.
Proof: See Appendix A.

The RGA is a useful screening tool because it need only be computed once: it includes all the alternative inputs and/or outputs, and thus avoids the combinatorial problem. For the case of extra inputs, the RGA-values depend on the input scaling, and for extra outputs, on the output scaling; the variables must therefore be scaled prior to the analysis. From (10.12) we see that the row and column sums of the RGA provide a useful way of interpreting the information available in the singular vectors.

Example 10.2 Consider a plant with 2 inputs and 4 candidate outputs, of which we want to select 2, so there exist 6 combinations with 2 inputs and 2 outputs. By Theorem 10.1, the four row sums of the RGA-matrix of G_all equal the squared output projections, and to maximize the output projection we should select the two outputs with the largest row sums. For this selection, σ_min(G) is not much smaller than σ_min(G_all) with all outputs included, which shows that we have not lost much gain in the low-gain direction by using only 2 of the outputs.

Example 10.3 The following example shows that although the RGA is an efficient screening tool, it must be used with some caution. Consider a plant with 2 inputs and 6 candidate outputs, of which we want to select 2. Here the two outputs with the largest RGA row sums have almost parallel gain directions (both corresponding rows of G_all are close to [10 10]), and selecting them yields a plant G_1 which is ill-conditioned with large RGA-elements and is likely to be difficult to control. On the other hand, selecting outputs 1 and 3 yields a plant G_2 which is well-conditioned, even though its output projections are somewhat smaller. Thus, the RGA may be useful in providing an initial screening, but there are a large number of other factors that determine controllability, such as RHP-zeros, sensitivity to uncertainty, etc., and these must be taken into account when making the final selection.
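Theorem 10.1 is easy to verify numerically. The 3x2 candidate plant below is invented for illustration; for a tall full-column-rank matrix the pseudo-inverse is G† = (G^T G)^{-1} G^T, the RGA is the element-by-element product of G with (G†)^T, its column sums are 1, and its row sums are the squared output projections.

```python
# Illustrative sketch (not from the book): RGA of a hypothetical 3x2
# candidate plant, computed via the pseudo-inverse, checking the row and
# column sums of Theorem 10.1 without any linear-algebra library.

def pinv_tall(G):
    # Moore-Penrose pseudo-inverse of a full-column-rank l x 2 matrix:
    # G+ = (G^T G)^{-1} G^T, with the 2x2 inverse written out explicitly.
    a = sum(row[0] * row[0] for row in G)
    b = sum(row[0] * row[1] for row in G)
    c = sum(row[1] * row[1] for row in G)
    det = a * c - b * b
    inv = [[c / det, -b / det], [-b / det, a / det]]
    return [[sum(inv[i][k] * G[j][k] for k in range(2))
             for j in range(len(G))] for i in range(2)]

def rga(G):
    Gp = pinv_tall(G)
    # lambda_ij = g_ij * (G+)_ji  (element-by-element product with (G+)^T)
    return [[G[i][j] * Gp[j][i] for j in range(2)] for i in range(len(G))]

G = [[10.0, 10.0],
     [10.0,  9.0],
     [ 0.5,  1.0]]
L = rga(G)
row_sums = [sum(row) for row in L]   # squared output projections, each in [0, 1]
col_sums = [sum(L[i][j] for i in range(3)) for j in range(2)]
print([round(s, 4) for s in row_sums])
print([round(s, 4) for s in col_sums])   # both 1, since G has full column rank
```

The row sums add up to the rank (here 2), so outputs with row sums much smaller than 1 contribute little to the reachable output space, which is exactly the screening rule stated above.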
We discuss on page 435 the selection of extra measurements for use in a cascade control system.3 Consider a plant with ¾ inputs and candidate outputs of which we want to select ¾. From (10.12) we see that the row and column sums of the RGA provide a useful way of interpreting the information available in the singular vectors. However.
such a "big" (full) controller may not be desirable.

10.6 Control configuration elements

We now assume that the measurements, manipulations and controlled outputs are fixed. The available synthesis theories presented in this book result in a multivariable controller K which connects all available measurements/commands (v) with all available manipulations (u):

  u = K v   (10.13)

(the variables v will mostly be denoted y in the following). By control configuration selection we mean the partitioning of measurements/commands and manipulations within the control layer. More specifically, we define:

Control configuration. The restrictions imposed on the overall controller K by decomposing it into a set of local controllers (subcontrollers, units, elements, blocks) with predetermined links, and with a possibly predetermined design sequence where subcontrollers are designed locally.

In a conventional feedback system, a typical restriction on K is to use a one degree-of-freedom controller (so that we have the same controller for r and -y). Obviously, this limits the achievable performance compared to that of a two degrees-of-freedom controller. In other cases we may use a two degrees-of-freedom controller, but we may impose the restriction that the feedback part of the controller (K_y) is first designed locally for disturbance rejection, and then the prefilter (K_r) is designed for command tracking. In general this will limit the achievable performance compared to a simultaneous design (see also the remark on page 105). Similar arguments apply to other cascade schemes. In addition to restrictions on the structure of K, we may impose restrictions on the way, or rather in which sequence, the subcontrollers are designed. For most decomposed control systems we design the controllers sequentially, starting with the "fast" or "inner" or "lower-layer" control loops in the control hierarchy.

Some elements used to build up a specific control configuration are:

- cascade controllers
- decentralized controllers
- feedforward elements
- decoupling elements
- selectors

These are discussed in more detail below, and in the context of the process industry in Shinskey (1988) and Balchen and Mumme (1988). First, some definitions:

Decentralized control is when the control system consists of independent feedback controllers which interconnect a subset of the output measurements/commands with a subset of the manipulated inputs. These subsets should not be used by any other controller. This definition of decentralized control is consistent with its use by the control community. In decentralized control we may rearrange the ordering of measurements/commands and manipulated inputs such that the feedback part of the overall controller K in (10.13) has a fixed block-diagonal structure.

Cascade control is when the output from one controller is the input to another. This is broader than the conventional definition of cascade control, which is that the output from one controller is the reference command (setpoint) to another.

Feedforward elements link measured disturbances and manipulated inputs.

Decoupling elements link one set of manipulated inputs ("measurements") with another set of manipulated inputs. They are used to improve the performance of decentralized control systems, and are often viewed as feedforward elements (although this is not correct when we view the control system as a whole) where the "measured disturbance" is the manipulated input computed by another decentralized controller.

Selectors are used to select for control, depending on the conditions of the system, a subset of the manipulated inputs or a subset of the outputs.

The choice of control configuration leads to two different ways of partitioning the control system:

- Vertical decomposition. This usually results from a sequential design of the control system, e.g. based on cascading (series interconnecting) the controllers in a hierarchical manner; sequential design is sometimes also used in the design of decentralized control systems.
- Horizontal decomposition. This usually involves a set of independent decentralized controllers.

Remark 1 Sequential design of a decentralized controller results in a control system which is decomposed both horizontally (since K is diagonal) and vertically (since controllers at higher layers are tuned with lower-layer controllers in place).

Remark 2 Of course, a performance loss is inevitable if we decompose the control system. For example, if we select a poor configuration at the lower (base) control layer, then this may pose fundamental limitations on the achievable performance which cannot be overcome by advanced controller designs at higher layers. These limitations imposed by the lower-layer controllers may include RHP-zeros (see the aero-engine case study in Chapter 12) or strong interactions (see the distillation case study in Chapter 12, where the LV configuration yields large RGA-elements at low frequencies).

In this section, we discuss cascade controllers and selectors, and give some justification for using such "suboptimal" configurations rather than directly
Later, in Section 10.7, we discuss in more detail the hierarchical decomposition, including cascade control, partially controlled systems and sequential controller design. Finally, in Section 10.8 we consider decentralized diagonal control.

10.6.1 Cascade control systems

We want to illustrate how a control system which is decomposed into subcontrollers can be used to solve multivariable control problems. For simplicity, we here use single-input single-output (SISO) controllers of the form

    u_i = K_i(s)(r_i - y_i)    (10.14)

where K_i(s) is a scalar. Note that whenever we close a SISO control loop we lose the corresponding input, u_i, as a degree of freedom, but at the same time the reference, r_i, becomes a new degree of freedom.

It may look like it is not possible to handle non-square systems with SISO controllers. However, since the input to the controller in (10.14) is a reference minus a measurement, we can cascade controllers to make use of extra measurements or extra inputs. A cascade control structure results when either of the following two situations arise:

- The reference r_i is an output from another controller (typically used for the case of an extra measurement y_i), see Figure 10.4(a). This is conventional cascade control.
- The "measurement" y_i is an output from another controller (typically used for the case of an extra manipulated input u_j; see Figure 10.4(b), where u_2 is the "measurement" for controller K_1). This cascade scheme is referred to as input resetting.

10.6.2 Cascade control: Extra measurements

In many cases we make use of extra measurements y_2 (secondary outputs) to provide local disturbance rejection and linearization, or to reduce the effect of measurement noise. For example, velocity feedback is frequently used in mechanical systems, and local flow cascades are used in process systems. Let u be the manipulated input, y_1 the controlled output (with an associated control objective r_1) and y_2 the extra measurement.

Centralized (parallel) implementation. A centralized implementation u = K(r - y), where K is a 2-input-1-output controller, may be written

    u = K_11(s)(r_1 - y_1) + K_12(s)(r_2 - y_2)    (10.15)
[Figure 10.4: Cascade implementations. (a) Extra measurements y_2 (conventional cascade control); (b) extra inputs u_2 (input resetting).]

where in most cases r_2 = 0 (since we do not have a degree of freedom to control y_2).

Cascade implementation (conventional cascade control). To obtain an implementation with two SISO controllers we may cascade the controllers as illustrated in Figure 10.4(a):

    r_2 = K_1(s)(r_1 - y_1)    (10.16)
    u_2 = K_2(s)(r_2 - y_2)    (10.17)

Note that the output r_2 from the slower primary controller K_1 is not a manipulated plant input, but rather the reference input to the faster secondary (or slave) controller K_2. For example, cascades based on measuring the actual manipulated variable (in which case y_2 = u_m) are commonly used to reduce uncertainty and nonlinearity at the plant input.

With r_2 = 0 in (10.15), the relationship between the centralized and cascade implementations is K_11 = K_2 K_1 and K_12 = K_2.

Remark. An advantage with the cascade implementation is that it more clearly decouples the design of the two controllers. It also allows for integral action in both loops (whereas usually only K_11 should have integral action in (10.15)), and it shows more clearly that r_2 is not a degree of freedom at higher layers in the control system. On the other hand, a centralized implementation is better suited for direct multivariable synthesis; see the velocity feedback for the helicopter case study in Section 12.2.

In the general case y_1 and y_2 are not directly related to each other, and this is sometimes referred to as parallel cascade control.
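The algebraic relationship between the two implementations can be checked numerically. The sketch below uses arbitrary static controller gains (assumed illustrative values, not from the text) and verifies that, with r_2 = 0, the cascade form and the centralized form with K_11 = K_2 K_1 and K_12 = K_2 compute the same plant input:

```python
# Static (steady-state) sketch: with r2 = 0, the cascade implementation
# r2 = K1*(r1 - y1); u = K2*(r2 - y2) equals the centralized form
# u = K11*(r1 - y1) + K12*(0 - y2) with K11 = K2*K1 and K12 = K2.
K1, K2 = 2.0, 5.0            # illustrative scalar gains (assumed values)
r1, y1, y2 = 1.0, 0.3, 0.1   # arbitrary operating point

# cascade implementation
r2 = K1 * (r1 - y1)
u_cascade = K2 * (r2 - y2)

# centralized implementation with r2 = 0
K11, K12 = K2 * K1, K2
u_central = K11 * (r1 - y1) + K12 * (0.0 - y2)

assert abs(u_cascade - u_central) < 1e-12
print(u_cascade)  # -> 6.5 (both forms give the same input)
```

The same identity holds with dynamic K_1(s) and K_2(s), since the derivation is purely algebraic.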
However, it is common to encounter the situation in Figure 10.5, where the primary output y_1 depends directly on the extra measurement y_2, so that y_2 = G_2 u_2 + d_2 and y_1 = G_1 y_2 + d_1. This is a special case of Figure 10.4(a), with "Plant" being the series interconnection of G_2 and G_1, and it is considered further in Example 10.1.

Exercise 10.1 Conventional cascade control. With reference to the special (but common) case of conventional cascade control shown in Figure 10.5, Morari and Zafiriou (1989) conclude that the use of extra measurements is useful under the following circumstances:

(a) The disturbance d_2 is significant and G_1 is non-minimum phase.
(b) The plant G_2 has considerable uncertainty associated with it (e.g. a poorly known nonlinear behaviour) and the inner loop serves to remove the uncertainty.

In terms of design, they recommend that K_2 is first designed to minimize the effect of d_2 on y_1 (with K_1 = 0), and then K_1 is designed to minimize the effect of d_1 on y_1. We want to derive conclusions (a) and (b) from an input-output controllability analysis, and also (c) explain why we may choose to use cascade control if we want to use simple controllers (even with d_2 = 0).

Outline of solution: (a) Note that if G_1 is minimum phase, then the input-output controllability of G_2 and G_1 G_2 are in theory the same, and for rejecting d_2 there is no fundamental advantage in measuring y_1 rather than y_2. (b) The inner loop L_2 = G_2 K_2 removes the uncertainty if it is sufficiently fast (high-gain feedback) and yields a transfer function (I + L_2)^{-1} L_2 close to I at frequencies where K_1 is active. (c) In most cases, such as when PID controllers are used, the practical bandwidth is limited by the frequency w_u where the phase of the plant is -180 degrees (see Section 5.12), so an inner cascade loop may yield faster control (for rejecting d_1 and tracking r_1) if the phase of G_2 is less than that of G_1 G_2.

Exercise 10.2 To illustrate the benefit of using inner cascades for high-order plants, consider Figure 10.5 and let

    G_1 = 1/(s+1)^2,   G_2 = 1/(s+1)

We use a fast proportional controller K_2 in the inner loop,
whereas a somewhat slower PID controller

    K_1(s) = K (s+1)^2 / (s (0.1 s + 1))

is used in the outer loop. Sketch the closed-loop response. What is the bandwidth for each of the two loops?

[Figure 10.5: Common case of cascade control where the primary output y_1 depends directly on the extra measurement y_2: y_2 = G_2 u_2 + d_2 and y_1 = G_1 y_2 + d_1.]
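To see numerically why the fast inner loop in Exercise 10.2 helps, one can evaluate the inner complementary sensitivity (I + L_2)^{-1} L_2. The sketch below assumes G_2 = 1/(s+1) and a proportional gain K_2 = 25; the gain is an assumed illustrative value, since the exercise leaves it unspecified:

```python
import numpy as np

# Frequency-response sketch for the inner loop of Exercise 10.2.
K2 = 25.0                      # assumed illustrative inner-loop gain
w = np.logspace(-1, 3, 9)      # frequencies in rad/s
s = 1j * w
G2 = 1.0 / (s + 1.0)
L2 = G2 * K2                   # inner loop transfer function
T2 = L2 / (1.0 + L2)           # (I + L2)^-1 L2 for the SISO case

for wi, Ti in zip(w, np.abs(T2)):
    print(f"w = {wi:8.2f} rad/s   |T2| = {Ti:.3f}")
# |T2| stays close to 1 up to roughly the inner-loop bandwidth (~K2 rad/s),
# which is why the fast inner loop makes r2 -> y2 look approximately like a
# unity gain to the slower outer controller K1.
```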
Compare this with the case where we only measure y_1, and use a PID controller K(s) with the same dynamics as K_1(s) but with a smaller value of K. What is the achievable bandwidth? Find a reasonable value for K (starting with K = 1) and sketch the closed-loop response (you will see that it is about a factor 5 slower without the inner cascade).

10.6.3 Cascade control: Extra inputs

In some cases we have more manipulated inputs than controlled outputs. These may be used to improve control performance. Consider a plant with a single controlled output y and two manipulated inputs u_1 and u_2. Sometimes u_2 is an extra input which can be used to improve the fast (transient) control of y, but if it does not have sufficient power or is too costly to use for long-term control, then after a while it is reset to some desired value ("ideal resting value").

Centralized (parallel) implementation. A centralized implementation u = K(r - y), where K is a 1-input 2-output controller, may be written

    u_1 = K_11(s)(r - y),   u_2 = K_21(s)(r - y)    (10.18)

Here two inputs are used to control one output. To get a unique steady-state for the inputs u_1 and u_2, we usually let K_11 have integral control whereas K_21 does not. Then u_2(t) will only be used for transient (fast) control and will return to zero (or more precisely to its desired value r_u2) as t goes to infinity.

Cascade implementation (input resetting). To obtain an implementation with two SISO controllers we may cascade the controllers as shown in Figure 10.4(b). We again let input u_2 take care of the fast control and u_1 of the long-term control. The fast control loop is then

    u_2 = K_2(s)(r - y)    (10.19)

The objective of the other, slower controller is then to use input u_1 to reset input u_2 to its desired value r_u2:

    u_1 = K_1(s)(r_u2 - y_1)  with  y_1 = u_2    (10.20)

and we see that the output from the fast controller K_2 is the "measurement" for the slow controller K_1.

With r_u2 = 0, the relationship between the centralized and cascade implementations is K_11 = K_1 K_2 and K_21 = K_2. The cascade implementation again has the advantage of decoupling the design of the two controllers. It also shows more clearly that r_u2, the reference for u_2, may be used as a degree of freedom at higher layers in the control system. Finally, we can have integral action in both K_1 and K_2, but note that the gain of K_1 should be negative (if the effects of u_1 and u_2 on y are both positive).
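A minimal time-domain sketch of input resetting, assuming a first-order plant dy/dt = -y + u_1 + u_2 and simple assumed controller gains (none of these numbers come from the book): the fast proportional controller acts through u_2, while a slow integral controller moves u_1 until u_2 is reset to r_u2 = 0.

```python
# Euler-integration sketch of input resetting (Figure 10.4(b)).
K2 = 20.0        # fast proportional controller: u2 = K2 (r - y)
kI = 0.5         # slow integral controller resetting u2 to r_u2
r, r_u2 = 1.0, 0.0
dt, T = 1e-3, 40.0

y, u1 = 0.0, 0.0
for _ in range(int(T / dt)):
    u2 = K2 * (r - y)
    # gain of K1 is effectively negative: u1 ramps up while u2 > r_u2,
    # so the slow input gradually takes over the load from the fast one
    u1 += dt * kI * (u2 - r_u2)
    y += dt * (-y + u1 + u2)

print(round(y, 3), round(u1, 3), round(u2, 3))
# at steady-state y -> r, u1 carries the load, and u2 -> r_u2 = 0
```

This reproduces the behaviour described above: u_2 acts only transiently and returns to its ideal resting value, while u_1 provides the long-term control.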
[Figure 10.6: Control configuration with two layers of cascade control.]

Remark 1 Typically, the controllers in a cascade system are tuned one at a time starting with the fast loop. For example, for the control system in Figure 10.6 we would probably tune the three controllers in the order K_2 (inner cascade using fast input), K_3 (input resetting using slower input), and K_1 (final adjustment of y_1).

Remark 2 In process control, the cascade implementation of input resetting is sometimes referred to as valve position control, because the extra input u_2, usually a valve, is reset to a desired position by the outer cascade.

Exercise 10.3 Draw the block diagrams for the two centralized (parallel) implementations corresponding to Figure 10.4.

Exercise 10.4 Derive the closed-loop transfer functions for the effect of r on y, u_1 and u_2 for the cascade input resetting scheme in Figure 10.4(b). As an example use G = [1 1] (so y = u_1 + u_2), and use integral action in both controllers, K_1 = 1/s and K_2 = 10/s. Show that input u_2 is reset at steady-state.

Example 10.4 Two layers of cascade control. Consider the system in Figure 10.6 with two manipulated inputs (u_2 and u_3), one controlled output (y_1, which should be close to r_1) and two measured variables (y_1 and y_2). Input u_2 has a more direct effect on y_1 than does input u_3 (there is a large delay in G_3(s)). Input u_2 should only be used for transient control, as it is desirable that it remains close to r_3 = r_u2. The extra measurement y_2 is closer than y_1 to the input u_2 and may be useful for detecting disturbances (not shown) affecting G_1. In Figure 10.6, controllers K_1 and K_2 are cascaded in a conventional manner, whereas controllers K_2 and K_3 are cascaded to achieve input resetting. The corresponding equations are

    u_1 = K_1(s)(r_1 - y_1)    (10.21)
    u_2 = K_2(s)(r_2 - y_2),   r_2 = u_1    (10.22)
    u_3 = K_3(s)(r_3 - y_3),   y_3 = u_2    (10.23)

Controller K_1 controls the primary output y_1 at its reference r_1 by adjusting the "input" u_1, which is the reference value for y_2. Controller K_2 controls the secondary output y_2 using input u_2. Finally, controller K_3 manipulates u_3 slowly in order to reset input u_2 to its desired value r_3.

Exercise 10.5 Process control application. A practical case of a control system like the one in Figure 10.6 is in the use of a preheater to keep the reactor temperature y_1 at a given value r_1. In this case, y_2 may be the outlet temperature from the preheater, u_2 the bypass flow (which should be reset to r_3, say 10% of the total flow), and u_3 the flow of heating medium (steam). Make a process flowsheet with instrumentation lines (not a block diagram) for this heater/reactor process.

10.6.4 Extra inputs and outputs (local feedback)

In many cases, performance may be improved with local feedback loops involving extra manipulated inputs and extra measurements. However, the improvement must be traded off against the cost of the extra actuator, measurement and control system. An example where local feedback is required to counteract the effect of high-order lags is given for a neutralization process in Figure 5.24 on page 208. The use of local feedback is also discussed by Horowitz (1991).

10.6.5 Selectors

Split-range control for extra inputs. We assumed above that the extra input is used to improve dynamic performance. Another situation is when input constraints make it necessary to add a manipulated input. In this case the control range is often split such that, for example, u_1 is used for control when y lies in the range [y_min, y_1], and u_2 is used when y lies in [y_1, y_max].

Selectors for too few inputs. A completely different situation occurs if there are too few inputs. Consider the case with one input (u) and several outputs (y_1, y_2, ...). In this case, we cannot control all the outputs independently, so we either need to control all the outputs in some average manner, or we need to make a choice about which outputs are the most important to control. Selectors or logic switches are often used for the latter. Auctioneering selectors are used to decide to control one of several similar outputs; for example, this may be used to adjust the heat input (u) to keep the maximum temperature (max_i y_i) in a fired heater below some value. Override selectors are used when several controllers compute the input value, and we select the smallest (or largest) as the input; for example, this is used in a heater where the heat input (u) normally controls temperature (y_1), except when the pressure (y_2) is too large and pressure control takes over.
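The two selector types can be sketched in a few lines of logic. The controller laws are simplified to proportional terms and all function names and numerical values below are illustrative assumptions:

```python
# Sketch of the two selector types described above (assumed example values).
def auctioneering(temps, t_max, u_nominal, kp=0.05):
    """Control the highest of several similar outputs (fired-heater style):
    cut the heat input when the maximum temperature exceeds its limit."""
    worst = max(temps)
    if worst > t_max:
        return u_nominal - kp * (worst - t_max)
    return u_nominal

def override(u_temperature, u_pressure):
    """Override selector: normally the temperature controller sets the heat
    input, but the pressure controller takes over when it demands less."""
    return min(u_temperature, u_pressure)

print(auctioneering([510.0, 540.0, 525.0], t_max=530.0, u_nominal=1.0))  # -> 0.5
print(override(u_temperature=0.8, u_pressure=0.6))                       # -> 0.6
```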
10.6.6 Why use cascade and decentralized control?

As is evident from Figure 10.6, decomposed control configurations can easily become quite complex and difficult to maintain and understand. It may therefore be both simpler and better in terms of control performance to set up the controller design problem as an optimization problem and let the computer do the job, resulting in a centralized multivariable controller as used in other chapters of this book.

If this is the case, why is cascade and decentralized control used in practice? There are a number of reasons, but the most important one is probably the cost associated with obtaining good plant models, which are a prerequisite for applying multivariable control. On the other hand, with cascade and decentralized control each controller is usually tuned one at a time with a minimum of modelling effort, sometimes even on-line by selecting only a few parameters (e.g. the gain and integral time constant of a PI-controller). A fundamental reason for applying cascade and decentralized control is thus to save on modelling effort. Other advantages of cascade and decentralized control include the following: they are often easier to understand by operators; their tuning parameters have a direct and "localized" effect; they reduce the need for control links and allow for decentralized implementation; and they tend to be less sensitive to uncertainty, for example, in the input channels. The issue of simplified implementation and reduced computational load is also important in many applications, but is becoming less relevant as the cost of computing power is reduced.

Why do we need a theory for cascade and decentralized control? We just argued that the main advantage of decentralized control was its saving on the modelling effort, but any theoretical treatment of decentralized control requires a plant model. This seems to be a contradiction. However, even though we may not want to use a model to tune the controllers, we may still want to use a model to decide on a control structure and to decide on whether acceptable control with a decentralized configuration is possible.

Since cascade and decentralized control systems depend more strongly on feedback rather than models as their source of information, it is usually more important (relative to centralized multivariable control) that the fast control loops be tuned to respond quickly. To be able to tune the controllers independently, we must require that the loops interact only to a limited extent. For example, one desirable property is that the steady-state gain from u_i to y_i in an "inner" loop (which has already been tuned) does not change too much as outer loops are closed. The main challenge is thus to find a control configuration which allows the (sub)controllers to be tuned independently based on a minimum of model information (the pairing problem). For decentralized diagonal control, the RGA is a useful tool for addressing this pairing problem. For industrial problems, the number of possible pairings is usually very high, but in most cases physical insight and simple tools, such as the RGA, can be helpful in reducing the number of alternatives to a manageable number.
10.7 Hierarchical and partial control

A hierarchical control system results when we design the subcontrollers in a sequential manner, usually starting with the fast loops ("bottom-up"). This means that the controller at some higher layer in the hierarchy is designed based on a partially controlled plant. In this section, we derive transfer functions for partial control, and provide some guidelines for designing hierarchical control systems.

10.7.1 Partial control

Partial control involves controlling only a subset of the outputs for which there is a control objective. We divide the outputs into two classes:

- y_1 - (temporarily) uncontrolled output (for which there is an associated control objective)
- y_2 - (locally) measured and controlled output

We also subdivide the available manipulated inputs in a similar manner:

- u_2 - inputs used for controlling y_2
- u_1 - remaining inputs (which may be used for controlling y_1)

We have inserted the word "temporarily" above, since y_1 is normally a controlled output at some higher layer in the hierarchy. However, we here consider the partially controlled system as it appears after having implemented only a local control system, where u_2 is used to control y_2. Four applications of partial control are:

1. Sequential design of decentralized controllers. The outputs y (which include y_1 and y_2) all have an associated control objective, and we use a hierarchical control system. We first design a controller K_2 to control the subset y_2. With this controller K_2 in place (a partially controlled system), we may then design a controller K_1 for the remaining outputs. In most of the development that follows we assume that the outputs y_2 are tightly controlled.

2. Sequential design of conventional cascade control. The outputs y_2 are additional measured ("secondary") variables which are not important variables in themselves. The reason for controlling y_2 is to improve the control of y_1. The references r_2 are used as degrees of freedom for controlling y_1, so the set u_1 is often empty. The modelling effort in this case is less, because the model may be of a more "generic" nature and does not need to be modified for each particular application.

3. "True" partial control. The outputs y (which include y_1 and y_2) all have an associated control objective, and we consider whether, by controlling only the subset y_2, we can indirectly achieve acceptable control of y_1. The outputs y_1 remain uncontrolled and the set u_1 remains unused. One difficulty is that this requires a separate analysis for each choice of y_2 and u_2, and the number of alternatives has a combinatorial growth, as illustrated by (10.8).

4. Indirect control. The outputs y_1 have an associated control objective, but they are not measured. Instead, we aim at indirectly controlling y_1 by controlling the "secondary" measured variables y_2 (which have no associated control objective). The references r_2 are used as degrees of freedom and the set u_1 is empty. This is similar to cascade control, but there is no "outer" loop involving y_1. Indirect control was discussed in Section 10.4.

The following table shows more clearly the difference between the four applications of partial control:

                                       Measurement and        Control objective
                                       control of y_1?        for y_2?
  1. Sequential decentralized control  Yes                    Yes
  2. Sequential cascade control        Yes                    No
  3. "True" partial control            No                     Yes
  4. Indirect control                  No                     No

The four problems are closely related. In all cases there is a control objective associated with y_1 and a feedback loop involving measurement and control of y_2, and in all cases we (1) want the effect of the disturbances on y_1 to be small (when y_2 is controlled), and (2) want it to be easy to control y_2 using u_2 (dynamically).

Let us derive the transfer functions for y_1 when y_2 is controlled. By partitioning the inputs and outputs, the overall model y = G u + G_d d may be written

    y_1 = G_11 u_1 + G_12 u_2 + G_d1 d    (10.24)
    y_2 = G_21 u_1 + G_22 u_2 + G_d2 d    (10.25)

Assume now that feedback control u_2 = K_2(r_2 - y_2 - n_2) is used for the "secondary" subsystem involving u_2 and y_2, see Figure 10.7. By eliminating u_2 and y_2, we then get the following model for the resulting partially controlled system:

    y_1 = [G_11 - G_12 K_2 (I + G_22 K_2)^{-1} G_21] u_1
        + [G_d1 - G_12 K_2 (I + G_22 K_2)^{-1} G_d2] d
        + G_12 K_2 (I + G_22 K_2)^{-1} (r_2 - n_2)    (10.26)

[Figure 10.7: Partial control.]
Remark. (10.26) may be rewritten in terms of linear fractional transformations. For example, the transfer function from u_1 to y_1 is

    F_l(G, -K_2) = G_11 - G_12 K_2 (I + G_22 K_2)^{-1} G_21    (10.27)

In many cases the set u_1 is empty (there are no extra inputs).

Tight control of y_2. In some cases we can assume that the control of y_2 is fast compared to the control of y_1. To obtain the model we may formally let K_2 approach infinity in (10.26), but it is better to solve for u_2 in (10.25):

    u_2 = -G_22^{-1} G_21 u_1 - G_22^{-1} G_d2 d + G_22^{-1} y_2

We have here assumed that G_22 is square and invertible; otherwise we can get the least-squares solution by replacing G_22^{-1} by the pseudo-inverse. For the case of tight control we have y_2 = r_2 - n_2, i.e. the control error e_2 equals the measurement error (noise) n_2. On substituting this into (10.24) we get

    y_1 = (G_11 - G_12 G_22^{-1} G_21) u_1 + (G_d1 - G_12 G_22^{-1} G_d2) d + G_12 G_22^{-1} (r_2 - n_2)    (10.28)

where we define the three terms

    P_u = G_11 - G_12 G_22^{-1} G_21,   P_d = G_d1 - G_12 G_22^{-1} G_d2,   P_r = G_12 G_22^{-1}

Here P_d is called the partial disturbance gain, which is the disturbance gain for a system under perfect partial control, and P_u is the effect of u_1 on y_1 with y_2 perfectly controlled. The advantage of the model (10.28) over (10.26) is that it is independent of K_2, but we stress that it only applies at frequencies where y_2 is tightly controlled.
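The quantities P_u, P_d and P_r in (10.28) are easy to compute numerically, and one can verify that the general model (10.26) approaches them as the gain of K_2 becomes large. The plant data below are arbitrary illustrative numbers, not taken from the book:

```python
import numpy as np

# Numeric sketch of the tight-control model (10.28): for a large static K2
# the partially controlled model (10.26) approaches Pu, Pd (and Pr).
G11, G12 = np.array([[1.0]]), np.array([[0.8]])
G21, G22 = np.array([[0.5]]), np.array([[2.0]])
Gd1, Gd2 = np.array([[3.0]]), np.array([[1.5]])

I = np.eye(1)
inv = np.linalg.inv

# (10.28): perfect control of y2
Pu = G11 - G12 @ inv(G22) @ G21
Pd = Gd1 - G12 @ inv(G22) @ Gd2
Pr = G12 @ inv(G22)

# (10.26) with a large static K2 should give (nearly) the same gains
K2 = np.array([[1e6]])
M = G12 @ K2 @ inv(I + G22 @ K2)
Pu26 = G11 - M @ G21
Pd26 = Gd1 - M @ Gd2

print(Pu, Pd, Pr)  # Pu = 0.8, Pd = 2.4, Pr = 0.4 for these numbers
assert np.allclose(Pu, Pu26, atol=1e-5) and np.allclose(Pd, Pd26, atol=1e-5)
```

The same code applies element-for-element when the blocks are genuine matrices, which is the screening computation suggested later for candidate choices of y_2 and u_2.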
Remark. Relationships similar to those given in (10.28) have been derived by many authors; see, for example, the work of Manousiouthakis et al. (1986) on block relative gains and the work of Haggblom and Waller (1988) on distillation control configurations.

Remark. The high-frequency dynamics of the models of the partially controlled plant (e.g. P_d, P_u and P_r) may be simplified if K_1 is mainly effective at lower frequencies.

10.7.2 Hierarchical control and sequential design

A hierarchical control system arises when we apply a sequential design procedure to a cascade or decentralized control system. The idea is to first implement a local lower-layer (or inner) control system for controlling the outputs y_2. Next, with this lower-layer control system in place, we design a controller K_1 to control y_1. The appropriate model for designing K_1 is given by (10.26) (for the general case) or (10.28) (for the case when we can assume y_2 perfectly controlled).

The objectives for this hierarchical decomposition may vary:

1. To allow for simple or even on-line tuning of the lower-layer control system (K_2).
2. To allow the use of longer sampling intervals for the higher layers (K_1).
3. To allow simple models when designing the higher-layer control system (K_1).
4. To "stabilize"(1) the plant using a lower-layer control system (K_2) such that it is amenable to manual control.

The latter is the case in many process control applications, where we first close a number of faster "regulatory" loops in order to "stabilize" the plant. The higher-layer control system (K_1) is then used mainly for optimization purposes, and is not required to operate the plant.

Based on these objectives, Hovd and Skogestad (1993) proposed some criteria for selecting u_2 and y_2 for use in the lower-layer control system:

1. The lower layer must quickly implement the setpoints computed by the higher layers; that is, the input-output controllability of the subsystem involving use of u_2 to control y_2 should be good (consider G_22 and G_d2).
2. The control of y_2 using u_2 should provide local disturbance rejection; that is, it should minimize the effect of disturbances on y_1 (consider P_d for y_2 tightly controlled).
3. The control of y_2 using u_2 should not impose unnecessary control limitations on the remaining control problem, which involves using u_1 and/or r_2 to control y_1. By "unnecessary" we mean limitations (RHP-zeros, ill-conditioning, etc.) that did not exist in the original overall problem involving u and y.

(1) The terms "stabilize" and "unstable" as used by process operators may not refer to a plant that is unstable in a mathematical sense, but rather to a plant that is sensitive to disturbances and which is difficult to control manually.
These three criteria are important for selecting control configurations for distillation columns, as is discussed in the next example.

Example 10.5 Control configurations for distillation columns. The overall control problem for the distillation column in Figure 10.8 has 5 inputs

    u = (L, V, D, B, V_T)

(these are all flows: reflux L, boilup V, distillate D, bottom flow B, overhead vapour V_T) and 5 outputs

    y = (y_D, x_B, M_D, M_B, p)

(these are compositions and inventories: top composition y_D, bottom composition x_B, condenser holdup M_D, reboiler holdup M_B, pressure p); see Figure 10.8. This problem usually has no inherent control limitations caused by RHP-zeros, but the plant has poles in or close to the origin and needs to be stabilized. Another complication is that composition measurements are often expensive and unreliable.

[Figure 10.8: Typical distillation column controlled with the LV configuration.]

In most cases, the distillation column is first stabilized by closing three decentralized SISO loops for level and pressure, so

    y_2 = (M_D, M_B, p)

and the remaining outputs are

    y_1 = (y_D, x_B)

The three SISO loops for controlling y_2 usually interact weakly and may be tuned independently of each other. However, since each level (tank) has an inlet and two outlet flows, there exists many possible choices for u_2 (and thus for u_1). By convention, each choice ("configuration") is named by the inputs u_1 left for composition control. For example, the "LV configuration" used in many examples in this book refers to a partially controlled system where we use u_1 = (L, V) to control y_1 (and we assume that there is a control system in place which uses u_2 = (D, B, V_T) to control y_2). The LV configuration is good from the point of view that the control of y_1 using u_1 is nearly independent of the tuning of the controller K_2 involving y_2 and u_2. However, the problem of controlling y_1 by u_1 ("plant" P_u) is often strongly interactive, with large steady-state RGA-elements in P_u (for high-purity separations the RGA-matrix may have some large elements).

Another configuration is the DV configuration, where u_1 = (D, V) and thus u_2 = (L, B, V_T). In this case, the steady-state interactions from u_1 to y_1 are generally much less, and P_u has small RGA-elements. But the model (10.26) now depends strongly on K_2 (i.e. on the tuning of the level loops), and a slow level loop for M_D may introduce unfavourable dynamics for the response from u_1 to y_1.

With five inputs there are 10 alternative configurations. Furthermore, one often allows for the possibility of using ratios between flows as possible degrees of freedom in u_1, and this sharply increases the number of alternatives.

To select a good distillation control configuration, one should first consider the problem of controlling levels and pressure (y_2). Here it is important to note that it is often not desirable to have tight level loops, and some configurations, like the DV configuration mentioned above, are sensitive to the tuning of K_2. This eliminates a few alternatives, so the final choice is based on the 2 x 2 composition control problem (y_1). Important issues to consider then are disturbance sensitivity (the partial disturbance gain P_d should be small) and the interactions (the RGA-elements of P_u). If y_2 is tightly controlled, then none of the configurations seem to yield RHP-zeros in P_u. These issues are discussed by, for example, Waller et al. (1988) and Skogestad et al. (1990), and further in Skogestad (1997).

Expressions which directly relate the models for the various configurations, e.g. relationships between P_u^LV, P_d^LV and P_u^DV, P_d^DV, etc., are given in Haggblom and Waller (1988) and Skogestad and Morari (1987a). However, if the level loops are not tightly tuned, the expressions for P_u and P_d used in these references may not apply, and it may be simpler to start from the overall 5 x 5 model G and derive the models for the configurations using (10.26) or (10.28); see also the MATLAB file on page 501.
Because of the problems of interactions and the high cost of composition measurements, we often find in practice that only one of the two product compositions is controlled ("true" partial control). This is discussed in detail in Example 10.7 below. Another common solution is to make use of additional temperature measurements from the column, where their reference values are set by a composition controller in a cascade manner.

In summary, the overall 5 x 5 distillation control problem is solved by first designing a 3 x 3 controller K_2 for levels and pressure, and then designing a 2 x 2 controller K_1 for the composition control. This is then a case of (block) decentralized control, where the controller blocks K_1 and K_2 are designed sequentially (in addition, the blocks K_1 and K_2 may themselves be decentralized).

Sequential design of cascade control systems

Sequential design is also used for the design of cascade control systems. Consider the conventional cascade control system in Figure 10.4(a), where we have additional "secondary" measurements y_2 with no associated control objective, and the objective is to improve the control of the primary outputs y_1 by locally controlling y_2.

From (10.28), it follows that we should select secondary measurements y_2 (and inputs u_2) such that P_d is small and at least smaller than G_d1. More precisely, we want the input-output controllability of the "plant" [P_u P_r] (or P_r if the set u_1 is empty), with disturbance model P_d, to be better than that of the plant [G_11 G_12] (or G_12) with disturbance model G_d1. In particular, these arguments apply at higher frequencies. Most of the arguments given earlier, for the selection of controlled outputs and for the separation into an optimization and a control layer, apply to cascade control if the term "optimization layer" is replaced by "primary controller", and "control layer" is replaced by "secondary controller".

Remark. Note that P_r is the "new" plant as it appears with the inner loop closed.

Exercise 10.6 The block diagram in Figure 10.5 shows a cascade control system where the primary output y_1 depends directly on the extra measurement y_2, so G_12 = G_1 G_2, G_22 = G_2, G_d1 = [I G_1] and G_d2 = [0 I]. Show that P_d = [I 0] and P_r = G_12 G_22^{-1} = G_1, and discuss the result.
The idea is that this should reduce the effect of disturbances and uncertainty on y_1.
28).7. There may also be changes in Ö ¾ (of magnitude Ê ¾ ) which may be regarded as disturbances on the uncontrolled outputs Ý ½ . as expressed by the elements in the matrix ½¾ ½ Ê¾ . consider the effect of disturbances on the uncontrolled output(s) Ý ½ as given by (10. Then Ù ½ ½ .25) and compute È using (10. The effect of a disturbance on the uncontrolled output Ý is Ý½ and Ù Ý¾ ¾ È where “Ù outputs ÝÐ Ý Ù ¼ ÝÐ ¼ ½ ½ (10. Ù we may directly evaluate the partial disturbance gain based on the overall model Ý Ù· . are less than 1 in magnitude at all frequencies.¿ ÅÍÄÌÁÎ ÊÁ Ä Ã ÇÆÌÊÇÄ 10. Therefore. as an alternative to rearranging Ý into into Ù½ for each candidate control conﬁguration and computing È from (10. Suppose all variables have been scaled as discussed in Section 1. ¾¾ One uncontrolled output and one unused input.4. rearrange the system as in (10. Then we have that: ¯ A set of outputs Ý½ may be left uncontrolled only if the effects of all disturbances on Ý½ . actuators and control links cost money.28). Set ÝÐ ¼ for all Ð .28). Proof of (10. we also have that: ¯ A set of outputs Ý½ may be left uncontrolled only if the effects of all reference changes in the controlled outputs (Ý ¾ ) on Ý½ . From (10. In this case. This “true” partial control may be possible in cases where the outputs are correlated such that controlling the outputs Ý ¾ indirectly gives acceptable control of Ý½ .3 “True” partial control We here consider the case where we attempt to leave a set of primary outputs Ý ½ uncontrolled.24) and (10.29): The proof is from Skogestad and Wolff (1992). to evaluate the feasibility of partial control one must for each choice of controlled outputs (Ý ¾ ) and corresponding inputs (Ù ¾ ). are less than ½ in magnitude at all frequencies. “True” partial control is often considered if we have an Ñ ¢ Ñ plant ´×µ where acceptable control of all Ñ outputs is difﬁcult. Rewrite Ý ½ Ý ½ ½ Ý as Ù . 
Consider a square plant $G$, and suppose we leave one input $u_j$ unused and one output $y_i$ uncontrolled, while the remaining inputs are used to give perfect control of the remaining outputs. The effect of a disturbance $d_k$ on the uncontrolled output $y_i$ is then

$\left(\dfrac{\partial y_i}{\partial d_k}\right)_{u_j=0,\; y_l=0\,(l\neq i)} = \dfrac{[G^{-1}G_d]_{jk}}{[G^{-1}]_{ji}}$    (10.29)

where "$u_j = 0,\ y_l = 0$" means that input $u_j$ is constant (unused) and the remaining outputs $y_l$, $l\neq i$, are constant (perfectly controlled).

Proof of (10.29): The proof is from Skogestad and Wolff (1992). Rewrite $y = Gu + G_d d$ as $u = G^{-1}y - G^{-1}G_d\, d$. Set $u_j = 0$ and $y_l = 0$ for all $l\neq i$; the $j$'th equation then gives $0 = [G^{-1}]_{ji}\,y_i - [G^{-1}G_d]_{jk}\,d_k$, and (10.29) follows. The result may also be derived from the definition of $P_d$ in (10.28) by making use of the Schur complement, after first reordering $G$ such that the upper left subsystem contains the uncontrolled and unused variables. Equation (10.29) may be generalized to the case with several uncontrolled outputs / unused inputs (Zhao and Skogestad, 1997).

From (10.29) we derive direct insight into how to select the uncontrolled output and unused input:

1. Keep an output uncontrolled if it is insensitive to changes in the unused input with the other outputs controlled. That is, select the uncontrolled output $y_i$ and unused input $u_j$ such that the $ji$'th element in $G^{-1}$ is large.
2. Keep an input constant (unused) if its desired change is small. That is, select the unused input $u_j$ such that the $j$'th row in $G^{-1}G_d$ has small elements.

Example 10.6 Consider the FCC process in Exercise 6.16 on page 250, with the steady-state matrices $G(0)$ and $G_d(0)$ given there, where we want to leave one input unused and one output uncontrolled. At steady-state ($s=0$) we evaluate $G^{-1}$ and $G^{-1}G_d$. Since all elements in the third row of $G^{-1}$ are large, it seems reasonable to let input $u_3$ be unused. However, since the row elements in $G^{-1}G_d$ are similar in magnitude, as are also the elements of the third row of $G^{-1}$, the rules following (10.29) do not clearly favour any particular partial control scheme.

Example 10.7 Partial and feedforward control of a $2\times 2$ distillation process. We next consider a $2\times 2$ distillation process where it is difficult to control both outputs independently due to strong interactions. The process has 2 inputs (reflux $L$ and boilup $V$), 2 outputs (product compositions $y_D$ and $x_B$) and 2 disturbances (feed flowrate $F$ and feed composition $z_F$). (The outputs are mainly selected to avoid the presence of RHP-zeros.) We use a dynamic model which includes liquid flow dynamics; the model is given in Section 12.4. We assume that changes in the references ($r_1$ and $r_2$) are infrequent and they will not be considered. We want $P_d$ small, so from (10.29) we compute the partial disturbance gains
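The screening computation in (10.28)-(10.29) is easy to automate. Below is a small sketch (not from the book; the numerical matrices are invented for illustration) that computes the partial disturbance gain both from the partitioned form $P_d = G_{d1} - G_{12}G_{22}^{-1}G_{d2}$ and from the element formula (10.29), and checks that the two agree:

```python
import numpy as np

def partial_disturbance_gain(G, Gd, y2, u2):
    """Steady-state partial disturbance gain P_d = Gd1 - G12 G22^{-1} Gd2.

    y2: indices of outputs kept under (perfect) control,
    u2: indices of the inputs used to control them.
    The remaining outputs y1 are uncontrolled; remaining inputs unused.
    """
    y1 = [i for i in range(G.shape[0]) if i not in y2]
    G12 = G[np.ix_(y1, u2)]
    G22 = G[np.ix_(y2, u2)]
    return Gd[y1, :] - G12 @ np.linalg.solve(G22, Gd[y2, :])

# Illustrative 3x3 plant: leave output y3 uncontrolled and input u3 unused.
G = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
Gd = np.array([[1.0, 0.5],
               [0.5, 1.0],
               [1.0, 1.0]])
Pd = partial_disturbance_gain(G, Gd, y2=[0, 1], u2=[0, 1])

# Cross-check with (10.29): dy_i/dd_k = [G^{-1}Gd]_{jk} / [G^{-1}]_{ji},
# here with uncontrolled output i = 3 and unused input j = 3.
Ginv = np.linalg.inv(G)
Pd_check = (Ginv @ Gd)[2, :] / Ginv[2, 2]
print(np.allclose(Pd, Pd_check))
```

Both expressions give the same gains, which confirms that the element formula avoids re-partitioning the model for every candidate scheme.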
which are seen to be quite similar for the four candidate partial control schemes, $P_d^{2,2}$, $P_d^{2,1}$, $P_d^{1,2}$ and $P_d^{1,1}$, where the superscripts denote the controlled output and the corresponding input. Importantly, in all four cases the magnitudes of the elements in $P_d$ are much smaller than those in $G_d$, so control of one output significantly reduces the effect of the disturbances on the uncontrolled output. Frequency-dependent plots of $G_d$ and $P_d$ show that this conclusion at steady-state also applies at higher frequencies.

Let us consider in more detail the scheme with the smallest disturbance sensitivity for the uncontrolled output. This scheme corresponds to controlling output $y_2$ (the bottom composition) using $u_2$ (the boilup $V$) and with $y_1$ (the top composition) uncontrolled. This is illustrated in Figure 10.9, where we show, for the uncontrolled output $y_1$ and the worst disturbance $d_1$, both the open-loop disturbance gain ($g_{d11}$, curve 1) and the partial disturbance gain ($P_d$, curve 2).

[Figure 10.9: Effect of disturbance 1 on output 1 for the distillation column example. Magnitude versus frequency [rad/min]. Curves: 1. $g_{d11}$; 2. $P_d$; 3. Add static feedforward; 4. 20% error in FF; 5. Add filter to FF.]

The partial disturbance gain for disturbance $d_1$ (the feed flowrate $F$) is somewhat above 1 at low frequencies. For disturbance $d_2$ the partial disturbance gain (not shown) remains below 1 at all frequencies, so its effect is acceptable. The disturbance $d_1$ is therefore of concern, so let us next consider how we may reduce its effect on $y_1$.

One approach is to reduce the disturbance itself, for example by installing a buffer tank (as in the pH example in Chapter 5.16.3). However, a buffer tank has no effect at steady-state. Another approach is to install a feedforward controller based on measuring $d_1$ and adjusting $u_1$ (the reflux $L$), which is so far unused. In practice, this is easily implemented as a ratio controller which keeps $L/F$ constant; this eliminates the steady-state effect of $d_1$ on $y_1$ (provided the other control loop is closed). In terms of our linear model, the equivalence of this ratio controller is to use $u_1 = k\, d_1$, with the gain $k$ chosen to cancel the steady-state effect of $d_1$ on $y_1$. However, due to measurement error we cannot achieve perfect feedforward control, so let us assume the error is 20%, i.e. we use $u_1 = 1.2\,k\, d_1$. The steady-state effect of the disturbance is then $P_d(0)(1-1.2) = -0.2\,P_d(0)$, which is still acceptable.
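As a rough numerical illustration of this feedforward trade-off, the sketch below (not from the book: the value of $P_d(0)$ is an invented stand-in, and $P_d$ is crudely treated as constant over frequency; the 20% error and the 3 min filter follow the example) evaluates the residual disturbance effect:

```python
import numpy as np

Pd0 = 1.4     # assumed low-frequency partial disturbance gain (illustrative)
err = 0.2     # 20% feedforward gain error
tauF = 3.0    # first-order filter time constant [min]

w = np.logspace(-2, 1, 100)   # frequency [rad/min]
s = 1j * w

# Residual effect of d1 on y1 with static feedforward u1 = 1.2 k d1,
# and with the same feedforward filtered by 1/(tauF s + 1).
static_ff = np.abs(Pd0 * (1 - (1 + err)))
filtered_ff = np.abs(Pd0 * (1 - (1 + err) / (tauF * s + 1)))

# With the static FF only the 20% error remains in this constant-Pd
# approximation; the filter avoids reacting too fast but lets the full
# disturbance effect through again at high frequencies.
print(round(float(static_ff), 3),
      round(float(filtered_ff[0]), 3),
      round(float(filtered_ff[-1]), 3))
```

In the real example the high-frequency effect is instead limited because $P_d(j\omega)$ itself rolls off, which is why the filtered scheme works well there.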
The effect of the disturbance after including this static feedforward controller is shown as curve 3 in Figure 10.9, and we see that the effect at low frequencies is reduced to about 20% of its former value. However, the effect is still significant at higher frequencies, and in fact the feedforward makes the response worse there (as seen when comparing curves 3 and 4 with curve 2). The reason for this undesirable peak is that the feedforward controller, which is purely static, reacts too fast. To avoid this we filter the feedforward action with a time constant of 3 min, resulting in the following feedforward controller:

$u_1 = \dfrac{1.2\,k}{3s+1}\, d_1$    (10.32)

(where, to be realistic, we again assume a gain error of 20%). The resulting effect of the disturbance on the uncontrolled output is shown by curve 5, and we see that the effect is now well below 1 at all frequencies, so the performance is acceptable.

In conclusion, for this example it is difficult to control both outputs simultaneously using feedback control due to strong interactions, but we can almost achieve acceptable control of both outputs by leaving $y_1$ uncontrolled. The effect of the most difficult disturbance on $y_1$ can be further reduced using a simple feedforward controller (10.32).

Remark. To decide on the best scheme, we should also perform a controllability analysis of the feedback properties of the four $1\times 1$ problems. Performing such an analysis, we find that the chosen scheme and one other are preferable, because in these two cases the input has a more direct effect on the output, with less phase lag.

10.7.4 Measurement selection for indirect control

Assume the overall goal is to keep some variable $y_1$ at a given value (setpoint) $r_1$, i.e. our objective is to minimize $J = \|y_1 - r_1\|$. We assume we cannot measure $y_1$, and instead we attempt to achieve our goal by controlling $y_2$ at a constant value $r_2$. For small changes we may assume linearity and write

$y_1 = G_1 u + G_{d1} d$    (10.33)
$y_2 = G_2 u + G_{d2} d$    (10.34)

With feedback control of $y_2$ we get $y_2 = r_2 + e_2$, where $e_2$ is the control error. We now follow the derivation that led to $P_d$ in (10.28): solving for $u$ in (10.34) and substituting into (10.33) yields

$y_1 = (G_{d1} - G_1 G_2^{-1} G_{d2})\, d + G_1 G_2^{-1} (r_2 + e_2)$
With $e_2 = 0$ and $d = 0$ this gives $y_1 = G_1 G_2^{-1} r_2$, so $r_2$ must be chosen such that

$r_1 = G_1 G_2^{-1}\, r_2$    (10.35)

The control error in the primary output is then

$y_1 - r_1 = \underbrace{(G_{d1} - G_1 G_2^{-1} G_{d2})}_{P_d}\, d \;+\; \underbrace{G_1 G_2^{-1}}_{P_r}\, e_2$    (10.36)

To minimize $J = \|y_1 - r_1\|$ we should therefore select controlled outputs such that $\|P_d d\|$ and $\|P_r e_2\|$ are small. Note that $P_d$ depends on the scaling of the disturbances $d$ and the "primary" outputs $y_1$ (and is independent of the scaling of the inputs $u$ and the selected outputs $y_2$, at least for square plants). The magnitude of the control error $e_2$ (measurement noise) depends on the choice of outputs $y_2$. Based on (10.36) a procedure for selecting controlled outputs $y_2$ may be suggested: scale the disturbances to be of magnitude 1 (as usual), and scale the outputs $y_2$ so that the expected control error $e_2$ (measurement noise) is of magnitude 1 for each output (note that this is different from the output scaling used in other cases). Then, to minimize the control error for the primary outputs, we should select sets of controlled outputs which

minimize $\bar\sigma\big(\,[\,P_d \;\; P_r\,]\,\big)$    (10.37)

Remark 1 The choice of norm in (10.37) is of secondary importance. The maximum singular value arises if $\|d\|_2 \le 1$ and $\|e_2\|_2 \le 1$, and we want to minimize $\|y_1 - r_1\|_2$.

Remark 2 For the ideal choice $y_2 = y_1$ the term $P_d$ in (10.36) is zero, and the control error $y_1 - r_1$ is independent of $d$; the remaining error in (10.37) is then due only to the measurement error $e_2$, since $P_r$ is still non-zero.

Remark 3 In some cases this measurement selection problem involves a trade-off between wanting $\|P_d\|$ small (wanting a strong correlation between measured outputs $y_2$ and "primary" outputs $y_1$) and wanting $\|P_r\|$ small (wanting the effect of control errors (measurement noise) to be small). For example, this is the case in a distillation column when we use temperatures inside the column ($y_2$) for indirect control of the product compositions ($y_1$). For a high-purity separation, we cannot place the measurement too close to the column end due to sensitivity to measurement error ($\|P_r\|$ becomes large), and we cannot place it too far from the column end due to sensitivity to disturbances ($\|P_d\|$ becomes large).

Remark 4 Indirect control is related to the idea of inferential control, which is commonly used in the process industry, see Stephanopoulos (1984). However, in inferential control the idea is usually to use the measurement of $y_2$ to estimate (infer) $y_1$, and then to control this estimate rather than controlling $y_2$ directly. Marlin (1995) uses the term inferential control to mean indirect control as discussed above; there is no universal agreement on these terms.

Remark 5 The problem of indirect control is closely related to that of cascade control discussed in Section 10.7.2. The main difference is that in cascade control we also measure and control $y_1$ in an outer loop. In that case we want $P_d$ and $P_r$ small only at high frequencies beyond the bandwidth of the outer loop involving $y_1$.
10.8 Decentralized feedback control

In this section, $G(s)$ denotes a square $m\times m$ plant which is to be controlled using a diagonal controller (see Figure 10.10)

$K(s) = \mathrm{diag}\{k_i(s)\} = \begin{bmatrix} k_1(s) & & & \\ & k_2(s) & & \\ & & \ddots & \\ & & & k_m(s) \end{bmatrix}$    (10.38)

[Figure 10.10: Decentralized diagonal control of a $2\times 2$ plant: the controller elements $k_1$ and $k_2$ act on the loop errors $r_1 - y_1$ and $r_2 - y_2$, generating the inputs $u_1$ and $u_2$ to the plant $G(s)$.]

This is the problem of decentralized diagonal feedback control. The design of decentralized control systems involves two steps:

1. The choice of pairings (control configuration selection).
2. The design (tuning) of each controller, $k_i(s)$.

The optimal solution to this problem is very difficult mathematically, because the optimal controller is in general of infinite order and may be non-unique; we do not address it in this book. Rather, we aim at providing simple tools for pairing selections (step 1) and for analyzing the achievable performance (controllability) of diagonally controlled plants (which may assist in step 2). The reader is referred to the literature (e.g. Sourlas and Manousiouthakis, 1995) for more details.

Notation for decentralized diagonal control. With a particular choice of pairing we can rearrange the columns or rows of $G(s)$ such that the paired elements are along the diagonal of $G(s)$. We then have that the controller $K(s)$ is diagonal, and we also introduce

$\widetilde G(s) = \mathrm{diag}\{g_{ii}(s)\} = \begin{bmatrix} g_{11} & & & \\ & g_{22} & & \\ & & \ddots & \\ & & & g_{mm} \end{bmatrix}$    (10.39)

as the matrix consisting of the diagonal elements of $G$. Furthermore, $G^{ij}(s)$ denotes the remaining $(m-1)\times(m-1)$ plant obtained by removing row $i$ and column $j$ in $G(s)$.
10.8.1 RGA as interaction measure for decentralized control

We here follow Bristol (1966), and show that the RGA provides a measure of the interactions caused by decentralized diagonal control. Let $u_j$ and $y_i$ denote a particular input and output for the multivariable plant $G(s)$, and assume that our task is to use $u_j$ to control $y_i$. Bristol argued that there will be two extreme cases:

- Other loops open: all other inputs are constant, i.e. $u_k = 0,\ k\neq j$.
- Other loops closed: all other outputs are constant, i.e. $y_k = 0,\ k\neq i$.

In the latter case, it is assumed that the other loops are closed with perfect control. Perfect control is only possible at steady-state, but it is a good approximation at frequencies within the bandwidth of each loop. We now evaluate the effect of "our" given input $u_j$ on "our" given output $y_i$ for the two extreme cases:

Other loops open: $\left(\dfrac{\partial y_i}{\partial u_j}\right)_{u_k=0,\,k\neq j} = g_{ij}$    (10.40)

Other loops closed: $\left(\dfrac{\partial y_i}{\partial u_j}\right)_{y_k=0,\,k\neq i} \triangleq \hat g_{ij}$    (10.41)

Here $g_{ij}$ is the $ij$'th element of $G$, whereas $\hat g_{ij}$ is the inverse of the $ji$'th element of $G^{-1}$,

$\hat g_{ij} = 1/[G^{-1}]_{ji}$    (10.42)

To derive (10.42), note that $y = Gu \Rightarrow u = G^{-1}y$, and interchange the roles of $G$ and $G^{-1}$, of $u$ and $y$, and of $i$ and $j$, to get $\left(\partial u_j/\partial y_i\right)_{y_k=0,\,k\neq i} = [G^{-1}]_{ji}$, from which (10.42) follows. Bristol argued that the ratio between the gains in (10.40) and (10.41), corresponding to the two extreme cases, is a useful measure of interactions, and he introduced the $ij$'th relative gain, defined as

$\lambda_{ij} \triangleq \dfrac{g_{ij}}{\hat g_{ij}} = g_{ij}\,[G^{-1}]_{ji}$    (10.45)

The Relative Gain Array (RGA) is the corresponding matrix of relative gains. From (10.45) we get

$\Lambda(G) = G \times (G^{-1})^T$    (10.46)

where $\times$ denotes element-by-element multiplication (the Schur product). This is identical to our definition of the RGA-matrix in (3.69).

Intuitively, we would like to pair variables $u_j$ and $y_i$ so that $\lambda_{ij}$ is close to 1, because this means that the gain from $u_j$ to $y_i$ is unaffected by closing the other loops. More precisely, we would like to pair such that the rearranged system, with the pairings along the diagonal, has an RGA-matrix close to identity (see Pairing Rule 1, page 445). For a $2\times 2$ plant there are two alternative pairings, a $3\times 3$ plant offers 6, a $4\times 4$ plant 24, and an $m\times m$ plant has $m!$ alternatives. Thus, tools are needed which are capable of quickly evaluating alternative pairings.

10.8.2 Factorization of sensitivity function

The magnitude of the off-diagonal elements in $G$ (the interactions) relative to its diagonal elements is given by the matrix

$E \triangleq (G - \widetilde G)\,\widetilde G^{-1}$    (10.47)

An important relationship for decentralized control is given by the following factorization of the return difference operator:

$\underbrace{(I + GK)}_{\text{overall}} \;=\; \underbrace{(I + E\widetilde T)}_{\text{interactions}}\;\underbrace{(I + \widetilde G K)}_{\text{individual loops}}$

or equivalently, in terms of the sensitivity function $S = (I+GK)^{-1}$,

$S = \widetilde S\,(I + E\widetilde T)^{-1}$    (10.48)

Here

$\widetilde S = (I + \widetilde G K)^{-1} = \mathrm{diag}\Big\{\dfrac{1}{1 + g_{ii}k_i}\Big\}$  and  $\widetilde T = I - \widetilde S$    (10.49)

contain the sensitivity and complementary sensitivity functions for the individual loops. Note that $\widetilde S$ is not equal to the matrix of diagonal elements of $S$. The reader is encouraged to confirm that (10.48) follows from (A.139). The factorization (10.48) is important, because most of the significant results for stability and performance with decentralized control may be derived from this expression.
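Bristol's two gains and the resulting RGA are easily computed numerically. A minimal sketch (the $2\times 2$ gain matrix is an arbitrary illustration, not a plant from the book):

```python
import numpy as np

def rga(G):
    """Relative Gain Array: Lambda = G x (G^{-1})^T (elementwise product)."""
    return G * np.linalg.inv(G).T

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = rga(G)

# Bristol's two extreme gains for the (1,1) pairing: open-loop gain g11,
# and the gain g11 - g12*g21/g22 with the other loop perfectly controlled.
g11_other_closed = G[0, 0] - G[0, 1] * G[1, 0] / G[1, 1]
print(np.round(lam, 3))
print(round(G[0, 0] / g11_other_closed, 3))  # equals lam[0, 0]
```

Note that the rows (and columns) of the RGA sum to 1, so for a $2\times 2$ plant a single element determines the whole array.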
10.8.3 Stability of decentralized control systems

Consider a square plant with single-loop controllers, rearranged so that the paired elements are along the diagonal. Since it is desirable that the system can be tuned and operated one loop at a time, assume that each individual loop is stable by itself ($\widetilde S$ and $\widetilde T$ are stable). In this section we first derive sufficient conditions for stability, which may be used to select promising pairings; we then derive necessary conditions, which may be used to eliminate undesirable pairings.

Sufficient conditions for stability. From the factorization $S = \widetilde S(I + E\widetilde T)^{-1}$ in (10.48) and the generalized Nyquist theorem in Lemma A.5 (page 540), it follows that the overall system is stable ($S$ is stable) if and only if $\det(I + E\widetilde T(s))$ does not encircle the origin as $s$ traverses the Nyquist contour. From the spectral radius stability condition in (4.107) we then have that the overall system is stable if

$\rho\big(E\widetilde T(j\omega)\big) < 1,\quad \forall\omega$    (10.50)

This sufficient condition for overall stability can, as discussed by Grosdidier and Morari (1986), be used to obtain a number of even weaker stability conditions. The least conservative approach is to split up $\rho(E\widetilde T)$ using the structured singular value: from (8.92) we have $\rho(E\widetilde T) \le \mu(E)\,\bar\sigma(\widetilde T)$, and from (10.50) we then get the following theorem (as first derived by Grosdidier and Morari, 1986):

Theorem 10.2 Assume $G$ is stable and that the individual loops are stable ($\widetilde T$ is stable). Then the entire system is closed-loop stable ($T$ is stable) if

$\bar\sigma(\widetilde T) = \max_i |\tilde t_i| < 1/\mu(E),\quad \forall\omega$    (10.51)

Here $\mu(E)$ is called the structured singular value interaction measure, and is computed with respect to the diagonal structure of $\widetilde T$, where we may view $\widetilde T$ as the "design uncertainty". We would like to use integral action in the loops, i.e. $\widetilde T \approx I$ at low frequencies. Thus, in order to satisfy (10.51) we need $\mu(E) < 1$ at low frequencies where we have tight control. This gives the following rule: prefer pairings for which $\mu(E) < 1$ ("generalized diagonal dominance") at low frequencies.

Sufficient conditions in terms of $G$. Another approach is to use Gershgorin's theorem, see page 514. From (10.50) we then derive the following sufficient condition for overall stability in terms of the rows of $G$:

$|\tilde t_i| < \dfrac{|g_{ii}|}{\sum_{j\neq i}|g_{ij}|},\quad \forall i,\ \forall\omega$    (10.52)

or alternatively, in terms of the columns,

$|\tilde t_i| < \dfrac{|g_{ii}|}{\sum_{j\neq i}|g_{ji}|},\quad \forall i,\ \forall\omega$    (10.53)

These conditions give the important insight that we prefer to pair on large elements of $G$, so that the plant is diagonally dominant.
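The quantities in (10.50)-(10.53) can be estimated numerically. Computing the exact $\mu(E)$ requires an SSV algorithm, so the sketch below (with an invented gain matrix) instead uses the Perron root $\rho(|E|)$, which upper-bounds $\mu(E)$, together with the Gershgorin row ratios:

```python
import numpy as np

def interaction_bounds(G):
    """Diagonal-dominance measures for decentralized control (one frequency).

    Returns the Perron root rho(|E|) -- an upper bound on the structured
    singular value mu(E) -- and the Gershgorin row ratios |g_ii|/sum|g_ij|.
    """
    Gtil = np.diag(np.diag(G))
    E = (G - Gtil) @ np.linalg.inv(Gtil)
    perron = max(abs(np.linalg.eigvals(np.abs(E))))
    rows = np.abs(np.diag(G)) / (np.sum(np.abs(G), axis=1) - np.abs(np.diag(G)))
    return perron, rows

G = np.array([[5.0, 1.0],
              [1.0, 4.0]])
perron, rows = interaction_bounds(G)
# perron < 1 implies generalized diagonal dominance, so stability of the
# individual loops then guarantees overall stability (|t_i| < 1 suffices).
print(perron < 1, np.round(rows, 2))
```

In practice these measures would be evaluated on a grid of frequencies $G(j\omega)$, not just at steady-state.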
Remark 1 We cannot say that (10.51) is always less conservative than (10.52) and (10.53). It is true that the smallest of the $m$ upper bounds in (10.52) and (10.53) is always smaller (more restrictive) than $1/\mu(E)$ in (10.51). However, (10.51) imposes the same bound on $|\tilde t_i|$ for each loop, whereas (10.52) and (10.53) give individual bounds, some of which may be less restrictive than $1/\mu(E)$.

Remark 2 Another definition of generalized diagonal dominance is $\rho(|E|) < 1$, where $\rho(|E|)$ is the Perron root. Since $\mu(E) \le \rho(|E|)$, see (A.127), it is better (less restrictive) to use $\mu(E)$ to define diagonal dominance.

Remark 3 Condition (10.51), and the use of $\mu(E)$ for (nominal) stability of the decentralized control system, can be generalized to include robust stability and robust performance; see equations (31a-b) in Skogestad and Morari (1989).

RGA at crossover frequencies. We now want to show that for closed-loop stability it is desirable to select pairings such that the RGA is close to the identity matrix in the crossover region. The next simple theorem, which applies to a triangular plant, will enable us to do this:

Theorem 10.3 Suppose the plant $G(s)$ is stable. If the RGA-matrix $\Lambda(G) = I\ \forall\omega$, then stability of each of the individual loops implies stability of the entire system.

Proof: From the definition of the RGA it follows that $\Lambda(G) = I$ can only arise from a triangular $G(s)$, or from $G(s)$-matrices that can be made triangular by interchanging rows and columns in such a way that the diagonal elements remain the same but in a different order (the pairings remain the same). Since $E = (G - \widetilde G)\widetilde G^{-1}$ can then be made triangular with zero diagonal elements, and $\widetilde T$ is diagonal, it follows that all eigenvalues of $E\widetilde T$ are zero, so $\rho(E\widetilde T) = 0$ and (10.50) is satisfied. $\square$

A plant with a "triangularized" transfer matrix (as described above) controlled by a diagonal controller has only one-way coupling and will always yield a stable system, provided the individual loops are stable. In most cases, it is sufficient for overall stability to require that $G(j\omega)$ is close to triangular (or $\Lambda(G) \approx I$) at crossover frequencies:

Pairing Rule 1. To achieve stability with decentralized control, prefer pairings such that at frequencies around crossover the rearranged matrix $G(j\omega)$ (with the paired elements along the diagonal) is close to triangular. This is equivalent to requiring $\Lambda(G(j\omega)) \approx I$, i.e. the RGA-number $\|\Lambda(G(j\omega)) - I\|_{\mathrm{sum}}$ should be small.
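Pairing Rule 1 suggests evaluating the RGA-number as a function of frequency. A small sketch with a hypothetical $2\times 2$ plant of simple first-order elements (not an example from the book):

```python
import numpy as np

def rga(G):
    return G * np.linalg.inv(G).T

def rga_number(G):
    """RGA-number ||Lambda(G) - I||_sum (sum of absolute values)."""
    return float(np.sum(np.abs(rga(G) - np.eye(G.shape[0]))))

def plant(s):
    # Hypothetical 2x2 plant: interactions are relatively weaker at
    # high frequency because the off-diagonal elements roll off faster.
    return np.array([[1/(s + 1), 0.5/(2*s + 1)],
                     [0.5/(2*s + 1), 1/(s + 1)]])

for w in (0.01, 100.0):
    print(w, round(rga_number(plant(1j * w)), 3))
```

For this plant the RGA-number decreases with frequency, so the diagonal pairing looks better in the crossover region than at steady-state; the rule asks us to judge it where the loop gains cross 1.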
Derivation of Pairing Rule 1. Assume that $\widetilde S$ is stable, and that $G(s)\widetilde G(s)^{-1}$ is stable and has no RHP-zeros (which is always satisfied if both $G$ and $\widetilde G$ are stable and have no RHP-zeros). Then from (10.60) the overall system is stable ($S$ is stable) if and only if $(I + \widetilde S(\Gamma - I))^{-1}$ is stable, where $\Gamma = \widetilde G G^{-1}$. From the spectral radius stability condition in (4.107), the overall system is stable if

$\rho\big(\widetilde S(\Gamma - I)(j\omega)\big) < 1,\quad \forall\omega$    (10.54)

At low frequencies this condition is usually satisfied, because $\widetilde S$ is small. At higher frequencies, where the elements in $\widetilde S$ approach and possibly exceed 1 in magnitude, (10.54) may be satisfied if $G(j\omega)$ is close to triangular. This is because $\Gamma - I$, and thus $\widetilde S(\Gamma - I)$, are then close to triangular, with diagonal elements close to zero, so the eigenvalues of $\widetilde S(\Gamma - I)(j\omega)$ are close to zero; then (10.54) is satisfied and we have stability of $S$. This conclusion also holds for plants with RHP-zeros, provided they are located beyond the crossover frequency range. $\square$

Example. Consider a plant whose RGA-matrix indicates that $G$ is diagonally dominant, so that we would prefer to use the diagonal pairing. It is not possible in this case to conclude from the Gershgorin bounds in (10.52) and (10.53) that the plant is diagonally dominant, because one off-diagonal element of $G$ is larger than any of the diagonal elements. Diagonal dominance is, however, confirmed by computing the relative interactions as given by the matrix $E = (G - \widetilde G)\widetilde G^{-1}$: the SSV interaction measure $\mu(E)$ is about 0.12, so the plant is diagonally dominant, and from (10.51) stability of the individual loops will guarantee stability of the overall system. Note that the Perron root $\rho(|E|)$ is larger, which shows that the use of $\mu(E)$ is less conservative.

Necessary steady-state conditions for stability. A desirable property of a decentralized control system is that it has integrity, i.e. the closed-loop system should remain stable as subsystem controllers are brought in and out of service. Mathematically, the system possesses integrity if it remains stable when the controller $K$ is replaced by $\mathcal{E}K$, where $\mathcal{E} = \mathrm{diag}\{\epsilon_i\}$ and the $\epsilon_i$ may take on the values $\epsilon_i = 0$ or $\epsilon_i = 1$. An even stronger requirement ("complete detunability") is that the system remains stable as the gains in the various loops are reduced (detuned) by arbitrary factors, $0 \le \epsilon_i \le 1$. Decentralized integral controllability (DIC) is concerned with whether complete detunability is possible with integral control:
Definition 10.1 Decentralized Integral Controllability (DIC). The plant $G(s)$ (corresponding to a given pairing, with the paired elements along its diagonal) is DIC if there exists a stabilizing decentralized controller with integral action in each loop such that each individual loop may be detuned independently by a factor $\epsilon_i$ ($0 \le \epsilon_i \le 1$) without introducing instability.

Note that DIC considers the existence of a controller, so it depends only on the plant $G$ and the chosen pairings. Unstable plants are not DIC: with all $\epsilon_i = 0$ we are left with the uncontrolled plant $G$, and the system will be (internally) unstable if $G(s)$ is unstable. DIC was introduced by Skogestad and Morari (1988b); a detailed survey of conditions for DIC and other related properties is given by Campo and Morari (1994).

The steady-state RGA provides a very useful tool to test for DIC, as is clear from the following result, which was first proved by Grosdidier et al. (1985):

Theorem 10.4 Steady-state RGA and DIC. Consider a stable square plant $G$ and a diagonal controller $K$ with integral action in all elements, and assume that the loop transfer function $GK$ is strictly proper. If a pairing of outputs and manipulated inputs corresponds to a negative steady-state relative gain, then the closed-loop system has at least one of the following properties:
(a) The overall closed-loop system is unstable.
(b) The loop with the negative relative gain is unstable by itself.
(c) The closed-loop system is unstable if the loop with the negative relative gain is opened (broken).

This can be summarized as follows: a stable (reordered) plant $G(s)$ is DIC only if

$\lambda_{ii}(0) \ge 0 \ \text{ for all } i$    (10.55)

Proof: The theorem may be proved by setting $\widetilde T = I$ in (10.48) and applying the generalized Nyquist stability condition. Alternatively, we can use Theorem 6.5 on page 245: since $\det G^{ii}$ and $\det G$ appear in $\lambda_{ii} = g_{ii}\det G^{ii}/\det G$, see (A.77), Theorem 10.4 follows. $\square$

Each of the three possible instabilities in Theorem 10.4 resulting from pairing on a negative value of $\lambda_{ii}(0)$ is undesirable. The worst case is (a), when the overall system is unstable, but situation (c) is also highly undesirable, as it will imply instability if the loop with the negative relative gain somehow becomes inactive, for example due to input saturation. Situation (b) is unacceptable if the loop in question is intended to be operated by itself, or if all the other loops may become inactive, e.g. due to input saturation.

Remarks on DIC and RGA.
1. For $\epsilon_i = 0$ we assume that the integrator of the corresponding SISO controller has been removed, otherwise the integrator would yield internal instability.
2. Determinant conditions for integrity (DIC). The following condition is concerned with whether it is possible to design a decentralized controller for the plant such that the system possesses integrity, which is a prerequisite for having DIC. Assume, without loss of generality, that the signs of the rows or columns of $G$ have been adjusted such that all diagonal elements of $G(0)$ are positive, i.e. $g_{ii}(0) \ge 0$. One may then compute the determinant of $G(0)$ and of all its principal submatrices (obtained by deleting rows and corresponding columns in $G(0)$), which should all be positive. This determinant condition follows by applying Theorem 6.5 to all possible combinations of $\epsilon_i = 0$ or 1, as illustrated in the proof of Theorem 10.4, and is equivalent to requiring that the so-called Niederlinski indices,

$N_I = \dfrac{\det G(0)}{\prod_i g_{ii}(0)}$    (10.58)

of $G(0)$ and of its principal submatrices are all positive.

3. The determinant conditions yield more information than the RGA. This is clear from the fact that in the RGA the determinants are combined into a ratio, $\lambda_{ii} = g_{ii}\det G^{ii}/\det G$, so we may have cases where two negative determinants result in a positive RGA-element.

4. For $2\times 2$ and $3\times 3$ plants we have even tighter conditions for DIC than (10.55). For $2\times 2$ plants (Skogestad and Morari, 1988b)

DIC $\Leftrightarrow\ \lambda_{11}(0) > 0$    (10.56)

For $3\times 3$ plants with positive diagonal RGA-elements of $G(0)$ and of its three principal $2\times 2$ submatrices we have (Yu and Fan, 1990)

DIC $\Leftarrow\ \sqrt{\lambda_{11}(0)} + \sqrt{\lambda_{22}(0)} + \sqrt{\lambda_{33}(0)} \ge 1$    (10.57)

(Strictly speaking, as pointed out by Campo and Morari (1994), we do not have equivalence for the case when $\sqrt{\lambda_{11}(0)} + \sqrt{\lambda_{22}(0)} + \sqrt{\lambda_{33}(0)}$ is identical to 1, but this has little practical significance.)

5. One cannot expect tight conditions for DIC in terms of the RGA for $4\times 4$ systems or higher. The reason is that the RGA essentially only considers the "corner values", $\epsilon_i = 0$ or $\epsilon_i = 1$, of the detuning factor in each loop in the definition of DIC, whereas DIC requires stability for all $0 \le \epsilon_i \le 1$. The theory of D-stability provides necessary and sufficient conditions, except in a few special cases; see the papers by Yu and Fan (1990) and Campo and Morari (1994).

6. If we assume that the controllers have integral action, then $\widetilde T(0) = I$, and we can derive from (10.51) that a sufficient condition for DIC is that $G$ is generalized diagonally dominant at steady-state, i.e. $\mu(E(0)) < 1$. This is proved by Braatz (1993, p. 154). However, the requirement is only sufficient for DIC, and therefore cannot be used to eliminate designs.

7. Nevertheless, for screening purposes the RGA is usually the preferred tool, because it does not have to be recomputed for each pairing: any permutation of the rows and columns of $G$ results in the same permutation in the RGA.
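The necessary-condition screening implied by (10.55)-(10.58), keeping only pairings with positive steady-state RGA-elements and a positive Niederlinski index, can be automated. A sketch (the gain matrix is an invented illustration, and for brevity only the full-matrix Niederlinski index is checked, not the principal submatrices; this is a screen related to DIC, not a complete DIC test):

```python
import numpy as np
from itertools import permutations

def rga(G):
    return G * np.linalg.inv(G).T

def niederlinski(G):
    """N_I = det(G) / prod(diag(G)); a pairing needs N_I > 0 for integrity."""
    return np.linalg.det(G) / np.prod(np.diag(G))

def screen_pairings(G0):
    """Keep pairings with all steady-state RGA elements positive and a
    positive Niederlinski index (necessary conditions only)."""
    n = G0.shape[0]
    lam = rga(G0)
    keep = []
    for perm in permutations(range(n)):       # output i paired with input perm[i]
        if all(lam[i, perm[i]] > 0 for i in range(n)):
            Gp = G0[:, list(perm)]            # reorder columns: pairs on diagonal
            if niederlinski(Gp) > 0:
                keep.append(perm)
    return keep

G0 = np.array([[2.0, 1.5, 0.5],
               [1.5, 1.0, 1.0],
               [0.5, 1.0, 2.0]])   # hypothetical steady-state gain matrix
print(screen_pairings(G0))
```

For this particular matrix only one of the six pairings survives the screen, which illustrates how quickly the steady-state RGA narrows down the alternatives.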
Let ´×µ denote the closedloop transfer function between input Ù and output Ý with all the other outputs under integral control. Then taken together the above two theorems then have the following implications: . The above results only address stability. If the plant has axis poles.g. and assume that the element has no RHPzero. ´×µ. c) The subsystem with input and output removed. 1992) Consider a transfer function matrix ´×µ with no zeros or poles at × ¼. (ii) the loop transfer function Ã is strictly proper. one may ﬁrst implement a stabilizing controller and then analyze the partially controlled system as if it were the plant ´×µ. ´¼µ ¼. 10.8.6 (Grosdidier et al. assume that ´½µ is positive (it is usually close to 1. by using very lowgain feedback). Assume that: (i) ´×µ has no RHPzeros. Since Theorem 6. these are moved slightly into the LHP (e. Negative RGAelements and decentralized performance With decentralized control we usually design and implement the controller by tuning and closing one loop at a time in a sequential manner.4 to unstable plants (and in this case one may actually desire to pair on a negative RGAelement). If ´ ½µ and ´¼µ have different signs then at least one of the following must be true: a) The element ´×µ has a RHPzero. We then have: If ´¼µ ¼ then ´×µ has an odd number of RHPpoles and RHPzeros. Assume Ð Ñ× ½ ´×µ is ﬁnite and different from zero. has a RHPzero. 1986) that ´ ´¼µµ ½ is equivalent to ½½ ´¼µ ¼ . 10. Theorem 10. integrators. e. This will have no practical signiﬁcance for the subsequent analysis. see Pairing rule 1). Alternatively. 9.ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ it is easy to show (Grosdidier and Morari.8.g. 1985) Consider a stable transfer function matrix ´×µ with elements ´×µ. which is conservative when compared with the necessary and sufﬁcient condition ½½ ´¼µ ¼ in (10. (iii) all other elements of ´×µ have equal or higher pole excess than ´×µ.5. This is indeed true as illustrated by the following two theorems: Theorem 10. 
it is recommended that. we may also easily extend Theorem 10.5 applies to unstable plants.56). 11.5 (Hovd and Skogestad.4 The RGA and righthalf plane zeros: Further reasons for not pairing on negative RGA elements Bristol (1966) claimed that negative values of ´¼µ implied the presence of RHPzeros.. Assume that we pair on a negative steadystate RGAelement. Performance is analyzed in Section 10. This is shown in Hovd and Skogestad (1994a). b) The overall plant ´×µ has a RHPzero. prior to the RGAanalysis.
(a) If we start by closing this loop (involving input $u_j$ and output $y_i$), then we will get a RHP-zero in $G^{ij}(s)$ which will limit the performance in the other outputs (this follows from Theorem 10.5 by assuming that $g_{ij}$ has no RHP-zero and that $\lambda_{ij}(\infty) > 0$).
(b) If we end by closing this loop, then we will get a RHP-zero in $\hat g_{ij}(s)$ which will limit the performance in output $y_i$ (follows from Theorem 10.6).

In conclusion, pairing on a negative RGA-element will, in addition to resulting in potential instability as given in Theorem 10.4, also limit the closed-loop decentralized performance. We have then firmly established the following rule:

Pairing Rule 2. For a stable plant, avoid pairings that correspond to negative steady-state RGA-elements, $\lambda_{ij}(0) < 0$.

Remark. The RGA is a very efficient tool because it does not have to be recomputed for each possible choice of pairing. This follows since any permutation of the rows and columns of $G$ results in the same permutation in the RGA of $G$. Thus, one can often eliminate many alternative pairings by a simple glance at the RGA-matrix. This is illustrated by the following example.

Example 10.8 Consider a $3\times 3$ plant with steady-state gain matrix $G(0)$ and corresponding RGA-matrix $\Lambda(0)$ as given in (10.59). For a $3\times 3$ plant there are 6 alternative pairings, and to achieve DIC one has to pair on a positive RGA(0)-element in each row and column. From the steady-state RGA we see that there is only one positive element in column 2 ($\lambda_{12}$), and only one positive element in row 3 ($\lambda_{33}$), and therefore there is only one possible pairing with all RGA-elements positive ($u_1 \leftrightarrow y_2$, $u_2 \leftrightarrow y_1$, $u_3 \leftrightarrow y_3$). Thus, if we require DIC, we can from a quick glance at the steady-state RGA eliminate five of the six alternative pairings.

Example 10.9 Consider the plant and its RGA

$G(s) = \dfrac{-s+1}{(5s+1)^2}\begin{bmatrix} 1 & -4.19 & -25.96 \\ 6.19 & 1 & -25.96 \\ 1 & 1 & 1 \end{bmatrix}, \qquad \Lambda(G) = \begin{bmatrix} 1 & 5 & -5 \\ -5 & 1 & 5 \\ 5 & -5 & 1 \end{bmatrix}$

Note that the RGA is constant, independent of frequency. Only two of the six possible pairings give positive steady-state RGA-elements (Pairing Rule 2): (a) the (diagonal) pairing on all $\lambda_{ij} = 1$, and (b) the pairing on all $\lambda_{ij} = 5$. Intuitively, one may expect pairing (a) to be the best, since it corresponds to pairing on RGA-elements equal to 1. However, the RGA-matrix is far from identity, and the RGA-number $\|\Lambda - I\|_{\mathrm{sum}}$ is 30 for both alternatives, so neither alternative satisfies Pairing Rule 1, and we are led to conclude that decentralized control should not be used for this plant. Surprisingly, Hovd and Skogestad (1992) confirm this conclusion by designing PI controllers for the two cases: they found pairing (a) to be significantly
Each reference change is less than the corresponding diagonal element in Ö Ê. and this is the reason for its name. (b) Consider the ¿ ¿ FCC process in Exercise 6. On the other hand.60) yields Ë Ë which shows that is important when evaluating performance with decentralized control. ¢ ¢ 10. deﬁned as (10. Ê. we consider performance in terms of the control error Ý Ö Ù· Ö (10. such that at each frequency: 1. which in both cases is very slow compared to the RHPzero which has a time constant of 1.140) is Ë ´Á · Ë ´ Á µµ ½ Ë (10. Note that the offdiagonal elements of the PRGA depend on the relative scaling on the outputs.5 Performance of decentralized control systems Above we used the factorization Ë Ë ´Á · Ì µ ½ in (10. ½.4. Each disturbance is less than 1 in magnitude. ´×µ ¸ ´×µ ½ ´×µ (10. We will also make use of the related ClosedLoop Disturbance Gain (CLDG) matrix. A related factorization which follows from (A.60) where is the Performance Relative Gain Array (PRGA).62) ´×µ ¸ ´×µ ´×µ ´×µ ½ ´×µ ´×µ The CLDG depends on both output and disturbance scaling. Here we want to consider perfomance. At frequencies where feedback is effective ( Ë ¼). .82) represents the steadystate model of a plant.63) Suppose the system has been scaled as outlined in Section 1. They found the achieveable closedloop time constants to be 1160 and 220. Show that 5 of the 6 possible pairings can be eliminated by requiring DIC. Exercise 10. .ÇÆÌÊÇÄ ËÌÊÍ ÌÍÊ ËÁ Æ ½ worse than (b) with . whereas the RGA is scaling independent. whereas the RGA only measures twoway interaction.16 on page 250.8.48) to study stability. the PRGA measures also oneway interaction. (10. Note that ½ Á . 2.7 (a) Assume that the matrix in (A. respectively. The diagonal elements of the PRGAmatrix are equal to the diagonal elements of the RGA.61) which is a scaled inverse of the plant. In the following. Show that ¾¼ of the ¾ possible pairings can be eliminated by requiring DIC.
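To make the definitions (10.60)-(10.62) concrete, the sketch below computes the steady-state RGA, PRGA and CLDG for a 2 × 2 plant in pure Python. The numerical gains are those of the distillation example discussed later in this section (assumed here); the helper functions and variable names are ours, not from the text.

```python
# Steady-state RGA, PRGA and CLDG for a 2x2 plant (illustrative sketch).

def inv2(A):
    """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

G0 = [[87.8, -86.4], [108.2, -109.6]]   # plant gains (assumed values)
Gd0 = [[7.88, 8.81], [11.72, 11.19]]    # disturbance gains (assumed values)

Ginv = inv2(G0)
# RGA: lambda_ij = g_ij * [G^-1]_ji (element-wise product with inv(G) transposed)
RGA = [[G0[i][j] * Ginv[j][i] for j in range(2)] for i in range(2)]
# PRGA: Gamma = diag(g_ii) * G^-1, eq. (10.61)
Gtilde = [[G0[0][0], 0.0], [0.0, G0[1][1]]]
PRGA = matmul(Gtilde, Ginv)
# CLDG: Gd_tilde = Gamma * Gd, eq. (10.62)
CLDG = matmul(PRGA, Gd0)

print("lambda11 =", round(RGA[0][0], 2))   # diagonal of PRGA equals diagonal of RGA
print("gamma11  =", round(PRGA[0][0], 2))
print("CLDG     =", [[round(x, 1) for x in row] for row in CLDG])
```

Note how the CLDG elements differ markedly from the open-loop disturbance gains, which is exactly the effect the analysis in this section exploits.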
For SISO systems, we found in Section 5.10 that in terms of scaled variables we must at all frequencies require

|1 + L| > |gd| and |1 + L| > |R|        (10.64)

for acceptable disturbance rejection and command tracking. Note that L, gd and R are all scalars in this case. For decentralized control these requirements may be directly generalized by introducing the PRGA matrix, Γ = G̃ G^-1, and the CLDG matrix, G̃d = Γ Gd. These generalizations will be presented, and then subsequently proved, next. Let Li = gii ki denote the loop transfer function in loop i.

Single disturbance. Consider a single disturbance, in which case Gd is a vector, and let g̃dik denote the i'th element of G̃d. Then for acceptable disturbance rejection (|ei| < 1) we must with decentralized control require for each loop i

|1 + Li| > |g̃dik|        (10.65)

which is the same as the SISO condition (5.52) except that gd is replaced by the CLDG, g̃dik. In words, g̃dik gives the "apparent" disturbance gain as seen from loop i when the system is controlled using decentralized control.

Single reference change. Similarly, consider a change in reference for output j of magnitude Rj. Then for acceptable reference tracking (|ei| < 1) we must require for each loop i

|1 + Li| > |γij| · Rj        (10.66)

which is the same as the SISO condition except for the PRGA factor, |γij|. Consequently, for performance it is desirable to have small elements in Γ, at least at frequencies where feedback is effective. However, at frequencies close to crossover, stability is the main issue, and since the diagonal elements of the PRGA and RGA are equal, we usually prefer to have γii close to 1 (recall Pairing Rule 1 on page 445).

Proofs of (10.65) and (10.66): Consider frequencies where feedback is effective, so S̃ is small and

I + S̃(Γ - I) ≈ I        (10.67)

Then from (10.60) we have

S ≈ S̃ Γ        (10.68)

and the closed-loop response becomes

e = S Gd d - S r ≈ S̃ G̃d d - S̃ Γ r        (10.69)
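The bounds (10.65) and (10.66) are easy to check numerically on a frequency grid. The sketch below does this for a single loop, using hypothetical scalar models (an integrating loop transfer function and a first-order CLDG element) chosen purely for illustration; none of these numbers come from the text.

```python
# Numerical check of the per-loop performance bounds (10.65)-(10.66) on a
# frequency grid, with hypothetical first-order models.

def L1(w):
    """Loop transfer function L_i = g_ii * k_i (assumed: integrating loop)."""
    s = 1j * w
    return 10.0 / s

def cldg11(w):
    """CLDG element g~_d,ik (assumed first-order disturbance model)."""
    s = 1j * w
    return 5.0 / (20.0 * s + 1.0)

gamma11, Rref = 1.2, 2.0   # PRGA element and reference magnitude (assumed)

ws = [0.001 * 1.3 ** n for n in range(40)]   # log-spaced frequency grid
# (10.65): |1 + L_i| > |g~_dik| at every grid frequency
dist_ok = all(abs(1 + L1(w)) > abs(cldg11(w)) for w in ws)
# (10.66): |1 + L_i| > |gamma_ij| * R_j at frequencies below crossover
ref_ok = all(abs(1 + L1(w)) > gamma11 * Rref for w in ws if w < 1.0)
print(dist_ok, ref_ok)
```

For these assumed models both bounds hold, so the loop gain is adequate for both disturbance rejection and reference tracking.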
The response in output i to a single disturbance dk and a single reference change rj is

ei = s̃i g̃dik dk  and  ei = -s̃i γij rj        (10.70)

where s̃i = 1/(1 + Li) is the sensitivity function for loop i by itself. Thus, to achieve |ei| < 1 for |dk| = 1 we must require |s̃i g̃dik| < 1, and (10.65) follows. Similarly, to achieve |ei| < 1 for |rj| = Rj we must require |s̃i γij| Rj < 1, and (10.66) follows.

Remark 1. Conditions (10.65) and (10.66) were derived under assumption (10.67). Since R usually has all of its elements larger than 1, in most cases (10.67) will be automatically satisfied if (10.66) is satisfied, so we normally need not check assumption (10.67). Note also that (10.68) may alternatively be derived from (10.48) by assuming T̃ ≈ I, which yields (I + E T̃)^-1 ≈ (I + E)^-1 = Γ.

Remark 2. Consider a particular disturbance with model gd. Its effect on output i with no control is gdi, and the ratio between g̃di (the CLDG) and gdi is the relative disturbance gain (RDG), βi, of Stanley et al. (1985) (see also Skogestad and Morari (1987b)):

βi ≜ g̃di / gdi        (10.71)

Thus βi, which is scaling independent, gives the change in the effect of the disturbance caused by decentralized control. It is desirable to have |βi| small, as this means that the interactions are such that they reduce the apparent effect of the disturbance, such that one does not need high gains |Li| in the individual loops.

10.6 Summary: Controllability analysis for decentralized control

When considering decentralized diagonal control of a plant, one should first check that the plant is controllable with any controller. If the plant is unstable, then as usual the unstable modes must be controllable and observable. In addition, the unstable modes must not be decentralized fixed modes, otherwise the plant cannot be stabilized with a diagonal controller (Lunze, 1992). For example, this is the case for a triangular plant if the unstable mode appears only in the off-diagonal elements.

The next step is to compute the RGA matrix as a function of frequency, and to determine if one can find a good set of input-output pairs, bearing in mind the following:

1. Prefer pairings which have the RGA matrix close to identity at frequencies around crossover, i.e. the RGA number ‖Λ(jω) - I‖sum should be small. This rule is to ensure that interactions from other loops do not cause instability.

2. Avoid pairings with negative steady-state RGA elements (Pairing Rule 2).

3. Prefer a pairing ij where gij puts minimal restrictions on the achievable bandwidth. Specifically, the frequency ωu where ∠gij(jωu) = -180° should be as large as possible. This rule favours pairing on variables "close to each other", which makes it easier to satisfy (10.65) and (10.66) while at the same time achieving stability. It is also consistent with the desire that Λ(jω) be close to I around crossover.

4. When a reasonable choice of pairings has been made, one should rearrange G to have the paired elements along the diagonal and perform a controllability analysis.

5. Compute the CLDG and PRGA, and plot these as functions of frequency. For systems with many loops, it is best to perform the analysis one loop at a time: for each loop i, plot |g̃dik| for each disturbance k, and plot |γij| for each reference j (assuming here for simplicity that each reference is of unit magnitude). For performance, we need |1 + Li| to be larger than each of these:

Performance:  |1 + Li| > max over k, j of { |g̃dik|, |γij| }        (10.72)

To achieve stability of the individual loops, one must analyze gii(s) to ensure that the bandwidth required by (10.72) is achievable. Note that RHP-zeros in the diagonal elements may limit achievable decentralized control, whereas they may not pose any problems for a multivariable controller. Since with decentralized control we usually want to use simple controllers, the achievable bandwidth in each loop will be limited by the frequency ωu where ∠gii(jωu) = -180° (recall the results for SISO systems in Chapter 5).

6. As already mentioned, one may check for input constraints by considering the elements of G^-1 Gd and making sure that they do not exceed one in magnitude within the frequency range where control is needed. Equivalently, one may for each loop i plot |gii|; the requirement is then that

To avoid input constraints:  |gii| > |g̃dik|        (10.73)

at frequencies where |g̃dik| is larger than 1. This provides a direct generalization of the SISO requirement |g| > |gd|. The advantage of (10.73) compared to using G^-1 Gd is that we can limit ourselves to frequencies where control is needed to reject the disturbance (where |g̃dik| > 1).

7. If the chosen pairing is controllable, then the analysis based on (10.72) tells us directly how large the loop gain |Li| = |gii ki| must be, and it can be used as a basis for directly designing the controller ki(s) for loop i.

If the plant is not controllable with the chosen pairing, then one may consider another choice of pairings and go back to Step 4. If one still cannot find any pairings which are controllable, then one should consider multivariable control.

In some cases, pairings which violate the above rules may be chosen. For example, one may even choose to pair on elements with gii = 0, which yield λii = 0. One then relies on the interactions to achieve the desired performance, as loop i by itself has no effect.
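The screening in Step 2 (Pairing Rule 2) can be automated by enumerating all pairings and keeping only those whose paired steady-state RGA elements are all positive. The sketch below does this for an illustrative 3 × 3 gain matrix of our own choosing, not one from the text; the Gauss-Jordan inverse keeps it dependency-free.

```python
# Screening pairings by the sign of the steady-state RGA (Pairing Rule 2).
from itertools import permutations

def matinv(A):
    """Gauss-Jordan inverse of a small square matrix (lists of lists)."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # partial pivoting
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def rga(G):
    """RGA: lambda_ij = g_ij * [G^-1]_ji."""
    Gi = matinv(G)
    return [[G[i][j] * Gi[j][i] for j in range(len(G))] for i in range(len(G))]

G0 = [[10.0, 1.0, 0.5],      # illustrative steady-state gains (assumed)
      [1.0,  8.0, 0.5],
      [0.5,  1.0, 5.0]]
Lam = rga(G0)
# p[i] is the input paired with output i; keep pairings with all positive RGA elements
viable = [p for p in permutations(range(3))
          if all(Lam[i][p[i]] > 0 for i in range(3))]
print(viable)
```

For this diagonally dominant example the diagonal pairing survives the screen, as expected; for a plant like Example 10.8 the same screen eliminates five of the six pairings at a stroke.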
An example of this is in distillation control when the LV configuration is not used; see Example 10.7.

Example 10.10 Application to distillation process. In order to demonstrate the use of the frequency-dependent RGA and CLDG for evaluation of expected diagonal control performance, we consider again the distillation process used in Example 10.7. The LV configuration is used; that is, the manipulated inputs are reflux L (u1) and boilup V (u2). The outputs are the product compositions yD (y1) and xB (y2). Disturbances in feed flowrate F (d1) and feed composition zF (d2) are included in the model. The disturbances and outputs have been scaled such that a magnitude of 1 corresponds to a change in F of 20%, a change in zF of 20%, and a change in yD and xB of 0.01 mole fraction units. The dynamic model is given in Section 12.4.

Initial controllability analysis. G(s) is stable and has no RHP-zeros. The plant and RGA matrix at steady-state are

G(0) =
[   87.8   -86.4
   108.2  -109.6 ],   Λ(0) =
[  35.1   -34.1
  -34.1    35.1 ]        (10.74)

The RGA elements are much larger than 1 and indicate a plant that is fundamentally difficult to control. Fortunately, the flow dynamics partially decouple the response at higher frequencies, and we find that Λ(jω) ≈ I at frequencies above about 0.5 rad/min. Therefore, if we can achieve sufficiently fast control, the large steady-state RGA elements may be less of a problem.

The steady-state effect of the two disturbances is given by

gd(0) =
[   7.88    8.81
   11.72   11.19 ]        (10.75)

and the magnitudes of the elements in gd(jω) are plotted as a function of frequency in Figure 10.11. From this plot the two disturbances seem to be equally difficult to reject, with magnitudes larger than 1 up to a frequency of about 0.1 rad/min. We conclude that control is needed up to about 0.1 rad/min. The magnitudes of the elements in G^-1 gd(jω) (not shown) are all less than 1 at all frequencies (at least up to 10 rad/min), and so it will be assumed that input constraints pose no problem.

Figure 10.11: Disturbance gains |gdik|, for effect of disturbance k on output i

Choice of pairings. The selection of u1 to control y1 and u2 to control y2 corresponds to pairing on positive elements of Λ(0), and Λ(jω) ≈ I at high frequencies. This seems sensible, and this pairing is used in the following.

Analysis of decentralized control. The elements in the CLDG and PRGA matrices are shown as functions of frequency in Figures 10.12 and 10.13. At steady-state we have

Γ(0) =
[  35.1   -27.6
  -43.2    35.1 ],   G̃d(0) = Γ(0) gd(0) =
[ -47.7   -0.40
   70.5   11.7  ]        (10.76)

In this particular case the off-diagonal elements of the RGA (Λ) and PRGA (Γ) are quite similar. We note that G̃d(0) is very different from gd(0), and this also holds at higher frequencies. For disturbance 1 (first column in G̃d) we find that the interactions increase the apparent effect of the disturbance, whereas they reduce the effect of disturbance 2, at least on output 1.

Figure 10.12: Closed-loop disturbance gains |g̃dik|, for effect of disturbance k on output i

Figure 10.13: PRGA elements |γij|, for effect of reference j on output i

We now consider one loop at a time to find the required bandwidth. For loop 1 (output 1) we consider γ11 and γ12 for references, and g̃d11 and g̃d12 for disturbances. Disturbance 1 is the most difficult, and we need |1 + L1| > |g̃d11| at frequencies where |g̃d11| is larger than 1, which is up to about 0.2 rad/min. The magnitudes of the PRGA elements are somewhat smaller than |g̃d11| (at least at low frequencies), so reference tracking will be achieved if we can reject disturbance 1. From g̃d12 we see that disturbance 2 has almost no effect on output 1 under feedback control.
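The required bandwidth of a loop can be read off as the frequency where the corresponding CLDG magnitude drops below 1. The sketch below illustrates this with an assumed first-order magnitude model chosen to roughly match the low-frequency CLDG level of loop 1; the fit is ours for illustration, not the text's full frequency response.

```python
# Required loop bandwidth from the CLDG (Step 5, eq. (10.72)): find the
# frequency where |g~_d11(jw)| drops below 1, using an assumed first-order
# magnitude model for the CLDG element.

def cldg11_mag(w, kd=47.7, tau=240.0):
    """Assumed first-order CLDG magnitude: kd / |tau*jw + 1|."""
    return kd / (1.0 + (tau * w) ** 2) ** 0.5

w, wmax, step = 1e-4, 10.0, 1.02   # log-spaced scan
wc = None
while w < wmax:
    if cldg11_mag(w) < 1.0:
        wc = w                      # first frequency where |g~_d11| < 1
        break
    w *= step
print("required bandwidth for loop 1 ~ %.2f rad/min" % wc)
```

With these assumed numbers the crossing lands near 0.2 rad/min, consistent with the bandwidth requirement read from Figure 10.12 in the text.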
Similarly, for loop 2 we find that disturbance 1 is the most difficult, and from g̃d21 we require a loop gain larger than 1 up to about 0.3 rad/min. In summary, a bandwidth of about 0.2 to 0.3 rad/min in each loop is required for rejecting disturbance 1, and this should be achievable in practice.

Observed control performance. To check the validity of the above results we designed two single-loop PI controllers:

k1(s) = 0.261 (3.76s + 1)/(3.76s),   k2(s) = -0.375 (3.31s + 1)/(3.31s)        (10.77)

Closed-loop simulations with these controllers are shown in Figure 10.14. The simulations confirm that disturbance 2 is more easily rejected than disturbance 1.

Figure 10.14: Decentralized PI control. Responses to a unit step in d1 at t = 0 and a unit step in d2 at t = 50 min

The loop gains, |Li|, with these controllers are larger than the closed-loop disturbance gains, |g̃dik|, at frequencies up to crossover. Thus, as expected, there is an excellent agreement between the controllability analysis and the simulations, as has also been confirmed by a number of other examples.
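As a rough consistency check of the tuning in (10.77), one can evaluate the loop gain of loop 1 over frequency. Here the plant element is approximated by the simplified model g11(s) = 87.8/(75s + 1) from earlier chapters; this is an assumption of this sketch, since the example in the text uses the full dynamic model of Section 12.4.

```python
# Frequency-response check of loop 1's PI tuning (10.77), with the plant
# element approximated by a simplified first-order model (assumed).

def g11(s):
    return 87.8 / (75.0 * s + 1.0)   # simplified plant element (assumption)

def k1(s):
    return 0.261 * (3.76 * s + 1.0) / (3.76 * s)   # PI controller from (10.77)

def L1(w):
    """Loop transfer function of loop 1 evaluated at s = jw."""
    s = 1j * w
    return g11(s) * k1(s)

for w in (0.05, 0.1, 0.2):
    print("w = %.2f rad/min, |L1| = %.2f" % (w, abs(L1(w))))
```

The loop gain stays above 1 out to about 0.2 rad/min, in line with the bandwidth requirement derived from the CLDG analysis above.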
10.7 Sequential design of decentralized controllers

The results presented in this section on decentralized control are most useful for the case when the local controllers ki(s) are designed independently, that is, each controller is designed locally and then all the loops are closed. One problem with this independent design is that the interactions may cause the overall system (T) to be unstable, even though the local loops (T̃) are stable. This will not happen if the plant is diagonally dominant, such that we satisfy, for example, σ̄(T̃) < 1/μ(E) in (10.51).

The stability problem is avoided if the controllers are designed sequentially, as is commonly done in practice when, for example, the bandwidths of the loops are quite different. In this case the outer loops are tuned with the inner (fast) loops in place, and each step may be considered as a SISO control problem. In particular, overall stability is then determined by m SISO stability conditions. However, the issue of performance is more complicated, because the closing of a loop may cause "disturbances" (interactions) into a previously designed loop.
The engineer must then go back and redesign a loop that has been designed earlier. Thus sequential design may involve many iterations. The performance bounds in (10.65) and (10.66) are useful for determining the required bandwidth in each loop, and may thus suggest a suitable sequence in which to design the controllers.

Although the analysis and derivations given in this section apply when we design the controllers sequentially, it is often useful, after having designed a lower-layer controller (the inner fast loops), to redo the analysis based on the model of the partially controlled system, e.g. using (10.26) or (10.28). For example, this is usually done for distillation columns, where we base the analysis of the composition control problem on a 2 × 2 model of the partially controlled 5 × 5 plant; see Hovd and Skogestad (1994b).

10.8 Conclusions on decentralized control

In this section, we have derived a number of conditions for the stability, e.g. (10.51) and (10.55), and performance, e.g. (10.65) and (10.66), of decentralized control systems. The conditions may be useful in determining appropriate pairings of inputs and outputs, and the sequence in which the decentralized controllers should be designed. Recall, however, that in many practical cases decentralized controllers are tuned based on local models or even on-line. The conditions and bounds are also useful in an input-output controllability analysis for determining the viability of decentralized control. Some exercises which include a controllability analysis of decentralized control are given at the end of Chapter 6.

10.9 Conclusion

The issue of control structure design is very important in applications, but it has received relatively little attention in the control community during the past decades. In this chapter, we have discussed the issues involved, and we have provided some ideas and tools. There is clearly a need for better tools and theory in this area.
By model order we mean the dimension of the state vector in a minimal realization. Given a high-order model G(s), the model reduction problem is to find a low-order approximation Ga(s) such that the error G - Ga is small. So far in this book we have only been interested in the infinity (H∞) norm of stable systems, but the error G - Ga may be unstable, and the definition of the infinity norm must then be extended. We will describe three main methods for tackling this problem: balanced truncation, balanced residualization and optimal Hankel norm approximation. The central problem