Ad Damen and Siep Weiland
(lecture notes for the course
Robuuste Regelingen,
5P430, autumn trimester)
Measurement and Control Group
Department of Electrical Engineering
Eindhoven University of Technology
P.O.Box 513
5600 MB Eindhoven
Draft version of July 17, 2002
Preface
Setup

This course has the character of a working group. This means that you will not be offered
a ready-made portion of 'science' to study; instead, active participation will be expected
from you in the form of contributions to discussions and presentations. In this course we
want to offer an overview of modern, partly still developing, techniques for the design of
robust controllers for dynamical systems.
In the first half of the trimester the theory of robust controller design will be treated
in regular lectures. Basic classical control engineering is required as prior knowledge;
familiarity with LQG control and with matrix calculus/functional analysis is recommended.
The lectures will emphasise making robust controller design accessible to control
engineers, rather than giving an exhaustive treatment of the required mathematics. For
this period six exercises have been included in these notes, intended to let you gain
experience with both the theoretical and the practical aspects of this subject.
The usefulness and the limitations of the theory will subsequently be tested on various
applications, which will be presented and discussed by you and your fellow students in
the second half of the course. The assignments are partly set up for individual work and
partly for elaboration in pairs. You can choose from:

• a critical evaluation of an article from the applied scientific literature

• a controller design for a computer simulation

• a controller design for a laboratory process.

More information will be given in the first lecture, where sign-up lists will be available.
Each presentation takes 45 minutes, including discussion time. The hours and the
scheduling of the presentations will be announced later. Materials needed for the
presentations (sheets, pens, etc.) will be made available and can be obtained from the
secretariat of the Measurement and Control Group (E-hoog 4.32). You are expected to
attend at least 13 presentations and to take an active part in the discussions. An
attendance list will be kept for this purpose.

A final interview will be held about your findings on the subject you have chosen; this
discussion should show whether you have gained sufficient insight and experience. Your
presentation material (augmented plant, filters, ...) will be taken as the starting point
for this interview.
Computers
To gain practical experience with the design of robust control systems, various toolboxes
in MATLAB will be used for some of the exercises in the first half of the trimester, as
well as for the controllers to be designed. This software has been purchased by the
Measurement and Control Group on a commercial basis for research purposes. You can obtain
these toolboxes from Mr. Udo Batzke (E-hoog, floor 4), or work on the PCs made available
for this purpose by the CS group (also via Batzke). It is emphatically pointed out that
copying the software is not permitted.
Assessment

The final grade is a weighted average of the assessments of your presentation, your
contributions to the discussions of the other presentations, and the final interview. The
amount of coaching you needed is also a factor.
Course material

In addition to these lecture notes, the following is a brief overview of recommended
literature:

• [1] A very useful reference work; in stock at the TUE bookshop.

• [2] Certainly gives good insight into the problems, with methods for creating solutions
for SISO systems yourself. It lacks, however, the state space approach for MIMO systems.

• [3] Very practice-oriented, aimed at the process industry. Lacks a proper overview.

• [4] Owing to the tempestuous developments in the research field of H∞ control theory,
this book was already outdated at the moment of publication. Nevertheless a well written
introduction to H∞ control problems.

• [5] A very readable standard work for further study.

• [6] From the inventors themselves... The recommended reference for µ-analysis.

• [7] A book full of formulas, for lovers of 'hard' proofs.

• [8] A short introduction that clarifies the main lines, perhaps without too many details.

• [9] Robust control from a somewhat different point of view.

• [12] This book covers a large part of the material of this course. Well written,
mathematically oriented.

• [13] An extensive treatise from a somewhat different angle: the parametric approach.

• [14] A thorough book, written by those who stood at the mathematical cradle of robust
control. Mathematically oriented.

• [15] This book is written in the style of these lecture notes. Excellent examples, which
are also used in our course, yet with somewhat too little attention to the practical
aspects of controller design.
Contents

1 Introduction 9
2 What about LQG? 15
3 Control goals 21
4 Internal model control 31
5 Signal spaces and norms 37
6 Weighting filters 61
7 General problem 81
8 Performance robustness and µ-analysis/synthesis 93
9 Filter Selection and Limitations 111
10 Design example 141
11 Basic solution of the general problem 165
12 Solution to the general H∞ control problem 171
13 Solution to the general H∞ control problem 191
Chapter 1

Introduction

1.1 What's robust control?

In previous courses the processes to be controlled were represented by rather simple
transfer functions or state space representations. These dynamics were analysed and
controllers were designed such that the closed loop system was at least stable and showed
some desired performance. In particular, the Nyquist criterion used to be very popular in
testing the closed loop stability, and some margins were generally taken into account to
stay 'far enough' from instability. It was readily observed that as soon as the Nyquist
curve passes the point −1 too closely, the closed loop system becomes 'nervous': it is
then in a kind of transition phase towards actual instability. Moreover, if the dynamics
of the controlled process deviate somewhat from the nominal model, the shift may cause an
encirclement of the point −1, resulting in an unstable system. So, with these margins,
stability was effectively made robust against small perturbations in the process dynamics.
The proposed margins were really rules of thumb, however: they do not provide very strict
and well defined criteria, the allowed perturbations in the dynamics were not quantified,
only stability of the closed loop is guarded and not the performance, and the method does
not work for multivariable systems.

In this course we will try to overcome these four deficiencies, i.e. define clear
descriptions and bounds for the allowed perturbations and guarantee robustness not only
for stability but also for the total performance of the closed loop system, even in the
case of multivariable systems. To facilitate the discussion, consider a simple
representation of a controlled system in Fig. 1.1.

Figure 1.1: Simple block scheme of a controlled system

The control block C is to be designed such that the following goals and constraints can
be realised in some optimal form:

stability The closed loop system should be stable.

tracking The real output y should follow the reference signal ref.

disturbance rejection The output y should be free of the influences of the disturbing
noise d.

sensor noise rejection The noise introduced by the sensor should not affect the output y.

avoidance of actuator saturation The actuator, not explicitly drawn here but taken as the
first part of process P, should not become saturated but has to operate as a linear
transfer.

robustness If the real dynamics of the process change by an amount ∆P, the performance of
the system, thanks to control, should not deteriorate to an unacceptable level. (In
specific cases it may be that only stability is considered.)

The robustness requirement is essential, because a performance should not only hold for a
very specific process P but also for deviating dynamics. The true process dynamics are
then given by:

Ptrue = P + ∆P (1.1)

where now P takes the role of the nominal model while ∆P represents the additive model
perturbation. There is no way to avoid ∆P, considering the causes behind it:

unmodelled dynamics The nominal model P will generally be taken linear, time-invariant
and of low order. As a consequence the real behaviour is necessarily approximated, since
real processes cannot be caught in those simple representations.

time variance Inevitably the real dynamics of physical processes change in time. They are
susceptible to wear during aging (e.g. steel rollers), will be affected by pollution
(e.g. catalysts), or undergo the influence of temperature (or pressure, humidity, ...)
changes (e.g. day and night fluctuations in glass furnaces).

varying loads Dynamics can substantially change if the load is altered: the mass and the
inertial moment of a robot arm are determined considerably by the load, unless you are
willing to pay for a very heavy robot that is very costly in operation.

manufacturing variance A prototype process may be characterised very accurately, and the
control action can be tuned very specifically to it. This is of no help, however, if the
variance over the production series is high. A low variance production can turn out to be
immensely costly, if one thinks e.g. of a CD player: one could produce a drive with
tolerances in the micrometer domain but, thanks to control, we can be satisfied with less.

limited identification Even if the real process were linear and time-invariant, we still
have to measure or identify its characteristics, and this cannot be done without an
error. Measuring equipment and identification methods/tools, using finite data sets of
limited sample rate, will inevitably be suffering from inaccuracies.

actuators & sensors What has been said about the process can be attributed as well to the
actuators and sensors that are part of the controlled system.

It will be clear that all the above desiderata can only be fulfilled to some extent. To
that purpose it is important that we can quantify the various aims and consequently
weight each claim against the others. It will be explained how some constraints put
similar demands on the controller C, while others require contradictory actions, and as a
result the final controller can only be a kind of compromise. Consequently a definition
of robust control could be stated as:

Design a controller such that some level of performance of the controlled system is
guaranteed irrespective of changes in the plant dynamics within a predefined class.

One might require a minimum level of performance (e.g. stability) of the controlled
system in case of e.g. sensor failure or actuator degradation. Basically, emphasis on the
robustness requirement weakens the other achievable constraints. In Fig. 1.2 the effect
of the robustness requirement is illustrated.

Figure 1.2: Robust performance

In this supersimplified picture we let the horizontal axis represent all possible plant
behaviours, centered around the nominal plant P with a deviation ∆P living in the shaded
slice. So this slice represents the class of possible plants. In accordance with the
natural inclination to consider something as being "better" if it is "higher", the
vertical axis represents a degree of performance, where higher values indicate better
performance. Positive values represent improvements by the control action compared to the
uncontrolled situation, and negative values correspond to deteriorations by the very use
of the controller. For the extreme value −∞ the system is unstable, and +∞ is the extreme
optimist's performance; optimal performance is a maximum here. (This is contrary to the
criteria, to be introduced later on, where the best performance occurs in a minimum.)

If the controller is designed to perform well for just the nominal process, it can really
be finetuned to it, but for a small model error ∆P the performance will soon deteriorate
dramatically. We can improve this by robustifying the control and indeed improve the
performance for greater ∆P, but unfortunately and inevitably at the cost of the
performance for the nominal model P. One will readily recognise this effect in many
technical designs (cars, bikes, ...), but also e.g. in natural evolution (animals,
organs, ...).

1.2 H∞ in a nutshell

The techniques to be presented in this course are named H∞ control and
µ-analysis/synthesis. They have been developed since the beginning of the eighties and
are, as a matter of fact, a well quantified application of the classical control design
methods, fully applied in the frequency domain. It thus took about forty years to evolve
a mathematical context strong enough to tackle this problem. However, the intermediate
popularity and evolution of the LQG design in the time domain was not in vain, as we will
elucidate in the next chapter 2 and in the discussion of the final solution in chapters
11 and 13. It will then follow that LQG is just one alternative in a very broad set of
possible robust controllers, each characterised by their own signal and system spaces.

Here we will shortly outline the whole procedure, starting with a rearrangement in
Fig. 1.3 of the structure of the problem in Fig. 1.1.

Figure 1.3: Structure dictated by exogenous inputs and outputs to be minimised

On the left we have gathered all inputs of the final closed loop system that we do not
know beforehand but that will live in certain bounded sets. These so-called exogenous
inputs consist in this case of the reference signal r, the disturbance d and the
measurement noise η. These signals will be characterised as bounded by a (mathematical)
ball of radius 1 in a normed space, together with filters that represent their frequency
contents, as discussed in chapter 5. This may appear very abstract at the moment, but
these normed spaces are necessary to quantify signals and transfer functions in order to
be able to compare and weight the various control goals.

Next, at the right side we have put together those output signals that have to be
minimised according to the control goals, in a similar characterisation as the input
signals. We are not interested in minimising the actual output y (so this is not part of
the output) but only in the way that y follows the reference signal r. Consequently the
error z = r − y is taken as an output to be minimised. Note also that we have taken the
difference with the actual y and not the measured error e, because the measurement noise
η enters these two errors substantially differently. As is clearly observed from
Fig. 1.3, the attenuation of the effects of both the disturbance d and the measurement
noise η is automatically represented by the minimisation of the output z. As an extra
output to be minimised we show the input u of the real process, in order to avoid
actuator saturation. How strong this constraint is in comparison to the tracking aim
depends on the quality and thus the price of the actuator, and is going to be translated
into forthcoming weightings and filters.

In a more complicated way, also the effect of the perturbation ∆P on the robustness of
stability and performance should be minimised. As can be seen in Fig. 1.3, ∆P is an extra
transfer between output u and input d. If we can keep the transfer from d to u small by a
proper controller, the loop closed by ∆P won't have much effect. Consequently robustness
is increased implicitly by keeping u small, as we will analyse in chapter 3. Therefore we
have to quantify the bounds of ∆P again by a proper ball or norm and filters. The
definitions of the various normed spaces are given in chapter 5, while the translation of
the various control goals is described in detail in chapter 3.

At last we have to provide a linear, time-invariant, nominal model P of the dynamics of
the process, which may be a multivariable (MIMO = Multi Input Multi Output) transfer.
Provisionally we will discuss the matter in the s-domain, so that P represents a transfer
function in s. The same holds for the controller C, and consequently the signals (lines)
represent functions in the s-domain, so that we can write e.g. u(s) = C(s)e(s). In the
multivariable case all single lines represent vectors of signals, and P is a transfer
matrix where each entry is the transfer function from the corresponding input to the
corresponding output.

Having characterised the control goals in terms of outputs to be minimised, provided that
the inputs remain confined as defined, the principal idea behind the control design of
block C now consists of three phases, as presented in chapter 11:

1. Compute a controller C0 that stabilises P.

2. Establish around this central controller C0 the set of all controllers that stabilise
P, according to the Youla parametrisation.

3. Search in this last set for that (robust) controller that minimises the outputs in the
proper sense.

This design procedure is quite unusual at first instance, so we start to analyse it for
stable transfers P, where we can apply the internal model approach in chapter 4. This
historically first method is treated as it shows a clear analysis of the problem.
Afterwards the original concept of a general solution is given in chapter 11. In later
times improved solution algorithms have been developed by means of Riccati equations or
by means of Linear Matrix Inequalities (LMI), as explained in chapter 13. In chapter 8
the robustness concept will be revisited and improved, which will yield the
µ-analysis/synthesis. After the theory, chapter 9 is devoted to the selection of
appropriate design filters in practice; in that chapter you will also get instructions
how to use dedicated toolboxes in MATLAB. Finally, in chapter 10 an example illustrates
the methods.
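The remark above, that keeping the transfer from d to u small renders the loop closed by ∆P harmless, can be made concrete with a small-gain style computation. The plant P = 1/(s + 1), the controller gains and the perturbation bound below are illustrative assumptions, not data from the course:

```python
import numpy as np

# Illustrative SISO example (not from the course): P = 1/(s+1), C = K.
# The transfer from d to u is C/(1 + PC) (up to sign); the loop closed by an
# additive perturbation DeltaP has little effect while |DeltaP * C/(1+PC)| << 1.
def loop_gain_through_perturbation(K, delta_bound, omega):
    s = 1j * omega
    P = 1.0 / (s + 1.0)
    C = K
    d_to_u = C / (1.0 + P * C)            # transfer d -> u
    return np.abs(delta_bound * d_to_u)   # gain around the DeltaP loop

omega = np.logspace(-2, 2, 200)
g_mild = loop_gain_through_perturbation(K=1.0, delta_bound=0.2, omega=omega)
g_aggr = loop_gain_through_perturbation(K=50.0, delta_bound=0.2, omega=omega)

# A more aggressive controller pushes more gain through the perturbation loop:
print(g_mild.max(), g_aggr.max())
```

The aggressive controller drives the worst-case loop gain through ∆P above 1, while the mild one keeps it well below 1: exactly the implicit robustness argument for keeping u small.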
1.3 Exercise

Figure: a unity feedback loop in which the controller Ci drives the process P, and the
disturbance d is added to the process output to form y.

Let the true process be a delay of unknown value θ:

Pt = e−sθ (1.2)

0 ≤ θ ≤ .01 (1.3)

Let the nominal model be given by the unity transfer:

P = 1 (1.4)

Let there be some unknown disturbance d, additive to the output, consisting of a single
sine wave (ω = 25π):

d = sin (25πt) (1.5)

By an appropriate controller Ci the disturbance will be reduced and the output will be:

y(t) = ŷ sin (25πt + φ) (1.6)

Define the performance of the controlled system in steady state by:

− ln |ŷ| (1.7)

• a) Design a proportional controller C1 = K for the nominal model P, so completely
ignoring the model uncertainty, such that |ŷ| is minimal and thus the performance is
maximal. Plot the actual performance as a function of θ.

• b) Design a proportional controller C2 = K by incorporating the knowledge about the
model uncertainty ∆P = e−sθ − 1, where θ is unknown apart from its range. Robust
stability is required. Hint: analyse the Nyquist plot. Plot again the actual performance
as a function of θ.

• c) The same conditions as indicated sub b), but now for an integrating controller
C3 = K/s.

Possible actuator saturation can be ignored in this academic example. If you have
expressed the performance as a function of θ in the form:

− ln |ŷ| = − ln |X(θ) + jY (θ)| (1.8)

the following MATLAB program can help you to compute the actual function and to plot it:

>> for k=1:100
     theta(k)=k/10000;
     perf(k)=-log(sqrt(X(theta(k))^2+Y(theta(k))^2));
   end;
>> plot(theta,perf);
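As a sketch of what part a) amounts to (in Python rather than MATLAB, and with an arbitrarily assumed gain K = 10), the actual performance can also be evaluated directly from the true loop 1 + K e−jωθ at ω = 25π, without deriving X(θ) and Y(θ) by hand first:

```python
import numpy as np

K = 10.0                             # assumed proportional gain, for illustration only
omega = 25 * np.pi                   # frequency of the disturbance d

theta = np.arange(1, 101) / 10000.0  # delays 0 < theta <= 0.01, as in the exercise
# With C1 = K and the true process e^{-s theta}, the residual disturbance
# amplitude is y_hat = 1/|1 + K e^{-j omega theta}|.
y_hat = 1.0 / np.abs(1.0 + K * np.exp(-1j * omega * theta))
perf = -np.log(y_hat)                # performance -ln|y_hat|

# For theta -> 0 this approaches the nominal value ln(1 + K);
# the performance deteriorates as the unmodelled delay grows.
print(perf[0], np.log(1.0 + K))
```

Plotting perf against theta reproduces the curve asked for in part a): best at θ = 0 and degrading with the unmodelled delay.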
Chapter 2

What about LQG?

Figure 2.1: Block scheme of LQG control

Before submerging into all details of robust control, it is worthwhile to show how the
classical interpretation of LQG gives no clues to treat robustness. Later in this course
we will see how the accomplishments of LQG control can be used and what LQG means in
terms of robust control. At the moment we can only show why the LQG control, as presented
in the course "modern control theory", is leading to a dead end when robustness enters
the control goals. This short treatment is a summarised display of the article [10],
written just before the emergence of H∞ control.

Given a linear, time invariant model of a plant in state space form:

ẋ = Ax + Bu + v
y = Cx + w

where u is the control input, x is the state vector, y is the measured output, v is the
state disturbance and w is the measurement noise. This multivariable process is assumed
to be completely detectable and reachable. Fig. 2.1 intends to recapitulate the setup of
LQG control, where the state feedback matrix L and the Kalman gain K are obtained from
the well known criteria to be minimised:

L = arg min E{xT Qx + uT Ru} (2.1)

K = arg min E{(x − x̂)T (x − x̂)} (2.2)

for nonnegative Q and positive definite R. Certainly, the closed loop LQG scheme is
nominally stable, but the crucial question is whether stability is possibly lost if the
real system under study no longer corresponds to a model of the form {A, B, C}. Note that
we only have the model parameters {A, B, C} available and may assume, at best, that the
true parameters {At, Bt, Ct} are very close (in some norm). This subtlety is caused by
the fact that the real process, represented by the state space matrices {At, Bt, Ct}, is
analysed with the model parameters {A, B, C}.

The loss of robust stability can best be illustrated by a numerical example. Consider a
very ordinary, stable and minimum phase transfer function:

P(s) = (s + 2) / ((s + 1)(s + 3)) (2.3)

which admits the following state space representation:

ẋ = [ 0 1 ; −3 −4 ] x + [ 0 ; 1 ] u + v
y = [ 2 1 ] x + w (2.4)

where v and w are independent white noise sources of variances:

E{vvT} = [ 1225 −2135 ; −2135 3721 ] , E{w2} = 1 (2.5)

and the control criterion given by:

E{xT Qx + u2} (2.6)

for a fixed nonnegative weighting matrix Q built from the constants √2800, √35 and 80.
From this last criterion we can easily obtain the state feedback matrix L by solving the
corresponding Riccati equation.

If we were able to feed back the real states x, the stability properties could easily be
studied by analysing the loop transfer L(sI − A)−1B as indicated in Fig. 2.2. The
feedback loop is then interrupted at the cross at input u to obtain the loop transfer
(LT).

Figure 2.2: Real state feedback, loop transfer (LT).

All we can do is substitute the model parameters for the unknown process parameters and
study the Nyquist plot of this loop transfer. The Nyquist plot is drawn in Fig. 2.3. You
will immediately notice that this curve stays far from the endangering point −1, so that
stability robustness is guaranteed.

Figure 2.3: Various Nyquist curves.

This is all very well, but in practice we cannot measure all states directly. We have to
be satisfied with estimated states x̂, so that the actual feedback is brought about
according to Fig. 2.4; the full feedback controller is as indicated by the dashed box.

Figure 2.4: Feedback with observer.

Check for yourself that cutting the loop at cross (1) would lead to the same loop
transfer as before, under the assumption that the model and process parameters are
exactly the same (then e = 0!). In practice they are not, so we have to interrupt the
true loop at e.g. cross (2), yielding the loop transfer:

L(sI − A + KC + BL)−1K Ct(sI − At)−1Bt (2.7)

Amazingly, the robustness is now completely lost and we even have to face conditional
stability: if, e.g. by aging, the process gain decreases, the Nyquist curve shrinks
towards the origin and soon the point −1 is trespassed, causing instability. And then we
do not even talk about robustness of the complete performance.

An obvious idea is to modify the Kalman gain K in some way, such that the loop transfer
resembles the previous, robust loop transfer. This can indeed be accomplished in case of
stable and minimum phase processes. Without entering into many details, the procedure is
in main lines: put K equal to qBW, where W is a nonsingular matrix and q a positive
constant. If we let q increase in the (thus obtained) loop transfer:

L(sI − A + qBWC + BL)−1 qBW C(sI − A)−1B (2.8)

the term qBWC in the inverted matrix will dominate and thus almost completely annihilate
the same expression qBWC(sI − A)−1B that follows it, and we are indeed left with the
simple loop transfer ≈ L(sI − A)−1B. In Fig. 2.3 some loop transfers for increasing q
have been drawn, and indeed the transfer converges to the original robust loop transfer.
It appears that some observer poles (the real cause of the problem) shift to the zeros of
P and cancel out, while the others are moved to −∞. So we have sacrificed our optimal
observer for obtaining sufficient robustness: we are dealing now with very extreme
entries in K, which will cause a very high impact of the measurement noise w, and we have
implemented a completely nonoptimal Kalman gain as far as disturbance reduction is
concerned.

Alternatively, we could have taken the feedback matrix L as a means to effect robustness.
Along similar lines we would then find extreme entries in L, so that certainly the
actuator would saturate. Then this saturation would be the price for robustness. Or we
could of course try to distribute the pain over both K and L, but we have no clear means
to balance the increase of the robustness against the decrease of the remaining
performance. On top of that, by departing from LQG we have implicitly confined ourselves
to the limited structure of the total controller as given in Fig. 2.1, where the only
tunable parameters are K and L.

Conclusively, we thus have to admit that the straightforward approach of LQG is not the
proper way: we first ought to define and quantify the control aims very clearly (see next
chapter) in order to be able to weight them relatively, and then come up with some
machinery that is able to design controllers in the face of all these weighted aims.
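The gains in (2.1) and (2.2) follow from two algebraic Riccati equations. The sketch below computes them for the state space model (2.4)-(2.5) using scipy; since the exact weighting matrix of the numerical example does not reproduce here, an illustrative Q = I with R = 1 is assumed instead, which suffices to verify nominal stability of regulator and observer:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])
V = np.array([[1225.0, -2135.0], [-2135.0, 3721.0]])  # E{vv^T}
W = np.array([[1.0]])                                  # E{w^2}

# Illustrative LQR weights (the course example uses a specific Q, not repeated here)
Q = np.eye(2)
R = np.array([[1.0]])

# State feedback: L = R^{-1} B^T X, with X from the control Riccati equation
X = solve_continuous_are(A, B, Q, R)
L = np.linalg.solve(R, B.T @ X)

# Kalman gain: K = Y C^T W^{-1}, with Y from the (dual) filter Riccati equation
Y = solve_continuous_are(A.T, C.T, V, W)
K = Y @ C.T @ np.linalg.inv(W)

# Nominal stability: both A - B L and A - K C must be Hurwitz
print(np.linalg.eigvals(A - B @ L).real, np.linalg.eigvals(A - K @ C).real)
```

Replaying the chapter's argument then amounts to comparing the Nyquist plots of L(sI − A)−1B and of the observer-based loop transfer (2.7) for perturbed {At, Bt, Ct}.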
2. L is the state feedback gain. EXERCISE 19 2.1 Exercise Above block scheme represents a process P of ﬁrst order disturbed by white state noise v and independent white measurement noise w. • a) If we do not penalise the control signal u. K is the Kalman observer gain based upon the known variances of v and w.) What can you do if the resultant solution is not robust? .(Do not try to actually do the computations.1. what would be the optimal L? Could this be allowed here? • b) Suppose that for this L the actuator is not saturated. Is the resultant controller C robust (in stability)? Is it satisfying the 450 phase margin? • c) Consider the same questions when P = s(s+1)1 and in particular analyse what you have to compute and how.
Chapter 3

Control goals

In this chapter we will list and analyse the various goals of control in more detail. The
relevant transfer functions will be defined and named, and it will be shown how some
groups of control aims are in conflict with each other. To start with, we reconsider the
block scheme of a simple configuration in Fig. 3.1, which is only slightly different from
Fig. 1.1 in chapter 1.

Figure 3.1: Simple control structure

The process or plant (the word "system" is usually reserved for the total, controlled
structure) incorporates the actuator. In general the actuator will be made sufficiently
broadbanded by proper control loops, and all possibly remaining defects are supposed to
be represented in the transfer P. Actuator disturbances are combined with the output
disturbance d by computing, or rather estimating, their effect at the output of the
plant. Therefore one should know the real plant transfer Pt, consisting of the nominal
model transfer P plus the possible additive model error ∆P. As only the nominal model P
and some upper bound for the model error ∆P are known, it is clear that only upper bounds
for the equivalent of the actuator disturbances in the output disturbance d can be
established.

Notice that we have made the sensor noise explicit in η, because the ultimate control
performance highly depends on the quality of measurement: the resolution of the sensor
puts an upper limit to the accuracy of the output control, as will be shown. Basically, a
good quality sensor has a flat frequency response over a much broader band than the
process transfer; the sensor itself has a transfer function unequal to 1, but in that
case the sensor transfer may be neglected. Only in case the sensor transfer is not
sufficiently broadbanded (easier to manufacture and thus cheaper), a proper block has to
be inserted in the feedback scheme just before the sensor noise addition. In general one
will avoid this. The same remarks, as made for the sensor, hold for the actuator. The
effects of model errors (or system perturbations) are not yet made explicit in Fig. 3.1
but will be discussed later in the analysis of robustness.

Next we will elaborate on various common control constraints and aims. The constraints
can be listed as stability, robust stability and (avoidance of) actuator saturation.
Within the freedom left by these constraints, one wants to optimise, in a weighted
balance, aims like disturbance reduction and good tracking, without introducing too much
effect of the sensor noise and while keeping this total performance on a sufficient level
in the face of the system perturbations, i.e. performance robustness against model
errors. In detail:

3.1 Stability.

Unless one is designing oscillators or systems in transition, the closed loop system is
required to be stable. This can be obtained by claiming that, nowhere in the closed loop
system, some finite disturbance can cause other signals in the loop to grow to infinity:
the so-called BIBO-stability from Bounded Input to Bounded Output. So certainly the
straight transfer between the reference input r and the output y, given by:

y = P C(I + P C)−1 r (3.1)

has to be stable. But this alone is not sufficient as, in the computation of this
transfer, possibly unstable poles may vanish in a pole-zero cancellation. Another
possible input position of stray signals can be found at the actual input of the plant,
additive to what is indicated as x (think e.g. of drift of integrators). Let us define it
by dx. Then also the transfer of dx to, say, y has to be checked for stability, which
transfer is given by:

y = (I + P C)−1 P dx = P (I + CP )−1 dx (3.2)

Consequently, for this simple scheme we distinguish four different transfers, from r and
dx to y and x. Ergo all corresponding transfers have to be checked on possible unstable
poles.

3.2 Disturbance reduction.

Without feedback the disturbance d is fully present in the real output y. By means of the
feedback the effect of the disturbance can be influenced and at least be reduced in some
frequency band. The closed loop effect can be easily computed as read from:

y = P C(I + P C)−1 (r − η) + (I + P C)−1 d (3.3)

The underbraced expression (I + P C)−1 represents the Sensitivity S of the output to the
disturbance, thus defined by:

S = (I + P C)−1 (3.4)

If we want to decrease the effect of the disturbance d on the output y, we thus have to
choose the controller C such that the sensitivity S is small in the frequency band where
d has most of its power, or where the disturbance is most "disturbing".

3.3 Tracking.

Especially for servo controllers, but in fact for all systems where a reference signal is
involved, there is the aim of letting the output track the reference signal with a small
error at least in some tracking band. Let us deﬁne the tracking error e in our simple
system by:
e := r − y = (I + P C)−1 (r − d) + P C(I + P C)−1 η    (3.5)
             \______ S ______/    \_______ T _______/
Note that e is the real tracking error and not the measured tracking error observed as
signal u in Fig. 3.1, because the latter incorporates the effect of the measurement
noise substantially diﬀerently. In equation 3.5 we recognise (underbraced) the sensitivity
as relating the tracking error to both the disturbance and the reference signal r. It is
therefore also, somewhat awkwardly, called the “inverse return difference operator”. Whatever the
name, it is clear that we have to keep S small in both the disturbance and the tracking
band.
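As a small numerical sketch (with a hypothetical plant P(s) = 1/(s+1) and integral controller C(s) = 10/s, not taken from the text), the frequency dependence of the sensitivity can be inspected directly:

```python
# Hypothetical example: plant P(s) = 1/(s+1), integral controller C(s) = 10/s.
def P(s): return 1 / (s + 1)
def C(s): return 10 / s

def S(s):
    """Sensitivity S = 1/(1 + PC): transfer from d (and r) to the tracking error."""
    return 1 / (1 + P(s) * C(s))

# |S(jw)| is small in the low frequency band (good disturbance reduction and
# tracking there) and tends to 1 at high frequencies.
for w in (0.01, 0.1, 1.0, 100.0):
    print(f"w = {w:6}: |S(jw)| = {abs(S(1j * w)):.4f}")
```

The integral action makes |S| roll off towards zero at low frequencies, exactly the band where disturbance reduction and tracking are wanted.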
3.4 Sensor noise avoidance.
Without any feedback it is clear that the sensor noise will not have any inﬂuence on the real
output y. On the other hand the greater the feedback the greater its eﬀect in disrupting
the output. So we have to watch that, in our enthusiasm to decrease the sensitivity, we are
not introducing too much sensor noise effects. This is actually reminiscent of the optimal
Kalman gain. As the reference r is a completely independent signal, just compared with
y in e, we may as well study the eﬀect of η on the tracking error e in equation 3.5. The
coeﬃcient (relevant transfer) of η is then given by:
T = P C(I + P C)−1 (3.6)
and denoted as the complementary sensitivity T . This name is induced by the following
simple relation that can easily be veriﬁed:
S+T =I (3.7)
and for SISO (Single Input Single Output) systems this turns into:
S+T =1 (3.8)
This relation has a crucial and detrimental inﬂuence on the ultimate performance of the
total control system! If we want to choose S very close to zero for reasons of disturbance
and tracking we are necessarily left with a T close to 1 which introduces the full sensor
noise in the output and vice versa. Ergo optimality will be some compromise, the more so
because, as we will see, some aims relate to S and others to T.
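The identity S + T = 1 is easy to verify numerically for any SISO loop; the plant and controller below are hypothetical examples:

```python
# Hypothetical second order plant and PI-like controller.
def P(s): return 1 / (s**2 + 0.4 * s + 1)
def C(s): return 5 * (s + 1) / s

for w in (0.1, 1.0, 10.0):
    s = 1j * w
    L = P(s) * C(s)
    S = 1 / (1 + L)      # sensitivity
    T = L / (1 + L)      # complementary sensitivity
    assert abs(S + T - 1) < 1e-12
print("S + T = 1 on all sampled frequencies")
```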
3.5 Actuator saturation avoidance.
The input signal of the actuator is indicated by x in Fig. 3.1 because the actuator was
thought to be incorporated into the plant transfer P . This signal x should be restricted
to the input range of the actuator to avoid saturation. Its relation to all exogenous inputs
is simply derived as:
x = (I + C P)−1 C (r − η − d) = C(I + P C)−1 (r − η − d)    (3.9)
                                \______ R ______/
The relevant (underbraced) transfer is named control sensitivity for obvious reasons and
symbolised by R thus:
R = C(I + P C)−1 (3.10)
In order to keep x small enough we have to make sure that the control sensitivity R is small
in the bands of r, η and d. Of course with proper relative weightings and “small” still to
be deﬁned. Notice also that R is very similar to T apart from the extra multiplication by
P in T. We will interpret later that this P then functions as a weighting that cannot be
inﬂuenced by C as P is ﬁxed. So R can be seen as a weighted T and as such the actuator
saturation claim opposes the other aims related to S. Also in LQGdesign we have met
this contradiction in a more twofaced disguise:
• Actuator saturation was prevented by proper choice of the weights R and Q in the
design of the state feedback for disturbance reduction.
• The effect of the measurement noise was properly weighed in the observer design.
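The relation R = C(I + PC)−1 and its weighted version T = P·R can be checked numerically; the P and C below are hypothetical:

```python
# Hypothetical plant and controller.
def P(s): return 1 / (s + 1)
def C(s): return 20 / (s + 2)

def R(s):  # control sensitivity: (r - eta - d) -> plant input x
    return C(s) / (1 + P(s) * C(s))

def T(s):  # complementary sensitivity
    L = P(s) * C(s)
    return L / (1 + L)

# T is simply R weighted by the fixed plant: T = P * R.
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    assert abs(T(s) - P(s) * R(s)) < 1e-12
print("T = P * R confirmed on the sampled frequencies")
```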
Also the stability was assured in LQG, but its robustness and the robustness of the total
performance were lacking and hard to introduce. In this H∞ context this comes quite
naturally as follows:
3.6 Robust stability.
Robustness of the stability in the face of model errors will be treated here rather briefly
as more details will follow in chapter 5. The whole concept is based on the socalled
small gain theorem which trivially applies to the situation sketched in Fig. 3.2 . The
Figure 3.2: Closed loop with loop transfer H.
stable transfer H represents the total looptransfer in a closed loop. If we require that the
modulus (amplitude) of H is less than 1 for all frequencies it is clear from Fig. 3.3 that the
polar curve cannot encompass the point 1 and thus we know from the Nyquist criterion
that the loop will always constitute a stable system. So stability is guaranteed as long as:
Figure 3.3: Small gain stability in Nyquist space
‖H‖∞ := sup_ω |H(jω)| < 1    (3.11)
“Sup” stands for supremum, which effectively indicates the maximum. (Only in the case
that the supremum is approached within any small distance but never actually attained
is it not allowed to speak of a maximum.) Notice that we have used no information concerning
the phase angle, which is typical of H∞. In the above formula we get the first taste of H∞ by
the simultaneous definition of the infinity norm, indicated by ‖·‖∞. More about this in
chapter 5 where we also learn that for MIMO systems the small gain condition is given
by:
‖H‖∞ := sup_ω σ̄(H(jω)) < 1    (3.12)
Here, σ̄ denotes the maximum singular value (always real) of the transfer H (for the ω
under consideration).
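For SISO transfers the infinity norm can be estimated by simply gridding the frequency axis; the transfer below is a hypothetical example with a known resonance peak:

```python
def H(s):
    # hypothetical stable transfer with a lightly damped resonance at w ≈ 1
    return 1 / (s**2 + 0.2 * s + 1)

# Crude estimate of ||H||_inf = sup_w |H(jw)| on a logarithmic grid.
ws = [10 ** (k / 200) for k in range(-600, 601)]   # 1e-3 ... 1e3 rad/s
hinf = max(abs(H(1j * w)) for w in ws)
print(f"||H||_inf ≈ {hinf:.3f}")   # analytic peak 1/(0.2*sqrt(1-0.01)) ≈ 5.03
```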
Altogether, these conditions may seem somewhat exaggerated, because transfers less
than one are not so common. The actual application is therefore somewhat “nested” and
vividly referred to in the literature as “the baby small gain theorem”, illustrated in
Fig. 3.4. In the upper block scheme all relevant elements of Fig. 3.1 have been displayed
Figure 3.4: Baby small gain theorem for additive model error.
in case we have to deal with an additive model error ∆P . We now consider the “baby”
loop as indicated containing ∆P explicitly. The lower transfer between the output and
the input of ∆P , as once again illustrated in Fig. 3.5, can be evaluated and happens to
Figure 3.5: Control sensitivity guards stability robustness for additive model error.
be equal to the control sensitivity R as shown in the lower blockscheme. (Actually we get
a minus sign that can be joined to ∆P . Because we only consider absolute values in the
small gain theorem, this minus sign is irrelevant: it just causes a phase shift of 180° which
leaves the conditions unaltered.) Now it is easy to apply the small gain theorem to the
total loop transfer H = R∆P. The infinity norm will appear in chapter 5 to be an induced
operator norm for mappings between identical signal spaces L2, and as such it satisfies a
Schwarz-type (submultiplicative) inequality, so that we may write:
‖R ∆P‖∞ ≤ ‖R‖∞ ‖∆P‖∞    (3.13)
Ergo, if we can guarantee that:
‖∆P‖∞ ≤ 1/α    (3.14)
a suﬃcient condition for stability is:
‖R‖∞ < α    (3.15)
If all we require from ∆P is stated in equation 3.14, then it is easy to prove that the
condition on R is also a necessary condition. Still this is a rather crude condition but it
can be reﬁned by weighting over the frequency axis as will be shown in chapter 5. Once
again from Fig. 3.5 we recognise that the robust stability constraint effectively limits
the feedback, from the point where both the disturbance and the output of the model
error block ∆P enter, to the input of the plant, such that the loop transfer is less than
one. The smaller the error bound 1/α, the greater the feedback α can be and vice versa!
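A minimal sketch of this bookkeeping (hypothetical P and C): estimate ‖R‖∞ on a frequency grid and read off the largest additive error bound that the small gain argument tolerates:

```python
# Hypothetical plant and (static) controller.
def P(s): return 1 / (s + 1)
def C(s): return 4.0

def R(s):  # control sensitivity
    return C(s) / (1 + P(s) * C(s))

# Estimate ||R||_inf on a logarithmic frequency grid.
ws = [10 ** (k / 100) for k in range(-300, 301)]
Rinf = max(abs(R(1j * w)) for w in ws)
max_delta = 1 / Rinf   # largest additive error bound 1/alpha tolerated by (3.14)-(3.15)
print(f"||R||_inf ≈ {Rinf:.3f}, guaranteed stability for ||dP||_inf < {max_delta:.3f}")
```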
We so analysed the eﬀect of additive model error ∆P . Similarly we can study the
eﬀect of multiplicative error which is very easy if we take:
Ptrue = P + ∆P = (I + ∆)P (3.16)
where obviously ∆ is the bounded multiplicative model error. (Together with P it evi
dently constitutes the additive model error ∆P .) In similar blockschemes we now get Figs.
3.6 and 3.7. The “baby” loop now contains ∆ explicitly and we notice that transfer P
Figure 3.6: Baby small gain theorem for multiplicative model error.
Figure 3.7: Complementary sensitivity guards stability robustness for multiplicative model
error
is somewhat “displaced” out of the additive perturbation block. The result is that ∆ sees
itself fed back by (minus) the complementary sensitivity T . (The P has, so to speak,
been taken out of ∆P and adjoined to R yielding T .) If we require that:
‖∆‖∞ ≤ 1/β    (3.17)
the robust stability follows from:
‖T ∆‖∞ ≤ ‖T‖∞ ‖∆‖∞ < 1    (3.18)
yielding as ﬁnal condition:
‖T‖∞ < β    (3.19)
Again proper weighting may reﬁne the condition.
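Analogously for the multiplicative case (hypothetical P and C): the tolerated relative model error is 1/‖T‖∞:

```python
# Hypothetical plant and controller.
def P(s): return 2 / (s + 1)
def C(s): return 3 / s

def T(s):
    L = P(s) * C(s)
    return L / (1 + L)

ws = [10 ** (k / 100) for k in range(-300, 301)]
Tinf = max(abs(T(1j * w)) for w in ws)
# Robust stability is guaranteed for every multiplicative error with
# ||D||_inf <= 1/beta as long as ||T||_inf < beta:
print(f"||T||_inf ≈ {Tinf:.3f}, tolerated relative model error ≈ {1 / Tinf:.1%}")
```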
3.7 Performance robustness.
Till now, all aims could be grouped around either the sensitivity S or the complementary
sensitivity T . Once we have optimised some balanced criterion in both S and T and
thus obtained a nominal performance, we wish that this performance is kept more or less,
irrespective of the inevitable model errors. Consequently, performance robustness requires
that S and T change only slightly, if P is close to the true transfer Pt . We can analyse
the relative errors in these quantities for SISO plants:
(St − S)/St = [(1 + Pt C)−1 − (1 + P C)−1] / (1 + Pt C)−1    (3.20)
            = (1 + P C − 1 − Pt C)/(1 + P C) = −(∆P/P) · P C/(1 + P C) = −∆ T    (3.21)

and:

(Tt − T)/Tt = [Pt C(1 + Pt C)−1 − P C(1 + P C)−1] / [Pt C(1 + Pt C)−1]    (3.22)
            = (Pt C − P C)/(Pt C(1 + P C)) = (∆P/P)(P/Pt) · 1/(1 + P C) ≈ ∆ S    (3.23)
As a result we note that in order to keep the relative change in S small we have to take
the product of ∆ and T small. The smaller the error bound is, the greater a T we can
afford and vice versa. But what is astonishing is that the smaller S is, and consequently
the greater the complement T is (see equation 3.7), the less robust is this performance
measured in S. The same story holds for the performance measured in T where the
robustness depends on the complement S. This explains the remark in chapter 1 that
increase of performance for a particular nominal model P decreases its robustness for
model errors. So also in this respect the controller will have to be a compromise!
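Note that for SISO systems the relation (3.21) is in fact exact, which is easily checked numerically (hypothetical P, C and error size):

```python
# Hypothetical nominal plant, static controller and 5% multiplicative error.
def P(s): return 1 / (s + 1)
def C(s): return 2.0

delta = 0.05                       # Pt = (1 + D) P
def Pt(s): return (1 + delta) * P(s)

s = 0.5j
S  = 1 / (1 + P(s) * C(s))
St = 1 / (1 + Pt(s) * C(s))
T  = P(s) * C(s) / (1 + P(s) * C(s))

lhs = (St - S) / St
rhs = -delta * T
assert abs(lhs - rhs) < 1e-12      # (3.21) holds exactly for S
```

The companion relation (3.23) for T, by contrast, holds only approximately, since it still contains the factor P/Pt ≈ 1.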
Summary
We can distinguish two competitive groups because S + T = I. One group centered
around the sensitivity that requires the controller C to be such that S is “small” and can
be listed as:
• disturbance rejection
• tracking
• robustness of T
The second group centers around the complementary sensitivity and requires the controller
C to minimise T :
• avoidance of sensor noise
• avoidance of actuator saturation
• stability robustness
• robustness of S
If we were dealing with real numbers only, the choice would be very easy and limited.
Remembering that
S = (I + P C)−1 (3.24)
T = P C(I + P C)−1    (3.25)
a large C would imply a small S but T ≈ I while a small C would yield a small T and
S ≈ I. Besides, for no feedback, i.e. C = 0, necessarily T = 0 and S = I. This
is also true for very large ω when all physical processes necessarily have a zero transfer
(P C → 0). So ultimately for very high frequencies, the tracking error and the disturbance
effect are inevitably 100%.
This may give some rough ideas of the effect of C, but the real impact is more difficult
to assess, as:
• We deal with complex numbers.
• The transfer may be multivariable and thus we encounter matrices.
• The crucial quantities S and T involve matrix inversions (I + P C)−1
• The controller C may only be chosen from the set of stabilising controllers.
It happens that we can circumvent the last two problems, in particular when we are dealing
with a stable transfer P . This can be done by means of the internal model control concept
as shown in the next chapter. We will later generalise this also for unstable nominal
processes.
3.8 Exercises
3.1:

[Block scheme: standard feedback loop of controller C and plant P with reference r, exogenous inputs u and d entering the loop, and output y.]
• a) Derive by reasoning that in the above scheme internal stability is guaranteed if
all transfers from u and d to u and y are stable.
• b) Analyse the stability for
P = 1/(1 − s)    (3.26)

C = (1 − s)/(1 + s)    (3.27)
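A numerical sketch of what part b) probes: with the given P and C the loop transfer PC = 1/(1+s) hides the unstable pole of P, but the disturbance-to-output transfer still contains it:

```python
# P and C as given above; the loop transfer PC = 1/(1+s) looks stable because
# the unstable pole of P at s = 1 is cancelled by the zero of C -- it is hidden,
# not stabilised.
def P(s): return 1 / (1 - s)
def C(s): return (1 - s) / (1 + s)

def r_to_y(s): return P(s) * C(s) / (1 + P(s) * C(s))  # reference to output
def d_to_y(s): return P(s) / (1 + P(s) * C(s))         # disturbance to output

# Near the cancelled pole s = 1 the r -> y transfer stays bounded while the
# d -> y transfer blows up, revealing the internal instability:
for eps in (1e-2, 1e-4, 1e-6):
    s = 1 + eps
    print(f"s = 1 + {eps:g}: |r->y| = {abs(r_to_y(s)):.3f}, |d->y| = {abs(d_to_y(s)):.3e}")
```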
3.2:
d
r + + y
 C
1
  C2  ? 
+
P 
6
–
C3 ?
Which transfers in the given scheme are relevant for:
• a) disturbance reduction
• b) tracking
Chapter 4

Internal model control

In the internal model control scheme, the controller explicitly contains the nominal model of the process and it appears that, in this structure, it is easy to denote the set of all stabilising controllers. Furthermore, the sensitivity and the complementary sensitivity, expressed in process and controller transfer, take very simple forms, without inversions. A severe condition for application is that the process itself is a stable one.

In Fig. 4.1 we repeat the familiar conventional structure while in Fig. 4.2 the internal model structure is shown.

Figure 4.1: Conventional control structure.

Figure 4.2: Internal model controller concept.

The difference actually is the nominal model which is fed by the same input as the true process, while only the difference of the measured and simulated output is fed back. Of course, it is allowed to subtract the simulated output from the feedback loop after the entrance of the reference, yielding the structure of Fig. 4.3, where we identify the dashed block as the conventional controller C. The similarity with the conventional structure is then obvious. So it is easy to relate C and the internal model control block Q as:

C = Q(I − P Q)−1    (4.1)
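The pair of relations C = Q(I − PQ)−1 and Q = C(I + PC)−1 can be verified numerically for a SISO example (hypothetical P and Q):

```python
# Hypothetical stable SISO plant P and stable IMC parameter Q.
def P(s): return 1 / (s + 1)
def Q(s): return 2 / (s + 3)

def C(s):                 # conventional controller from (4.1): C = Q(1 - PQ)^(-1)
    return Q(s) / (1 - P(s) * Q(s))

# Inverting the relation must give Q back: Q = C(1 + PC)^(-1), i.e. Q = R.
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    assert abs(C(s) / (1 + P(s) * C(s)) - Q(s)) < 1e-12
print("C <-> Q relations consistent")
```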
Figure 4.3: Equivalence of the ‘internal model’ and the ‘conventional’ structure.

From (4.1) we get: C − C P Q = Q (4.2), so that reversely:

Q = (I + C P)−1 C = C(I + P C)−1 = R    (4.3)

Remarkably, the Q equals the previously encountered control sensitivity R! The reason behind this becomes clear if we consider the situation where the nominal model P exactly equals the true process Pt. As outlined before, we have no other choice than taking P = Pt for the synthesis and analysis of the controller; refinement can only occur by using the information about the model error ∆P, which will be done later. If then P = Pt, it is obvious from Fig. 4.2 that only the disturbance d and the measurement noise η are fed back, because the outputs of P and Pt are equal. Since only d and η are fed back, we may draw the equivalent as in Fig. 4.4.

Figure 4.4: Internal model structure equivalent for P = Pt.

So, effectively, there seems to be no feedback in this structure and the complete system is stable iff (i.e. if and only if) the transfer Q = R is stable. Also the condition of stability of P is then trivial: P was already stable by condition, because there is no way to correct for ever increasing but equal outputs of P and Pt (due to instability) by feedback. This is very revealing, as we now simply have the complete set of all controllers that stabilise P! We only need to search for proper stabilising controllers C by studying the stable transfers Q. Furthermore, as there is no actual feedback in Fig. 4.4, the sensitivity and the complementary sensitivity contain no inversions, but take so-called affine expressions in the transfer Q, which are easily derived as:

T = P R = P Q    (4.4)

S = I − T = I − P Q

Extreme designs are now immediately clear:

• minimal complementary sensitivity T:

T = 0 → S = I → Q = 0 → C = 0    (4.5)

There is obviously neither feedback nor control, causing:
– no measurement influence (T = 0)
– no actuator saturation (R = Q = 0)
– 100% disturbance in output (S = I)
– 100% tracking error (S = I)
– stability (Pt was stable)
– robust stability (R = Q = 0 and T = 0)
– robust S (T = 0), but this “performance” can hardly be worse.

• minimal sensitivity S:

S = 0 → T = I → Q = P −1 → C = ∞    (4.6)

if at least P −1 exists and is stable. We get infinite feedback, causing:
– all disturbance is eliminated from the output (S = 0)
– y tracks r exactly (S = 0)
– y is fully contaminated by measurement noise (T = I)
– stability only in case Q = P −1 is stable
– very likely actuator saturation (Q = R will tend to infinity, see later)
– questionable robust stability (Q = R will tend to infinity, see later)
– robust T (S = 0), but this “performance” can hardly be worse too.

Once again it is clear that a good control should be a well designed compromise between the indicated extremes. What is left is to analyse the possibility of the above last sketched extreme, where we needed that P Q = I and Q is stable. It is obvious that the solution could be Q = P −1 if P is square and invertible and the inverse itself is stable. If P is wide (more inputs than outputs) the pseudo-inverse would suffice under the condition of stability. If P is tall (less inputs than outputs) there is no solution though. For a SISO process, where P becomes a scalar transfer, the problem is more severe, because we can show that, even for SISO systems, the proposed solution yielding infinite feedback is not feasible for realistic, physical processes. Let us take a simple example:

P = (s − b)/(s + a),  a > 0, b > 0   →   P −1 = (s + a)/(s − b)    (4.7)

where the corresponding pole/zero plots are shown in Fig. 4.5.

Figure 4.5: Pole zero inversion of nonminimum phase, stable process.

Ergo, inversion of P turns poles into zeros and vice versa. It is clear that the original zeros of P have to live in the open (stable) left half plane, because they turn into the poles of P −1 that should be stable. In the given example, where this is not true, the inversion is not allowed. In fact poles and zeros in the open left half plane can easily be compensated for by Q. Also the poles in the closed right half plane cause no real problems, as the root-loci from them can be “drawn” over to the left plane in a feedback by putting zeros there in the controller. The real problems are due to the nonminimum phase zeros, i.e. the zeros in the closed right half plane. Processes which have zeros in the closed right half plane, named nonminimum phase, thus cause problems in obtaining a good performance in the sense of a small S.

But before doing so, we have to state that in fact all physical plants suffer more or less from this negative property. We need some extra notion about the numbers of poles and zeros, their definition and considerations for realistic, physical processes. Let np denote the number of poles and similarly nz the number of zeros in a conventional, SISO transfer function where denominator and numerator are factorised. We can then distinguish the following categories by the attributes:

proper if np ≥ nz
biproper if np = nz
strictly proper if np > nz
nonproper if np < nz

Any physical process should be proper because nonproperness would involve:

lim ω→∞ |P(jω)| = ∞    (4.8)

so that the process would effectively have poles at infinity. Such a process would have an infinitely large transfer at infinity and would certainly start oscillating at frequency ω = ∞. On the other hand a real process can neither be biproper, as it then should still have a finite transfer for ω = ∞ and at that frequency the transfer is necessarily zero. Consequently any physical process is by nature strictly proper. But this implies that:

lim ω→∞ P(jω) = 0    (4.9)

and thus P has effectively (at least) one zero at infinity, which is in the closed right half space! Take for example:

P = K/(s + a),  a > 0   →   P −1 = (s + a)/K    (4.10)

and consequently Q = P −1 cannot be realised as it is nonproper.

4.1 Maximum Modulus Principle.

The disturbing fact about nonminimum phase zeros can now be illustrated with the use of the so-called Maximum Modulus Principle, which claims:

∀H ∈ H∞ :  ‖H‖∞ ≥ |H(s)|, s ∈ C+    (4.11)

It says that for all stable transfers H (i.e. no poles in the right half plane denoted by C+) the maximum modulus on the imaginary axis is always greater than or equal to the maximum modulus in the right half plane. We will not prove this, but facilitate its acceptance by the following concept. Imagine that the modulus of a stable transfer function of s is represented by a rubber sheet above the s-plane. Zeros will then pinpoint the sheet to the zero, bottom level, while poles will act as infinitely high spikes lifting the sheet. Because of stability there are no poles and thus spikes in the right half plane. It is obvious that such a rubber landscape with mountains exclusively in the left half plane will get its heights in the right half plane only because of the mountains in the left half plane. If we cut it precisely at the imaginary axis we will notice only valleys at the right hand side. It is always going down at the right side and this is exactly what the principle tells. Because of the strict properness of the transfer, there is a zero at infinity, so that, in whatever direction we travel, ultimately the sheet will come to the bottom.

We are now in the position to apply the maximum modulus principle to the sensitivity function S of a nonminimum phase SISO process P:

‖S‖∞ = sup ω |S(jω)| ≥ |S(s)| s∈C+ = |1 − P Q| s=zn ∈C+ = 1    (4.12)

where zn (∈ C+) is any nonminimum phase zero of P. As a consequence we have to accept that for some ω the sensitivity has to be greater than or equal to 1. For that frequency the disturbance and the tracking errors will thus be minimally 100%! So for some band we will get disturbance amplification if we want to decrease it by feedback in some other (mostly lower) band. That seems to be the price. And reminding the rubber landscape, it is clear that this band, where |S| > 1, is the more low frequent the closer the troubling zero is to the origin of the s-plane! By proper weighting over the frequency axis we can still optimise a solution, though. For an appropriate explanation of this weighting procedure we first present the intermezzo of the next chapter about the necessary norms.

4.2 Summary.

It has been shown that internal model control can greatly facilitate the design procedure of controllers. It only holds for stable processes, and the generalisation to unstable systems has to wait until chapter 11. Limitations of control are recognised in the effects of nonminimum phase zeros of the plant, and in fact all physical plants suffer from these at least at infinity.
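A numerical illustration of the principle, using the plant of exercise 4.3 below with hypothetical values K = 1, β = 3: S equals 1 exactly at the right half plane zero, so the peak of |S(jω)| cannot lie below 1:

```python
# Hypothetical values K = 1, beta = 3 for the plant of exercise 4.3;
# P has a right half plane zero at s = 1.
K, beta = 1.0, 3.0
def P(s): return (s - 1) / (s + 2)
def C(s): return K * (s + 2) / (s + beta)
def S(s): return 1 / (1 + P(s) * C(s))

# S at the nonminimum phase zero equals 1 exactly, whatever K and beta:
assert abs(S(1.0) - 1) < 1e-12

# hence, by the maximum modulus principle, the peak of |S| on the jw-axis is >= 1:
ws = [10 ** (k / 100) for k in range(-300, 301)]
sup_S = max(abs(S(1j * w)) for w in ws)
print(f"sup |S(jw)| ≈ {sup_S:.3f}")   # necessarily >= 1
```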
4.3 Exercises

4.1:

[Block scheme: IMC setup with true process Pt, internal model P and IMC block Q; reference r, exogenous inputs u1 and u2, process output yt, model output y and control signal u.]

• a) Derive by reasoning that for IMC internal model stability is guaranteed if all transfers from r, u1 and u2 to yt, y and u are stable.
• b) To which simple a condition does this boil down if P = Pt?
• c) What if P ≠ Pt?

4.2:

For the general scheme let P = Pt = 1. Take all signal lines to be single. Suppose that d is white noise with power density Φdd = 1 and similarly that η is white noise with power density Φηη = .01.

• a) Design for an IMC setup a Q such that the power density Φyy is minimal. (As you are dealing with white noises all variables are constants independent of the frequency ω.) Compute Φyy, S, T, Q and C. What is the bound on ‖∆P‖∞ for guaranteed stability?
• b) In order not to saturate the actuator we now add the extra constraint Φuu < .1. Answer a) again under this condition. Is the controlled system more robust now?

4.3:

Given:

P = (s − 1)/(s + 2)    (4.13)

C = K(s + 2)/(s + β)    (4.14)

S = 1/(1 + P C)    (4.15)

• a) Show that for any K and β there is an ω for which |S| ≥ 1.
• b) We want to obtain good tracking for a low pass band as broad as possible. At least the ‘final error’ for a step input should be zero. What can we reach by variation of K and β? (MATLAB can be useful)
• c) The same question a) but now the zero of P is at −1.
Chapter 5

Signal spaces and norms

5.1 Introduction

In the previous chapters we defined the concepts of sensitivity and complementary sensitivity and we expressed the desire to keep both of these transfer functions ‘small’ in a frequency band of interest. In this chapter we will quantify in a more precise way what ‘small’ means. We will quantify the size of a signal and the size of a system, and we will be rather formal to combine precise definitions with good intuition.

A first section is dedicated to signals and signal norms. We then consider input-output systems and define the induced norm of an input-output mapping. The H∞ norm and the H2 norm of a system are defined and interpreted both for single input single output systems, as well as for multivariable systems.

5.2 Signals and signal norms

We will start this chapter with some system theoretic basics which will be needed in the sequel. In order to formalize concepts on the level of systems, we need to first recall some basics on signal spaces.

Many physical quantities (such as voltages, currents, temperatures, pressures) depend on time and can be interpreted as functions of time. Such functions quantify how information evolves over time and are called signals.

Definition 5.1 A signal is a function s : T → W where T ⊆ R is the time set and W is a set.

Here, the time set T indicates the time instances of interest. We will think of time as a one dimensional entity and we therefore assume that T ⊆ R. Typical examples of frequently encountered time sets are finite horizon discrete time sets T = {0, 1, 2, . . . , N}, infinite horizon discrete time sets T = Z+ or T = Z or, for sampled signals, T = {kτs | k ∈ Z} where τs > 0 is the sampling time. Examples of continuous time sets include T = R, T = R+ or intervals T = [a, b]. We distinguish between continuous time signals (T a possibly infinite interval of R) and discrete time signals (T a countable set).

The set W is called the signal space, which is the set in which a signal takes its values. The values which a physically relevant signal assumes are usually real numbers. However, complex valued signals, nonnegative signals, binary signals, angles and quantized signals are very common in applications. More often than not, a number of physical quantities are represented at once, which is why we introduce the signal space W explicitly. If we wish a signal s to express at instant t ∈ T a
total of q > 0 real valued quantities, then the signal space W consists of q copies of the set of real numbers, i.e.,

W = R × . . . × R   (q copies)

which is denoted as W = Rq. A signal s : T → Rq thus represents at each time instant t ∈ T a vector

s(t) = (s1(t), s2(t), . . . , sq(t))

where si(t), the i-th component, is a real number for each time instant t.

We will attach to each vector w = (w1, w2, . . . , wq) ∈ W its usual ‘length’

‖w‖ := (w1∗ w1 + w2∗ w2 + . . . + wq∗ wq)^{1/2}

which is the Euclidean norm of w. (Here, w∗ denotes the complex conjugate of the complex number w; if w = x + jy with x the real part and y the imaginary part of w, then w∗ = x − jy. This also covers the case where the signal space is a complex valued q-dimensional space, i.e., W = Cq for some q > 0.) If q = 1 this expresses the absolute value of w. This norm will be attached to the signal space W, and makes it a normed space.

Signals can be classified in many ways. We distinguish between continuous and discrete time signals, periodic and aperiodic signals, deterministic and stochastic signals. It is convenient to introduce various signal classifications.

5.2.1 Periodic and aperiodic signals

Definition 5.2 Suppose that the time set T is closed under addition, that is, for any two points t1, t2 ∈ T also t1 + t2 ∈ T. A signal s : T → W is said to be periodic with period P (or P-periodic) if s(t) = s(t + P), t ∈ T. A signal that is not P-periodic for any P is aperiodic.

Common time sets such as T = Z or T = R are closed under addition; finite time sets such as intervals T = [a, b] are not. The class of all periodic signals with time set T will be denoted by P(T). Well known examples of continuous time periodic signals are sinusoidal signals s(t) = A sin(ωt + φ) or harmonic signals s(t) = A e^{jωt}. Here, A, ω and φ are constants referred to as the amplitude, frequency (in rad/sec) and phase, respectively. These signals have frequency ω/2π (in Hertz) and period P = 2π/ω. We emphasize that the sum of two periodic signals does not need to be periodic. For example, s(t) = sin(t) + sin(πt) is aperiodic.

5.2.2 Continuous time signals

First, we consider signals which have finite energy and finite power. To introduce these signal classes, suppose that I(t) denotes the current through a resistance R producing a voltage V(t). The instantaneous power per Ohm is p(t) = V(t)I(t)/R = I²(t). Integrating this quantity over time leads to defining the total energy (in Joules); the per Ohm energy of the resistance is therefore ∫ from −∞ to ∞ of I²(t) dt Joules.
for continuous time signals is deﬁned as 1/p s p = s(t)p .1). In particular. Clearly. Recall from your very ﬁrst course of linear algebra that a norm is deﬁned as a realvalued function which assigns to each element s of a vector space a real number s . for harmonic signals s(t) = cejωt we have that s(t)2 = c2 so that Es = ∞ whenever c = 0. The energy content Es of s is deﬁned as ∞ Es := s(t)2 dt −∞ If Es < ∞ then s is said to be a (ﬁnite) energy signal. i. If Ps < ∞ then s is said to be a (ﬁnite) power signal.2. In general. Example 5. s22 = Es .2) i t∈T 1/2 s 2 = s(t)2 dt (5. It is easily seen that the power is independent of the initial time instant t0 in (5.1) does not change if P is replaced by nP .4) t∈T More generally.3) t∈T s 1 = s(t)dt (5. We emphasize that all nonzero ﬁnite power signals have inﬁnite energy.5 The sinusoidal signal s(t) = A sin(ωt+φ) is periodic with period P = 2π/ω.3 Let s be a signal deﬁned on the time set T = R. A signal which is periodic with period P is also periodic with period nP .4 Let s be a continuous time periodic signal with period P . one needs to check whether these quantities indeed deﬁne norms. the energy content of periodic signals is inﬁnite. if T = R. Remark 5. The power of s is deﬁned as 1 t0 +P Ps := s(t)2 dt (5.e the energy content of a signal is the same as the square of its 2norm. not all signals have ﬁnite energy. has inﬁnite energy and has power π/ω ω 2 2 A2 π Ps = A sin (ωt + φ) dt = sin2 (τ + φ) dτ = A2 /2. with 1 ≤ p < ∞. with the properties that . t∈T Note that these quantities are deﬁned for ﬁnite or inﬁnite time sets T . The most important norms associated with s are the inﬁnitynorm. It is in this sense that the power is independent of the period of the signal.6 To be precise. the pnorm. In case of the resistance.1) P t0 where t0 ∈ R. 2π −π/ω 2π −π Let s : T → Rq be a continuous time signal. However. SIGNALS AND SIGNAL NORMS 39 Deﬁnition 5. 
the twonorm and the onenorm deﬁned either over a ﬁnite or an inﬁnite interval T . the power of a (periodic) current I is measured per period and will be in Watt. Indeed. We therefore associate with periodic signals their power : Deﬁnition 5. where n is an integer. They are deﬁned as follows s ∞ = max sup si (t) (5.5. it is a simple exercise to verify that the right hand side of (5. called the norm of s.
Example 5. 2. ∞) and P[0. It is clear that s is uniquely deﬁned by these equations once an initial condition x(0) = x0 has been speciﬁed. s 2 and s 1 indeed deﬁne (signal) norms and have the properties 1. C). ∞). C). ∞). is symmetric and is called the observability gramian of the pair (A. Then s is equal to s(t) = CeAt x0 where we take t ≥ 0. L2 (T ) and L1 (T ) are normed linear signal spaces of continuous time signals.2 and 3 of a norm. Example 5. The matrix M has the same dimensions as A. We will drop the T in the above signal spaces whenever the time set is clear from the context. SIGNAL SPACES AND NORMS 1. s1 + s2 ≤ s1 + s2 for all s1 and s2 . dt (5. αs = α s for all α ∈ C.7 belongs to L∞ [0. ∞) and neither to L1 [0. 3.40 CHAPTER 5. P is not a linear space as the sum of two periodic signals need not be periodic. The observability gramian M is a solution of the equation AT M + M A + C T C = 0 which is the Lyapunov equation associated with the pair (A. For either ﬁnite or inﬁnite time sets T . consider the signal s(t) which is described by the diﬀerential equations dx = Ax(t). resp. Deﬁne L∞ (T ) = {s : T → W  s ∞ < ∞} L2 (T ) = {s : T → W  s 2 < ∞} L1 (T ) = {s : T → W  s 1 < ∞} P(T ) = {s : T → W  Ps < ∞} Then L∞ (T ). s ≥ 0 and s = 0 if and only if s = 0.7 The sinusoidal signal s(t) := A sin(ωt + φ) for t ≥ 0 has ﬁnite amplitude s ∞ = A but its twonorm and onenorm are inﬁnite.5) s(t) = Cx(t) where A and C are real matrices of dimension n × n and 1 × n. As an example.8 As another example. but not to L2 [0. The quantities deﬁned by s ∞ . the sinusoidal signal of Example 5. the space L2 (T ) is a Hilbert space with inner product deﬁned by . If the eigenvalues of A are in the lefthalf complex plane then ∞ T s 2 = 2 xT0 eA t C T CeAt x0 dt = xT0 M x0 0 with the obvious deﬁnition for M . The sets of signals for which the above quantities are ﬁnite will be of special interest.
    ⟨s₁, s₂⟩ = ∫_{t∈T} s₂ᵀ(t) s₁(t) dt.

Two signals s₁ and s₂ are orthogonal if ⟨s₁, s₂⟩ = 0. This is a natural extension of orthogonality in Rⁿ.
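Returning to Example 5.8, the gramian-based two-norm computation can be checked numerically. The following is a minimal sketch (the matrices A, C and the initial condition x₀ are illustrative choices, not taken from the text): the Lyapunov equation AᵀM + MA + CᵀC = 0 is solved with SciPy, and x₀ᵀMx₀ is compared against a direct numerical integration of |s(t)|².

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

# Stable pair (A, C): the eigenvalues of A are -1 and -2 (left-half plane).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.0, -1.0])

# Observability gramian M solves A^T M + M A = -C^T C.
M = solve_continuous_lyapunov(A.T, -C.T @ C)

# Squared two-norm of s(t) = C e^{At} x0 via the gramian ...
norm_sq_gramian = x0 @ M @ x0

# ... and by trapezoid-rule integration of |s(t)|^2 over [0, 20].
t = np.linspace(0.0, 20.0, 4001)
s = np.array([(C @ expm(A * ti) @ x0).item() for ti in t])
norm_sq_direct = np.sum((s[:-1]**2 + s[1:]**2) / 2 * np.diff(t))

print(norm_sq_gramian, norm_sq_direct)  # the two values should agree closely
```

For this particular choice of x₀ the signal happens to be s(t) = e^{−t}, so both numbers should be close to 1/2; the truncation of the infinite integral at t = 20 is harmless because the signal decays exponentially.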
The Fourier transforms

Let s : R → R be a periodic signal with period P. The complex numbers

    s_k := (1/P) ∫_{−P/2}^{P/2} s(t) e^{−jkωt} dt,    k ∈ Z,

where ω = 2π/P, are called the Fourier coefficients of s and, whenever the summation Σ_{k=−∞}^{∞} |s_k| < ∞, the infinite sum

    s̄(t) := Σ_{k=−∞}^{∞} s_k e^{jkωt}                                 (5.6)

converges for all t ∈ R. Moreover, if s is continuous¹ then s̄(t) = s(t) for all t ∈ R. A continuous P-periodic signal can therefore be uniquely reconstructed from its Fourier coefficients by using (5.6). The sequence {s_k}, k ∈ Z, is called the (line) spectrum of s. Since the line spectrum uniquely determines a continuous periodic signal, properties of these signals can be expressed in terms of their line spectrum. Parseval taught us that the power of a P-periodic signal s satisfies

    Ps = Σ_{k=−∞}^{∞} |s_k|².

Similarly, for aperiodic continuous time signals s : R → R for which the norm ‖s‖₁ < ∞,

    ŝ(ω) := ∫_{−∞}^{∞} s(t) e^{−jωt} dt,    ω ∈ R,                    (5.7)

is a well defined function for all ω ∈ R. The function ŝ is called the Fourier transform, the spectrum or frequency spectrum of s and gives a description of s in the frequency domain or ω-domain. There holds that

    s(t) = (1/2π) ∫_{−∞}^{∞} ŝ(ω) e^{jωt} dω,    t ∈ R,               (5.8)

which expresses that the function s can be recovered from its Fourier transform. Equation (5.8) is usually referred to as the inverse Fourier transform of ŝ. Using Parseval, it follows that if s is an energy signal, then also ŝ is an energy signal, and

    Es = ∫_{−∞}^{∞} |s(t)|² dt = (1/2π) ∫_{−∞}^{∞} |ŝ(ω)|² dω.

¹ In fact, it suffices to assume that s is piecewise smooth, in which case s̄(t) = (s(t⁺) + s(t⁻))/2.

5.2.3 Discrete time signals

For discrete time signals s : T → R^q a similar classification can be set up. The most important norms are defined as follows:

    ‖s‖∞ = max_i sup_{t∈T} |s_i(t)|                                   (5.9)
    ‖s‖₂ = ( Σ_{t∈T} ‖s(t)‖² )^{1/2}                                  (5.10)
    ‖s‖₁ = Σ_{t∈T} ‖s(t)‖                                             (5.11)
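As a quick numerical check of the norms (5.9)–(5.11), take the finite sequence s(t) = (1/2)^t on T = {0, 1, 2} (this signal reappears in Example 5.10 below); the sketch simply evaluates the three defining formulas.

```python
import numpy as np

# s(t) = (1/2)^t on the finite discrete time set T = {0, 1, 2}.
s = np.array([(1 / 2) ** t for t in range(3)])   # [1.0, 0.5, 0.25]

inf_norm = np.max(np.abs(s))              # amplitude, (5.9)
two_norm = np.sqrt(np.sum(np.abs(s)**2))  # (1 + 1/4 + 1/16)^{1/2} = sqrt(21)/4, (5.10)
one_norm = np.sum(np.abs(s))              # 1 + 1/2 + 1/4 = 7/4, (5.11)

print(inf_norm, two_norm, one_norm)
```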
Example 5.9 A discrete time impulse is a signal s : Z → R with

    s(t) = 1 for t = 0,    s(t) = 0 for t ≠ 0.

The amplitude of this signal is ‖s‖∞ = 1, its two-norm is ‖s‖₂ = 1 and it is immediate that also its one-norm is ‖s‖₁ = 1.

Example 5.10 The signal s(t) := (1/2)^t with finite time set T = {0, 1, 2} has amplitude ‖s‖∞ = 1, two-norm ‖s‖₂ = ( Σ_{t=0}^{2} |(1/2)^t|² )^{1/2} = (1 + 1/4 + 1/16)^{1/2} = √21/4 and one-norm ‖s‖₁ = 1 + 1/2 + 1/4 = 7/4.

Example 5.11 The Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, . . .) (got the idea?) can be viewed as a signal s : Z⁺ → N with s(t) the t-th element of the sequence. Note that ‖s‖∞ = ‖s‖₂ = ‖s‖₁ = ∞ for this signal.

More generally, the p-norm, with 1 ≤ p < ∞, for discrete time signals is defined as

    ‖s‖_p = ( Σ_{t∈T} ‖s(t)‖^p )^{1/p}.

Note that in all these cases the signal may be defined either on a finite or an infinite discrete time set. Obviously, not all signals have finite norms. Again, finite norm signals are of special interest and define the following normed signal spaces

    ℓ∞(T) = {s : T → W | ‖s‖∞ < ∞}
    ℓ₂(T) = {s : T → W | ‖s‖₂ < ∞}
    ℓ₁(T) = {s : T → W | ‖s‖₁ < ∞}

for discrete time signals. We emphasize that these are sets of signals. It is easily seen that any of these signal spaces are linear normed spaces. This means that, whenever two signals s₁, s₂ belong to ℓ∞ (say), then s₁ + s₂ and αs₁ also belong to ℓ∞ for any real number α. The discrete time impulse s defined in Example 5.9 thus belongs to ℓ∞. As in the previous subsection, the argument T will be omitted whenever the time set T is clear from the context.

5.2.4 Stochastic signals

Occasionally we consider stochastic signals in this course. We will not give a complete treatise of stochastic system theory at this place but instead recall a few concepts. A stationary stochastic process is a sequence of real random variables u(t) where t runs over some time set T. By definition of stationarity, its mean µ(t) := E[u(t)] is independent of the time instant t, and the second order moment E[u(t₁)u(t₂)] depends only on the difference t₁ − t₂. The covariance of such a process is defined by

    R_u(τ) := E[(u(t + τ) − µ)(u(t) − µ)]

where µ = µ(t) = E[u(t)] is the mean. A stochastic (stationary) process u(t) is called a white noise process if its mean µ = E[u(t)] = 0 and if u(t₁) and u(t₂) are uncorrelated for all t₁ ≠ t₂. Stated otherwise, the covariance of a (continuous time) white noise process
is R_u(τ) = σ²δ(τ). The number σ² is called the variance. The Fourier transform of the covariance function R_u(τ) is

    Φ_u(ω) := ∫_{−∞}^{∞} R_u(τ) e^{−jωτ} dτ

and is usually referred to as the power spectrum, energy spectrum or just the spectrum of the stochastic process u.

5.3 Systems and system norms

A system is any set S of signals. It is common engineering practice to consider systems whose signals are naturally decomposed in two independent sets: a set of input signals and a set of output signals. A system then specifies the relations among the input and output signals. These relations may be specified by transfer functions, state space representations, differential equations or whatever mathematical expression you can think of. Input signals are typically assumed to be unrestricted; output signals are the responses of the system (or filter) after excitation with an input signal. Filters are designed so as to change the frequency characteristics of the input signals. We find this theme in almost all applications where filter and control design are used for the processing of signals.

In engineering applications it is good tradition to depict input-output systems as 'blocks' as in Figure 5.1, and you probably have a great deal of experience in constructing complex systems by interconnecting various systems using block diagrams.

[Figure 5.1: Input-output systems: the engineering view — a block H with input u(t) and output y(t).]

The mathematical analog of such a 'block' is a function or an operator H mapping inputs u taken from an input space U to output signals y belonging to an output space Y. We write H : U → Y. If an input-output system is mathematically represented as a function H, then to each input u ∈ U, H attaches a unique output y = H(u). The arrows in Figure 5.1 indicate the causality direction.

Remark 5.12 Also a word of warning concerning the use of blocks is in its place. In engineering we usually study systems which have quite some structure. However, many electrical networks do not have a 'natural' input-output partition of system variables. For example, Ohm's law V = RI imposes a simple relation among the signals 'voltage' V and 'current' I, but it is not evident which signal is to be treated as input and which as output. Neither need such a partitioning of variables be unique. Moreover, more often than not, the memory structure of many physical systems allows various outputs to correspond to one input signal.

Remark 5.13 Again a philosophical warning is in its place. For the purpose of this course, we exclusively consider systems in which an input-output partitioning of the signals has already been made.
Example 5.14 For example, a capacitor C imposes the relation C (d/dt)V = I on voltage-current pairs V, I. Taking I = 0 as input allows the output V to be any constant signal V(t) = V₀. Hence, there is no obvious mapping I → V modeling this simple relationship!

Of course, there are many ways to represent input-output mappings. We will be particularly interested in (input-output) mappings defined by convolutions and those defined by transfer functions. In order not to complicate things from the outset, we first consider single input single output continuous time systems with time set T = R and turn to the multivariable case in the next section. This means that we will focus on analog systems. We will not treat discrete time (or digital) systems explicitly, for their definitions will be similar and apparent from the treatment below. Undoubtedly, you have seen various of the following definitions before, but for the purpose of this course it is of importance to understand (and fully appreciate) the system theoretic nature of the concepts below.

No mapping is well defined if we are led to guess what the domain U of H should be. There are various options:

• One can take bounded signals, i.e. U = L∞.
• One can take energy signals, i.e. U = L₂.
• One can take periodic signals with finite power, i.e. U = P.
• One can take harmonic signals, i.e. U = {c e^{jωt} | c ∈ C, ω ∈ R}.
• One can take white noise stochastic processes as inputs. In that case U consists of all stationary zero mean signals u with finite covariance R_u(τ) = σ²δ(τ).
• The input class can also consist of one signal only. If we are interested in the impulse response only, we take U = {δ}.

In a (continuous time) convolution system, an input signal u ∈ U is transformed to an output signal y = H(u) according to the convolution

    y(t) = (Hu)(t) = (h ∗ u)(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ          (5.12)

where h : R → R is a function called the convolution kernel. h is usually referred to as the impulse response of the system, as the output y is equal to h whenever the input u is taken to be a Dirac impulse u(t) = δ(t). Obviously, H defines a linear map (as H(u₁ + u₂) = H(u₁) + H(u₂) and H(αu) = αH(u)) and for this reason the corresponding input-output system is also called linear. Moreover, in system theoretic language, it defines a time-invariant system in the sense that H maps the time shifted input signal u(t − t₀) to the time shifted output y(t − t₀). The response to a harmonic input signal u(t) = e^{jωt} is given by

    y(t) = ∫_{−∞}^{∞} h(τ) e^{jω(t−τ)} dτ = ĥ(ω) e^{jωt}

where ĥ is the Fourier transform of h as defined in (5.7).

Example 5.15 A P-periodic signal with line spectrum {u_k}, k ∈ Z, can be represented as u(t) = Σ_{k=−∞}^{∞} u_k e^{jkωt} where ω = 2π/P, and its corresponding output is given by

    y(t) = Σ_{k=−∞}^{∞} ĥ(kω) u_k e^{jkωt}.
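The convolution (5.12) and the role of the impulse response can be sketched numerically. In the minimal sketch below (kernel choice is illustrative), a discretized Dirac impulse (a single sample of area 1) is fed through a discretized version of (5.12), which reproduces the kernel h itself:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                       # causal convolution kernel h(t) = e^{-t}, t >= 0

def convolve(u):
    # Discretization of y(t) = integral of h(t - tau) u(tau) dtau on the grid t.
    return np.convolve(h, u)[:len(t)] * dt

# A discretized Dirac impulse (area 1) as input reproduces the impulse response.
delta = np.zeros_like(t)
delta[0] = 1.0 / dt
y = convolve(delta)
print(np.max(np.abs(y - h)))         # essentially zero
```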
Consequently, y is also periodic with period P and the line spectrum of the output is given by y_k = ĥ(kω)u_k, k ∈ Z.

5.3.1 The H∞ norm of a system

Induced norms

Let T be a continuous time set. Assume that both U and Y are normed linear spaces. Then we call H bounded if there is a constant M ≥ 0 such that ‖H(u)‖ ≤ M‖u‖ for all u ∈ U. Note that the norm on the left hand side is the norm defined on signals in the output space Y and the norm on the right hand side corresponds to the norm of the input signals in U. If a linear map H : U → Y is bounded then its norm ‖H‖ can be defined in several alternative (and equivalent) ways:

    ‖H‖ = inf { M | ‖Hu‖ ≤ M‖u‖ for all u ∈ U }
        = sup_{u∈U, u≠0} ‖Hu‖ / ‖u‖
        = sup_{u∈U, ‖u‖≤1} ‖Hu‖
        = sup_{u∈U, ‖u‖=1} ‖Hu‖                                        (5.13)

For linear operators, all these expressions are equal and either one of them serves as definition for the norm of an input-output system. The norm ‖H‖ is often called the induced norm or the operator norm of H and it has the interpretation of the maximal 'gain' of the mapping H : U → Y. In system theoretic terms, boundedness of H can be interpreted in the sense that H is stable with respect to the chosen input class and the corresponding norms.

A most important observation is that the norm of the input-output system defined by H depends on the class of inputs U and on the signal norms for elements u ∈ U and y ∈ Y. A different class of inputs or different norms on the input and output signals results in different operator norms of H.

If we assume that the impulse response h : R → R satisfies ‖h‖₁ = ∫_{−∞}^{∞} |h(t)| dt < ∞ (in other words, if we assume that h ∈ L₁), then H is a stable system in the sense that bounded inputs produce bounded outputs. Thus, under this condition,

    H : L∞(T) → L∞(T)

and we can define the L∞-induced norm of H as

    ‖H‖_(∞,∞) := sup_{u∈L∞} ‖H(u)‖∞ / ‖u‖∞.

Interestingly, under the same condition, H also defines a mapping from energy signals to energy signals, i.e.

    H : L₂(T) → L₂(T)
with the corresponding L₂-induced norm

    ‖H‖_(2,2) := sup_{u∈L₂} ‖H(u)‖₂ / ‖u‖₂.

In view of our definition of 'energy' signals, this norm is also referred to as the induced energy norm. Example 5.15 shows that H : P(T) → P(T) and we define the power-induced norm

    ‖H‖_pow := sup_{Pu≠0} √Py / √Pu.

The following result characterizes these system norms.

Theorem 5.16 Let T = R or R⁺ be the time set and let H be defined by (5.12). Suppose that h ∈ L₁. Then

1. the L∞-induced norm of H is given by ‖H‖_(∞,∞) = ‖h‖₁;
2. the L₂-induced norm of H is given by

       ‖H‖_(2,2) = max_{ω∈R} |ĥ(ω)|                                    (5.14)

3. the power-induced norm of H is given by

       ‖H‖_pow = max_{ω∈R} |ĥ(ω)|                                      (5.15)

We will extensively use the above characterizations of the L₂-induced and power-induced norm. The first characterization, of the ∞-induced norm, is interesting but will not be further used in this course.

The Fourier transform ĥ of the impulse response h is generally referred to as the frequency response of the system (5.12). It has the property that, whenever h ∈ L₁ and u ∈ L₂,

    y(t) = (h ∗ u)(t)  ⟺  ŷ(ω) = ĥ(ω)û(ω)                              (5.16)

Loosely speaking, this result states that convolution in the time domain is equivalent to multiplication in the frequency domain.

Remark 5.17 The quantity max_{ω∈R} |ĥ(ω)| satisfies the axioms of a norm, and is precisely equal to the L∞ norm of the frequency response: ‖ĥ‖∞ = max_{ω∈R} |ĥ(ω)|. Note that the power does not define a norm for the class P of periodic signals.

Remark 5.18 The frequency response can be written as ĥ(ω) = |ĥ(ω)| e^{jφ(ω)}. Various graphical representations of frequency responses are illustrative to investigate system properties like bandwidth, system gains, etc. A plot of |ĥ(ω)| and φ(ω) as function
of ω ∈ R is called a Bode diagram. In order to interpret these diagrams one usually takes logarithmic scales on the ω axis and plots 20 log₁₀(|ĥ(ω)|) to get units in dB. See Figure 5.2.

Theorem 5.16 states that the L₂-induced norm of the system defined by (5.12) equals the highest gain value occurring in the Bode plot of the frequency response of the system. In view of the equivalence (5.16), a Bode diagram therefore provides information to what extent the system amplifies purely harmonic input signals with frequency ω ∈ R. Indeed, any frequency ω₀ for which this maximum is attained has the interpretation that a harmonic input signal u(t) = e^{jω₀t} results in a (harmonic) output signal y(t) with frequency ω₀ and maximal amplitude |ĥ(ω₀)|. (Unfortunately, sin(ω₀t) ∉ L₂, so we cannot use this insight directly in a proof of Theorem 5.16.)

[Figure 5.2: A Bode diagram — gain (dB) and phase (deg) plotted against frequency (rad/sec) on a logarithmic frequency axis.]
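The norm characterizations of Theorem 5.16 can be explored numerically. In the sketch below (the impulse response h(t) = e^{−t}cos(5t) is an illustrative choice, and the infinite integrals are truncated), ‖h‖₁ approximates the L∞-induced norm and the peak of |ĥ(ω)| over a frequency grid approximates the L₂- and power-induced norms; note that |ĥ(ω)| ≤ ‖h‖₁ must always hold.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 15.0, dt)
h = np.exp(-t) * np.cos(5 * t)   # a stable (L1) impulse response, illustrative choice

# Statement 1: the L-infinity induced norm equals the one-norm of h.
h_one = np.sum(np.abs(h)) * dt

# Statements 2 and 3: the L2- and power-induced norms equal the peak of
# |h_hat(omega)|, computed here by direct numerical integration of (5.7).
w = np.linspace(0.0, 20.0, 2001)
h_hat = np.array([np.sum(h * np.exp(-1j * wi * t)) * dt for wi in w])
peak = np.max(np.abs(h_hat))

print(h_one, peak)               # note: peak <= h_one always
```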
To prove Theorem 5.16, suppose that ω₀ is such that |ĥ(ω₀)| = max_{ω∈R} |ĥ(ω)|. For periodic signals (statement 3) this can be seen as follows. Using Parseval's identity for periodic signals,

    ‖H‖²_pow = sup_P sup_{u is P-periodic} Py / Pu
             = sup_P sup_{u is P-periodic} Σ_{k=−∞}^{∞} |ĥ(2πk/P) u_k|² / Σ_{k=−∞}^{∞} |u_k|²
             ≤ sup_P max_{k∈Z} |ĥ(2πk/P)|²
             = max_{ω∈R} |ĥ(ω)|²

showing that ‖H‖_pow ≤ max_{ω∈R} |ĥ(ω)|. Take a harmonic input u(t) = e^{jω₀t} and note that this signal has power Pu = 1 and line spectrum u₁ = 1, u_k = 0 for k ≠ 1. From Example 5.15 it follows that the output y has line spectrum y₁ = ĥ(ω₀), y_k = 0 for k ≠ 1, and, using Parseval's identity, the output has power Py = |ĥ(ω₀)|². We therefore obtain that

    ‖H‖_pow = |ĥ(ω₀)| = max_{ω∈R} |ĥ(ω)|

as claimed. Similarly, we derive from Parseval's identity that

    ‖H‖²_(2,2) = sup_{u∈L₂} ‖h ∗ u‖₂² / ‖u‖₂²
              = sup_{û∈L₂} (1/2π)‖ĥû‖₂² / ((1/2π)‖û‖₂²)
              = sup_{û∈L₂} ∫ |ĥ(ω)|² |û(ω)|² dω / ‖û‖₂²
              ≤ sup_{û∈L₂} max_{ω∈R} |ĥ(ω)|² ‖û‖₂² / ‖û‖₂²
              = max_{ω∈R} |ĥ(ω)|²

which shows that ‖H‖_(2,2) ≤ max_{ω∈R} |ĥ(ω)|. Theorem 5.16 provides equality for the latter inequalities. The proof of statement 2 is more involved and will be skipped here.

The transfer function associated with (5.12) is the Laplace transform of the impulse response h,

    H(s) := ∫_{−∞}^{∞} h(t) e^{−st} dt.

This object will be denoted by H(s) (which the careful reader perceives as poor and ambiguous notation at this stage²), for we defined H already as the mapping that associates with u ∈ U the element H(u).

² from the context it will always be clear what we mean
where the complex variable s is assumed to belong to an area of the complex plane where the above integral is finite and well defined. If the Laplace transform exists in an area of the complex plane which includes the imaginary axis, then the Fourier transform is simply ĥ(ω) = H(jω). The Laplace transforms of signals are defined in a similar way and we have that

    y = h ∗ u  ⟺  ŷ(ω) = ĥ(ω)û(ω)  ⟺  Y(s) = H(s)U(s).

Remark 5.19 It is common engineering practice (the adjective 'good' or 'bad' is left to your discretion) to denote Laplace transforms of signals u ambiguously by u. Thus u(t) means something really different than u(s)! Whereas y(t) = H(u)(t) refers to the convolution (5.12), the notation y(s) = Hu(s) is to be interpreted as the product of H(s) and the Laplace transform u(s) of u(t). The notation y = Hu can therefore be interpreted in two (equivalent) ways!

We return to our discussion of induced norms. The right-hand side of (5.14) and (5.15) is defined as the H∞ norm of the system (5.12).

Definition 5.20 Let H(s) be the transfer function of a stable single input single output system with frequency response ĥ(ω). The H∞ norm of H, denoted ‖H‖∞, is the number

    ‖H‖∞ := max_{ω∈R} |ĥ(ω)|.                                          (5.17)

The H∞ norm of a SISO transfer function has therefore the interpretation of the maximal peak in the Bode diagram of the frequency response ĥ of the system and can be directly 'read' from such a diagram. Theorem 5.16 therefore states that

    ‖H(s)‖∞ = ‖H‖_(2,2) = ‖H‖_pow.

In words, this states that the energy-induced norm and the power-induced norm of H are equal to the H∞ norm of the transfer function H(s).
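Reading the H∞ norm as the peak of the Bode magnitude plot suggests a simple (approximate) way to compute it: grid the frequency axis and take the largest gain. A minimal sketch, for an illustrative lightly damped second-order system whose peak value is known in closed form:

```python
import numpy as np
from scipy import signal

# Illustrative system: H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2), zeta = 0.05.
wn, zeta = 1.0, 0.05
H = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

# Approximate the H-infinity norm as the peak of |H(jw)| over a fine grid.
w, Hjw = signal.freqresp(H, w=np.linspace(0.01, 10.0, 200001))
hinf = np.max(np.abs(Hjw))

# For this family the peak is known analytically: 1 / (2*zeta*sqrt(1 - zeta^2)).
print(hinf)
```

Gridding only gives a lower bound on the true supremum; the grid must be fine enough around resonance peaks (here the analytic value ≈ 10.01 serves as a check).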
A stochastic interpretation of the H∞ norm

We conclude this subsection with a discussion on a stochastic interpretation of the H∞ norm of a transfer function. This is closely related to an induced operator norm for the convolution system (5.12). Consider the set Ω_T of all stochastic (continuous time) processes s(t) on the finite time interval [0, T] for which the expectation

    E‖s‖²_{2,T} := E ∫_0^T sᵀ(t) s(t) dt                               (5.18)

is well defined and bounded. This expectation can be interpreted as the average power of a stochastic signal. Consider the convolution system (5.12) and assume that h ∈ L₁ (i.e., the system is stable) and that the input u ∈ Ω_T. Then the output y is a stochastic process and we can introduce the "induced norm"

    ‖H‖²_{stoch,T} := sup_{u∈Ω_T} E‖y‖²_{2,T} / E‖u‖²_{2,T}

which depends on the length of the time horizon T. We would like to extend this definition to the infinite horizon case. For this purpose it seems reasonable to define

    E‖s‖₂² := lim_{T→∞} (1/T) E‖s‖²_{2,T}                              (5.19)

assuming that the limit exists. Unfortunately, the class of stochastic processes for which the limit in (5.19) exists is not a linear space. However, as motivated in this section, we would also like to work with input and output spaces U and Y that are linear vector spaces. For this reason, the class of stochastic input signals U is set to Ω := {s | ‖s‖_Ω < ∞} where

    ‖s‖²_Ω := lim sup_{T→∞} (1/T) E‖s‖²_{2,T}.

Then Ω is a linear space of stochastic signals, but ‖·‖_Ω does not define a norm on Ω. This is easily seen as ‖s‖_Ω = 0 for any s ∈ L₂. However, it is a semi-norm, as it satisfies conditions 2 and 3 of a norm. With this class of input signals, we can extend the "induced norm" ‖H‖_{stoch,T} to the infinite horizon case,

    ‖H‖²_{stoch} := sup_{u∈Ω} ‖y‖²_Ω / ‖u‖²_Ω

which is bounded for stable systems H. The following result is the crux of this discussion and states that ‖H‖_{stoch} is, in fact, equal to the H∞ norm of the transfer function H.

Theorem 5.21 Let h ∈ L₁ and let H(s) be the transfer function of the system (5.12). Then ‖H‖_{stoch} = ‖H‖∞.

A proof of this result is beyond the scope of these lecture notes. The result can be found in [18].
5.3.2 The H₂ norm of a system

The notation H₂ is commonly used for the class of functions of a complex variable that do not have poles in the open right-half complex plane (they are analytic in the open right-half complex plane) and for which the norm

    ‖s‖_{H₂} := sup_{α>0} ( (1/2π) ∫_{−∞}^{∞} s*(α + jω) s(α + jω) dω )^{1/2}

is finite. The 'H' stands for Hardy space. That is,

    H₂ = { s : C → C | s analytic in Re(s) > 0 and ‖s‖_{H₂} < ∞ }.

The supremum in the definition of the H₂ norm always occurs on the boundary α = 0. Stated otherwise, for any s ∈ H₂ one can construct a boundary function s̄(ω) := lim_{α↓0} s(α + jω), which exists for almost all ω. Moreover, this boundary function is square integrable: s̄ ∈ L₂ and

    ‖s‖_{H₂} = ( (1/2π) ∫_{−∞}^{∞} s̄*(ω) s̄(ω) dω )^{1/2}.

It is for this reason that s is usually identified with the boundary function and the bar in s̄ is usually omitted. The H₂ norm can thus be evaluated on the imaginary axis. To summarize:

Definition 5.22 Let H(s) be the transfer function of a stable single input single output system with frequency response ĥ(ω). The H₂ norm of H, denoted ‖H‖_{H₂}, is the number

    ‖H‖_{H₂} := ( (1/2π) ∫_{−∞}^{∞} H(jω)H(−jω) dω )^{1/2}.            (5.20)

This "cold-hearted" definition has, in fact, a very elegant system theoretic interpretation.

Deterministic interpretation

To interpret the H₂ norm, consider again the convolution system (5.12) and suppose that we are interested only in the impulse response of this system. That is, we take the impulse δ(t) as the only candidate input for H. The resulting output y(t) = (Hu)(t) = h(t) is an energy function, so that E_h < ∞. Using Parseval's identity we obtain

    E_h = ‖h‖₂² = (1/2π) ∫_{−∞}^{∞} ĥ*(ω)ĥ(ω) dω = ‖H(s)‖²_{H₂}

where H(s) is the transfer function associated with the input-output system. The square of the H₂ norm is therefore equal to the energy of the impulse response.

Stochastic interpretation

The H₂ norm of a transfer function has an elegant equivalent interpretation in terms of stationary stochastic signals³. The H₂ norm is equal to the expected root-mean-square

³ The derivations in this subsection are not relevant for the course!
(RMS) value of the output of the system when the input is a realization of a unit variance white noise process. That is, let u(t) be a unit variance white noise process for t ∈ [0, T] with u(t) = 0 otherwise, and let y = h ∗ u be the corresponding output. Using the definition of a finite horizon 2-norm from (5.18), we set

    ‖H‖²_{RMS,T} := E ∫_0^T yᵀ(t) y(t) dt = E‖y‖²_{2,T}

where E denotes expectation. Substitute (5.12) in the latter expression and use that E[u(t₁)u(t₂)] = δ(t₁ − t₂) to obtain that

    ‖H‖²_{RMS,T} = ∫_0^T dt ∫_{t−T}^{t} h(τ)h(τ) dτ
                 = T ∫_{−T}^{T} h(τ)h(τ) dτ − ∫_0^T τ ( h(τ)h(τ) + h(−τ)h(−τ) ) dτ.

If the transfer function is such that the limit

    ‖H‖²_{RMS} = lim_{T→∞} (1/T) ‖H‖²_{RMS,T}

remains bounded, we obtain the infinite horizon RMS-value of the transfer function H. In fact, it then follows that

    ‖H‖²_{RMS} = ∫_{−∞}^{∞} h(τ)h(τ) dτ = (1/2π) ∫_{−∞}^{∞} H(jω)H*(jω) dω = ‖H‖²_{H₂}.

Thus, the H₂ norm of the transfer function is equal to the infinite horizon RMS value of the transfer function.

Another stochastic interpretation of the H₂ norm can be given as follows. Let u(t) be a stochastic process with mean 0 and covariance R_u(τ). Taking such a process as input to (5.12) results in the output y(t), which is a random variable for each time instant t ∈ T. It is easy to see that the output y also has zero mean. The condition that h ∈ L₂ guarantees that the output y has finite covariances, and easy calculations⁴ show that the covariances R_y(τ) are given by

    R_y(τ) = E[y(t + τ)y(t)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(s′) R_u(τ + s″ − s′) h(s″) ds′ ds″.

The latter expression is a double convolution which, by taking Fourier transforms, results in the equivalent expression

    Φ_y(ω) = ĥ(ω) Φ_u(ω) ĥ(−ω)                                         (5.21)

in the frequency domain. We now assume u to be a white noise process with Φ_u(ω) = 1 for all ω ∈ R. (This implies that R_u(τ) = δ(τ).) Using (5.21), the spectrum of the output is then given by

    Φ_y(ω) = |ĥ(ω)|²                                                   (5.22)

which relates the spectrum of the input and the spectrum of the output of the system defined by the convolution (5.12). Integrating the latter expression over ω ∈ R and using the definition of the H₂ norm yields that

    ‖H(s)‖²_{H₂} = (1/2π) ‖ĥ‖₂² = (1/2π) ∫_{−∞}^{∞} ĥ(ω)ĥ(−ω) dω
                 = (1/2π) ∫_{−∞}^{∞} Φ_y(ω) dω.                        (5.23)

Thus the H₂ norm of the transfer function H(s) has an interpretation directly in terms of the covariance function R_y(τ) of the output y of the system when the input u is taken to be a white noise signal with variance equal to 1. From this it should now be evident that when we define, in this stochastic context, the norm of a stochastic (stationary) signal s with mean 0 and covariance R_s(τ) to be

    ‖s‖ := ( ∫_{−∞}^{∞} E[s(t + τ)s(t)] dτ )^{1/2}

then the H₂ norm of the transfer function H(s) is equal to the norm ‖y‖ of the output y when taking white noise as input to the system. Note that the above norm is rather a power norm than an energy norm, and that for a white noise input u we get

    ‖u‖ = ( ∫_{−∞}^{∞} δ(τ) dτ )^{1/2} = 1.

This is caused by the fact that all frequencies have equal power (density) 1. Indeed, the variance of this signal theoretically equals R_u(0) = δ(0) = ∞, which is necessary to allow for infinitely fast changes of the signal, making future values independent of momentary values irrespective of the small time difference. Of course, in practice it is sufficient if the "whiteness" is just broadbanded with respect to the frequency band of the plant under study.

⁴ Details are not important here.

5.4 Multivariable generalizations

In the previous section we introduced various norms to measure the relative size of a single input single output system. In this section we generalize these measures for multivariable systems, starting with a convolution representation of such a system. The mathematical background and the main ideas behind the definitions and characterizations of norms for multivariable systems are to a large extent identical to the concepts derived in the previous section. Throughout this section we will consider an input-output system with m inputs and p outputs as in Figure 5.3. Again,
the output y is determined from the input u by

    y(t) = (Hu)(t) = (h ∗ u)(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ
where the convolution kernel h(t) is now, for every t ∈ R, a real matrix of dimension p × m.

[Figure 5.3: A multivariable system — a block H with inputs u₁, . . . , u₅ and outputs y₁, y₂, y₃.]

The transfer function associated with this system is the Laplace transform of h and is the function

    H(s) = ∫_{−∞}^{∞} h(t) e^{−st} dt.

Thus H(s) has dimension p × m for every s ∈ C. We will again assume that the system is stable in the sense that all entries [H(s)]_{ij} of H(s) (i = 1, . . . , p and j = 1, . . . , m) have their poles in the left half plane or, equivalently, that the ij-th element [h(t)]_{ij} of h, viewed as a function of t, belongs to L₁. Under this assumption H defines an operator mapping bounded inputs to bounded outputs and bounded energy inputs to bounded energy outputs (but now for multivariable signals!). That is,

    H : L∞^m → L∞^p
    H : L₂^m → L₂^p

where the superscripts p and m denote the dimensions of the signals. We will be mainly interested in the L₂-induced and power-induced norm of such a system. These norms are defined as in the previous section:

    ‖H‖_(2,2) := sup_{u∈L₂^m} ‖y‖₂ / ‖u‖₂,    ‖H‖_pow := sup_{Pu≠0} Py^{1/2} / Pu^{1/2}

where y = H(u) is the output signal. As in the previous section, we wish to express the L₂-induced and power-induced norm of the operator H as an H∞ norm of the (multivariable) transfer function H(s), and to obtain (if possible) a multivariable analog for the maximum peak in the Bode diagram of a transfer function. This requires some background on what is undoubtedly one of the most frequently encountered decompositions of matrices: the singular value decomposition. It occurs in numerous applications in control theory, time series analysis, numerical linear algebra, system identification, modelling, to mention only a few areas. We will devote a subsection to the singular value decomposition (SVD) as a refreshment.

5.4.1 The singular value decomposition

In this section we will forget about dynamics and just consider real constant matrices of dimension p × m. Let H ∈ R^{p×m} be a given matrix. Then H maps any vector u ∈ R^m to a vector y = Hu in R^p according to the usual matrix multiplication.
Like in section 5.3, it is most convenient to view the matrix as a linear operator acting on vectors u ∈ R^m and producing vectors y = Hu ∈ R^p according to the usual matrix multiplication.⁵

Definition 5.23 A singular value decomposition (SVD) of a matrix H ∈ R^{p×m} is a decomposition

    H = Y Σ Uᵀ,

where

• Y ∈ R^{p×p} is orthogonal, i.e. YᵀY = YYᵀ = I_p,
• U ∈ R^{m×m} is orthogonal, i.e. UᵀU = UUᵀ = I_m,
• Σ ∈ R^{p×m} is diagonal, i.e.

      Σ = [ Σ₀  0 ]      with  Σ₀ = diag(σ₁, σ₂, . . . , σ_r)
          [ 0   0 ]

  and σ₁ ≥ σ₂ ≥ . . . ≥ σ_r > 0.

Every matrix H has such a decomposition. The ordered positive numbers σ₁, . . . , σ_r are uniquely defined and are called the singular values of H. The number r is equal to the rank of H, and we remark that the matrices U and Y need not be unique. (The sign is not defined, and non-uniqueness can occur in case of multiple singular values.)

The singular values of H ∈ R^{p×m} can be computed via the familiar eigenvalue decomposition because

    HᵀH = U Σᵀ Yᵀ Y Σ Uᵀ = U ΣᵀΣ Uᵀ = U Λ Uᵀ

and

    HHᵀ = Y Σ Uᵀ U Σᵀ Yᵀ = Y ΣΣᵀ Yᵀ = Y Λ Yᵀ.

Consequently, if you want to compute the singular values with pencil and paper, you can use the following algorithm. (For numerically well conditioned methods, however, you should avoid the eigenvalue decomposition.)

Algorithm 5.24 (Singular value decomposition) Given a p × m matrix H:

• Construct the symmetric matrix HᵀH (or HHᵀ if m is much larger than p).
• Compute the nonzero eigenvalues λ₁, . . . , λ_r of HᵀH. Since HᵀH is symmetric and positive semidefinite, its nonzero eigenvalues are positive, so we can assume them to be ordered: λ₁ ≥ . . . ≥ λ_r > 0.
• The k-th singular value of H is given by σ_k = √λ_k, k = 1, 2, . . . , r.

⁵ In fact, one may argue why eigenvalues of a matrix have played such a dominant role in your linear algebra course; singular values have a much more direct and logical operator theoretic interpretation.
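Algorithm 5.24 can be checked against a library SVD routine. A minimal sketch (the matrix H is an illustrative choice): the singular values returned by `numpy.linalg.svd` should equal the square roots of the eigenvalues of HHᵀ (used here instead of HᵀH because it is the smaller of the two).

```python
import numpy as np

H = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0]])          # a 2 x 3 matrix of rank 2

# Library SVD: H = Y * Sigma * U^T with orthogonal Y and U.
Y, sigma, Ut = np.linalg.svd(H)

# Algorithm 5.24 "by hand": singular values are the square roots of the
# nonzero eigenvalues of H H^T (here only 2 x 2, hence preferred over H^T H).
eigvals = np.linalg.eigvalsh(H @ H.T)
sigma_by_hand = np.sqrt(np.sort(eigvals)[::-1])

print(sigma, sigma_by_hand)              # both ordered decreasingly
```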
In other words. can be viewed as the minimal ‘gain’ of the matrix under normalized ‘inputs’ and provided that the matrix has full rank. there holds Hui = Y ΣU T ui = Y Σei = σi yi . the norm Hu 2 = uT H T Hu = uT U ΣΣT U T u = xT Σ Σ x where x = U T u. um ) and the p × p matrix Y = (y1 . . .e. . SIGNAL SPACES AND NORMS Let H = Y ΣU T be a singular value decomposition of H and suppose that the m × m matrix U = (u1 . It thus follows that Hui = σi yi = σi where we used that yi = 1. the largest singular value σ1 of H equals the induced norm of H (viewed as a function from Rm to Rp ) whereas the input u1 ∈ Rm deﬁnes an ‘optimal direction’ in the sense that the norm of Hu1 is equal to the induced norm of H...25 To verify the latter expression. p. Moreover. whereas the smallest singular value σr . . u = 1. . If the ”energy” in u is restricted to 1. It follows that m max Hu 2 = max Σ Σ x 2 = σi2 xi  u=1 x=1 i=1 which is easily seen to be maximal if x1 = 1 and xi = 0 for all i = 1.e. . the vectors {yj }j=1. the ith basis vector ui is mapped in the direction of the ith basis vector yi and ‘ampliﬁed’ by an amount of σi .m constitute an orthonormal basis for Rm . can thus be viewed as the maximal ‘gain’ of the matrix H. . . these decomposed components are multiplied by the corresponding singular values (Σ) and then (by Y ) mapped onto the corresponding directions yi .56 CHAPTER 5.. it is easy to grasp that the induced norm of H is related to the singular value decomposition as follows Hu Hu1 H := sup = = σ1 u∈Rm u u1 In other words. yj ∈ Rp with i = 1. 2. The maximal singular value σ1 . often denoted by σ¯ . i.. . Similarly. As a consequence. if we have a general input vector u it will ﬁrst be decomposed by U T along the various orthogonal directions ui . Remark 5. . u i ∈ Rm . (If the matrix H has not full rank. u2 .. So.. . since uTj ui is zero except when i = j (in which case uTi ui = 1). m and j = 1. note that for any u ∈ Rm .. . sometimes denoted as σ. . y2 . . i. 
Since U is an orthogonal matrix. .p constitute an orthonormal basis for Rp . Next. the ”energetically” largest output y is certainly obtained if the u is directed along u1 so that u = u1 . . eﬀectively. yp ) where ui and yj are the columns of U and Y respectively. the vectors {ui }i=1. it has a nontrivial kernel so that Hu = 0 for some input u = 0).
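The relation between the largest singular value and the induced norm is easy to check numerically. The following sketch (Python with NumPy rather than the MATLAB used in the exercises of these notes; the matrix is an arbitrary random example) compares σ1 with the gain ‖Hu‖/‖u‖ along u1 and over random inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))           # a generic 3 x 4 real matrix

# Singular value decomposition H = Y * Sigma * U^T
Y, sigma, Ut = np.linalg.svd(H)
sigma1 = sigma[0]                         # largest singular value
u1 = Ut[0, :]                             # first right singular vector (row of U^T)

# The gain along u1 attains sigma1 ...
gain_u1 = np.linalg.norm(H @ u1) / np.linalg.norm(u1)

# ... and no other normalized input can do better.
gains = [np.linalg.norm(H @ u) / np.linalg.norm(u)
         for u in rng.standard_normal((1000, 4))]

print(abs(gain_u1 - sigma1) < 1e-12)      # True: ||H u1|| = sigma1
print(max(gains) <= sigma1 + 1e-12)       # True: sigma1 is the induced norm
```

Note that NumPy returns the singular values in descending order, so `sigma[0]` is σ̄ and `sigma[-1]` (for a full-rank matrix) is the smallest singular value.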
5.4.2 The H∞ norm for multivariable systems

Consider the p × m stable transfer function H(s) and let H(jω) = Y(jω)Σ(jω)U*(jω) be a singular value decomposition of H(jω) for a fixed value of ω ∈ R. Since H(jω) is in general complex valued, we have that H(jω) ∈ C^{p×m} and the singular vectors stored in Y(jω) and U(jω) are complex valued. The singular values, still being real valued (i.e., σi ∈ R), are ordered according to

σ1(ω) ≥ σ2(ω) ≥ . . . ≥ σr(ω) > 0

where r is equal to the rank of H(s) and in general equal to the minimum of p and m. Thus the singular values become frequency dependent! From the previous section we infer that for each ω ∈ R

0 ≤ ‖H(jω)û(ω)‖ / ‖û(ω)‖ ≤ σ1(ω)

or, stated otherwise, ‖H(jω)û(ω)‖ ≤ σ1(ω) ‖û(ω)‖, so that σ̄(ω) := σ1(ω), viewed as a function of ω, has the interpretation of a maximal gain of the system at frequency ω. It is for this reason that a plot of σ̄(ω) with ω ∈ R can be viewed as a multivariable generalization of the Bode diagram! An example of such a "multivariable Bode diagram" is depicted in Figure 5.4.

Definition 5.26 Let H(s) be a stable multivariable transfer function. The H∞ norm of H(s) is defined as

‖H(s)‖∞ := sup_{ω∈R} σ̄(H(jω)).

With this definition we obtain the natural generalization of the results of section 5.3 for multivariable systems. That is, we have the following multivariable analog of theorem 5.16:

Theorem 5.27 Let T = R+ or T = R be the time set. For a stable multivariable transfer function H(s) the L2 induced norm and the power induced norm are equal to the H∞ norm of H(s):

‖H‖_{2,2} = ‖H‖_{pow} = ‖H(s)‖∞.

The derivation of this result is to a large extent similar to the one given in (5.23). The bottom line of this subsection is therefore that the L2 induced operator norm and the power induced norm of a system are equal to the H∞ norm of its transfer function.

5.4.3 The H2 norm for multivariable systems

The H2 norm of a p × m transfer function H(s) is defined as follows.

Definition 5.28 Let H(s) be a stable multivariable transfer function of dimension p × m. The H2 norm of H(s) is defined as

‖H(s)‖_{H2} = ( (1/2π) ∫_{−∞}^{∞} trace[H*(−jω)H(jω)] dω )^{1/2}.
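The H∞ norm of Definition 5.26 is in practice computed from a state space realization, but the definition itself can be illustrated by gridding the frequency axis and taking the largest σ̄(H(jω)). A minimal sketch in Python/NumPy (the notes use MATLAB; the helper name and the diagonal example are hypothetical):

```python
import numpy as np

def hinf_norm_grid(H, omegas):
    """Approximate ||H||_inf = sup_w sigma_max(H(jw)) on a frequency grid.

    H: callable returning the complex p x m frequency response H(jw).
    """
    return max(np.linalg.svd(H(w), compute_uv=False)[0] for w in omegas)

# A 2x2 diagonal example: H(s) = diag(1/(s+1), 5/(s+2)).
def H(w):
    s = 1j * w
    return np.array([[1 / (s + 1), 0],
                     [0, 5 / (s + 2)]])

omegas = np.logspace(-3, 3, 2000)
# The largest gain occurs as w -> 0, where sigma_max = max(1, 5/2) = 2.5.
print(round(hinf_norm_grid(H, omegas), 3))   # 2.5
```

A grid can of course miss a sharp resonance peak, which is why production implementations use bisection on a Hamiltonian matrix instead; the grid version is only meant to make the definition concrete.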
For single-input single-output systems the square of the H2 norm of a transfer function H(s) is equal to the energy in the impulse response of the system. The rationale behind the definition above is a very simple one and very similar, in spirit, to the idea behind the H2 norm of a scalar valued transfer function. Here, the 'trace' of a square matrix is the sum of the entries on its diagonal.

Figure 5.4: The singular values of a transfer function

For a system with m inputs we can consider m impulse responses by putting an impulsive input at the ith input channel (i = 1, . . . , m) and 'watching' the corresponding output, say y^(i), which is a p dimensional energy signal for each such input. Precisely, let us define m impulsive inputs, the ith being

u^(i)(t) = (0, . . . , 0, δ(t), 0, . . . , 0)^T

where the impulse δ(t) appears at the ith spot. The corresponding output is a p dimensional signal which we will denote by y^(i)(t) and which has bounded energy if the system is assumed to be stable. The square of its two-norm is given by

‖y^(i)‖²₂ := ∫_{−∞}^{∞} Σ_{j=1}^{p} |y_j^(i)(t)|² dt

where y_j^(i) denotes the jth component of the output due to an impulsive input at the ith input channel. We will define the squared H2 norm of a multivariable system as the sum of the energies of the outputs y^(i), as a reflection of the total "energy". That is:

‖H(s)‖²_{H2} = Σ_{i=1}^{m} ‖y^(i)‖²₂.

The H2 norm of the transfer function H(s) is nothing else than the square root of the sum of the squared two-norms of these outputs.

In a stochastic setting, an m dimensional (stationary) stochastic process admits an m-dimensional mean µ = E[u(t)] which is independent of t, whereas its second order moments E[u(t1)u(t2)^T] now define m × m matrices which only depend on the time difference t1 − t2. As in the previous section, we derive that the infinite horizon RMS value equals the H2 norm of the system:

‖H‖²_{RMS} = ∫_{−∞}^{∞} trace[h(t)h^T(t)] dt = ‖H‖²_{H2}.
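For a state space realization H(s) = C(sI − A)^{-1}B the impulse-response-energy definition above can be evaluated without any integration: the controllability Gramian P solves the Lyapunov equation A P + P A^T + B B^T = 0 and ‖H‖²_{H2} = trace(C P C^T). A sketch in Python/NumPy (the helper name is ours; the scalar example is a standard check, not from these notes):

```python
import numpy as np

def h2_norm(A, B, C):
    """H2 norm of H(s) = C (sI - A)^{-1} B via the controllability Gramian.

    P solves A P + P A^T + B B^T = 0; then ||H||_H2^2 = trace(C P C^T),
    which equals the summed impulse-response energy of the definition above.
    """
    n = A.shape[0]
    I = np.eye(n)
    # Vectorize the Lyapunov equation: (I (x) A + A (x) I) vec(P) = -vec(B B^T)
    lhs = np.kron(I, A) + np.kron(A, I)
    P = np.linalg.solve(lhs, -(B @ B.T).flatten()).reshape(n, n)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Scalar check: H(s) = 1/(s+1) has impulse response h(t) = e^{-t},
# so ||H||_H2 = sqrt(int_0^inf e^{-2t} dt) = 1/sqrt(2).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
print(round(h2_norm(A, B, C), 4))   # 0.7071
```

The Kronecker-product solve is fine for the small examples in these notes; dedicated Lyapunov solvers scale better for large state dimensions.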
5.5 Exercises

1. Consider the following continuous time signals and determine their amplitude (‖·‖∞), their energy (‖·‖₂) and their L1 norm (‖·‖₁).

(a) x(t) = 0 for t < 0 and x(t) = 1/(1 + t) for t ≥ 0.

(b) x(t) = cos(2πf t) for fixed real frequency f > 0.

(c) x(t) = exp(αt) for fixed α. Distinguish the cases where α > 0, α < 0 and α = 0.

2. This exercise is mainly meant to familiarize you with various routines and procedures in MATLAB. It involves a single input single output control scheme and should be viewed as a 'prelude' to the multivariable control scheme of an exercise below. Consider a single input single output system described by the transfer function

P(s) = −s / ((s + 1)(s + 2)).

The system P is controlled by the constant controller C(s) = 1. We consider the usual feedback interconnection of P and C as described earlier.

(a) Determine the H∞ norm of the system P.

Hint: You can represent the plant P in MATLAB by introducing the numerator ('teller') and the denominator ('noemer') polynomial coefficients separately. Thus, the numerator polynomial of P is represented by num=[0 -1 0] (the sign does not affect the magnitude plot). Since (s + 1)(s + 2) = s² + 3s + 2, the denominator polynomial is represented by a variable den=[1 3 2] (coefficients always in descending order). The H∞ norm of P can now be read from the Bode plot of P by invoking the procedure bode(num,den).

(b) Determine the H∞ norm of the sensitivity S, the complementary sensitivity T and the control sensitivity R of the closed loop system.

Hint: The feedback interconnection of P and C can be obtained by the MATLAB procedures feedbk or feedback. After reading the help information about this procedure (help feedbk) we learn that the procedure requires state space representations of P and C and produces a state space representation of the closed loop system. A state space representation of P can be obtained, e.g., by invoking the routine tf2ss ('transfer-to-state-space'). Thus, [a,b,c,d] = tf2ss(num,den) gives a state space description of P. See the corresponding help information. Make sure that you use the right 'type' option to obtain S, T and R, respectively. If you prefer a transfer function description of the closed loop to determine the H∞ norms, then try the conversion routine ss2tf.
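The hints above use MATLAB; as an independent cross-check, the peak gains asked for in exercise 2 can also be approximated with a simple frequency grid in Python/NumPy. This is only a verification sketch, not part of the exercise (the analytical answers are deliberately left out of the comments):

```python
import numpy as np

omegas = np.logspace(-2, 2, 100000)
s = 1j * omegas

# Plant P(s) = -s / ((s+1)(s+2)) and constant controller C(s) = 1.
P = -s / ((s + 1) * (s + 2))
C = 1.0

# Closed-loop transfer functions of the standard feedback loop.
S = 1 / (1 + P * C)          # sensitivity
T = P * C / (1 + P * C)      # complementary sensitivity
R = C / (1 + P * C)          # control sensitivity

# H-infinity norms approximated as peak magnitudes over the grid.
print(np.max(np.abs(P)))     # ||P||_inf
print(np.max(np.abs(S)))     # ||S||_inf
```

Comparing these grid-based peaks with the values read from the MATLAB Bode plots is a useful sanity check on part (a) and (b).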
Chapter 6

Weighting filters

6.1 The use of weighting filters

6.1.1 Introduction

The H∞ norm of an input-output system has been shown to be equal to

‖H(s)‖∞ = sup_{ω∈R} σ̄(H(jω)) = sup_{u∈L2} ‖H(u)‖₂ / ‖u‖₂.

The H∞ norm therefore indicates the maximal gain of the system if the inputs are allowed to vary over the class of signals with bounded two-norm. The frequency dependent maximal singular value σ̄(ω), viewed as a function of ω, obviously provides more detailed information about the gain characteristics of the system than the H∞ norm only. For example, if a system is known to be all pass, meaning that the two-norm of the output is equal to the two-norm of the input for all possible inputs u, then at every frequency ω the maximal gain σ̄(H(jω)) of the system is constant and equal to the H∞ norm ‖H(s)‖∞. The system is then said to have a flat spectrum. This in contrast to low-pass or high-pass systems, in which the function σ̄(ω) vanishes (or is attenuated) at high frequencies and low frequencies, respectively.

It is this function, σ̄(jω), that is extensively manipulated in H∞ control system design to meet desired performance objectives. These manipulations are carried out by choosing appropriate weights on the signals entering and leaving a control configuration like, for example, the one of Figure 6.1. The specification of these weights is of crucial importance for the overall control design and is one of the few aspects in H∞ robust control design that cannot be automated. The choice of appropriate weighting filters is a typical 'engineering skill' which is based on a few simple mathematical observations and a good insight in the performance specifications one wishes to achieve.

6.1.2 Singular value loop shaping

Consider the multivariable feedback control system of Figure 6.1.

Figure 6.1: Multivariable feedback configuration

As mentioned before, the multivariable stability margins and performance specifications can be quantified by considering the frequency dependent singular values of the various 'closed-loop' transfer functions which we can distinguish in Figure 6.1:

• The sensitivity S = (I + PC)^{-1}, which maps the reference signal r to the (real) tracking error r − y (≠ e in Fig. 6.1 because of η!) and the disturbance d to y.

• The complementary sensitivity T = PC(I + PC)^{-1} = I − S, which maps the reference signal r to the output y and the sensor noise η to y.

• The control sensitivity R = C(I + PC)^{-1}, which maps the reference signal r, the disturbance d and the measurement noise η to the control input u.

As is seen from the definitions of these transfers, the singular values of the sensitivity S (viewed as a function of frequency ω ∈ R) determine both the tracking performance as well as the disturbance attenuation quality of the closed-loop system. Similarly, the singular values of the complementary sensitivity model the amplification (or attenuation) of the sensor noise η to the closed-loop output y for each frequency, whereas the singular values of the control sensitivity give insight for which frequencies the reference signal has maximal (or minimal) effect on the control input u. The maximal singular values of each of these transfer functions S, T and R therefore play an important role in robust control design for multivariable systems.

All our H∞ control designs will be formulated in such a way that an optimal controller is designed so as to minimize the H∞ norm of a multivariable closed-loop transfer function. The most time consuming part of arriving at a well performing control system using H∞ optimal control methods is the concise formulation of an H∞ optimization problem. This formulation is required to include all our a-priori knowledge concerning signals of interest, all the (sometimes conflicting) performance specifications, stability requirements and, last but definitely not least, robustness considerations with respect to parameter variations and model uncertainty. Once a control problem has been specified as an optimization problem in which the H∞ norm of a (multivariable) transfer function needs to be minimized, the actual computation of an H∞ optimal controller which achieves this minimum is surprisingly easy, fast and reliable. The algorithms for this computation of H∞ optimal controllers will be the subject of Chapter 8.

Let us consider a simplified version of an H∞ design problem. Suppose that a plant P is given and suppose that we are interested in minimizing the H∞ norm of the sensitivity S = (I + PC)^{-1} over all controllers C that stabilize the plant P. The H∞ optimal control problem then amounts to determining a stabilizing controller C_opt such that

min_{C stab} ‖S(s)‖∞ = ‖(I + P C_opt)^{-1}‖∞.

Such a controller then deserves to be called H∞ optimal. However, it is by no means clear that there exists a controller which achieves this minimum. The 'minimum' is therefore usually replaced by an 'infimum' and we need in general to be satisfied with a stabilizing controller C_opt such that

γ_opt := inf_{C stab} ‖S(s)‖∞   (6.1)
       ≤ ‖(I + P C_opt)^{-1}‖∞   (6.2)
       ≤ γ   (6.3)

where γ ≥ γ_opt is a pre-specified number which we would like to (and are able to) choose as close as possible to the optimal value γ_opt. For obvious reasons, C_opt is called a suboptimal H∞ controller, and this controller may clearly depend on the specified value of γ.

Suppose that a controller achieves ‖S(s)‖∞ ≤ γ. It then follows that for all frequencies ω ∈ R

σ̄(S(jω)) ≤ ‖S(s)‖∞ ≤ γ.   (6.4)

Thus γ is an upper bound of the maximum singular value of the sensitivity at each frequency ω ∈ R. Conclude from (6.4) and the general properties of singular values that the tracking error r − y (interpreted as a frequency signal) then satisfies

‖r̂(ω) − ŷ(ω)‖ ≤ σ̄(S(jω)) ‖r̂(ω)‖ ≤ γ ‖r̂(ω)‖.   (6.5)

The inequalities (6.5) hold for all frequencies. In this design the controller was designed to achieve (6.4) for all ω ∈ R: no frequency dependent a-priori information concerning the reference signal r, nor frequency dependent performance specifications concerning the tracking error r − y, has been incorporated.

The effect of input weightings

Suppose that the reference signal r is known to have a bandwidth [0, ωr], i.e., frequencies larger than ωr are not likely to occur. Then inequality (6.5) is only interesting for frequencies ω ∈ [0, ωr]. If we define a stable transfer function V(s) with ideal frequency response

V(jω) = 1 if ω ∈ [−ωr, ωr];  V(jω) = 0 otherwise

(see Figure 6.2), then the outputs of such a filter are band limited signals with bandwidth [0, ωr]. That is, for any r′ ∈ L2 the signal r(s) = V(s)r′(s) has bandwidth [0, ωr].

Figure 6.2: Ideal low pass filter

Instead of minimizing the H∞ norm of the sensitivity S(s) we now consider minimizing the H∞ norm of the weighted sensitivity S(s)V(s). In Figure 6.3 we see that this amounts to including the transfer function V(s) in the diagram of Figure 6.1 and considering the 'new' reference signal r′ as input instead of r.

Figure 6.3: Application of an input weighting filter

Thus, instead of the criterion (6.4), we now look for a controller which achieves ‖S(s)V(s)‖∞ ≤ γ where γ ≥ 0. Observe that for the ideal low-pass filter V this implies that

‖S(s)V(s)‖∞ = max_ω σ̄(S(jω)V(jω)) = max_{ω ≤ ωr} σ̄(S(jω)) ≤ γ.   (6.6)

Thus, γ is now an upper bound of the maximum singular value of the sensitivity for frequencies ω belonging to the restricted interval [−ωr, ωr]! Conclude that with this ideal filter V the minimization of the H∞ norm of the weighted sensitivity corresponds to minimization of the maximal singular value σ̄(ω) of the sensitivity function for frequencies ω ∈ [−ωr, ωr]. For the remaining frequencies (ω > ωr in this example) the designed controller does not put a limit on the tracking error, for these frequencies did not appear in the reference signal r.

The tracking error r − y now satisfies for all ω ∈ R the inequalities

‖r̂(ω) − ŷ(ω)‖ = ‖S(jω)V(jω)r̂′(ω)‖ ≤ σ̄(S(jω)) ‖r̂(ω)‖ ≤ γ σ̄(V^{-1}(jω)) ‖r̂(ω)‖   (6.7)

where r is now a band-limited reference signal and V^{-1}(jω) = 1/V(jω), which is to be interpreted as ∞ whenever V(jω) = 0.
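The mechanism behind (6.7) is easy to visualize numerically. The sketch below (Python/NumPy; both the sensitivity and the weight are hypothetical examples chosen for the illustration, not taken from these notes) computes γ = ‖SV‖∞ on a frequency grid and checks that |S(jω)| ≤ γ |V^{-1}(jω)| at every grid point.

```python
import numpy as np

omegas = np.logspace(-3, 3, 20000)
s = 1j * omegas

# A hypothetical sensitivity: small at low frequencies, ~1 at high frequencies.
S = s * (s + 2) / (s**2 + 2 * s + 4)

# A first-order low-pass input weight with bandwidth wr = 1.
# (Its inverse is improper, unlike the biproper filters preferred in practice,
# but the bound is still illustrated correctly on the grid.)
V = 1 / (s + 1)

gamma = np.max(np.abs(S * V))            # gamma := ||S V||_inf on the grid

# At every frequency: |S(jw)| <= gamma * |V^{-1}(jw)|.
print(np.all(np.abs(S) <= gamma * np.abs(1 / V) + 1e-12))   # True
```

Since γ/|V(jω)| grows where |V| rolls off, the weight only constrains the sensitivity inside the bandwidth of V, exactly as argued above.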
The last inequality in (6.7) is the most useful one and it follows from the more general observation that, whenever ‖S(s)V(s)‖∞ ≤ γ with V a square stable transfer function whose inverse V^{-1} is again stable, then for all ω ∈ R there holds

σ̄[S(jω)] = σ̄[S(jω)V(jω)V^{-1}(jω)]
         ≤ σ̄[S(jω)V(jω)] σ̄[V^{-1}(jω)]
         ≤ γ σ̄[V^{-1}(jω)]
We thus come to the important conclusion that a controller C which achieves that the weighted sensitivity satisfies

‖S(s)V(s)‖∞ ≤ γ

results in a closed loop system in which

σ̄(S(jω)) ≤ γ σ̄[V^{-1}(jω)]   (6.8)
Remark 6.1 This conclusion holds for any stable weighting filter V(s) whose inverse V^{-1}(s) is again a stable transfer function. This is questionable for the ideal filter V we used here to illustrate the effect, because for ω > ωr the inverse filter V(jω)^{-1} can be qualified as unstable. In practice we will therefore choose filters which have a rational transfer function that is stable, minimum phase and biproper. An alternative first order filter for this example could thus have been, e.g.,

V(s) = (s + 100ωr) / (100(s + ωr)).
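The suggested biproper filter is easy to inspect numerically: it has unit gain in the passband, gain 1/100 far above ωr, and its zero (at −100ωr) and pole (at −ωr) are both in the left half plane, so V and V^{-1} are stable. A small check (Python/NumPy; ωr = 1 is an assumed value for the illustration):

```python
import numpy as np

wr = 1.0                                  # assumed reference bandwidth

def V(w):
    """Biproper first-order weight V(s) = (s + 100 wr) / (100 (s + wr))."""
    s = 1j * w
    return (s + 100 * wr) / (100 * (s + wr))

print(round(abs(V(0.0)), 4))              # 1.0  : unit gain at DC
print(round(abs(V(1e6 * wr)), 4))         # 0.01 : gain 1/100 far beyond wr
```

Because |V| never reaches zero, the upper bound γ σ̄[V^{-1}(jω)] in (6.8) stays finite at all frequencies, at the price of a (weak) constraint on the sensitivity above ωr.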
Remark 6.2 It is a standard property of the singular value decomposition that, whenever V^{-1}(jω) exists,

σ̄[V^{-1}(jω)] = 1 / σ[V(jω)]

where σ denotes the smallest singular value.
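The identity of Remark 6.2 can be confirmed in one line with a random invertible matrix (Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

sv = np.linalg.svd(V, compute_uv=False)              # singular values, descending
sv_inv = np.linalg.svd(np.linalg.inv(V), compute_uv=False)

# sigma_max(V^{-1}) equals 1 / sigma_min(V)
print(abs(sv_inv[0] - 1 / sv[-1]) < 1e-8)            # True
```

This is why a lower bound on the smallest singular value of a weight immediately gives an upper bound on the gain of its inverse, which is what (6.8) exploits.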
The eﬀect of output weightings
In the previous subsection we considered the eﬀect of applying a weighting ﬁlter for an
input signal. Likewise, we can also deﬁne weighting ﬁlters on the output signals which
occur in a closedloop conﬁguration as in Figure 6.1.
We consider again (as an example) the sensitivity S(s) viewed as a mapping from
the reference input r to the tracking error r − y = e, when we fully disregard for the
moment the measurement noise η. A straightforward H∞ design would minimize the H∞
norm of the sensitivity S(s) and result in the upperbound (6.5) for the tracking error.
We could, however, be interested in minimizing the spectrum of the tracking error at
specific frequencies only. Let us suppose that we are interested in the tracking error e at frequencies ω ≤ ω ≤ ω̄ only, where ω > 0 and ω̄ > 0 define a lower and an upper bound. As in the previous subsection, we introduce a new signal

e′(s) = W(s)e(s)

where W is a (stable) transfer function whose frequency response is ideally defined by the band pass filter

W(jω) = 1 if ω ≤ ω ≤ ω̄;  W(jω) = 0 otherwise
Figure 6.4: Ideal band pass filter

Figure 6.5: Application of an output weighting filter
and depicted in Figure 6.4. Instead of minimizing the H∞ norm of the sensitivity S(s)
we consider minimizing the H∞ norm of the weighted sensitivity W (s)S(s). In Figure 6.5
it is shown that this amounts to including the transfer function W (s) in the diagram of
Figure 6.1 (where we put η = 0) and considering the 'new' output signal e′. A controller which achieves an upper bound γ on the weighted sensitivity,

‖W(s)S(s)‖∞ ≤ γ,

accomplishes, as in (6.6), that

‖W(s)S(s)‖∞ = max_ω σ̄(W(jω)S(jω)) = max_{ω ≤ ω ≤ ω̄} σ̄(S(jω)) ≤ γ   (6.9)
which provides an upper bound on the maximum singular value of the sensitivity for frequencies ω belonging to the restricted interval ω ≤ ω ≤ ω̄. The tracking error e satisfies again the inequalities (6.7), with V replaced by W, and it should not be surprising that the same conclusions concerning the upper bound of the spectrum of the sensitivity S hold. In particular, we find, similar to (6.8), that for all ω ∈ R there holds

σ̄(S(jω)) ≤ γ σ̄[W^{-1}(jω)]   (6.10)
provided the stable weighting ﬁlter W (s) has an inverse W −1 (s) which is again stable.
6.1.3 Implications for control design
In this section we will comment on how the foregoing can be used for design purposes. To
this end, there are a few important observations to make.
• For one thing, we showed in subsection 6.1.2 that by choosing the frequency response
of an input weighting ﬁlter V (s) so as to ‘model’ the frequency characteristic of the
input signal r, the apriori information of this reference signal has been incorporated
in the controller design. By doing so, the minimization of the maximum singular
value of the sensitivity S(s) has been reﬁned (like in (6.6)) to the frequency interval
of interest. Clearly, we can do this for any input signal.
• Secondly, we obtained in (6.8) and in (6.10) frequency dependent upperbounds for
the maximum gain of the sensitivity. Choosing V(jω) (or W(jω)) appropriately enables one to specify the frequency attenuation of the closed-loop transfer function
(the sensitivity in this case). Indeed, choosing, for example, V (jω) a low pass transfer
function implies that V −1 (jω) is a high pass upperbound on the frequency spectrum
of the closedloop transfer function. Using (6.8) this implies that low frequencies of
the sensitivity are attenuated and that the frequency characteristic of V has ‘shaped’
the frequency characteristic of S. The same kind of 'loop shaping' can be achieved by choosing either input or output weightings.
• Thirdly, by applying weighting factors to both input signals and output signals
we can minimize (for example) the H∞ norm of the twosided weighted sensitivity
W (s)S(s)V (s), i.e., a controller could be designed so as to achieve that
‖W(s)S(s)V(s)‖∞ ≤ γ
for some γ > 0. Provided the transfer functions V (s) and W (s) have stable inverses,
this leads to a frequency dependent upperbound for the original sensitivity. Precisely,
in this case
σ̄(S(jω)) ≤ γ σ̄[V^{-1}(jω)] σ̄[W^{-1}(jω)]   (6.11)
from which we see that the frequency characteristic of the sensitivity is shaped
by both V as well as W . It is precisely this formula that provides you with a
wealth of design possibilities! Once a performance requirement for a closedloop
transfer function (let’s say the sensitivity S(s)) is speciﬁed in terms of its frequency
characteristic, this characteristic needs to be ‘modeled’ by the frequency response
of the product V −1 (jω)W −1 (jω) by choosing the input and output ﬁlters V and
W appropriately. A controller C(s) that bounds the H∞ norm of the weighted
sensitivity W (s)S(s)V (s) then achieves the desired characteristic by equation (6.11).
The weighting ﬁlters V and W on input and output signals of a closedloop transfer
function give therefore the possibility to shape the spectrum of that speciﬁc closedloop
transfer. Once these ﬁlters are speciﬁed, a controller is computed to minimize the H∞
norm of the weighted transfer and results in a closedloop transfer whose spectrum has
been shaped according to (6.11).
In the example of a weighted sensitivity, the controller C is thus computed to establish
that
γfilt := inf_{C stab} ‖W(s)S(s)V(s)‖∞   (6.12)
      ≤ ‖W(s)(I + PC)^{-1}V(s)‖∞   (6.13)
      ≤ γ   (6.14)

for some γ > 0 which is as close as possible to γfilt (which depends on the plant P and the choice of the weighting filters V and W). To find such a γ larger than or equal to the
unknown and optimal γﬁlt is the subject of Chapter 8, but what is important here is that
the resulting sensitivity satisﬁes (6.11).
By incorporating weighting ﬁlters to each input and output signal which is of interest
in the closedloop control conﬁguration, we arrive at extended conﬁguration diagrams such
as the one shown in Figure 6.6.
Figure 6.6: Extended configuration diagram
General guidelines on how to determine input and output weightings can not be given,
for each application requires its own performance speciﬁcations and apriori information
on signals. Although the choice of weighting ﬁlters inﬂuences the overall controller design,
the choice of an appropriate ﬁlter is to a large extent subjective. As a general warning,
however, one should try to keep the filters of as low a degree as possible, because the order of a controller C that achieves inequality (6.12) is, in general, equal to the sum of the order of the plant P and the orders of all input weightings V and output weightings W.
The complexity of the resulting controller is therefore directly related to the complexity
of the plant and the complexity of the chosen ﬁlters. High order ﬁlters lead to high order
controllers, which may be undesirable.
More about appropriate weighting filters and their interactive effects on the final solution in a complicated scheme such as Fig. 6.6 follows in the next chapters.
6.2 Robust stabilization of uncertain systems
6.2.1 Introduction
The theory of H∞ control design is model based. By this we mean that the design of a
controller for a system is based on a model of that system. In this course we will not
address the question how such a model can be obtained, but any modeling procedure
will, in practice, be inaccurate. Depending on our modeling eﬀorts, we can in general
expect a large or small discrepancy between the behavior of the (physical) system which
we wish to control and the mathematical model we obtained. This discrepancy between
the behavior of the physical plant and the mathematical model is responsible for the fact that a controller which we designed optimally on the basis of the mathematical model need not fulfill our optimality expectations once the controller is connected to the physical system. It is easy to give examples of systems in which arbitrarily small variations of plant parameters, in a stable closed loop system configuration, fully destroy the stability properties of the system.
Robust stability refers to the ability of a closed loop stable system to remain stable in the presence of modeling errors. In this chapter we analyze robust stability of a control system. We introduce various ways to represent model uncertainty and we will study to what extent these uncertainty descriptions can be taken into account to design robustly stabilizing controllers.

6.2.2 Modeling model errors

It may sound somewhat paradoxical to model the dynamics of a system which one deliberately decided not to take into account in the modeling phase. In practice, the design of controllers is often based on various iterations of the loop

data collection −→ modeling −→ controller design −→ validation

in which improvement of the performance of the previous iteration is the main aim. For this, one needs to have some insight in the accuracy of a mathematical model which represents the physical system we wish to control. There are many ways to do this:

• One can take a stochastic approach and attach a certain likelihood or probability to the elements of a class of models which are assumed to represent the unknown (often called 'true') system.

• One can define a class of models each of which is equally acceptable as a model of the unknown physical system.

• One can select one nominal model together with a description of its uncertainty, in terms of its parameters, its impulse response, its frequency response, etc.

For each of these possibilities a quantification of model uncertainty is necessary and essential for the design of controllers which are robust against those uncertainties. Various approaches are possible:

• model errors can be quantified in the time domain, in terms of perturbations of the impulse response;

• alternatively, model errors can be quantified in the frequency domain by analyzing perturbations of transfer functions or frequency responses.

We will basically concentrate on the latter in this chapter. Our purpose here will be to only provide upper bounds on modeling errors. For frequency domain model uncertainty descriptions one usually distinguishes two approaches, which lead to different research directions:

• Unstructured uncertainty: model uncertainty is expressed only in terms of upper bounds on errors of frequency responses. No more information on the origin of the modeling errors is used. In this case, the uncertain part of a process is modeled separately from the known (nominal) part.

• Structured uncertainty: apart from an upper bound on the modeling errors, also the specific structure of the uncertainty in parameters is taken into account. Typical examples include descriptions of variations in the physical parameters in a state space model.

For the analysis of unstructured model uncertainty in the frequency domain there are four main uncertainty models, which we briefly review.
Additive uncertainty

The simplest way to represent the discrepancy between the model and the true system is by taking the difference of their respective transfer functions:

Pt = P + ∆P   (6.15)

where P is the nominal model, Pt is the true or perturbed model and ∆P is the additive perturbation. In order to comply with notations in earlier chapters and to stress the relation with the multiplicative perturbation descriptions that follow, we use the notation ∆P as one mathematical object to display the perturbation of the nominal plant P. Additive perturbations are pictorially represented as in Figure 6.7.

Figure 6.7: Additive perturbations

Multiplicative uncertainty

Model errors may also be represented in the relative or multiplicative form. We distinguish the two cases

Pt = (I + ∆)P = P + ∆ · P = P + ∆P   (6.16)
Pt = P(I + ∆) = P + P · ∆ = P + P∆   (6.17)

where P is the nominal model, Pt is the true or perturbed model and ∆ is the relative perturbation. Equation (6.16) is used to represent output multiplicative uncertainty, whereas equation (6.17) represents input multiplicative uncertainty. The situations are depicted in Figure 6.8 and Figure 6.9, respectively. Note that for single input single output systems these two multiplicative uncertainty descriptions coincide. Note also that the products ∆ · P and P · ∆ in (6.16) and (6.17) can be interpreted as additive perturbations of P.

Input multiplicative uncertainty is well suited to represent inaccuracies of the actuator being incorporated in the transfer P. Analogously, the output multiplicative uncertainty is a proper means to represent noise effects of the sensor. (However, the real output y should still be distinguishable from the measured output y + η.)

Remark 6.3 We also emphasize that, at least for single input single output systems, the multiplicative uncertainty description leaves the zeros of the perturbed system invariant. The popularity of the multiplicative model uncertainty description is for this reason difficult to understand, for it is well known that an accurate identification of the zeros of a dynamical system is a nontrivial and very hard problem in system identification.
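For a SISO example the additive and relative descriptions are easily compared numerically. The sketch below (Python/NumPy; the nominal plant and the perturbed pole are hypothetical numbers chosen for the illustration) evaluates ∆P = Pt − P from (6.15) and ∆ = (Pt − P)/P from (6.16)/(6.17) on a frequency grid.

```python
import numpy as np

omegas = np.logspace(-2, 2, 1000)
s = 1j * omegas

P = 1 / (s + 1)         # nominal model
Pt = 1 / (s + 1.1)      # 'true' plant: a 10% perturbed pole (hypothetical)

delta_add = Pt - P          # additive perturbation Delta_P, eq. (6.15)
delta_rel = (Pt - P) / P    # relative perturbation Delta, eqs. (6.16)/(6.17)

# For SISO systems input and output multiplicative uncertainty coincide.
print(round(np.max(np.abs(delta_add)), 4))   # peak additive error
print(round(np.max(np.abs(delta_rel)), 4))   # peak relative error
```

For this example both errors peak at low frequency, while the relative error rolls off at high frequency together with the plant; which description is the more natural one depends on where in the loop the physical uncertainty enters.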
Figure 6.8: Output multiplicative uncertainty

Figure 6.9: Input multiplicative uncertainty

Feedback multiplicative uncertainty

In a few applications one encounters feedback versions of the multiplicative model uncertainties. They are defined by

Pt = (I + ∆)^{-1} P   (6.18)
Pt = P(I + ∆)^{-1}   (6.19)

and referred to as the output feedback multiplicative model error and the input feedback multiplicative model error, respectively. The situation of an output feedback multiplicative model error is depicted in Figure 6.10. Note that the sign of the feedback addition from ∆ is irrelevant, because the phase of ∆ will not be taken into account when considering norms of ∆. We will hardly use these uncertainty representations in this course, and mention them only for completeness.

Figure 6.10: Output feedback multiplicative uncertainty
CHAPTER 6. WEIGHTING FILTERS

Coprime factor uncertainty

Coprime factor perturbations have been introduced to cope with perturbations of unstable plants. Any (multivariable rational) transfer function P can be factorized as P = N D−1 in such a way that

• both N and D are stable transfer functions;
• D is square and N has the same dimensions as P;
• there exist stable transfer functions X and Y such that XN + Y D = I, which is known as the Bezout equation, Diophantine equation or even Aryabhatta's identity.

Such a factorization is called a (right) coprime factorization of P.

Remark 6.4 The terminology comes from number theory, where two integers n and d are called coprime if ±1 is their greatest common divisor. It follows that n and d are coprime if and only if there exist integers x and y such that xn + yd = 1.

Example 6.5 The scalar transfer function

    P(s) = (s − 1)(s + 2) / ((s − 3)(s + 4))

has a coprime factorization P(s) = N(s) D(s)−1 with

    N(s) = (s − 1)/(s + 4),    D(s) = (s − 3)/(s + 2).

A right coprime factorization has the following interpretation. Let P = N D−1 be a right coprime factorization of a nominal plant P. Then the input-output relation defined by P satisfies y = P u = N D−1 u = N v, where we defined v = D−1 u or, equivalently, u = D v. Now note that, since N and D are stable, the transfer function

    v → [N ; D] v = [y ; u]    (6.20)

is stable as well. Thus any element v in L2 generates via (6.20) an input-output pair (u, y) for which y = P u and, conversely, any pair (u, y) ∈ L2 satisfying y = P u is generated by plugging v = D−1 u into (6.20). We can thus interpret (6.20) as a way to generate all bounded-energy input-output signals u and y which are compatible with the plant P.

Coprime factor uncertainty refers to perturbations in the coprime factors N and D of P. Suppose that a nominal plant P is factorized as P = N D−1. We define a perturbed model

    Pt = (N + ∆N)(D + ∆D)−1    (6.21)

where ∆ := [∆N ; ∆D] reflects the perturbation of the coprime factors N and D of P. Figure 6.11 illustrates this right coprime uncertainty in a block scheme.
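The factorization in Example 6.5 is easy to sanity-check numerically: both factors must be stable and their quotient must reproduce P. A minimal sketch (the rational functions below are transcribed from the example as reconstructed here; any symbolic tool would do equally well):

```python
import numpy as np

def P(s):   # nominal plant of Example 6.5; note the unstable pole at s = 3
    return (s - 1) * (s + 2) / ((s - 3) * (s + 4))

def N(s):   # stable factor, pole at s = -4
    return (s - 1) / (s + 4)

def D(s):   # stable factor; its zero at s = 3 absorbs the unstable pole of P
    return (s - 3) / (s + 2)

# P = N D^{-1} must hold wherever both sides are defined
for s0 in [1j, 0.5 + 2j, -0.7, 10j]:
    assert abs(P(s0) - N(s0) / D(s0)) < 1e-12
print("P = N D^{-1} verified at sample points")
```

Both N and D have all poles in the left half plane, so the stacked transfer in (6.20) is indeed stable even though P itself is not.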
2. . Thus. whereas small values of γ allow for large deviations of P . ROBUST STABILIZATION OF UNCERTAIN SYSTEMS 73 ∆D . ∆ ∞ ≤ γ which depends on the particular model uncertainty structure. .11: Right coprime uncertainty. It is often required that the coprime factors should satisfy the normalization D∗ D + N ∗ N = I This deﬁnes the normalized right coprime factorization of P and it has the interpretation that the transfer deﬁned in (6. ∆N  6 u ? ? y .  D−1 N v Figure 6. in which case perturbed models Pt coincide with the nominal model P . that is. For a given nominal plant P this class of perturbations deﬁnes a class of perturbed plants 1 Pt .12 is called the stability margin of the system. The H∞ norm of the smallest (stable) perturbation ∆ which destabilizes the closedloop system of Figure 6. The robust stabilization problem amounts to ﬁnding a controller C so that the stability margin of the closed loop system is maximized.2. We will assume that the controller C stabilizes this system if ∆ = 0. The robust stabilization problem is therefore formalized as follows: 1 The reason for taking the inverse γ −1 as an upperbound rather than γ will turn useful later.3 The robust stabilization problem For each of the above types of model uncertainty. . we assume that the closedloop system is asymptotically stable for the nominal plant P .1 Large values of γ therefore allow for small upper bounds on the norm of ∆.6. Remark 6.12 where the plant P has been replaced by the uncertain plant Pt . Consider the feedback conﬁguration of Figure 6. 6. the perturbation ∆ is a transfer function which we assume to belong to a class of transfer functions with an upperbound on their H∞ norm. We can also turn this question into a control problem. .20) is all pass. An obvious question is how large ∆ ∞ can become before the closed loop system becomes unstable.6 It should be emphasized that the coprime factors N and D of P are by no means unique! 
A plant P admits many coprime factorizations P = N D−1 and it is therefore useful to introduce some kind of normalization of the coprime factors N and D. we assume that 1 ∆ ∞ ≤ γ where γ ≥ 0. Note that if γ → ∞ then the H∞ norm of ∆ is required to be zero.
Figure 6.12: Feedback loop with uncertain system

Find a controller C for the feedback configuration of Figure 6.12 such that C stabilizes the perturbed plant Pt for all ‖∆‖∞ ≤ 1/γ with γ > 0 as small as possible (i.e., C makes the stability margin as large as possible). Such a controller is called robustly stabilizing, or optimally robustly stabilizing, for the perturbed systems Pt.

Since this problem can be formulated for each of the uncertainty descriptions introduced in the previous section, we can define four types of stability margins:

• The additive stability margin is the H∞ norm of the smallest stable ∆P for which the configuration of Figure 6.12 with Pt defined by (6.15) becomes unstable.
• The output multiplicative stability margin is the H∞ norm of the smallest stable ∆ which destabilizes the system in Figure 6.12 with Pt defined by (6.16).
• The input multiplicative stability margin is similarly defined with respect to equation (6.17).
• The coprime factor stability margin is analogously defined with respect to (6.21) and the particular coprime factorization of the plant P.

The main results with respect to robust stabilization of dynamical systems follow in a straightforward way from the celebrated small gain theorem. If we consider in the configuration of Figure 6.12 output multiplicative perturbed plants Pt = (I + ∆)P, then we can replace the block indicated by Pt by the configuration of Figure 6.8 to obtain the system depicted in Figure 6.13.

Figure 6.13: Robust stabilization for multiplicative perturbations

To study the stability properties of this system we can equivalently consider the system of Figure 6.14, in which M is the system obtained from Figure 6.13 by setting r = 0 and 'pulling out' the uncertainty block ∆.
Figure 6.14: Small gain configuration

The stability properties of the configuration of Figure 6.13 are determined by the small gain theorem (Zames, 1966):

Theorem 6.7 (Small gain theorem) Suppose that the systems M and ∆ are both stable. Then the autonomous system determined by the feedback interconnection of Figure 6.14 is asymptotically stable if ‖M∆‖∞ < 1.

For the case of output multiplicative perturbations, M maps the signal w to v and the corresponding transfer function is easily seen to be

    M = T = PC(I + PC)−1,

i.e., M is precisely the complementary sensitivity transfer function.² Since we assumed that the controller C stabilizes the nominal plant P, it follows that M is a stable transfer function. For a given controller C the small gain theorem therefore guarantees the stability of the closed loop system of Figure 6.14 (and thus also of the system of Figure 6.13) provided ∆ is stable and satisfies, for all ω ∈ R,

    σ̄(M(jω)∆(jω)) < 1,

independent of the perturbation ∆ but dependent on the choice of the controller C. For SISO systems this translates into a condition on the absolute values of the frequency responses of M and ∆: for all ω ∈ R we should have that

    σ̄(M∆) = |M∆| = |M||∆| = σ̄(M)σ̄(∆)

(where we omitted the argument jω in each transfer) to guarantee the stability of the closed loop system. For MIMO systems we obtain, by using the singular value decompositions M = YM ΣM UM* and ∆ = Y∆ Σ∆ U∆*, that for all ω ∈ R

    σ̄(M∆) = σ̄(YM ΣM UM* Y∆ Σ∆ U∆*) ≤ σ̄(M)σ̄(∆)

(where again every transfer is supposed to be evaluated at jω), and the maximum is reached for Y∆ = UM, which can always be accomplished without affecting the constraint ‖∆‖∞ < 1/γ. Hence, to obtain robust stability we have to guarantee, for both SISO and MIMO systems, that

    σ̄[M(jω)] σ̄[∆(jω)] < 1

² As in chapter 3, actually M = −T, but the sign is irrelevant as it can be incorporated in ∆.
for all ω ∈ R. Stated otherwise,

    σ̄[∆(jω)] < 1 / σ̄[M(jω)]    (6.22)

for all ω ∈ R.

6.2.4 Robust stabilization: main results

Robust stabilization under additive perturbations

Carrying out the above analysis for the case of additive perturbations leads to the following main result on robust stabilization in the presence of additive uncertainty.

Theorem 6.8 (Robust stabilization with additive uncertainty) A controller C stabilizes Pt = P + ∆P for all ‖∆P‖∞ < 1/γ if and only if
• C stabilizes the nominal plant P, and
• ‖C(I + PC)−1‖∞ ≤ γ.

Remark 6.9 Note that the transfer function R = C(I + PC)−1 is the control sensitivity of the closed-loop system. The control sensitivity of a closed-loop system therefore reflects the robustness properties of that system under additive perturbations of the plant!

The interpretation of this result is as follows:

• The smaller the norm of the control sensitivity, the greater will be the norm of the smallest destabilizing additive perturbation. The additive stability margin of the closed loop system is therefore precisely the inverse of the H∞ norm of the control sensitivity,

    1 / ‖C(I + PC)−1‖∞.

• If we like to maximize the additive stability margin of the closed loop system, then we need to minimize the H∞ norm of the control sensitivity R(s)!

Theorem 6.8 can be refined by considering for each frequency ω ∈ R the maximal allowable perturbation ∆P which makes the system of Figure 6.12 unstable. If we assume that C stabilizes the nominal plant P, then the small gain theorem and (6.22) yield that for all additive stable perturbations ∆P for which

    σ̄[∆P(jω)] < 1 / σ̄[R(jω)]

the closed-loop system is stable. Furthermore, there exists a perturbation ∆P right on the boundary (and certainly beyond), with

    σ̄[∆P(jω)] = 1 / σ̄[R(jω)],

which destabilizes the system of Figure 6.12.
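The small-gain reasoning above is straightforward to check numerically: grid the imaginary axis and verify σ̄[M(jω)] σ̄[∆(jω)] < 1. A SISO sketch for the output multiplicative case M = T (plant, controller and perturbation are purely illustrative choices):

```python
import numpy as np

P = lambda s: 1.0 / (s + 1)        # illustrative stable plant
C = lambda s: 5.0                  # proportional controller
Delta = lambda s: 0.1 / (s + 2)    # a stable multiplicative perturbation

w = np.logspace(-3, 3, 500)
s = 1j * w
L = P(s) * C(s)
M = L / (1 + L)                    # M = T = PC(I + PC)^{-1}
gain = np.abs(M) * np.abs(Delta(s))  # in the SISO case sigma_max is just |.|

assert gain.max() < 1              # small gain condition holds on the grid
print("max |M||Delta| =", gain.max())
```

Such a sweep is of course only a sampled check of the condition; between grid points the usual engineering care (dense enough grid around resonances) applies.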
Moreover. If we assume that C stabilizes the nominal plant P then all stable output multiplicative perturbations ∆ for which 1 σ ¯ [∆(jω)] < σ ¯ [T (jω)] leave the closedloop system stable. the main results are as follows Theorem 6. Theorem 6. there exists a perturbation ∆ right on the boundary.12 unstable.12.6. . The output multiplicative stability margin of the closed loop system is therefore the inverse of the H∞ norm of the complementary sensitivity 1 P C(I + P C)−1 ∞ • By minimizing the H∞ norm of the complementary sensitivity T (s) we achieve a closed loop system which is maximally robust against output multiplicative pertur bations. Remark 6. The complementary sensitivity of a closedloop system therefore reﬂects the robustness properties of that system under multiplicative perturbations of the plant! The interpretation of this result is similar to the foregoing robustness theorem: • The smaller the norm of the complementary sensitivity T (s).11 We recognize the transfer function T = P C(I + P C)−1 = I − S to be the complementary sensitivity of the closedloop system. Robust stabilization under feedback multiplicative perturbations For feedback multiplicative perturbations. ROBUST STABILIZATION OF UNCERTAIN SYSTEMS 77 Robust stabilization under multiplicative perturbations For multiplicative perturbations. and reads as follows for the class of output multiplicative perturbations. Theorem 6. 
the main result on robust stabilization also follows as a direct consequence of the small gain theorem.12 (Robust stabilization with feedback multiplicative uncertainty) A controller C stabilizes Pt = (I + ∆)−1 P for all ∆ ∞ < γ1 if and only if • C stabilizes the nominal plant P • (I + P C)−1 ∞ ≤ γ.10 (Robust stabilization with multiplicative uncertainty) A controller C stabilizes Pt = (I + ∆)P for all ∆ ∞ < γ1 if and only if • C stabilizes the nominal plant P • P C(I + P C)−1 ∞ ≤ γ.2.10 can also be reﬁned by considering for each frequency ω ∈ R the maximal allowable perturbation ∆ which makes the system of Figure 6. so: 1 ¯ [∆(jω)] ≤ σ σ ¯ [T (jω)] which destabilizes the system of Figure 6. the greater will be the norm of the smallest destabilizing output multiplicative perturbation.
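Theorems 6.8 and 6.10 can be illustrated on a frequency grid: the additive and output multiplicative stability margins are the inverse peaks of the control sensitivity R and the complementary sensitivity T, respectively. A SISO sketch (plant and controller chosen only for illustration):

```python
import numpy as np

P = lambda s: 1.0 / (s - 1)   # unstable illustrative plant
C = lambda s: 4.0             # stabilizing gain: 1 + PC = (s + 3)/(s - 1)

w = np.logspace(-3, 4, 4000)
L = P(1j * w) * C(1j * w)
R = C(1j * w) / (1 + L)       # control sensitivity C(I + PC)^{-1}
T = L / (1 + L)               # complementary sensitivity PC(I + PC)^{-1}

add_margin = 1.0 / np.abs(R).max()    # Theorem 6.8: additive margin ~ 1/||R||_inf
mult_margin = 1.0 / np.abs(T).max()   # Theorem 6.10: multiplicative margin ~ 1/||T||_inf
print("additive margin ~", add_margin, " multiplicative margin ~", mult_margin)
```

Here R = 4(s−1)/(s+3) and T = 4/(s+3), so ‖R‖∞ = 4 (approached at high frequency) and ‖T‖∞ = 4/3 (at ω = 0): the additive margin is 1/4 and the multiplicative margin 3/4.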
Remark 6.13 We recognize the transfer function S = (I + PC)−1 = I − T to be the sensitivity of the closed-loop system. The interpretation of this result is similar to that of the foregoing robustness theorems and is not included here.

6.2.5 Robust stabilization in practice

The robust stabilization theorems of the previous section can be used in various ways.

• If there is no a-priori information on model uncertainty, then the frequency responses of the control sensitivity (σ̄[R(jω)]), the complementary sensitivity (σ̄[T(jω)]) and the sensitivity (σ̄[S(jω)]) provide precise information about the maximal allowable perturbations σ̄[∆(jω)] for which the controlled system remains asymptotically stable under (respectively) additive, multiplicative and feedback multiplicative perturbations of the plant P. Graphically, we can get insight in the magnitude of these admissible perturbations by plotting the curves

    δadd(ω) = 1 / σ̄[R(jω)]
    δmult(ω) = 1 / σ̄[T(jω)]
    δfeed(ω) = 1 / σ̄[S(jω)]

for all frequencies ω ∈ R (which corresponds to 'mirroring' the frequency responses of σ̄[R(jω)], σ̄[T(jω)] and σ̄[S(jω)] around the 0 dB axis). The curves δadd(ω), δmult(ω) and δfeed(ω) then provide an upper bound on the allowable additive, multiplicative and feedback multiplicative perturbations per frequency ω ∈ R.

• If, on the other hand, the information about the maximal allowable uncertainty of the plant P has been specified in terms of one or more of the curves δadd(ω), δmult(ω) or δfeed(ω), then we can use these specifications to shape the frequency response of either R(jω), T(jω) or S(jω) using the filtering techniques described in the previous chapter. Specifically, let us suppose that a nominal plant P is available together with information about the maximal multiplicative model error δmult(ω) for ω ∈ R. We can then interpret δmult as the frequency response of a weighting filter with transfer function V(s), i.e. |V(jω)| = δmult(ω). The set of all allowable multiplicative perturbations of the nominal plant P is then given by V∆, where V is the chosen weighting filter with frequency response δmult and ∆ is any stable transfer function with ‖∆‖∞ < 1. Pulling out the transfer matrix ∆ from the closed-loop configuration (as in the previous section) now yields a slight modification of the formulas in Theorem 6.10: a controller C achieves robust stability against this class of perturbations if and only if it stabilizes P (of course) and

    ‖PC(I + PC)−1 V‖∞ = ‖TV‖∞ ≤ 1,

so as to bound the H∞ norm of the weighted complementary sensitivity TV by one. The latter expression is a constraint on the H∞ norm of the weighted complementary sensitivity! We therefore need to consider the H∞ optimal control design problem.
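Once a weighting filter V has been chosen, the robust stability test ‖TV‖∞ ≤ 1 is again a frequency sweep. A sketch (plant, controller and the uncertainty bound V are all illustrative assumptions):

```python
import numpy as np

P = lambda s: 2.0 / (s + 1)
C = lambda s: 3.0
V = lambda s: 0.5 * (s + 0.1) / (s + 10)   # |V(jw)| models delta_mult(w): small at low
                                           # frequencies, about 0.5 at high frequencies

w = np.logspace(-3, 4, 3000)
s = 1j * w
L = P(s) * C(s)
T = L / (1 + L)                             # complementary sensitivity
peak = np.abs(T * V(s)).max()               # approximates ||T V||_inf on the grid

assert peak <= 1.0                          # robust against all stable ||Delta||_inf < 1
print("||T V||_inf ~", peak)
```

The typical shape of V (small where the model is trusted, large where it is not) is exactly what makes T roll off where the uncertainty grows.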
Figure 6.15: Filtered additive perturbation

In a more general and maybe more familiar setting we can quantify our knowledge concerning the additive model error by means of pre- and post-filters V and W, as schematized in Fig. 6.15; in this case the additive model error is ∆P = V∆W. If ∆ satisfies the norm constraint ‖∆‖∞ < 1, then for every frequency ω ∈ R we have that

    σ̄(∆P(jω)) ≤ σ̄(W(jω)) σ̄(V(jω)).

Clearly, 'pulling out' the transfer ∆ from the closed loop yields M = WRV. Consequently, to fulfil the small gain constraint the control sensitivity R needs to satisfy

    ‖WRV‖∞ ≤ 1.

So the next goal is to synthesize controllers which accomplish this upper bound. This problem will be discussed in forthcoming chapters.

6.2.6 Exercises

1. Derive a robust stabilization theorem in the spirit of Theorem 6.10 for
(a) the class of input multiplicative perturbations;
(b) the class of input feedback multiplicative perturbations.

2. Consider the 2 × 2 system described by the transfer matrix

    P(s) = [ (−47s + 2)/((s + 1)(s + 2))    56s/((s + 1)(s + 2)) ;
             −42s/((s + 1)(s + 2))          (50s + 2)/((s + 1)(s + 2)) ]    (6.23)

The controller for this system is the diagonal constant gain matrix

    C(s) = [ 1  0 ;  0  1 ]    (6.24)

We consider the usual feedback configuration of plant and controller.

(a) Determine the H∞ norm of P. At which frequency ω is the norm ‖P‖∞ attained?
Hint: First compute a state space representation of P by means of the conversion algorithm tfm2ss ('transfer-matrix-to-state-space'). Read the help information carefully!
(The denominator polynomial is the same as in Exercise 2; the numerator polynomials are represented in one matrix, the first row being [0 47 2], the second [0 42 0], etc.) Once you have a state space representation of P you can read its H∞ norm from a plot of the singular values of P. Use the routine sigma.

(b) Use (6.24) as a controller for the plant P and plot the singular values of the closed-loop control sensitivity C(I + PC)−1 to investigate the robust stability of this system. Determine the robust stability margin of the closed loop system under additive perturbations of the plant.
Hint: Use the MATLAB routine feedbk with the right 'type' option as in exercise 6.1 to construct a state space representation of the control sensitivity, and use sigma to read H∞ norms of multivariable systems.

(c) Make a plot of the singular values (as function of frequency) of the complementary sensitivity PC(I + PC)−1 of the closed loop system.

3. Consider the linearized system of an unstable batch reactor described by the state space model

    ẋ = [  1.38    −0.2077   6.715   −5.676 ;
          −0.5814  −4.29     0        0.675 ;
           1.067    4.273   −6.654    5.893 ;
           0.048    4.273    1.343   −2.104 ] x
       + [ 0  0 ;  5.679  0 ;  1.136  −3.146 ;  1.136  0 ] u

    y = [ 1  0  1  −1 ;  0  1  0  0 ] x

(a) Verify (using MATLAB!) that the input-output system defined by this model is unstable.

(b) Consider the controller with transfer function

    C(s) = [ 0  −2 ;  5  0 ] + (1/s) [ 0  −2 ;  8  0 ]

Using MATLAB, interconnect the controller with the given plant and show that the corresponding closed loop system is stable; compute the closed-loop poles of this system. Recall that the closed loop poles are the eigenvalues of the 'A' matrix in any minimal representation of the closed loop system. See also the routine minreal.

(c) Consider the perturbed controller obtained by premultiplying C(s) of part (b) with the gain matrix [ 1.13  0 ;  0  0.88 ], and compute the closed-loop poles of this system. Conclusion?
Hint: Use again the procedure feedbk to obtain a state space representation of the closed loop system.

(d) What are your conclusions concerning robust stability of the closed loop system?
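The MATLAB routines mentioned in the hints (tfm2ss, sigma, feedbk, minreal) belong to the course's toolchain. As a language-neutral illustration of what the singular-value sweep of Exercise 2(a) computes, here is a small Python sketch that evaluates σ̄[P(jω)] of (6.23) directly on a frequency grid (only the plant data is taken from the exercise; the grid itself is an arbitrary choice):

```python
import numpy as np

def P(s):
    """Pointwise evaluation of the 2x2 transfer matrix (6.23)."""
    d = (s + 1) * (s + 2)                       # common denominator
    return np.array([[(-47 * s + 2) / d, 56 * s / d],
                     [-42 * s / d, (50 * s + 2) / d]])

w = np.logspace(-2, 3, 2000)
sv = [np.linalg.svd(P(1j * wi), compute_uv=False)[0] for wi in w]  # largest sing. value
k = int(np.argmax(sv))
print("||P||_inf ~", sv[k], " attained near w =", w[k])
```

Note that P(0) = I, so σ̄[P(0)] = 1, while the peak over frequency is far larger: the H∞ norm of a MIMO system is a property of the whole matrix, not of its individual entries.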
Chapter 7

General problem.

Now that we have prepared all necessary ingredients in the past chapters, we are ready to compose the general problem in such a structure that the problem is well defined and the solution therefore straightforward to obtain. We will start with a formal exposition and definitions, and next illustrate these by examples.

7.1 Augmented plant.

In Fig. 7.1 the augmented plant is schematised. The augmented plant contains, beyond the process model, all the filters for characterising the inputs and weighting the penalised outputs, as well as the model error lines. In order not to confuse the inputs and outputs of the augmented plant with those of the internal blocks, we will indicate the former ones in bold face.

All exogenous inputs are collected in w; these are the L2-bounded signals entering the shaping filters that yield the actual input signals, such as references, disturbances, system perturbation signals, sensor noise and the like. The output signals that have to be minimised in L2-norm, and that result from the weighting filters, are collected in z; they refer to (weighted) tracking errors, actuator inputs, model error block inputs etc. The output y contains the actually measured signals that can be used as inputs for the controller block K, which is applied to the augmented system with transfer function G(s), the so-called augmented plant. The output u of K in turn functions as the control input of the augmented plant.

Figure 7.1: Augmented plant (exogenous input w, output to be controlled z, control input u, measured output y; blocks G(s) and K(s)).

Consequently, in the s-domain we may write the augmented plant in the following, properly partitioned form:

    [z ; y] = [G11  G12 ;  G21  G22] [w ; u]    (7.1)

while:
    u = Ky    (7.2)

denotes the controller. Eliminating u and y yields:

    z = [G11 + G12 K(I − G22 K)−1 G21] w =: M(K) w    (7.3)

An expression like (7.3) in K will be met very often and has got the name linear fractional transformation, abbreviated as LFT.

Our combined control aim requires:

    min_{K stabilising} sup_{w ∈ L2} ‖z‖2 / ‖w‖2 = min_{K stabilising} ‖M(K)‖∞    (7.4)

as the H∞ norm is the induced operator norm for functions mapping L2-signals to L2-signals. Of course we have to check whether stabilising controllers indeed exist. This can best be analysed when we consider a state space description of G in the following stylised form:

    G : [ A  B1  B2 ;  C1  D11  D12 ;  C2  D21  D22 ] ∈ R^((n+[z]+[y]) × (n+[w]+[u]))    (7.5)

where n is the dimension of the state space of G, while [.] indicates the dimension of the enclosed vector. It is evident that the unstable modes (i.e. canonical states) have to be reachable from u so as to guarantee the existence of stabilising controllers; this means that the pair {A, B2} needs to be stabilisable. The controller is only able to stabilise if it can conceive all information concerning the unstable modes, so that it is necessary to require that {A, C2} is detectable. Summarising: there exist stabilising controllers K(s) iff the unstable modes of G are both controllable by u and observable from y, which is equivalent to requiring that {A, B2} is stabilisable and {A, C2} is detectable.

An illustrative example: sensitivity.

Consider the structure of Fig. 7.2. For the moment we take the reference signal r equal to zero and forget about the model error, because we want to focus first on exclusively one performance measure. The output disturbance n is characterised by the filter Vn acting on an exogenous signal ñ belonging to L2. We would like to minimise ỹ,

Figure 7.2: Mixed sensitivity structure.
the disturbance in the output y weighted by the filter Wy, so that equation (7.4) turns into:

    min_{C stabilising} sup_{ñ ∈ L2} ‖ỹ‖2 / ‖ñ‖2 = min_{C stabilising} ‖Wy (I + PC)−1 Vn‖∞    (7.6)
    = min_{C stabilising} ‖Wy S Vn‖∞    (7.7)

In the general setting of the augmented plant, the structure would be as displayed in Fig. 7.3.

Figure 7.3: Augmented plant for sensitivity alone.

The corresponding signals and transfers can be represented as:

    [ỹ ; −e] = G [ñ ; x],    G = [ Wy Vn   Wy P ;  Vn   P ],    K = −C    (7.8)

It is a trivial exercise to substitute the entries Gij into equation (7.3), yielding the same M as in equation (7.6). Along similar lines we could go through all kinds of separate and isolated criteria (as is done in the exercises!). However, we are not so much interested in single criteria but much more in conflicting and combined criteria.

7.2 Combining control aims.

This is usually realised as follows. If we have several transfers, properly weighted, they can be taken as entries mij in a composed matrix M like:

    M = [ m11  m12  ··· ;  m21  ...  ;  ... ]    (7.9)

It can be proved that:

    ‖mij‖∞ ≤ ‖M‖∞    (7.10)
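For constant matrices the H∞ norm reduces to the largest singular value, and (7.10) can then be checked directly; a small numeric illustration (the matrices are chosen arbitrarily):

```python
import numpy as np

sig = lambda A: np.linalg.svd(np.atleast_2d(A), compute_uv=False)[0]  # largest singular value

M = np.array([[1.0, 0.2],
              [0.3, 0.8]])
# every entry m_ij is bounded by the norm of the composed matrix, cf. (7.10)
assert all(abs(M[i, j]) <= sig(M) for i in range(2) for j in range(2))

# the bound is not tight: stacking two blocks of norm 1 already gives norm sqrt(2)
Mrow = np.array([[1.0, 1.0]])
print("sig(M) =", sig(M), " sig([1 1]) =", sig(Mrow))  # sig([1 1]) = sqrt(2)
```

The same inequalities carry over frequency by frequency to transfer matrices, which is exactly how the composed criterion bounds each subcriterion.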
Consequently the condition

    ‖M‖∞ < 1    (7.11)

is sufficient to guarantee that

    ∀ i, j : ‖mij‖∞ < 1    (7.12)

So the ‖·‖∞ of the full matrix M bounds the ‖·‖∞ of the various entries. Certainly, it is not a necessary condition, as can be seen from the example M = (m1 m2):

    if ‖mi‖∞ ≤ 1 for i = 1, 2    (7.13)

then ‖M‖∞ ≤ √2, so that it is advisable to keep the composed matrix M as small as possible.

The most trivial example is the so-called mixed sensitivity problem, as represented in Fig. 7.2. The reference r is kept zero again, so that we have only one exogenous input, viz. ñ. From Fig. 7.2 we also learn that we can think of the additive weighted model error ∆0 as connected between x̃ and ñ. By proper choice of Vn the disturbance can be characterised, and the filter Wx should guard the saturation range of the actuator in P. So we have one exogenous input ñ and two outputs ỹ and x̃, which yield a two-block augmented system transfer to minimise:

    ‖M‖∞ = ‖[ Wy S Vn ;  Wx R Vn ]‖∞    (7.14)

The corresponding augmented problem setting is given in Fig. 7.4 and described by the generalised transfer function G as follows:

    [ỹ ; x̃ ; −e] = G [ñ ; x],    G = [ Wy Vn   Wy P ;  0   Wx ;  Vn   P ]    (7.15)

Figure 7.4: Augmented plant for the mixed sensitivity problem.
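All closed-loop transfers of the augmented-plant formulation arise from the LFT (7.3). For constant (or frequency-sampled) matrices the computation is a one-liner; a minimal sketch with arbitrarily chosen scalar blocks:

```python
import numpy as np

def lft(G11, G12, G21, G22, K):
    """Lower LFT: M(K) = G11 + G12 K (I - G22 K)^{-1} G21, cf. (7.3)."""
    I = np.eye(G22.shape[0])
    return G11 + G12 @ K @ np.linalg.solve(I - G22 @ K, G21)

# tiny numeric example: all blocks scalar
G11, G12, G21, G22 = (np.array([[1.0]]), np.array([[2.0]]),
                      np.array([[3.0]]), np.array([[0.5]]))
K = np.array([[0.4]])
M = lft(G11, G12, G21, G22, K)
print(M[0, 0])  # 1 + 2*0.4*(1 - 0.5*0.4)^{-1}*3 = 4.0
```

Evaluating the Gij blocks of (7.8) or (7.15) at jω and applying this routine per frequency reproduces exactly the weighted sensitivities above.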
So here we have a mixed sensitivity criterion in M, where the lower term puts a constraint in terms of the control sensitivity, while the upper term aims for a high performance in terms of the sensitivity. Consequently, if we end up with

    ‖Wx R Vn‖∞ ≤ ‖M‖∞ < γ    (7.16)

we can guarantee robust stability for

    ‖Vn−1 ∆P Wx−1‖∞ = ‖∆0‖∞ < 1/γ    (7.17)

or, equivalently,

    ∀ ω ∈ R : |∆P(jω)| < |Vn(jω)| |Wx(jω)| / γ    (7.18)

So by proper choice of Vn and Wx we can combine saturation and robust stability.

7.3 Mixed sensitivity problem.

In general, such a mixed sensitivity can be described as:

    ‖ [ W1 S V1 ;    ← performance
        W2 N V2 ] ‖∞   ← robustness    (7.19)

In the lower term robustness is guarded by N, which is either T or R. It can in fact always be supposed to be T, as the difference with R lies only in the weighting by P, which can be brought into the weighting filters W2 and V2. As the lower term puts a hard constraint, while the upper term is a control aim that should be optimised under that constraint, the general way to solve this runs along the following lines:

1. Choose weights W2 and V2 such that robustness is obtained for:

    ‖W2 N V2‖∞ < 1    (7.20)

2. Choose W1 and V1 for the expected, obtainable performance.

3. Compute a stabilising controller C (see chapter 13) such that:

    ‖ [ W1 S V1 ;  W2 N V2 ] ‖∞ < γ    (7.21)

where γ is as small as possible, based on desired and obtainable numerical accuracy.

4. If γ > 1, decrease W1 and/or V1 in gain and/or frequency band in order to relax the performance aim, thereby giving more room to satisfy the robustness constraint. Go back to step 3.

5. If γ < 1 − ε, where ε is some small number, there is obviously some room left for improving the performance, so that we may tighten the weights W1 and V1 by increasing gains and/or bands. Next repeat step 3.

6. If 1 − ε < γ < 1, stop the above iteration process and evaluate the result by studying the Bode plots of S and N, step responses, simulations, and watching for possible actuator saturation. If the result is not satisfactory, repeat the above iteration process after having adjusted the weighting filters.
7. If the order of the controller is too high, some model order reduction method may be applied.

8. Check whether, due to the order reduction of the controller, the total performance has not been degraded beyond an acceptable level and, if it has, adapt step 7.

7.4 A simple example.

Consider the tracking problem of Fig. 7.5 as a single criterion, with only one filter and no complications like disturbance and measurement noise, in order to be able to easily compute and analyse the solution. The plant P is also supposed SISO. The ideal controllers are computed, which will turn out to be non-proper. (Do not try to solve this yourself; the solutions can be found in [11].)

Figure 7.5: Tracking problem structure.

Trivially, the control criterion is:

    inf_{C stabilising} sup_{‖r̃‖2 ≤ 1} ‖e‖2 = inf_{C stabilising} ‖M‖∞    (7.22)
    = inf_{C stabilising} ‖S V‖∞ = inf_{C stabilising} ‖ V / (1 + PC) ‖∞    (7.23)

Then we know from the maximum modulus principle (see chapter 4):

    sup_ω |M(jω)| = sup_{s ∈ C+} |M(s)| = sup_{s ∈ C+} | V(s) / (1 + P(s)C(s)) |    (7.24)

As we have learned, the poles and zeros of the plant P in the left half plane cause no problems; neither do the unstable poles in the right half plane. Really troublesome are the zeros in the closed right half plane: the peaks in C+ will occur for the extrema in S = (1 + PC)−1, where P(bi) is zero. Let these zeros be given by bi, i = 1, 2, .... These zeros put the bounds, and it can be proved that a controller can be found such that:

    ‖M‖∞ = max_i |V(bi)|    (7.25)

If there exists only one right half plane zero b, the corresponding optimal closed-loop transfer for the ideal controller is given by:

    M∞ = V(b)    (7.26)

For comparison we can also optimise the 2-norm by a controller C2, analogously yielding:

    M2 = V(b) · 2b / (s + b)    (7.27)

In practice we can therefore only apply these ideal, non-proper controllers in a sufficiently broad band. For higher frequencies we have to attenuate the controller transfer by adding a sufficient number of poles to accomplish the so-called roll-off.
as displayed in the approximate Bode diagram of Fig. 7.6. Notice that M∞ is an all-pass function: if somewhere on the frequency axis there were a little hill for |M|, whose top determines the ∞-norm, optimisation could still be continued to lower this peak, but at the cost of an increase of the bottom line until the total transfer were flat again. We also note that this could never be the solution of the 2-norm problem, as the integration of this constant level |M∞| from ω = 0 till ω = ∞ would result in an infinitely large value.

Figure 7.6: Bode plot of tracking solution M(K).

Nevertheless, if we study the real goal, the sensitivity, the H2 solution has another advantage here. Therefore we have to define the shaping filter V that characterises the type of reference signals that we may expect for this particular tracking system. Suppose e.g. that the reference signals live in a low-pass band till ω = a, so that we could choose the filter V as:

    V(s) = a / (s + a),    a > 0    (7.28)

Since S = M V−1, the corresponding sensitivities can be displayed in a Bode diagram as in Fig. 7.7. It turns out that, contrary to |S2|, |S∞| approaches infinity for increasing ω. (From this alone we may conclude that the ideal controller must be non-proper.) Remember that we still study the solution for the ideal, non-proper controllers. H2 accepts the extra costs in the low-pass band in order to obtain a large advantage after the corner frequency ω = b. This effect is known as the waterbed effect.

Figure 7.7: Bode plot of tracking solution S.

Is this increasing sensitivity disastrous? Not in the ideal situation, where we did not expect any reference signal components for these high frequencies. Usually, however, this is bad behaviour, as we necessarily require that T be small at high frequencies, and because S + T = 1, inevitably:

    lim_{ω→∞} |S∞| = ∞  ⇒  lim_{ω→∞} |T∞| = lim_{ω→∞} |1 − S∞| = ∞    (7.29)

Consequently, robustness and saturation requirements will certainly be violated. But it is no use complaining, as these requirements were not included in the criterion after all. Inclusion can indeed improve the solution in these respects, but we then have to pay with a worse sensitivity in the low-pass band, like in the H2 solution. This is another waterbed effect.

7.5 The typical compromise

A typical weighting situation for the mixed sensitivity problem is displayed in Fig. 7.8: W1V1 is low-pass and W2V2 is high-pass. Suppose the constraint is on N = T, so that (7.19) puts a constraint on T. Suppose also that, by readjusting the weights W1, V1, we have indeed obtained:

    inf_{K stabilising} ‖M(K)‖∞ = γ ≈ 1    (7.30)

Then certainly:

    ‖W1 S V1‖∞ < 1  ⇒  ∀ ω : |S(jω)| < |W1(jω)|−1 |V1(jω)|−1    (7.31)
    ‖W2 T V2‖∞ < 1  ⇒  ∀ ω : |T(jω)| < |W2(jω)|−1 |V2(jω)|−1    (7.32)

as exemplified in Fig. 7.8. Now it is crucial that the point of intersection of the curves ω → |W1(jω)V1(jω)| and ω → |W2(jω)V2(jω)| lies below the 0 dB level. Otherwise there would be a conflict with S + T = 1 and there would be no solution! Consequently, heavily weighted bands (> 0 dB) for S and T should always exclude each other. This is the basic effect that dictates how model uncertainty and actuator saturation, represented in the constraint on T, ultimately bound the obtainable tracking and disturbance reduction band represented in the performance measure S.

Figure 7.8: Typical mixed sensitivity weights.
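The waterbed behaviour of the ideal H∞ tracking solution of section 7.4 can be made concrete: with V(s) = a/(s + a) as in (7.28) and the flat all-pass optimum M∞ = V(b) of (7.26), the sensitivity S∞ = M∞ V−1 grows without bound for increasing ω. A numeric sketch (the values of a and b are arbitrary):

```python
import numpy as np

a, b = 1.0, 2.0                       # filter corner frequency and RHP zero (illustrative)
V = lambda s: a / (s + a)             # shaping filter (7.28)
M_opt = V(b)                          # flat level a/(a + b) of the all-pass H-inf optimum

w = np.array([0.1, 1.0, 10.0, 100.0])
S = M_opt / np.abs(V(1j * w))         # |S_inf(jw)| = |M_inf| / |V(jw)|

# |S_inf| increases monotonically and is unbounded: the waterbed effect
assert all(S[i] < S[i + 1] for i in range(len(S) - 1))
print("|S(jw)| at w =", w, ":", S)
```

For ω well above a the magnitude grows essentially linearly in ω, which is exactly why the ideal controller is non-proper and must be rolled off in practice.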
7.6 An aggregated example

So far only very simple situations have been analysed. If we deal with more complicated schemes, in which more control blocks can be distinguished, the main lines remain valid, but a higher appeal is made to one's creativity in combining control aims and constraints. As a straightforward example we take the standard control scheme with only an extra feedforward block, as sketched in Fig. 7.9. This so-called two degree of freedom controller offers more possibilities: tracking and disturbance reduction are now represented by different transfers, whereas before they were combined in the sensitivity.

Figure 7.9: A two degree of freedom controller (blocks Vr, Cff, Cfb, Po, Wu, Vv, We and the model error block ∆o).

Note also that the additive uncertainty ∆P is combined with the disturbance characterisation filter Vv and the actuator weighting filter Wu, such that ∆P = Vv ∆o Wu under the assumption:

    ∀ ω ∈ R : |∆o| ≤ 1  ⇒  |∆P| ≤ |Vv Wu|    (7.33)

By properly choosing Vv and Wu we can obtain robustness against the model uncertainty and at the same time prevent actuator saturation and minimise the disturbance. Certainly, we then have to design the two filters Vv and Wu for the worst-case bounds of these three control aims, and thus we likely have to exaggerate somewhere for each separate aim. Nevertheless, this is preferable above the choice of not combining them and instead adding more exogenous inputs and outputs. These extra inputs and outputs would increase the dimensions of the closed loop transfer M and, the more entries M has, the more conservative the bounding of the subcriteria defined by these entries will be, because we only have: if ‖M‖∞ < γ then ∀ i, j : ‖mij‖∞ < γ.
However, if we would know beforehand that say ||m_{i,j}||∞ < 1 for i ∈ 1, 2, ..., ni and j ∈ 1, 2, ..., nj, then the norm of the complete matrix ||M||∞ could still become max(ni, nj). Inversely, the bound for a particular subcriterion will mainly be effected if all other entries are zero. Ergo, it is advantageous to combine most control aims.

In Fig. 7.10 the augmented plant/controller configuration is shown for the two degree of freedom controlled system.

[Figure 7.10: Augmented plant/controller for the two degree of freedom controller, with exogenous inputs nv, nr, weighted outputs ẽ, ũ and controller inputs y, r.]

An augmented plant is generally governed by the following equations:

    [z]   [G11  G12] [w]
    [y] = [G21  G22] [u]                                                     (7.34), (7.35)

    u = K y                                                                  (7.36)

that take for the particular system the form:

    [ẽ]   [−We Vv   We Vr   −We Po] [nv]
    [ũ] = [   0        0       Wu ] [nr]                                     (7.37)
                                    [u ]

    [y]   [Vv    0    Po] [nv]
    [r] = [ 0   Vr     0] [nr]                                               (7.38)
                          [u ]

    u = [Cfb  Cff] [y]
                   [r]                                                       (7.39)

The closed loop system is then optimised by minimising:
    ||M||∞ = ||G11 + G12 K (I − G22 K)^{-1} G21||∞ = || [M11  M12; M21  M22] ||∞    (7.40)

and in particular:

    M = [ −We (I − Po Cfb)^{-1} Vv        We {I − (I − Po Cfb)^{-1} Po Cff} Vr ]
        [ Wu Cfb (I − Po Cfb)^{-1} Vv     Wu (I − Po Cfb)^{-1} Cff Vr         ]    (7.41)

which can be schematised as:

    sensitivity:           ẽ ← nv     tracking:          ẽ ← nr    (performance)
    stability robustness:  ũ ← nv     input saturation:  ũ ← nr    (constraints)   (7.42)

Suppose that we can manage to obtain:

    ||M||∞ < γ ≈ 1                                                                 (7.43)

then it can be guaranteed that ∀ω ∈ R:

    |(I − Po Cfb)^{-1}|                      < γ / |We Vv|
    |I − (I − Po Cfb)^{-1} Po Cff|           < γ / |We Vr|
    |Cfb (I − Po Cfb)^{-1}|                  < γ / |Wu Vv|
    |(I − Po Cfb)^{-1} Cff|                  < γ / |Wu Vr|                         (7.44)

The respective transfer functions at the left and the right side of the inequality signs can then be plotted in Bode diagrams for comparison, so that we can observe which constraints are the bottlenecks at which frequencies.
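For scalar blocks the lower LFT (7.40) collapses to a one-line formula, which makes it easy to sanity-check a closed loop by hand. The numbers below are made up purely for illustration.

```python
# Scalar sketch of the lower LFT (7.40): with 1x1 blocks the closed loop
# transfer reduces to M = G11 + G12*K*(1 - G22*K)^(-1)*G21.
def closed_loop(G11, G12, G21, G22, K):
    return G11 + G12 * K * G21 / (1 - G22 * K)

# Example: G22 = -P0 models the negative feedback around the plant, K = Cfb.
P0, Cfb = 2.0, 3.0
M = closed_loop(G11=1.0, G12=1.0, G21=1.0, G22=-P0, K=Cfb)
print(M)   # 1 + 3/(1 + 6) = 1.4285714...
```

The same formula, with matrices and `(I - G22 K)` inverted properly, produces all four entries of (7.41) at once.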
7.7 Exercise 6

[Block scheme: plant P with controllers C1 and C2, input shaping filter Vξ for the exogenous input ξ, reference r, and weighting filters We, Wx, Wz and Wy on the signals e, x, z and y.]

For the given block scheme we consider first SISO transfers from a certain input to a certain output. It is asked to compute the linear fractional transfer, to explain the use of the particular transfer, to name it (if possible) and finally to give the augmented plant in a block scheme and to express the matrix transfer G. Train yourself on the following transfers:

a) from ξ to ỹ (see the example 'sensitivity' in the lecture notes)
b) from r̃ to ẽ
c) from ξ to z̃ (two goals!)
d) from ξ to x̃ (two goals!)

The same for the following MIMO transfers:

e) from ξ to ỹ and z̃ (three goals!)

We now split the previously combined inputs in ξ into two inputs ξ1 and ξ2 with respective shaping filters V1 and V2:

f) from ξ1 and ξ2 to ỹ and z̃.

Also for the next scheme:

[Block scheme: plant P with controller C, reference shaping filter Vr and weighting filters Wx and We.]

g) from r̃ to x̃ and ẽ.
Chapter 8

Performance robustness and µ-analysis/synthesis

8.1 Robust performance

It has been shown how to solve a multiple criteria problem where also stability robustness is involved. But it is not since chapter 3 that we have discussed performance robustness, and then only in rather abstract terms, where a small S had to watch robustness for T and vice versa. It is time now to reconsider this issue, to quantify its importance and to combine it with the other goals. It will turn out that we have practically inadvertently incorporated this aspect already, as can be illustrated very easily with Fig. 8.1.

[Figure 8.1: Performance robustness translated into stability robustness.]

The left block scheme shows the augmented plant where the lines linking the model error block, named g and h, have been made explicit. When we incorporate the controller K, as shown in the right block scheme, the closed loop system M(K) also contains these lines. With the proper partitioning the total transfer can be written as:

    [g]   [M11  M12] [h]
    [z] = [M21  M22] [w]                                                           (8.1)

    h = ∆ g                                                                        (8.2)

We suppose that a proper scaling of the various signals has taken place, such that each of the output signals has 2-norm less than or equal to one, provided that each of the input components has 2-norm less than one. We can then make three remarks about the closed loop matrix M(K):

• Stability robustness. Because proper scaling was taken, it follows that stability robustness can be guaranteed according to:
    {||∆||∞ ≤ 1} ∩ {||M11(K)||∞ < 1}                                               (8.3)

So the ∞-norm of M11 determines robust stability.

• Nominal performance. Without model errors taken into account (i.e. ∆ = 0 and thus h = 0), ||z||2 can be kept less than 1 provided that:

    ||M22(K)||∞ < 1                                                                (8.4)

So the ∞-norm of M22 determines nominal performance.

• Robust performance. For robust performance we have to guarantee that ||z||2 stays below 1, irrespective of the model errors. That is, in the face of a signal h unequal to zero with ||h||2 ≤ 1, we require ||z||2 < 1. Like for stability robustness, this condition can be unambiguously translated into a stability condition by introducing a fancy feedback over a fancy block ∆p as:

    w = ∆p z :   {||∆p||∞ ≤ 1} ∩ {||M22(K)||∞ < 1}                                 (8.5)

There is now a complete symmetry and similarity in the two separate loops over ∆ and ∆p. If we now require that:

    ||M(K)||∞ < 1                                                                  (8.6)

we have a sufficient condition to guarantee that the performance is robust.

proof: From equation 8.6 we have:

    ||(g, z)||2 < ||(h, w)||2                                                      (8.7)

From ||∆||∞ ≤ 1 we may state:

    ||h||2 ≤ ||g||2                                                                (8.8)

Combination with the first inequality yields:

    ||(g, z)||2 < ||(g, w)||2                                                      (8.9)

so that indeed:

    ||z||2 < ||w||2 ≤ 1                                                            (8.10)

which ends the proof. Of course robust stability and nominal performance are implied as well, since:

    {||M||∞ < 1}   ⇒   {||M11||∞ < 1 and ||M22||∞ < 1}                             (8.11)
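For a static 2x2 matrix M with scalar signals the sufficiency argument can be traced numerically: close the ∆-loop explicitly and sweep ∆ over the unit circle. The matrix entries below are made up.

```python
import cmath

# Static scalar-signal sketch of the proof above: if the 2x2 matrix M has
# maximum singular value below 1, then |z| < |w| for every |Δ| <= 1.
M11, M12, M21, M22 = 0.3, 0.4, 0.2, 0.5

def sigma_max(a, b, c, d):
    # largest singular value of [[a, b], [c, d]] via eigenvalues of M M^T
    p, q, r = a*a + b*b, c*c + d*d, a*c + b*d
    return ((p + q + ((p - q)**2 + 4*r*r) ** 0.5) / 2) ** 0.5

assert sigma_max(M11, M12, M21, M22) < 1        # condition (8.6)

worst = 0.0
for k in range(360):                            # Δ sweeps the unit circle
    delta = cmath.exp(1j * cmath.pi * k / 180)
    # closed Δ-loop: h = Δ(1 - M11 Δ)^(-1) M12 w, z = M21 h + M22 w
    z_over_w = M22 + M21 * delta * M12 / (1 - M11 * delta)
    worst = max(worst, abs(z_over_w))
print(worst)   # about 0.614, stays below 1: performance is robust
```

For this particular M the worst case occurs at ∆ = 1; a larger σ̄(M) would let `worst` grow past 1.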
But we also obtain that ||z||2 < 1 for all allowed ||∆||∞ ≤ 1. Thus performance robustness is guaranteed. Ergo, inadvertently, we combined stability robustness and nominal performance in the above structure, and we automatically receive performance robustness almost as a spin off! Unfortunately, the performance robustness, as derived above, only holds for the so called four block problem, where:

    M = [M11  M12]
        [M21  M22]                                                                 (8.12)

8.2 No performance robustness for the mixed sensitivity structure.

If M12 and M21 do not exist, as in the two block problems:

    M = [M11]                    M = [M11  M22]
        [M22]          or                                                          (8.13)

the robust performance property is lost (see for yourself if you try to prove it along the lines of the previous section). Consequently the earlier proposed mixed sensitivity problem:

    M = [ WS (I + P C)^{-1} V   ]
        [ WR C (I + P C)^{-1} V ]                                                  (8.14)

lacks robust performance (see Smit [19]). It can easily be understood that this yields a bad solution if the plant has poles and zeros close to the instability border, the imaginary axis: small deviations of the real plant from its model will soon deteriorate the performance, since the intended pole-zero cancellations will not be perfect and the root locus will show small excursions close to the imaginary axis, causing sharp resonance peaks in the closed loop Bode plots.

8.3 µ-analysis

We could be very satisfied with the general result of section 8.1, but there is an annoying aspect in the sufficiency of the condition (8.6). This condition asks more than is strictly necessary: if we translate it into a stability condition, as we did for the nominal performance, condition (8.6) provides robust stability even if the total block ∆t is a full block, i.e. all its entries may be nonzero. That would mean that h may also depend on z, and that w may also depend on g. We know, however, that ∆t is not a full block but:

    ∆t = [∆    0 ]
         [0    ∆p]                                                                 (8.15)

The off-diagonal zeros indicate that the performance output z has no influence whatsoever onto the model error output h and, reciprocally, that the model error input line g won't affect the exogenous inputs w. As there are no such relations, condition (8.6) is too strong: we required too much, passed over the diagonal structure of the total ∆t block and thereby introduced conservatism.
It turns out, in fact, that the resulting mixed sensitivity controller simply compensates the stable poles and zeros of the plant (model) P; this is the origin of the pole-zero cancellations mentioned in section 8.2.
A way to avoid this conservatism, by incorporating the knowledge on the off-diagonal zeros, is offered by the µ-analysis/synthesis. µ-analysis guarantees the robust stability of the general loop in Fig. 8.2, where the closed loop system M is stable and where the ∆-block has a diagonal structure.

[Figure 8.2: Robust performance closed loop.]

This block ∆ is a generalised version of the block ∆t of the previous section: it contains as diagonal blocks the "fancy feedback performance block" ∆p and various structured model errors ∆i, of which we will give examples later. Formally, the block ∆ in Fig. 8.2 takes the structured form of a block diagonal matrix where each diagonal block ∆i(s) belongs to H∞, has dimension ni × mi and is bounded by ||∆i||∞ ≤ 1. There are p ≥ 1 of these blocks and, of course, these blocks can act on spaces of various dimensions, but their matrix norm (largest singular value) should be less than 1:

    ∀ω ∈ R : ∆(jω) ∈ ∆ = {diag(∆1, ∆2, ..., ∆p) | σ̄(∆i) ≤ 1}                      (8.16)

Here the numbers n := Σ ni and m := Σ mi are the numbers of rows and columns of ∆ respectively, while ∆p is typically the (fancy) performance block. Note that the conditions of the problem of the previous section agree with this definition. The following should continuously be read with the addition "for each frequency ω"; to facilitate the reading of the formulas, the explicit statement ∀ω and the notation of the argument ω are skipped, unless very crucial.

A condition for stability of the configuration of Fig. 8.2 is given by:

    ∀ω, ∀∆(jω) ∈ ∆ : det(I − M(jω)∆(jω)) ≠ 0                                       (8.17)

For a SISO "plant" M a zero determinant implies 1 − M∆ = 0. The condition simply states that we cannot find a |∆| ≤ 1 such that the point 1 is enclosed in the Nyquist plane. You might be used to finding the point −1 here, but notice that in the formal feedback loop of Fig. 8.2 the usual minus sign is not explicitly introduced. Since the phase angle of ∆ is indeterminate, this condition can be understood as a limitation on the "magnitude" of M∆ such that 1 is not encircled, and is thus completely comparable with the small gain condition. Equation 8.17 is just a generalisation for MIMO systems. The "magnitude" of ∆ needs further definition and analysis for the MIMO case, so that "magnitude" is coupled with "direction". So in particular:
Let the spectral radius ρ of a matrix be deﬁned as the maximum of the absolute values of the eigenvalues of that matrix. 8. ∆2 . is given by : ∀ω. the numbers n := Σni and m := Σmi are the numbers of rows and columns of ∆ respectively. Formally. µanalysis guarantees the robust stability of the general loop in Fig.2 takes the structured form of a block diagonal matrix where each diagonal block ∆i (s) belongs to H∞ . To facilitate reading of formula’s the explicit statement ∀ω and the notation of the argument ω is skipped. so that ”magnitude” is coupled with ”direction”.2 where the ∆block has a diagonal structure. There are p ≥ 1 of these blocks and.
    ρ(M∆) := max_i |λi(M∆)|                                                        (8.18)

Suppose that for some ∆ ∈ ∆ we have ρ(M∆) ≥ 1. Also a multiplication of ∆ by a constant 0 ≤ α ≤ 1 leads to a new ∆ ∈ ∆, which is allowed. So there will be some ∆ ∈ ∆ which brings about an eigenvalue λ(M∆) = 1. A simple eigenvalue decomposition M∆ = EΛE^{-1} then shows:

    I − M∆ = EE^{-1} − EΛE^{-1} = E(I − Λ)E^{-1}                                   (8.19)

Because the diagonal matrix I − Λ has a zero on the diagonal, it is singular, so that its determinant is zero. Ergo, the stability condition 8.17 is violated. So an equivalent condition for stability is:

    sup_{∆∈∆} ρ(M∆) < 1                                                            (8.20)

As we will show, for the case that the block ∆ has no special structure this condition takes the already encountered form:

    for ∆ unstructured:   {||∆||∞ ≤ 1} ∩ {||M||∞ < 1}                              (8.21)

proof: Condition (8.21) for the unstructured ∆ can be explained as follows. If M = WΣV* represents the singular value decomposition of M, we can always choose ∆̄ = V W*, because:

    σ̄(∆̄) = σ̄(V W*) = √(λmax(V W* W V*)) = √(λmax(I)) = 1                          (8.22)

Consequently:

    M∆̄ = WΣW* = WΣW^{-1}   ⇒   σ̄(M) = ρ(M∆̄) = sup_∆ ρ(M∆)                        (8.23)

because the singular value decomposition happens here to be the eigenvalue decomposition as well; σ̄(M) indicates the "maximum amplification" of the mapping M. So from equations 8.20 and 8.23 robust stability is a fact if we have for each frequency:

    {∀∆ : σ̄(∆) ≤ 1} ∩ {σ̄(M) < 1}                                                  (8.24)

If we apply this for each ω, we end up in condition (8.21). end proof.

However, if ∆ ∈ ∆ has the special diagonal structure, then we can not (generally) choose ∆ = V W*. In other words, it no longer holds that sup_∆ ρ(M∆) = σ̄(M). Consequently, in such a case the system might not be robustly stable for unstructured ∆ but could still be robustly stable for structured ∆. But in analogy we define:

    µ(M) := sup_{∆∈∆} ρ(M∆)                                                        (8.25)

and the equivalent stability condition for each frequency is:

    {∀∆ ∈ ∆} ∩ {µ(M) < 1}                                                          (8.26)

Note that this is a condition solely on the matrix M.
The phase angle of ∆ can freely be chosen so that we can inﬂuence the phase angle of λmax (M ∆) accordingly.
In analogy we then have a similar condition on M for robust stability in the case of the structured ∆:

    for ∆ structured:   {∆ ∈ ∆} ∩ {||M||µ < 1}                                     (8.27)

when:

    ||M||µ := sup_ω µ(M(jω))                                                       (8.28)

Equation 8.27 suggests that we can find a norm "||·||µ", exclusively on the matrix M, that can function in a condition for stability. Here µ represents a yet unknown measure, which is not explicitly defined. First of all, the condition, and thus this µ-norm, cannot be independent of ∆, because the special structural parameters (i.e. the ni and mi) should be used; consequently this so-called µ-norm is implicitly taken for the special structure of ∆. Secondly, because it incorporates the knowledge about the diagonal structure, it should display less conservatism. Because in general we can no longer have V W* = ∆̄ ∈ ∆, it will also be clear that:

    µ(M) ≤ σ̄(M)                                                                   (8.29)

For obvious reasons, µ is also called the structured singular value. The father of µ is John Doyle and the symbol has generally been accepted in the control community for this measure.

Because all above conditions and definitions may be somewhat confusing by now, some simple examples will be treated to illustrate the effects. We first consider some matrices M and ∆ for a specific frequency ω. We depart from one ∆-matrix given by:

    ∆ = [δ1   0 ]      σ̄(δ1) ≤ 1 (⇔ |δ1| ≤ 1)
        [0    δ2]      σ̄(δ2) ≤ 1 (⇔ |δ2| ≤ 1)                                      (8.30)

Next we study three matrices M in relation to this ∆:

•   M = [1/2   0 ]
        [0    1/2]                                                                 (8.31)

see Fig. 8.3. Because all matrices are diagonal, we are just dealing with two independent loops. (A remark on terminology: although we can indeed connect a certain number to ||M||µ, it is not a norm "pur sang". It has all the properties of a "distance" in the mathematical sense, but it lacks one property necessary to be a norm,
namely: ||M||µ can be zero without M being zero itself, as an example below will show. Therefore "||·||µ" is called a seminorm.) The loop transfer of the first example consists of two independent loops, as Fig. 8.3 reveals and as follows from:

    M∆ = [δ1/2    0  ]
         [0     δ2/2 ]                                                             (8.32)

Obviously µ(M) = ρmax(M∆) = 1/2, which is less than one, so that robust stability is guaranteed. But in this case also σ̄(M) = 1/2, so that there is no difference between the structured and the unstructured case.
[Figure 8.3: Two separate robustly stable loops.]

•   The equivalence still holds if we change M into:

    M = [2   0]
        [0   1]                                                                    (8.33)

Then one learns:

    M∆ = [2δ1    0 ]
         [0     δ2 ]                                                               (8.34)

so that µ(M) = ρmax(M∆) = 2 > 1 and stability is not robust. But also σ̄(M) = 2 would have told us this; see Fig. 8.4.

[Figure 8.4: Two not robustly stable loops.]

•   Things become completely different if we leave the diagonal matrices and study:
    M = [0   10]           M∆ = [0   10δ2]
        [0    0]     ⇒          [0     0 ]                                         (8.35)

Now we deal with an open connection, as Fig. 8.5 shows. It is clear that µ(M) = ρmax(M∆) = 0, although M ≠ 0! Indeed, µ is not a norm. Nevertheless µ = 0 indicates maximal robustness: for whatever σ̄(∆) < 1/µ(M) = ∞, the closed loop is stable. This is correct, because M is certainly stable and the stable transfers are not in a closed loop at all.

[Figure 8.5: Robustly stable open loop.]

On the other hand, the "conservative" ∞-norm warns for non-robustness, as σ̄(M) = 10 > 1. From its perspective, supposing ∆ a full matrix:

    M∆ = [0   10] [δ1    δ12]   =   [10δ21   10δ2]
         [0    0] [δ21   δ2 ]       [0         0 ]                                 (8.36)

so that Fig. 8.6 represents the details in the closed loop.

[Figure 8.6: Detailed closed loop M with unstructured ∆.]

Clearly there is a closed loop now, with loop transfer 10δ21, where in the worst case we can have |δ21| = 1, so that the system is not robustly stable. Correctly, σ̄(M) = 10 tells us that for robust stability we require σ̄(∆) < 1/σ̄(M) = 1/10 and thus |δ21| < 1/10.

Summarising, we obtained merely as a definition that robust stability is realised if:

    {∆ ∈ ∆} ∩ {||M||µ = sup_ω µ(M) < 1}                                            (8.37)
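The three examples can be verified numerically. The helper below approximates the structured µ by a grid search over diagonal ∆ = diag(δ1, δ2) with |δi| = 1 (a sketch, not an exact µ computation) and compares it with the unstructured bound σ̄(M).

```python
import cmath

def sigma_max(M):                       # largest singular value of a real 2x2
    (a, b), (c, d) = M
    p, q, r = a*a + b*b, c*c + d*d, a*c + b*d
    return ((p + q + ((p - q)**2 + 4*r*r) ** 0.5) / 2) ** 0.5

def rho(M):                             # spectral radius of a complex 2x2
    (a, b), (c, d) = M
    t, det = a + d, a*d - b*c
    disc = cmath.sqrt(t*t - 4*det)
    return max(abs((t + disc) / 2), abs((t - disc) / 2))

def mu_grid(M, steps=24):               # structured µ via grid over |δi| = 1
    phases = [cmath.exp(2j * cmath.pi * k / steps) for k in range(steps)]
    return max(rho([[M[0][0] * d1, M[0][1] * d2],
                    [M[1][0] * d1, M[1][1] * d2]])
               for d1 in phases for d2 in phases)

ex1 = [[0.5, 0.0], [0.0, 0.5]]          # example (8.31)
ex2 = [[2.0, 0.0], [0.0, 1.0]]          # example (8.33)
ex3 = [[0.0, 10.0], [0.0, 0.0]]         # example (8.35)
print(mu_grid(ex1), sigma_max(ex1))     # both about 0.5  : robust
print(mu_grid(ex2), sigma_max(ex2))     # both about 2.0  : not robust
print(mu_grid(ex3), sigma_max(ex3))     # 0.0 versus 10.0 : µ = 0 though M != 0
```

The last line shows the whole point of the structured measure: µ sees the open connection and reports maximal robustness, while σ̄ raises a false alarm.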
8.4 Computation of the µ-norm

The actual computation of the µ-norm is quite another thing and appears to be complicated, indirect and at least cumbersome. The crucial observation at the basis of the computation is:

    ρ(M) ≤ µ(M) ≤ σ̄(M)                                                            (8.38)

Without proving these two-sided bounds explicitly, we will exploit them in deriving tighter bounds in the next two subsections. A Bode plot of µ together with its bounds could look like the one displayed in Fig. 8.7.

[Figure 8.7: Bode plot of the structured singular value.]

8.4.1 Maximizing the lower bound

The lower bound can be increased by inserting compensating blocks U and U* in the loop, such that the ∆-block is unchanged while the M part is maximised in ρ. A matrix U is unitary if its conjugate transpose U* equals its inverse, so that U*U = UU* = I; this is just the generalisation of orthogonal matrices to complex matrices. Without affecting the loop properties we can therefore insert an identity I = UU* into the loop. Let the matrix U consist of diagonal blocks Ui corresponding to the blocks ∆i:

    U ∈ U = {diag(U1, U2, ..., Up) | dim(Ui) = dim(∆i ∆i^T), Ui Ui* = I}           (8.40)

The set ∆ is invariant under premultiplication by a unitary matrix U* of corresponding structure, as shown in Fig. 8.8. As U is unitary, neither the stability nor the loop transfer is changed by the insertion, and we can redefine the dashed block U*∆ as the new model error, which also lives in the set ∆:

    ∆' := U*∆ ∈ ∆                                                                  (8.39)

Because µ(M) will stay larger than ρ(MU) even if we change U, we can push this lower bound upwards until it even equals µ(M):

    sup_{U∈U} ρ(MU) = µ(M)                                                         (8.41)

[Figure 8.8: Detailed structure of U related to ∆: the block diagonal U* = diag(U1*, ..., Up*) premultiplies ∆ = diag(∆1, ..., ∆p), while U postmultiplies M to give MU.]

So in principle this could be used to compute µ, but unfortunately the iteration process to arrive at the supremum is a hard one, because the function ρ(MU) is not convex in the entries uij.

8.4.2 Minimising the upper bound

So our hope is fixed on lowering the upper bound. Again we apply the trick of inserting identities, consisting of matrices, into the loop; this time both at the left and the right side of the ∆-block, which we want to keep unchanged, as exemplified in Fig. 8.9. Careful inspection of this figure teaches that, if ∆ is premultiplied by DR and postmultiplied by DL^{-1}, it remains completely unchanged because of the "corresponding identity structure" of DR and DL. This can be formalised as:

    DL ∈ D_L = {diag(d1 I1, d2 I2, ..., dp Ip) | dim(Ii) = dim(∆i ∆i^T), di ∈ R}   (8.42)
    DR ∈ D_R = {diag(d1 I1, d2 I2, ..., dp Ip) | dim(Ii) = dim(∆i^T ∆i), di ∈ R}   (8.43)

If all ∆i are square, the left matrix DL and the right matrix DR coincide. All coefficients di can be multiplied by a free constant without affecting anything in the complete loop; therefore the coefficient d1 is generally chosen to be one, as a "reference". Again the loop transfer and the stability condition are not influenced by DL and DR, and we can redefine the model error:
    ∆' := DR ∆ DL^{-1} ∈ ∆                                                         (8.44)

[Figure 8.9: Detailed structure of D related to ∆: the scaling blocks diag(I1, d2 I, ..., dp I) and their inverses are inserted around ∆ and M, leaving ∆ unchanged and replacing M by DL M DR^{-1}.]

Again µ is not influenced, so that we can vary all di and thereby push the upper bound downwards:

    µ(M) ≤ inf_{di, i=2,...,p} σ̄(DL M DR^{-1}) =: µA(M)                            (8.45)

It turns out that this upper bound µA(M) is in practice very close to µ(M), and it even equals µ(M) if the dimension of ∆ is less than or equal to 3. So µA is generally used as the practical estimate of µ. Fortunately, because the function ||DL M DR^{-1}||∞ appears to be convex in the di, the optimisation with respect to the di is a well conditioned one. However, it should be done for all frequencies ω, which boils down to a finite, representative number of frequencies, and we finally have:

    ||M||µ ≤ inf_{di(ω), i=2,...,p} ||DL M DR^{-1}||∞ = sup_ω µA(M(ω))             (8.46)

In practice one minimises, for a sufficient number of frequencies ωj, the maximum singular value σ̄(DL M DR^{-1}) over all di(ωj). Next, biproper, stable and minimum phase filters d̂i(jω) are fitted to the sequences di(ωj), and the augmented plant in a closed loop with the controller K is properly pre- and postmultiplied by the obtained filter structure. In that way we are left with generalised rational transfers again.
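For the 2x2 examples of the previous section the D-scaling bound (8.45) can be evaluated directly; the following is a small sketch with real matrices and D = DL = DR = diag(1, d).

```python
# Numeric sketch of the D-scaling upper bound: with D = diag(1, d), the
# scaled matrix is D M D^-1, whose largest singular value upper-bounds
# µ(M) for every d > 0.
def sigma_max(M):                       # largest singular value, real 2x2
    (a, b), (c, d) = M
    p, q, r = a*a + b*b, c*c + d*d, a*c + b*d
    return ((p + q + ((p - q)**2 + 4*r*r) ** 0.5) / 2) ** 0.5

def scale(M, d):                        # D M D^-1 for D = diag(1, d)
    (a, b), (c, dd) = M
    return [[a, b / d], [c * d, dd]]

M = [[0.0, 10.0], [0.0, 0.0]]           # the example with µ(M) = 0
for d in (1, 10, 100, 1000):
    print(sigma_max(scale(M, d)))       # 10, 1, 0.1, 0.01 -> infimum 0 = µ

N = [[2.0, 0.0], [0.0, 1.0]]            # diagonal example with µ(N) = 2
print(sigma_max(scale(N, 7.3)))         # stays 2: no conservatism to remove
```

For M the scaling drives the upper bound all the way down to µ(M) = 0; for the diagonal N the scaling commutes with the matrix, and the bound is already tight at σ̄(N) = µ(N) = 2.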
8.5 µ-analysis/synthesis

By equation (8.46) we have a tool to verify the robustness of the total augmented plant in a closed loop with the controller K. The augmented plant includes both the model error block and the artificial, fancy performance block. Given a particular controlled block M, which is still a function (LFT) of the controller K, this operation leads to the following formal, shorthand notation:

    inf_{di(ω), i=2,...,p} ||DL M DR^{-1}||∞ ≈ ||D̂L M(K) D̂R^{-1}||∞ → inf_D ||D M(K) D^{-1}||∞   (8.47)

where the distinction between DL and DR is left out of the notation, as they are linked in the di anyhow, and where their rational filter structure is also not explicitly indicated. As a consequence we can write:

    ||M||µ ≈ inf_D ||D M D^{-1}||∞ ≈ sup_ω µA(M(ω))                                (8.48)

Consequently, if µA remains below 1 for all frequencies, robust stability is guaranteed, and the smaller it is, the more robustly stable the closed loop system is. This finishes the µ-analysis part: given a particular controller K, the µ-analysis tells you about robustness in stability and performance. But this is only the analysis. For the synthesis of the controller we were used to minimising the H∞ norm:

    inf_{K stabilising} ||M(K)||∞                                                  (8.49)

but we have just found that this is conservative and that we should minimise:

    inf_{K stabilising} ||D M(K) D^{-1}||∞                                         (8.50)

However, for each new K the subsequently altered M(K) involves a new minimisation for D, so that we have to solve:

    inf_{K stabilising} inf_D ||D M(K) D^{-1}||∞                                   (8.51)

In practice one tries to solve this by the following iteration procedure, known as the DK-iteration process:

1. Put D = I.
2. K-iteration: compute the optimal K for the last D.
3. D-iteration: compute the optimal D for the last K.
4. Has the criterion ||D M(K) D^{-1}||∞ changed significantly during the last two steps? If yes: go to the K-iteration; if no: stop.

A few extra remarks will be added below, before a simple example illustrates the theory. In practice this iteration process appears to converge usually in not too many steps.
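The alternating character of the DK-iteration can be mimicked on a toy, static criterion. The 2x2 matrix family below is invented purely for illustration, and both "iterations" are plain grid searches; a real K-iteration is an H∞ synthesis, not a grid search.

```python
# Toy illustration of the alternation in (8.51): J(k, d) plays the role of
# ||D M(K) D^-1||_inf for an invented static family M(k) and D = diag(1, d).
def sigma_max(M):
    (a, b), (c, d) = M
    p, q, r = a*a + b*b, c*c + d*d, a*c + b*d
    return ((p + q + ((p - q)**2 + 4*r*r) ** 0.5) / 2) ** 0.5

def J(k, d):
    return sigma_max([[0.5, (1 + k) / d], [k * d, 0.5]])

k, d = 0.0, 1.0                          # step 1: D = I
for _ in range(5):
    # "K-iteration": best k for the current scaling d
    k = min((x / 100 for x in range(-100, 101)), key=lambda x: J(x, d))
    # "D-iteration": best d for the current k
    d = min((10 ** (x / 20) for x in range(-40, 41)), key=lambda x: J(k, x))
print(round(J(k, d), 3), round(k, 2))    # settles at 0.707 -0.5
```

Each coordinate step can only decrease the criterion, which is why the alternation usually settles quickly; nothing, however, certifies a joint optimum in general.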
But there can be exceptions, and in principle there is the possibility that the iteration does not converge at all. Note further that robust stability should be understood here as the generalised stability, which implies that also the performance is robust against the plant perturbations. This formally completes the very brief introduction into µ-analysis/synthesis.
• As a formal definition of the structured singular value µ one often "stumbles", in the literature, across the following "mind boggling" expression:

    µ(M) = [inf_∆ {σ̄(∆) | det(I − M∆) = 0}]^{-1}                                   (8.52)

where one has to keep in mind that the infimum is over ∆, which has indeed the same structure as defined in the set ∆ but is not restricted to σ̄(∆i) < 1. Nevertheless, the definition is equivalent with the one discussed in this section. In the exercises one can verify that the three methods (if dim(∆) ≤ 3) yield the same results.

• It is tacitly supposed that all ∆i live in the unit balls in C^{ni×mi}, while we often know that only real numbers are possible. This happens e.g. when it concerns inaccuracies in "physical" real parameters (see the next section). Consequently, not taking this confinement to real numbers (R) into account will again give rise to conservatism. Implicit incorporation of this knowledge asks for more complicated numerical tools, though.

8.6 A simple example

Consider the following first order process:

    P = K0 / (s + α)                                                               (8.53)

where we have some doubts about the correct values of the two parameters K0 and α. So let δ1 be the uncertainty in the gain K0 and δ2 the model error of the pole value α. Furthermore, we assume a disturbance w at the input of the process; we want to minimise its effect at the output by feedback across a controller C. For simplicity there are no shaping nor weighting filters, and measurement noise and actuator saturation are neglected. The whole setup can then easily be represented by Fig. 8.10, and the corresponding augmented plant by Fig. 8.11.

[Figure 8.10: First order plant with parameter uncertainties.]

The complete input-output transfer of the augmented plant Ge can be represented as:
[Figure 8.11: Augmented plant for parameter uncertainties.]

    [a1]   [0    −1/(s+α)     1/(s+α)    −1/(s+α) ] [b1]
    [a2]   [0    −1/(s+α)     1/(s+α)    −1/(s+α) ] [b2]
    [z ] = [1   −K0/(s+α)    K0/(s+α)   −K0/(s+α) ] [w ]                           (8.54)
    [y ]   [1   −K0/(s+α)    K0/(s+α)   −K0/(s+α) ] [u ]

while the outer loops are defined by:

    [b1]   [δ1   0 ] [a1]        [a1]
    [b2] = [0    δ2] [a2]  =  ∆  [a2]                                              (8.55)

    u = K y = C y                                                                  (8.56)

Incorporating a stabilising controller K, which is taken as a static feedback here, we obtain for the transfer M(K):

    [a1]          1          [ −K     −1     1 ] [b1]
    [a2] = ────────────────  [ −K     −1     1 ] [b2]                              (8.57)
    [z ]    s + α + K0 K     [ s+α   −K0    K0 ] [w ]

The analysis for robustness of the complete matrix M(K) is rather complicated for analytical expressions, so we like to confine ourselves to robust stability in the strict sense for changes in δ1 and δ2, that is:

    M11 = 1/(s + α + K0 K) · [−K   −1]
                             [−K   −1]                                             (8.58)

Since we did not scale, we may define the µ-analysis as:

    ||M11||µ = γ                                                                   (8.59)

    ∆ ∈ ∆ = {diag(δ1, δ2) | σ̄(δi) < 1/γ}                                          (8.60)

For µ(ω) we get (the computation is an exercise):
    µ(ω) = (|K| + 1) / √(ω² + (α + K0 K)²)                                         (8.61)

The supremum over the frequency axis is then obtained for ω = 0, so that:

    ||M11||µ = (|K| + 1)/(α + K0 K) = γµ                                           (8.62)

because K stabilises the nominal plant, so that:

    α + K0 K > 0                                                                   (8.63)

Ergo, µ-analysis guarantees robust stability as long as:

    for i = 1, 2 :   |δi| < (α + K0 K)/(|K| + 1) = 1/γµ                            (8.64)

It is easy to verify (also an exercise) that the unstructured H∞ condition is:

    σ̄(M11(K, ω)) = √(2(K² + 1)) / √(ω² + (α + K0 K)²)                             (8.65)

    ||M11||∞ = √(2(K² + 1)) / (α + K0 K) = γ∞                                      (8.66)

    |δi| < (α + K0 K)/√(2(K² + 1)) = 1/γ∞                                          (8.67)

Indeed, the µ-analysis is less conservative than the H∞ analysis, as it is easy to verify that:

    γ∞ > γµ                                                                        (8.68)

The improvement of µ over H∞ is rather poor in this example, but it can become substantial for other, realistic plants. Finally we would like to compare these results with an even less conservative approach, where we make use of the phase information as well. Explicit implementation of the phase information can only be done in such a simple example, and it will appear to be the great winner. Because we know that δ1 and δ2 are real, the pole of the system with proportional feedback K is given by:

    −(α + K0 K + δ2 + K δ1)                                                        (8.69)

Because K is such that nominal stability (for δi = 0) holds, total stability is guaranteed for:

    K δ1 + δ2 > −(α + K0 K)                                                        (8.70)

This half space in the (δ1, δ2)-space is drawn in Fig. 8.12 for the numerical values α = 1, K0 = 1, K = 2. The two square bounds are the µ-bound and the H∞ bound. There is also a circular bound drawn in Fig. 8.12. This one is obtained by recognising that the signals a1 and a2 are the same in Fig. 8.11; this is also the reason that M11 so evidently had rank 1. By proper combination the robust stability can thus be established by a reduced
M11 that consists of only one row; then µ is no longer different from H∞, both yielding the circular bound with less computations. (This is an exercise.)

[Figure 8.12: Various bounds in parameter space.]

Another appealing result is obtained by letting K approach ∞; then:

    µ-bound:      |δ1| < K0                                                        (8.71)
    ∞-bound:      |δ1| < K0/√2                                                     (8.72)
    true-bound:    δ1 > −K0                                                        (8.73)
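The expression (8.61) for µ(ω) can be checked by brute force for the chapter's numerical values α = 1, K0 = 1, K = 2, using the fact that M11∆ has rank one, so its only nonzero eigenvalue is its trace.

```python
import cmath

alpha, K0, K = 1.0, 1.0, 2.0

def mu_grid(w, steps=240):
    # M11*Δ with Δ = diag(d1, d2) has rank one; its nonzero eigenvalue is
    # the trace (-K*d1 - d2)/(jω + α + K0*K), maximised over |d1| = |d2| = 1.
    den = 1j * w + alpha + K0 * K
    best = 0.0
    for k1 in range(steps):
        d1 = cmath.exp(2j * cmath.pi * k1 / steps)
        for k2 in range(steps):
            d2 = cmath.exp(2j * cmath.pi * k2 / steps)
            best = max(best, abs((-K * d1 - d2) / den))
    return best

def mu_formula(w):                       # equation (8.61)
    return (abs(K) + 1) / (w**2 + (alpha + K0 * K)**2) ** 0.5

for w in (0.0, 1.0, 10.0):
    print(round(mu_grid(w), 3), round(mu_formula(w), 3))   # pairs agree
```

At ω = 0 both give γµ = 3/3 = 1, and at every frequency the grid maximum matches the closed form, since the worst case d1 = d2 = −1 lies on the grid.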
8.7 Exercises

9.1: Show that, in case M12 = 0 or M21 = 0, the robust performance condition is fulfilled if both the robust stability and the performance for the nominal model are guaranteed. Does this case, with off-diagonal terms of M zero, make sense?

9.2: Given the three examples in this chapter:

    M = [1/2   0 ]     M = [2   0]     M = [0   10]
        [0    1/2]         [0   1]         [0    0]                                (8.74)

Compute the µ-norm if ∆ = diag(δ1, δ2), according to the second definition:

    µ = [inf_∆ {σ̄(∆) | det(I − M∆) = 0}]^{-1}                                      (8.75)

9.3: Given:

    M = [−1/2   1/2]       ∆ = [δ1   0 ]
        [−1/2   1/2]           [0    δ2]                                           (8.76)

a) Compute ρ and σ̄ of M. Are these good bounds for µ?
b) Compute µ in three ways.

9.4: Compute explicitly ||M11||∞ and ||M11||µ for the example in this chapter, where:

    M11 = 1/(s + α + K0 K) · [K   −1]
                             [K   −1]                                              (8.77)

What happens if we use the fact that the error block output signals a1 and a2 are the same, so that ∆ can be defined as ∆ = [δ1  δ2]^T? Show that the circular bound of the last figure (Fig. 8.12) results.
Chapter 9

Filter Selection and Limitations

In this chapter we will discuss several aspects of filter selection in practice. First we will show how signal characteristics and model errors can be measured and how these measurements, together with performance aims, can lead to effective filters. Effective in the sense that solutions with ||M||∞ < γ ≈ 1 are feasible without contradicting e.g. "S+T=I" and other fundamental bounds. Apart from the chosen filters there are also characteristics of the process itself, for instance RHP (=Right Half Plane) zeros and/or poles, less inputs than outputs etc., which ultimately bound the performance. We will shortly indicate their effects, such that one is able to detect the reason why γ ≈ 1 could not be obtained and what the best remedy or compromise can be.

9.1 Scaling

The numerical values of the various signals in a controlled system are usually expressed in their physical dimensions like m, V, A, N, ... For instance, a distance will basically be expressed in meters, but in order to avoid very large or very small numbers we can choose among km, µm, mm, Å or lightyears. So, depending on the size of the signals, we also have a rough scaling possibility in the choice of the units. Still this is too rough a scaling to compare signals of different physical dimensions. As a matter of fact, the complete concept of mapping normed input signals onto normed output signals, as discussed in chapter 5, incorporates the basic idea of an appropriate comparison of physically different signals by means of the input characterising filters V∗ and the output weighting filters W∗. The filter choice is actually a scaling problem for each frequency. So let us start in a simplified context and analyse the scaling first for one particular frequency, say ω = 0.

9.1.1 A zero frequency setup

Scaling on physical, numerically comparable units as indicated above is not accurate enough, and a trivial solution is simply the familiar technique of eliminating physical dimensions by dividing by the maximum amplitude. So each signal s can then be expressed in dimensionless units as s̃ according to:

    s̃ = (1/smax) s = Ws s,    s = smax s̃ = Vs s̃,    smax = sup(s)                 (9.1)

where the supremum should be read as the extreme value the signal can take given the corresponding (expected) range, such as the actuator and output ranges. For a typical SISO-plant configuration such scaling leads to the block scheme of Fig. 9.1.
zmax  C umax P˜ 6 − y η˜ η. so M ∞ = maxi (σi (M )) = σ ¯ (M ) . if : r˜ w 2 = d˜ 2 ≤ 1 (9. This eﬀect can . In zero frequency setup it would be the Euclidean norm for inputs and outputs. ? . The induced norm is trivially the usual matrix norm. R = C/(1 + P C) and e = r − y.112 CHAPTER 9. In the one frequency setup the majority of ﬁlters can be directly obtained from the scaling: 1 1 1 r˜ z= u ˜ = umax Rrmax umax Rdmax umax Rηmax d˜ = Mw (9. z˜ z ? . FILTER SELECTION AND LIMITATIONS. Note that because of the scaling we immediately have for all signals. T = P C/(1 + P C). inputs or outputs: s˜ 2 = ˜ s ≤ 1 (9. parsimony and model error.3) e˜ We SVr We SVd We T Vη η˜ where as usual S = 1/(1 + P C).2) For instance a straightforward augmented plant could lead to: r˜ u ˜ Wu RVr Wu RVd Wu RVη d˜ = Mw z= = (9.1. i. .4) e˜ We Srmax We Sdmax We T ηmax η˜ 9. 1 .g.rmax . which is not suﬃcient to avoid actuator saturation.5) then this tells us e. An H∞ analogon for such a zero frequency setup would be as follows. d˜ .6) η˜ √ By the applied √ scaling we can only guarantee that w 2 < 3 so that disappointingly follows u < 3umax .1: Range scaled controlled system.  dmax d P r˜ r + u u ˜. .e. In H∞ we measure the inputs and outputs as w 2 and z 2 so that the induced norm is M ∞ . w E = w 2 = Σi wi2 and likewise for z. so certainly u ˜ < 1 or u < umax .2 Actuator saturation. Suppose that the problem is well deﬁned and we would be able to ﬁnd a controller C R such that M ∞ = σ ¯ (M) < γ ≈ 1 (9. that z 2 < 1.ηmax ? Figure 9.
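The zero frequency norms above can be verified numerically. The following sketch (plain Python; the matrix entries are arbitrary illustrative numbers, not taken from the text) computes σ̄(M) for a static 2×3 map via the 2×2 Gram matrix M·Mᵀ and checks that the gain ‖Mw‖2/‖w‖2 never exceeds it, so that ‖M‖∞ < 1 indeed guarantees ‖z‖2 < 1 for all ‖w‖2 ≤ 1.

```python
import math

# Static "augmented plant" M mapping w = (r~, d~, eta~) to z = (u~, e~).
# The entries are arbitrary illustrative numbers.
M = [[0.3, 0.2, 0.1],
     [0.5, 0.4, 0.2]]

def sigma_max(A):
    """Largest singular value of a 2-row matrix via the 2x2 Gram matrix A A^T."""
    a = sum(x * x for x in A[0])
    c = sum(x * x for x in A[1])
    b = sum(x * y for x, y in zip(A[0], A[1]))
    lam = 0.5 * (a + c + math.sqrt((a - c) ** 2 + 4.0 * b * b))
    return math.sqrt(lam)

def norm2(v):
    return math.sqrt(sum(x * x for x in v))

def apply(A, w):
    return [sum(a * x for a, x in zip(row, w)) for row in A]

smax = sigma_max(M)
# Sample many input directions: the gain ||Mw||2 / ||w||2 never exceeds
# sigma_max, which is exactly the induced-norm statement of (9.5)-(9.6).
worst = max(norm2(apply(M, w)) / norm2(w)
            for w in ([math.cos(0.01 * k), math.sin(0.01 * k),
                       math.cos(0.03 * k + 1.0)] for k in range(2000)))
assert worst <= smax + 1e-9
```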
9.1.2 Actuator saturation.

By the applied scaling we can only guarantee that ‖w‖2 ≤ √3, so that disappointingly only |u| < √3·umax follows, which is not sufficient to avoid actuator saturation. This effect can be weakened by choosing Wu = √3/umax, or we can try to eliminate it by diminishing the number of inputs. In Fig. 9.2 we show how, by rearrangement, reference signals, disturbances and model perturbations can be combined in one augmented plant input signal. This can be accomplished because both tracking and disturbance reduction require a small sensitivity S.

Figure 9.2: Combining sensitivity inputs.

Here we assume that the general, frequency dependent, additive model error can be expressed as:

    ‖∆P‖∞ < δ      (9.7)

The measuring of the model perturbation will be discussed later. In the left scheme an extra addition to the output of the plant, representing the model perturbation, is realised by |p| ≤ pmax. The transfer from p̃ to ũ in Fig. 9.2 is given by (1/umax) R pmax, so that if:

    ‖(1/umax) R pmax‖∞ < γ ≈ 1      (9.8)

stability is robust for the normalised perturbation:

    ‖∆‖∞ < 1      (9.9)

Combination yields that:

    ‖∆P‖∞ = (pmax/umax) ‖∆‖∞ < pmax/umax      (9.10)

In the one frequency concept of our example, a sufficient condition for robust stability is thus:

    δ ≤ pmax/umax      (9.11)

Consequently, since weights are naturally chosen as positive numbers, we take:

    pmax = δ umax      (9.12)

In combining the output additions we get n = −p − d + r and:

    nmax = pmax + dmax + rmax      (9.13)

Note that the sign of p and d, actually being a phase angle, does not influence the weighting. For robust stability alone it is now sufficient that:

    nmax ≥ δ umax      (9.14)

whatever the derivation of nmax might be. Actual practice will then be to combine Vd and Vr into Vn, heuristically define a Wu and verify whether the robust stability condition:

    ∀ω : δ(ω) ≤ |Vn Wu|      (9.15)

is fulfilled. If not, either Vn or Wu should be corrected. By reducing the number of inputs we have taken care that the maximum value was retained: we have |ñ| ≤ 1, contrary to the original three inputs |p̃| ≤ 1, |d̃| ≤ 1 and |r̃| ≤ 1. The 2-norm applied to w in the two blockschemes of Fig. 9.2 would indeed yield the factor √3, as ‖(p̃ ; d̃ ; r̃)‖2 ≤ √3 contrary to ‖ñ‖2 ≤ 1. Also convince yourself of the substantial difference between diminishing the number of inputs and increasing Wu by a factor √3, implying a reduction by a factor 3 instead of √3. Note that for the proper quantisation of the actuator input signal we had to actually add the reference signal, the disturbance and the model error output: if several 2-normed signals are placed in a vector, the total 2-norm takes the average of the energy or power. Consequently we are confronted again with the fact that a real prevention of actuator saturation can never be guaranteed in H∞ control; not H∞, but l1 control is suited for that.
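A small numeric illustration of the gain obtained by combining the inputs (plain Python; the ranges and the error bound δ are arbitrary assumed values): stacking three unit-bounded scaled inputs allows ‖w‖2 = √3, whereas the combined input ñ stays within 1, and the robust stability budget nmax ≥ δ·umax can be checked directly.

```python
import math

pmax, dmax, rmax = 1.0, 1.0, 1.0      # illustrative signal ranges
p, d, r = -pmax, -dmax, rmax          # every component at its extreme value

# Stacked scaled inputs: the 2-norm of w = (p~, d~, r~) may reach sqrt(3) ...
w = [p / pmax, d / dmax, r / rmax]
assert abs(math.sqrt(sum(x * x for x in w)) - math.sqrt(3.0)) < 1e-12

# ... while the combined input n = -p - d + r with nmax = pmax + dmax + rmax
# keeps |n~| <= 1, so no factor sqrt(3) is given away.
nmax = pmax + dmax + rmax
n = -p - d + r
assert abs(n / nmax) <= 1.0

# Robust stability budget: nmax has to cover delta * umax
# (delta and umax are assumed example values).
delta, umax = 0.4, 5.0
assert nmax >= delta * umax
```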
9.1.3 Bounds for tracking and disturbance reduction.

Till so far we have discussed all weights except for the error weight We. Certainly we would like to choose We as big and broad as possible in order to keep the error e as small as possible. If we forget about the measurement noise η for the moment and apply the simplified right scheme of Fig. 9.2, we obtain a simple mixed sensitivity problem:

    z = (ũ ; ẽ) = [(1/umax) (C/(1 + PC)) nmax ; We (1/(1 + PC)) nmax] (ñ) = (m11 ; m21)(ñ)      (9.16)

Because σ̄(M) = √(m11² + m21²), we can easily compute the optimal controller, that minimises σ̄(M), as:

    C = We² P u²max      (9.17)

In order to prevent actuator saturation we can put σ̄(M) = 1 for the computed controller C, yielding:

    We = 1/√(n²max − P² u²max)      (9.18)

A special case occurs for |P umax| = |nmax|, which simply states that the range of the actuator is exactly sufficient to cause the output z of plant P to compensate for the "disturbance" n. So if actuator range and plant gain are sufficiently large, we can choose We = ∞ and thus C = ∞, so that M becomes:

    M = (nmax/(P umax) ; 0)      (9.19)

and no error results, while σ̄(M) = |m11| = 1. If |P umax| > |nmax|, there is plenty of choice for the controller and the H∞ criterion is minimised by decreasing |m11| more at the cost of a small increase of |m21|. Note that this control design is different from minimising |m21| under the constraint |m11| ≤ 1. For this simple example the latter problem can be solved as well, but the reader is invited to do this and, by doing so, to obtain an impression of the tremendous task for a realistically sized problem.

If |P umax| < |nmax|, it is principally impossible to compensate all "possible disturbance" n. If one increases the weight We, one is confronted with a similar increase of γ and no solution ‖M‖∞ ≈ 1 can be obtained. This is reflected in the maximal weight We we can choose that still allows for σ̄(M) ≈ 1. Some algebra shows that:

    1/(We nmax) = √(1 − P² u²max/n²max)      (9.20)

If e.g. |P umax| = ½|nmax|, only half the n can be compensated and we obtain S = 3/4, which is very poor. This represents the impossibility to track better than 50% or reduce the disturbance by more than 50%. One can test this beforehand by analysing the scaled plant as indicated in Fig. 9.1. The plant P has been normalised internally according to:

    P = zmax P̃ (1/umax)      (9.21)

so that P̃ is the transfer from ũ, i.e. the maximally excited actuator normalised on 1, to the maximal, undisturbed, scaled output z̃. Suppose now that |P̃| < 1. The maximal input umax can then only yield:

    |z| = zmax |P̃ umax| (1/umax) = zmax |P̃| < zmax      (9.22)

Consequently, as we obviously have rmax ≈ zmax for a tracking system, the tracking error e = r − y can never become small. It tells you that not all outputs in the intended output range can be obtained with the actual actuator. For SISO plants this effect is quite obvious, but for MIMO systems the same internal scaling of plant P can be very revealing in detecting this kind of internal insufficiency, as we will show later.

On the other hand, if the gain is sufficient, one should not think that the way is free to zero sensitivity S. In practice we always have to deal with the sensor and the inevitable sensor noise η. If we indeed have S = 0, inevitably T = 1 and e = Tη = η. So in its full extent the measurement noise is present in the error, which simply reflects the trivial fact that you can never track better than the accuracy of the sensor. Only for ω = 0 are we used to claim zero sensitivity in case of integrator(s) in the loop. So sensor noise bounds both the tracking error and the disturbance rejection and should be brought in properly, by the weight ηmax in our example, in order to minimise its effect in balance with the other bounds and claims. In the next section we will see that in the frequency dependent case, where the full frequency dependence plays a role, plenty of such limiting effects occur.
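The formulas of equations (9.17) and (9.18) can be checked numerically for static values (a sketch; P, umax and nmax below are made-up numbers with |P·umax| < nmax):

```python
import math

P, umax, nmax = 2.0, 1.0, 3.0         # made-up static values, |P*umax| < nmax

def sigma(C, We):
    """sigma_bar of the 2x1 column M = (m11; m21) of equation (9.16)."""
    S = 1.0 / (1.0 + P * C)
    m11 = C * S * nmax / umax         # scaled actuator channel
    m21 = We * S * nmax               # weighted error channel
    return math.hypot(m11, m21)

We = 1.0 / math.sqrt(nmax ** 2 - P ** 2 * umax ** 2)   # equation (9.18)
C_opt = We ** 2 * P * umax ** 2                        # equation (9.17)

# The formulas reproduce sigma_bar(M) = 1 ...
assert abs(sigma(C_opt, We) - 1.0) < 1e-9
# ... and C_opt is indeed the minimiser of sigma_bar over a grid of gains.
grid = [0.01 * k for k in range(1, 500)]
C_best = min(grid, key=lambda C: sigma(C, We))
assert abs(C_best - C_opt) < 0.02
```

With these numbers C_opt = 0.4, and sliding C away from it in either direction increases σ̄(M), in line with the claim that (9.17) minimises σ̄(M) for a given We.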
9.2 Frequency dependent weights.

In the previous section the single frequency case served as a very simple concept to illustrate some fundamental limitations. These limitations certainly exist in the full, frequency dependent situation, where we have to consider a similar kind of scaling, but now actually for each frequency. All effects take over, as we have learned from the ω = 0 scaling.

9.2.1 Weight selection by scaling per frequency.

Usually the H∞ norm is presented as the induced norm of the mapping from the L2 space to the L2 space. In engineering terms we then talk about the mapping of the (square root of the) energy of the inputs, ‖w‖2, onto the (square root of the) energy of the outputs, ‖z‖2. Mathematically this is fine, but in practice we seldom deal with finite energy signals. Fortunately, the H∞ norm is also the induced norm for mapping powers onto powers, or even expected powers onto expected powers, as explained in chapter 5. If one considers a signal to be deterministic, the power can simply be obtained by describing that signal by a Fourier series, where the Fourier coefficients directly represent the maximal amplitude per frequency. This maximum can thus be used as a scaling for each frequency, analogous to the one frequency example of the previous section. On the other hand, if one considers the signal to be stochastic (stationary, one sample from an ergodic ensemble), one can determine the power density Φss and use the square root of it as the scaling: smax(ω) = √(Φss(jω)). One can even combine the two approaches, for instance stochastic disturbances and deterministic reference signals. In that case one should bear in mind that the dimensions are fundamentally different and a proper constant should be brought in for appropriate weighting. In chapter 5 it has been illustrated how the constant relating the deterministic power contents to a power density value can be obtained; this was done by explicitly computing the norms in both concepts for an example signal set that can serve for both interpretations. If one sticks to one kind of approach, any scaling constant c is irrelevant, as it disappears by the fundamental division in the definition:

    ‖M‖∞ = sup_w (‖Mw‖power / ‖w‖power) = sup_w (‖cMw‖power / ‖cw‖power)      (9.23)

unless input- and output-filters are defined with different constants. Furthermore, the maxima (=range) of the inputs scale, and thus define, the input characterising filters directly, while the output filters are determined by the inverse, so that we obtain e.g. for input v to output x:

    Wx Mxv Vv = (1/xmax) Mxv vmax = (1/(c xmax)) Mxv c vmax      (9.24)

So again the constant is irrelevant. From here on we suppose that one has chosen the one or the other convention, and that we can continue with a scaling per frequency similar to the scaling in the previous section. So smax(ω) represents the square root of any power definition for signal s(jω). Straightforward implementation of the scaling would then lead to:

    s̃(ω) = (1/smax(ω)) s(ω) → Ws(jω) s(ω),   s(ω) = smax(ω) s̃(ω) → Vs(jω) s̃(ω)      (9.25)

Arrows have been used in the above equations because the immediate choice of e.g. Vs(jω) = smax(ω) = √(Φss(jω)) would unfortunately rarely yield a rational transfer function Vs(jω), and all available techniques and algorithms in H∞ design are only applicable for rational weights. Remember that the phase of the filters, and thus of smax(ω), is irrelevant.
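The irrelevance of the scaling constant c in the induced norm is easily confirmed numerically (plain Python; arbitrary example map and signal):

```python
import math

def gain(M, w):
    # Gain ||Mw||2 / ||w||2 of a static 2x2 map for one input direction.
    z = [M[0][0] * w[0] + M[0][1] * w[1],
         M[1][0] * w[0] + M[1][1] * w[1]]
    return math.hypot(*z) / math.hypot(*w)

M = [[1.0, 0.5], [0.2, 2.0]]          # arbitrary example map
w = [0.3, -0.7]                       # arbitrary example signal
c = 42.0                              # any common scaling constant
# The constant cancels in the quotient, as in (9.23).
assert abs(gain(M, w) - gain(M, [c * x for x in w])) < 1e-12
```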
Therefore one has to come up with not too complicated rational weights Vs or Ws satisfying e.g.:

    |Vs(jω)| ≥ smax(ω) = √(Φss(jω)),   |Ws(jω)| ≥ 1/smax(ω) = 1/√(Φss(jω))      (9.26)

The routine "magshape" in the Matlab LMI toolbox can help you with this task. There you can define a number of points in the Bode amplitude plot, and the routine provides you with a low order rational weight function passing through these points. Alternatively, when you have a series of measured or computed weights in frequency domain, you can easily come up with a rational weight sufficiently close (from above) to them. Whether you use these routines or you do it by hand, you have to watch the following side conditions:

1. The weighting filter should be stable and minimum phase. Unstable poles would disrupt the condition of stability for the total design. Nonminimum phase zeros would prohibit the implicit inversion of the filters in the controller design.

2. Poles or zeros on the imaginary axis cause numerical problems for virtually the same reason and should thus be avoided. If one wants an integral weighting, i.e. a pole in the origin in order to obtain an infinite weight at frequency zero and to force the design to place an integrator in the controller, one should approximate this in the filter. In practice this means that one positions a pole in the weight very close to the origin in the LHP (Left Half Plane). The distance to the origin should be very small compared to the distances of the other poles and zeros in plant and filters. Alternatively, one could properly include an integrator in the plant and separate it out to the controller later on, when the design is finished. In that case be thoughtful about how the integrator is included in the plant (not just concatenation!).

3. The dynamics of the generalised plant should not exceed about 5 decades on the frequency scale, for numerical reasons dependent on the length of the mantissa in your computer. In single precision it thus means that the smallest radius (=distance to the origin) divided by the largest radius of all poles and zeros of plant and filters should not be less than 10⁻⁵. Double precision can increase the number of decades.

4. The filters are preferably of low order. Not only will the controller then be simpler, as it will have the total order of the augmented plant; also, filters that are very steep at the border of the aimed tracking band will cause problems for the robustness, as small deviations will easily let the fast loops in the Nyquist plot trespass the hazardous point −1.

5. The filters should preferably be biproper. Any pole zero excess would in fact cause zeros at infinity, which make the filter uninvertible, while inversion happens implicitly in the controller design. Be sure that there are no RHP (=Right Half Plane) poles or zeros, also for the augmented plant.
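The side conditions above lend themselves to a mechanical pre-flight check. The helper below is hypothetical (not a routine from any toolbox) and inspects a candidate weight through its pole and zero locations only:

```python
# Hypothetical pre-flight check of the side conditions for a candidate
# rational weight, given its poles and zeros as complex s-plane positions.
def check_weight(poles, zeros, min_ratio=1e-5):
    all_roots = list(poles) + list(zeros)
    ok_stable = all(p.real < 0 for p in poles)           # condition 1
    ok_minphase = all(z.real < 0 for z in zeros)         # condition 1
    ok_axis = all(abs(r.real) > 0 for r in all_roots)    # condition 2
    radii = [abs(r) for r in all_roots]
    ok_range = min(radii) / max(radii) >= min_ratio      # condition 3
    ok_biproper = len(poles) == len(zeros)               # condition 5
    return ok_stable and ok_minphase and ok_axis and ok_range and ok_biproper

# Near-integrator weight (s + 1)/(s + 1e-4): pole close to, but not on,
# the origin, as recommended above.
assert check_weight(poles=[-1e-4 + 0j], zeros=[-1.0 + 0j])
# A pole exactly on the imaginary axis is rejected.
assert not check_weight(poles=[0j], zeros=[-1.0 + 0j])
```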
9.2.2 Actuator saturation: Wu

The characterising or weighting filters of most signals can be obtained sufficiently well as described in the previous subsection. The famous exception is the filter Wu, where we would like to bound the actuator signal (and sometimes its derivative) in time. A characterisation per frequency is well in line with the energy and power norms (‖·‖2), which relate exactly according to the theorem of Parseval. Time domain bounds, in fact L∞ norms, are in contrast incompatible with frequency domain norms. Let us illustrate this, starting with the zero frequency setup of the first section. As we were only dealing with frequency zero, a bounded power would uniquely limit the maximum value in time, as the signal is simply a constant value:

    ‖s‖L∞ = |s| = √(s²) = √(s_power)      (9.27)

If the power can be distributed over more frequencies, a maximum peak in time can be created by proper phase alignment of the various components, as represented in Fig. 9.3. Suppose we have n sine waves:

    s(t) = a1 sin(ω1 t + φ1) + a2 sin(ω2 t + φ2) + ... + an sin(ωn t + φn)      (9.28)

with total power equal to one. If we distribute the power equally over all sine waves we get:

    Σ_{i=1}^{n} a_i² = 1,   ∀i : a_i = a   ⇒   a = √(1/n)      (9.29)

and consequently, with a proper choice of the phases φi, the peak in time domain equals:

    Σ_{i=1}^{n} a_i = n √(1/n) = √n      (9.30)

Certainly, for the continuous case we have infinitely many frequencies, so that n → ∞ and:

    lim_{n→∞} √n = ∞      (9.31)

So the bare fact that we have infinitely many frequencies available (continuous spectrum) creates the possibility of infinitely large peaks in time domain. Consequently, we fundamentally have no mathematical basis to choose the proper weight Wu and we have to rely on heuristics. Fortunately, this very worst case will usually not happen in practice, and we can put bounds in frequency domain that will generally be sufficient for the practical kind of signals, which virtually excludes the very exceptional occurrence of the above phase aligned sine waves.

Figure 9.3: Maximum sum of 3 properly phase aligned sine waves.
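The phase alignment argument can be reproduced directly (plain Python; the frequencies are arbitrary distinct values): n equal power cosines with amplitude √(1/n) have unit power budget Σ aᵢ² = 1, yet their aligned peak equals √n.

```python
import math

n = 100
a = 1.0 / math.sqrt(n)                      # equal amplitudes, sum a_i^2 = 1
freqs = [0.1 * (k + 1) for k in range(n)]   # arbitrary distinct frequencies

def s(t):                                   # all phases aligned at t = 0
    return sum(a * math.cos(w * t) for w in freqs)

assert abs(sum(a * a for _ in freqs) - 1.0) < 1e-9   # unit "power" budget
assert abs(s(0.0) - math.sqrt(n)) < 1e-9             # aligned peak = sqrt(n) = 10
assert abs(s(37.0)) < 0.3 * math.sqrt(n)             # elsewhere the sum stays small
```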
Usually an actuator will be able to follow sine waves over a certain band. Beyond this band the steep increases and decreases of the signals cannot be tracked any more, and in particular the higher frequencies cause the high peaks. Therefore, in most cases Wu has to have the character of a high pass filter, with a level equal to several times the maximum amplitude of a sine wave the actuator can track. This Wu certainly forms the weakest aspect in filter design. The design, based upon such a filter, has to be tested next in a simulation with realistic reference signals and disturbances. If the actuator happens to saturate, it will be clear that Wu should be increased in amplitude and/or bandwidth. If the actuator is excited far from saturation, the weight Wu can be softened.

9.2.3 Model errors and parsimony.

Like actuator saturation, model errors put strict bounds, but they can fortunately be defined and measured directly in frequency domain. As an example we treat the additive model error according to Fig. 9.4.

Figure 9.4: Additive model error from p/u.

For each frequency we would like to obtain the difference Pt(jω) − P(jω); since P is a rational transfer, this amounts to determining Pt. In particular we are interested in the maximum deviation δ(ω) ∈ R such that:

    ∀ω : |Pt(jω) − P(jω)| = |∆P(jω)| < δ(ω)      (9.32)

We can measure p = zt − z = (Pt − P)u. Given the known inputs, we would like to have the transfer Pt in terms of gain and phase as a function of the frequency ω. This can be measured by offering sine waves of increasing frequency to the real plant and measuring amplitude and phase of the output over long periods, to monitor all changes that will usually occur. Alternatively, one could use broadbanded input noise and compute the various transfer samples by cross-correlation techniques. Quite often these cumbersome measurements, which are contaminated by inevitable disturbances and measurement noise and are very hard to obtain in case of unstable plants, can be circumvented by proper computations. If the structure of the plant transfer is very well known but various parameter values are unclear, one can simply evaluate the transfers for sets of expected parameters and treat these as possible model-deviating transfers. Next, the various deviating transfers for a typical set of frequencies, obtained either by measurements or by computations, should be evaluated in a polar (Nyquist) plot, contrary to what is often shown by means of a Bode plot. This is illustrated in Fig. 9.5.
As an example, the model P is given by:

    P = 1/(s + 1)      (9.33)

while the deviating transfers Pt are taken as:

    Pt = (1 ± 0.2)/(s + 1 ± 0.2)      (9.34)

i.e. gain and pole vary by 20% around their model values.

Figure 9.5: Additive model errors in Bode and Nyquist plots.

In the Nyquist plot we have indicated, for ω = 1, the model transfer by 'M' and the several deviating transfers by 'X'. Given the Bode plot, one is tempted to take the width of the band in the gain plot as a measure for the additive model error for each frequency. This would lead to:

    max_{Pt} | |Pt| − |P| |      (9.35)

which is certainly wrong. The reader is invited to analyse how the wrong measure of equation 9.35 can be distinguished in the Nyquist plot. There we really obtain the vectorial differences for each ω:

    δ(ω) = max_{Pt} |Pt(jω) − P(jω)|      (9.36)

The maximum model error is clearly given by the radius of the smallest circle around 'M' that encompasses all plants 'X'. Finally, we have the following bound for each frequency:

    |∆P(jω)| < δ(ω)      (9.37)

The signal p in Fig. 9.4 is that component in the disturbance free output of the true process Pt, due to input u, that is not accounted for by the model output Pu. This component can be represented by an extra disturbance at the output of the generalised plant, like in Fig. 9.1, but now with a weighting filter: p = Vp(jω) p̃. If the goal ‖M‖∞ < γ ≈ 1 is met, we have:

    ‖Wu R Vp‖∞ < 1  ⇔  ∀ω : |Wu R Vp| < 1      (9.38)
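The difference between the wrong Bode band measure (9.35) and the correct vectorial measure (9.36) shows up immediately in a numeric sketch of this example (±20% gain and pole variations, as assumed above):

```python
P = lambda s: 1.0 / (s + 1.0)
# Perturbed plants: +/-20% variations of gain and pole, as in the example.
perturbed = [lambda s, g=g, p=p: g / (s + p)
             for g in (0.8, 1.2) for p in (0.8, 1.2)]

def measures(w):
    s = 1j * w
    vec = max(abs(Pt(s) - P(s)) for Pt in perturbed)               # (9.36)
    band = max(abs(abs(Pt(s)) - abs(P(s))) for Pt in perturbed)    # (9.35)
    return vec, band

# The triangle inequality guarantees vec >= band at every frequency ...
for w in (0.1, 1.0, 10.0):
    vec, band = measures(w)
    assert vec >= band - 1e-12
# ... and around w = 1 the Bode band clearly underestimates delta(w).
vec1, band1 = measures(1.0)
assert vec1 > band1 + 1e-3
```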
For robust stability, based on the small gain theorem, we have as condition:

    ‖R ∆P‖∞ < 1  ⇔  ∀ω : |R ∆P| < 1  ⇔      (9.39)

    ∀ω : |Wu R Vp| (|∆P| / |Wu Vp|) < 1      (9.40)

Given the bounded transfer of equation 9.38, a sufficient condition is thus:

    ∀ω : |∆P| < |Wu Vp|      (9.41)

and this can be guaranteed if the weights are sufficiently large, such that the bounded model perturbations of equation 9.37 can be brought in as:

    ∀ω : |∆P| < δ(ω) < |Wu Vp|      (9.42)

Of course, for stability also the other input weight filters Vd, Vr or even Vη instead of Vp could have been used. Consequently, we do not have to introduce an extra filter Vp for stability; for robust stability it is sufficient to have:

    ∀ω : δ(ω) < sup{ |Wu Vd|, |Wu Vr|, |Wu Vη| }      (9.43)

The extra exogenous input p can still be preferred for the proper quantisation of the control signal u. The best is then to combine the exogenous inputs d, r and p into a signal n, like we did in section 9.1, but now with the appropriate combination for each frequency. This boils down to finding a rational filter transfer Vn(jω) such that:

    ∀ω : |Vn(jω)| ≥ |Vd(jω)| + |Vr(jω)| + |Vp(jω)|      (9.44)

Again, the routine "magshape" in the LMI toolbox can help here. Pragmatically, one usually combines only Vd and Vr into Vn and checks whether:

    ∀ω : δ(ω) < |Wu(jω) Vn(jω)|      (9.45)

is satisfied. If not, the weighting filter Wu is adapted until the condition is satisfied. So the model error can be accounted for either by including Vp in Vn, or by increasing Wu properly.
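The pragmatic check of δ(ω) against |Wu(jω)Vn(jω)| on a frequency grid, and the corresponding correction of Wu, can be sketched as follows (the weights and the bound δ(ω) are hypothetical stand-ins, not taken from the text):

```python
# Hypothetical first-order weights and error bound (illustrative only).
Wu = lambda s: 0.05 * (s + 1.0) / (0.01 * s + 1.0)   # high-pass control weight
Vn = lambda s: 2.0 / (s + 1.0)                       # low-pass disturbance weight
delta = lambda w: 0.02 + 0.01 * w                    # assumed model error bound

grid = [10.0 ** (0.1 * k - 2.0) for k in range(50)]  # 0.01 ... ~80 rad/s
ratio = max(delta(w) / abs(Wu(1j * w) * Vn(1j * w)) for w in grid)
# If the condition is violated somewhere, enlarge Wu with a 10% margin.
scale = 1.1 * max(ratio, 1.0)
assert all(delta(w) < scale * abs(Wu(1j * w) * Vn(1j * w)) for w in grid)
```

For these example numbers the original Wu is too small at high frequencies (ratio > 1), and a simple gain correction restores the condition on the whole grid.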
Figure 9. It is clear that not both S < 1/2 and T  < 1/2 can be obtained.53) W1 V1  W2 V2  This is still too restrictive.6: Typical mixed sensitivity weights. Because the T is bounded accordingly the freedom in the performance represented by the sensitivity S is bounded on the basis of the fundamental constraint S + T = I. Vη and the combination of Wu . Consequently the intersection point of the inverse weights should be greater than 1: 1 1 ∃ω : = > 1/2 (9. The constraints on T can be represented as W2 T V2 ∞ < 1 where W2 and V2 represent the various weight combinations of inequalities 9. FILTER SELECTION AND LIMITATIONS.52) A typical weighting situation for the mixed sensitivity problem is displayed in Fig.50) Now we can repeat the comments made in section 7.49. Mentioned ﬁlters all bound the complementarity sensitivity T as a contraint: { Wu RVn ∞ < 1} ⇔ {∀ω : Wu RVn  = Wu P −1 T Vn  < 1} (9. we will show how the inﬂuences of actuator. Vn and Vη . The H∞ design problem requires: W1 SV1 ∞ < 1 ⇔ ∀ω : S(jω) < W1 (jω)−1 V1 (jω)−1  (9. 9.5. To allow for suﬃcient freedom in phase it is usually required to take at least: . sensor and model accuracy put restrictions on the performance via respectively Wu . functions as part of the weights on T .6. because S + T = 1 for the SISOcase.49) In above inequalities the plant transfer P . Renaming the performance aim as: def We SVn ∞ = W1 SV1 ∞ < 1 (9. that is not optional contrary to the controller C.479.122 CHAPTER 9.48) { We T Vη ∞ < 1} ⇔ {∀ω : We T Vη  < 1} (9. because it is not to be expected that equal phase 0 can be accomplished by any controller at the intersection point.51) −1 −1 W2 T V2 ∞ < 1 ⇔ ∀ω : T (jω) < W2 (jω) V2 (jω)  (9.47) −1 { Wu RVη ∞ < 1} ⇔ {∀ω : Wu RVη  = Wu P T Vη  < 1} (9.
To allow for sufficient freedom in phase it is usually required to take at least:

    ∃ω : 1/|W1 V1| = 1/|W2 V2| > 1  ⇔      (9.54)
    ∃ω : |W1 V1| = |W2 V2| < 1      (9.55)

It can easily be understood that the S and T vectors for frequencies in the neighbourhood of the intersection point can then only be taken in the intersection area of the two circles in Fig. 9.7.

Figure 9.7: Possibilities for |S| < 1, |T| < 1 and S + T = 1.

Further away from the intersection point the condition S + T = 1 requires that for small S the T should effectively be greater than 1, and vice versa. If we want:

    { |S| < 1/|W1 V1| } ∩ { |T| < 1/|W2 V2| }      (9.56)

then necessarily:

    1 − S = T  ⇒  1 − 1/|W1 V1| < |T| < 1/|W2 V2|      (9.57)

which essentially tells us that for an aimed small S, enforced by W1 V1, the weight |W2 V2| should be chosen less than 1, and vice versa. Consequently, heavily weighted bands (> 0 dB) for S and T should always exclude each other. Generally, it is crucial that the point of intersection of the curves |W1(jω)V1(jω)| and |W2(jω)V2(jω)| lies below the 0 dB level, otherwise there would be a conflict with S + T = 1 and there would be no solution γ ≈ 1! Usually this can be accomplished, but an extra complication occurs when W1 = W2 while V1 and V2 have fixed values, as they characterise real signals. This happens in the example under study, where we have We S Vn and We T Vη. It leads to an upper bound for the filter We according to:

    1/|We| > |Vη| (1 − 1/|We Vn|) ≈ |Vη|  ⇒  |We| < 1/|Vη|      (9.58)

The better the sensor, the smaller the measurement filter |Vη| can be, the larger the filter |We| can be chosen and the better the ultimate performance will be. Again this reflects the fact that we can never control better than the accuracy of the sensor allows us.
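At a single frequency the geometric argument can be quantified: with weight magnitudes w1 = |W1V1| and w2 = |W2V2|, the best achievable value of max(w1|S|, w2|T|) under S + T = 1 equals w1·w2/(w1 + w2), so feasibility (< 1) requires 1/w1 + 1/w2 > 1, in line with (9.53). A numeric check (plain Python, example weight values):

```python
def best_cost(w1, w2, steps=20001):
    # S on the real segment [0, 1] is optimal: moving S off this segment
    # cannot decrease both |S| and |1 - S|.
    return min(max(w1 * k / (steps - 1.0), w2 * (1.0 - k / (steps - 1.0)))
               for k in range(steps))

for w1, w2 in ((2.0, 2.0), (1.9, 1.9), (4.0, 1.5)):
    assert abs(best_cost(w1, w2) - w1 * w2 / (w1 + w2)) < 1e-3

assert best_cost(2.0, 2.0) >= 1.0 - 1e-9   # both weights at 2: infeasible
assert best_cost(1.9, 1.9) < 1.0           # slightly relaxed: feasible
```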
In that case of integral control we have indeed infinite gain (via 1/(jω)), similar to the previous example by taking C = ∞. A final zero error in the step response of a control loop including an integrator should therefore be understood within this measurement noise effect. We encountered this very same effect before in the one frequency example.

9.3 Limitations due to plant characteristics.

In the previous subsections the weights V∗ have been based on the exogenous input characteristics. The weight Wu was determined by the actuator limits and the model perturbations. Finally, limits on the weight We were derived based on the relation S + T = I. Never, up to now, were the characteristics of the plant itself considered. It appears that these very dynamical properties put bounds on the final performance. We will see that not so much instability, but in particular nonminimum phase zeros and limited gain can have detrimental effects on the final performance. For instability this is clear if one accepts that some effort is to be made to stabilise the plant, which inevitably will be at the cost of the performance.

9.3.1 Plant gain.

Let us forget about the low measurement noise for the moment and concentrate on the remaining mixed sensitivity problem:

    z = (ũ ; ẽ) = [Wu R Vn ; We S Vn] (ñ) = M w      (9.59)

From chapter 4 we know that for stable plants P we may use the internal model implementation of the controller, where Q = R and S = 1 − PQ. Very high weights |We| for good tracking necessarily require:

    ∀ω : { |We S Vn| = |We (1 − PQ) Vn| < 1 }  ⇔      (9.60)
    { |(1 − PQ) Vn| < 1/|We| ≈ 0 }  ⇒      (9.61)
    { Q = P⁻¹ }      (9.62)

For perfect tracking and disturbance rejection one should thus be able to choose Q = P⁻¹. Even in the case that P is invertible, it needs to have sufficient gain, since the first term in the mixed sensitivity problem yields:

    ∀ω : { |Wu Q Vn| = |Wu P⁻¹ Vn| < 1 }  ⇔      (9.63)
    { |P(jω)| > |Wu(jω) Vn(jω)| }  ⇔      (9.64)
    { |P(jω)| / |Wu(jω)| > |Vn(jω)| }      (9.65)

The last constraint simply states that, given the bound on the actuator input by 1/Wu, the gain of the plant P for each frequency in the tracking band should be large enough to compensate for the disturbance n as a reaction to the input u. That is, |P/Wu|, viz. the maximum effect of an input u at the output, should potentially compensate the maximum disturbance |Vn|. Typically, this is the same constraint as we found in subsection 9.1.3, and it particularly poses a significant limiting effect on the aim to accomplish zero tracking error at ω = 0.
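Screening the plant gain against the weights, as in (9.64), is a simple frequency grid exercise. The plant and weights below are illustrative stand-ins, not taken from the text:

```python
P  = lambda s: 10.0 / ((s + 1.0) * (0.1 * s + 1.0))  # illustrative plant
Wu = lambda s: 0.2                                   # illustrative weights
Vn = lambda s: 5.0 / (s + 1.0)

grid = [10.0 ** (0.05 * k - 2.0) for k in range(100)]    # 0.01 .. ~890 rad/s
bad = [w for w in grid if abs(P(1j * w)) <= abs(Wu(1j * w) * Vn(1j * w))]
# For this combination the gain is only insufficient at high frequencies,
# so the aimed tracking band has to stop below them.
assert bad and min(bad) > 1.0
```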
Indeed, if we compare the lower bound on the plant with the robustness constraint on the additive model perturbation, we get:

    ∀ω : |P| > |Wu Vn| > |∆P|      (9.66)

which says that a modelling error larger than 100% will certainly prevent tracking and disturbance rejection.

All is well if the plant gain suffices, but what should be done if the gain of P is insufficient, at least at certain frequencies? Simply adapt your performance aim by decreasing the weight We, as follows. Starting from the constraint we have:

    ∀ω : { |Wu Q Vn| < 1 }  ⇔  { |Q| < 1/|Wu Vn| }      (9.67)

The above bound on Q prohibits taking Q = P⁻¹, as |P| is too small for certain frequencies ω ∈ Ω, so that we will always have:

    ∀ω ∈ Ω : { |PQ| < 1 }  ⇒  { |1 − PQ| > 1 − |PQ| > 1 − |P|/|Wu Vn| > 0 }      (9.68)

Unfortunately, this puts a lower bound on the weighted sensitivity: from the condition |We (1 − PQ) Vn| < 1 we learn:

    ∀ω ∈ Ω : |We| < 1/(|Vn| |1 − PQ|) < 1/(|Vn| (1 − |P|/|Wu Vn|)) = 1/(|Vn| − |P|/|Wu|)      (9.69)

and the best sensitivity we can expect for such a weight We is necessarily close to its upper bound, given by:

    ∀ω ∈ Ω : |S| < 1/|We Vn| = (|Vn| − |P|/|Wu|)/|Vn| = 1 − |P|/|Wu Vn|      (9.70)

9.3.2 RHP-zeros.

For perfect tracking and disturbance rejection one should be able to choose Q = P⁻¹. In the previous section this was thwarted by the range of the actuator or by the model uncertainty, via mainly Wu. Another condition on Q is stability, and here the nonminimum phase or RHP (Right Half Plane) zeros are the spoilsport, as can easily be grasped. The crux is that no controller C may compensate these zeros by RHP poles, as the closed loop system would become internally unstable. Hence Q is stable and |Q(z)| < ∞ at any RHP zero z, where necessarily P(z) = 0. So, from the maximum modulus principle introduced in chapter 4, we necessarily get:

    sup_ω |We(jω) S(jω) Vn(jω)| ≥ |We(z) (1 − P(z)Q(z)) Vn(z)| = |We(z) Vn(z)|      (9.71)

Because we want the weighted sensitivity to be less than one, we should at least require that the weights satisfy:

    |We(z) Vn(z)| < 1      (9.72)

This puts a strong constraint on the choice of the weight We, because heavy weights at the imaginary axis band, where we like to have a small S, will have to be arranged by poles and zeros of We and Vn in the LHP; the "mountain peaks" caused by these poles will certainly have their "mountain ridges" passed on to the RHP, where at the position of the zero z their height is limited according to the above formula. This is quite an abstract explanation.
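The maximum modulus bound can be illustrated numerically: with S = 1 − PQ, a stable Q and a RHP zero z of P, the weighted sensitivity evaluated on the imaginary axis can never undercut the interior value |We(z)Vn(z)|. The transfers below are illustrative (W stands for the combined weight We·Vn):

```python
z = 1.0
P = lambda s: (1.0 - s) / (s + 1.0) ** 2     # RHP zero at s = z = 1
Q = lambda s: 1.0                            # some stable Q
S = lambda s: 1.0 - P(s) * Q(s)              # hence S(z) = 1
W = lambda s: 2.0 / (s + 2.0)                # stable minimum phase weight We*Vn

interior = abs(W(z) * S(z))                  # = |W(1)| = 2/3
peak = max(abs(W(1j * w) * S(1j * w))
           for w in (0.01 * k for k in range(1, 10000)))
assert abs(interior - 2.0 / 3.0) < 1e-12
assert peak >= interior                      # maximum modulus principle at work
```

Whatever stable Q one substitutes, the peak of |W·S| on the imaginary axis stays at or above |W(z)|, which is exactly why the weights must satisfy |We(z)Vn(z)| < 1.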
Let us therefore turn to the background of the RHP zeros and a simple example. Nonminimum phase zeros, as the engineering name indicates, originate from some strange internal phase characteristics, usually caused by contradictory signs of behaviour in certain frequency bands. As an example may function:

    P(s) = P1(s) + P2(s) = 1/(s + 1) − 2/(s + 10) = −(s − 8)/((s + 1)(s + 10))      (9.73)

The two transfer components show competing effects because of the sign: the sign of the transfer with the slowest pole, at 1, is positive, while the sign of the other transfer, with the faster dynamics pole at 10, is negative. Brought into one rational transfer, this effect causes the RHP zero at z = 8. Note that the zero lies right between the two poles in absolute value. The zero could also have occurred in the LHP, e.g. for different gains of the two first order transfers (try for yourself). In that case a controller could easily cope with the phase characteristic by putting a pole on this LHP zero. In the RHP this is not allowed because of the internal stability requirement. So let us take a straightforward PI controller that compensates the slowest pole:

    P(s)C(s) = −[(s − 8)/((s + 1)(s + 10))] K (s + 1)/s      (9.74)

and take the controller gain K such that we obtain equal real and imaginary parts for the closed loop poles, as shown in Fig. 9.8, which leads to K = 3.

Figure 9.8: Root locus for the PI-controlled nonminimum phase plant.

In Fig. 9.9 the step response and the Bode plot for the closed loop system are shown, together with the results for the same controller applied to the single component P1(s) = 1/(s + 1) or to the other component P2(s) = −2/(s + 10). For the lower frequencies, ω ∈ (0, 5), the phase of the controller is appropriate and the plant is well controlled. For the higher frequencies the phase of the controller is incorrect: alas, if we had only this component, the chosen controller would make the plant unstable, as seen in the step response. The Bode plot shows a total gain enclosed by the gains of the two separate components, and the component −2/(s + 10) is even more broadbanded. The effect of the higher frequencies is still seen at the initial time of the response, where the direction (sign) is wrong.

Figure 9.9: Step responses and Bode plots of the closed loops of P, P1 and P2 with the same PI controller.
3 Bode integral. 9.5 1 1. Even zeros at inﬁnity play a role as explained in the next subsection. It is irrelevant whether inﬁnity is in the RHP.5 −2 10 −1 0 1 2 Time (sec. In a band ω (z/2. ω (0. 100z) the controller can indeed well be chosen to control the component −2/(s + 10). as a rule of the thumb.) 10 10 10 10 Figure 9.3. z/2) and also the gain of We is limited. For strictly proper plants combined with strictly proper controllers we will have zeros at inﬁnity. 2z) we can never track well. 127 Step Response From: U(1) 1 10 1. the following holds: ∞ ln S(jω)dω = 0 (9. while now the other component 1/(s + 1) is the nasty one. on the other hand. we have that the combination of plant and controller L(s) = P (s)C(s) has at least a pole zero excess (#poles − #zeros) of two. Because in practice each system is strictly proper. If we have more RHPzeros zi . Consequently it is required: We (∞)Vn (∞) < 1 (9. how can we see the inﬂu ence of zeros at inﬁnity at a ﬁnite band? Here the Bode Sensitivity Integral gives us an impression (the proof can be found in e.75) If.g. However.77) s→∞ s→∞ 1 + L(s) Any tracking band will necessarily be bounded. LIMITATIONS DUE TO PLANT CHARACTERISTICS. 2 ∗ zi ).76) and we necessarily have: 1 lim S = lim  =1 (9.78) 0 .5 P2C/s(P2C+1) −1 0 0. If the pole zero excess is at least 2 and we have no RHP poles.9. simply because they cannot be compensated by poles.3. we have as many forbidden tracking bands ω (zi /2.5 1 PC/(PC+1) 0 P1C/s(P1C+1) 10 0. This limit is reﬂected in the above found limitation: We (z)Vn (z) < 1 (9. As a consequence for the choice of We for such a system we cannot aim at a broader frequency band than.5 P2C/(P2C+1) Amplitude To: Y(1) PC/s(PC+1) P1C/(P1C+1) 0 −1 10 −0. Zeros at inﬁnity should be treated like all RHPzeros. we would like to obtain a good tracking for a band ω (2z. Doyle [2]). 
because the opposite eﬀects of both components of the plant are apparent in their full extent.9: Closed loop of nonminimum phase plant and its components.
The explanation can best be done with an example:

L(s) = P(s)C(s) = K/(s(s + 100))    (9.79)

so that the sensitivity in closed loop will be:

S = s(s + 100)/(s² + 100s + K)    (9.80)

For increasing controller gain, K = {2100, 21000, 210000}, the tracking band will be broader, but we have to pay with a higher overshoot in both the frequency and the time domain, as Fig. 9.10 shows. The Bode rule states that the area of ln|S(jω)| under 0 dB equals the area above it. Note that we have, as usual, a horizontal logarithmic scale for ω in Fig. 9.10, which visually disrupts the concept of equal areas. Nevertheless the message is clear: the less tracking error and disturbance we want to obtain over a broader band, the more we have to pay for this by a more than 100% tracking error and disturbance multiplication outside this band.

Figure 9.10: Sensitivity for loop transfer with pole zero excess ≥ 2 and no RHP-poles.
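The equal-area rule can be checked numerically for the example above; a sketch in Python/numpy (the course files themselves use Matlab), also trying a hypothetical variant with one open-loop RHP-pole, anticipating the correction π·Σ Re(pi) that is discussed further on in this chapter:

```python
import numpy as np

# Bode sensitivity integral (9.78) for L(s) = K/(s(s+100)) with K = 2100,
# so S = s(s+100)/(s^2 + 100s + 2100): the integral of ln|S| is zero.
w = np.logspace(-6, 7, 400001)   # dense frequency grid (rad/s)
s = 1j * w

S1 = s * (s + 100) / (s**2 + 100 * s + 2100)
I1 = np.trapz(np.log(np.abs(S1)), w)

# Hypothetical variant with one open-loop RHP-pole at p = 1:
# L = 2100/((s-1)(s+100)), closed loop stable, integral becomes pi * p.
S2 = (s - 1) * (s + 100) / (s**2 + 99 * s + 2000)
I2 = np.trapz(np.log(np.abs(S2)), w)

print(I1)          # close to 0
print(I2, np.pi)   # close to pi: the extra positive area is the price of stabilisation
```

The grid is truncated at ω = 10⁷, so the result matches the analytic values only up to the small tail of ln|S| ≈ K/ω².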
9.3.4 RHP-poles.

The RHP-zeros play a fundamental role in the performance limitation, because they cannot be compensated by poles in the controller and will thus persist in existence also in the closed loop. Also the RHP-poles cannot be compensated, by RHP-zeros in the controller, again because of internal stability. In closed loop, however, they are no longer existent: they have been displaced into the LHP by means of the feedback, and consequently their effect is not as severe as that of the RHP-zeros. Nevertheless, their shift towards the LHP has to be paid for, as we will see. The effect of RHP-poles cannot be analysed by means of the internal model, because this concept can only be applied to stable plants P. The straightforward generalisation of the internal model for unstable plants is explained in chapter 11.

Essentially, for the unstable plant, the plant is first fed back for stabilisation, and next an extra external loop with a stable controller Q is applied for optimisation. So the idea is first stabilisation and, on top of that, optimisation of the stable closed loop. It will be clear that the extra effort of stabilisation has to be paid for. The currency is the use of the actuator range, limited by the actuator saturation: part of the actuator range will be occupied for the stabilisation task, so that less is left for the optimisation compared with a stable plant, where we can use the whole range of the actuator for optimisation.

This can be illustrated by a simple example, represented in Fig. 9.11. The plant has either a pole in the RHP at a > 0 or a pole in the LHP at −a < 0. The proportional controller K is bounded by the range of the actuator, |u| < umax, while the closed loop should be able to track a unit step.

Figure 9.11: Example for stabilisation effort.

The control sensitivity is given by:

R = u/r = K/(1 + K/(s ± a)) = K(s ± a)/(s ± a + K)    (9.81)

For stability of the unstable plant we certainly need K > a. The maximum u for a unit step occurs at t = 0, so:

max_t u(t) = u(0) = lim_{s→∞} R(s) = K = umax    (9.82)

Note that the actuator range should be large enough, i.e. umax > a; otherwise stabilisation is not possible. Stabilisation thus defines the lower bound K > a, while the saturation bounds the gain by K = umax at most. Consequently, the pole in closed loop can maximally be shifted umax to the left: for the unstable plant a part a is used for stabilisation of the plant and only the remainder K − a is left, yielding a bandwidth K − a, as illustrated in Fig. 9.12. With the same effort we obtain a tracking band of K + a for the stable plant.

Figure 9.12: Root loci for both plants 1/(s ± a).

Also the final error for the step response is smaller for the stable plant:

e = S r  ⇒  e(∞) = lim_{s→0} (s ± a)/(s ± a + K) = ±a/(K ± a)    (9.83)

So it is immediately clear that, for the unstable plant, the performance is worse on both counts: a smaller tracking band and a larger final error, purely because part of the actuator range is spent on stabilisation.
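The bookkeeping of equations 9.81 to 9.83 is easily checked; a small sketch with hypothetical numbers K = umax = 8 and a = 2:

```python
import numpy as np

K, a = 8.0, 2.0   # hypothetical gain K = u_max and pole magnitude a

# Closed-loop pole of the loop K/(s ± a): root of s ± a + K, cf. (9.81).
pole_stable   = np.roots([1.0,  a + K])[0]   # plant 1/(s + a): pole at -(K + a)
pole_unstable = np.roots([1.0, -a + K])[0]   # plant 1/(s - a): pole at -(K - a)

# Absolute final step errors a/(K ± a), eq. (9.83).
e_stable, e_unstable = a / (K + a), a / (K - a)

print(pole_stable, pole_unstable)   # -10.0 and -6.0: bandwidths K+a and K-a
print(e_stable, e_unstable)         # 0.2 and ~0.333: the unstable plant does worse
```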
Let us show these effects by assuming some numerical values: K = umax = 5 and a = 1, which leads to closed loop poles of respectively 4 and 6 (in magnitude) for the unstable and the stable plant, and ditto bandwidths. Fig. 9.13 shows the two step responses. The final errors are respectively 1/6 and 1/4 for the stable and the unstable plant, and certainly

a/(K + a) < a/(K − a)    (9.84)

being the respective absolute final errors. Also the two sensitivities are shown, where the differences in bandwidth and in the final error (at ω = 0) are evident.

Figure 9.13: Step response and |S(jω)| for controlled stable and unstable plants.

The question remains how these effects influence the choice of the weights. Theoretically the effect is dual to the restriction by the RHP-zeros on the sensitivity: here we have, because of the maximum modulus principle,

sup_ω |T(jω)| ≥ |T(s)|_{s=p} = |P C/(1 + P C)|_{s=p} = 1    (9.85)

where p, with Re(p) > 0, is such an unstable pole position. So T cannot be made small in a band ω ∈ (p/2, 2p) around the unstable pole, for reasons of stability. Including general weights WT and VT we obtain:

|WT(p)VT(p)| ≤ sup_ω |WT(jω)T(jω)VT(jω)| < 1    (9.86)

So there is an upper bound on the weights on the complementary sensitivity. For the low size generalised plant of equation 9.46 we have three times a weighted complementary sensitivity, of which two are explicitly weighted control sensitivities (R = T/P):

We T Vη  ⇒  WT VT = We Vη    (9.87)
Wu R Vn  ⇒  WT VT = (Wu Vn)/P    (9.88)
Wu R Vη  ⇒  WT VT = (Wu Vη)/P    (9.89)

Only the first entry yields a bound on the weights.
In case of an unstable pole p it is thus required that:

|We(p)Vη(p)| < 1    (9.90)

because for the other two entries there holds:

|Wu(p)Vn(p)/P(p)| = |Wu(p)Vη(p)/P(p)| = 0 < 1    (9.91)

as P(p) = ∞. The condition of inequality 9.90 is only a poor condition, because measurement noise is usually very small. This is not really the effect we are looking for: one would like the cost of stabilisation itself to show up in the weights, but alas I have not been able to find it explicitly. You are invited to express the stabilisation effort explicitly in the weights.

In the Bode integral the effect of RHP-poles is evident, because if we have Np unstable poles pi the Bode integral changes into:

∫₀^∞ ln|S(jω)| dω = π Σ_{i=1}^{Np} Re(pi)    (9.92)

which says that there is an extra positive area for ln|S| given by the sum of the real parts of the unstable poles multiplied by π. This is exactly the cost of the stabilisation, and it increases the further away the poles are from the imaginary axis. It implicitly states that we have to choose We such that there is room left for S in the non-tracking band to build up this extra positive area, where tracking and disturbance rejection are worse than 100%.

Also in the time domain we have a restriction on the step response, resembling the Bode integral in the frequency domain (see Engell [16]). Let the open loop transfer have an unstable pole at p. For internal stability T(p) = 1, so that we may write:

1 = T(p) = ∫₀^∞ g(t) e^{−pt} dt    (9.93)

according to the Laplace transform of the closed loop impulse response g(t). Let the step response be h(t), so that g(t) = dh(t)/dt. Then integration by parts yields:

1 = ∫₀^∞ g(t) e^{−pt} dt = ∫₀^∞ e^{−pt} dh(t)    (9.94)
  = [h(t) e^{−pt}]₀^∞ − ∫₀^∞ h(t) de^{−pt}    (9.95)
  = p ∫₀^∞ h(t) e^{−pt} dt    (9.96)

where we used that h(0) = 0 when the closed loop system is strictly proper, and that h(∞) is finite. Because it is straightforward that

∫₀^∞ e^{−pt} dt = −(1/p) ∫₀^∞ de^{−pt} = −(1/p) [e^{−pt}]₀^∞ = 1/p    (9.97)

the combination yields the restrictive time integral:

∫₀^∞ {1 − h(t)} e^{−pt} dt = 0    (9.98)

Equation 9.98 restricts the attainable step responses: the integral of the step response error, weighted by e^{−pt}, must vanish. As h(t) is below 1 for small values of t, this initial area must be compensated by values of h(t) above 1 for larger t.
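A quick sanity check of equation 9.98; take, for instance, the arbitrary (hypothetical) unstable plant P = 1/(s − 2) stabilised by C = 7, so that T(s) = 7/(s + 5), T(2) = 1 and h(t) = 1.4(1 − e^{−5t}):

```python
import numpy as np

# Time integral (9.98) for the hypothetical loop P = 1/(s-2), C = 7 (p = 2):
# the step response is h(t) = 1.4(1 - exp(-5t)).
t = np.linspace(0.0, 40.0, 400001)
h = 1.4 * (1.0 - np.exp(-5.0 * t))
I = np.trapz((1.0 - h) * np.exp(-2.0 * t), t)
print(I)   # vanishes: the initial area below 1 is compensated by the overshoot
```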
This compensation is discounted for t → ∞ by the weight e^{−pt}, and even more so if the steady state error happens to be zero by integral control action. The larger p is, the shorter the available compensation time will be. So the step response cannot show an infinitesimally small error for a long time and still satisfy 9.98: consequently either a large overshoot with rapid convergence to the steady state value, or a small overshoot with slow convergence must occur. If an unstable pole and actuator limitations are both present, the initial error integral of the step response is bounded from below, and hence there must be a positive control error area, during which the response is larger than 1, which is at least as large as the initial error integral due to the weight e^{−pt}.

For our example P = 1/(s − 1) we can choose C = 5, as we did before, or C = 5(s + 1)/s to accomplish a zero steady state error while still avoiding actuator saturation. The respective step responses are displayed in Fig. 9.14, together with the weight e^{−t}.

Figure 9.14: Closed loop step response for P = 1/(s − 1), restricted by the time integral 9.98.

So, finally, we have presented some insight into the mechanism of RHP-poles, but the only testable bound on the weights is We(p)Vη(p) < 1. This refers to the limitations of the sensor via Vη. However, the bounding effect of the stabilisation effort, in the face of a restricted actuator range, could not be made explicit in a bound on the allowable weighting filters for the left over performance. If you have a good idea yourself, you will certainly get a good mark for this course.

9.3.5 RHP-poles and RHP-zeros.

It goes without saying that, when a plant has both RHP-zeros and RHP-poles, the limitations of both effects will at least add up. It will even be more, because the stabilisation effort will be larger: these plants are infamous, because they can only be stabilised by unstable and non-minimum phase controllers, which add to the limitations again. RHP-zeros will attract root loci to the RHP, while we want to pull the root loci over the imaginary axis into the LHP.
Take for instance a plant with a zero z > 0, a pole p > 0 and an integrator pole at 0. If z < p we have alternatingly poles and zeros at 0, z, p and ∞. Then, for any controller gain K, positive or negative, and with no poles or zeros of the controller on the positive real axis, the root loci will always remain in the RHP, as displayed in Fig. 9.15.

Figure 9.15: Root loci for a plant which is not strongly stabilisable.

Only if we add RHP-zeros and RHP-poles in the controller, such that we alternatingly have pairs of zeros and poles on the real positive axis, can we accomplish that the root loci leave the real positive axis and can be drawn to the LHP, as illustrated in Fig. 9.16.

Figure 9.16: Root loci for a plant which is not strongly stabilisable, with an unstable non-minimum phase controller.

Such plants are called "not strongly stabilisable". It will be clear that this stabilisation effort is considerable, and the more so if the RHP-poles and RHP-zeros are close to each other, so that, without precautions, the open ends of the root loci leaving and approaching the real positive axis in Fig. 9.16 will close without passing through the LHP first. In Skogestad & Postlethwaite [15] this is formalised in the following bounding theorem.

Theorem (combined RHP-poles and RHP-zeros): Suppose that P(s) has Nz RHP-zeros zj and Np RHP-poles pi. Then for closed loop stability the weighted sensitivity function must satisfy for each RHP-zero zj:

||WS S VS||∞ ≥ c1j |WS(zj)VS(zj)|,    c1j = Π_{i=1}^{Np} |zj + p̄i| / |zj − pi| ≥ 1    (9.99)

and the weighted complementary sensitivity function must satisfy for each RHP-pole pi:

||WT T VT||∞ ≥ c2i |WT(pi)VT(pi)|,    c2i = Π_{j=1}^{Nz} |z̄j + pi| / |zj − pi| ≥ 1    (9.100)

Here WS and VS are sensitivity weighting filters, like the pair {We, Vn}; similarly, WT and VT are complementary sensitivity weighting filters, like the pair {We, Vη}. If we want the infinity norms to be less than γ ≈ 1, the above inequalities put upper bounds on the weight filters. On the other hand, if we apply the theorem without weights, we get:

||S||∞ ≥ max_j c1j,    ||T||∞ ≥ max_i c2i    (9.101)

This shows that large peaks for S and T are unavoidable if we have a RHP-pole and a RHP-zero located close to each other.
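To get a feel for equation 9.101: for a single real RHP-zero z and a single real RHP-pole p, the lower bound reduces to |z + p|/|z − p|. A sketch with hypothetical values:

```python
# Peak bounds (9.101) for one real RHP-zero z and one real RHP-pole p:
# c = |z + p| / |z - p| bounds both ||S||inf and ||T||inf from below.
def peak_bound(z, p):
    return abs(z + p) / abs(z - p)

print(peak_bound(4.0, 5.0))    # 9.0  : a close pair forces huge peaks
print(peak_bound(4.0, 40.0))   # ~1.22: a remote pair is far less harmful
```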
9.3.6 MIMO.

The previous subsections were based on the silent assumption of a SISO plant P. For MIMO plants fundamentally the same restrictions hold, but the interpretation is more complicated. For example, the plant gain is multivariable, and the consequent limitations need further study. For an m input, m output plant the situation is sketched for m = 3 in Fig. 9.17.

Figure 9.17: Scaling of a 3x3-plant.

The scaled tracking error e˜ as a function of the scaled reference r˜ and the scaled control signal u˜ is given by:

e˜ = We Vr r˜ − We P Wu^{−1} u˜    (9.102)

with We = diag(We1, We2, We3), Vr = diag(Vr1, Vr2, Vr3), Wu = diag(Wu1, Wu2, Wu3) and plant P = [Pij]. Note that we have, as usual, diagonal weights, where Wui(jω) stands for the maximum range of the corresponding actuator for the particular frequency ω. Also, the aimed range of the reference ri is characterised by Vri(jω) and should at least correspond to the permitted range for the particular output zi for frequency ω. So the plant is scaled with respect to each allowed input ui and each aimed output zj for each frequency ω.

For heavy weights We, in order to make e˜ ≈ 0, we need:

Vr r˜ = P Wu^{−1} u˜,   i.e.   r = P u    (9.103)

The ranges of the actuators should be sufficiently large in order to excite each output up to the wanted amplitude, expressed by:

u˜ = Wu P^{−1} Vr r˜    (9.104)

so that

||u˜||₂ ≤ ||Wu P^{−1} Vr||∞ ||r˜||₂ ≤ 1   ⇔   ||Wu P^{−1} Vr||∞ ≤ 1    (9.105)

Because σ̄(A^{−1}) = 1/σ(A), we may write:

σ̄(Wu P^{−1} Vr) ≤ 1   ⇔   ∀ω: σ(Vr^{−1} P Wu^{−1}) ≥ 1    (9.106)

which simply states that the gains of the scaled plant, in the form of the singular values, should all be larger than 1. A singular value less than one implies that a certain aimed combination of outputs, indicated by the corresponding left singular vector, cannot be achieved by any allowed input vector ||u˜|| ≤ 1. We presented the analysis for the tracking problem; exactly the same holds of course for the disturbance rejection, for which Vd should be substituted for Vr. Also the additive model perturbation can be treated in the same way, and certainly the combination of tracking, disturbance reduction and model error robustness by means of Vn, Vp and Wu.

In the above derivation we assumed that all matrices were square and invertible. If we have m inputs against p outputs, where p > m (a tall transfer matrix), we are in trouble: we actually have p − m singular values equal to 0, which is certainly less than 1. We can then only aim at controlling m output combinations. It says that certain output combinations cannot be controlled independently from other output combinations, as we have insufficient inputs. Let us show this with a well known example: the pendulum on a carriage of Fig. 9.18.

Figure 9.18: The inverted pendulum on a carriage.

Let the input u = F be the horizontal force exerted on the carriage, and let the outputs be θ, the angle of the pendulum, and x, the position of the carriage. So we have 1 input and 2 outputs, and we would like to track a certain reference for the carriage and at the same time keep the influence of a disturbance on the pendulum angle small, according to Fig. 9.19.

Figure 9.19: Keeping e and θ small in the face of r and d.

That is, we would like to make the total sensitivity small:

θ = P1 u + d,   x = P2 u,   u = C1 θ + C2 e,   e = r − x    (9.107)

⇒ [θ; e] = (I + P C)^{−1} [d; r],   with P = [P1; P2] and C = [C1 C2]

where the difference in sign for r and d does not matter. If we want both the tracking of x and the disturbance reduction of θ to be better than without control, we need:

σ̄(S) = σ̄((I + P C)^{−1}) = 1/σ(I + P C) < 1    (9.108)

The following can be proved:

σ(I + P C) ≤ 1 + σ(P C)    (9.109)

Since the rank of P C is 1 (1 input u), we have σ(P C) = 0, so that:

σ(I + P C) ≤ 1   ⇒   σ̄(S) ≥ 1    (9.110)

This result implies that we can never control both outputs e and θ appropriately in the same frequency band! It does not matter what the real transfer functions P1 and P2 look like. Also instability is not relevant here: the same result holds for a rocket, where the pendulum is upright, or for a gantry crane, where the pendulum is hanging. The crucial limitation is the fact that we have only one input u. The remedy is therefore either to add more independent inputs (e.g. a torque on the pendulum) or to require less, by weighting the tracking performance heavily and leaving θ determined only by the stabilisation conditions.

In the above example of the pendulum we treated the extreme case that σ(P) = 0, but certainly similar effects occur approximately if σ(P) << σ̄(P), i.e. if the condition number of P is very bad. For frequencies where this happens, performance will be very bad. If we state this reversely: if we want to have good tracking in all outputs in a certain frequency band, the first p (= number of outputs) singular values of P should be close to each other.

In the above terms all the remaining effects of RHP-zeros and RHP-poles can be treated. Every time again we have to take careful notice of which combinations of outputs we want to control and which combinations of inputs we can use for that.
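The rank-1 argument of equations 9.108 to 9.110 is easy to check numerically; a sketch with arbitrary, hypothetical values for P1, P2, C1, C2 at one frequency:

```python
import numpy as np

# One frequency point, arbitrary hypothetical values for the transfers.
P = np.array([[0.3 - 0.8j], [1.2 + 0.4j]])   # column [P1; P2]: one input, two outputs
C = np.array([[2.0 + 1.0j, -0.5 + 0.3j]])    # row [C1, C2]
PC = P @ C                                    # rank 1, so its smallest singular value is 0

s_PC = np.linalg.svd(PC, compute_uv=False)
S = np.linalg.inv(np.eye(2) + PC)             # sensitivity (I + PC)^(-1)
s_S = np.linalg.svd(S, compute_uv=False)

print(s_PC[-1])   # ~0: PC is singular
print(s_S[0])     # >= 1: sigma_max(S) >= 1, cf. (9.110)
```

Whatever values are chosen, the largest singular value of S never drops below 1, since there is only one input.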
These combinations find their representation in characteristic vectors, like singular vectors and eigenvectors. Sometimes this relieves the control task, as certain "difficult" vectors are irrelevant for the control job; sometimes it is just the opposite, as a "difficult" vector appears to be crucial for the control task. This makes the complete analysis a rather extensive task, too big for the limited course we are involved in. Therefore we refer to e.g. Skogestad [15] for more details.

9.4 Summary

Before passing on to an example, where we apply the above rules together with the programming instructions, it seems worthwhile to summarise the features here. A procedure for the set up can be described as:

1. Define the proposed control structure. Try to combine as many inputs as possible in an augmented plant.
2. Analyse the exogenous signals such that the characterising filters V∗ can be given numerical values.
3. Analyse the actuator bounds for each frequency, yielding the Wu.
4. Analyse the model perturbations and try to "catch" them with input filters V∗ and Wu, e.g. Wz for multiplicative errors.
5. Propose a filter We for the performance.
6. Test whether S + T = I is not violated by the set of weights, and adapt We if necessary.
7. Program the problem and try to find ||M||∞ < γ ≈ 1. If this is not possible, enlarge some weights or add others.

At this stage one often meets difficulties, as the programs cannot even find any γ at all, because there are problems before the actual optimisation starts. The following anomalies are often encountered, once the augmented plant (see chapter 13)

x˙ = A x + B1 w + B2 u
z = C1 x + D11 w + D12 u    (9.111)
y = C2 x + D21 w + D22 u

has been obtained:

1. (A, B2) is not stabilisable. Unstable modes cannot be controlled. Usually this is not a well defined plant.
2. (A, C2) is not detectable. Unstable modes cannot be observed. Usually this is not a well defined plant.
3. D12 does not have full rank, equal to the number of inputs ui. This means that not all inputs ui are penalised in the outputs z by means of the weights Wui. They should be penalised for all frequencies, so that biproper weights Wui are required. If not all ui are weighted for all frequencies, the effect is the same as when we have, in LQG-control, a weight matrix R which is singular and needs to be inverted in the solution algorithm. In chapter 13 we saw that for the LQG-problem D12 = R^{1/2}.
4. D21 does not have full rank, equal to the number of measured outputs yj. This is dual to the previous item. It means that not all measurements are polluted by "noise", i.e. by direct feedthrough from the exogenous inputs w. Noise-free measurements cannot exist, as they would be infinitely reliable; so, for each frequency, there should be some disturbance. The lack of full rank is comparable again with LQG-control, where the covariance matrix of the measurement noise would be singular: in chapter 13 we saw that for the LQG-problem D21 = Rw^{1/2}. Usually the problem is solved by making all exogenous input filters V∗ biproper, i.e. one should investigate whether the problem definition is realistic and whether some measurement noise should be added. If one does not want to increase the number of inputs wi, one can search for a proper entry in D21 and give it a very small value, thereby fooling the algorithm without influencing the result seriously.
5. Be sure that all your weights are stable and minimum phase.

Supposing, however, that, based on the above hints, you have indeed obtained a γ, but that it far exceeds one, the following tests, in the proposed order, can be done for finding the underlying reason:

1. Test the fundamental equality S + T = I again, i.e. test whether the respective weights intersect below 0 dB.
2. Test for sufficient plant gain(s) by ∀ω: σ(V∗^{−1} P Wu^{−1}) ≥ 1, with V∗ = Vr or Vd or Vn.
3. In case of RHP-zeros z, test We(z)V∗(z) < 1, with V∗ = Vr or Vd or Vn.
4. In case of RHP-poles p, test We(p)Vη(p) < 1.
5. In case of both RHP-poles and RHP-zeros, test on the basis of the theorem equations 9.99 and 9.100.
6. Test whether sufficient room is left for the sensitivity by its weights to satisfy the Bode integral, in particular in case of RHP-poles pi:

∫₀^∞ ln|S(jω)| dω = π Σ_{i=1}^{Np} Re(pi)    (9.112)

Finally, numerical problems may occur, i.e. the accuracy appears to be insufficient. Quite often this is due to too broad a frequency band over which all dynamics should be considered. Be sure that all poles and zeros in absolute value do not cover more than 5 decades. In particular, the biproperness requirement induced by the previous items may have set you to trespass the 5 decades, or the integration pole, which you have wisely shifted somewhat into the LHP, is still too small compared to the largest pole or zero. Still no solution? Find an expert.
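Once the filters are available as functions of s, tests 3 and 4 of the list above are one-liners; a scalar sketch with hypothetical first order filters (z = 8 is the RHP-zero of the example in section 9.3.2, p = 1 an assumed RHP-pole):

```python
# Summary tests 3 and 4 for a scalar plant with hypothetical weights.
z, p = 8.0, 1.0

We   = lambda s: (s / 3.0 + 1.0) / (s + 0.01)   # performance weight (hypothetical)
Vn   = lambda s: 0.1 * (s + 1.0) / (s + 10.0)   # noise filter (hypothetical)
Veta = lambda s: 0.05 * (s + 1.0) / (s + 5.0)   # sensor filter (hypothetical)

test3 = abs(We(z) * Vn(z))     # must be < 1 in case of a RHP-zero z
test4 = abs(We(p) * Veta(p))   # must be < 1 in case of a RHP-pole p
print(test3, test4, test3 < 1.0 and test4 < 1.0)
```

If either value approaches or exceeds one, the corresponding weight is too ambitious around the RHP-zero or RHP-pole and must be relaxed.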
Chapter 10

Design example

The aim of this chapter is to synthesize a controller for a rocket model with perturbations. First, a classic control design will be made, so as to compare the results with H∞ control and µ-control. Along the way, the use of various control toolboxes will be illustrated. The program files which will be used can be obtained from the ftp-site nt01.er.ele.tue.nl or via the internet home page of this course. We refer to the "readme" file for details.

10.1 Plant definition

The model has been inspired by a paper on rocket control from Enns [17]. Booster rockets fly through the atmosphere on their way to orbit. First, they encounter aerodynamic forces which tend to make the rocket tumble. This unstable phenomenon can be controlled with a feedback of the pitch rate to thrust control. The rocket engines are mounted in gimbals attached to the bottom of the vehicle to accomplish the thrust vector control. The pitch rate is measured with a gyroscope located just below the center of the rocket. Thus the sensor and actuator are not colocated. So the input is a thrust vector control and the measured output is the pitch rate.

The rigid body motion model is described by the transfer function

M(s) = −8 (s + .125)/((s + 1)(s − 1))    (10.1)

Note that M(0) = 1. In this example we have an extra, so called "flight path zero" in the transfer function, on top of the well known, so called "short period pole pair", which is mirrored with respect to the imaginary axis. We will use the model M as the basic model P in the control design.

The elasticity of the rocket complicates the feedback control. The elastic modes are described by complex, lightly damped poles associated with zeros. Instability can result if the control law confuses elastic motion with rigid body motion. Fuel consumption will decrease the distributed mass and the stiffness of the fuel tanks. Also changes in temperature play a role. As a consequence, the elastic modes will change. We have taken the worst scenario, in which poles and zeros change place. In this simplified model we only take the lowest frequency mode, yielding:

Ps(s) = Ks (s + .125)(s + .05 + 5j)(s + .05 − 5j)/((s + 1)(s − 1)(s + .06 + 6j)(s + .06 − 6j))    (10.2)

The gain Ks is determined so that Ps(0) = 1.
In the second perturbed model the elastic pole pair and zero pair have changed place:

Pa(s) = Ka (s + .125)(s + .06 + 6j)(s + .06 − 6j)/((s + 1)(s − 1)(s + .05 + 5j)(s + .05 − 5j))    (10.3)

The gain Ka is again determined so that Pa(0) = 1. Finally, we have M(s) = P(s) as the basic model, and Ps(s) − M(s) and Pa(s) − M(s) as the possible additive model perturbations. In Matlab, the plant definition can be implemented as follows:

% This is the script file PLANTDEF.M
%
% It first defines the model M(s)=-8(s+.125)/(s+1)(s-1)
% from its zero and pole locations. Subsequently, it introduces
% the perturbed models Pa(s)=M(s)*D(s) and Ps(s)=M(s)/D(s) where
% D(s) has poles and zeros nearby the imaginary axis

z0=-.125;                          p0=[1;-1];
zs=[-.125;-.05+j*5;-.05-j*5];      ps=[1;-1;-.06+j*6;-.06-j*6];
za=[-.125;-.06+j*6;-.06-j*6];      pa=[1;-1;-.05+j*5;-.05-j*5];
[numm,denm]=zp2tf(z0,p0,1);
[nums,dens]=zp2tf(zs,ps,1);
[numa,dena]=zp2tf(za,pa,1);

% adjust the gains such that M(0)=Ps(0)=Pa(0)=1:
km=polyval(denm,0)/polyval(numm,0); numm=numm*km;
ks=polyval(dens,0)/polyval(nums,0); nums=nums*ks;
ka=polyval(dena,0)/polyval(numa,0); numa=numa*ka;

% Define error models M-Ps and M-Pa
[dnums,ddens]=parallel(numm,denm,-nums,dens);
[dnuma,ddena]=parallel(numm,denm,-numa,dena);

% Plot the bode diagram of model and its (additive) errors
w=logspace(-3,2,3000);
magm=bode(numm,denm,w);
dmags=bode(dnums,ddens,w);
dmaga=bode(dnuma,ddena,w);
loglog(w,magm,w,dmags,w,dmaga);
title('M,M-Ps,M-Pa');
xlabel('The plant and its perturbations');

The Bode plots are shown in Fig. 10.1.

Figure 10.1: Nominal plant and additive perturbations.

As the errors exceed the nominal plant at ω ≈ 5 rad/s, the control band will certainly be less wide.

10.2 Classic control

The plant is a simple SISO-system, so we should be able to design a controller with classic tools.
In general, this is a good start, as it gives insight into the problem and is therefore of considerable help in choosing the weighting filters for an H∞ design. For the controlled system we wish to obtain a zero steady state error, i.e. integral action, while the bandwidth is bounded by the elastic mode at approximately 5 rad/s. Some trial and error with a simple low order controller leads soon to a controller of the form:

C(s) = −(1/2) (s + 1)/(s(s + 2))    (10.4)

Figure 10.2: Classic low order controller.

In the bode plot of this controller in Fig. 10.2 we observe that the control band is bounded by ω ≈ 0.25 rad/s. The root locus and the Nyquist plot look familiar for the nominal plant in Fig. 10.3.

Figure 10.3: Root locus and Nyquist plot for low order controller.

If we study the root loci for the two elastic mode models of Fig. 10.4, it is clear why such a restricted low pass controller is obtained, as we require robust stability and robust performance for the elastic mode models. An increase of the controller gain or bandwidth would soon cause the root loci, which emerge from the elastic mode poles, to pass the imaginary axis into the RHP for the elastic mode model Pa. This model shows the most nasty dynamics: it has the pole pair closest to the origin.

Figure 10.4: Root loci for the elastic mode models.

Also in the corresponding Nyquist plots in Fig. 10.5 we see that an increase of the gain would soon lead to an extra, and forbidden, encircling of the point −1 by the loops originating from the elastic mode.

Figure 10.5: Nyquist plots for elastic mode models.

Still, as we may observe from the closed loop disturbance step responses of the nominal model and the elastic mode models in Fig. 10.6, we notice some high-frequent oscillations.
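As a quick numerical check (a sketch, not one of the course m-files): with M(s) = −8(s + .125)/((s + 1)(s − 1)) and the controller (10.4), the loop transfer is 4(s + .125)/(s(s − 1)(s + 2)), and the nominal closed loop characteristic polynomial is Hurwitz:

```python
import numpy as np

# Closed-loop characteristic polynomial of M(s)C(s) = 4(s+0.125)/(s(s-1)(s+2)):
# s(s-1)(s+2) + 4(s+0.125) = s^3 + s^2 + 2s + 0.5
den = np.polymul([1, 0], np.polymul([1, -1], [1, 2]))   # s(s-1)(s+2)
num = np.polymul([4], [1, 0.125])                        # 4(s+0.125)
char = np.polyadd(den, num)

poles = np.roots(char)
print(char)                   # [1.  1.  2.  0.5]
print(np.real(poles) < 0)     # all True: nominal closed loop is stable
```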
144 CHAPTER 10. We can do better by taking care that the feedback loop shows no or very little action just in the neighborhood of the elastic modes.4: Root loci for the elastic mode models. rootloci PtsC and PtaC 10 5 Imag Axis 0 −5 −10 −10 −5 0 5 10 Real Axis Figure 10.3: Root locus and Nyquist plot for low order controller. Nyquist PtsC Nyquist PtaC 5 5 4 4 3 3 2 2 1 1 Imag Axis Imag Axis 0 0 −1 −1 −2 −2 −3 −3 −4 −4 −5 −5 −5 −4 −3 −2 −1 0 1 2 3 4 5 −5 −4 −3 −2 −1 0 1 2 3 4 5 Real Axis Real Axis Figure 10.5: Nyquist plots for elastic mode models. which has a narrow dip in the transfer just at the proper place: . as the poles have been shifted closer to the imaginary axis by the feedback and consequently the elastic modes are less damped. frequent oscillations. Therefore we include a notch ﬁlter. DESIGN EXAMPLE rootlocus MC Nyquist MC 4 5 4 3 3 2 2 1 1 Imag Axis Imag Axis 0 0 −1 −1 −2 −2 −3 −3 −4 −4 −5 −4 −3 −2 −1 0 1 2 3 4 −5 −4 −3 −2 −1 0 1 2 3 4 5 Real Axis Real Axis Figure 10. that occur for the model Pa .
because at ω = 50 the plant transfer itself is very small. 50 0 Gain dB −50 −100 −3 −2 −1 0 1 2 10 10 10 10 10 10 Frequency (rad/sec) bodeplots controller 0 −90 Phase deg −180 −270 −3 −2 −1 0 1 2 10 10 10 10 10 10 Frequency (rad/sec) Figure 10.8 are hardly changed close to the origin. 10.5 0 Amplitude −0.5 −2 0 10 20 30 40 50 60 70 80 90 100 Time (secs) Figure 10. CLASSIC CONTROL 145 step disturbance for M.6: Disturbance step responses for low order controller. . as expected. given the small plant transfer at those high frequencies.5j) C(s) = − 150 (10. We clearly discern this dip in the bode plot of this controller in Fig. 10. Studying the root loci for the two elastic mode models of Fig. Even Matlab had problems in showing the eﬀect because apparently the gain should be very large in order to derive the exact track of the root locus.7. 1 (s + 1) (s + . 10.10. Roll oﬀ poles have been placed far away.5 −1 −1.5j)(s + . Further away. Pts or Pta in loop 1 0.5) 2 s(s + 2) (s + 50 + 50j)(s + 50 − 50j) We have positioned zeros just in the middle of the elastic modes polezero couples.7: Classic controller with notch ﬁlter. where they cannot inﬂuence control. The root locus and the Nyquist plot for the nominal plant in Fig.9 it can be seen that there is hardly any shift of the elastic mode poles.055 + 5. the root locus is not interesting and has not been shown.2.055 − 5. The poles remain suﬃciently far from the imaginary axis. where the rolloﬀ poles lay.
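The depth of the notch in Eq. (10.5) can be verified numerically. A plain-Python sketch (again only illustrative; the notes themselves use Matlab), using the polynomial form numc/denc from the script below, i.e. (s² + .11s + 30.2525) for the zero pair and (s² + 100s + 5000) for the roll-off poles:

```python
def C_notch(s):
    # notch controller of Eq. (10.5):
    # C(s) = -(1/2)(s+1)/(s(s+2)) * 150 (s^2 + .11 s + 30.2525)/(s^2 + 100 s + 5000)
    return (-0.5 * (s + 1) / (s * (s + 2))
            * 150 * (s * s + 0.11 * s + 30.2525) / (s * s + 100 * s + 5000))

# the transfer collapses right at the elastic mode frequency, omega = 5.5 rad/s
for w in (1.0, 5.5, 50.0):
    print(f"|C(j{w})| = {abs(C_notch(1j * w)):.5f}")
```

At ω = 5.5 the numerator quadratic is nearly zero (−30.25 + 30.2525 plus a small imaginary part), so the loop gain is suppressed by more than two orders of magnitude relative to ω = 1: exactly the "narrow dip at the proper place".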
146 CHAPTER 10. DESIGN EXAMPLE

Figure 10.8: Root locus and Nyquist plot for controller with notch filter.
Figure 10.9: Root loci for the elastic mode models with notch filter.
Figure 10.10: Nyquist plots for elastic mode models with notch filter.

Because of the notch filter, the loops due to the elastic modes have been substantially decreased in the loop transfer, and consequently there is little chance left that the point -1 is encircled. This is also reflected in the Nyquist plots in Fig. 10.10. Finally, as a consequence, the disturbance step responses of the two elastic models no longer show elastic mode oscillations, and they hardly differ from the rigid mass model, as shown in Fig. 10.11. You can replay all computations, possibly with modifications, by running raketcla.m as listed below:
10.2. CLASSIC CONTROL 147

Figure 10.11: Disturbance step responses for controller with notch filter.

% This is the script file RAKETCLA.M
%
% In this script file we synthesize controllers
% for the plant (defined in plantdef) using classical
% design techniques. It is assumed that you ran
% *plantdef* before invoking this script.
%
% First try the classic control law: C(s)=.5(s+1)/s(s+2)
numc=[.5 .5]; denc=[1 2 0];
bode(numc,denc,w); title('bodeplots controller'); pause;
numl=conv(numc,numm); denl=conv(denc,denm);
rlocus(numl,denl); title('rootlocus MC'); pause;
nyquist(numl,denl,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist MC'); pause;
[numls,denls]=series(numc,denc,nums,dens);
[numla,denla]=series(numc,denc,numa,dena);
rlocus(numls,denls); hold;
rlocus(numla,denla); title('rootloci PtsC and PtaC'); hold off; pause;
nyquist(numls,denls,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist PtsC'); pause;
nyquist(numla,denla,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist PtaC'); pause;
[numcl,dencl]=feedback(1,1,numl,denl);
[numcls,dencls]=feedback(1,1,numls,denls);
[numcla,dencla]=feedback(1,1,numla,denla);
step(numcl,dencl); hold;
step(numcls,dencls);
step(numcla,dencla);
title('step disturbance for M, Pts or Pta in loop');
hold off; pause;
%
% Improved classic controller C(s)=[.5(s+1)/s(s+2)]*
% 150(s+.055+j*5.5)(s+.055-j*5.5)/(s+50+j*50)(s+50-j*50)
numc=conv([.5 .5],[1 .11 30.2525]*150);
denc=conv([1 2 0],[1 100 5000]);
bode(numc,denc,w); title('bodeplots controller'); pause;
numl=conv(numc,numm); denl=conv(denc,denm);
rlocus(numl,denl); axis([-10 10 -10 10]); title('rootlocus MC'); pause;
nyquist(numl,denl,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist MC'); pause;
[numls,denls]=series(numc,denc,nums,dens);
[numla,denla]=series(numc,denc,numa,dena);
rlocus(numls,denls); hold;
rlocus(numla,denla); axis([-10 10 -10 10]);
title('rootloci PtsC and PtaC'); hold off; pause;
nyquist(numls,denls,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist PtsC'); pause;
nyquist(numla,denla,w);
set(figure(1),'currentaxes',get(gcr,'plotaxes'));
axis([-5 5 -5 5]); title('Nyquist PtaC'); pause;
[numcl,dencl]=feedback(1,1,numl,denl);
[numcls,dencls]=feedback(1,1,numls,denls);
[numcla,dencla]=feedback(1,1,numla,denla);
step(numcl,dencl); hold;
step(numcls,dencls);
step(numcla,dencla);
title('step disturbance for M, Pts or Pta in loop');
hold off; pause;

10.3 Augmented plant and weight filter selection

Being an example, we want to keep the control design simple, so we propose a simple mixed sensitivity setup as depicted in Fig. 10.12. The exogenous input w = d~ stands in principle for the aerodynamic forces acting on the rocket in flight at a nominal speed. Since these are forces which act on the rocket, like the actuator does by directing the gimbals, it would be more straightforward to model d as an input disturbance. To keep track with the presentation of disturbances at the output throughout the lecture notes, and to cope more easily with the additive perturbations by means of Vd Wu, we have chosen to leave it an output disturbance. As we know very little about the aerodynamic forces, a flat spectrum seems appropriate, as we see no reason that some frequencies should be favoured. The disturbed output of the rocket, the pitch rate, should be kept as close to zero as possible. Because we can see it as an error, we incorporate it, in a weighted form e~, as a component of the output z = (u~, e~)^T. At the same time the error e is used as the measurement y = e for the controller. Note that we did not pay attention to measurement errors. The filter Vd, together with the weight Wu on the actuator input u = u~, will also represent the additive model perturbations. The mixed sensitivity is thus defined by:
10.3. AUGMENTED PLANT AND WEIGHT FILTER SELECTION 149

Figure 10.12: Augmented plant for rocket.

    z = [ u~ ]   [ Wu K (1 - P K)^{-1} Vd ]       [ Wu R Vd ]
        [ e~ ] = [ We   (1 - P K)^{-1} Vd ] w  =  [ We S Vd ] d~              (10.6)

The disturbance filter Vd represents the aerodynamic forces. As there are no other exogenous inputs, there is no problem of scaling: if we increase the gain of Vd we will just have a larger infinity norm bound γ, but no different optimal solution, because Vd is equally involved in both terms of the mixed sensitivity problem. Based on the exercise of classic control design, we cannot expect a disturbance rejection over a broader band than 2 rad/s. We could then choose a first order filter Vd with a pole at 1 rad/s: passing through the process, there will be a decay for frequencies higher than 1 rad/s with 20 dB/decade. We would like to shift the pole to the origin, so that the controller will necessarily contain integral action: in that way we penalise the tracking error via We S Vd infinitely heavily at ω = 0. For numerical reasons, however, we have to take the integration pole somewhat in the LHP, at a distance which is small compared to the poles and zeros that determine the transfer P, which will be on the edge of numerical power. Furthermore, if we choose Vd to be biproper, we avoid problems with inversions, in particular for the mixed sensitivity problems, where we will see that in the controller a lot of pole-zero cancellations with the augmented plant will occur. So Vd has been chosen as:

    Vd = (.01 s + 1)/(s + .0001) = .01 (s + 100)/(s + .0001)                  (10.7)

Note that the pole and zero lay 6 decades apart. The gain has been chosen as .01, which appeared to give the least numerical problems. The Bode plot of the filter is displayed in Fig. 10.13.
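The two asymptotes of this biproper filter are quickly confirmed numerically. A plain-Python check of Eq. (10.7) (illustrative only; the design scripts in these notes are Matlab):

```python
def Vd(s):
    # disturbance filter of Eq. (10.7): Vd(s) = (.01 s + 1)/(s + .0001)
    return (0.01 * s + 1) / (s + 0.0001)

print(abs(Vd(0)))       # DC gain 1/.0001 = 1e4: a near-integrator
print(abs(Vd(1000j)))   # high-frequency gain approaches the chosen .01
```

The 10^4 DC gain is what forces the tracking error to be penalised (almost) infinitely heavily at ω = 0, while the 6-decade pole-zero spacing keeps the filter biproper and invertible.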
150 CHAPTER 10. DESIGN EXAMPLE

Figure 10.13: Weighting filters for rocket (Vd, We, Wu).
Figure 10.14: WR encompasses the additive model errors.

Choosing again a biproper filter for We, we cannot go much further with the zero than the zero at 100 for Vd. Keeping We on the 0 dB line for low frequencies we thus obtain:

    We = (.02 s + 2)/(s + 2) = .02 (s + 100)/(s + 2)                          (10.8)

as displayed in Fig. 10.13.

Concerning Wu, we again know very little about the actuator, consisting of a servo-system driving the angle of the gimbals to direct the thrust vector. Certainly, the allowed band will be low pass. So all we can do is to choose a high pass penalty Wu such that the expected model perturbations will be covered, and hope that this is sufficient to prevent actuator saturation. The additive model perturbations Ps(jω) - M(jω) and Pa(jω) - M(jω) are shown in Fig. 10.14 and should be less than WR(jω) = Wu(jω)Vd(jω), which is displayed as well. We have chosen two poles of Wu in between the poles and zeros of the flexible mode of the rocket, just at the place where we have chosen the zeros in the classic controller. We will see
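The "0 dB line at low frequencies" claim for We in Eq. (10.8) is a one-liner to check. A plain-Python sketch (illustrative, not part of the original Matlab scripts):

```python
def We(s):
    # error weight of Eq. (10.8): We(s) = (.02 s + 2)/(s + 2)
    return (0.02 * s + 2) / (s + 2)

print(abs(We(0)))      # = 1: We sits exactly on the 0 dB line at low frequencies
print(abs(We(1e6j)))   # -> .02: the weight relaxes by a factor 50 at high frequencies
```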
that, by doing so, indeed the mixed sensitivity controller will also contain zeros at these positions, showing the same notch filter. In order to make Wu biproper again, we now have to choose zeros at the lower end of the frequency range, i.e. at .001 rad/s. The gain of the filter has been chosen such that the additive model errors are just covered by WR(jω) = Wu(jω)Vd(jω). Finally we have for Wu:

    Wu = (1/3) (100 s² + .2 s + .0001)/(s² + .1 s + 30.2525)
       = (100/3) (s + .001)² / ((s + .05 + 5.5j)(s + .05 - 5.5j))             (10.9)

Having defined all filters, we can now test whether the conditions with respect to S + T = 1 are satisfied. Therefore we display WS = We Vd as the weighting filter for the sensitivity S in Fig. 10.15. Similarly, the weight for the control sensitivity R is WR = Wu Vd, and from that we derive that for the complementary sensitivity T the weight equals WT = Wu Vd / P, represented in Fig. 10.15 as well. We observe that WS is low pass and WT is high pass and, more importantly, that they intersect below the 0 dB line.

Figure 10.15: Weighting filters for sensitivities S, R and T (WS, WR and WT).

Finally, the above reasoning seems to suggest that one can derive and synthesize the weighting filters a priori. In reality this is an iterative process, where one starts with certain filters and adapts them in subsequent iterations such that they lead to a controller which gives acceptable behaviour of the closed loop system. In particular, the gains of the various filters need several iterations to arrive at proper values. In this example, the proposed filter selection is implemented in the following Matlab script:

% This is the script WEIGHTS.M
numVd=[.01 1]; denVd=[1 .0001];
numWe=[.02 2]; denWe=[1 2];
numWu=[100 .2 .0001]/3; denWu=[1 .1 30.2525];
magVd=bode(numVd,denVd,w);
magWe=bode(numWe,denWe,w);
magWu=bode(numWu,denWu,w);
loglog(w,magVd,w,magWe,w,magWu);
title('Weighting parameters in control configuration');
xlabel('Vd, We, Wu'); pause;
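That WR = Wu Vd indeed peaks right at the flexible mode, where the additive errors Ps - M and Pa - M are largest, can be seen without plotting. A plain-Python evaluation of Eqs. (10.7) and (10.9) (an illustrative sketch, not one of the notes' Matlab scripts):

```python
def Vd(s):
    return (0.01 * s + 1) / (s + 0.0001)                       # Eq. (10.7)

def Wu(s):
    # Eq. (10.9): Wu(s) = (100 s^2 + .2 s + .0001)/(3 (s^2 + .1 s + 30.2525))
    return (100 * s * s + 0.2 * s + 0.0001) / (3 * (s * s + 0.1 * s + 30.2525))

WR = lambda w: abs(Wu(1j * w) * Vd(1j * w))                    # weight on R
for w in (0.1, 1.0, 5.5, 100.0):
    print(f"|WR(j{w})| = {WR(w):.3f}")
```

The lightly damped pole pair of Wu at -.05 ± 5.5j makes WR more than two orders of magnitude larger at ω = 5.5 than at ω = 1, which is what lets a modest overall gain of 1/3 still cover the resonant additive model errors.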
magWS=magVd.*magWe;
magWR=magVd.*magWu;
magWT=magWR./magm;
loglog(w,magWR,w,dmags,w,dmaga);
title('Compare additive modelerror weight and "real" additive errors');
pause;
loglog(w,magWS,w,magWR,w,magWT);
title('Sensitivity, control and complementary sensitivity weightings');
xlabel('WS, WR and WT'); pause;
echo off

152 CHAPTER 10. DESIGN EXAMPLE

10.4 Robust control toolbox

The mixed sensitivity problem is well defined now. With the Matlab Robust Control toolbox we can compute a controller together with the associated γ. This toolbox can only be used for a simple mixed sensitivity problem: the configuration structure is fixed, and only the weighting filters corresponding to S, T and/or R have to be specified. The example which we study in this chapter fits in such a framework, but we emphasize that the toolbox lacks the flexibility for larger, or different, structures.

For the example, it finds γ = 1.338, which is somewhat too large: γ has to be decreased, so we should adapt the weights once again. More trial and error for improving the weights is therefore necessary. The frequency response of the controller is displayed in Fig. 10.16 and looks similar to the controller found by classical means. Nevertheless, the step responses displayed in Fig. 10.17 still show the oscillatory effects of the elastic modes. Finally, in Fig. 10.18 the sensitivity and the control sensitivity are shown together with their bounds, which satisfy:

    ∀ω : |S(jω)| < γ |WS^{-1}(jω)| = γ / |We(jω)Vd(jω)|
    ∀ω : |R(jω)| < γ |WR^{-1}(jω)| = γ / |Wu(jω)Vd(jω)|                      (10.10)

Note that for low frequencies the sensitivity S is the limiting factor, while for high frequencies the control sensitivity R puts the constraints. At about 1 < ω < 2 rad/s they change role.

Figure 10.16: H∞ controller found by Robust Control Toolbox.
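The two bound curves of Eq. (10.10) can be evaluated directly from the filter definitions. The following plain-Python sketch (illustrative only; γ = 1.338 is the value quoted above for this design) shows the change of role: at low frequency only the S bound bites, at high frequency only the R bound does, and around 1-2 rad/s they are comparable:

```python
def Vd(s): return (0.01 * s + 1) / (s + 0.0001)                              # Eq. (10.7)
def We(s): return (0.02 * s + 2) / (s + 2)                                   # Eq. (10.8)
def Wu(s): return (100*s*s + 0.2*s + 0.0001) / (3*(s*s + 0.1*s + 30.2525))  # Eq. (10.9)

gamma = 1.338  # infinity-norm level found by the Robust Control toolbox

def bound_S(w):  # allowed |S(jw)| from Eq. (10.10)
    return gamma / abs(We(1j * w) * Vd(1j * w))

def bound_R(w):  # allowed |R(jw)| from Eq. (10.10)
    return gamma / abs(Wu(1j * w) * Vd(1j * w))

for w in (0.001, 1.5, 100.0):
    print(f"w={w}: |S| < {bound_S(w):.3g}, |R| < {bound_R(w):.3g}")
```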
10.4. ROBUST CONTROL TOOLBOX 153

Figure 10.17: Step responses for closed loop system with P = M, Ps or Pa and H∞ controller.
Figure 10.18: S and R and their bounds γ/|WS| resp. γ/|WR|.

The listing of the implementation is given as follows:

% This is the script RAKETROB.M
% Design of an Hinfinity control law using the 'Robust Control Toolbox'
% It is assumed that you ran *plantdef* before you invoked this script.
%
% First get the nominal plant in the internal format
[ag,bg,cg,dg]=tf2ss(numm,denm);
syg=mksys(ag,bg,cg,dg);
% Next we need to construct the augmented plant. To do so,
% the robust control toolbox allows to define *three weights* only.
% (This may be viewed as a severe handicap!) These weights will be
% called W1, W2, and W3 and represent the transfer function weightings
% on the controlled system sensitivity (S), control sensitivity (R)
% and complementary sensitivity (T), respectively. From our configuration
% we find that W1 = Vd*We, W2 = Vd*Wu and W3 is not in use. We specify
% this in state space form as follows.
[aw1,bw1,cw1,dw1]=tf2ss(conv(numVd,numWe),conv(denVd,denWe));
ssw1=mksys(aw1,bw1,cw1,dw1);
[aw2,bw2,cw2,dw2]=tf2ss(conv(numVd,numWu),conv(denVd,denWu));
ssw2=mksys(aw2,bw2,cw2,dw2);
ssw3=mksys([],[],[],[]);
% the augmented system is now generated with the command *augss*
% (sorry, it is the only command for this purpose in this toolbox.)
[tss]=augss(syg,ssw1,ssw2,ssw3);
% Controller synthesis in this toolbox is done with the routine
% *hinfopt*. Check out the help information on this routine and
% find out that we actually compute 1/gamma where gamma is
% the 'usual' gamma that we use throughout the lecture notes.
[gamma,ssf,sscl]=hinfopt(tss);
gamma=1/gamma;
disp('Optimal Hinfinity norm is approximately ');
disp(num2str(gamma));
% Next we evaluate the robust performance of this controller
[af,bf,cf,df]=branch(ssf);
% returns the controller in state space form
bode(af,bf,cf,df,1,w); pause;
[as,bs,cs,ds]=tf2ss(nums,dens);
% returns Ps in state space form
[aa,ba,ca,da]=tf2ss(numa,dena);
% returns Pa in state space form
[alm,blm,clm,dlm]=series(af,bf,cf,df,ag,bg,cg,dg);
[als,bls,cls,dls]=series(af,bf,cf,df,as,bs,cs,ds);
[ala,bla,cla,dla]=series(af,bf,cf,df,aa,ba,ca,da);
[acle,bcle,ccle,dcle]=feedback([],[],[],[],alm,blm,clm,dlm,1);
[acls,bcls,ccls,dcls]=feedback([],[],[],[],als,bls,cls,dls,1);
[acla,bcla,ccla,dcla]=feedback([],[],[],[],ala,bla,cla,dla,1);
step(acle,bcle,ccle,dcle,1); hold;
step(acls,bcls,ccls,dcls,1);
step(acla,bcla,ccla,dcla,1); hold off; pause;
[aclu,bclu,cclu,dclu]=feedback(af,bf,cf,df,ag,bg,cg,dg,1);
magcle=bode(acle,bcle,ccle,dcle,1,w);
magclu=bode(aclu,bclu,cclu,dclu,1,w);
boundS=gamma./magWS;
boundR=gamma./magWR;
loglog(w,magcle,w,boundS,w,magclu,w,boundR);
title('S, R and their bounds'); pause;
10.5 H∞ design in mutools

In the "µ-analysis and synthesis toolbox", simply indicated by "Mutools", we have plenty of freedom to define the structure of the augmented plant ourselves. The listing for the example under study, raketmut.m, is given as:

10.5. H∞ DESIGN IN MUTOOLS 155

% SCRIPT FILE FOR THE CALCULATION AND EVALUATION
% OF CONTROLLERS USING THE MUTOOLBOX
%
% This script assumes that you ran the files plantdef and weights
%
% REPRESENT SYSTEM BLOCKS IN INTERNAL FORMAT
Plant=nd2sys(numm,denm);
Vd=nd2sys(numVd,denVd);
We=nd2sys(numWe,denWe);
Wu=nd2sys(numWu,denWu);
% MAKE GENERALIZED PLANT USING *sysic*
systemnames='Plant Vd We Wu';
inputvar='[dw;u]';
outputvar='[We;Wu;-Plant-Vd]';
input_to_Plant='[u]';
input_to_Vd='[dw]';
input_to_We='[Plant+Vd]';
input_to_Wu='[u]';
sysoutname='G';
cleanupsysic='yes';
sysic;
% CALCULATE CONTROLLER
[Contr,fclp,gamma]=hinfsyn(G,1,1,0,10,1e-4);
% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL
systemnames='Contr Plant';
inputvar='[d]';
outputvar='[Plant+d;Contr]';
input_to_Contr='[-Plant-d]';
input_to_Plant='[Contr]';
sysoutname='realclp';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Ps
Plants=nd2sys(nums,dens);
systemnames='Contr Plants';
inputvar='[d]';
outputvar='[Plants+d;Contr]';
input_to_Contr='[-Plants-d]';
input_to_Plants='[Contr]';
sysoutname='realclps';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Pa
Planta=nd2sys(numa,dena);
systemnames='Contr Planta';
inputvar='[d]';
outputvar='[Planta+d;Contr]';
input_to_Contr='[-Planta-d]';
input_to_Planta='[Contr]';
sysoutname='realclpa';
cleanupsysic='yes';
sysic;
% CONTROLLER AND CLOSED LOOP EVALUATION
[ac,bc,cc,dc]=unpck(Contr);
bode(ac,bc,cc,dc); pause;
[acl,bcl,ccl,dcl]=unpck(realclp);
[acls,bcls,ccls,dcls]=unpck(realclps);
[acla,bcla,ccla,dcla]=unpck(realclpa);
step(acl,bcl,ccl,dcl); hold;
step(acls,bcls,ccls,dcls);
step(acla,bcla,ccla,dcla); hold off; pause;
[magcl,phasecl,w]=bode(acl,bcl,ccl,dcl,1,w);
boundS=gamma./magWS;
boundR=gamma./magWR;
loglog(w,magcl,w,boundS,w,boundR);
title('S, R and their bounds'); pause;

Running this script in Matlab yields γ = 1.337 and a controller that deviates somewhat from the robust control toolbox controller for high frequencies ω > 10³ rad/s. Furthermore, there are aberrations due to numerical anomalies. It shows that the controller is not unique, as it is just a controller in the set of controllers that obey ||G||∞ < γ with G stable. As long as γ is not exactly minimal, the set of controllers contains more than one controller. For MIMO plants, even for minimal γ, the solution for the controller is not unique. The step responses and sensitivities are virtually the same as before.

10.6 LMI toolbox

The "LMI toolbox" provides a very flexible way for synthesizing H∞ controllers. The toolbox can handle parameter varying systems and has a user friendly graphical interface for the design of weighting filters; as for the latter, we refer to the routine magshape. The toolbox has its own format for the internal representation of dynamical systems which, in general, is not compatible with the formats of other toolboxes (as usual). The calculation of H∞ optimal controllers proceeds as follows:

% Script file for the calculation of Hinfinity controllers
% in the LMI toolbox. This script assumes that you ran the files
% *plantdef* and *weights* before.
%
% FIRST REPRESENT SYSTEM BLOCKS IN INTERNAL FORMAT
Ptsys=ltisys('tf',numm,denm);
Vdsys=ltisys('tf',numVd,denVd);
Wesys=ltisys('tf',numWe,denWe);
Wusys=ltisys('tf',numWu,denWu);
% MAKE GENERALIZED PLANT
inputs='dw,u';
outputs='We,Wu';
Kin='K: Pt+Vd';
Ptin='Pt: u';
Vdin='Vd: dw';
Wein='We: Pt+Vd';
Wuin='Wu: u';
G=sconnect(inputs,outputs,Kin,Ptsys,Ptin,Vdsys,Vdin,Wesys,Wein,Wusys,Wuin);
% CALCULATE HINFTY CONTROLLER USING LMI SOLUTION
[gamma,Ksys]=hinflmi(G,[1 1],0,1e-4);
% MAKE CLOSED-LOOP INTERCONNECTION FOR MODEL
Ssys=sinv(sadd(1,smult(Ptsys,Ksys)));
Rsys=smult(Ksys,Ssys);
% EVALUATE CONTROLLED SYSTEM
splot(Ksys,'bo'); title('Bodeplot of controller'); pause;
splot(Ssys,'sv'); title('Maximal singular value of Sensitivity'); pause;
splot(Ssys,'st'); title('Step response of Sensitivity'); pause;
splot(Ssys,'ny'); title('Nyquist plot of Sensitivity'); pause;
splot(Rsys,'sv'); title('Maximal sv of Control Sensitivity'); pause;
splot(Rsys,'st'); title('Step response of Control Sensitivity'); pause;
splot(Rsys,'ny'); title('Nyquist plot of Control Sensitivity'); pause;

10.7 µ design in mutools

10.7. µ DESIGN IN MUTOOLS 157

In µ-design we pretend to model the variability of the flexible mode very tightly by means of specific parameters, instead of the rough modelling by an additive perturbation bounded by Wu Vd. In that way we hope to obtain a less conservative controller. We suppose that the poles and zeros of the flexible mode shift along a straight line in the complex plane between the extreme positions of Ps and Pa, as illustrated in Fig. 10.19.

Figure 10.19: Variability of elastic mode.
Algebraically this variation can then be represented by one parameter δ according to:

    ∀ δ ∈ R, -1 ≤ δ ≤ 1:
    poles:  -.055 - .005δ ± j(5.5 + .5δ)
    zeros:  -.055 + .005δ ± j(5.5 - .5δ)                                     (10.11)

so that the total transfer of the plant, including the perturbation, is given by:

    Pt(s) = -8 (s + .125)/((s - 1)(s + 1)) *
            K0 (s + .055 - .005δ + j(5.5 - .5δ))(s + .055 - .005δ - j(5.5 - .5δ)) /
               ((s + .055 + .005δ + j(5.5 + .5δ))(s + .055 + .005δ - j(5.5 + .5δ)))      (10.12)

where the extra constant K0 is determined by Pt(0) = 1: the DC-gain is kept at 1. If we define the nominal position of the poles and zeros by a0 = .055 and b0 = 5.5, rearrangement yields:

    Pt(s) = -8 (s + .125)/((s - 1)(s + 1)) * {1 + ∆mult}                     (10.13)

with ∆mult = k0 n(s)/d(s) - 1, where:

    n(s) = s² + (2a0 - .01δ)s + a0² + b0² - δ(.01a0 + b0) + .250025δ²        (10.14)
    d(s) = s² + (2a0 + .01δ)s + a0² + b0² + δ(.01a0 + b0) + .250025δ²        (10.15)
    k0 = (a0² + b0² + δ(.01a0 + b0) + .250025δ²) /
         (a0² + b0² - δ(.01a0 + b0) + .250025δ²)                             (10.16)

Note that for δ = 0 we simply have F = 1 + ∆mult = 1. The factor F = 1 + ∆mult can easily be brought into a state space description with {A, B, C, D}:

    A = A1 + dA = [0, 1; -a0² - b0², -2a0] + dA
    B = B1 + dB = [0; 1] + 0
    C = C1 + dC = [0, 0] + dC
    D = D1 + dD = 1 + 0                                                      (10.17)

We have for the dynamic transfer F(s):

    sx = A1 x + B1 u1 + dA(s) x + dB(s) u1,     dB(s) = 0
    y1 = C1 x + D1 u1 + dC(s) x + dD(s) u1,     dD(s) = 0                    (10.18)

With β = a0² + b0² = 30.25302, γ = .01a0 + b0 = 5.50055 and α = 2γ = 11.0011 we rewrite:

    dA = [0, 0; -γδ - .250025δ², -.01δ]
    dC = ((β + γδ + .250025δ²)/(β - γδ + .250025δ²)) [-αδ, -.02δ]            (10.19)

If we let δ = δ(s) with |δ(jω)| ≤ 1, we have given the parameter δ much more freedom, but the whole description then fits with the µ-analysis. Next we can define 5 extra input lines in a vector u2 and, correspondingly, 5 extra output lines in a vector y2, that are linked in a closed loop via u2 = ∆y2 with:
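The pole path of Eq. (10.11) is easy to trace numerically. A plain-Python sketch (illustrative; the helper name is hypothetical) confirming that δ = 0 gives the nominal mode and δ = ±1 the extreme positions of Fig. 10.19:

```python
# pole pair of the perturbed flexible mode as a function of delta, Eq. (10.11)
def mode_poles(delta):
    re = -0.055 - 0.005 * delta
    im = 5.5 + 0.5 * delta
    return complex(re, im), complex(re, -im)

for d in (-1.0, 0.0, 1.0):
    p, _ = mode_poles(d)
    print(f"delta={d:+.0f}: pole at {p}")
```

At δ = -1 the pole sits at -.05 + 5j (Ps side) and at δ = +1 at -.06 + 6j (Pa side); the squared nominal modulus reproduces β = a0² + b0² ≈ 30.253 used in the state space description above.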
    ∆ = diag(δ, δ, δ, δ, δ) = δ I5                                           (10.20)

and let F be represented by:

    [ sx ]   [ A   B1  B2  ] [ x  ]
    [ y1 ] = [ C1  D11 D12 ] [ u1 ]                                          (10.21)
    [ y2 ]   [ C2  D21 D22 ] [ u2 ]

so that we have obtained the structure according to Fig. 10.20.

Figure 10.20: Dynamic structure of multiplicative error.

The two representations correspond according to the linear fractional transformation (LFT):

    dA = B2 (I - ∆D22)^{-1} ∆ C2                                             (10.22)
    dB = B2 (I - ∆D22)^{-1} ∆ D21                                            (10.23)
    dC = D12 (I - ∆D22)^{-1} ∆ C2                                            (10.24)
    dD = D12 (I - ∆D22)^{-1} ∆ D21                                           (10.25)

and with some patience one can derive the realization

    [ A   B1  B2  ]
    [ C1  D11 D12 ]                                                          (10.26)
    [ C2  D21 D22 ]

whose entries are built from the constants α, β, γ and .250025 only; they reappear in the script below as alpha, beta, gamma and epsilon, e.g. in the ratios gamma/beta, epsilon/beta and 2*alpha*gamma/beta.

This multiplicative error structure can be embedded in the augmented plant as sketched in Fig. 10.21.

Figure 10.21: Augmented plant for µ-setup.

Note that we have skipped the weighted controller output u~. We had no real bounds on the actuator ranges, and we actually determined Wu in the previous H∞ designs such that the additive model perturbations are covered. In the µ-design under study, the model perturbations are represented by the ∆-block, so that in principle we can skip Wu. If we do so, unfortunately, the direct feedthrough D12 of the augmented plant has insufficient rank. We have to penalise the input u, and this is accomplished by the extra gain block with value .0001, which weights u very lightly via the output error e. It is just sufficient to avoid numerical anomalies without substantially influencing the intended weights. Furthermore, the µ-toolbox was not yet ready to process uncertainty blocks in the form of δI, so that we have to proceed with 5 independent uncertainty parameters δi and thus:

    ∆ = diag(δ1, δ2, δ3, δ4, δ5)                                             (10.27)
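The LFT correspondence (10.22)-(10.25) amounts to nothing more than eliminating u2 from the loop u2 = ∆y2. The following plain-Python check (pure standard library; the numerical matrices are illustrative, not the realization (10.26)) verifies Eq. (10.22) by closing the loop by hand:

```python
# minimal matrix helpers (lists of lists, no external libraries)
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def inv2(X):  # inverse of a 2x2 matrix
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / det, -X[0][1] / det], [-X[1][0] / det, X[0][0] / det]]

I2 = [[1.0, 0.0], [0.0, 1.0]]

# illustrative data: 2 states, 2 uncertainty channels, Delta = delta*I
A1    = [[0.0, 1.0], [-30.25, -0.11]]
B2    = [[0.0, 0.0], [-5.5, -0.01]]
C2    = [[1.0, 0.0], [0.0, 1.0]]
D22   = [[0.0, 0.2], [0.0, 0.0]]
Delta = [[0.5, 0.0], [0.0, 0.5]]

# dA via the LFT formula (10.22): dA = B2 (I - Delta D22)^{-1} Delta C2
M  = inv2(add(I2, [[-v for v in row] for row in mul(Delta, D22)]))
dA = mul(B2, mul(M, mul(Delta, C2)))

# cross-check: close u2 = Delta y2 by hand for some state x (u1 = 0)
x   = [[0.3], [-1.2]]
u2  = mul(M, mul(Delta, mul(C2, x)))   # u2 = (I - Delta D22)^{-1} Delta C2 x
xdot_loop = add(mul(A1, x), mul(B2, u2))
xdot_lft  = mul(add(A1, dA), x)
err = max(abs(xdot_loop[i][0] - xdot_lft[i][0]) for i in range(2))
print(err)
```

With D22 = 0 the formula collapses to dA = B2 ∆ C2, i.e. the perturbation enters affinely; the (I - ∆D22)^{-1} factor is exactly what accounts for the δ² terms seen in dA of Eq. (10.19).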
As a consequence the design will be more conservative, but the controller will become more robust. The commands for solving this design in the µ-toolbox are given in the next script:

% Let's make the system DMULT first
alpha=11.0011;
beta=30.25302;
gamma=5.50055;
epsilon=.250025;
% ADMULT, BDMULT, CDMULT and DDMULT contain the realization of
% Eq. (10.26); their entries are built from alpha, beta, gamma and
% epsilon (e.g. gamma/beta, epsilon/beta, 2*alpha*gamma/beta).
mat=[ADMULT BDMULT; CDMULT DDMULT];
DMULT=pss2sys(mat,2);
blk=[1 1;1 1;1 1;1 1;1 1];
blkp=[1 1;1 1;1 1;1 1;1 1;1 1];

% MAKE GENERALIZED MU-PLANT
systemnames='Plant Vd We DMULT';
inputvar='[u2(5);dw;x]';
outputvar='[DMULT(2:6);We+.0001*x;-DMULT(1)-Vd]';
input_to_Plant='[x]';
input_to_Vd='[dw]';
input_to_We='[DMULT(1)+Vd]';
input_to_DMULT='[Plant;u2(1:5)]';
sysoutname='GMU';
cleanupsysic='yes';
sysic;

% CALCULATE HINF CONTROLLER
[k1,clp1]=hinfsyn(GMU,1,1,0,100,1e-4);
spoles(k1)
omega=logspace(-2,2,100);
k1_g=frsp(k1,omega);
vplot('bode',k1_g); pause;
clp1_g=frsp(clp1,omega);
[bnds1,dvec1,sens1,pvec1]=mu(clp1_g,blkp);
vplot('liv,m',vnorm(clp1_g),bnds1); pause;

% FIRST mu-CONTROLLER
[dsysL1,dsysR1]=musynfit('first',dvec1,sens1,blkp,1,1);
mu_inc1=mmult(dsysL1,GMU,minv(dsysR1));
[k2,clp2]=hinfsyn(mu_inc1,1,1,0,100,1e-4);
clp2_g=frsp(clp2,omega);
[bnds2,dvec2,sens2,pvec2]=mu(clp2_g,blkp);
% PROPERTIES OF CONTROLLER
spoles(k2)
k2_g=frsp(k2,omega);
vplot('bode',k2_g); pause;
vplot('liv,m',vnorm(clp2_g),bnds2); pause;

% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL
systemnames='k2 Plant';
inputvar='[d]';
outputvar='[Plant+d;k2]';
input_to_k2='[-Plant-d]';
input_to_Plant='[k2]';
sysoutname='realclp';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Ps
Plants=nd2sys(nums,dens);
systemnames='k2 Plants';
inputvar='[d]';
outputvar='[Plants+d;k2]';
input_to_k2='[-Plants-d]';
input_to_Plants='[k2]';
sysoutname='realclps';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Pa
Planta=nd2sys(numa,dena);
systemnames='k2 Planta';
inputvar='[d]';
outputvar='[Planta+d;k2]';
input_to_k2='[-Planta-d]';
input_to_Planta='[k2]';
sysoutname='realclpa';
cleanupsysic='yes';
sysic;
% Controller and closed loop evaluation
[ac,bc,cc,dc]=unpck(k2);
bode(ac,bc,cc,dc); pause;
[acl,bcl,ccl,dcl]=unpck(realclp);
[acls,bcls,ccls,dcls]=unpck(realclps);
[acla,bcla,ccla,dcla]=unpck(realclpa);
step(acl,bcl,ccl,dcl); hold;
step(acls,bcls,ccls,dcls);
step(acla,bcla,ccla,dcla); hold off; pause;

% SECOND mu-CONTROLLER
[dsysL2,dsysR2]=musynfit(dsysL1,dvec2,sens2,blkp,1,1);
mu_inc2=mmult(dsysL2,GMU,minv(dsysR2));
[k3,clp3]=hinfsyn(mu_inc2,1,1,0,100,1e-4);
clp3_g=frsp(clp3,omega);
[bnds3,dvec3,sens3,pvec3]=mu(clp3_g,blkp);
vplot('liv,m',vnorm(clp3_g),bnds3); pause;
spoles(k3)
k3_g=frsp(k3,omega);
vplot('bode',k3_g); pause;

% MAKE CLOSED LOOP INTERCONNECTION FOR MODEL
systemnames='k3 Plant';
inputvar='[d]';
outputvar='[Plant+d;k3]';
input_to_k3='[-Plant-d]';
input_to_Plant='[k3]';
sysoutname='realclp';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Ps
systemnames='k3 Plants';
inputvar='[d]';
outputvar='[Plants+d;k3]';
input_to_k3='[-Plants-d]';
input_to_Plants='[k3]';
sysoutname='realclps';
cleanupsysic='yes';
sysic;
% MAKE CLOSED LOOP INTERCONNECTION FOR Pa
systemnames='k3 Planta';
inputvar='[d]';
outputvar='[Planta+d;k3]';
input_to_k3='[-Planta-d]';
input_to_Planta='[k3]';
sysoutname='realclpa';
cleanupsysic='yes';
sysic;
% Controller and closed loop evaluation
[ac,bc,cc,dc]=unpck(k3);
bode(ac,bc,cc,dc); pause;
[acl,bcl,ccl,dcl]=unpck(realclp);
[acls,bcls,ccls,dcls]=unpck(realclps);
[acla,bcla,ccla,dcla]=unpck(realclpa);
step(acl,bcl,ccl,dcl); hold;
step(acls,bcls,ccls,dcls);
step(acla,bcla,ccla,dcla); hold off; pause;
First the H∞ controller for the augmented plant is computed. Next one is invited to choose the respective orders of the filters that approximate the D-scalings for a number of frequencies. If one chooses a zero order, the first approximate µ = γ = 19.0406 is much too high; a second iteration with zero order approximate filters even increases µ = γ to 29.9184 and gives a Pa that just oscillates in feedback. With first order filters the first iteration yields γ = 33.8217, again much too high, but the resulting closed loops are all stable; a second iteration with first order filters increases the µ = γ to 21.9840 and yields a Pa unstable at closed loop. A second try with second order filters in the first iteration brings µ = γ down to 5.4538 but still leads to an unstable Pa; in the second iteration with second order filters the program fails altogether. Stimulated nevertheless by the last attempt, we increase the first iteration order to 3, which produces a µ = γ = 4.4670, but Pa remains unstable. Going still higher, we take both iterations with 4th order filters, and the µ = γ takes the respective values 4.4876 and 4.2902. In the first iteration the Pa still shows an ill damped oscillation, but the second iteration results in very stable closed loops for all P: M, Ps and Pa. The cost is a very complicated controller of order 4 + 5*4 + 5*4 = 44!
1) 165 . also for unstable systems. The lower. 11.Chapter 11 Basic solution of the general problem In this chapter we will present the principle of the solution of the general problem. can be derived from the blockscheme in Fig. Figure 11. as long as they cause stable poles from: sI − A − B2 F  = 0 sI − A − HC2  = 0 (11. The fundamental solution discussed here is a generalisation of the previously discussed “internal model control” for stable systems.1: Solution principle The upper major block represents the augmented plant. For reasons of clarity.1. we have skipped the direct feedthrough block D. The set of all stabilising controllers. major block can easily be recognised as a familiar LQGcontrol where F is the state feedback control block and H functions as a Kalman gain block. The computational solution follows a somewhat diﬀerent direction (nowadays) and will be presented in the next chapter 13. It oﬀers all the insight into the problem that we need. Neither F nor H have to be optimal yet.
The really new component is the block transfer Q(s) as an extra feedback operating on the output error e. Originally, we had as optimisation criterion:

    min_{K stabilising} ‖G11 + G12 K (I − G22 K)^{-1} G21‖∞        (11.2)

"Around" the stabilising controller Knom the whole set of stabilising controllers can be built, and certainly the ultimate controller K can be expressed in the "parameter" Q. This expression, which we will not explicitly give here for reasons of compactness, is called the Youla parametrisation after its inventor. The set is thus clustered on the nominal controller Knom: if Q = 0, we just have the stabilising LQG-controller that we will call here the nominal controller Knom.

For analysing the effect of the extra feedback by Q, we can combine the augmented plant and the nominal controller in a block T as illustrated in Fig. 11.2.

Figure 11.2: Combining Knom and G into T.

In terms of the blocks Tij we get a similar criterion, which highly simplifies into the next affine expression:

    min_{Q stabilising} ‖T11 + T12 Q T21‖∞        (11.3)

because T22 appears to be zero! T22 is actually the transfer between the input v and the output error e of Fig. 11.2. To understand that this transfer is zero, we have to realise that the augmented plant is completely and exactly known: it incorporates the nominal plant model P and known filters. This means that the model that is used in the nominal controller, defined by F and H and incorporated in block T, fits exactly. (Although the real process may deviate and cause a model error, for all these effects one should have taken care by appropriately chosen filters that guard the robustness; the augmented plant itself remains fully and exactly known.) Because Knom stabilises the augmented plant for Q = 0, we can be sure that all transfers Tij will be stable; moreover, the output error e, only excited by v if w = 0, must be zero! And the corresponding transfer is precisely T22. From the viewpoint of Q: it sees no transfer between v and e.

If T22 = 0, the consequent affine expression in the controller Q can be interpreted very easily as a simple forward tracking problem, as illustrated in Fig. 11.3. But then the simple forward tracking problem of Fig. 11.3 can only remain stable if Q itself is a stable transfer. As a consequence, we now have the set of all stabilising controllers by just choosing Q stable.

This is the moment to step back for a moment and memorise the internal model control, where we were also dealing with a comparable transfer Q. Once more, Fig. 11.4 pictures that structure with comparable signals v and e.
Figure 11.3: Resulting forward tracking problem.
Figure 11.4: Internal model control structure.
Indeed, for P = Pt and the other external input d (to be compared with w) being zero,
the transfer seen by Q between v and e is zero. Furthermore, we also obtained, as a result
of this T22 = 0, aﬃne expressions for the other transfers Tij , being the bare sensitivity
and complementary sensitivity by then. So the internal model control can be seen as a
particular application of a much more general scheme that we study now. In fact Fig. 11.1
turns into the internal model of Fig. 11.4 by choosing F = 0 and H = 0, which is allowed,
because P and thus G is stable.
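The equivalence just described can be checked numerically. The following sketch (our own; the plant P(s) = 1/(s+1) and Q(s) = 2/(s+2) are arbitrary stable examples, not taken from the text) verifies that the classical controller equivalent to the IMC structure of Fig. 11.4, K = Q/(1 − PQ), indeed gives a closed loop r → y equal to PQ, i.e. affine in Q:

```python
# Sanity check of the internal model control claim: for a stable plant P and
# any stable Q, the controller K = Q / (1 - P*Q) (the classical-feedback
# equivalent of the IMC structure with exact model, P = Pt) yields the
# closed-loop transfer r -> y equal to P*Q, so the closed loop is affine in Q.

def P(s):
    return 1.0 / (s + 1.0)      # example stable plant (our choice)

def Q(s):
    return 2.0 / (s + 2.0)      # example stable Youla parameter (our choice)

def K(s):
    # Controller equivalent to the IMC structure of Fig. 11.4.
    return Q(s) / (1.0 - P(s) * Q(s))

for omega in (0.1, 1.0, 10.0):
    s = 1j * omega
    closed_loop = P(s) * K(s) / (1.0 + P(s) * K(s))  # classical feedback loop
    affine = P(s) * Q(s)                             # predicted affine expression
    assert abs(closed_loop - affine) < 1e-12
print("closed loop equals P*Q at all tested frequencies")
```

The cancellation is exact algebraically; the assertion only absorbs floating-point rounding.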
The remaining problem is:
    min_{Q stable} ‖T11 + T12 Q T21‖∞        (11.4)
Note that the phrase "Q stabilising" is now equivalent with "Q stable"! Furthermore, we may as well take other norms, provided that the respective transfers live in the particular normed space. We could e.g. translate the LQG-problem into an augmented plant and then require to minimise the 2-norm instead of the ∞-norm. As Tij and Q are necessarily stable, they live in H2 as well, so that we can also minimise the 2-norm for reasons of comparison. (If there is a direct feedthrough block D involved, the 2-norm is not applicable, because a constant transfer is not allowed in L2.)
(The remainder of this chapter might pose some problems to you if you are not well versed in functional analysis. Then just try to make the best of it; it is only one page.)
It appears that we can now use the freedom left in the choices for F and H, and it can be proved that F and H can be chosen (for square transfers) such that:
    T12* T12 = I        (11.5)
    T21 T21* = I        (11.6)
In mathematical terminology these matrices are therefore called inner, while engineers prefer to denote them as all pass transfers. These transfers all possess poles in the left half plane and corresponding zeros in the right half plane, exactly symmetric with respect to the imaginary axis. If the norm is restricted to the imaginary axis, which is the case for the ∞-norm and the 2-norm, we may thus freely multiply by the conjugated transpose of these inners and obtain:
    min_{Q stable} ‖T11 + T12 Q T21‖ = min_{Q stable} ‖T12* T11 T21* + T12* T12 Q T21 T21*‖ =        (11.7)

                                 def
                                  =  min_{Q stable} ‖L + Q‖        (11.8)

where by definition L := T12* T11 T21*.
By the conjugation of the inners into Tij*, we have effectively turned zeros into poles and vice versa, thereby causing all poles of L to lie in the right half plane. For the norm along the imaginary axis there is no objection, but more correctly we have to say now that we deal with the L∞ and L2 spaces and norms. As outlined in chapter 5, the (Lebesgue) space L∞ combines the familiar (Hardy) space H∞ of stable transfers and the complementary H∞− space, containing the antistable or anticausal transfers that have all their poles in the right half plane. Transfer L is such a transfer. Similarly, the space L2 consists of both H2 and the complementary space H2⊥ of anticausal transfers. The question then arises how to approximate an anticausal transfer L by a stable, causal Q in the complementary space, where the approximation is measured on the imaginary axis by the proper norm. The easiest solution is offered in the L2 space, because this is a Hilbert space and thus an inner product space, which implies that H2 and H2⊥ are perpendicular (which explains the symbol ⊥). Consequently, Q is perpendicular to L and can thus never "represent" a component of L in the used norm; it will only contribute to an increase of the norm unless it is taken zero. So in the 2-norm the solution is obviously: Q = 0.
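The orthogonality argument above can be made concrete in a discrete-time, finite-support analogue (our own illustration, not from the text): an anticausal impulse response l (support n < 0) and a causal one q (support n ≥ 0) have disjoint supports, so they are orthogonal in ℓ2 and any nonzero causal q only increases the norm.

```python
# Discrete-time analogue of the L2 argument: l anticausal (support n < 0),
# q causal (support n >= 0). Disjoint supports give orthogonality, hence
# ||l + q||^2 = ||l||^2 + ||q||^2, minimized over causal q by q = 0.

l = {-3: 0.5, -2: -1.0, -1: 2.0}   # plays the role of the anticausal L
q = {0: 0.7, 1: -0.3, 2: 0.1}      # an arbitrary causal candidate Q

def norm2(f):
    # squared l2 norm of a finitely supported sequence
    return sum(v * v for v in f.values())

total = dict(l)
total.update(q)                    # supports are disjoint, so this is l + q

assert abs(norm2(total) - (norm2(l) + norm2(q))) < 1e-12   # Pythagoras
assert norm2(total) >= norm2(l)    # any nonzero q only increases the norm
print("best causal 2-norm approximation of l is q = 0")
```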
Unfortunately, for the space L∞, in which we are actually interested, the solution is not so trivial, because L∞ is a Banach space and not an inner product space. This famous problem:

    L ∈ H∞− :  min_{Q ∈ H∞} ‖L + Q‖∞        (11.9)

has been named the Nehari problem after the first scientist who studied it. It took considerable time and energy to find solutions, one of which, an elegant one, is offered to you in chapter 8. But maybe you already got some taste here of the reasons why it took so long to formalise classical control along these lines. As final remarks we can add:
• Generically, min_{Q∈H∞} σ̄(L + Q) is all pass, i.e. constant for all ω ∈ R. T12 and T21 were already taken all pass, but also the total transfer from w to z, viz. T11 + T12 Q T21, will be all pass for the SISO case, due to the waterbed effect.
• For MIMO systems the solution is not unique, as we just consider the maximum
singular value. The freedom in the remaining singular values can be used to optimise
extra desiderata.
11.1 Exercises
7.1: Consider the following feedback system:
• Plant: y = P (u + d)
• Controller: u = K(r − y)
• Errors: e1 = W1 u and e2 = W2 (r − y)
It is known that ‖r‖2 ≤ 1 and ‖d‖2 ≤ 1, and it is desired to design K so as to minimise:

    ‖ [e1; e2] ‖2
a) Show that this can be formulated as a standard H∞ problem and compute G.
b) If P is stable, redeﬁne the problem aﬃne in Q.
7.2: Take the first blockscheme of the exercise of chapter 6. To facilitate the computations we just consider a SISO plant and DC signals (i.e. only ω = 0!), so that we avoid complex computations due to frequency dependence. Given that ‖ξ‖2 < 1 and P = 1/2, it is asked to minimise ‖ỹ‖2 under the condition ‖x‖2 < 1, while ξ is the only input.
a) Solve this problem by means of a mixed sensitivity problem, iteratively adapting Wy, renamed as β. Hint: first define V and Wx. Sketch the solution in terms of the controller C and compute the solution directly as a function of β.

b) Solve the problem exactly: minimise ‖ỹ‖2 while ‖x‖2 < 1. Why is there a difference with the solution sub a)? Hint: for this question it is easier to define the problem affine in Q.
Chapter 12

Solution to the general H∞ control problem

12.1 Introduction

In previous chapters we have been mainly concerned with properties of control configurations in which a controller is designed so as to minimize the H∞ norm of a closed loop transfer function. So far, however, we did not address the question how such a controller is actually computed, i.e., how to compute feedback controllers which stabilize a closed loop system and at the same time minimize the H∞ norm of a closed loop transfer function. This has been a problem of main concern in the early 80s. An amazing number of scientific papers have appeared (and still appear!) in this area of research, and various mathematical techniques have been developed to compute 'H∞ optimal controllers'.

In this chapter we treat a solution to a most general version of the H∞ optimal control problem which is now generally accepted to be the fastest, simplest and computationally most reliable and efficient way to synthesize H∞ optimal controllers. The solution which we present here is the result of almost a decennium of impressive research effort in the area of H∞ optimal control and has received widespread attention in the control community. It is popularly referred to as the 'DGKF solution', the acronym standing for Doyle, Glover, Khargonekar and Francis, the four authors of a famous and prize winning paper in the IEEE Transactions on Automatic Control¹.

From a mathematical and system theoretic point of view, this so-called 'state space solution' to the H∞ control problem is extremely elegant and worth a thorough treatment. However, for practical applications it is sufficient to know the precise conditions under which the state space solution 'works', so as to have a computationally reliable way to obtain and to design H∞ optimal controllers. The solution presented in this chapter admits a relatively straightforward implementation in a software environment like Matlab. The Robust Control Toolbox has various routines for the synthesis of H∞ optimal controllers, and we will devote a section in this chapter to how to use these routines.

This chapter is organized as follows. In the next section we first treat the problem of how to compute the H2 norm and the H∞ norm of a transfer function; these results will be used in subsequent sections. We will make a comparison to the H2 optimal control algorithms, which we briefly describe in a separate section and which you are probably familiar with. The main results concerning H∞ controller synthesis are presented in Theorem 12.7.

¹ "State Space Solutions to the Standard H2 and H∞ Control Problems", by J. Doyle, K. Glover, P. Khargonekar and B. Francis, IEEE Transactions on Automatic Control, August 1989.
12.2 The computation of system norms

We start this chapter by considering the problem of characterizing the H2 and H∞ norms of a given (multivariable) transfer function H(s) in terms of a state space description of the system. We will consider the continuous time case only, for the discrete time versions of the results below are less insightful and more involved.

Let H(s) be a stable transfer function of dimension p × m and suppose that H(s) = C(Is − A)^{-1}B + D, where A, B, C and D are real matrices defining the state space equations

    x˙(t) = Ax(t) + Bw(t)
    z(t) = Cx(t) + Dw(t).        (12.1)

Since H(s) is stable, all eigenvalues of A are assumed to be in the left half complex plane. We suppose that the state space has dimension n and, to avoid redundancy, we moreover assume that (12.1) defines a minimal state space representation of H(s), in the sense that n is as small as possible among all state space representations of H(s). Let us recall the definitions of the H2 and H∞ norms of H(s):

    ‖H(s)‖_2^2 := 1/(2π) ∫_{−∞}^{∞} trace(H(jω) H*(jω)) dω
    ‖H(s)‖_∞ := sup_{ω∈R} σ̄(H(jω))

where σ̄ denotes the maximal singular value.

12.2.1 The computation of the H2 norm

We have seen in Chapter 5 that the (squared) H2 norm of a system has the simple interpretation as the sum of the (squared) L2 norms of the impulse responses which we can extract from (12.1). Since H(s) has m inputs, we have m of such responses. If we assume that D = 0 in (12.1) (otherwise the H2 norm is infinite, so H ∉ H2) and if bi denotes the i-th column of B, then the i-th impulse response is given by zi(t) = C e^{At} bi, and for i = 1, …, m their L2 norms satisfy

    ‖zi‖_2^2 = ∫_0^∞ bi^T e^{A^T t} C^T C e^{At} bi dt = bi^T ( ∫_0^∞ e^{A^T t} C^T C e^{At} dt ) bi = bi^T M bi,

where we defined

    M := ∫_0^∞ e^{A^T t} C^T C e^{At} dt,

a square symmetric matrix of dimension n × n which is called the observability gramian of the system (12.1). Then

    ‖H(s)‖_2^2 = trace(B^T M B) = Σ_{i=1}^m bi^T M bi.

Thus the H2 norm of H(s) is given by a trace formula involving the state space matrices A, B, C, from which the observability gramian M is computed. Since x^T M x ≥ 0 for all x ∈ R^n, we have that M is nonnegative definite². Since we assumed that the state space parameters (A, B, C, D) define a minimal representation of H(s), the pair (A, C) is observable³. The observability gramian M completely determines the H2 norm of the system H(s), as is seen from the following characterization.

Theorem 12.1 Let H(s) be a stable transfer function of the system described by the state space equations (12.1). Suppose that (A, B, C, D) is a minimal representation of H(s). Then

1. ‖H(s)‖_2 < ∞ if and only if D = 0;

2. the observability gramian M satisfies the equation

       M A + A^T M + C^T C = 0        (12.2)

   which is called a Lyapunov equation in the unknown M, and the matrix M is the only symmetric nonnegative definite solution of (12.2).

The main issue here is that Theorem 12.1 provides an algebraic characterization of the H2 norm which proves extremely useful for computational purposes: M can be computed from an algebraic equation, the Lyapunov equation (12.2), which is a much simpler task than solving the infinite integral expression for M.

There is a 'dual' version of Theorem 12.1, which is obtained from the fact that ‖H(s)‖_2 = ‖H*(s)‖_2. We state it for completeness.

Theorem 12.2 Under the same conditions as in Theorem 12.1,

    ‖H(s)‖_2^2 = trace(C W C^T)

where W is the unique symmetric nonnegative definite solution of the Lyapunov equation

    A W + W A^T + B B^T = 0.        (12.3)

The square symmetric matrix W is called the controllability gramian of the system (12.1). Theorem 12.2 therefore states that the H2 norm of H(s) can also be obtained by computing the controllability gramian associated with the system (12.1).

² which is not the same as saying that all elements of M are nonnegative!
³ that is, C e^{At} x0 = 0 for all t ≥ 0 only if the initial condition x0 = 0.
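The Lyapunov route to the H2 norm is easy to carry out numerically. The following sketch (ours, using numpy only; the example H(s) = 1/(s+1) with A = −1, B = C = 1, D = 0 is our choice, not from the text) solves M A + AᵀM + CᵀC = 0 by vectorisation and compares trace(BᵀMB) with the analytic squared H2 norm, which for this example is (1/2π)∫ 1/(1+ω²) dω = 1/2:

```python
# Compute the squared H2 norm via the observability gramian: solve the
# Lyapunov equation  A^T M + M A = -C^T C  by Kronecker vectorisation,
# then ||H||_2^2 = trace(B^T M B).
import numpy as np

def h2_norm_squared(A, B, C):
    n = A.shape[0]
    # vec(A^T M + M A) = (I kron A^T + A^T kron I) vec(M)  (column-major vec)
    lhs = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    rhs = (-C.T @ C).reshape(n * n, order="F")
    M = np.linalg.solve(lhs, rhs).reshape(n, n, order="F")
    return np.trace(B.T @ M @ B)

# Example H(s) = 1/(s+1): squared H2 norm is 1/2 (our test case).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
assert abs(h2_norm_squared(A, B, C) - 0.5) < 1e-12
print("H2 norm of 1/(s+1):", float(np.sqrt(h2_norm_squared(A, B, C))))
```

For the scalar example the Lyapunov equation reduces to −2M + 1 = 0, so M = 1/2, matching the integral.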
12.2.2 The computation of the H∞ norm

The computation of the H∞ norm of a transfer function H(s) is slightly more involved. We will again present an algebraic algorithm, but instead of finding an exact expression for ‖H(s)‖_∞, we will set up a test to determine whether or not

    ‖H(s)‖_∞ < γ        (12.4)

for some real number γ ≥ 0. By performing this test for various values of γ we may get arbitrarily close to the norm ‖H(s)‖_∞. We will briefly outline the main ideas behind this test.

Recall from Chapter 5 that the H∞ norm is equal to the L2 induced norm of the transfer function, i.e.

    ‖H(s)‖_∞ = sup_{w∈L2} ‖Hw‖_2 / ‖w‖_2.

This means that ‖H(s)‖_∞ ≤ γ if and only if

    ‖Hw‖_2^2 − γ² ‖w‖_2^2 = ‖z‖_2^2 − γ² ‖w‖_2^2 ≤ 0        (12.5)

for all w ∈ L2 (indeed, dividing (12.5) by ‖w‖_2^2 gives you the equivalence). Here, z = Hw is the output of the system (12.1) when the input w is applied and when the initial state x(0) is set to 0.

Motivated by the middle expression of (12.5), we introduce for arbitrary initial conditions x(0) = x0 and any w ∈ L2 the criterion

    J(x0, w) := ‖z‖_2^2 − γ² ‖w‖_2^2 = ∫_0^∞ ( ‖z(t)‖² − γ² ‖w(t)‖² ) dt        (12.6)

where z is the output of the system (12.1) when the input w is applied and the initial state x(0) is taken to be x0. For fixed initial condition x0 we will be interested in maximizing this criterion over all possible inputs w. Precisely, for fixed x0, we look for an optimal input w* ∈ L2 such that

    J(x0, w) ≤ J(x0, w*)        (12.7)

for all w ∈ L2. We will moreover require that the state trajectory x(t) generated by this so-called worst case input is stable, in the sense that the solution x(t) of the state equation x˙ = Ax + Bw* with x(0) = x0 satisfies lim_{t→∞} x(t) = 0.

The solution to this problem is simpler than it looks. Let us take γ > 0 such that γ²I − D^T D is positive definite (and thus invertible) and introduce the following Riccati equation:

    A^T K + K A + (B^T K − D^T C)^T [γ²I − D^T D]^{-1} (B^T K − D^T C) + C^T C = 0.        (12.8)

It is then a straightforward exercise in linear algebra⁴ to verify that for any real symmetric solution K of (12.8) there holds

    J(x0, w) = x0^T K x0 − ‖ w + [γ²I − D^T D]^{-1} (B^T K − D^T C) x ‖²_{(γ²I−D^T D)}        (12.9)

for all w ∈ L2 which drive the state trajectory to zero for t → ∞. Here, ‖f‖²_Q with Q = Q^T > 0 denotes the 'weighted' L2 norm

    ‖f‖²_Q := ∫_0^∞ f^T(t) Q f(t) dt.        (12.10)

(If you are interested: work out the derivative d/dt x^T(t) K x(t) using (12.1), substitute (12.8) and integrate over [0, ∞) to obtain the desired expression (12.9).)

Now, have a look at the expression (12.9). It shows that for all w ∈ L2 (for which lim_{t→∞} x(t) = 0) the criterion J(x0, w) is at most equal to x0^T K x0, and equality is obtained by substituting for w the state feedback

    w*(t) = −[γ²I − D^T D]^{-1} (B^T K − D^T C) x(t)        (12.11)

which then maximizes J(x0, w) over all w ∈ L2, provided the feedback (12.11) stabilizes the system (12.1). This worst case input achieves the inequality (12.7): J(x0, w) ≤ J(x0, w*) = x0^T K x0 for all w ∈ L2. Now, taking x0 = 0 yields that J(0, w) = ‖z‖_2^2 − γ² ‖w‖_2^2 ≤ 0 for all w ∈ L2. This is precisely (12.5), and it follows that ‖H(s)‖_∞ ≤ γ.

The only extra requirement for the solution K to (12.8) is therefore that the eigenvalues

    λ{A − B[γ²I − D^T D]^{-1}(B^T K − D^T C)} ⊂ C−

all lie in the left half complex plane; this is precisely the case when the feedback (12.11) stabilizes the system (12.1). One can show that whenever such a solution K of (12.8) exists, it is unique and nonnegative definite, and for obvious reasons we call such a solution a stabilizing solution of (12.8). So there exists at most one stabilizing solution to (12.8).

These observations provide the main idea behind the proof of the following result.

Theorem 12.3 Let H(s) be represented by the (minimal) state space model (12.1). Then

1. ‖H(s)‖_∞ < ∞ if and only if the eigenvalues λ(A) ⊂ C−;

2. ‖H(s)‖_∞ < γ if and only if there exists a stabilizing solution K of the Riccati equation (12.8).

How does this result convert into an algorithm to compute the H∞ norm of a transfer function? The following bisection type of algorithm works in general extremely fast:

Algorithm 12.4

INPUT: stopping criterion ε > 0 and two numbers γl, γh satisfying γl < ‖H(s)‖_∞ < γh.

Step 1. Set γ = (γl + γh)/2.
Step 2. Verify whether (12.8) admits a stabilizing solution.
Step 3. If so, set γh = γ; if not, set γl = γ.
Step 4. Put ε̄ = γh − γl.
Step 5. If ε̄ ≤ ε then STOP, else go to Step 1.

⁴ A 'completion of the squares' argument.
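The bisection algorithm can be sketched in a few lines of numpy. Instead of checking the Riccati equation (12.8) for a stabilizing solution directly, the sketch below uses the standard equivalent test (valid for D = 0 and stable A, and a well-known reformulation rather than something spelled out in the text): ‖H‖∞ < γ if and only if the Hamiltonian matrix [[A, γ⁻²BBᵀ], [−CᵀC, −Aᵀ]] has no eigenvalues on the imaginary axis. The example H(s) = 1/(s+1), whose H∞ norm is 1 (attained at ω = 0), is our own:

```python
# Bisection computation of the H-infinity norm (D = 0 case) via the
# imaginary-axis-eigenvalue test on the Hamiltonian matrix.
import numpy as np

def norm_less_than(A, B, C, gamma):
    H = np.block([[A, (gamma ** -2) * (B @ B.T)],
                  [-C.T @ C, -A.T]])
    # norm < gamma  <=>  no (numerically) imaginary-axis eigenvalue of H
    return np.min(np.abs(np.linalg.eigvals(H).real)) > 1e-9

def hinf_norm(A, B, C, gl=1e-6, gh=1e6, eps=1e-6):
    while gh - gl > eps:              # bisection on gamma, as in Algorithm 12.4
        g = 0.5 * (gl + gh)
        if norm_less_than(A, B, C, g):
            gh = g
        else:
            gl = g
    return 0.5 * (gl + gh)

# Example (ours): H(s) = 1/(s+1) has H-infinity norm 1.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
assert abs(hinf_norm(A, B, C) - 1.0) < 1e-4
print("H-infinity norm of 1/(s+1) ~", hinf_norm(A, B, C))
```

For the scalar example the Hamiltonian eigenvalues are ±√(1 − γ⁻²): real for γ > 1, purely imaginary for γ < 1, so the bisection converges to 1.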
OUTPUT: the value γ approximating the H∞ norm of H(s) within ε.

The second step of this algorithm involves the investigation of the existence of stabilizing solutions of (12.8), which is a standard routine in Matlab. What is of crucial importance here is the fact that the computation of the H∞ norm of a transfer function (just like the computation of the H2 norm of a transfer function) has been transformed to an algebraic problem. This implies a fast and extremely reliable way to compute these system norms.

12.3 The computation of H2 optimal controllers

The computation of H2 optimal controllers is not a subject of this course; however, for the sake of completeness we treat the controller structure of H2 optimal controllers once more in this section. In fact, H2 optimal controllers coincide with the well known LQG controllers which some of you may be familiar with from earlier courses.

We consider the general control configuration as depicted in Figure 12.1. Here, w are the exogenous inputs (disturbances, noise signals, reference inputs), u denote the control inputs, z is the to-be-controlled output signal and y denote the measurements. The generalized plant G is supposed to be given, whereas the controller K needs to be designed.

Figure 12.1: General control configuration

Admissible controllers are all linear time-invariant systems K that internally stabilize the configuration of Figure 12.1. Every such admissible controller K gives rise to a closed loop system which maps disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w → z, then with the obvious partitioning of G,

    M = G11 + G12 K (I − G22 K)^{-1} G21.

The H2 optimal control problem is formalized as follows:

    Synthesize a stabilizing controller K for the generalized plant G such that ‖M‖_2 is minimal.

The solution of this important problem is split into two independent problems and makes use of a separation structure:

• First, based on the measurements y, obtain an "optimal estimate" x̂ of the state variable x.
• Second, use this estimate x̂ as if the controller would have perfect knowledge of the full state x of the system.

As is well known, the Kalman filter is the optimal solution to the first problem and the state feedback linear quadratic regulator is the solution to the second problem. We will devote a short discussion to these two subproblems, but we will not go into the details of an efficient algebraic implementation of the latter problem.

Let the transfer function G be described in state space form by the equations

    x˙ = Ax + B1 w1 + B2 u
    z = C1 x + Du        (12.12)
    y = C2 x + w2

where the disturbance input w = [w1; w2] is assumed to be partitioned in a component w1 acting on the state (the process noise) and an independent component w2 representing measurement noise. In (12.12) we assume that the system G has no direct feedthrough in the transfers w → z (otherwise M ∉ H2) and u → y (mainly to simplify the formulas below). We further assume that the pair (A, C2) is detectable and that the pair (A, B2) is stabilizable. The latter two conditions are necessary to guarantee the existence of stabilizing controllers.

All these conditions are easy to grasp if we compare the set of equations (12.12) with the LQG problem definition as proposed e.g. in the course "Modern Control Theory". Consider Fig. 12.2.

Figure 12.2: The LQG problem.

Here v and w are independent, white, Gaussian noises of variance respectively Rv and Rw. They represent the direct state disturbance and the measurement noise. In order to cope with the requirement of equal (unit) variances of the inputs, they are inversely scaled by blocks Rv^{-1/2} and Rw^{-1/2} to obtain inputs w1 and w2 that have unit variances. The other inputs and outputs are given by:

    w = [w1; w2],   u = u,   y = y.

The output of this augmented plant is defined by:

    z = [z1; z2] = [Q^{1/2} x̃; R^{1/2} ũ]

in order to accomplish that

    ‖z‖_2^2 = ∫_0^∞ { x^T Q x + u^T R u } dt

(compare the forthcoming equation (12.16)). The resulting state space description then is:

         [ A  | B1  0  B2 ]   [ A       | Rv^{1/2}  0         B       ]
    G =  [ C1 | 0   0  D  ] = [ Q^{1/2} | 0         0         0       ]
         [    |           ]   [ 0       | 0         0         R^{1/2} ]
         [ C2 | 0   I  0  ]   [ C       | 0         Rw^{1/2}  0       ]

The celebrated Kalman filter is a causal, linear mapping taking the control input u and the measurements y as its inputs, and producing an estimate x̂ of the state x in such a way that the H2 norm of the transfer function from the noise w to the estimation error e = x − x̂ is minimal. That is, the Kalman filter is the optimal filter in the configuration of Figure 12.3 for which the L2 norm of the impulse response of the estimator Me : w → e is minimized.

Figure 12.3: The Kalman filter configuration

Theorem 12.5 (The Kalman filter.) Let the system (12.12) be given and assume that the pair (A, C2) is detectable. Then

1. The optimal filter which minimizes the H2 norm of the mapping Me : w → e in the configuration of Figure 12.3 is given by the state space equations

       dx̂/dt (t) = A x̂(t) + B2 u(t) + H(y(t) − C2 x̂(t))        (12.13)
                 = (A − HC2) x̂(t) + B2 u(t) + H y(t)        (12.14)

   where H = Y C2^T and Y is the unique square symmetric solution of

       0 = AY + Y A^T − Y C2^T C2 Y + B1 B1^T        (12.15)

   which has the property that λ(A − HC2) ⊂ C−.

2. The minimal H2 norm of the transfer Me : w → e is given by ‖Me‖_2^2 = trace Y.

The solution Y to the Riccati equation (12.15), or the gain matrix H = Y C2^T, is sometimes referred to as the Kalman gain of the filter (12.13). Note that Theorem 12.5 is put completely in a deterministic setting: no stochastics are necessary here, using our deterministic interpretation of the H2 norm of a transfer function.

For our second subproblem we assume perfect knowledge of the state variable. That is, we assume that the controller has access to the state x of (12.12), and our aim is to find a state feedback control law of the form u(t) = F x(t).
In this subproblem the measurements y and the measurement noise w2 evidently do not play a role. The feedback gain F should be chosen such that the H2 norm of the state controlled, closed-loop transfer function Mx : w → z is minimized. Since ‖Mx‖_2 is equal to the L2 norm of the corresponding impulse response, our aim is therefore to find a control input u which minimizes the criterion

    ‖z‖_2^2 = ∫_0^∞ { x^T(t) C1^T C1 x(t) + 2 u^T(t) D^T C1 x(t) + u^T(t) D^T D u(t) } dt        (12.16)

subject to the system equations (12.12). The final solution is as follows:

Theorem 12.6 (The state feedback regulator.) Let the system (12.12) be given and assume that (A, B2) is stabilizable. Then

1. The optimal state feedback regulator which minimizes the H2 norm of the transfer Mx : w → z is given by

       u(t) = F x(t) = −[D^T D]^{-1} (B2^T X + D^T C1) x(t)        (12.17)

   where X is the unique square symmetric solution of

       0 = A^T X + X A − (B2^T X + D^T C1)^T [D^T D]^{-1} (B2^T X + D^T C1) + C1^T C1        (12.18)

   which has the property that λ(A − B2 F) ⊂ C−.

2. The minimal H2 norm of the transfer Mx : w → z is given by ‖Mx‖_2^2 = trace(B1^T X B1).

The result of Theorem 12.6 is easily derived by using a completion of the squares argument applied to the criterion (12.16). If X satisfies the Riccati equation (12.18), then a straightforward exercise in first-year linear algebra gives you that

    ‖z‖_2^2 = x0^T X x0 + ‖ u + [D^T D]^{-1} (B2^T X + D^T C1) x ‖²_{D^T D}

where we used the notation of (12.10). From the latter expression it is immediate that ‖z‖_2 is minimized if u is chosen as in (12.17).

Minimization of equation (12.16) yields the so-called quadratic regulator and supposes only an initial value x(0) and no inputs w. The solution is independent of the initial value x(0), and such an initial value can also be accomplished by Dirac pulses on w1. This is similar to the equivalence of the quadratic regulator problem and the stochastic regulator problem as discussed in the course "Modern Control Theory".

The optimal solution of the H2 optimal control problem is now obtained by combining the Kalman filter with the optimal state feedback regulator. The so-called certainty equivalence principle or separation principle⁵ implies that an optimal controller K which minimizes ‖M(s)‖_2 is obtained by replacing the state x in the state feedback regulator (12.17) by the Kalman filter estimate x̂ generated in (12.13). In equations, the optimal H2 controller K is represented in state space form by

    dx̂/dt (t) = (A + B2 F − HC2) x̂(t) + H y(t)        (12.19)
    u(t) = F x̂(t)

where the gains H and F are given as in Theorem 12.5 and Theorem 12.6. The separation structure of the optimal H2 controller is depicted in Figure 12.4.

⁵ The word 'principle' is an incredible misnomer at this place for a result which requires rigorous mathematical deduction.
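The regulator Riccati equation (12.18) can be solved numerically via the stable invariant subspace of an associated Hamiltonian matrix. The sketch below (our own, not from the text) does this with numpy for the simplified case DᵀC1 = 0, DᵀD = I, where (12.18) reduces to 0 = AᵀX + XA − XB2B2ᵀX + C1ᵀC1; the scalar example is also ours:

```python
# Solve 0 = A^T X + X A - X B2 B2^T X + C1^T C1 (the D^T C1 = 0, D^T D = I
# case of the regulator Riccati equation) from the stable invariant subspace
# of the Hamiltonian matrix [[A, -B2 B2^T], [-C1^T C1, -A^T]].
import numpy as np

def lqr_riccati(A, B2, C1):
    n = A.shape[0]
    Ham = np.block([[A, -B2 @ B2.T],
                    [-C1.T @ C1, -A.T]])
    w, V = np.linalg.eig(Ham)
    stable = V[:, w.real < 0]          # basis of the stable invariant subspace
    X = stable[n:, :] @ np.linalg.inv(stable[:n, :])
    return X.real

# Scalar example (ours): A = 0 (an integrator), B2 = 1, C1 = 1.
# Then -X^2 + 1 = 0, so the stabilizing solution is X = 1, F = -B2^T X = -1,
# and the closed loop A + B2 F = -1 is stable.
A = np.array([[0.0]]); B2 = np.array([[1.0]]); C1 = np.array([[1.0]])
X = lqr_riccati(A, B2, C1)
assert abs(X[0, 0] - 1.0) < 1e-9
F = -B2.T @ X
assert (A + B2 @ F)[0, 0] < 0          # closed-loop pole in C-
print("X =", X[0, 0], " F =", F[0, 0])
```

The same subspace construction solves the filter Riccati equation (12.15) after the substitutions A → Aᵀ, B2 → C2ᵀ, C1 → B1ᵀ.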
Figure 12.4: Separation structure for H2 optimal controllers

12.4 The computation of H∞ optimal controllers

In this section we will first present the main algorithm behind the computation of H∞ optimal controllers. The background and the main ideas behind the algorithms are very similar to the ideas behind the derivation of Theorem 12.3 and the cost criterion (12.6). We defer this background material to the next section; here we present the main algorithm and resist the temptation to go into the details of its derivation.

We consider again the general control configuration as depicted in Figure 12.1, with the same interpretation of the signals as given in the previous section. All variables may be multivariable. The block G denotes the "generalized system" and typically includes a model of the plant P together with all weighting functions which are specified by the 'user'; it contains all the 'known' features (plant model, input weightings, output weightings and interconnection structures). The block K denotes the "generalized controller" and typically includes a feedback controller and/or a feedforward controller; the block K needs to be designed.

Admissible controllers are all linear, time-invariant systems K that internally stabilize the configuration of Figure 12.1. Every such admissible controller K gives rise to a closed loop system which maps disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w → z, then with the obvious partitioning of G,

    M = G11 + G12 K (I − G22 K)^{-1} G21

and the H∞ control problem is formalized as follows:

    Synthesize a stabilizing controller K such that ‖M‖_∞ < γ for some value of γ > 0.⁷

The optimal H∞ control problem amounts to minimizing ‖M(s)‖_∞ over all stabilizing controllers K. Precisely, if γ0 := inf_{K stabilising} ‖M(s)‖_∞, then the optimal control problem is to determine γ0 and an optimal K that achieves this minimal norm. However, this problem is very hard to solve in this general setting.

Note that already at this stage of formalizing the H∞ control problem, we can see that the solution of the problem is necessarily going to be of a 'testing type':

• Choose a value of γ > 0.
• See whether there exists a controller K such that ‖M(s)‖_∞ < γ.
• If yes, then decrease γ. If no, then increase γ.

From Section 12.2 we learned that the characterization of the H∞ norm of a transfer function is expressed in terms of the existence of a particular solution to an algebraic Riccati equation. It should therefore not be a surprise⁶ to see that the computation of H∞ optimal controllers hinges on the computation of specific solutions of Riccati equations; the synthesis algorithm will require the stabilizing solutions of two such equations.

⁶ Although it took about ten years of research!
⁷ Strictly speaking, this is a suboptimal H∞ control problem.
To solve this problem, consider the generalized system G and let

    x˙ = Ax + B1 w + B2 u
    z = C1 x + D11 w + D12 u        (12.20)
    y = C2 x + D21 w + D22 u

be a state space description of G, with

    G11(s) = C1(Is − A)^{-1}B1 + D11,    G12(s) = C1(Is − A)^{-1}B2 + D12,
    G21(s) = C2(Is − A)^{-1}B1 + D21,    G22(s) = C2(Is − A)^{-1}B2 + D22.

With some sacrifice of generality we make the following assumptions:

A1  D11 = 0 and D22 = 0.
A2  The triple (A, B2, C2) is stabilizable and detectable.
A3  The triple (A, B1, C1) is stabilizable and detectable.
A4  D12^T (C1  D12) = (0  I).
A5  D21 (B1^T  D21^T) = (0  I).

Assumption A1 states that there is no direct feedthrough in the transfers w → z and u → y. The second assumption A2 implies that we assume that there are no unobservable and uncontrollable unstable modes in G22; this assumption is precisely equivalent to saying that internally stabilizing controllers exist. Assumption A3 is a technical assumption made on the transfer function G11. Assumptions A4 and A5 are just scaling assumptions; they can be easily removed, but that would make all formulas and equations in the remainder of this chapter unacceptably complicated.

Assumption A4 simply requires that

    ‖z‖_2^2 = ∫_0^∞ ‖C1 x + D12 u‖² dt = ∫_0^∞ (x^T C1^T C1 x + u^T u) dt.

Thus, in the to-be-controlled output z we have a unit weight on the control input signal u, a weight C1^T C1 on the state x and a zero weight on the cross terms involving u and x. Similarly, assumption A5 claims that the state noise (or process noise) is independent of the measurement noise. With assumption A5 we can partition the exogenous noise input w as w = [w1; w2], where w1 only affects the state x and w2 only affects the measurements y. The foregoing assumptions therefore require our state space model to take the form

    x˙ = Ax + B1 w1 + B2 u
    z = C1 x + D12 u        (12.21)
    y = C2 x + w2

where w = [w1; w2].
The synthesis of H∞ suboptimal controllers is based on the following two Riccati equations:

0 = A^T X + X A − X [B2 B2^T − γ^{-2} B1 B1^T] X + C1^T C1        (12.22)
0 = A Y + Y A^T − Y [C2^T C2 − γ^{-2} C1^T C1] Y + B1 B1^T.        (12.23)

Observe that these define quadratic equations in the unknowns X and Y. The unknown matrices X and Y are symmetric and both have dimension n × n, where n is the dimension of the state space of (12.21). The quadratic terms are indefinite in both equations (both quadratic terms consist of the difference of two nonnegative definite matrices), and we moreover observe that both equations (and hence their solutions) depend on the value of γ.

We will be particularly interested in the so-called stabilizing solutions of these equations. We call a symmetric matrix X a stabilizing solution of (12.22) if the eigenvalues λ(A − B2 B2^T X + γ^{-2} B1 B1^T X) ⊂ C−. Similarly, a symmetric matrix Y is called a stabilizing solution of (12.23) if λ(A − Y C2^T C2 + γ^{-2} Y C1^T C1) ⊂ C−. It can be shown that whenever stabilizing solutions X or Y of (12.22) or (12.23) exist, then they are unique. In other words, there exists at most one stabilizing solution X of (12.22) and at most one stabilizing solution Y of (12.23). However, because these Riccati equations are indefinite in their quadratic terms, it is not at all clear that stabilizing solutions in fact exist.

The following result is the main result of this section, and has been considered as one of the main contributions in optimal control theory during the last 10 years.⁸

Theorem 12.7 Under the conditions A1–A5, there exists an internally stabilizing controller K that achieves ‖M(s)‖∞ < γ if and only if

1. Equation (12.22) has a stabilizing solution X = X^T ≥ 0;
2. Equation (12.23) has a stabilizing solution Y = Y^T ≥ 0;
3. σ̄(XY) < γ².

Moreover, in that case one such controller is given by

ξ˙ = (A + γ^{-2} B1 B1^T X) ξ + B2 u + Z H (C2 ξ − y)        (12.24)
u = F ξ

where

F := −B2^T X,   H := Y C2^T,   Z := (I − γ^{-2} Y X)^{-1}.

⁸ We hope you like it.
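The stabilizing solutions of Theorem 12.7 can be computed without iterating on the equation itself: the classical device — used, in one form or another, by most Riccati-based H∞ software — reads X off the stable invariant subspace of a Hamiltonian matrix built from the coefficients of (12.22). Below is a minimal illustrative sketch in Python with SciPy; it is our own construction, not the course software, the function name is hypothetical, and a production solver needs more care for eigenvalues near the imaginary axis.

```python
import numpy as np
from scipy.linalg import schur

def stabilizing_riccati(A, B1, B2, C1, gamma):
    """Stabilizing solution X of the indefinite Riccati equation (12.22):
       0 = A'X + XA - X(B2 B2' - gamma^-2 B1 B1')X + C1'C1.
    X is read off the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    R = B2 @ B2.T - gamma**-2 * B1 @ B1.T        # indefinite quadratic term
    H = np.block([[A, -R], [-C1.T @ C1, -A.T]])
    # ordered real Schur form: left-half-plane eigenvalues come first
    T, Z, sdim = schur(H, sort='lhp')
    if sdim != n:
        raise ValueError("no stabilizing solution for this value of gamma")
    U11, U21 = Z[:n, :n], Z[n:, :n]
    X = U21 @ np.linalg.inv(U11)                 # assumes U11 is invertible
    return (X + X.T) / 2                         # symmetrize against round-off
```

For the scalar data A = −1, B1 = B2 = C1 = 1 and γ = 2, equation (12.22) reduces to 0.75 X² + 2X − 1 = 0, and the routine returns its stabilizing root X = (√7 − 2)/1.5 ≈ 0.4305 ≥ 0, so condition 1 of Theorem 12.7 holds. The Y-equation (12.23) is handled by the same routine applied to the dual data (A^T, C1^T, C2^T, B1^T).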
A few crucial observations need to be made.

• Theorem 12.7 claims that three algebraic conditions need to be checked before we can conclude that there exists a stabilizing controller K which achieves that the closed loop transfer function M has H∞ norm less than γ. Once these conditions are satisfied, one possible controller is given explicitly by the equations (12.24), which we put in observer form. The controller (12.24) has the block structure as depicted in Figure 12.5. This diagram shows that the controller consists of a dynamic observer (an "H∞ filter") which computes a state vector ξ on the basis of the measurements y and the control input u, and a memoryless feedback F which maps ξ to the control input u.

Figure 12.5: Separation structure for H∞ controllers

A transfer function K(s) of the controller is easily derived from (12.24) and takes the explicit state space form

ξ˙ = (A + γ^{-2} B1 B1^T X + B2 F + Z H C2) ξ − Z H y        (12.25)
u = F ξ

which defines the desired map K : y → u.

• Note that the dynamic order of this controller is equal to the dimension n of the state space of the generalized system G. Incorporating high order weighting filters in the internal structure of G therefore results in high order controllers, which may be undesirable.

• It is interesting to compare the Riccati equations of Theorem 12.7 with those which determine the H2 optimal controller. If we let γ → ∞ we see that the indefinite quadratic terms in (12.22) and (12.23) become definite in the limit, and that in the limit the equations (12.22) and (12.23) coincide with the Riccati equations of the previous section. In particular, we emphasize that the presence of the indefinite quadratic terms in (12.22) and (12.23) is a major complication in guaranteeing the existence of solutions to these equations.

Summarizing, the H∞ control synthesis algorithm looks as follows:

Algorithm 12.8
INPUT: generalized plant G in state space form (12.21); tolerance level ε > 0.
ASSUMPTIONS: A1 till A5.
Step 1. Find γl, γh such that M : w → z satisfies γl < ‖M(s)‖∞ < γh.
Step 2. Let γ := (γl + γh)/2 and verify whether there exist matrices X = X^T and Y = Y^T satisfying the conditions 1–3 of Theorem 12.7.
Step 3. If so, then set γh = γ. If not, then set γl = γ.
Step 4. Put ε̄ = γh − γl.
Step 5. If ε̄ > ε then go to Step 2.
Step 6. Put γ = γh and let

ξ˙ = (A + γ^{-2} B1 B1^T X + B2 F + Z H C2) ξ − Z H y
u = F ξ

define the state space equations of a controller K(s).
OUTPUT: K(s) defines a stabilizing controller which achieves ‖M(s)‖∞ < γ.

12.5 The state feedback H∞ control problem

The results of the previous section can not fully be appreciated if no further system theoretic insight is given in the main results. In this section we will treat the state feedback H∞ optimal control problem, which is a special case of Theorem 12.7 and which provides quite some insight in the structure of optimal H∞ control laws. The procedure to obtain such a controller is basically an interesting extension of the arguments we put forward in section 12.2.

In this section we will therefore assume that the controller K(s) has access to the full state x. That is, we assume that the measurements y = x, and we wish to design a controller K(s) for which the closed loop transfer function, alternatively indicated here by Mx : w → z, satisfies ‖Mx‖∞ < γ. Since we are now dealing with the system (12.21) with state measurements (y = x) and two inputs u and w, we should treat the criterion

J(x0, u, w) := ‖z‖₂² − γ² ‖w‖₂² = ∫_0^∞ (‖z(t)‖² − γ² ‖w(t)‖²) dt        (12.26)

as a function of the initial state x0 and both the control inputs u as well as the disturbance inputs w. Here z is of course the output of the system (12.21) when the inputs u and w are applied and the initial state x(0) is taken to be x0; the criterion (12.26) therefore only depends on the initial condition x0 of the state and the inputs u and w of the system (12.21).

We will view the criterion (12.26) as a game between two players. One player, u, aims to minimize the criterion J, while the other player, w, aims to maximize it.⁹ We call a pair of strategies (u∗, w∗) optimal with respect to the criterion J(x0, u, w) if for all u ∈ L2 and w ∈ L2 the inequalities

J(x0, u∗, w) ≤ J(x0, u∗, w∗) ≤ J(x0, u, w∗)        (12.27)

are satisfied.

⁹ Just like a soccer match where, instead of administrating the number of goals of each team, the difference between the number of goals is taken as the relevant performance criterion. After all, this is the only relevant criterion which counts at the end of a soccer game.
Such a pair (u∗, w∗) defines a saddle point for the criterion J. We may think of u∗ as a best control strategy, while w∗ is the worst exogenous input.

The existence of such a saddle point is guaranteed by the solutions X of the Riccati equation (12.22). Specifically, under the assumptions made in the previous section, for any solution X of (12.22) a completion of the squares argument will give you that for all pairs (u, w) of (square integrable) inputs of the system (12.21) for which lim_{t→∞} x(t) = 0 there holds

J(x0, u, w) = x0^T X x0 + ‖u + B2^T X x‖₂² − γ² ‖w − γ^{-2} B1^T X x‖₂².        (12.28)

Thus, if both 'players' u and w have access to the state x of (12.21), then (12.28) gives us immediately a saddle point

u∗(t) := −B2^T X x(t)
w∗(t) := γ^{-2} B1^T X x(t)

which satisfies the inequalities (12.27). We see that in that case the saddle point J(x0, u∗, w∗) = x0^T X x0, which gives a nice interpretation of the solution X of the Riccati equation (12.22). Now, taking the initial state x0 = 0 gives that the saddle point J(0, u∗, w∗) = 0 which, by (12.27), gives that for all w ∈ L2

J(0, u∗, w) ≤ J(0, u∗, w∗) = 0.

As in section 12.2 it thus follows that the closed loop system Mx : w → z obtained by applying the static state feedback controller u∗(t) = −B2^T X x(t) results in ‖Mx(s)‖∞ ≤ γ. We moreover see from this analysis that the worst case disturbance is generated by w∗.

12.6 The H∞ filtering problem

Just like we split the optimal H2 control problem into a state feedback problem and a filtering problem, the H∞ control problem admits a similar separation. The H∞ filtering problem is the subject of this section and can be formalized as follows. We reconsider the state space equations (12.21):

x˙ = A x + B1 w1 + B2 u
z = C1 x + D12 u        (12.29)
y = C2 x + w2

under the same conditions as in the previous section. Just like the Kalman filter, the H∞ filter is a causal, linear mapping taking the control input u and the measurements y as its inputs, and producing an estimate ẑ of the signal z in such a way that the H∞ norm of the transfer function from the noise w to the estimation error e = z − ẑ is minimal. Specifically, in the configuration of Figure 12.6, we wish to design a filter mapping (u, y) → ẑ such that for the overall configuration with transfer function Me : w → e the H∞ norm

‖Me(s)‖∞² = sup_{w1, w2 ∈ L2} ‖e‖₂² / (‖w1‖₂² + ‖w2‖₂²)        (12.30)
Figure 12.6: The H∞ filter configuration

is less than or equal to some prespecified value γ². The solution to this problem is entirely dual to the solution of the state feedback H∞ problem and is given in the following theorem.

Theorem 12.9 (The H∞ filter.) Let the system (12.29) be given and assume that the assumptions A1 till A5 hold. Then there exists a filter which achieves that the mapping Me : w → e in the configuration of Figure 12.6 satisfies ‖Me‖∞ < γ if and only if the Riccati equation (12.23) has a stabilizing solution Y = Y^T ≥ 0. In that case one such filter is given by the equations

ξ˙ = (A + γ^{-2} B1 B1^T X) ξ + B2 u + H (C2 ξ − y)        (12.31)
ẑ = C1 ξ + D12 u

where H = Y C2^T.

Let us make a few important observations.

• We emphasize again that this filter design is carried out completely in a deterministic setting.
• It is important to observe that, in contrast to the Kalman filter, the H∞ filter depends on the to-be-estimated signal. This is because the matrix C1, which defines the to-be-estimated signal z explicitly, appears in the Riccati equation. The resulting filter therefore depends on the to-be-estimated signal.
• The matrix H is generally referred to as the H∞ filter gain and clearly depends on the value of γ (since Y depends on γ).

12.7 Computational aspects

The Robust Control Toolbox in Matlab includes various routines for the computation of H2 optimal and H∞ optimal controllers. These routines are implemented with the algorithms described in this chapter. The relevant routine in this toolbox for H2 optimal control synthesis is h2lqg. This routine takes the parameters of the state space model (12.12) or the more general state space model (12.16) (which it converts to (12.12)) as its input arguments and produces the
state space matrices (Ac, Bc, Cc, Dc) of the optimal H2 controller as defined in (12.19) as its outputs. If desired, this routine also produces the state space description of the corresponding closed loop transfer function M as its output. (See the corresponding help file.)

For H∞ optimal control synthesis, the Robust Control Toolbox includes an efficient implementation of the result mentioned in Theorem 12.7. The Matlab routine hinf takes the state space parameters of the model (12.16) as its input arguments and produces the state space parameters of the so-called central controller as specified by the formulae (12.24) in Theorem 12.7. The routine makes use of the two Riccati solutions as presented above. Also the state space parameters of the corresponding closed loop system can be obtained as an optional output argument. An efficient use of these routines, however, requires quite some programming effort in Matlab. The Robust Control Toolbox provides features to quickly generate augmented plants which incorporate suitable weighting filters.

The package MHC (Multivariable H∞ Control Design) has been written as part of a PhD study by one of the students of the Measurement and Control Group at TUE, and has been customized to easily experiment with filter design. During this course we will give a software demonstration of this package.

12.8 Exercises

Exercise 0. Write a routine h2comp in MATLAB which computes the H2 norm of a transfer function H(s). Let the state space parameters (A, B, C, D) be the input to this routine, and the H2 norm ‖C(Is − A)^{-1}B + D‖₂ its output. Build in sufficient checks on the matrices (A, B, C, D) to guarantee a 'foolproof' behavior of the routine. The procedures abcdchk, eig, minreal, or obsv may prove helpful; see also the help files of the routine lyap in the control system toolbox to solve the Lyapunov equation (12.2). Although we consider this an excellent exercise, it is not really the purpose of this course.

Exercise 1. Take the first block scheme of the exercise of chapter 6, with P = (s − 1)/(s + 1) and Wx = s/(s + 3). Define a mixed sensitivity problem where the performance is represented by good tracking, ‖r‖₂ < 1. The robustness term is defined by a bounded additive model error: ‖Wx^{-1} ∆P‖∞ < 1. Filter We is low pass and has to be chosen. What bandwidth can you obtain for the sensitivity being less than .01? Use the tool MHC!

Exercise 2. Suppose that a stable transfer function H(s) admits the state space representation

x˙ = A x + B w
z = C x + D w.

Show that ‖H(s)‖∞ < γ implies that γ²I − D^T D is positive definite. Give an example of a system for which the converse is not true, i.e., give an example for which γ²I − D^T D is positive definite and ‖H(s)‖∞ > γ.

Exercise 3. Write a block diagram for the optimal H2 controller and for the optimal H∞ controller.
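The norm computations in these exercises can be sketched numerically. For the H2 norm, the classical route is the observability Gramian: with D = 0 and A Hurwitz, ‖H‖₂² = tr(B^T P B) where A^T P + P A + C^T C = 0 — this is also what an h2comp routine built on lyap would do. For the H∞ norm, the standard test is that (for A Hurwitz) ‖H(s)‖∞ < γ if and only if γ²I − D^T D ≻ 0 and an associated Hamiltonian matrix has no purely imaginary eigenvalues; note how the matrix γ²I − D^T D from Exercise 2 enters. The following Python sketch is our own illustration (MATLAB is what the exercise actually asks for), and its crude imaginary-axis test would need a smarter tolerance in a production routine:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2norm(A, B, C):
    """H2 norm of H(s) = C(sI-A)^{-1}B with A Hurwitz (D must be zero for a
    finite H2 norm): solve A'P + PA + C'C = 0, then ||H||_2 = sqrt(tr(B'PB))."""
    if np.max(np.linalg.eigvals(A).real) >= 0:
        raise ValueError("A must be Hurwitz")
    P = solve_continuous_lyapunov(A.T, -C.T @ C)      # observability Gramian
    return float(np.sqrt(np.trace(B.T @ P @ B)))

def hinfnorm(A, B, C, D, tol=1e-6):
    """Bisection for ||H||_inf: for A Hurwitz, ||H||_inf < g iff g^2 I - D'D > 0
    and the Hamiltonian M(g) has no purely imaginary eigenvalues."""
    def is_upper_bound(g):
        R = g**2 * np.eye(D.shape[1]) - D.T @ D
        if np.min(np.linalg.eigvalsh(R)) <= 0:        # g^2 I - D'D must be > 0
            return False
        Ri = np.linalg.inv(R)
        Ah = A + B @ Ri @ D.T @ C
        M = np.block([[Ah, B @ Ri @ B.T],
                      [-C.T @ (np.eye(D.shape[0]) + D @ Ri @ D.T) @ C, -Ah.T]])
        # crude imaginary-axis test with a fixed threshold
        return not np.any(np.abs(np.linalg.eigvals(M).real) < 1e-9)

    lo, hi = np.linalg.norm(D, 2), 1.0
    while not is_upper_bound(hi):                     # grow until feasible
        hi *= 2.0
    while hi - lo > tol:                              # the 'gamma-iteration'
        mid = 0.5 * (lo + hi)
        if is_upper_bound(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For H(s) = 1/(s + 1) these return ‖H‖₂ = 1/√2 and ‖H‖∞ = 1.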
Exercise 4. This exercise is a more extensive simulation exercise. Using MATLAB and the installed package MHC (Multivariable H∞ Controller design) you should be able to design a robust controller for the following problem. You may also like to use the MHC package that has been demonstrated and for which a manual is available upon request.

The system considered in this design is a satellite with two highly flexible solar arrays attached. Because of the highly flexible nature of this system, the use of control torque for attitude control can lead to excitation of the lightly damped flexural modes and hence loss of control. It is therefore desired to design a feedback controller which increases the system damping and maintains a specified pointing accuracy. That is, variations in the roll angle are to be limited in the face of torque disturbances. In addition the stiffness of the structure is uncertain, so that the natural frequency, ω, can only be approximately estimated. Hence, it is desirable that the closed loop be robustly stable to variations in this parameter.

The model for control analysis represents the transfer function from the torque applied to the roll axis of the satellite to the corresponding satellite roll angle. In order to keep the model simple, only a rigid body mode and a single flexible mode are included, resulting in a four state model. The state space system is described by

x˙ = A x + B u + B w
y = C x

where u is the control torque (in units N m), w is a constant disturbance torque (N m) and y is the roll angle measurement (in rad). The state space matrices are given by

A = [ 0  1    0       0
      0  0    0       0
      0  0    0       1
      0  0  −ω²   −2ζω ],   B = [ 0
                                  1.7319 × 10⁻⁵
                                  0
                                  3.7859 × 10⁻⁴ ],   C = [1 0 1 0],   D = 0,

where ω = 1.539 rad/sec is the frequency of the flexible mode and ζ = 0.003 is the flexural damping ratio. The nominal open loop poles are at −0.0046 + 1.5390j, −0.0046 − 1.5390j, 0, 0 and the finite zeros at −0.0002 + 0.3219j, −0.0002 − 0.3219j.

The design objectives are as follows.

1. Performance: required pointing accuracy due to a 0.3 N m step torque disturbance should be y(t) < 0.0007 rad for all t > 0; the response time is required to be less than 1 minute (60 sec).
2. Robust stability: stable response for about 10% variations in the natural frequency ω.
3. Control level: control effort due to 0.3 N m step torque disturbances should satisfy u(t) < 0.5 N m.
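The A, B and C entries above were garbled in the source and have been restored here assuming the standard 'rigid body plus one flexible mode' structure; the following NumPy check (our own sketch) confirms that the restored matrices reproduce the quoted open loop poles:

```python
import numpy as np

omega, zeta = 1.539, 0.003                # flexible-mode frequency and damping
A = np.array([[0.0, 1.0, 0.0, 0.0],       # rigid body mode (double integrator)
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],       # single lightly damped flexible mode
              [0.0, 0.0, -omega**2, -2.0 * zeta * omega]])
B = np.array([[0.0], [1.7319e-5], [0.0], [3.7859e-4]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])      # roll angle = rigid + flexible part

poles = np.linalg.eigvals(A)
flex = poles[np.abs(poles.imag) > 0.1]
print(np.round(flex.real, 4), np.round(np.abs(flex.imag), 4))
# flexible poles at about -0.0046 +/- 1.5390j; the other two poles are at 0
```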
We will start the design by making a few simple observations.

• Verify that with a feedback control law u = −Cy, the resulting closed loop transfer U := (I + PC)^{-1} P maps the torque disturbance w to the roll angle y.
• Note that, to achieve a pointing accuracy of 0.0007 rad in the face of 0.3 N m step torque input disturbances, we require that U satisfies the condition

σ̄((I + PC)^{-1} P) = σ̄(U) < 0.0007/0.3 ≈ 0.0023 rad/N m        (12.32)

at least at low frequencies.
• Recall that, for a suitable weighting function W, we can achieve that ‖U(jω)‖ ≤ γ ‖W(jω)‖ for all ω, where γ is the usual parameter in the 'γ-iteration' of the H∞ optimization procedure.

1. Consider the weighting filter

Wk(s) = k (s + 0.4)/(s + 0.001)        (12.33)

where k is a positive constant. Use the MHC package to compute a suboptimal H∞ controller C which minimizes the H∞ norm of the closed loop transfer w → z for various values of k > 0. Determine a value of k so as to achieve the required level of pointing accuracy in the H∞ design. Try to obtain a value of γ which is more or less equal to 1. Hint: set up a scheme for H∞ controller design in which the output y + 10⁻⁵ w is used as a measurement variable and in which the to-be-controlled variables are

z = ( Wk y ; 10⁻⁵ u )

(the extra output is necessary to regularize the design).

2. Let Wk be given by the filter (12.33) with k as determined in 1. Construct a 0.3 N m step torque input disturbance w to verify whether your closed loop system meets the pointing specification.

3. Let V(s) be a second weighting filter and consider the weighted control sensitivity M := Wk U V = Wk (I + PC)^{-1} P V. Choose V in such a way that an H∞ suboptimal controller C which minimizes ‖M‖∞ meets the design specifications. Hint: use the same configuration as in part 2 and compute controllers C by using the package MHC and by varying the weighting filter V.

After you complete the design phase, make Bode plots of the closed loop response of the system and verify whether the specifications are met by perturbing the parameter ω and by plotting the closed loop system responses of the signals u and y under step torque disturbances of 0.3 N m. See the MHC help facility to get more details.
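Before starting the weighted H∞ design it is instructive to verify that the pointing specification cannot be met without feedback: with the undamped rigid body mode, a 0.3 N m step disturbance drives the roll angle far beyond 0.0007 rad within the one-minute response window. A small forward-Euler simulation (our own sketch, reusing the reconstructed model data) makes this concrete:

```python
import numpy as np

omega, zeta = 1.539, 0.003
b1, b2 = 1.7319e-5, 3.7859e-4            # input gains of rigid and flexible mode
w = 0.3                                  # step torque disturbance [N m]

dt = 1e-3
x = np.zeros(4)                          # [roll, roll rate, flex, flex rate]
for _ in range(int(60 / dt)):            # simulate one minute, open loop
    dx = np.array([x[1],
                   b1 * w,               # double integrator driven by torque
                   x[3],
                   -omega**2 * x[2] - 2.0 * zeta * omega * x[3] + b2 * w])
    x = x + dt * dx
y = x[0] + x[2]                          # measured roll angle after 60 s

print(y > 0.0007)                        # True: open loop violates the spec
```

The rigid body contribution alone grows like ½ b1 w t² ≈ 0.009 rad after 60 s, more than ten times the allowed pointing error, which is why the feedback design above is needed.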
Chapter 13

Solution to the general H∞ control problem

In previous chapters we have been mainly concerned with properties of control configurations in which a controller is designed so as to minimize the H∞ norm of a closed loop transfer function. So far, however, we did not address the question how such a controller is actually computed. This has been a problem of main concern in the early 80s, and various mathematical techniques have been developed to compute H∞ optimal controllers, i.e., feedback controllers which stabilize a closed loop system and at the same time minimize the H∞ norm of a closed loop transfer function. In this chapter we treat a solution to a most general version of the H∞ optimal control problem. We will make use of a technique which is based on Linear Matrix Inequalities (LMI's). This technique is fast, simple, and at the same time a most reliable and efficient way to synthesize H∞ optimal controllers.

This chapter is organized as follows. In the next section we first treat the concept of a dissipative dynamical system. We will see that linear dissipative systems are closely related to Linear Matrix Inequalities (LMI's) and we will subsequently show how the H∞ norm of a transfer function can be computed by means of LMI's. Finally, we consider the synthesis question of how to obtain a controller which stabilizes a given dynamical system so as to minimize the H∞ norm of the closed loop system. Proofs of theorems are included for completeness only. They are not part of the material of the course and can be skipped upon first reading of the chapter.

13.1 Dissipative dynamical systems

13.1.1 Introduction

The notion of dissipativity (or passivity) is motivated by the idea of energy dissipation in many physical dynamical systems. It is a most important concept in system theory and dissipativity plays a crucial role in many modeling questions. Roughly speaking, a dissipative system is characterized by the property that at any time the amount of energy which the system can conceivably supply to its environment can not exceed the amount of energy that has been supplied to it. Stated otherwise, when time evolves a dissipative system absorbs a fraction of its supplied energy and transforms it for example into heat, an increase of entropy, mass, electromagnetic radiation, or other kinds of energy 'losses'. Especially in the physical sciences, dissipativity is closely related to the notion of energy. In many applications, the question whether a system is dissipative or not can be answered from physical considerations on the way the system interacts with its environment: e.g., by observing that the system is an interconnection of dissipative components, or by considering systems in which a loss of energy is inherent to the behavior of the system (due to friction, optical dispersion, evaporation losses, etc.).

In this chapter we will formalize the notion of a dissipative dynamical system for the class of linear time-invariant systems. It will be shown that linear matrix inequalities (LMI's) occur in a very natural way in the study of linear dissipative systems. Solutions of these inequalities have a natural interpretation as storage functions associated with a dissipative dynamical system. This interpretation will play a key role in understanding the relation between LMI's and questions related to stability, robustness, and H∞ controller design. Although the history of LMI's goes back to the forties, with a major emphasis on their role in control in the sixties (Kalman, Yakubovich, Popov, Willems), only recently have powerful numerical interior point techniques been developed to solve LMI's in a practically efficient manner (Nesterov, Nemirovskii 1994). In recent years, linear matrix inequalities have emerged as a powerful tool to approach control problems that appear hard, if not impossible, to solve in an analytic fashion. Several Matlab software packages are available that allow a simple coding of general LMI problems and of those that arise in typical control problems (LMI Control Toolbox, LMItool).

13.1.2 Dissipativity

Consider a continuous time, time-invariant dynamical system Σ described by the equations¹

        x˙ = A x + B u
Σ :                                (13.1)
        y = C x + D u

As usual, x is the state which takes its values in a state space X = Rⁿ, u is the input taking its values in an input space U = Rᵐ and y denotes the output of the system which assumes its values in the output space Y = Rᵖ. Let

s : U × Y → R

be a mapping and assume that for all time instances t0, t1 ∈ R and for all input-output pairs u, y satisfying (13.1) the function s(t) := s(u(t), y(t)) is locally integrable, i.e., ∫_{t0}^{t1} s(t) dt < ∞. The mapping s will be referred to as the supply function.

Definition 13.1 (Dissipativity) The system Σ with supply rate s is said to be dissipative if there exists a nonnegative function V : X → R such that

V(x(t0)) + ∫_{t0}^{t1} s(t) dt ≥ V(x(t1))        (13.2)

for all t0 ≤ t1 and all trajectories (u, x, y) which satisfy (13.1).

¹ Much of what is said in this chapter can be applied to (much) more general systems of the form x˙ = f(x, u), y = g(x, u).
Interpretation 13.2 The supply function (or supply rate) s should be interpreted as the supply delivered to the system. This means that in a time interval [0, t] work has been done on the system whenever ∫_0^t s(τ) dτ is positive, while work is done by the system if this integral is negative. With this interpretation, inequality (13.2) formalizes the intuitive idea that a dissipative system is characterized by the property that the change of internal storage V(x(t1)) − V(x(t0)) in any time interval [t0, t1] will never exceed the amount of supply that flows into the system (the 'work done on the system'). This means that part of what is supplied to the system is stored, while the remaining part is dissipated. Inequality (13.2) is known as the dissipation inequality, and the nonnegative function V is called a storage function; it generalizes the notion of an energy function for a dissipative system.

Remark 13.3 If the function V(x(·)), with V a storage function and x : R → X a state trajectory of (13.1), is differentiable as a function of time, then (13.2) can be equivalently written as

V˙(t) ≤ s(u(t), y(t)).        (13.3)

Remark 13.4 (this remark may be skipped) There is a refinement of Definition 13.1 which is worth mentioning. The system Σ is said to be conservative (or lossless) if there exists a nonnegative function V : X → R such that equality holds in (13.2) for all t0 ≤ t1 and all (u, x, y) which satisfy (13.1).

Example 13.5 Consider an electrical network with n external ports. Denote the external voltages and currents of the ith port by (Vi, Ii) and let V and I denote the vectors of length n whose ith components are Vi and Ii, respectively. For such a circuit, a natural supply function is

s(V(t), I(t)) = V^T(t) I(t).

Assume that the network contains (a finite number of) resistors, capacitors, inductors and lossless elements such as transformers and gyrators. Let nC and nL denote the number of capacitors and inductors in the network and denote by VC and IL the vectors of voltage drops across the capacitors and currents through the inductors of the network. An impedance description of the system then takes the form (13.1), where u = I, y = V and x = (VC^T IL^T)^T. This system is dissipative and

V(x) := ½ Σ_{i=1}^{nC} Ci VCi² + ½ Σ_{i=1}^{nL} Li ILi²

is a storage function of the system that represents the total electrical energy in the capacitors and inductors.

Example 13.6 Consider a thermodynamic system at uniform temperature T on which mechanical work is being done at rate W and which is being heated at rate Q. Let (T, Q, W) be the external variables of such a system and assume that –either by physical or chemical principles or through experimentation– the mathematical model of the thermodynamic system has been decided upon and is given by the time invariant system (13.1).
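Example 13.5 can be made concrete with the simplest one-port: a capacitor in series with a resistor, driven by a port current i(t). The following simulation (our own sketch, forward Euler) checks the dissipation inequality (13.2) along a trajectory, with storage V = ½ C vC² and supply s = v·i:

```python
import numpy as np

# Series R-C one-port driven by a current source i(t):
# port voltage v = R*i + vC,  C*dvC/dt = i,  storage V = 0.5*C*vC^2, supply s = v*i.
R, Cap, dt = 2.0, 0.5, 1e-4
t = np.arange(0.0, 5.0, dt)
i = np.sin(3.0 * t)             # an arbitrary port current

vC, supplied = 0.0, 0.0
V0 = 0.5 * Cap * vC**2
for ik in i:
    v = R * ik + vC
    supplied += v * ik * dt     # integral of the supply rate s = v*i
    vC += (ik / Cap) * dt
V1 = 0.5 * Cap * vC**2

# dissipation inequality (13.2): V(x(t0)) + int s dt >= V(x(t1))
print(V0 + supplied >= V1)      # True: the resistor dissipates R*i^2 >= 0
```

Here the gap between supplied energy and stored energy is exactly ∫ R i² dt ≥ 0, the heat dissipated in the resistor, so the inequality holds for every input.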
The first and second law of thermodynamics may then be formulated in the sense of Definition 13.1 by saying that the system Σ is conservative with respect to the supply function s1 := (W + Q) and dissipative with respect to the supply function s2 := −Q/T. Indeed, the two basic laws of thermodynamics state that for all system trajectories (T, Q, W) and all time instants t0 ≤ t1

E(x(t0)) + ∫_{t0}^{t1} (Q(t) + W(t)) dt = E(x(t1))

(which is conservation of thermodynamical energy), and that the system trajectories satisfy

S(x(t0)) + ∫_{t0}^{t1} (−Q(t)/T(t)) dt ≥ S(x(t1))

for a storage function S. Here, E is called the internal energy and S the entropy. The first law states that the change of internal energy is equal to the heat absorbed by the system and the mechanical work which is done on the system. The second law states that the entropy decreases at a higher rate than the quotient of absorbed heat and temperature. Note that thermodynamical systems are dissipative with respect to more than one supply function!

Example 13.7 As another example, the product of forces and velocities is a candidate supply function in mechanical systems. For those familiar with the theory of bond graphs we remark that every bond graph can be viewed as a representation of a dissipative dynamical system where input and output variables are taken to be effort and flow variables, and the supply function s is invariably taken to be the product of these two variables. A bond graph is therefore a special case of a dissipative system (and not the other way around!).

Example 13.8 Typical examples of supply functions s : U × Y → R are

s(u, y) = u^T y,        (13.4)
s(u, y) = ‖y‖²,        (13.5)
s(u, y) = ‖y‖² − ‖u‖²,        (13.6)
s(u, y) = ‖y + u‖²,        (13.7)

which arise in network theory, bond graph theory, scattering theory, H∞ theory, game theory, LQ-optimal control and H2 optimal control theory.

13.1.3 A first characterization of dissipativity

Instead of considering the set of all possible storage functions associated with a dynamical system Σ, we will restrict attention to the set of normalized storage functions. If Σ is dissipative with storage function V, then we will assume that there exists a reference point x∗ ∈ X of minimal storage, i.e., a point x∗ ∈ X such that V(x∗) = min_{x∈X} V(x). You can think of x∗ as the state in which the system is 'at rest', an 'equilibrium state' for which no energy is stored in the system. For linear systems of the form (13.1) we usually take x∗ = 0. Given a storage function V, its normalization (with respect to x∗) is defined as V̄(x) := V(x) − V(x∗). Obviously, V̄(x∗) = 0 and V̄ is a storage function of Σ whenever V is. Formally, the set of normalized storage functions (associated with (Σ, s)) is defined by

V(x∗) := { V : X → R₊ | V(x∗) = 0 and (13.2) holds }.
Note that in (13. We introduce two mappings Vav : X → R+ ∪ ∞ and Vreq : X → R ∪ {−∞} which will play a crucial role in the sequel. Moreover. Then 1. Let Σ be dissipative. Σ is dissipative if and only if Vav (x) is ﬁnite for all x ∈ X.1. it is assumed that there exist a control input u which brings the state trajectory x from x∗ at time t = t−1 to x0 at time t = 0. In particular.e. any trajectory of the system which emanates from x∗ has the property that the net ﬂow of supply is into the system. They are deﬁned by t1 1 Vav (x0 ) := sup − s(t) dt  t1 ≥ 0. statement (b) shows that the available storage and the required supply are the extremal storage functions in V(x∗ ). 2. i. Interpretation 13. (u. Vreq (x) reﬂects the minimal supply the environment has to deliver to the system in order to excite the state x via any trajectory in the state space originating in x∗ . If Σ is dissipative and controllable then (a) Vav . 0 Taking the supremum over all t1 ≥ 0 and all such trajectories (u. respectively. To prove the converse implication it suﬃces to show that Vav is a storage .13. y) (with x(0) = x0 ) yields that Vav (x0 ) ≤ V (x0 ) < ∞.10 Let the system Σ be described by (13. t1 − s(u(t). y(t)) dt ≥ 0 0 for any t1 ≥ 0 and any (u. x. From (13. Similarly.8a) 0 0 Vreq (x0 ) := inf s(t) dt  t−1 ≤ 0. 1.2) it then follows that for all t1 ≥ 0 and all (u. y) satisfy (13.1) and let s be a supply function. Proof. y(t))dt ≤ V (x0 ) < ∞. Vreq ∈ V(x∗ ). It shows that both the available storage and the required supply are possible storage functions. In many treatments of dissipativity this property is often taken as deﬁnition of passivity. Theorem 13. x.11 Theorem 13. y) satisfying (13. DISSIPATIVE DYNAMICAL SYSTEMS 195 The existence of a reference point x∗ of minimal storage implies that for a dissipative system t1 s(u(t). V a storage function and x0 ∈ X. y) satisfy (13.1) with (13.8b) t−1 x(0) = x0 and x(t−1 ) = x∗ } Interpretation 13.1) with x(0) = x0 (13. 
y) satisﬁng (13.10 gives a necessary and suﬃcient condition for a system to be dissipative. x. x. for any state of a dissipative system.1) with x(0) = x∗ . (u.9 Vav (x) denotes the maximal amount of internal storage that may be recovered from the system over all state trajectories starting from x. the available storage is at most equal to the required supply. x.8b) it is assumed that the point x0 ∈ X is reachable from the reference pont x∗ . This is possible when the system Σ is controllable.1) with x(0) = x0 . Stated otherwise. We refer to Vav and Vreq as the available storage and the required supply. (b) {V ∈ V(x∗ )} ⇒ {For all x ∈ X there holds 0 ≤ Vav (x) ≤ V (x) ≤ Vreq (x)}.
To prove that Vav satisfies (13.2), first note that Vav(x) ≥ 0 for all x ∈ X (take t1 = 0 in (13.8a)). Next, let t0 ≤ t1 ≤ t2 and let (u, x, y) satisfy (13.1). Then

    Vav(x(t0)) ≥ −∫_{t0}^{t1} s(u(t), y(t)) dt − ∫_{t1}^{t2} s(u(t), y(t)) dt.

Since the second term on the right-hand side of this inequality holds for arbitrary t2 ≥ t1 and arbitrary (u, x, y)|[t1,t2] (with x(t1) fixed), we can take the supremum over all such trajectories to conclude that

    Vav(x(t0)) ≥ −∫_{t0}^{t1} s(u(t), y(t)) dt + Vav(x(t1)),

which shows that Vav satisfies (13.2).

2a. Suppose that Σ is dissipative and let V be a storage function. Then V̄(x) := V(x) − V(x*) ∈ V(x*), so that V(x*) ≠ ∅. Observe that Vav(x*) ≥ 0 and Vreq(x*) ≤ 0 (take t1 = t−1 = 0 in (13.8)). Suppose that the latter inequalities are strict. Then, using controllability of the system, there exist t−1 ≤ 0 ≤ t1 and a state trajectory x with x(t−1) = x(0) = x(t1) = x* such that −∫_0^{t1} s(t) dt > 0 and ∫_{t−1}^0 s(t) dt < 0. But this yields a contradiction with (13.2), as both ∫_0^{t1} s(t) dt ≥ 0 and ∫_{t−1}^0 s(t) dt ≥ 0. Thus, Vav(x*) = Vreq(x*) = 0. We already proved that Vav is a storage function, so that Vav ∈ V(x*). Along the same lines one shows that also Vreq ∈ V(x*).

2b. If V ∈ V(x*) then

    −∫_0^{t1} s(u(t), y(t)) dt ≤ V(x0) ≤ ∫_{t−1}^0 s(u(t), y(t)) dt

for all t−1 ≤ 0 ≤ t1 and (u, x, y) satisfying (13.1) with x(t−1) = x* and x(0) = x0. Now take the supremum and infimum over all such trajectories to obtain that Vav ≤ V ≤ Vreq.

13.2 Dissipative systems with quadratic supply functions

13.2.1 Quadratic supply functions

In this section we will apply the above theory by considering systems of the form (13.1) with quadratic supply functions s : U × Y → R, defined by

    s(u, y) = [y; u]^T [Qyy Qyu; Quy Quu] [y; u]          (13.9)

Here,

    Q := [Qyy Qyu; Quy Quu]

is a real symmetric matrix (i.e. Q = Q^T) which is partitioned conformally with y and u. Note that the supply functions given in Example 13.8 can all be written in the form (13.9).

Remark 13.12 Substituting the output equation y = Cx + Du in the supply function (13.9) shows that (13.9) can equivalently be viewed as a quadratic function in the variables x and u. Indeed,

    s(u, Cx + Du) = [x; u]^T [Qxx Qxu; Qux Quu] [x; u]

where

    [Qxx Qxu; Qux Quu] = [C D; 0 I]^T [Qyy Qyu; Quy Quu] [C D; 0 I].
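The substitution in Remark 13.12 is a congruence transformation of the supply matrix Q with T = [C D; 0 I]. The following sketch carries this out numerically for an assumed scalar example (the matrices C, D and the passivity supply matrix Q are illustrative choices, not taken from the text).

```python
# Remark 13.12 as a computation: rewriting the supply function from (y, u) to
# (x, u) coordinates is the congruence transform T^T Q T with T = [[C, D], [0, I]].
# The numeric data below (scalar x, u, y; passivity supply) is an assumed example.
import numpy as np

C = np.array([[1.0]])
D = np.array([[0.0]])
# The passivity supply s(u, y) = u*y corresponds to Q = [[0, 1/2], [1/2, 0]].
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

T = np.block([[C, D],
              [np.zeros((1, 1)), np.eye(1)]])
Qx = T.T @ Q @ T                                  # supply matrix in (x, u) coordinates

x, u = 2.0, 3.0
y = (C @ np.array([[x]]) + D @ np.array([[u]])).item()   # output equation
s_yu = np.array([y, u]) @ Q @ np.array([y, u])    # s evaluated in (y, u)
s_xu = np.array([x, u]) @ Qx @ np.array([x, u])   # s evaluated in (x, u)
assert abs(s_yu - s_xu) < 1e-12 and abs(s_yu - u * y) < 1e-12
```

Both evaluations return the same value of the supply rate, as the congruence preserves the quadratic form.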
13.2.2 Complete characterizations of dissipativity

The following theorem is the main result of this section. It provides necessary and sufficient conditions for dissipativeness.

Theorem 13.13 Suppose that the system Σ described by (13.1) is controllable and let G(s) = C(Is − A)^{−1}B + D be the corresponding transfer function. Let the supply function s be defined by (13.9). Then the following statements are equivalent.

1. (Σ, s) is dissipative.

2. (Σ, s) admits a quadratic storage function V(x) := x^T K x with K = K^T ≥ 0.

3. There exists K = K^T ≥ 0 such that

    F(K) := −[A^T K + KA, KB; B^T K, 0] + [C D; 0 I]^T [Qyy Qyu; Quy Quu] [C D; 0 I] ≥ 0.          (13.10)

4. There exists K− = K−^T ≥ 0 such that Vav(x) = x^T K− x.

5. There exists K+ = K+^T ≥ 0 such that Vreq(x) = x^T K+ x.

6. For all ω ∈ R with det(jωI − A) ≠ 0, there holds

    [G(jω); I]* [Qyy Qyu; Quy Quu] [G(jω); I] ≥ 0.          (13.11)

Moreover, if one of the above equivalent statements holds, then V(x) := x^T K x is a quadratic storage function in V(0) if and only if K ≥ 0 and F(K) ≥ 0.

Proof. (1⇒2). If (Σ, s) is dissipative then we infer from Theorem 13.10 that the available storage Vav(x) is finite for any x ∈ R^n. We claim that Vav(x) is a quadratic function of x. Indeed, s is quadratic and

    Vav(x) = sup ( −∫_0^{t1} s(t) dt ) = − inf ∫_0^{t1} s(t) dt

denotes the optimal cost of a linear quadratic optimization problem. It is well known that this infimum is a quadratic form in x, i.e. Vav(x) = x^T K− x with K− = K−^T ≥ 0; this is a standard result from LQ optimization. In particular, Vav is a quadratic storage function, which proves statements 2 and 4.

(4⇒1). Obvious from Theorem 13.10.

(1⇔5). If (Σ, s) is dissipative then, by Theorem 13.10, Vreq is a storage function. Since Vreq is defined as an optimal cost corresponding to a linear quadratic optimization problem, Vreq(x) is of the form x^T K+ x for some K+ = K+^T ≥ 0.

(2⇒3). If V(x) = x^T K x with K ≥ 0 is a storage function, then the dissipation inequality can be rewritten as

    ∫_{t0}^{t1} [ −(d/dt) (x(t)^T K x(t)) + s(u(t), y(t)) ] dt ≥ 0.

Substituting the system equations (13.1), this is equivalent to

    ∫_{t0}^{t1} [x(t); u(t)]^T F(K) [x(t); u(t)] dt ≥ 0          (13.12)

with F(K) as defined in (13.10). Since (13.12) holds for all t0 ≤ t1 and all inputs u, this reduces to the requirement that K ≥ 0 satisfies the LMI F(K) ≥ 0.

(3⇒2). Conversely, if there exists K ≥ 0 such that F(K) ≥ 0, then (13.12) holds and it follows that V(x) = x^T K x is a storage function which satisfies the dissipation inequality.
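The dissipation matrix F(K) of statement 3 can be evaluated explicitly for a small example. The sketch below assumes the scalar system A = −1, B = C = 1, D = 0 with the passivity supply s(u, y) = uy (all numbers illustrative); for this data F(K) = [[2K, 1/2 − K], [1/2 − K, 0]], which is positive semidefinite only for K = 1/2, i.e. the storage V(x) = x²/2.

```python
# Dissipation matrix F(K) of Theorem 13.13 for an assumed scalar example:
# A = -1, B = 1, C = 1, D = 0, passivity supply Q = [[0, 1/2], [1/2, 0]].
import numpy as np

A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
Q = np.array([[0.0, 0.5], [0.5, 0.0]])

def F(K):
    K = np.atleast_2d(K)
    lyap = np.block([[A.T @ K + K @ A, K @ B],
                     [B.T @ K, np.zeros((1, 1))]])
    T = np.block([[C, D], [np.zeros((1, 1)), np.eye(1)]])
    return -lyap + T.T @ Q @ T

def is_psd(M, tol=1e-9):
    """Positive semidefiniteness test via the symmetric eigenvalue solver."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Only K = 1/2 solves the LMI here: V(x) = x^2/2 is the unique quadratic storage.
assert is_psd(F(0.5))
assert not is_psd(F(0.3)) and not is_psd(F(0.7))
```

The uniqueness of K in this example reflects that the available storage and required supply coincide for this particular system and supply rate.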
To complete the equivalence of statement 5, note that if Vreq(x) = x^T K+ x, then it is easily seen that Vreq satisfies the dissipation inequality (13.2), which implies that (Σ, s) is dissipative.

(1⇔6). Now suppose that (Σ, s) is dissipative. Let ω ∈ R be such that det(jωI − A) ≠ 0 and consider the harmonic input u(t) = exp(jωt)u0 with u0 ∈ R^m. Define x(t) := exp(jωt)(jωI − A)^{−1}Bu0 and y(t) := Cx(t) + Du(t). Then y(t) = exp(jωt)G(jω)u0 and the triple (u, x, y) satisfies (13.1). Moreover,

    s(u(t), y(t)) = ū0* [G(jω); I]* [Qyy Qyu; Quy Quu] [G(jω); I] u0,

which is a constant for all time t ∈ R. For nonzero frequencies ω the triple (u, x, y) is periodic with period P = 2π/ω; in particular, for any time instant t0 there holds x(t0) = x(t0 + kP) for all k ∈ Z. Hence, along such a trajectory the dissipation inequality (13.2) reads

    ∫_{t0}^{t1} s(u(t), y(t)) dt = (t1 − t0) ū0* [G(jω); I]* [Qyy Qyu; Quy Quu] [G(jω); I] u0 ≥ 0

for all t1 > t0. Since u0 and t1 > t0 are arbitrary, this yields that statement 6 holds. The implication 6⇒1 is much more involved and will be omitted here.

Interpretation 13.14 The matrix F(K) is usually called the dissipation matrix. The inequality F(K) ≥ 0 is an example of a Linear Matrix Inequality (LMI) in the (unknown) matrix K. The crux of the above theorem is that, when the reference point is x* = 0, the set of quadratic storage functions in V(0) is completely characterized by the inequalities K ≥ 0 and F(K) ≥ 0. In other words, the set of normalized quadratic storage functions associated with (Σ, s) coincides with those matrices K for which K = K^T ≥ 0 and F(K) ≥ 0. In particular, the available storage and the required supply are quadratic storage functions, and hence K− and K+ also satisfy F(K−) ≥ 0 and F(K+) ≥ 0. Using Theorem 13.10, it moreover follows that any solution K = K^T ≥ 0 of F(K) ≥ 0 has the property that 0 ≤ K− ≤ K ≤ K+. In other words, among the set of positive semidefinite solutions K of the LMI F(K) ≥ 0 there exists a smallest and a largest element.

Statement 6 provides a frequency domain characterization of dissipativity. For conservative systems with quadratic supply functions a similar characterization can be given. The precise formulation is evident from Theorem 13.13 and is left to the reader.

For physical systems this means that whenever the system is dissipative with respect to a quadratic supply function (and quite some physical systems are), there is at least one energy function which is a quadratic function of the state variable. Moreover, this function is in general non-unique and squeezed in between the available storage and the required supply. Any physically relevant energy function which happens to be of the form V(x) = x^T K x will satisfy the linear matrix inequalities K ≥ 0 and F(K) ≥ 0.

13.2.3 The positive real lemma

We apply the above results to two quadratic supply functions which play an important role in a wide variety of applications. First, consider the system (13.1) together with the quadratic supply function s(u, y) = y^T u. This function satisfies (13.9) with Qyy = 0, Quu = 0 and Quy = Qyu^T = (1/2)I. With these parameters, the following is an immediate consequence of Theorem 13.13.
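For the passivity supply, the frequency-domain condition (13.11) reduces to Re G(jω) ≥ 0. The following sketch checks this on a grid of frequencies for an assumed transfer function G(s) = 1/(s + 1) (an illustrative choice, not an example from the text).

```python
# Frequency-domain condition (13.11) for the passivity supply, checked for the
# assumed example G(s) = 1/(s + 1): here [G; I]* Q [G; I] = Re G(jw), which must
# be nonnegative at every frequency.
import numpy as np

def G(w):
    """Transfer function G(s) = 1/(s + 1) evaluated at s = jw."""
    return 1.0 / (1j * w + 1.0)

Q = np.array([[0.0, 0.5], [0.5, 0.0]])   # supply matrix of s(u, y) = u*y

for w in np.linspace(-100.0, 100.0, 2001):
    v = np.array([G(w), 1.0])
    val = np.conj(v) @ Q @ v             # scalar [G; I]* Q [G; I]
    assert abs(val.imag) < 1e-12         # the quadratic form is real
    assert val.real >= 0.0               # equals Re G(jw) = 1/(1 + w^2) > 0
```

A positive real transfer function such as this one passes the test at all frequencies; a non-passive example (e.g. with a sign-flipped gain) would fail it.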
Corollary 13.15 Suppose that the system Σ described by (13.1) is controllable and has transfer function G. Let s(u, y) = y^T u be a supply function. Then the following statements are equivalent.

1. (Σ, s) is dissipative.

2. The LMI's

    K = K^T ≥ 0,    [−A^T K − KA, −KB + C^T; −B^T K + C, D + D^T] ≥ 0

have a solution.

3. For all ω ∈ R with det(jωI − A) ≠ 0,

    G(jω)* + G(jω) ≥ 0.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

Remark 13.16 Corollary 13.15 is known as the Kalman-Yakubovich-Popov lemma or the positive real lemma and has played a crucial role in questions related to the stability of control systems and the synthesis of passive electrical networks. Transfer functions which satisfy the third statement are generally called positive real.

13.2.4 The bounded real lemma

Second, consider the quadratic supply function

    s(u, y) = γ² u^T u − y^T y          (13.13)

where γ ≥ 0. In a similar fashion we obtain the following result as an immediate consequence of Theorem 13.13.

Corollary 13.17 Suppose that the system Σ described by (13.1) is controllable and has transfer function G. Let s(u, y) = γ² u^T u − y^T y be a supply function. Then the following statements are equivalent.

1. (Σ, s) is dissipative.

2. The LMI's

    K = K^T ≥ 0,    [A^T K + KA + C^T C, KB + C^T D; B^T K + D^T C, D^T D − γ² I] ≤ 0

have a solution.

3. For all ω ∈ R with det(jωI − A) ≠ 0,

    G(jω)* G(jω) ≤ γ² I.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

13.3 Dissipativity and H∞ performance

Let us analyze the importance of the last result. Suppose that the system Σ described by (13.1) is controllable, that A has all its eigenvalues in the open left-half complex plane (i.e., the system Σ is stable), and that the input u is taken from the set L2 of square integrable functions. Then both the state x and the output y of (13.1) are square integrable functions and lim_{t→∞} x(t) = 0. If Σ is dissipative with respect to the supply function (13.13), then the differential form of the dissipation inequality yields that for any quadratic storage function V(x) = x^T K x,

    V̇ ≤ γ² u^T u − y^T y.          (13.14)

Suppose that x(0) = 0. We can therefore integrate (13.14) from t = 0 till ∞ to obtain that for all u ∈ L2,

    γ² ||u||₂² − ||y||₂² ≥ 0,

where the norms are the usual L2 norms. Equivalently,

    sup_{u ∈ L2} ||y||₂ / ||u||₂ ≤ γ.          (13.15)

Now recall from Chapter 5 that the left-hand side of (13.15) is the L2-induced norm or L2-gain of the system (13.1). Moreover, from Chapter 5 we infer that the H∞ norm of the transfer function G is equal to the L2-induced norm. In particular, ||G||_{H∞} ≤ γ. We thus derived the following result.

Theorem 13.18 Suppose that the system Σ described by (13.1) is controllable, stable and has transfer function G. Let s(u, y) = γ² u^T u − y^T y be a supply function. Then the following statements are equivalent.

1. (Σ, s) is dissipative.

2. ||G||_{H∞} ≤ γ.

3. The LMI's

    K = K^T ≥ 0,    [A^T K + KA + C^T C, KB + C^T D; B^T K + D^T C, D^T D − γ² I] ≤ 0

have a solution.

Moreover, V(x) = x^T K x defines a quadratic storage function if and only if K satisfies the above LMI's.

Interpretation 13.19 Statement 3 of Theorem 13.18 provides a test of whether or not the H∞ norm of the transfer function G is smaller than a predefined number γ > 0. We can compute the L2-induced gain of the system (which is the H∞ norm of the transfer function) by minimizing γ > 0 over all variables γ and K > 0 that satisfy the LMI's of statement 3. The point here is that such a test and such a minimization can be performed efficiently with the LMI toolbox as implemented in MATLAB.
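The LMI test of statement 3 can be made concrete on a scalar example. The sketch below assumes A = −1, B = C = 1, D = 0, i.e. G(s) = 1/(s + 1) with H∞ norm 1 (an illustrative choice): the LMI is feasible for γ = 1 with K = 1, while a crude scan over K indicates infeasibility for γ = 0.9, consistent with ||G||∞ = 1.

```python
# Bounded real lemma LMI of statement 3 for the assumed scalar example
# A = -1, B = C = 1, D = 0, so G(s) = 1/(s + 1) and ||G||_inf = 1.
import numpy as np

def brl_lmi(K, gamma):
    # [[A'K + KA + C'C, KB + C'D], [B'K + D'C, D'D - gamma^2 I]] for scalars
    return np.array([[-2.0 * K + 1.0, K],
                     [K, -gamma ** 2]])

def is_nsd(M, tol=1e-9):
    """Negative semidefiniteness test."""
    return bool(np.all(np.linalg.eigvalsh(M) <= tol))

assert is_nsd(brl_lmi(K=1.0, gamma=1.0))       # K = 1 certifies ||G||_inf <= 1
# A crude feasibility scan over K >= 0 finds no certificate for gamma = 0.9:
assert not any(is_nsd(brl_lmi(K, 0.9)) for K in np.linspace(0.0, 10.0, 2001))
```

In practice this scan is replaced by a semidefinite programming solver, which minimizes γ subject to the LMI's, as the text describes.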
13.4 Synthesis of H∞ controllers

In this section we present the main algorithm for the synthesis of H∞ optimal controllers. Consider the general control configuration as depicted in Figure 13.1.

[Figure 13.1: General control configuration. The generalized system G maps the exogenous input w and the control input u to the to-be-controlled output z and the measured output y; the controller K feeds y back to u.]

Here, w are the exogenous inputs (disturbances, noise signals, reference inputs), u denotes the control inputs, z is the to-be-controlled output signal and y denotes the measurements. All variables may be multivariable. The block G denotes the "generalized system" and typically includes a model of the plant together with all weighting functions which are specified by the user. The block K denotes the "generalized controller" and typically includes a feedback controller and/or a feedforward controller. Thus, the block G contains all the known features (plant model, input weightings, output weightings and interconnection structures), while the block K needs to be designed.

Admissible controllers are all linear time-invariant systems K that internally stabilize the configuration of Figure 13.1. Every such admissible controller K gives rise to a closed-loop system which maps disturbance inputs w to the to-be-controlled output variables z. Precisely, if M denotes the closed-loop transfer function M : w → z, then with the obvious partitioning of G,

    M = G11 + G12 K (I − G22 K)^{−1} G21.

The H∞ control problem is formalized as follows:

    Synthesize a stabilizing controller K such that ||M||_{H∞} < γ for some value of γ > 0.

Since our ultimate aim is to minimize the H∞ norm of the closed-loop transfer function M, we wish to synthesize an admissible K for γ as small as possible.

To solve this problem, consider the generalized system G and let

    ẋ = Ax + B1 w + Bu
    z = C1 x + Dw + Eu          (13.16)
    y = Cx + Fw

be a state space description of G. An admissible controller is a finite dimensional linear time-invariant system described as

    ẋc = Ac xc + Bc y          (13.17)
    u = Cc xc + Dc y
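The closed-loop map M = G11 + G12 K (I − G22 K)^{−1} G21 is a lower linear fractional transformation of G and K, and can be evaluated frequency by frequency. The sketch below uses assumed scalar blocks (all numbers are illustrative) to show the mechanics.

```python
# Evaluating the closed-loop map M = G11 + G12*K*(I - G22*K)^{-1}*G21 (a lower
# linear fractional transformation) at one frequency; all block values are
# illustrative assumptions.
import numpy as np

def lower_lft(G11, G12, G21, G22, K):
    """Close the loop u = K*y around the partitioned system G."""
    n = (G22 @ K).shape[0]
    return G11 + G12 @ K @ np.linalg.inv(np.eye(n) - G22 @ K) @ G21

G11 = np.array([[1.0]]); G12 = np.array([[2.0]])
G21 = np.array([[1.0]]); G22 = np.array([[-0.5]])
K = np.array([[0.5]])

M = lower_lft(G11, G12, G21, G22, K)
# By hand: 1 + 2*0.5/(1 - (-0.5)*0.5) = 1 + 1/1.25 = 1.8
assert abs(M[0, 0] - 1.8) < 1e-12
```

With K = 0 the loop is open and M reduces to G11, which is a quick sanity check on any implementation of this formula.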
Controllers are therefore simply parameterized by the matrices Ac, Bc, Cc, Dc. The controlled or closed-loop system then admits the description

    ξ̇ = A ξ + B w          (13.18)
    z = C ξ + D w

where

    A = [A + B Dc C, B Cc; Bc C, Ac],    B = [B1 + B Dc F; Bc F],
    C = [C1 + E Dc C, E Cc],             D = D + E Dc F.          (13.19)

The closed-loop transfer matrix M can therefore be represented as M(s) = C(Is − A)^{−1} B + D. The optimal value of the H∞ controller synthesis problem is defined as

    γ* = inf { ||M||∞  |  (Ac, Bc, Cc, Dc) such that σ(A) ⊂ C− }.

Clearly, the number γ is larger than γ* if and only if there exists a controller such that σ(A) ⊂ C− and ||M||∞ < γ. By Theorem 13.18 (with a slight variation), the controller (Ac, Bc, Cc, Dc) achieves σ(A) ⊂ C− and ||M||_{H∞} < γ if and only if there exists a symmetric matrix X satisfying

    X > 0,    [A^T X + X A + C^T C, X B + C^T D; B^T X + D^T C, D^T D − γ² I] < 0.          (13.20)

The corresponding synthesis problem therefore reads as follows: search controller parameters (Ac, Bc, Cc, Dc) and an X > 0 such that (13.20) holds. The optimal H∞ value γ* is then given by the minimal γ for which a controller can still be found. Recall that A depends on the controller parameters. Since X is also a variable, we observe that the product X A depends nonlinearly on the variables to be found.

There exists a clever transformation by which this nonlinear dependence of the blocks in (13.20) on the decision variables X and (Ac, Bc, Cc, Dc) is transformed into an affine dependence on a new set of decision variables

    v := (X, Y, K, L, M, N).

For this purpose, define

    X(v) := [Y, I; I, X]

    A(v) := [AY + BM, A + BNC; K, XA + LC],    B(v) := [B1 + BNF; XB1 + LF]

    C(v) := [C1 Y + EM, C1 + ENC],             D(v) := D + ENF.

With these definitions, the inequalities (13.20) can be replaced by the inequalities

    X(v) > 0,    [A(v)^T + A(v), B(v), C(v)^T; B(v)^T, −γI, D(v)^T; C(v), D(v), −γI] < 0.          (13.21)
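The closed-loop matrices in (13.19) are straightforward to assemble numerically. The sketch below does so for an assumed scalar plant and an assumed static output-feedback controller (all numeric values are illustrative), and checks internal stability by inspecting the closed-loop eigenvalues.

```python
# Assembling the closed-loop matrices of (13.19) from plant data (13.16) and a
# controller (13.17); all numeric values below are illustrative assumptions.
import numpy as np

# Plant: xdot = A x + B1 w + B u,  z = C1 x + Dw w + E u,  y = C x + F w
A  = np.array([[0.0]]);  B1 = np.array([[1.0]]); B = np.array([[1.0]])
C1 = np.array([[1.0]]);  Dw = np.array([[0.0]]); E = np.array([[0.0]])
C  = np.array([[1.0]]);  F  = np.array([[0.0]])
# An assumed first-order controller (essentially output feedback u = Dc*y):
Ac = np.array([[-1.0]]); Bc = np.array([[1.0]])
Cc = np.array([[0.0]]);  Dc = np.array([[-2.0]])

Acl = np.block([[A + B @ Dc @ C, B @ Cc],
                [Bc @ C, Ac]])
Bcl = np.vstack([B1 + B @ Dc @ F, Bc @ F])
Ccl = np.hstack([C1 + E @ Dc @ C, E @ Cc])
Dcl = Dw + E @ Dc @ F

# Internal stability: all closed-loop eigenvalues in the open left half-plane.
assert np.all(np.linalg.eigvals(Acl).real < 0.0)
```

The integrator plant ẋ = u is stabilized here by the negative feedback gain Dc = −2; the closed-loop spectrum σ(A) lies in C−, as the synthesis problem requires.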
The one-to-one relation between the decision variables in (13.20) and (13.21), and the solutions of the H∞ control problem, is given in the following main result.

Theorem 13.20 (H∞ Synthesis Theorem) The following statements are equivalent.

1. There exists a controller (Ac, Bc, Cc, Dc) and an X satisfying (13.20).

2. There exists v := (X, Y, K, L, M, N) such that the inequalities (13.21) hold.

Moreover, for any such v, the matrix I − XY is invertible and there exist nonsingular U, V such that I − XY = U V^T. The unique solutions X and (Ac, Bc, Cc, Dc) are then given by

    X = [Y, V; I, 0]^{−1} [I, 0; X, U]

and

    [Ac, Bc; Cc, Dc] = [U, XB; 0, I]^{−1} [K − XAY, L; M, N] [V^T, 0; CY, I]^{−1}.

We have thus obtained a general procedure for deriving from analysis inequalities the corresponding synthesis inequalities, and for constructing the corresponding controllers. The power of Theorem 13.20 lies in its simplicity and its generality: virtually all analysis results that are based on a dissipativity constraint with respect to a quadratic supply function can be converted with ease into the corresponding synthesis result.

Remark on the controller order. In Theorem 13.20 we have not restricted the order of the controller. In proving necessity of the solvability of the synthesis inequalities, the size of Ac was arbitrary; the specific construction of a controller in proving sufficiency leads to an Ac that has the same size as A. Hence Theorem 13.20 also includes the side result that controllers of order larger than that of the plant offer no advantage over controllers that have the same order as the plant. The story is very different in reduced order control: there the intention is to include a constraint dim(Ac) ≤ k for some k that is smaller than the dimension of A. It is not very difficult to derive the corresponding synthesis inequalities; they include, however, rank constraints that are hard if not impossible to treat by current optimization techniques.

Remark on strictly proper controllers. Note that the direct feedthrough term Dc of the controller is actually not transformed: for any such v we simply have Dc = N. If we intend to design a strictly proper controller (i.e. Dc = 0), we can just set N = 0 to arrive at the corresponding synthesis inequalities. The construction of the other controller parameters remains the same. Clearly, the same holds if one wishes to impose a more refined structural constraint on the direct feedthrough term, as long as it can be expressed in terms of LMI's.

Remarks on numerical aspects. After having verified the solvability of the synthesis inequalities, we recommend taking some precautions to improve the conditioning of the calculations that reconstruct the controller out of the decision variable v. In particular, one should avoid that the parameters in v get too large, and that I − XY is close to singular, which might render the controller computation ill-conditioned.
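The controller reconstruction of Theorem 13.20 can be sketched numerically. The decision variables below are arbitrary illustrative numbers (not the solution of any actual LMI); the point is only the mechanics of factoring I − XY = U V^T and back-substituting, and the check that the feedthrough comes out as Dc = N, as noted in the remark above.

```python
# Controller reconstruction of Theorem 13.20 from assumed decision variables
# v = (X, Y, K, L, M, N); all numbers are illustrative, not LMI solutions.
import numpy as np

X = np.array([[2.0]]); Y = np.array([[1.5]])
Kv = np.array([[0.3]]); L = np.array([[0.4]])
M = np.array([[0.2]]);  N = np.array([[0.1]])
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])

UVt = np.eye(1) - X @ Y            # I - XY, must be nonsingular
U, Vt = UVt, np.eye(1)             # one of many factorizations U V^T = I - XY

left  = np.block([[U, X @ B], [np.zeros((1, 1)), np.eye(1)]])
mid   = np.block([[Kv - X @ A @ Y, L], [M, N]])
right = np.block([[Vt, np.zeros((1, 1))], [C @ Y, np.eye(1)]])
ctrl  = np.linalg.inv(left) @ mid @ np.linalg.inv(right)

Ac, Bc = ctrl[:1, :1], ctrl[:1, 1:]
Cc, Dc = ctrl[1:, :1], ctrl[1:, 1:]
assert np.allclose(Dc, N)          # the feedthrough term is not transformed
```

The assertion Dc = N holds for any choice of the data, which is exactly why the constraint N = 0 yields strictly proper controllers.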
13.5 H∞ controller synthesis in MATLAB

The result of Theorem 13.20 has been implemented in the LMI Control Toolbox of MATLAB. The LMI Control Toolbox supports continuous- and discrete-time H∞ synthesis using either Riccati- or LMI-based approaches. Both approaches are based on state space calculations. (The Riccati-based approach has not been discussed in this chapter.) The following are the main synthesis routines in the LMI toolbox.

                              Riccati-based    LMI-based
    continuous time systems   hinfric          hinflmi
    discrete time systems     dhinfric         dhinflmi

Riccati-based synthesis routines require that

1. the matrices E and F have full rank;

2. the transfer functions G12(s) := C1(Is − A)^{−1}B + E and G21(s) := C(Is − A)^{−1}B1 + F have no zeros on the jω axis.

LMI-based synthesis routines make no assumptions on the matrices which define the system (13.16). While the LMI approach is computationally more involved for large problems, it has the decisive merit of eliminating the so-called regularity conditions attached to the Riccati-based solutions.

In the LMI toolbox the command

    G = ltisys(A, [B1 B], [C1; C], [D E; F zeros(dy,du)])

defines the state space model (13.16) in the internal LMI format. Here dy and du are the dimensions of the measurement vector y and the control input u, respectively. Information about G is obtained by typing sinfo(G). Plots of responses of G are obtained through splot(G,'bo') for a Bode diagram, splot(G,'sv') for a singular value plot, splot(G,'st') for a step response, etc. The command

    [gopt, K] = hinflmi(G, r)

where r specifies the dimensions of y and u, then returns the optimal H∞ performance γ* in gopt and the optimal controller K in K. The state space matrices (Ac, Bc, Cc, Dc) which define the controller K are returned by the command

    [ac, bc, cc, dc] = ltiss(K)

Examples of the usage of these routines will be given in Chapter 10. We refer to the corresponding help files for more information.