Design Methods for Control Systems

Okko H. Bosgra
Delft University of Technology Delft, The Netherlands

Huibert Kwakernaak
University of Twente Enschede, The Netherlands

Gjerrit Meinsma
University of Twente Enschede, The Netherlands

Notes for a course of the Dutch Institute of Systems and Control Winter term 2001–2002


Preface
Part of these notes was developed for a course of the Dutch Network on Systems and Control with the title “Robust control and H∞ optimization,” which was taught in the Spring of 1991. These first notes were adapted and much expanded for a course with the title “Design Methods for Control Systems,” first taught in the Spring of 1994. They were thoroughly revised for the Winter 1995–1996 course. For the Winter 1996–1997 course Chapter 4 was extensively revised and expanded, and a number of corrections and small additions were made to the other chapters. In the Winter 1997–1998 edition some material was added to Chapter 4 but otherwise there were minor changes only. The changes in the 1999–2000 version were limited to a number of minor corrections. In the 2000–2001 version an index and an appendix were added and Chapter 4 was revised. A couple of mistakes were corrected for the 2001–2002 issue.

The aim of the course is to present a mature overview of several important design techniques for linear control systems, varying from classical to “post-modern.” The emphasis is on ideas, methodology, results, and strong and weak points, not on proof techniques. All the numerical examples were prepared using MATLAB. For many examples and exercises the Control Toolbox is needed. For Chapter 6 the Robust Control Toolbox or the µ-Tools toolbox is indispensable.


Contents

1. Introduction to Feedback Control Theory   1
   1.1.  Introduction   1
   1.2.  Basic feedback theory   3
   1.3.  Closed-loop stability   11
   1.4.  Stability robustness   20
   1.5.  Frequency response design goals   27
   1.6.  Loop shaping   34
   1.7.  Limits of performance   40
   1.8.  Two-degrees-of-freedom feedback systems   47
   1.9.  Conclusions   52
   1.10. Appendix: Proofs   52

2. Classical Control System Design   59
   2.1.  Introduction   59
   2.2.  Steady state error behavior   60
   2.3.  Integral control   64
   2.4.  Frequency response plots   69
   2.5.  Classical control system design   79
   2.6.  Lead, lag, and lag-lead compensation   81
   2.7.  The root locus approach to parameter selection   87
   2.8.  The Guillemin-Truxal design procedure   89
   2.9.  Quantitative feedback theory (QFT)   92
   2.10. Concluding remarks   100

3. LQ, LQG and H2 Control System Design   103
   3.1.  Introduction   103
   3.2.  LQ theory   104
   3.3.  LQG Theory   115
   3.4.  H2 optimization   124
   3.5.  Feedback system design by H2 optimization   127
   3.6.  Examples and applications   133
   3.7.  Appendix: Proofs   139

4. Multivariable Control System Design   153
   4.1.  Introduction   153
   4.2.  Poles and zeros of multivariable systems   158
   4.3.  Norms of signals and systems   167
   4.4.  BIBO and internal stability   174
   4.5.  MIMO structural requirements and design methods   177
   4.6.  Appendix: Proofs and Derivations   192

5. Uncertainty Models and Robustness   197
   5.1.  Introduction   197
   5.2.  Parametric robustness analysis   198
   5.3.  The basic perturbation model   208
   5.4.  The small gain theorem   210
   5.5.  Stability robustness of the basic perturbation model   213
   5.6.  Stability robustness of feedback systems   221
   5.7.  Structured singular value robustness analysis   232
   5.8.  Combined performance and stability robustness   240
   5.9.  Appendix: Proofs   247

6. H∞-Optimization and µ-Synthesis   249
   6.1.  Introduction   249
   6.2.  The mixed sensitivity problem   250
   6.3.  The standard H∞ optimal regulation problem   258
   6.4.  Frequency domain solution of the standard problem   264
   6.5.  State space solution   272
   6.6.  Optimal solutions to the H∞ problem   275
   6.7.  Integral control and high-frequency roll-off   279
   6.8.  µ-Synthesis   288
   6.9.  An application of µ-synthesis   292
   6.10. Appendix: Proofs   304

A. Matrices   311
   A.1.  Basic matrix results   311
   A.2.  Three matrix lemmas   312

Index   324


1. Introduction to Feedback Control Theory

Overview – Feedback is an essential element of automatic control systems. The primary requirements for feedback control systems are stability, performance and robustness. The design targets for linear time-invariant feedback systems may be phrased in terms of frequency response design goals and loop shaping. The design targets need to be consistent with the limits of performance imposed by physical realizability. Extra degrees of freedom in the feedback system configuration introduce more flexibility.

1.1. Introduction

Designing a control system is a creative process involving a number of choices and decisions. These choices depend on the properties of the system that is to be controlled and on the requirements that are to be satisfied by the controlled system. The decisions imply compromises between conflicting requirements. The design of a control system involves the following steps:

1. Characterize the system boundary, that is, specify the scope of the control problem and of the system to be controlled.
2. Establish the type and the placement of actuators in the system, and thus specify the inputs that control the system.
3. Formulate a model for the dynamic behavior of the system, possibly including a description of its uncertainty.
4. Decide on the type and the placement of sensors in the system, and thus specify the variables that are available for feedforward or feedback.
5. Formulate a model for the disturbances and noise signals that affect the system.
6. Specify or choose the class of command signals that are to be followed by certain outputs.
7. Decide upon the functional structure and the character of the controller, also in dependence on its technical implementation.
8. Specify the desirable or required properties and qualities of the control system.

In several of these steps it is crucial to derive useful mathematical models of systems, signals and performance requirements. For the success of a control system design the depth of understanding of the dynamical properties of the system and the signals often is more important than the a priori qualifications of the particular design method.

The models of systems we consider are in general linear and time-invariant. Sometimes they are the result of physical modelling obtained by application of first principles and basic laws. On other occasions they follow from experimental or empirical modelling involving experimentation on a real plant or process, data gathering, and fitting models using methods for system identification.

Some of the steps may need to be performed repeatedly. The reason is that they involve design decisions whose consequences only become clear at later steps. It may then be necessary or useful to revise an earlier decision. Design thus is a process of gaining experience and developing understanding and expertise that leads to a proper balance between conflicting targets and requirements.

The functional specifications for control systems depend on the application. We distinguish different types of control systems:

Regulator systems. The primary function of a regulator system is to keep a designated output within tolerances at a predetermined value despite the effects of load changes and other disturbances.

Servo or positioning systems. In a servo system or positioning control system the system is designed to change the value of an output as commanded by a reference input signal, and in addition is required to act as a regulator system.

Tracking systems. In this case the reference signal is not predetermined but presents itself as a measured or observed signal to be tracked by an output.

Feedback is an essential element of automatic control. This is why § 1.2 presents an elementary survey of a number of basic issues in feedback control theory. These include robustness, linearity and bandwidth improvement, and disturbance reduction.

Stability is a primary requirement for automatic control systems. After recalling in § 1.3 various definitions of stability we review several well known ways of determining stability, including the Nyquist criterion. In view of the importance of stability we elaborate in § 1.4 on the notion of stability robustness. First we recall several classical and more recent notions of stability margin. More refined results follow by using the Nyquist criterion to establish conditions for robust stability with respect to loop gain perturbations and inverse loop gain perturbations.

For single-input single-output feedback systems realizing the most important design targets may be viewed as a process of loop shaping of a one-degree-of-freedom feedback loop. The targets include

• closed-loop stability,
• disturbance attenuation,
• stability robustness,

within the limitations set by

• plant capacity,
• corruption by measurement noise.

Certain properties of the plant. (1. This section introduces various important closed-loop system functions such as the sensitivity function.1 (Cruise control system).2 of Kwakernaak and Sivan (1991). 1 This (1.1 shows a block diagram of an automobile cruise control system. 3 . Figure 1. in particular its pole-zero pattern.2.5. and the input sensitivity function. are further targets • satisfactory closed-loop response. which is used to maintain the speed of a vehicle automatically at a constant level.1. Introduction In this section feedback theory is introduced at a low conceptual level 1 .1) where m is the mass of the car. By Newton’s law mv( ˙ t ) = Ftotal (t ). This feedback mechanism is meant to correct automatically any deviations of the actual vehicle speed from the desired cruise speed. and decreased if the difference is negative. Basic feedback theory Further design targets. 1.8. Ignoring these limitations may well lead to unrealistic design specifications. These results deserve more attention than they generally receive.2. The throttle opening is controlled by the cruise controller in such a way that the throttle opening is increased if the difference vr − v between the reference speed vr and the actual speed is positive. are discussed in Section 1.2.: Block diagram of the cruise control system For later use we set up a simple model of the cruising vehicle that accounts for the major physical effects. It is shown how the simple idea of feedback has far-reaching technical implications. The total force may be expressed as Ftotal (t ) = cu (t ) − ρv2 (t ). which may require a two-degree-of-freedom configuration.1. Basic feedback theory 1. impose inherent restrictions on the closed-loop performance.2.1. • robustness of the closed-loop response.2) section has been adapted from Section 11. t ≥ 0.7 the limitations that right-half plane poles and zeros imply are reviewed. and Ftotal the total force exerted on the car in forward direction. designed for positioning and tracking. the complementary sensitivity function. Example 1. the derivative v ˙ of the speed v its acceleration. reference speed throttle opening vr vr − v cruise controller u car v Figure 1. Loop shaping and prefilter design are discussed in § 1. In § 1. 11 2 and 2-degree-of-freedom feedback systems. The speed v of the car depends on the throttle opening u.
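The model is easy to explore numerically. The following MATLAB sketch integrates Newton's law mv̇(t) = cu(t) − ρv²(t) for a fully opened throttle and compares the result with the analytic acceleration curve derived in Exercise 1.2.2 below. The numerical values of m, c and ρ are not specified in the text; the ones used here are assumptions, chosen so that the top speed is 50 m/s and the time constant T = √(m/ρc) equals the typical value of 10 s quoted further on.

% Simulation of the cruising vehicle, m*vdot = c*u - rho*v^2.
% The parameter values are assumptions chosen for illustration only.
m   = 1600;                            % mass of the car [kg]              (assumed)
c   = 200;                             % engine force at full throttle [N] (assumed)
rho = 0.08;                            % air resistance coefficient [kg/m] (assumed)
u   = 1;                               % fully opened throttle

vdot   = @(t, v) (c*u - rho*v^2)/m;    % right-hand side of Newton's law
[t, v] = ode45(vdot, [0 60], 0);       % accelerate from standstill

vmax = sqrt(c/rho);                    % steady-state (top) speed for u = 1
T    = sqrt(m/(rho*c));                % time constant of the scaled model [s]
plot(t, v, t, vmax*tanh(t/T), '--'); grid on
xlabel('t [s]'); ylabel('v [m/s]')
legend('ode45 solution', 'v_{max} tanh(t/T)', 'Location', 'southeast')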

and its dynamical properties 4 . then the speed √ has a corresponding steady-state value vmax . (1. Show that the solution of the scaled differential equation (1. The “plant” is a given system. 1. with ρ the friction coefficient. To a constant throttle setting u0 corresponds a steady-state cruise speed w0 such that 0 = u0 − w2 ˜ and w = w0 + w ˜ . We linearize the differential equation (1. whose output is to be controlled. Often the function of this part of the feedback system is to provide power. t ≥ 0. t ≥ 0.5) T w( ˙ t ) = u (t ) − w2 (t ). which satisfies 0 = c − ρv2 .2(a). with proportionality constant c. with 0 . (1. Let u = u0 + u |w ˜ | w0 . θ T with θ= T .9) and initial condition w(0 ) = 0 is given by t w(t ) = tanh( ). T t ≥ 0. Substitution of Ftotal into Newton’s law results in mv( ˙ t ) = cu (t ) − ρv2 (t ). t ≥ 0.8) t ≥ 0. Defining max max w= v vmax (1.4) as the speed expressed as a fraction of the top speed. (1.7) (1. If the cruise speed increases from 25% to 75% of the top speed then θ decreases from 20 [s] to 6.10) Plot the scaled speed w as a function of t for T = 10 [s]. (1. Is this a powerful car? 1. The signal r is an external control input.5) for a constant maximal throttle position u ( t ) = 1.2. √ where T = m/ ρc. (1. The friction force is proportional to the square of the speed of the car. The second term ρv2 (t ) is caused by air resistance. The throttle opening varies between 0 (shut) and 1 (fully open). Substitution into (1. the differential equation reduces to t ≥ 0. T w( ˜ t) = u ˜ (t ) − 2w0 w( Omitting the circumflexes we thus have the first-order linear differential equation 1 1 w ˙ = − w + u.2.6) The time constant θ strongly depends on the operating conditions. Feedback configurations To understand and analyze feedback we first consider the configuration of Fig.3) If u (t ) = 1.5) while neglecting second-order terms yields ˙ ˜ t ). A typical practical value for T is T = 10 [s].2. and is proportional to the throttle opening u (t ). Exercise 1. 2 w0 (1.1. v = c/ρ.5).7 [s]. Hence.2 (Acceleration curve). Introduction to Feedback Control Theory The first term cu (t ) represents the propulsion force of the engine.

(b) Equivalent unit feedback configuration are not always favorable. while the return compensator has the IO map ψ. For the purposes of this subsection we reduce the configuration of Fig. The system of Fig. 1. The plant is represented as an input-output-mapping system with input-output (IO) map φ. (b) Unit feedback plant r e φ y r e γ ψ return compensator (a) (b) Figure 1.2(b).3. 1.1 is a unit feedback system. Basic feedback theory r e forward compensator u plant y return compensator (a) r e forward compensator u plant y (b) Figure 1. mapping time signals to time signals. φ and ψ are IO maps of dynamical systems.2. e = r − ψ( y ). The difference e is called the error signal and is fed to the plant via the forward compensator. is said to have unit feedback. The cruise control system of Fig.: (a) Feedback configuration with input-output maps. where the forward compensator has been absorbed into the plant.: Feedback configurations: (a) General.3 (Unit feedback system). the error signal e and the output signal y usually all are time signals. (1. 1. The control input r .11) These equations may or may not have a solution e and y for any given control input r . The output y of the plant is fed back via the return compensator and subtracted from the external input r . in which the return compensator is a unit gain. Correspondingly. 1. If a solution 5 .1. Example 1.2.2(a) to that of Fig.3(a).2. The feedback system is represented by the equations y = φ(e ).

13) has a solution e. 6 .3. Hence. (1.16) (1. We refer to it as the feedback equation. the error signal e satisfies the equation e = r − ψ(φ(e )). is the IO map of the series connection of the plant followed by the return compensator. or y ≈ ψ−1 (r ).15) In words: If the gain is large then the error e is small compared with the control input r . Going back to the configuration of Fig.3). or e + γ(e ) = r. This is not necessarily always the case. γ(e ) this implies that (1. that is. and depends on the “capacity” of the plant.” We shall see that one of the important consequences of this is that the map from the external input r to the output y is approximately the inverse ψ−1 of the IO map ψ of the return compensator. Equation (1. (1.12) reduces the feedback system to a unit feedback system as in Fig.and amplitude-limited signals. If e is bounded for every bounded r then the closedloop system by definition is BIBO stable3 . Suppose also that for this class of signals the “gain” of the map γ is large. 2A 3A signal is bounded if its norm is finite. γ(e ) e .3(b).12) Here γ = ψ ◦ φ. This class of signals generally consists of signals that are limited in bandwidth and in amplitude. (1. the existence of solutions to the feedback equation is equivalent to the (BIBO) stability of the closed-loop system. Suppose that for a given class of external input signals r the feedback equation e + γ(e ) = r (1. we see that this implies that ψ( y ) ≈ r . 1. (1. and is called the loop IO map.3.2. with ◦ denoting map composition.3(a). the IO map from the control input r to the control system output y is almost independent of the plant IO map. so that γ(e ) ≈ r. Note that it is assumed that the feedback equation has a bounded solution2 e for every bounded r . The class usually consists of band. Hence.17) where ψ−1 is the inverse of the map ψ (assuming that it exists). See also § 1.13) we may neglect the first term on the left. system is BIBO (bounded-input bounded-output) stable if every bounded input results in a bounded output (see § 1. Note also that generally the gain may only be expected to be large for a class of error signals. Then in (1. 1. Norms of signals are discussed in § 4.12) is a functional equation for the time signal e. Note that because γ maps time functions into time functions. Introduction to Feedback Control Theory exists.3. denoted E . High-gain feedback Feedback is most effective if the loop IO map γ has “large gain.14) with · some norm on the signal space in which e is defined.1. 1. Since by assumption e e r .

θ T (1. If the gain g is large then Hcl ( jω) ≈ 1 for low frequencies. T (1.2.21) (1. A simple form of feedback that works reasonably well but not more than that for the cruise control system of Example 1.7) (once again omitting the circumflexes) we have g 1 w ˙ = − w + (r − w). θ T that is. w ˙ =− g 1 + θ T w+ g r.2.24) in spite of uncertainty about the plant dynamics is called robustness of the feedback system with respect to plant uncertainty. and write w(t ) = w0 + w( ˜ t ) as in Example 1.1 is proportional feedback.22) After Laplace transformation of (1.2. (1.17) remains valid as long as the feedback equation has a bounded solution e for every r and the gain is large. This means that the throttle opening is controlled according to u (t ) − u0 = g[r (t ) − w(t )]. even if the IO map of the plant is poorly defined or has unfavorable properties. Basic feedback theory Example 1. ω ∈ R.4. Denote w0 as the steady-state cruising speed corresponding to the nominal throttle setting u0 .1. The larger the gain.23) Hcl (s ) We follow the custom of operational calculus not to distinguish between a time signal and its Laplace transform. Robustness of feedback systems The approximate identity y ≈ ψ−1 (r ) (1.2.4 gives Bode magnitude plots of the closed-loop frequency response Hcl ( jω). (1.4 (Proportional control of the cruise control system). Setting r ˜(t ) = r (t ) − w0 we have u ˜ (t ) = g[r ˜(t ) − w( ˜ t )] . (1. the larger the frequency region is over which this holds.20) Stability is ensured as long as g 1 + > 0. 7 .21) and solving for the Laplace transform of w we identify the closed-loop transfer function Hcl from w= s+ g T g 1 θ + T r. The fact that y ≈ ψ−1 (r ) (1.19) Substituting this into the linearized equation (1. This results in a matching accuracy for the IO map of the feedback system as long as the gain is large.2. Figure 1. 1. The IO map ψ of the return compensator may often be implemented with good accuracy.18) with the gain g a constant and u0 a nominal throttle setting. for different values of the gain g.1.
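The closed-loop frequency response (1.23) is easily plotted for several gains, reproducing the type of magnitude plots shown in Figure 1.4. In the sketch below T = 10 s is the typical value quoted earlier, while the operating-point time constant θ and the three gains are assumed values.

% Magnitude of the closed-loop frequency response
% Hcl(jw) = (g/T)/(jw + 1/theta + g/T) for three gains g1 < g2 < g3.
T     = 10;                    % scaled model time constant [s]
theta = 10;                    % time constant at the operating point [s] (assumed)
gains = [2 10 50];             % proportional gains g1 < g2 < g3          (assumed)

w = logspace(-4, 1, 400);      % angular frequency grid [rad/s]
H = zeros(numel(gains), numel(w));
for k = 1:numel(gains)
    g = gains(k);
    H(k, :) = (g/T) ./ (1j*w + 1/theta + g/T);
end
loglog(w, abs(H)); grid on
xlabel('\omega [rad/s]'); ylabel('|H_{cl}(j\omega)|')
legend('g = 2', 'g = 10', 'g = 50', 'Location', 'southwest')
% The low-frequency gain g/(T/theta + g) approaches 1 and the bandwidth
% 1/theta + g/T grows as the gain g is increased.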

1. Linearity improvement is a consequence of the fact that if the loop gain is large enough.26) g Hence. This increases the bandwidth.2. Example 1. Also bandwidth improvement is a result of the high gain property.4 is a first-order system.5. Introduction to Feedback Control Theory 1 | Hcl | (log scale) g2 g1 g3 ω (log scale) 1 θ + g T Figure 1. They include linearity improvement.27) The open-loop frequency response function obviously is much more sensitive to variations in the time constant θ than the closed-loop frequency response. the IO map of the feedback system approximately equals the inverse ψ−1 of the IO map of the return compensator. no matter how nonlinear the plant IO map φ is.23) as g 1 1 = + . and disturbance reduction. bandwidth improvement. several other favorable effects may be achieved by feedback. θcl does not depend much on the open-loop time constant θ . The proportional cruise feedback control system of Example 1.2. For g θ we have Hcl ( jω) ≈ jω + g T g T ≈1 for |ω| ≤ g . Linearity and bandwidth improvement by feedback Besides robustness.25) T T As long as g θ the closed-loop time constant θcl approximately equals g . θcl θ T (1. 8 .1.5 (Cruise control system). The closed-loop time constant θcl follows by inspection of (1.: Magnitude plots of the closed-loop frequency response function for three values of the gain with g1 < g2 < g3 . which is quite variable with the speed of the T vehicle. If this IO map is linear. θ (1. like the open-loop system. The frequency response of the open-loop system is 1 T H ( jω) = jω + 1 θ ≈ θ T for |ω| < 1 . up to the frequency T the closed-loop frequency response is very nearly equal to the unit gain.2. so is the IO map of the feedback system. the IO map of the feedback system is close to unity over those frequencies for which the feedback gain is large. T (1.4. Hence. with good approximation. If the return compensator is a unit gain.

The feedback system then is described by the equations z = d + y.5 and g = 10. w0 ). that is. These disturbances are usually caused by environmental effects.2.5(a). or z = d − δ( z ).6. Comment on the two plots. (b) Equivalent unit feedback configuration in the absence of the control input r .7 (Steady-state linearity improvement of the proportional cruise control system).: (a) Feedback system with disturbance. 1.2. To assess the linearity improvement by feedback compare this plot with a plot of w − w0 versus u − u0 for the open-loop system.5. Eliminating the output y and the error signal e we have z = d + φ(e ) = d + φ(−ψ( z )). with w0 = scheme u − u0 = g (r − w).30) Calculate the steady-state dependence of w − w0 on r − w0 (assuming that r is constant). Example 1. The dynamics of the vehicle are given by Tw ˙ = u − w2 . we assume r = 0. 1.28) For positive gain g the closed-loop time constant is smaller than the open-loop time constant θ and. the closed-loop bandwidth is greater than the open-loop bandwidth. For simplicity we study the effect of the disturbance in the absence of any external control input. Exercise 1. It frequently happens that in the configuration of Fig. Basic feedback theory r e disturbance d plant y z φ d z δ ψ return compenator (a) (b) Figure 1. The effect of disturbances may often be modeled by adding a disturbance signal d at the output of the plant as in Fig.2. For a given steady-state solution (u0 .1.5 the time constant of the closed-loop proportional cruise control system is θcl = θ 1+ gθ T . consider the proportional feedback (1. (1. and e = −ψ( z ). y = φ(e ).2.2(a) external disturbances affect the output y. hence. Plot this dependence for w0 = 0.31) 9 . √ (1.29) u0 . In Example 1. 1.2. Disturbance reduction A further useful property of feedback is that the effect of (external) disturbances is reduced. (1.6 (Bandwidth improvement of the cruise control system).

5(b). 1.or downhill grades.37) S (s ) S is known as the sensitivity function of the closed-loop system.1 this leads to the modification 1 1 w ˙ =− w− u+d θ T (1.33) we see that in the open-loop system the effect of the disturbance on the output is wol = 1 s+ 1 θ d.18) this results in the modification w ˙ =− g 1 + θ T w+ g r+d T (1. Introduction to Feedback Control Theory where δ = (−φ) ◦ (−ψ). All this holds provided the feedback equation (1.34) and (1.7). The progress of the cruising vehicle of Example 1. that is.36) This signal wol actually is the “equivalent disturbance at the output” of Fig.31) is a feedback equation for the configuration of Fig. so that the effect of the disturbance is much reduced. s + θ1 cl (1.35) shows that wcl = s+ s+ 1 θ 1 θcl wol .2.6 shows the Bode magnitude plot of the frequency response function S ( jω).21). Under the effect of the proportional feedback scheme (1. Laplace transformation and solution for w (while setting r = 0) shows that the effect of the disturbance on the closed-loop system is represented by wcl = 1 d. but it is obtained by “breaking the loop” at a different point compared with when constructing the loop IO map γ = ψ ◦ φ.2.6(a).1 may be affected by head or tail winds and up. (1. The equation (1.38) 10 . provided the closed-loop system is BIBO stable. (1. 1. (1.34) of (1. These effects may be represented by modifying the dynamical equation (1. The map δ is also called a loop IO map. The plot shows that the open-loop disturbances are attenuated by a factor 1 θcl = θ θ 1+ g T (1. with d the disturbing force. By analogy with the configuration of Fig. Example 1.1. After scaling and linearization as in Example 1. Comparison of (1.35) From (1.2. 1.32) This means that the output z of the feedback system is small compared with the disturbance d .3(b) it follows that if the gain is large in the sense that δ( z ) z then we have z d .33) of (1.31) has at all a bounded solution z for any bounded d . Figure 1.8 (Disturbance reduction in the proportional cruise control system).3) to mv ˙ = cu − ρv2 + d .
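The disturbance attenuation described by the sensitivity function is easily visualized. The sketch below plots |S(jω)| = |(jω + 1/θ)/(jω + 1/θcl)| and reproduces the shape of Figure 1.6; θ and g are again assumed values.

% Sensitivity function S(s) = (s + 1/theta)/(s + 1/theta_cl) of the
% proportional cruise control loop (cf. Figure 1.6).
T = 10;  theta = 10;  g = 10;        % theta and g are assumed values
theta_cl = 1/(1/theta + g/T);

w = logspace(-4, 1, 500);
S = (1j*w + 1/theta) ./ (1j*w + 1/theta_cl);
loglog(w, abs(S)); grid on
xlabel('\omega [rad/s]'); ylabel('|S(j\omega)|')
% Constant and slow disturbances are attenuated by the factor
% S(0) = theta_cl/theta = 1/(1 + g*theta/T); above omega = 1/theta_cl
% the disturbances pass unattenuated.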

Closed-loop stability until the angular frequency 1/θ . The function of the feedback loop is to provide stability. and disturbance attenuation. 1. Feedback implies measuring the output by means of an output sensor. 1.5.1.1.3. 3. After a gradual rise of the magnitude there is no attenuation or amplification of the disturbances for frequencies over 1/θcl . Pitfalls of feedback As we have shown in this section. The result is reduction of the gain and an associated loss of performance. A MIMO or SISO plant with transfer matrix P is connected in feedback with a forward compensator with transfer matrix C . The disturbance attenuation is not satisfactory at very low frequencies. feedback may achieve very useful effects. When the prefilter is replaced 1 | S ( jω)| (log scale) θcl /θ 1/θ 1/θcl ω (log scale) Figure 1. Introduction In the remainder of this chapter we elaborate some of the ideas of Section 1. The associated measurement errors and measurement noise may cause loss of accuracy. Even if the feedback system is stable then high gain may result in overly large inputs to the plant.3. The function of the prefilter is to improve the closed-loop response to command inputs. Na¨ ıvely making the gain of the system large may easily result in an unstable feedback system. We return to these points in § 1.7. 1. 2. Most of the results are stated for single-input single-output (SISO) systems but from time to time also multi-input multi-output (MIMO) results are discussed. zero-frequency disturbances) are not completely eliminated because S (0 ) = 0. If the feedback system is unstable then the feedback equation has no bounded solutions and the beneficial effects of feedback are nonexistent.3 it is explained how this effect may be overcome by applying integral control.: Magnitude plot of the sensitivity function of the proportional cruise control system 11 . The configuration of Fig. In particular.2 for linear timeinvariant feedback systems. This means that a steady head wind or a long uphill grade slow the car down.3. We consider the two-degree-of-freedom configuration of Fig. which the plant cannot absorb. robustness. In § 2. 1. constant disturbances (that is. It also has pitfalls: 1.7.7 is said to have two degrees of freedom because both the compensator C and the prefilter F are free to be chosen by the designer.6.2. The feedback loop is connected in series with a prefilter with transfer matrix F . Closed-loop stability 1.

39–1.7 (or any other control system) is stable if the state representation (1.7. (1. B.7. including the plant. and D are constant matrices of appropriate dimensions. In § 1.8(a) the system is called a single-degree-of-freedom feedback system. 1. and study which is the most effective configuration. 1. (1. 1. C . We assume that these state space representations include all the important dynamic aspects of the systems. Stability In the rest of this section we discuss the stability of the closed-loop control system of Fig. the compensator and the prefilter. A. if for a zero input and any initial state the state of the system asymptotically approaches the zero state as time increases. with a return compensator instead of a forward compensator.3.8(b). We assume that the overall system.2.: Two-degree-of-freedom feedback system configuration r e C u P z r u P z C (a) (b) Figure 1. The state representation of the overall system is formed by combining the state space representations of the component systems.: Single-degree-of-freedom feedback system configurations with the unit system as in Fig. The feedback system of Fig. Definition 1.8 we consider alternative two-degree-of-freedom configurations. They may be uncontrollable or unobservable.39) The command signal r is the external input to the overall system. Also the configuration of Fig.3.8. is called a single-degree-of-freedom system. The signal x is the state of the overall system. the plant input u and the error signal e jointly form the output. 1. 1. Introduction to Feedback Control Theory reference input r prefilter F error signal e compensator C plant input u plant P output z Figure 1.40) = Ax (t ) + Br (t ). has a state representation x ˙(t )  z (t ) u (t ) e (t )  = Cx (t ) + Dr (t ). Besides the reference input r the external input to the overall system may include other exogenous signals such as disturbances and measurement noise. 12 .40) is asymptotically stable.1. that is.1 (Stability of a closed-loop system). while the control system output z.

v1 . Definition 1. 2. prove that if the system is BIBO stable and has no unstable unobservable modes4 then it is stable in the sense of Definition 1.9. Identifying five exposed interconnections. inject an “internal” input signal vi (with i an index). Conversely.3. u. Stability in the sense of Definition 1.3.40). 4A 13 .1. To illustrate the definition of internal stability we consider the two-degree-of-freedom feedback configuration of Fig. 5 The state system x ˙ = Ax + Bu. 1. A signal is said to be bounded if its norm is finite. w2 . Part (1) of this exercise obviously still holds. v2 . w1 .3.7 is called BIBO stable (bounded-inputbounded-output stable) if every bounded input r results in bounded outputs z. prove that if the system is BIBO stable in this sense and has no unstable unobservable and uncontrollable modes5 then it is stable in the sense of Definition 1. We discuss the notion of the norm of a signal at some length in § 4. w4 . is strongly related to the presence and choice of input and output signals.3.1. Then the system is said to be internally stable if the system whose input consists of the joint (external and internal) inputs and whose output is formed by the joint (external and internal) outputs is BIBO stable. Exercise 1. The system has the external input r . We define the notion of internal stability of an interconnected systems in the following manner. on the other hand. 3.1 then it is also BIBO stable. 1. With this definition.3. For the time being we say that a (vector-valued) signal v(t ) is bounded if there exists a constant M such that |vi (t )| ≤ M for all t and for each component vi of v. It deals with the stability of interconnected systems. and the external output z. y = Cx + Du has an unobservable mode if the homogeneous equation x ˙ = Ax has a nontrivial solution x such that Cx = 0.3 (Stability and BIBO stability). v3 . The system is internally stable if the system with input ( r . The mode is unstable if this solution x ( t ) does not approach 0 as t → ∞. w3 . v5 ) and output (z.1. we include five internal input-output signal pairs as shown in Fig. We introduce a further form of stability.2 (BIBO stability). and define an additional “internal” output signal wi just after the injection point. Internal stability is BIBO stability but decoupled from a particular choice of inputs and outputs. of which the various one. Conversely. Prove that if the closed-loop system is stable in the sense of Definition 1.3. Definition 1. 1.3. 1.3. and e for any initial condition on the state.10.1 is independent of the presence or absence of inputs and outputs.4 (Internal stability of an interconnected system). BIBO stability. In each “exposed interconnection” of the interconnected system. y = Cx + Du has an uncontrollable mode if the state differential equation x ˙ = Ax + Bu has a solution x that is independent of u.39–1.3. Closed-loop stability Given the state representation (1. w5 ) is BIBO stable. There is another important form of stability. Often BIBO stability is defined so that bounded input signals are required to result in bounded output signals for zero initial conditions of the state. the overall system is asymptotically stable if and only if all the eigenvalues of the matrix A have strictly negative real part. Exercise 1.5 (Stability and internal stability). v4 .and two-degree-of-freedom feedback systems we encountered are examples. To know what “bounded” means we need a norm for the input and output signals. state system x ˙ = Ax + Bu.3. The system of Fig.

: Two-degree-of-freedom feedback system v1 r F w1 v2 w2 C v3 w3 P v4 w4 z w5 v5 Figure 1.1. If no unstable unobservable and uncontrollable modes are present then internal stability is equivalent to stability in the sense of Definition 1.3.42) is the characteristic polynomial of its system matrix A.3. (1. 2. is stabilizing. 1. Hint: This follows from Exercise 1.9. Introduction to Feedback Control Theory r F C P z Figure 1.3.3.1. (1. 14 .1. 1. Cx (t ) + Du (t ).41) (1.43) The roots of the characteristic polynomial χ are the eigenvalues of the system.3(b). Closed-loop characteristic polynomial For later use we discuss the relation between the state and transfer functions representations of the closed-loop configuration of Fig.: Two-degree-of-freedom system with internal inputs and outputs added 1. When using input-output descriptions.10. 1. prove that if the system is internally stable and none of the component systems has any unstable unobservable modes then the system is stable in the sense of Definition 1. all lie in the open left-half complex plane. if the closed loop with that controller is stable or internally stable. such as transfer functions.9 is stable then it is internally stable.3. The system is stable if and only if the eigenvalues all have strictly negative real parts. The characteristic polynomial χ of a system with state space representation x ˙(t ) y (t ) = = Ax (t ) + Bu (t ). Prove that if the system of Fig.1. then internal stability is usually easier to check than stability in the sense of Definition 1.9. or. Conversely. We say that a controller stabilizes a loop. that is. χ(s ) = det(sI − A ).3.
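A minimal numerical check of stability in the sense of Definition 1.3.1 is to form the closed-loop system in state space form and to inspect the eigenvalues of its system matrix. The plant and compensator in the sketch below are arbitrary illustrative choices, and the Control Toolbox is used.

% Closed-loop stability check via the eigenvalues of the closed-loop
% state matrix (Definition 1.3.1).  P and C are illustrative examples.
P = tf(1, [1 1 0]);              % plant       P(s) = 1/(s(s+1))    (assumed)
C = tf([2 1], [1 2]);            % compensator C(s) = (2s+1)/(s+2)  (assumed)

clsys = ss(feedback(P*C, 1));    % closed-loop system r -> z in state space form
ev = eig(clsys.A)                % eigenvalues of the closed-loop state matrix
stable = all(real(ev) < 0)       % stable iff all eigenvalues have Re < 0
% Note: feedback() works with the series connection P*C, so unstable
% pole-zero cancellations between P and C would not show up here; they
% have to be excluded separately (internal stability, Definition 1.3.4).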

1.45) with R and Q polynomials. It is proved in § 1.9 is stable if and only if both the prefilter and the feedback loop are stable. We assume that L is proper. 1. We conclude that the configuration of Fig.46) We return to the configuration of Fig. L = PC is the transfer matrix of the series connection of the compensator and the plant. 1.3. respectively.9 consists of the series connection of the prefilter F with the feedback loop of Fig. Q (s ) (1. N and Y are their numerator polynomials.3. 1.44) We call χcl the closed-loop characteristic polynomial. Q (s ) (1.6 (Stability of a series connection). Note that we allow no cancellation between the numerator polynomial and the characteristic polynomial in the denominator.11(b).10 that the characteristic polynomial of the closed-loop system of Fig.11(a) represent it as in Fig. Again we allow no cancellation between the numerator polynomials and the characteristic polynomials in the denominators. To study the stability of the MIMO or SISO feedback loop of e C u P y L (a) Figure 1. Closed-loop stability The configuration of Fig.11. It follows from (1. Suppose that the series connection L has the characteristic polynomial χ. det[ I + L (∞ )] (1. 1. that is. L (∞ ) exists. Since 15 . where Q = χ is the open-loop characteristic polynomial.44) that within the constant factor 1 + L (∞ ) the closed-loop characteristic polynomial is χ(s )(1 + L (s )) = Q (s )[1 + R (s ) ] = Q (s ) + R (s ). Prove that the characteristic polynomial of the series connection of the two systems with state x z is χ1 χ2 . Consider two systems.11(a). D (s ) C (s ) = Y (s ) . one with state x and one with state z and let χ1 and χ2 denote their respective characteristic polynomials. X (s ) (1. In the SISO case we may write the loop transfer function as L (s ) = R (s ) .1.11(a) and write the transfer functions of the plant and the compensator as P (s ) = N (s ) . L is called the loop transfer matrix. We call χ the openloop characteristic polynomial. Exercise 1.: MIMO or SISO feedback systems (b) Fig. 1. 1.47) D and X are the open-loop characteristic polynomials of the plant and the compensator.11 is χcl (s ) = χ(s ) det[ I + L (s )] . From this it follows that the eigenvalues of the series connection consist of the eigenvalues of the first system together with the eigenvalues of the second system.

Otherwise.3. The stability of a feedback system may be tested by calculating the roots of its characteristic polynomial. Hence. 16 .11(a) is D (s ) X (s ) + N (s )Y (s ).3.49) may be solved by expanding the various polynomials as powers of the undeterminate variable and equate coefficients of like powers.1. The system is stable if and only if each root has strictly negative real part.49) may be considered as an equation in the unknown polynomials X and Y . 1. If the polynomials N and D have a common nontrivial polynomial factor that is not a factor of χ then obviously no solution exists.2. A necessary but not sufficient condition for stability is that all the coefficients of the characteristic polynomial have the same sign. then (1. The equations are known as the Sylvester equations (Kailath 1980). so that by equating coefficients of like powers we obtain n + m + 1 equations.4. Suppose that the polynomials N and D have a common poly- nomial factor. This leads to a set of linear equations in the coefficients of the unknown polynomials X and Y . (1. rational function or matrix P is strictly proper if lim|s|→∞ P ( s ) = 0. a solution always exists. which may easily be solved. In particular. The Routh-Hurwitz stability criterion. we expect to solve the pole assignment problem with a compensator of the same order as the plant. If the plant numerator and denominator polynomials N and D are known. We try to find a strictly proper compensator C = Y / X with degrees deg X = m and deg Y = m − 1 to be determined. which is reviewed in Section 5. allows to test for stability without explicitly computing the roots.7 (Hidden modes).48) With a slight abuse of terminology this polynomial is often referred to as the closed-loop characteristic polynomial. The B´ ezout equation (1.48) by its leading coefficient6. Thus. Exercise 1. with deg D = n and deg N < n given. Introduction to Feedback Control Theory R (s ) = N (s )Y (s ) and Q (s ) = D (s ) X (s ) we obtain from (1. Show that the closed-loop characteristic polynomial also contains this factor.46) the well-known result that within a constant factor the closed-loop characteristic polynomial of the configuration of Fig. The degree of χ = DX + NY is n + m. A rational function P is strictly proper if and only if the degree of its numerator is less than the degree of its denominator. The same observation holds for any unobservable and uncontrollable poles of the compensator. Suppose that P = N / D is strictly proper7 . the coefficient of the highest-order term. To set up the Sylvester equations we need to know the degrees of the polynomials X and Y . Pole assignment The relation χ = DX + NY (1. This equation is known as the B´ ezout equation. any unstable uncontrollable or unobservable modes cannot be stabilized. This condition is known as Descartes’ rule of signs. The actual characteristic polynomial is obtained by dividing (1. 6 That 7A is. This factor corresponds to one or several unobservable or uncontrollable modes of the plant. 1. and χ is specified.49) for the characteristic polynomial (possibly within a constant) may be used for what is known as pole assignment or pole placement. Setting this number equal to the number 2m + 1 of unknown coefficients of the polynomials Y and X it follows that m = n. the eigenvalues corresponding to unobservable or uncontrollable modes cannot be changed by feedback.
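The representation of the closed-loop characteristic polynomial as D X + N Y lends itself to a direct numerical stability test with elementary commands; the plant and compensator coefficients below are the same illustrative choices as in the previous sketch.

% Stability test from the closed-loop characteristic polynomial
% D(s)X(s) + N(s)Y(s), using polynomial multiplication (conv).
N = 1;           D = [1 1 0];    % plant       P = N/D = 1/(s(s+1))    (assumed)
Y = [2 1];       X = [1 2];      % compensator C = Y/X = (2s+1)/(s+2)  (assumed)

p = conv(D, X);                                   % D(s)X(s)
q = conv(N, Y);                                   % N(s)Y(s)
chi_cl = p + [zeros(1, numel(p)-numel(q)), q];    % characteristic polynomial

closed_loop_poles = roots(chi_cl)
stable = all(real(closed_loop_poles) < 0)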

56) (1.52) the unknown coefficients follow by inspection. suppose that P (s ) C (s ) χ(s ) = = = b n−1 sn−1 + bn−2 sn−2 + · · · + b0 .52) Comparing (1.58) 17 .9 (Sylvester equations). √ and place a non-dominant pair at 5 2(−1 ± j ). Closed-loop stability Example 1. √ We aim at a dominant pole pair at 1 2 2(−1 ± j ) to obtain a closed-loop bandwidth of 1 [rad/s].57) Show that the equation χ = DX + NY may be arranged as   an 0 ··· ··· 0   a n−1 an 0 · · · 0   xn   · · · · · · · · · · · · · · ·   xn−1      ··· ··· ··· ··· ···  ···      a0   a1 · · · · · · a n   ···     0   · · · a a · · · a 0 1 n−1      ··· ··· ··· ··· ···  ···     · · · · · · · · · · · · · · ·  x0 0 ··· ··· 0 a0 x A   0 ··· ··· ··· 0      0 ··· ··· ··· 0  χ2n  yn−1  bn−1     0 0 ··· 0    yn−2  χ2n−1    ···   ···  b n−2 bn−1 0 · · · 0          ··· ··· ··· ···  +  ···  =  ···   ···   b0     b · · · · · · b · · · · · · 1 n−1         0     b b · · · b · · · · · · 0 1 n−2    ··· · · · · · · · · · · · ·  y0 χ0 0 ··· ··· 0 b0 y c B (1. xn sn + xn−1 sn−1 + · · · + x0 χ2n s2n + χ2n−1 s2n−1 + · · · + χ0 . an sn + an−1 sn−1 + · · · + a0 yn−1 sn−1 + yn−2 sn−2 + · · · + y0 . Hence. √ √ χ(s ) = (s2 + s 2 + 1 )(s2 + 10 2s + 100 ) √ √ (1. √ (1.3. and we see that √ (1. Consider a second-order plant with transfer function 1 . (1. P (s ) = Write X (s ) = x2 s2 + x1 s + x0 and Y (s ) = y1 s + y0 .55) (1.54) Y (s ) = 110 2s + 100.51) and (1. Then D ( s ) X ( s ) + N ( s )Y ( s ) = = s2 ( x 2 s2 + x 1 s + x 0 ) + ( y 1 s + y 0 ) x 2 s4 + x 1 s3 + x 0 s2 + y 1 s + y 0 .53) X (s ) = s2 + 11 2s + 121. Exercise 1. More generally.8 (Pole assignment).50) s2 Because the compensator is expected to have order two we need to assign four closed-loop poles.1. (1.3.3. (1.51) = s4 + 11 2s3 + 121s2 + 110 2s + 100.
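The Sylvester equations are easily set up and solved numerically. The sketch below does this for the double integrator of Example 1.3.8 and recovers X(s) = s² + 11√2 s + 121 and Y(s) = 110√2 s + 100; the way the coefficient matrix is assembled is one convenient arrangement and need not coincide exactly with the ordering used in (1.58).

% Pole assignment by solving D*X + N*Y = chi for the plant P(s) = 1/s^2
% of Example 1.3.8 (n = 2, deg X = n, deg Y = n - 1).
D   = [1 0 0];                                   % D(s) = s^2
N   = 1;                                         % N(s) = 1
chi = conv([1 sqrt(2) 1], [1 10*sqrt(2) 100]);   % desired closed-loop polynomial
n   = 2;                                         % plant order

% Coefficient matrix: columns hold the coefficients of s^k*D(s) for
% k = n..0 and of s^k*N(s) for k = n-1..0, all padded to degree 2n.
M = zeros(2*n+1, 2*n+1);  col = 0;
for k = n:-1:0
    col = col + 1;
    M(:, col) = [zeros(1, 2*n - (numel(D)-1) - k), D, zeros(1, k)].';
end
for k = n-1:-1:0
    col = col + 1;
    M(:, col) = [zeros(1, 2*n - (numel(N)-1) - k), N, zeros(1, k)].';
end

sol = M \ chi(:);                % [x_n ... x_0, y_{n-1} ... y_0]
X = sol(1:n+1).'                 % expect  s^2 + 11*sqrt(2)*s + 121
Y = sol(n+2:end).'               % expect  110*sqrt(2)*s + 100

% Verify that D*X + N*Y indeed reproduces chi (up to rounding):
p = conv(D, X);  q = conv(N, Y);
residual = norm(p + [zeros(1, numel(p)-numel(q)), q] - chi)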

60) Because for finite-dimensional systems L is a rational function with real coefficients. Consider the simple MIMO feedback loop of Fig. Nyquist criterion In classical control theory closed-loop stability is often studied with the help of the Nyquist stability criterion.12 shows the Nyquist plot of the loop gain transfer function L (s ) = k .5. (1. for short — of the feedback loop. 1.59) B and solved for x and y. Nyquist plot is discussed at more length in § 2.1.61) with k and θ positive constants. 9 The 8 That 18 . 1 + sθ (1. Associated with increasing ω we may define a positive direction along the locus.12. The block marked “ L” is the series connection of the compensator K and the plant P. 1. they have no nontrivial common factors.3. If the polynomials D and N are coprime8 then the square matrix A is nonsingular. This is the loop gain of the cruise control system of Example 1. We first state the best known version of the Nyquist criterion. is. the Nyquist plot is symmetric with respect to the real axis.: Nyquist plot of the loop gain transfer function L (s ) = k /(1 + sθ) This in turn may be represented as A B x =c y (1.2. For a rational function L this means that the degree of its numerator is not greater than that of its denominator.3.5 with k = g θ/ T . Fig. L is a scalar function. If L is proper10 and has no poles on the imaginary axis then the locus is a closed curve. Introduction to Feedback Control Theory Im ω = −∞ −1 ω=∞ ω ω=0 0 k Re Figure 1.11.4. For a SISO system. The transfer matrix L = PK is called the loop gain matrix — or loop gain. 1. By way of example. which is a well-known graphical test. 10 A rational matrix function L is proper if lim |s|→∞ L ( s ) exists. Define the Nyquist plot9 of the scalar loop gain L as the curve traced in the complex plane by L ( jω). ω ∈ R.

and define the loop gain L = PK . Assume that in the feedback configuration of Fig. this number may be negative.10. 11 This means the number of clockwise encirclements minus the number of anticlockwise encirclements. 1. and may be phrased as follows: Summary 1. Assume also that the Nyquist plot of det( I + L ) does not pass through the origin.11 (Nyquist plot).8(a) or (b).10 (Nyquist stability criterion for SISO open-loop stable systems). For SISO systems the loop gain L is scalar. Then the closed-loop system is stable if and only if the Nyquist plot of L does not encircle the point −1. Exercise 1.1. the “unstable open-loop poles” are the right-half plane eigenvalues of the system matrix of the state space representation of the open-loop system.3.3.3..3. 1. This includes any uncontrollable or unobservable eigenvalues.11 is proper such that I + L ( j∞ ) is nonsingular (this guarantees the feedback system to be well-defined) and has no poles on the imaginary axis. Exercise 1.11 the SISO system L is open-loop stable.3. Closed-loop stability Summary 1. The condition that det( I + L ) has no poles on the imaginary axis and does not pass through the origin may be relaxed. Consider a SISO single-degreeof-freedom system as in Fig. it follows from the generalized Nyquist criterion that if the open-loop system is stable then the closed-loop system is stable if and only if the number of encirclements is zero (i. In particular. Then the number of unstable closed-loop poles = the number of times the Nyquist plot of det( I + L ) encircles the origin clockwise11 + the number of unstable open-loop poles. at the expense of making the analysis more complicated (see for instance Dorf (1992)). 1.10 is a special case of the generalized Nyquist criterion. 1. 1. The proof of the Nyquist criterion is given in § 1. Verify the Nyquist plot of Fig.e.3.4. The result of Summary 1.11.. The “unstable closed-loop poles” similarly are the right-half plane eigenvalues of the system matrix of the closed-loop system. Similarly.12. the Nyquist plot of det( I + L ) does not encircle the origin). so that the number of times the Nyquist plot of det( I + L ) = 1 + L encircles the origin equals the number of times the Nyquist plot of L encircles the point −1. It follows that the closed-loop system is stable if and only if the number of encirclements of det( I + L ) equals the negative of the number of unstable open-loop poles. 19 .12 (Stability of compensated feedback system). It follows immediately from the Nyquist criterion and Fig. 1. I. The generalized Nyquist principle applies to a MIMO unit feedback system of the form of Fig.12 that if L (s ) = k /(1 + sθ) and the block “ L” is stable then the closed-loop system is stable for all positive k and θ .e. Prove that if both the compensator and the plant are stable and L satisfies the Nyquist criterion then the feedback system is stable. More about Nyquist plots may be found in § 2. Suppose that the loop gain transfer function L of the MIMO feedback system of Fig.13 (Generalized Nyquist criterion).3.
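The example of Figure 1.12 takes only a few lines to check with the Control Toolbox; the values of k and θ are arbitrary positive numbers.

% Nyquist plot of L(s) = k/(1 + s*theta) (cf. Figure 1.12) and the
% resulting closed-loop pole; k and theta are assumed positive values.
k = 10;  theta = 10;
L = tf(k, [theta 1]);

nyquist(L)                           % the plot is a circle through k and 0
                                     % lying entirely in Re >= 0, so it can
                                     % never encircle the point -1
pole(feedback(L, 1))                 % single closed-loop pole at -(1+k)/theta < 0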

1.1.13 remains stable under perturbations of the loop gain L as long as the Nyquist plot of the perturbed loop gain does not encircle the point −1. Introduction In this section we consider SISO feedback systems with the configuration of Fig.4. 1. that is. This discussion focusses on the loop gain L = PC . see also Anderson and Jury (1976) and Blondel (1994). If m < n then the plant is said to have n − m zeros at infinity. Summary 1. and Lu (1974). Find a stabilizing compensator for each of these two plants (which for the first plant is itself stable. Exercise 1.3. and C the compensator transfer function. open-loop. The plant possesses the parity interlacing property if it has an even number of poles (counted according to multiplicity) between each pair of zeros on the positive real axis (including zeros at infinity.11(a) with plant P and compensator C .1.” The classic gain margin and phase margin are well-known indicators for how closely the Nyquist plot approaches the point −1. Stability robustness 1. with P the plant transfer function. Intuitively. 20 . Existence of a stable stabilizing compensator A compensator that stabilizes the closed-loop system but by itself is unstable is difficult to handle in start-up. For simplicity we assume that the system is open-loop stable. There are unstable plants for which a stable stabilizing controller does not exist. Consider the unit feedback system of Fig. Stability margins The closed-loop system of Fig. We also assume the existence of a nominal feedback loop with loop gain L0 . input saturating or testing situations.14 (Existence of stable stabilizing controller). 1. We discuss their stability robustness. The following result was formulated and proved by Youla. that is. the property that the closed-loop system remains stable under changes of the plant and the compensator.3.4. Introduction to Feedback Control Theory 1. which is the loop gain that is supposed to be valid under nominal circumstances.) 1.62) possesses the parity interlacing property while P (s ) = (1. If the denominator of the plant transfer function P has degree n and its numerator degree m then the plant has n poles and m (finite) zeros. both P and C represent the transfer function of a stable system.6.) There exists a stable compensator C that makes the closed-loop stable if and only if the plant P has the parity interlacing property.4. this may be accomplished by “keeping the Nyquist plot of the nominal feedback system away from the point −1.15 (Parity interlacing property). 1.2. Check that the plant P (s ) = s ( s − 1 )2 (s − 1 )(s − 3 ) s (s − 2 ) (1.63) does not.3.13. Bongiorno.

14). Cyrot.14 illustrates this.14.4. reciprocal of the gain margin Im L −1 1 Re modulus margin phase margin ω Figure 1.1. robustness is often specified by establishing minimum values for the gain and phase margin.: Robustness margins In classical feedback system design. The modulus margin very directly expresses how far the Nyquist plot stays away from −1. and Voda (1993) introduced two more margins. We have km = 1 . The gain and phase margin do not necessarily adequately characterize the robustness. | L ( jωr )| (1. Practical requirements are km > 2 for the gain margin and 30◦ < φm < 60◦ for the phase margin. 1.15 shows an example of a Nyquist plot with excellent gain and phase margins but where a relatively small joint perturbation of gain and phase suffices to destabilize the system. where ωm is the angular frequency where the Nyquist plot intersects the unit circle closest to the point −1 (see again Fig. Figure 1. Stability robustness e C u y P Figure 1.14). 12 12 French: marge de module. Figure 1.: Feedback system configuration Gain margin The gain margin is the smallest positive number km by which the Nyquist plot must be multiplied so that it passes through the point −1.64) where ωr is the angular frequency for which the Nyquist plot intersects the negative real axis furthest from the origin (see Fig. 21 . For this reason Landau. 1. The phase margin φm is the angle between the negative real axis and L ( jωm ). Rolland. Modulus margin The modulus margin sm is the radius of the smallest circle with center −1 that is tangent to the Nyquist plot.13. Phase margin The phase margin is the extra phase φm that must be added to make the Nyquist plot pass through the point −1.
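The classical margins and the modulus margin can be computed numerically as in the sketch below. The loop gain used is a standard textbook example rather than one taken from these notes; margin() returns the gain margin as a factor and the phase margin in degrees, and the modulus margin follows as the reciprocal of the peak magnitude of the sensitivity function.

% Gain, phase and modulus margins of an (assumed) loop gain L.
s = tf('s');
L = 1/(s*(s + 1)*(0.5*s + 1));               % illustrative loop gain (assumed)

[km, phim, wr, wm] = margin(L);              % gain margin (factor), phase margin
                                             % [deg] and the associated frequencies
S  = feedback(1, L);                         % sensitivity function 1/(1 + L)
w  = logspace(-2, 2, 2000);
sm = 1/max(abs(squeeze(freqresp(S, w))));    % modulus margin, the distance of the
                                             % Nyquist plot of L to the point -1
fprintf('k_m = %4.2f   phi_m = %4.1f deg   s_m = %4.2f\n', km, phim, sm)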

even more spectacularly. Adequate margins of these types are not only needed for robustness. 1 − sm φm ≥ 2 arcsin sm . We introduce a more refined measure of stability robustness by considering the effect of plant perturbations on the Nyquist plot more in detail.1 (Relation between robustness margins). manifesting itself by closed-loop poles that are very near to the imaginary axis. for MIMO systems. If the margins are small. The delay margin is linked to the phase margin φm by the relation 13 τm = min ω∗ φ∗ . They break down when the system is not open-loop stable.15.65) Here ω∗ ranges over all nonnegative frequencies at which the Nyquist plot intersects the φm unit circle. ω∗ (1. Rolland. but also to achieve a satisfactory time response of the closed-loop system. the Nyquist plot approaches the point −1 closely. and φ∗ denotes the corresponding phase φ∗ = arg L ( jω∗ ). Both are relaxed later.66) 1 ◦ This means that if sm ≥ 1 2 then k m ≥ 2 and φ m ≥ 2 arcsin 4 ≈ 28. and Voda 1993). In particular τm ≤ ω . m A practical specification for the modulus margin is sm > 0. The delay margin should be at least of the order of 21B .4.: This Nyquist plot has good gain and phase margins but a small simultaneous perturbation of gain and phase destabilizes the system Delay margin The delay margin τm is the smallest extra delay that may be introduced in the loop that destabilizes the system. The converse is not true in general.5.) Exercise 1. Introduction to Feedback Control Theory Im -1 0 Re ω L Figure 1. and. Robustness for loop gain perturbations The robustness specifications discussed so far are all rather qualitative. These closedloop poles may cause an oscillatory response (called “ringing” if the resonance frequency is high and the damping small. This means that the stability boundary is approached closely.96 (Landau.3. Cyrot.4. For the time being the assumptions that the feedback system is SISO and open-loop stable are upheld. 2 (1. 22 . Prove that the gain margin km and the phase margin φm are related to the modulus margin sm by the inequalities km ≥ 1 . 13 French: marge de retard. 1. where B is the bandwidth (in terms of angular frequency) of the closed-loop system.1.
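The delay margin of (1.65) can be evaluated directly from the frequency response by locating all crossings of the unit circle, as in the sketch below for the same assumed loop gain as before; the Control Toolbox function allmargin() returns the same quantity and serves as a cross-check.

% Delay margin tau_m = min over unit-circle crossings of phi*/omega*, cf. (1.65).
s = tf('s');
L = 1/(s*(s + 1)*(0.5*s + 1));               % same assumed loop gain as above

w  = logspace(-2, 2, 50000);                 % dense frequency grid
Lw = squeeze(freqresp(L, w)).';              % L(j*omega) along the grid

ix   = find(diff(sign(abs(Lw) - 1)) ~= 0);   % grid points where |L| crosses 1
taum = inf;
for i = ix
    wstar   = w(i);                          % crossover frequency omega*
    phistar = pi + angle(Lw(i));             % clockwise rotation [rad] that moves
                                             % L(j*omega*) onto the point -1
    taum    = min(taum, phistar/wstar);
end
fprintf('delay margin = %.2f s\n', taum)

am = allmargin(L);                           % cross-check with the toolbox
am.DelayMargin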

16.: Nominal and perturbed Nyquist plots Naturally. 1 + L0 (1. The actual closed-loop system is stable if also the Nyquist plot of the actual loop gain L does not encircle −1. Stability robustness Im −1 L ( jω) 0 L0 ( jω) L L0 Figure 1.67) Re T0 bears its name because its complement 1 − T0 = 1 = S0 1 + L0 (1.68) for all ω ∈ R. The factor | L ( jω) − L0 ( jω)|/| L0 ( jω)| in this expression is the relative size of the perturbation of the loop gain L from its nominal value L0 . By the Nyquist criterion.16.71) shows that the closed-loop system is guaranteed to be stable as long as the relative perturbations satisfy 1 | L ( jω) − L0 ( jω)| < | L0 ( jω)| | T0 ( jω)| for all ω ∈ R. | L0 ( jω)| | L0 ( jω) + 1| Define the complementary sensitivity function T0 of the nominal closed-loop system as T0 = L0 .5. as shown in Fig. (1. It is easy to see by inspection of Fig. it follows from (1. 1. We investigate whether the feedback system remains stable when the loop gain is perturbed from the nominal loop gain L0 to the actual loop gain L. (1. we suppose the nominal feedback system to be well-designed so that it is closedloop stable. the Nyquist plot of the nominal loop gain L0 does not encircle the point −1. Given T0 .69) (1. The relation (1. and is discussed in Section 1. if | L ( jω) − L0 ( jω)| < | L0 ( jω) + 1| This is equivalent to | L0 ( jω)| | L ( jω) − L0 ( jω)| · < 1 for all ω ∈ R.72) 23 .4.16 that the Nyquist plot of L definitely does not encircle the point −1 if for all ω ∈ R the distance | L ( jω) − L0 ( jω)| between any point L ( jω) and the corresponding point L0 ( jω) is less than the distance | L0 ( jω) + 1| of the point L0 ( jω) and the point −1.71) then the perturbed closed-loop system is stable. 1. that is.68) that if | L ( jω) − L0 ( jω)| · | T0 ( jω)| < 1 | L0 ( jω)| for all ω ∈ R (1.70) is the sensitivity function. The sensitivity function plays an important role in assessing the effect of disturbances on the feedback system.1.

6.6.73) is implied by the inequality | T0 ( jω)| < 1 | W ( jω)| for all ω ∈ R.74). Exercise 1. In fact.5. The result is further discussed in § 1. Moreover.73) but nevertheless do not destabilize the closedloop system.3. Suppose that the closed-loop system of Fig.13 is nominally stable. This means that there may well exist perturbations that do not satisfy (1. (1. Summary 1. (1. (1.76) with W a given function.4. robust stability is guaranteed for all perturbations satisfying (1. 1. if and only if | T0 ( jω)| < 1 | W ( jω)| for all ω ∈ R.13 to prove the result of Summary 1. Suppose that the relative perturbations are known to bounded in the form | L ( jω) − L0 ( jω)| ≤ | W ( jω)| | L0 ( jω)| for all ω ∈ R.75) Thus. the smaller is the allowable perturbation. This result is discussed more extensively in Section 5.3 (No poles may cross the imaginary axis).4.2 (Doyle’s stability robustness criterion).4. provided the number of right-half plane poles remains invariant under perturbation. it also holds for open-loop unstable systems.4. It remains stable under all perturbations that do not affect the number of openloop unstable poles satisfying the bound | L ( jω) − L0 ( jω)| ≤ | W ( jω)| | L0 ( jω)| for all ω ∈ R. (1. It originates from Doyle (1979).” Summary 1. if the latter condition holds. (1.1. The stability robustness condition has been obtained under the assumption that the open-loop system is stable. Introduction to Feedback Control Theory The larger the magnitude of the complementary sensitivity function is. 24 . Such perturbations are said to “fill the uncertainty envelope.73) with T0 the nominal complementary sensitivity function of the closed-loop system. Doyle’s stability robustness condition is a sufficient condition. Suppose that the closed-loop system of Fig. (1.74) with W a given function.75) is not only sufficient but also necessary to guarantee stability for all perturbations satisfying (1.2.77) Again. Use the general form of the Nyquist stability criterion of Summary 1.4 (Stability robustness).74) (Vidysagar 1985). where also its MIMO version is described. Then it remains stable under perturbations that do not affect the number of open-loop unstable poles if 1 | L ( jω) − L0 ( jω)| < | L0 ( jω)| | T0 ( jω)| for all ω ∈ R.13 is nominally stable. 1. This limits the applicability of the result. Then the condition (1. With a suitable modification the condition is also necessary. the MIMO version is presented in Section 5. however.

1. respectively). Explain this by deriving a stability condition based on the inverse Nyquist plot. the closed-loop system remains stable under perturbation as long as under perturbation the Nyquist plot of the loop gain does not cross the point −1.1. (1. It remains stable under all perturbations that do not affect the number of right-half plane zeros satisfying the bound 1 L( jω) − 1 L0 ( jω) 1 L0 ( jω) ≤ | W ( jω)| for all ω ∈ R.6 (Reversal of the role of the right half plane poles and the right-half plane zeros). the sufficient condition (1.80) with S0 the nominal sensitivity function of the closed-loop system.4. Suppose that the closedloop system of Fig.4.13 is nominally stable. Summary 1.4.4. Again the result may be generalized to a sufficient and necessary condition. Note that the role of the right-half plane poles has been taken by the right-half plane zeros. 1. This in turn leads to the following conclusions.78) Dividing by the inverse 1/ L0 of the nominal loop gain we find that a sufficient condition for robust stability is that 1 L( jω) − 1 L0 ( jω) 1 L0 ( jω) < 1 L0 ( jω) + 1 1 L0 ( jω) = |1 + L0 ( jω)| = 1 | S0 ( jω)| (1. for robustness both the sensitivity function S and its complement T are important. Summary 1. Thus. 25 . (1. the closed-loop system remains stable under perturbation as long as the inverse 1/ L of the loop gain does not cross the point −1. Stability robustness 1. We illustrate these results by an example. that is. Inverse loop gain perturbations According to the Nyquist criterion.81) with W a given function. It remains stable under perturbations that do not affect the number of open-loop right-half plane zeros of the loop gain if 1 L( jω) − 1 L0 ( jω) 1 L0 ( jω) < 1 | S0 ( jω)| for all ω ∈ R. Exercise 1. (1.13 is nominally stable.67) may be replaced with the sufficient condition 1 1 1 − < +1 L ( jω) L0 ( jω) L0 ( jω) for all ω ∈ R. if and only if | S0 ( jω)| < 1 | W ( jω)| for all ω ∈ R.82) Thus. Equivalently. (1. the polar plot of 1/ L. Suppose that the closedloop system of Fig.7 (Stability robustness under inverse perturbation).4. Later it is seen that for practical feedback design the complementary functions S and T need to be made small in complementary frequency regions (for low frequencies and for high frequencies.4.5 (Inverse loop gain stability robustness criterion).79) for all ω ∈ R.

1 + L0 ( s ) 1 + k0 + s θ 0 (1.86) Prove that 1/sm is the peak value of the magnitude of the sensitivity function S . Figure 1.9 (Landau’s modulus margin and the sensitivity function).4. on the other hand. 1 + L0 ( s ) 1 + k0 + s θ 0 k0 L0 ( s ) = . For high frequencies (greater than the bandwidth) the allowable relative size of the perturbations drops to the value 1.83) with k and θ positive constants. that for low frequencies (up to the frequency 1/θ0 ) relative perturbations of the loop gain up to 1 + k0 are permitted. By the result of Summary 1.84) (1.8 (Frequency dependent robustness bounds). By the result of Summary 1.85) with k0 and θ0 the nominal values of k and θ . Consider a SISO feedback loop with loop gain L (s ) = k .2 the modulus margin sm is defined as the distance from the point −1 to the Nyquist plot of the loop gain L: sm = inf |1 + L ( jω)|. 1 | T0 | (log scale) 1+k0 k0 1 1+k0 θ0 ω (log scale) 1 1 + k0 | S0 | (log scale) 1 1 θ0 1+k0 θ0 ω (log scale) Figure 1.4.4. we conclude from the magnitude plot of 1/ S0. 1. 1 + sθ (1. In Subsection 1.7. ω ∈R (1.1.17.4.4 we conclude from the magnitude plot of 1/ T0 that for low frequencies (up to the bandwidth (k0 + 1 )/θ0) relative perturbations of the loop gain L of relative size up to 1 and slightly larger are permitted while for higher frequencies increasingly larger perturbations are allowed without danger of destabilization of the closed-loop system.17 displays doubly logarithmic magnitude plots of 1/ S0 and 1/ T0. respectively. The nominal sensitivity and complementary sensitivity functions S0 and T0 are S0 ( s ) = T0 (s ) = 1 + sθ0 1 = . 26 . Introduction to Feedback Control Theory Example 1. Exercise 1.4.: Magnitude plots of 1/ T0 and 1/ S0 .

1.1. and • robustness of the closed-loop response. within the limitations set by • plant capacity. the number rm = inf 1 + ω ∈R 1 L ( jω) (1. • stability robustness.18. Therefore. 27 . for closed-loop stability the loop gain should be shaped such that the Nyquist plot of L does not encircle the point −1.2.1.13 is open-loop stable. Introduction In this section we translate the design targets for a linear time-invariant two-degree-of-freedom feedback system as in Fig. • satisfactory closed-loop command response. Prove that 1/ rm is the peak value of the magnitude of the complementary sensitivity function T . The design goals are r F e C u P z Figure 1. Frequency response design goals 2. If the Nyquist plot of the loop gain L approaches the point −1 closely then so does that of the inverse loop gain 1/ L.5.18 into requirements on various closed-loop frequency response functions. Frequency response design goals 1. We discuss these aspects one by one for single-input-single-output feedback systems.87) may also be viewed as a robustness margin. Closed-loop stability Suppose that the feedback system of Fig.5.5.5.: Two-degree-of-freedom feedback system • closed-loop stability. 1. By the Nyquist stability criterion. and • corruption by measurement noise. • disturbance attenuation. 1. 1.

5. not | S| and | T |. 1. Hence. Making the loop gain L large over a large frequency band easily results in error signals e and resulting plant inputs u that are larger than the plant can absorb. 1. The desired shape for the sensitivity function S implies a matching shape for the magnitude of the complementary sensitivity function T = 1 − S . the larger the inputs are the plant can handle before it saturates or otherwise fails.1. In terms of Laplace transforms. consider the block diagram of Fig. that is.19. For plants whose transfer functions have zeros with nonnegative real parts. Disturbance attenuation and bandwidth — the sensitivity function To study disturbance attenuation. It may be necessary to impose further requirements on the shape of the sensitivity function if the disturbances have a distinct frequency profile. 28 . the maximally achievable bandwidth is limited by the location of the right-half plane zero closest to the origin.3. or even at the plant input. Peaking easily happens near the point where the curve crosses over the level 1 (the 0 dB line). the signal balance equation may be written as z = v − Lz. 1+L S where S is the sensitivity function of the closed-loop system. however. the more the disturbances are attenuated at the angular frequency ω. 1. Solution for z results in z= 1 v = S v.5. The number B is called the bandwidth of the feedback loop. where v represents the equivalent disturbance at the output of the plant. L can only be made large over a limited frequency band.: Feedback loop with disturbance 1. The larger the “capacity” of the plant is.20(a). This is discussed in Section 1. Figure 1.20(a) then T is close to 1 at low frequencies and decreases to 0 at high frequencies. with ω ∈ R. Values greater than 1 and peaking are to be avoided. Introduction to Feedback Control Theory v e L z Figure 1. Then the equivalent disturbance at the output is also oscillatory.20(b) shows a possible shape14 for the complementary sensitivity T corresponding to the sensivity function of Fig. a band that ranges from frequency zero up to a maximal frequency B.88) that S and T are complementary. Therefore.20(a) shows an “ideal” shape of the magnitude of the sensitivity function. This is usually a low-pass band. Effective disturbance attenuation is only achieved up to the frequency B. that is. When S is as shown in Fig. It is small for low frequencies and approaches the value 1 at high frequencies. The smaller | S ( jω)| is. Figure 1. To attenuate these disturbances effectively the sensitivity function should be small at and near the resonance 14 Note (1. and that the plant is highly oscillatory. for disturbance attenuation it is necessary to shape the loop gain such that it is large over those frequencies where disturbance attenuation is needed. | S | is small if the magnitude of the loop gain L is large. the larger the maximally achievable bandwidth usually is. Consider for instance the situation that the actual disturbances enter the plant internally.19.

frequency. (b) A corresponding complementary sensitivity function. Consider an oscillatory second- order plant with transfer function P (s ) = ω2 0 . Frequency response design goals (a) 1 (log scale) (b) | S| (log scale) 1 |T | B angular frequency (log scale) B angular frequency (log scale) Figure 1. Exercise 1. select the resonance frequency ω0 well within the closed-loop bandwidth and take the relative damping ζ0 small so that the plant is quite oscillatory.89) be small over a suitable low frequency range. For this reason it sometimes is useful to replace the requirement that S be small with the requirement that | S ( jω) V ( jω)| (1. Show that this oscillatory behavior also appears in the response of the closed-loop system to a nonzero initial condition of the plant. s ( s + α) ω2 0 (1. The shape of the weighting function V reflects the frequency contents of the disturbances. s2 + 2 ζ 0 ω0 s + ω2 0 (1.20.5. If the actual disturbances enter the system at the plant input then a possible choice is to let V = P.1 (Plant input disturbance for oscillatory plant). 29 . Next.90) Choose the compensator C (s ) = k s2 + 2 ζ 0 ω0 s + ω2 0 . Demonstrate by simulation that the closed-loop response to disturbances at the plant input reflects this oscillatory nature even though the closed-loop sensitivity function is quite well behaved.91) Show that the sensitivity function S of the closed-loop system is independent of the resonance frequency ω0 and the relative damping ζ0 .1. Select k and α such that a well-behaved high-pass sensitivity function is obtained.5.: (a) “Ideal” sensitivity function.

and transits smoothly to zero above this frequency. If the loop gain L = C P is large then the input sensitivity M approximately equals the inverse 1/ P of the plant transfer function.93) (1.95) 30 . Thus. In particular.92) 1. This may be solved for u as C ( Fr − m − v).21 to the command signal r follows from the signal balance equation z = PC (−z + Fr ). without a prefilter F (that is. 1+ L T with L = PC the loop gain and T the complementary sensitivity function. Plant capacity — the input sensitivity function Any physical plant has limited “capacity. The configuration of Fig. Solution for z results in z= PC F r. the plant dynamics impose limitations on the shape that T may assume.5.” that is. If the bandwidth is less than required for good command signal response the prefilter may be used to compensate for this. A better solution may be to increase the closed-loop bandwidth. The input sensitivity function M may only be made equal to 1/ P up to the frequency which equals the magnitude of the right-half plane plant zero with the smallest magnitude. (1. Introduction to Feedback Control Theory 1. Adequate loop shaping ideally results in a complementary sensitivity function T that is close to 1 up to the bandwidth. If the shape and bandwidth of T are satisfactory then no prefilter F is needed. with F = 1). (1. If the open-loop plant has zeros in the right-half complex plane then 1/ P is unstable. The input sensitivity function M is connected to the complementary sensitivity function T by the relation T = M P.5. right-half plane plant poles constrain the frequency above which T may be made to roll off. the closed-loop transfer function H ideally is low-pass with the same bandwidth as the frequency band for disturbance attenuation. In terms of Laplace transforms we have the signal balance u = C ( Fr − m − v − Pu ). 1 + PC H The closed-loop transfer function H may be expressed as H= L F = T F.5.94) u= I + CP M The function M determines the sensitivity of the plant input to disturbances and the command signal. For this reason the right-half plane open-loop plant zeros limit the closed-loop bandwidth. Like for the sensitivity function.4.1. It is sometimes known as the input sensitivity function. (1. If the closedloop bandwidth is greater than necessary for adequate command signal response then the prefilter F may be used to reduce the bandwidth of the closed-loop transfer function H to prevent overly large plant inputs. can absorb inputs of limited magnitude only.21 includes both the disturbances v and the measurement noise m.5. If this is not possible then probably the plant capacity is too small. Command response — the complementary sensitivity function The response of the two-degree-of-freedom configuration of Fig. 1. 1. This is discussed in Section 1.

6. at frequencies above the bandwidth — M should decrease as fast as possible. Nevertheless it is an important notion. 1. or 2. 85% of its steady-state value. 85% of the largest feasible output amplitude.21. and ½ the unit step function. This rule of thumb is based on the observation that the first-order step response 1 − e−t /θ . 1 + PC 1 + PC 1 + PC S T T (1. with k and α selected as suggested in Exercise 1. By solving the signal balance z = v + PC ( Fr − m − z ) for the output z we find z= PC PC 1 v+ Fr − m. This is consistent with the robustness requirement that T decrease fast. 1.865 at time 2θ . Then the angular frequency 1/θ may be taken as an indication of the largest possible bandwidth of the system. t ∈ R.21. Consider a stabilizing feedback compensator of the form (1. whichever is less. Except by Horowitz (Horowitz 1963) the term “plant capacity” does not appear to be used widely in the control literature. Let θ be half the time needed until the output either reaches 1. with a the largest amplitude the plant can handle before it saturates or otherwise fails. generally M should not be too large. The maximum bandwidth determined by the plant capacity may roughly be estimated as follows.2 (Input sensitivity function for oscillatory plant).91) for the plant (1.90). Exercise 1. reaches the value 0. If these large values are not acceptable.5.5. t ≥ 0.5.96) 31 . for a fixed plant transfer function P design requirements on the input sensitivity function M may be translated into corresponding requirements on the complementary sensitivity T .5.: Two-degree-of-freedom system with disturbances and measurement noise By this connection. the plant capacity is inadequate. Consider the response of the plant to the step input a ½(t ). Measurement noise To study the effect of measurement noise on the closed-loop output we again consider the configuration of Fig. To prevent overly large inputs. Compute and plot the resulting input sensitivity function M and discuss its behavior. At high frequencies — that is.1. and either the plant needs to be replaced with a more powerful one. At low frequencies a high loop gain and correspondingly large values of M are prerequisites for low sensitivity. Frequency response design goals v r F e C u P z m Figure 1.1. or the specifications need to be relaxed. and vice-versa.

This emphasizes the need for good.7. S0 and T0 are the nominal sensitivity function and complementary sensitivity function.4. Alternatively. 1+ L H= L F = T F. The faster T decreases with frequency — this is called roll-off — the more protection the closed-loop system has against high-frequency loop perturbations.97) M= C = SC. and the closed-loop transfer function H . phase. Such perturbations are often caused by load variations and environmental changes. Inspection of (1. and modulus margins.98) We consider the extent to which each of these functions is affected by plant variations. It is consistent with the other design targets to have the sensitivity S small in the low frequency range. the loop gain that is believed to be representative and is used in the design calculations.4 it is seen that for stability robustness it is necessary to keep the Nyquist plot “away from the point −1. phase and modulus margins help to ensure this. successively given by S= 1 . Small values of the sensitivity function for low frequencies.5. where by the other design requirements T is close to 1. 1+ L T= L . that is.8.98) shows that under this assumption we only need to study the effect of perturbations on S and T : The variations in M are proportional to those in S . For simplicity we suppose that the system environment is sufficiently controlled so that the compensator transfer function C and the prefilter transfer function F are not subject to perturbation. Introduction to Feedback Control Theory This shows that the influence of the measurement noise m on the control system output is determined by the complementary sensitivity function T . low-noise sensors.” The target is to achieve satisfactory gain. and T small in the complementary high frequency range. In the crossover region neither S nor T can be small. on the other hand. the sensitivity function S needs to be small. Stability robustness In Section 1.1. at least not very small. the input sensitivity function M . This is important because owing to neglected dynamics — also known as parasitic effects — high frequency uncertainty is ever-present. Denote by L0 the nominal loop gain. the measurement noise fully affects the output. 32 .5. the complementary sensitivity function T .) It is the region that is most critical for robustness. Good gain. ensure protection against perturbations of the inverse loop gain at low frequencies. The solution is to make T and S small in different frequency ranges. which are required for adequate disturbance attenuation. By complementarity T and S cannot be simultaneously small. robustness for loop gain perturbations requires the complementary sensitivity function T to be small. Peaking of S and T in this frequency region is to be avoided. For robustness for inverse loop gain perturbations. 1+ L (1. 1. For low frequencies. 1+ L (1. and the variations in H are proportional to those in T . Correspondingly. as also seen in Section 1. The crossover region is the frequency region where the loop gain L crosses the value 1 (the zero dB line. 1.97–1. Performance robustness Feedback system performance is determined by the sensitivity function S .

4 (MIMO systems). T0 is required to be small at high frequencies. 1.5. Exercise 1. Prove (1. Complementarily. Show that the various closed-loop system functions encountered in this section generalize to the following matrix system functions: • the sensitivity matrix S = ( I + L )−1 . As seen before. This causes T to be robust at low frequencies. • The complementary sensitivity T should be small at high frequencies to prevent 33 . which is precisely the region where its values are significant.9. These requirements are conflicting. with L = PC the loop gain matrix and I an identity matrix of suitable dimensions.5. (1. normal control system design specifications require S0 to be small at low frequencies (below the bandwidth).99–1. the relative change of the reciprocal of the complementary sensitivity function may be written as 1 T − 1 T0 1 T0 = T0 − T L0 − L = S0 = S0 T L 1 L − 1 L0 1 L0 . Frequency response design goals It is not difficult to establish that when the loop gain changes from its nominal value L0 to its actual value L the corresponding relative change of the reciprocal of the sensitivity function S may be expressed as 1 S − 1 S0 1 S0 = S0 − S L − L0 = T0 . – good command response. Suppose that the configuration of Fig.3 (Formulas for relative changes). 1.100). • the complementary sensitivity matrix T = I − S = L ( I + L )−1 = ( I + L )−1 L. S L0 (1.1.100) These relations show that for the sensitivity function S to be robust with respect to changes in the loop gain we desire the nominal complementary sensitivity function T0 to be small. Exercise 1. • the closed-loop transfer matrix H = ( I + L )−1 LF = T F . because S0 and T0 add up to 1 and therefore cannot simultaneously be small.99) Similarly. for the complementary sensitivity function T to be robust we wish the nominal sensitivity function S0 to be small. • the input sensitivity matrix M = ( I + C P )−1 C = C ( I + PC )−1 . and – robustness at low frequencies.5. The solution is again to have each small in a different frequency range. Review of the design requirements We summarize the conclusions of this section as follows: • The sensitivity S should be small at low frequencies to achieve – disturbance attenuation. causing S to be robust in the high frequency range. On the other hand.21 represents a MIMO rather than a SISO feedback system.5.

101) shows that these targets may be achieved simultaneously by making the loop gain L large.22. Loop shaping 1. The design goals of the previous section may be summarized as follows. 1. that is. and – loss of robustness at high frequencies. by making | L ( jω)| frequency region. The problem is to determine the feedback compensator C as in Fig. has a suitable shape. by making | L ( jω)| 1 in the low-frequency region.6.22 shows how specifications on the magnitude of S in the low-frequency region and that of T in the high-frequency region result in bounds on the loop gain L. 1 in the high- Figure 1. – adverse effects of measurement noise. At low frequencies we need S small and T close to 1. and – loss of robustness.6. Introduction to Feedback Control Theory – exceeding the plant capacity. 1+ L T= L 1+ L (1. (log scale) 0 dB L ( jω) upper bound for high frequencies ω (log scale) lower bound for low frequencies Figure 1.1. 1. Introduction The design of a feedback control system may be viewed as a process of loop shaping.: Robustness bounds on L in the Bode magnitude plot 34 . At high frequencies we need T small and S close to 1. • In the intermediate (crossover) frequency region peaking of both S and T should be avoided to prevent – overly large sensitivity to disturbances.1.18 such that the loop gain frequency response function L ( jω). This may be ac- complished by making the loop gain L small. Low frequencies. ω ∈ R. High frequencies. that is. – excessive influence of the measurement noise. Inspection of S= 1 .

so does the inverse Nyquist plot.4. the more closely the Nyquist plot of L approaches −1 the more T= peaks. Let L be a minimum phase real-rational proper transfer function.1 (Bode’s gain-phase relationship). ω ∈ R.102) peaks. What is c? More precisely Bode’s gain-phase relationship may be phrased as follows (Bode 1945). 1 L = 1+ L 1+ 1 L (1. (1. while if | L ( jω)| behaves like 1/ω2 then the phase is approximately −π [rad] = −180◦ . In this frequency region it is not sufficient to consider the behavior of the magnitude of L alone. The more closely the Nyquist plot of L approaches the point −1 the more S= 1 1+ L (1.103) 1. L ( s ) is a rational function in s with real coefficients. Then magnitude and phase of the corresponding frequency response function are related by arg L ( jω0 ) = 15 A 1 π ∞ −∞ d log | Lω0 ( ju )| W (u ) du . (1. Relations between phase and gain A well-known and classical result of Bode (1940) is that the magnitude | L ( jω)|. 35 . 16 That is. In the crossover region we have | L ( jω)| ≈ 1.3).104) Thus. Make it plausible that between any two break points of the Bode plot the loop gain behaves as L ( jω) ≈ c ( jω)n . du ω0 ∈ R .2. If the Nyquist plot of L comes very near the point −1. 2 (1. if | L ( jω)| behaves like 1/ω then we have a phase of approximately −π/2 [rad] = −90◦ . Exercise 1. and the phase arg L ( jω). Why (1. that is.6. If on a log-log scale the plot of the magnitude | L ( jω)| versus ω has an approximately constant slope n [decade/decade] then arg L ( jω) ≈ n × π . the Nyquist plot of 1/ L. Hence. Show that n follows from the numbers of poles and zeros of L whose magnitude is less than the frequency corresponding to the lower break point.6. Loop shaping Crossover region.6. The behavior of the magnitude and phase of L in this frequency region together determine how closely the Nyquist plot approaches the critical point −1.105) with c a real constant.104) holds may be understood from the way asymptotic Bode magnitude and phase plots are constructed (see § 2. of a minimum phase15 linear time-invariant system with real-rational16 transfer function L are uniquely related. Gain and phase of the loop gain are not independent. ω ∈ R.6.6.2).106) rational transfer function is minimum phase if all its poles and zeros have strictly negative real parts (see Exercise 1.2 (Bode’s gain-phase relationship). Summary 1.1. This is clarified in the next subsection.

the loop gain is greater than 1 for low frequencies. Introduction to Feedback Control Theory with log denoting the natural logarithm.23. and enters the unit disk once (at the frequency ±ωm ) without leaving it again. If the slope of log | L| varies then the behavior of | L| in neighboring frequency regions also affects arg L ( jω0 ). If at crossover the phase of L is. If log | Lω0 | would have a constant slope then (1. that is.104) would be recovered exactly. Figure 1. The frequency ωm at which the Nyquist plot of the loop gain L crosses the unit circle is at the center of the crossover region.: Weighting function W in Bode’s gain-phase relationship behavior of Lω0 near 0. The intermediate variable u is defined by u = log ω . In (1.106) the first factor under the integral sign is the slope of the magnitude Bode plot (in [decades/decade]) as previously expressed by the variable n. finally.107) W is the function W (u ) = log coth Lω0 . 1. For stability the phase of the loop gain L at this point should be between −180◦ and 180◦. Suppose that the general behavior of the Nyquist plot of the loop gain L is as in Fig. so that arg L ( jω0 ) is determined by the 4 W 2 0 −1 1/ e 0 u ω/ω0 1 1 e Figure 1.24 (Freudenberg and Looze 1988) illustrates the bounds on the phase in the crossover region. u = log ω . say. ω0 (1. The Bode gain-phase relationship leads to the following observation. −90◦ then by Bode’s gain-phase relationship | L| decreases at a rate of about 1 [decade/decade]. is given by Lω0 ( ju ) = L ( jω).109) |u| = log 2 +1 −1 .108) Lω0 is the frequency response function L defined on the logaritmically transformed and scaled frequency axis u = log ω/ω0 .23.14. To achieve a phase margin of at least 60◦ the phase should be between −120◦ and 120◦ . 1. by the behavior of L near ω0 .1. W has most of it weight near 0. To avoid instability the rate of decrease cannot be 36 . and arg L ( jω0 ) would be determined by n alone. that is. ω0 ω ω0 ω ω0 (1. Since | L| decreases at the point of intersection arg L generally may be expected to be negative in the crossover region. (1. W is a weighting function of the form shown in Fig.

This bound on the rate of decrease of | L| in turn implies that the crossover region cannot be arbitrarily narrow. the trade-off between gain attenuation and limited phase lag is further handicapped. 37 . Let L be non-minimum phase but stable17 . L has right-half plane zeros but no right-half plane poles.6.24. Hence. For non-minimum phase systems the trade-off between gain attenuation and phase lag is even more troublesome. (1.111) is. Let L (s ) = k 17 That 18 That (s − z1 )(s − z2 ) · · · (s − zm ) (s − p1 )(s − p2 ) · · · (s − p n ) (1. for robust stability in the crossover region the magnitude | L| of the loop gain cannot decreases faster than at a rate somewhere between 1 and 2 [decade/decade].110) As L z only introduces phase lag. It follows that | L ( jω)| = | Lm ( jω)| and arg L ( jω) = arg Lm ( jω) + arg L z ( jω) ≤ arg Lm ( jω). Exercise 1.7.: Allowable regions for gain and phase of the loop gain L greater than 2 [decades/decade]. The effect of right-half plane zeros — and also that of right-half plane poles — is discussed at greater length in Section 1. Non-minimum phase behavior generally leads to reduction of the openloop gains — compared with the corresponding minimum phase system with loop gain Lm — or reduced crossover frequencies. is.3 (Minimum phase). Bode’s gain-phase relationship holds for minimum phase systems. Loop shaping (log scale) L ( jω) 0 dB lower bound for low frequencies upper bound for high frequencies ω (log scale) 180◦ allowable phase functions 0◦ ω (log scale) arg L ( jω) −180 ◦ Figure 1. where Lm is minimum phase and L z is an all-pass function18 such that | L z ( jω)| = 1 for all ω. Then L = Lm · L z .1. | L z ( jω)| is constant for all ω ∈ Ê.6.

ω ∈ R.25 illustrates this. (1. Suppose for the time being that L has no open right-half plane poles. Changing the sign of the real part of the ith zero zi of L leaves the behavior of the magnitude | L ( jω)|. Bode’s sensitivity integral implies that if. so that the sensitivity integral vanishes.1.” 38 . This is why transfer functions whose poles and zeros are all in the left-half plane and have positive gain are called minimum phase transfer functions. 19 That 20 In is. If the open-loop system has right-half plane poles this statement still holds. pn in the left-half complex plane. The proof of (1. Introduction to Feedback Control Theory be a rational transfer function with all its poles p1 . Suppose that the loop gain L has no poles in the open right-half plane19 . Then the integral over all frequencies of log | S| is zero.6.112) log | S ( jω)| d ω = 0. If L has right-half plane poles then (1. the integral on the lefthand side of (1. adaptive control the expression is “ L has relative degree greater than or equal to 2.3. For the feedback system to be useful. 1. Then if L has at least two more poles than zeros the sensitivity function S= satisfies ∞ 0 1 1+ L (1. or. no poles with strictly positive real part.113) needs to be modified to ∞ 0 log | S ( jω)| d ω = π i Re p i . Figure 1. p2 .113) The assumption that L is rational may be relaxed (see for instance Engell (1988)). of L unaffected.4 (Bode’s sensitivity integral). We discuss some of the implications of Bode’s sensitivity integral. This means that log | S | both assumes negative and positive values. is minimal for all frequencies if all zeros zi lie in the left-half complex plane and k is a positive gain. that | S| both assumes values less than 1 and values greater than 1. Bode’s sensitivity integral Another well-known result of Bode’s pioneering work is known as Bode’s sensitivity integral Summary 1. included according to their multiplicity.114) The right-hand side is formed from the open right-half plane poles pi of L. Then there exists a well-defined corresponding frequency response function L ( jω). equivalently.10.” If the pole-zero excess is one. Similarly. | S | needs to be less than 1 over an effective lowfrequency band.6. ω ∈ R. ω ∈ R. · · · . this can be achieved at all then it is at the cost of disturbance amplification (rather than attenuation) at high frequencies.114) may be found in § 1. but modifies the behavior of the phase arg L ( jω). changing the sign of the gain k does not affect the magnitude. Prove that under such changes the phase arg L ( jω).113) is finite but has a nonzero value. If the pole-zero excess of the plant is zero or one then disturbance attenuation is possible over all frequencies. The statement that L should have at least two more poles than zeros is sometimes phrased as “the pole-zero excess of L is at least two20 . (1. ω ∈ R.

Loop shaping | S| 1 0 B ω (log scale) Figure 1. Does Bode’s theorem apply? 2.115) . (1. with k and θ positive constants. Then the peak value of the sensitivity function is bounded by sup | S ( jω)| ≥ 1 ωH − ωL ω L log 1 3εω H − α 2k . Also if the frequencies ω L and ω H are required to be closely together. Calculate the sensitivity function S and plot its magnitude.5 and k ≥ 0. (1. Control system design therefore involves trade-offs between phenomena that occur in different frequency regions. Freudenberg and Looze (1988) use Bode’s sensitivity integral to derive a lower bound on the peak value of the sensitivity function in the presence of constraints on the low-frequency behavior of S and the high-frequency behavior of L. Suppose that the loop gain is L (s ) = k /(1 + sθ).25. and | L ( jω)| ≤ ε ωH ω k +1 0 ≤ ω ≤ ωL. Let ω L and ω H be two frequencies such that 0 < ω L < ω H .6.10. This result is an example of the more general phenomenon that bounds on the loop gain in different frequency regions interact with each other. Suppose that L is real-rational. First check that the closed-loop system is stable for all positive k and θ.5 (Bode integral). Repeat this for the loop gain L (s ) = k /(1 + sθ)2 . then this is paid for by a high peak value of S in the crossover region.1. the crossover region is required to be small.117) ωl ≤ω≤ω H The proof is given in § 1. The interaction becomes more severe as the bounds become tighter.: Low frequency disturbance attenuation may only be achieved at the cost of high frequency disturbance amplification Exercise 1.116) with 0 < ε < 0. Assume that S and L are bounded by | S ( jω)| ≤ α < 1. 1.6. 39 . minimum phase and has a pole-zero excess of two or more. (1. ω ≥ ωH . If α or ε decrease or if ω L or k increase then the lower bound for the peak value of the sensitivity function increases. that is.

Kwakernaak 1995). The inequality (1. Exercise 1.7.3). Let T be the complementary sensitivity function of a stable feedback system that has integrating action of at least order two21 . e C u y P Figure 1.7. The result that follows was originally obtained by Freudenberg and Looze (1985) and Freudenberg and Looze (1988). 1. Prove that ∞ log T 0 1 jω dω = π i Re 1 .6. right-half plane zeros and poles play an important role. its inverse is unstable. This is the subject of Section 1. The reason is that if the the plant has right-half plane poles. 40 .7. zi (1.1.: SISO feedback system Summary 1. Introduction In this section22 we present a brief review of several inherent limitations of the behavior of the sensitivity function S and its complement T which result from the pole-zero pattern of the plant. which imposes limitations on the way the dynamics of the plant may be compensated. title has been taken from Engell (1988) and Boyd and Barratt (1991). If the plant has right-half plane zeros. it is unstable. More extensive discussions on limits of performance may be found in Engell (1988) for the SISO case and Freudenberg and Looze (1988) for both the SISO and the MIMO case.7. Freudenberg-Looze equality A central result is the Freudenberg-Looze equality.7. 21 This 22 The means that 1/ L ( s ) behaves as O ( s2 ) for s → 0.115) and (1.26. (see § 2.118) with the zi the right-half plane zeros of the loop gain L (Middleton 1991. The natural limitations on stability. and that the loop gain has a right-half plane zero z = x + j y with x > 0.2.117) demonstrates that stiff bounds in the low.1 (Freudenberg-Looze equality). performance and robustness as discussed in this section are aggravated by the presence of right-half plane plant poles and zeros.and high-frequency regions may cause serious stability robustness problems in the crossover frequency range. which is based on the Poisson integral formula from complex function theory. Introduction to Feedback Control Theory The bounds (1. In particular.26 is stable. 1. Limits of performance 1.1.116) are examples of the bounds indicated in Fig.6 (Bode’s integral for the complementary sensitivity). which imposes extra requirements on the loop gain. 1. Suppose that the closed-loop system of Fig.24. What does this equality imply for the behavior of T ? 1.

+ 2 2 + (ω − y ) x + (ω + y )2 (1.119). Exercise 1. 1. hence. that is. π x π x (1.120) formed from the open right-half plane poles p i of the loop gain L = PC . Show that the weight function Wz represents the extra phase lag contributed to the phase of the plant by the fact that the zero z is in the right-half plane.7. at any right-half plane zero of the plant.3. We first rewrite the equality in the form ∞ 0 −1 log(| S ( jω)| ) w z (ω) d ω = log | Bpoles ( z )|. (1.1. x2 + ( y − ω)2 (1. Trade-offs for the sensitivity function We discuss the consequences of the Freudenberg-Looze relation (1.125) 41 .124) The function Wz is plotted in Fig.7. The plot shows that Wz increases monotonically from 0 to 1.7. p ¯i + s (1. and. Limits of performance Then the sensitivity function S = 1/(1 + L ) must satisfy ∞ −∞ log (| S ( jω)| ) x −1 d ω = π log | Bpoles ( z )|. The overbar denotes the complex conjugate. It relies on the Poisson integral formula from complex function theory.2 (Weight function). 1.122) We arrange (1. The presentation follows Freudenberg and Looze (1988) and Engell (1988). Wz (ω) is the phase of z + jω . which holds at any right-half plane zero z = x + j y of the loop gain.123) with Wz the function Wz (ω) = 0 ω w z (η) d η = ω−y 1 ω+ y 1 arctan + arctan . Its steepest increase is about the frequency ω = |z|.10. (1.119) Bpoles is the Blaschke product Bpoles (s ) = i pi − s . z − jω (1. The proof is given in § 1.121) with w z the function w z (ω) = 1 π x2 x x .27 for different values of the ratio y/ x of the imaginary part to the real part.121) in turn as ∞ 0 −1 log(| S ( jω)| ) dWz (ω) = log | Bpoles ( z )|.

The weighted length equals the extra phase added by the right-half plane zero z over the frequency interval. (1. then necessarily | S| is greater than 1 over a complementary frequency range. | S | is less than 1. The Freudenberg-Looze equality strengthens the Bode integral because of the weighting function w z included in the integrand.28 shows the numbers ε and µ and the desired behavior of | S |.: The function Wz for values of arg z increasing from (a) 0 to (b) almost π/2 Exercise 1. Hence. with ω1 given. We should like to know something about the peak value µ of | S | in the complementary frequency range. Introduction to Feedback Control Theory 1 Wz 0. This we already concluded from Bode’s sensitivity integral. Define the bounding function b (ω) = ε µ for |ω| ≤ ω1 .1.7.27. We argue that if | S| is required to be small in a certain frequency band—in particu- µ 1 | S| ε 0 ω1 ω (log scale) Figure 1.121) implies that if The function w z is positive and also log | Bpoles log | S | is negative over some frequency range so that.28. ω1 ]. The quantity dWz ( jω) = w z ( jω)d ω may be viewed as a weighted length of the frequency interval. Suppose that we wish | S ( jω)| to be less than a given small number ε in the frequency band [0. The weighting function determines to what extent small values of | S| at low frequencies need to be compensated by large values at high frequencies.126) 42 . −1 ( z )| is positive. Figure 1. The larger the weighted length is.3 (Positivity of right-hand side of the Freudenberg-Looze equality). (1. The comments that follow hold for plants that have at least one right-half plane zero. a low-frequency band—it necessarily peaks in another band. equivalently.: Bounds on | S | lar.5 0 (b) (a) 0 1 2 3 4 ω/|z| 5 Figure 1.123). the more the interval contributes to the right-hand side of (1. Prove that −1 log | Bpoles ( z )| is positive for any z whose real part is positive. for |ω| > ω1 .

If the plant has unstable poles. it is greater than 1. Thus.1. plants with a right-half plane zero close to a right-half plane pole are difficult to control.128) results in the inequality 1 Wz (ω1 ) −1 (z) µ ≥ ( ) 1−Wz (ω1 ) · Bpoles ε We note this: • For a fixed zero z = x + j y and fixed ω1 > 0 the exponents in this expression are positive.129) p ¯i + z pi − z (1. the plant is difficult to control. the peak value assumes excessive values. if excessive peak values are to be avoided. (1. for ε < 1 the peak value µ is greater than 1. the smaller ε is. as a result.4 (Effect of right-half plane open-loop zeros). Moreover. −1 By 1.7. Limits of performance Then | S ( jω)| ≤ b (ω) for ω ∈ R and the Freudenberg-Looze equality together imply that b needs to satisfy ∞ 0 −1 log(b (ω)) dWz (ω) ≥ log | Bpoles ( z )|. if the width of the band over which sensitivity is required to be small is greater than the magnitude of the right-half plane zero z. and.3 we have | Bpoles ( z )| ≥ 1. Hence. the larger is the peak value. 2. Summary 1. right-half plane poles make the plant more difficult to control. the peak values of | S | are correspondingly large. Otherwise.130) is large if the right-half plane zero z is close to any of the right-half plane poles p i . small sensitivity at low frequencies is paid for by a large peak value at high frequencies. the achievable disturbance attenuation is further impaired. the two exponents increase monotonically with ω1 . The Freudenberg-Looze equality holds for any right-half plane zero. 43 . • For fixed ε. The Freudenberg-Looze equality holds for any right-half plane zero.130) is replaced with 1 if there are no right-half plane poles. The number (1.127) Evaluation of the left-hand side leads to −1 ( z )|. and approach ∞ as ω1 goes to ∞. The situation is worst when a right-half plane zero coincides with a right-half plane pole—then the plant has either an uncontrollable or an unobservable unstable mode. The number −1 | Bpoles ( z )| = i 1 1−Wz (ω1 ) . the width of the band over which sensitivity may be made small cannot be extended beyond the magnitude of the smallest right-half plane zero. Hence.7. 1. This effect is especially pronounced when one or several right-half plane pole-zero pairs are close. If this situation occurs. (1.7. The first exponent (that of 1/ε) crosses the value 1 at ω = x2 + y2 = |z|. Therefore. We summarize the qualitative effects of right-half plane zeros of the plant on the shape of the sensitivity function S (Engell 1988). The upper limit of the band over which effective disturbance attenuation is possible is constrained from above by the magnitude of the smallest right-half plane zero. Hence. Therefore. in particular the one with smallest magnitude.128) Resolution of (1. Wz (ω1 ) log ε + (1 − Wz (ω1 )) log µ ≥ log | Bpoles (1.

(s + 1.8018 ) .62293. Introduction to Feedback Control Theory 3. and poles at ±0. The measured output is the angle φ that the lower pendulum makes with the vertical. The pendulums have equal lengths L. L ( K 2 + 6 K + 1 ) s4 − 4 ( K + 2 ) s2 + 3 (1. 1. and ±1.22474. Two pendulums are mounted on top of each other.6051 )(s − 0. The plant has two right-half plane poles and one right-half plane zero.6292 (s + 1. This compensator transfer function is C (s ) = 1.132) This plant transfer function has zeros at 0. If we furthermore let L = 1 then P (s ) = 28 4 9 s s 2 ( −2 s 2 + 3 ) . For a pendulum whose mass is homogeneously distributed along its length K is 1 3 . 0.2247 )s2 (1. The input to the system is the horizontal position u of the pivot of the lower pendulum. As seen in the next subsection the right-half plane pole with largest magnitude constrains the smallest bandwidth that is required.: Double inverted pendulum system Example 1.7. φ u Figure 1.60507 and ±1.5 (Double inverted pendulum).6229 )(s + 0. The transfer function P from the input u to the angle φ may be found (Kwakernaak and Westdijk 1985) to be given by P (s ) = 1 s2 ( − ( 3 K + 1 ) s2 + 3 ) .29. By techniques that are explained in Chapter 6 we may calculate the transfer function C of the compensator that makes the transfer matrices S and T stable and at the same time minimizes the peak value of supω∈R | S ( jω)| of the sensitivity function S of the closed-loop system. equal masses m and equal moments of inertia J (taken with respect to the center of gravity). We consider an example.1. The trade-off between disturbance attenuation and amplification is subject to Bode’s sensitivity integral. 2 − 28 3 s +3 (1.133) 44 . To illustrate these results we consider the double inverted pendulum system of Fig.29.131) with K the ratio K = J /(mL2 ). If the plant has no right-half plane zeros the maximally achievable bandwidth is solely constrained by the plant capacity.

1.2247 of the right-half plane plant zero is approached the more the peak value increases. The plot of Fig.6 (p.2247 )(s + 0. This compensator no longer cancels the double zero at 0.01. 100 | S| 10 (a) (b) (c) 40 dB 20 dB (d) 1 10−4 10−2 100 102 ω 104 0 dB 106 Figure 1. The reason is that the loop gain does not approach 0. Exercise 1. at the price of an increase in the peak value of the magnitude of S .3. Actually.6051 )(s − 0. say.7.7.9. Limits of performance Figure 1.134) Figure 1. which results in large plant input amplitudes and high sensitivity to measurement noise. which means that constant disturbances are not attenuated. No compensator exists with a smaller peak value. the plant is unable to compensate for constant disturbances. The compensators considered so far all result in sensitivity functions that do not approach the value 1 as frequency increases to ∞. Check that the double inverted pendulum does not have the parity interlacing property of § 1.30 shows the magnitude plot (a) of the corresponding sensitivity function S . This undesirable phenomenon. no stabilizing compensator exists that by itself is stable.134) reduces the sensitivity to disturbances up to a frequency of about 0. (s + 1. the sensitivity S at zero frequency is 1. By further modifying the compensator this frequency may be pushed up.133) the compensator (1. There exist compensators that achieve magnitudes less than 1 for the sensitivity in the frequency range. between 0.8273 ) . The reason is structural: Because the plant transfer function P equals zero at frequency 0. 1.7. This is achieved.6229 )(s + 0. by the compensator with transfer function C (s ) = 1.1. It causes the plant input u to drift.1178. Note that the compensator has a double pole at 0. As a result. is removed in Example 1. The corresponding double closed-loop pole at 0 actually makes the closed-loop system unstable. The cost is a further increase of the peak value. This magnitude does not depend on the frequency ω and has the constant value 21.30 shows two more magnitude plots (c) and (d). To reduce the sensitivity of the feedback system to low frequency disturbances it is desirable to make | S | smaller at low frequencies.6 (Interlacing property).1178 for the magnitude of the sensitivity function is quite large (as compared with 1). Exercise 1. 20). the zeros at 0 play the role of right-half plane zeros. which in the closed loop cancels against the double zero at 0 of the plant.30 shows the magnitude (b) of the corresponding sensitivity function. The closer the magnitude 1. for instance.30.: Sensitivity functions for the double pendulum system The peak value 21.7. Hence.017390 )(s − 0. except that they bound the frequencies where S may be made small from below rather than from above.01 and 0. Figure 1.7 (Lower bound for the peak value of S ).025715 ) (1. 45 .6202 (s + 1.30 also shows that compared with the compensator (1.

Introduction to Feedback Control Theory 1. | T | may only be made small at frequencies that exceed the magnitude of the open-loop right-half plane pole with largest magnitude. We consider the implications of the equality (1. Trade-offs for the complementary sensitivity function Symmetrically to the results for the sensitivity function well-defined trade-offs hold for the complementary sensitivity function.137) on the shape of T . close right-half plane pole-zero pairs make things worse. 1. By an argument that is almost dual to that for the sensitivity function it follows that if excessive peaking of the complementary sensitivity function at low and intermediate frequencies is to be avoided. ∞ = supω∈R | S ( jω)|. This is seen by writing the complementary sensitivity function as T= 1 L = 1+ L 1+ 1 L . 1. and vice-versa. Practically. The lower limit of the band over which the complementary sensitivity function may be made small is constrained from below by the magnitude of the largest right-half plane open-loop pole. and z any right-half plane zero of the plant.138) formed from the open right-half plane zeros zi of L. Summary 1.1.4.1 leads to the conclusion that for any righthalf plane open-loop pole p = x + j y we have (Freudenberg and Looze 1988) ∞ −∞ log (| T ( jω)| ) x2 x −1 d ω = π log | Bzeros ( p )|.7.129) to (1. If the plant has right-half plane zeros.7. that is. S prove that if the closed-loop system is stable then S ∞ −1 ≥ Bpoles (z) . Check that the compensator (1. 2. We summarize the qualitative effects of right-half plane zeros of the plant on the shape achievable for the complementary sensitivity function T (Engell 1988). the achievable reduction of T is further impaired.7. the achievable bandwidth is always greater than this magnitude. Again. T needs to be small at high frequencies. 2.136) Comparison with Freudenberg-Looze equality of 1.133) actually achieves this lower bound.137) with Bzeros the Blaschke product Bzeros (s ) = i zi − s z ¯i + s (1.8 (Effect of right-half plane open-loop poles).135) where Bpoles is the Blaschke product formed from the right-half plane poles of the plant. (1. 46 . Use (1. Whereas the sensitivity S is required to be small at low frequencies. The role of the right-half plane zeros is now taken by the righthalf plant open-loop poles. + ( y − ω)2 (1. This effect is especially pronounced when one or several right-half plane pole-zero pairs are very close. Define S ∞ as the peak value of | S |.

Correspondingly.: Right-half plane zeros and poles constrain S and T Figure 1.00061682s ) (1. The compensator C (s ) = − 1.3 we introduced the two-degrees-of-freedom configuration of Fig. 1.82453 ) .32 shows that for low frequencies the magnitude plot (e) of the corresponding sensitivity function closely follows the magnitude plot (c) of Fig.30. S can only be small up to the magnitude of the smallest right-half plane zero.6229 )(s + 0.33.32.8. Two-degrees-of-freedom feedback systems In § 1. Figure 1.6229 of the right-half plane plant pole with largest magnitude. We return to the double inverted pendulum of Example 1. starting at a frequency of about 100. 1.8. T can only start to roll off to zero at frequencies greater than the magnitude of the largest righthalf plane pole.60507 )(s − 0.139) whose transfer function is strictly proper.31 summarizes the difficulties caused by right-half plane zeros and poles of the plant transfer function P.31. For robustness to plant uncertainty and reduction of the susceptibility to measurement noise it is necessary that the loop gain decreases to zero at high frequencies. the complementary sensitivity function also decreases to zero while the sensitivity function approaches 1. extends over the intermediate frequency range. This. in turn. Example 1.5136(s + 1. The crossover region.025394 )(1 + 0. 1. 1. (s + 1. At high frequencies the magnitude of S drops off to 1. accomplishes this.7.1.32 shows that making | S | drop off at a lower frequency than in (e) causes the peak value to increase. The lowest frequency at which | S| may start to drop off to 1 coincides with the lowest frequency at which the complementary sensitivity may be made to start decreasing to zero.5.9 (Double inverted pendulum).017226 )(s − 0. The function of 47 .2247 )(s + 0. Two-degrees-of-freedom feedback systems | S| (log scale) 0 dB ω (log scale) magnitude of the smallest right-half plane zero magnitude of the smallest right-half plane pole |T | (log scale) 0 dB ω (log scale) magnitude of the smallest right-half plane zero magnitude of the smallest right-half plane pole Figure 1. which is repeated in Fig. 1. is determined by the magnitude 1. however. The magnitude plot (f) in Fig.7. where S and T assume their peak values.

142) Right-half plane zeros of this uncompensated transfer function are a handicap for the compensation. In this section we study whether it makes a difference which configuration is chosen.34(a).1.33 we write the plant and compensator transfer functions in the polynomial fraction form P= N . In the configuration of Fig.32. 1 + PC Dcl (1. X (1.33. hence. In this same configuration. the closed-loop transfer function H from the command signal r to the control system output z is H= NY PC F= F. D C= Y . Next consider the two-degrees-of-freedom configuration of Fig. Right-half plane roots of N (that is. and.: Two-degrees-of-freedom feedback system configuration the precompensator F is to improve the closed-loop response to command inputs r . We restrict the discussion to SISO systems. Introduction to Feedback Control Theory 100 | S| 10 (f) 1 10 −4 40 dB (c) (e) 20 dB 0 dB 10 −2 10 0 10 ω 2 10 4 10 6 Figure 1. open-loop right-half plane zeros) and right-half plane roots of Y may well be present. Dcl (1. 1.: More sensitivity functions for the double pendulum system r F C P z Figure 1. Figure 1. We now have for 48 . Such zeros cannot be canceled by corresponding poles of F because this would make the precompensator. 1.141) The prefilter transfer function F is available to compensate for any deficiencies of the uncompensated closed-loop transfer function H0 = NY . the whole control system.140) The feedback loop is stable if and only if the roots of the closed-loop characteristic polynomial Dcl = DX + NY are all in the open left-half complex plane.34 shows two other two-degrees-of-freedom configurations. unstable.

144) (1. The closed-loop transfer function now is H= N X2 Y1 PC1 = . 1 + PC Dcl (1. if the compensator has right-half plane poles or zeros.147) 49 .33. 1. C2 = = Y1 Y2 = Y. (1.34. This implies that X1 X2 = X and Y1 Y2 = Y . Two-degrees-of-freedom feedback systems r F P z C (a) r C1 u P z C2 (b) Figure 1. 1. it is impossible to achieve identical overall closed-loop transfer functions for the two configurations. 1 + PC Dcl H0 Inspection shows that the open-loop plant zeros re-occur in the uncompensated closed-loop transfer function H0 but that instead of the roots of Y (the compensator zeros) now the roots of X (the compensator poles) appear as zeros.143) suggests that there may exist a configuration such that the numerator of the uncompensated closed-loop transfer function is independent of the compensator. C1 and C2 have the polynomial fractional representations C1 = Y1 . consider the configuration of Fig. X1 C2 = Y2 .142) and (1. or both. In fact.8.145) Inspection shows that the numerator of H is independent of the compensator if we let X2 Y1 = 1. Hence. Comparison of (1.146) C1 = X1 X1 X2 X X2 The closed-loop transfer function now is H= N . Dcl (1.34(a) we need C = C1 C2 . To investigate this.33 and 1.1.34(b).: Further two-degrees-of-freedom feedback system configurations the closed-loop transfer function H= P NX F= F. X2 (1.143) To match the closed-loop characteristics of the configurations of Figs. the precompensator design problem for this configuration is different from that for the configuration of Fig. so that Y1 1 1 Y2 = = . 1.

The second disadvantange may be overcome by modifying (1.36 the closed-loop transfer function becomes H= 23 Half NF F0 . (1. This still allows implementation by a state realization of order equal to the degree of X (see Exercise 1. By application of a further prefilter F0 as in Fig. (1. Dcl (1. though with a different connotation than the one used here.34(b) has two disadvantages: 1. This might be called a 1 2 23 degrees-of-freedom control system .149) Y 1 r + e. The closed-loop transfer function is H= NF .: 1 2 The corresponding configuration of Fig. Introduction to Feedback Control Theory C0 r F X e Y X u P z 1 -degrees-of-freedom control system Figure 1. 1.152) degrees of freedom were introduced in control engineering terminology by Grimble (1994). The combined block C0 is jointly realized as a single input-output-state system.151) 1 The design of the prefilter amounts to choosing the polynomial F . which is physically impossible (unless Y is of degree zero). The reason is that one degree of freedom has been used to make the numerator of the closed-loop transfer function independent of the compensator. 1.35. 1. The first difficulty may be remedied by noticing that from the block diagram we have u = C1 r − C1 C2 z = This implies Xu = r + Ye.148) This input-output relation — with r and e as inputs and u as output — may be implemented by a state realization of order equal to the degree of the polynomial X .8. 2. X X (1.149) to Xu = Fr + Ye. 50 .35.1). The configuration actually has only one degree of freedom. The compensator is represented in the block diagram of Fig.150) with F a polynomial of degree less than or equal to that of X . The configuration appears to require the implementation of a block with the purely polynomial transfer function C2 (s ) = Y (s ).1. Dcl (1.

r e b0 c0 1 s b1 x0 c1 1 s bn−1 cn−1 x1 1 s bn xn−1 cn u a0 a1 an−1 Figure 1. has the closed-loop characteristic polynomial DX + NY . Find a state representation for the compensator.37. 1.37.8. 3.35.37. bn sn + bn−1 sn−1 + bn−2 sn−2 + · · · + b0 . Prove that the feedback system of Fig. cn sn + cn−1 sn−1 + cn−2 sn−2 + · · · + c0 .155) F (s ) = Y (s ) = 1.: Realization of the 1 1 2 -degrees-of-freedom compensator Xu = Fr + Ye 51 .5. Two-degrees-of-freedom feedback systems C0 r F0 F X e Y X u P z 1 -degrees-of-freedom configuration Figure 1. F and Y as X (s ) = sn + an−1 sn−1 + an−2 sn−2 + · · · + a0 .154) (1. 1.5 in § 2. 1. Exercise 1. Show that the 1 1 2 -degrees-of-freedom compensator Xu = Fr + Ye may be realized as in Fig.9.9. An application is described in Example 2.1 (Realization of the 1 1 2 -degrees-of-freedom compensator).36. with the dashed block realized as in Fig.: 2 2 1 This results in a 2 2 -degrees-of-freedom control system.8. Represent the poly- nomials X . 2.1. (1.153) (1.

10. Conclusions It is interesting to observe a number of “symmetries” or “dualities” in the results reviewed in this chapter (Kwakernaak 1995). y = Cx + Du. Proof 1. Note that to a large extent performance and robustness go hand in hand. so that u = −( I + D )−1 Cx. Closed-loop characteristic polynomial We first prove (1. • right-half plane open-loop zeros limit the frequency up to which S may be made small at low frequencies. the requirements for good performance imply good robustness. Appendix: Proofs In this section we collect a number of proofs for Chapter 1. 1.1 (Closed-loop characteristic polynomial).10. Such well-designed feedback systems are • robust with respect to perturbations of the inverse loop gain at low frequencies. This is also true for the critical crossover region. 1. As the result. Introduction to Feedback Control Theory 1. The complementary sensitivity function T is • approximately equal to 1 at low frequencies and • small at high frequencies. Since by assumption I + D = I + L ( j∞ ) is nonsingular the closed-loop system is well-defined. From u = − y we obtain with the output equation that u = −Cx − Du. and • robust with respect to perturbations of the loop gain at high frequencies. 1. where peaking of both S and T is to be avoided. and • right-half plane open-loop poles limit the frequency from which T may be made small at high frequencies.1. Let x ˙ = Ax + Bu. Furthermore.3. the sensitivity function S is • small at low frequencies and • approximately equal to 1 at high frequencies.156) be a state realization of the block L in the closed-loop system of Fig. For good performance and robustness the loop gain L of a well-designed linear feedback system should be • large at low frequencies and • small at high frequencies.9.44) in Subsection 1. 52 .11.1. and vice-versa. both for performance and robustness. It follows that L (s ) = C (sI − A )−1 + D. (1.3.10. that is.

10.159) Denoting the open-loop characteristic polynomial as det (sI − A ) = χ(s ) we thus have χcl (s ) det[ I + L (s )] = . image R (s ) under R traverses a closed contour that is denoted as R (C ).2.1. As the complex number s traverses the contour C in clockwise direction.160) 1. (1.38. Then as s traverses the contour C exactly once in clockwise direction.158) (1. 1.: Principle of the argument. its Im R Im Re 0 Re R (C ) C pole of R zero of R Figure 1. The Nyquist criterion The proof of the generalized Nyquist criterion of Summary 1. 1.13 in Subsection 1.2. also shown in Fig.10. χ(s ) det[ I + L ( j∞ )] (1. 24 See Henrici (1974). Summary 1. Right: the image R (C ) of C under a rational function R. 53 .5 relies on the principle of the argument of complex function theory24 . (the number of times R (s ) encircles the origin in clockwise direction as s traverses C ) = (the number of zeros of R inside C ) − (the number of poles of R inside C ). Appendix: Proofs Substitution of u into the state differential equation shows that the closed-loop system is described by the state differential equation x ˙ = [ A − B ( I + D )− 1 C ] x .38. Principle of the argument Let R be a rational function.10. and C a closed contour in the complex plane as in Fig. The generalized form in which we state the principle may be found in Postlethwaite and MacFarlane (1979).3. The characteristic polynomial χcl of the closed-loop system hence is given by χcl (s ) = = det[sI − A + B ( I + D )−1 C ] det (sI − A ) · det[ I + (sI − A )−1 B ( I + D )−1 C ].157) Using the well-known determinant equality det ( I + M N ) = det ( I + N M ) it follows that χcl (s ) = = = det (sI − A ) · det[ I + ( I + D )−1 C (sI − A )−1 B] det (sI − A ) · det[( I + D )−1 ] · det[ I + D + C (sI − A )−1 B] det (sI − A ) · det[( I + D )−1 ] · det[ I + L (s )].3.38. Left: a closed contour C in the complex plane. (1.

Bode’s sensitivity integral The proof of Bode’s sensitivity integral is postponed until the next subsection.13. 1. Introduction to Feedback Control Theory We prove the generalized Nyquist criterion of Summary 1.10.10..1.3. Accepting it as true we use it to derive the inequality (1.161) From the assumption that | S ( jω)| ≤ α < 1 for 0 ≤ ω ≤ ω L it follows that if 0 < ω L < ω H < ∞ then 0 = 0 ∞ log | S ( jω)| d ω log | S ( jω)| d ω + ωH ωL = 0 ωL log | S ( jω)| d ω + log | S ( jω)| + ∞ ωH ∞ ωH log | S ( jω)| d ω (1. Then by the principle of the argument the number of times that the image of det( I + L ) encircles the origin equals the number of right-half plane roots of χcl (i.117) of Subsection 1. ω L log α + (ω H − ω L ) ω L ≤ω≤ω H sup log | S ( jω)| d ω. where we choose the contour C to be the so-called Nyquist contour or D-contour indicated in Fig. If the open-loop system is stable then we have according to Bode’s sensitivity integral ∞ 0 log | S ( jω)| d ω = 0.39.e. the number of unstable closed-loop poles) minus the number of right-half plane roots of χol (i.162) ≤ As a result. The radius ρ of the semicircle is chosen so large that the contour encloses all the right-half plane roots of both χcl and χol .3.39.160).3. (1.3 (Lower bound for peak value of sensitivity).6. the number of unstable open-loop poles).164) 54 .e. We apply the principle of the argument to (1. (1. The Nyquist criterion follows by letting the radius ρ of the semicircle approach ∞. (ω H − ω L ) ω L ≤ω≤ω H sup log | S ( jω)| ≥ ω L log 1 − α ∞ ωH log | S ( jω)| d ω.163) Next consider the following sequence of (in)equalities on the tail part of the sensitivity integral ∞ ωH log | S ( jω)| d ω ≤ ≤ ∞ ωH ∞ ωH | log | S ( jω)|| d ω | log S ( jω)| d ω = ∞ ωH | log[1 + L ( jω)]| d ω.. Proof of the generalized Nyquist criterion. Note that as ρ approaches ∞ the image of the semicircle under det ( I + L ) shrinks to the single point det ( I + L ( j∞ )).: Nyquist contour 1. (1. Proof 1. Im 0 ρ Re Figure 1.

10. 2 k (1. implies that | L ( jω)| ≤ ε < 0.4.168) ω L ≤ω≤ω H This is what we set out to prove. We present a brief proof of Poisson’s integral formula based on elementary properties of the Laplace and Fourier transforms (see for instance Kwakernaak and Sivan (1991)).5 for ω > ω H . With this it follows from (1.7.4 (Poisson’s integral formula). Appendix: Proofs From the inequalities (4. Hence. ω ∈ Ê. x2 + ( y − ω)2 (1. t ∈ Ê. its inverse Laplace transform f is zero on (−∞. is the inverse Fourier transform of the frequency function 1 2x 1 + = 2 . (1.167) The final step of of the proof is to conclude from (1. 2 (1. by the integral relation F (s ) = 1 π ∞ −∞ F ( jω) x d ω. 0 ). (1.165) that ∞ ωH log | S ( jω)|d ω ≤ ∞ ωH 3 | L ( jω)| d ω ≤ 2 ∞ ωH 3ε ω H 2 ω k +1 dω = 3ε ω H .164) and (1.1 relies on Poisson’s integral formula from complex function theory. 1. Then the value of F (s ) at any point s = x + j y in the open right-half plane26 can be 2 2 recovered from the values of F ( jω).166) with 0 < ε < 0.171) For x > 0 the function e−x|t| . Let F be a function → that is analytic in the closed right-half plane25 and is such that | F ( R e jθ )| =0 R→∞ R lim (1. for Re ( s ) > 0. Proof of Poisson’s integral formula. 55 . Summary 1.35) of Abramowitz and Stegun (1965) we have for any complex number z such that 0 ≤ |z| ≤ 0.1.38) and (4. x − jω x + jω x + ω2 25 For 26 That ω ∈ Ê. (1.10.165) for ω > ω H (1.169) for all θ ∈ [− π .172) rational F this means that F has no poles in Re ( s ) ≥ 0.1.1. Since by assumption the function F is analytic in the closed right-half plane.5. for s = x + j y we may write F (s ) = ∞ −∞ f (t ) e−(x+ j y)t dt = ∞ −∞ f (t ) e−x|t| e− j yt dt . −π ].10. is.5828 | log (1 + z )| ≤ − log (1 − |z| ) ≤ | log (1 − |z| )| ≤ The assumption that | L ( jω)| ≤ ε ωH ω k +1 3|z| . Limits of performance The proof of the Freudenberg-Looze equality of Summary 1.163) that sup log | S ( jω)| ≥ 1 ωH − ωL ω L log 1 3εω H − α 2k .170) A sketch of the proof of Poisson’s integral formula follows.

179) −1 ˜ ( z ) = Bpoles ˜ ( jω)| = | S ( jω)| for ω ∈ Ê.1.7. Proof 1. 56 . Moreover. Taking the real parts of the left. Bode’s sensitivity integral (1.176) ˜ yields Application of Poisson’s formula to log S ˜ (s ) = log S 1 π ∞ −∞ ˜ ( jω)) log ( S x2 x dω + ( y − ω)2 (1. We first write L as L = N / D.114) follows from Proof 1.5.180) which is what we set out to prove. hence. Because of the right-half plane roots pi of D. 27 That is. 1 + L(z) (1. The proof of the Freudenberg-Looze equality of Sum- mary 1.178) Now replace s with a right-half plane zero z = x + j y of L. Hence. after setting s = z we may reduce (1. x2 + ( y − ω)2 (1. the denominator D + N has all its roots in the open left-half plane.173) F ( j( y − ω)) By replacing the integration variable ω with y − ω we obtain the desired result F (s ) = 1 π ∞ −∞ F ( jω) x d ω. N and D have no common factors. however. we have F (s ) = = ∞ −∞ f (t ) e− j yt ∞ ∞ −∞ 2x dω dt e jω t x2 + ω 2 2π ∞ 1 π −∞ x2 x + ω2 −∞ f (t ) e− j( y−ω)t dt d ω (1. Then S(z) = 1 = 1.and right-hand sides we have ˜ ( s )| = log | S 1 π ∞ −∞ ˜ ( jω)| ) log(| S x dω x2 + ( y − ω)2 (1. Introduction to Feedback Control Theory Thus.174) We next consider the Freudenberg-Looze equality of Summary 1. | Bpoles ( jω)| = 1 for ω ∈ Ê.10.1. so that | S Thus.177) for any open right-half plane point s = x + j y.5 (Freudenberg-Looze equality). so that S ( z ).178) to −1 ( z )| = log | Bpoles 1 π ∞ −∞ log (| S ( jω)| ) x d ω.7. a zero of S. D+ N (1. Then S= D . any right-half plane pole z of L is a root of D and. x2 + ( y − ω)2 (1.1 follows that of Freudenberg and Looze (1988). We should like to apply Poisson’s formula to the logarithm of the sensitivity function.10. S (1. Furthermore. with N and D coprime polynomials27 . S is analytic in the closed right-half plane. To remedy this we cancel the right-half plane zeros of S by considering −1 ˜ = Bpoles S. a right-half plane zero of N . that is.175) Since by assumption the closed-loop system is stable. and Poisson’s formula cannot be used. log S is not analytic in the right-half plane.

178). replacing S we obtain (exploiting the fact that | Bpoles | = 1 on the imaginary axis) ∞ −∞ log(| S ( jω)| ) x2 −1 d ω = π x log | S ( x )| + π x log | Bpoles ( x )| .10. The starting point for the proof of Bode’s sensitivity integral −1 ˜ with Bpoles S. the left-hand side of this expression approaches the Bode integral.114) is (1. Setting y = 0. Finally. 57 . x→∞ −1 lim x log | Bpoles ( x )| = = x→∞ lim x log i p ¯i + x = lim Re x→∞ pi − x x log i 1+ 1− p ¯i x pi x 2 i Re pi . Appendix: Proofs Proof 1.181) Letting x approach ∞.10. (1.6 (Bode’s sensitivity integral).1. while under the assumption that L has pole excess two the quantity x log | S ( x )| approaches 0. and multiplying on the left and the right by π x (1. x2 + ω 2 (1.182) This completes the proof.

Introduction to Feedback Control Theory 58 .1.

The presentation is far from comprehensive. Nyquist’s stability criterion (Nyquist 1932). In this chapter an impression is given of some of the classical highlights of control. Classical Control System Design Overview – Classical criteria for the performance of feedback control systems are the error constants and notions such as bandwidth and peaking of the closed-loop frequency response. Nyquist. The main results in classical control theory emerged in the period 1930–1950. lead compensation and lag-lead compensation. W. With these techniques a designer attempts to synthesize a compensation network or controller that makes the closed-loop system perform as required. single-output linear control systems. Quantitative feedback design (QFT) allows to satisfy quantitative bounds on the performance robustness. Much of what now is called modern robust control theory has its roots in these classical results. together with an abundance of additional material. the initial period of development of the field of feedback and control engineering. R. The historical development of the “classical” field started with H. Nichols plots. To an extent these methods are of a heuristic nature. S. Nichols. Important classical control system design methods consist of loop shaping by lag compensation (including integral control). Bode’s frequency domain analysis (Bode 1940). N . Black’s analysis of the feedback amplifier (Black 1934). 2.and root loci belong to the basic techniques of classical and modern control.2. The methods obtained a degree of maturity during the fifties and continue to be of great importance for the practical design of control systems. H. Introduction In this chapter we review methods for the design of control systems that are known under the name of classical control theory. Evans (1954).” “compensation.” and “network”) is from the field of amplifier circuit design (Boyd and Barratt 1991).1. The terminology in use in that era (with expressions such as “synthesize. and Philips (1947). More extensive introductions may be found in classical and modern textbooks. and W. Evans’ root locus method (Evans 1948). settling time and overshoot of the step response. James. and rise time. H. 59 . and M -. Truxal (1955). Well-known sources are for instance Bode (1945). especially for the case of single-input. The graphical tools Bode. which both accounts for their success and for their limitations.

(2. Steady state error behavior 2. is (2. tn ½(t ). if it exists. Thus.2. They rely on an underlying and explicit model of the plant that needs to be controlled. 2. The main emphasis in classical control theory is on frequency domain methods. 89). ½(t ) = 1 for t ≥ 0 and ½(t ) = 0 for t < 0. 79). and Dorf (1992). In § 2. 92). Thaler (1973). 60) we discuss the steady-state error properties of feedback control systems. (2.1.6 (p. Powell. D’Azzo and Houpis (1988). The Laplace transform of the reference input is r ˆ(s ) = 1/sn+1 .5 (p. 81) the basic classical techniques of lead. 2.7 (p.1. 87).3 (p. for n = 1 the input is a ramp with unit slope and for n = 2 it is a parabola with unit second derivative. Horowitz (1963). All the design methods are model-based. This naturally leads to review of the notion of integral control in § 2.3 (p.4 (p. In the 1970s quantitative feedback theory (QFT) was initiated by Horowitz (1982).2 (p. The historically interesting Guillemin-Truxal design procedure is considered in § 2.2.3) r F C P z Figure 2. We analyze the steady-state behavior of this system. Nyquist and Nichols plots.8 (p.9 (p. that is. 69) we review various important classical graphic representations of frequency responses: Bode. Ogata (1970). lag and lag-lead compensation are discussed. Classical Control System Design Savant (1958). The control error is the signal ε defined by r (t ) = ε(t ) = r (t ) − y (t ). Franklin.2. t ≥ 0. t →∞ t ≥ 0. A brief survey of root locus theory as a method for parameter selection of compensators is presented in § 2. In § 2. the asymptotic behavior in the time domain for t → ∞ when the reference input r is a polynomial time function of degree n. This powerful extension of the classical frequency domain feedback design methodology is explained in § 2.: Single-loop control system configuration 60 . The design goals and criteria of classical control theory are considered in § 2. 64) form an exception. Powell. Franklin. 64). In § 2. Van de Vegte (1990). ε∞ = lim ε(t ).1.1) n! where ½ is the unit step function. Consider the typical single-loop control system of Fig. Steady-state behavior One of the fundamental reasons for adding feedback control to a system is that steady-state errorsare reduced by the action of the control system. and EmamiNaeini (1991). The experimental Ziegler-Nichols rules for tuning a PID-controller mentioned in § 2.2) The steady-state error. and EmamiNaeini (1986). For n = 0 we have a step of unit amplitude.

lim ε∞ (2.8) we have for a type k system without prefilter (that is. if F (s ) = 1)  for 0 ≤ n < k . a nonzero finite steady-state error for an input of order k.6) with n the order of the polynomial reference input.8) This equation allows to compute the steady-state error of the response of the closed-loop system to the polynomial reference input (2. a finite steady-state error for a ramp input. if it exists.10) s ↓0  = ∞ for n > k . Type k systems A closed-loop system with loop gain L is of type k if for some integer k the limit lims↓0 sk L (s ) exists and is nonzero. The system is of type k if and only if the loop gain L has exactly k poles in the origin. 2. Assuming that the closed-loop system is stable.9) lim sn L (s )  s ↓0 =0 for n > k . sn (2. A type 1 system has zero steady-state error for a step input.4) 1 − H (s ) . Consequently. Steady state error behavior The Laplace transform of the control error is ˆ (s ) = r ε ˆ( s ) − y ˆ (s ) = The function L PC F = F (2. It follows that the steady-state error. Figure 2. sn [1 + L (s )] (2. 61 .2. and infinite steady-state error for inputs of order two or higher. and an infinite steady-state error for inputs of order greater than k.2. 1 + L (s ) (2. a finite steady-state error for a second-order input. if F (s ) = 1.5) 1 + PC 1+ L is the closed-loop transfer function. If the system is of type k then   = ∞ for 0 ≤ n < k . (2. if the system is of type k and stable then it has a zero steady-state error for polynomial reference inputs of order less than k.1). Assume for the moment that no prefilter is installed. so that all the poles of H are in the left-half plane. and an infinite steady-state error for ramp and higher-order inputs. and infinite steady-state error for inputs of order three or higher. we may apply the final value theorem of Laplace transform theory. A type 0 system has a nonzero but finite steady-state error for a step reference input. Then 1 − H (s ) = 1 . Hence. =0 for n = k .2.7) and the steady-state error is (n) ε∞ = lim s ↓0 1 . A type 2 system has zero steady-state error for step and ramp inputs. that is. sn+1 s sn+1 (2.2 illustrates the relation between system type and steady-state errors. L = PC is the loop gain.  =0 (n) =0 for n = k .2. from (2. is given by H= (n) ˆ (s ) = lim ε∞ = lim sε s ↓0 s ↓0 1 H (s ) 1 − H (s ) − n+1 = .

11) The number K p = L (0 ) is the position error constant. the larger the error constant is the smaller is the corresponding steady-state error.5 1 Zero position error type 0 10 20 30 0.13) The number Ka = lims↓0 s2 L (s ) is the acceleration constant. Table 2.5 0 0 type 1 10 20 30 0 0 Nonzero velocity error 20 20 Zero velocity error type 1 10 10 type 2 0 0 0 5 10 15 20 0 5 10 15 20 Nonzero acceleration error 100 100 Zero acceleration error 50 type 2 50 type 3 0 0 5 time 10 0 0 5 time 10 Figure 2. 1 + L (0 ) 1 + Kp (2.3. In each case.: System type and steady-state errors Exercise 2. What condition needs to be imposed on F in case a prefilter is used? 2. The results of this subsection have been proved if no prefilter is used. s↓0 sL ( s ) s[1 + L (s )] Kv (2.2. For a type 1 system the steady-state error for a ramp input is (1 ) ε∞ = lim s ↓0 1 1 1 = lim = .2. Ka (2. The steady-state error of a type 2 system to a second-order input is (2 ) = lim ε∞ 1 s ↓0 s 2 L ( s ) = 1 .2. Error constants If the system is of type 0 then the steady-state error for step inputs is (0 ) = ε∞ 1 1 = . Classical Control System Design Nonzero position error 1 0. that is. 62 .1 summarizes the various steady-state errors.1 (Type k systems with prefilter).2. F (s ) = 1.12) The number Kv = lims↓0 sL (s ) is the velocity constant.

2.: Steady-state errors Input System type 0 type 1 type 2 step 1 1 + Kp 0 0 ramp ∞ 1 Kv 0 parabola ∞ ∞ 1 Ka The position.18) 63 . a nonzero finite steady-state response for polynomial disturbances of order k. As long as the type 1 property is not destroyed the system preserves its zero steady-state position error. Prove that the velocity and acceleration error constants may be expressed as 1 = Kv n j=1 1 − pj m j=1 1 zj (2. and an infinite steady-state response for polynomial disturbances of order greater than k. and that the disturbance is polynomial of the form tn ½(t ). n! Show that the steady-state response of the output is given by v(t ) = n = lim z (t ) = lim z∞ t →∞ (2. t ≥ 0.3 (Steady-state errors and closed-loop poles and zeros).14) 1 L (s )] s↓0 sn [1 + .2 (Steady-state response to polynomial disturbance inputs).2.1. Suppose that the closed-loop system is stable. Consider the feedback loop with disturbance input of Fig. It follows that a type k system has a zero steady-state response to polynomial disturbances of order less than k. velocity and acceleration error provide basic requirements that must be satisfied by servomechanism systems depending upon their functional role. Prove that the position error constant may be expressed as Kp = zj m j=1 pj − k zj . (2. that is. The steady-state error results are robust in the sense that if the coefficients of the transfer function of the system vary then the error constants also vary but the zero steady-state error properties are preserved — as long as the system does not change its type. Exercise 2. Thus.2. (2. Steady state error behavior Table 2. Exercise 2.8) for the steady-state error of the response to a polynomial reference input.2. the system has zero position error.5) is expanded in factored form as Gcl (s ) = k (s + z1 )(s + z2 ) · · · (s + zm ) . for a type 1 system the velocity error constant may vary due to parametric uncertainty in the system.17) 2. 2.16) 1. Suppose that K p = ∞.3.15) This formula is identical to the expression (2. Suppose that the closed-loop transfer function Gcl of (2. (s + p1 )(s + p2 ) · · · (s + p n ) k n j=1 m j=1 (2.

(2. constant disturbances.20) s The rational function C0 remains to be selected. and indeed infinite at zero frequency. the larger the velocity constant Kv is. We observe that the further the closed-loop poles are from the origin.22) is zero at zero frequency and.2 (p. has the property t →∞ lim z (t ) = 0. Exercise 2. 2. which is given by L (s ) = S (s ) = 1 s 1 = . It moreover has a useful robustness property. by continuity.3. Disturbance attenuation is achieved by making the loop gain large. a compensator of this type is said to have integrating action.19) respectively. = L ( s ) 0 1 + L (s ) s + L0 ( s ) 1+ s (2. The relations (2. If the loop gain L (s ) has a factor 1/s then in the terminology of § 2. Classical Control System Design and 1 1 = Ka 2 m j=1 1 − z2 j n j=1 1 1 − 2 2 Kv pj . The velocity constant may also be increased by having closed-loop zeros close to the origin. Next differentiate ln Gcl (s ) twice with respect to s at s = 0. small at low frequencies. 64 . This is called constant disturbance rejection. The fact that S is zero at zero frequency implies that zero frequency disturbances. If P (s ) has no “natural” factor 1/s then this is accomplished by including the factor in the compensator transfer function C by choosing C0 (s ) . the sensitivity function S . 60) the system is of type 1. The compensator C (s ) may be considered as the series connection of a system with transfer function C0 (s ) and another with transfer function 1/s.3. Integral control Integral control is a remarkably effective classical technique to achieve low-frequency disturbance attenuation. Because a system with transfer function 1/s is an integrator.3 has integrating action and is stable. (2. Hint: Prove that 1/ Kv = − Gcl (0 ) and 1/ Ka = − Gcl (0 )/2. As a result. then its response z (from any initial condition) to a constant disturbance v(t ) = 1. (2.19) represent the connection between the error constants and the system response characteristics. that is.18) and (2.2.16).1 (Rejection of constant disturbances). Its response to a step reference input has a zero steady-state error. are completely eliminated. if L0 (s ) contains no factor s then the loop gain C (s ) = L0 ( s ) (2. by including a factor 1/s in the loop gain L (s ) = P (s )C (s ). with Gcl given by (2. Obviously. The loop gain may be made large at low frequencies.23) Hint: Use Laplace transforms and the final value property. 2. with the prime denoting the derivative. t ≥ 0.21) s is infinite at zero frequency and very large at low frequencies. These results originate from Truxal (1955). Make the last statement clearer by proving that if the closed-loop system of Fig.

In a feedback system with integrating action. Prove that if a type k closed-loop system as in Fig. The loop gain of a type k system contains a factor 1/sk . which are precisely the disturbances that the feedback system rejects.3. sTi (2.: Feedback loop The constant disturbance rejection property is robust with respect to plant and compensator perturbations as long as the perturbations are such that • the loop gain still contains the factor 1/s. also known as PI control. This principle states that if a feedback system is to reject certain disturbances then it should contain a model of the mechanism that generates the disturbances. Compensators with integrating action are easy to build. A system with transfer function 1/s is capable of generating a constant output with zero input. Integral control v e L z Figure 2. with compensator transfer function C (s ) = 1 . (2. Exercise 2. with k a positive integer. • Proportional and integral control. 2.3.3. sTi (2.24) with n any nonnegative integer such that n ≤ k − 1.2 (Type k control). “Rejects” means that for such disturbances limt→∞ z (t ) = 0.25) The single design parameter Ti (called the reset time) does not always allow achieving closed-loop stability or sufficient bandwidth.3 is stable then it rejects disturbances of the form v(t ) = tn . n! t ≥ 0.2. and • the closed-loop system remains stable. Their effectiveness in achieving low frequency disturbance attenuation and the robustness of this property make “integral control” a popular tool in practical control system design. with compensator transfer function C (s ) = g 1 + 1 . 65 .26) gives considerably more flexibility. Hence. The following variants are used: • Pure integral control. the series connection may be thought of as containing a model of the mechanism that generates constant disturbances. This notion has been generalized (Wonham 1979) to what is known as the internal model principle. the transfer function of the series connection of the plant and compensator contains a factor 1/s.

2. The Ziegler-Nichols rules (Ziegler and Nichols 1942) were developed under the assumption that the plant transfer function is of a well-damped low-pass type.or PID-controller according to the Ziegler-Nichols rules first a P-controller is connected to the plant. (2. Classical Control System Design • PID (proportional-integral-derivative) control. Then the parameters of the PID-controller are given by P-controller: PI-controller: PID-controller: g = 0.27) Td is the derivative time. In any case it would be undesirable because it greatly amplifies noise at high frequencies. Td = 0. The corresponding gain is denoted as as g0 and the period of oscillation as T0 . is based on a compensator transfer function of the type C (s ) = g sTd + 1 + 1 . PI. Assume that the plant in the block diagram of Fig. and its closed-loop step response to disturbances has a pak value of about 0. and that the plant may maintain any constant output w0 . 66 . v e u PIDcontroller nonlinear plant z Figure 2.28) The corresponding closed-loop system has a relative damping of about 0.4 has the reasonable property that for every constant input u0 there is a unique constant steady-state output w0 . Derivative action technically cannot be realized. and Emami-Naeini (1991)).125 T0.27) in practice is replaced with a “tame” differentiator sTd .85 T0. if the closed-loop system is stable then it rejects constant disturbances.4. Td = 0. if the disturbance is a constant signal v0 then the closed-loop system is in steady-state if and only if the error signal e is zero. Normally experimental fine tuning is needed to obtain the best results.4. The derivative action may help to speed up response but tends to make the closed-loop system less robust for high frequency perturbations. Ti = 0. 2. The controller gain g is increased until undamped oscillations occur.5 T0.25. g = 0. Ti = 0. PI or PID) has the property that it maintains a constant output u0 if and only if its input e is zero.: Nonlinear plant with integral control Integral control also works for nonlinear plants. The “integral controller” (of type I. Powell. sTi (2. Hence. Therefore.6 g0 . Ti = ∞. Standard PID controllers are commercially widely available. finally. g = 0. When tuning a P-.45 g0. Therefore the derivative term sTd in (2. In practice they are often tuned experimentally with the help of the rules developed by Ziegler and Nichols (see for instance Franklin. Td = 0.5 g0 . 1 + sT with T a small time constant.

The ×s mark the openloop poles Example 2. (2.3.33) .31) The roots of the denominator polynomial 1 1 s2 + s + θ T Ti (2. ζ0 = = ω0 = √ 2 ω0 2θ T Ti √ The best time response is obtained for ζ0 = 1 2 2.5 shows the loci of the roots as Ti varies from ∞ to 0 (see also § 2. √ It follows that ω0 = 1/ 200 ≈ 0. 67 .3.6 shows the Bode magnitude plot of the resulting sensitivity function.3 (Integral control of the cruise control system). but that the closed-loop bandwidth is not greater than the bandwidth of about 0.32) are the closed-loop poles. 87)). Integral control Ti ↓ 0 Im −1 θ −1 2θ 0 Re Figure 2.34) If T = θ = 10 [s] (corresponding to a cruising speed of 50% of the top speed) then Ti = 20 [s]. 3) has the linearized plant transfer function P (s ) = 1 T s+ 1 θ .2. Write the closed-loop denominator polynomial (2.7 (p. The linearized cruise control system of Example 1. It easily follows that √ 1 1/θ T Ti .32) as s 2 + 2 ζ 0 ω0 s + ω2 0 . The plot indicates that constant disturbance rejection is obtained as well as low-frequency disturbance attenuation. Figure 2.1 (p.1 [rad/s] of the open-loop system. T (2.07 [rad/s].2.: Root loci for the cruise control system. (2. or Ti = 2θ 2 . T and Ti are all positive these roots have negative real parts.30) then the loop gain and sensitivity functions are L ( s ) = P ( s )C ( s ) = 1 T Ti s (s + 1 θ) .29) If the system is controlled with pure integral control C (s ) = 1 sTi (2. 1 + L (s ) s + θ s + T Ti (2. with ω0 the resonance frequency and ζ0 the relative damping. Since θ. S (s ) = s (s + 1 ) 1 = 2 1 θ 1 . so that the closed-loop system is stable. Figure 2.5.

) 101 | S| 100 10−1 10−2 10−3 20 dB -20 dB -40 dB 100 20 dB 10−2 10−1 angular frequency [rad/s] Figure 2. Show that by PI control constant disturbance rejection and low-frequency disturbance attenuation may be achieved for the cruise control system with satisfactory gain and phase margins for any closed-loop bandwidth allowed by the plant capacity.: Magnitude plot of the sensitivity function of the cruise control system with integrating action Exercise 2. 84. the loop gain L = PC has a pole at 0. that is.2.6 (Integral control and constant input disturbances). Not infrequently disturbances enter the plant at the input.2. Prove that the feedback loop rejects every constant disturbance if and only if L−1 (0 ) = 0. 65) represents a stable MIMO feedback loop with rational loop gain matrix L such that also L−1 is a well-defined rational matrix function. 2. Exercise 2. p.35) Suppose that the system has integrating action.4 (PI control of the cruise control system). Suppose that Fig.7.3. Classical Control System Design Increasing Ti decreases the bandwidth. 1 + PC (2. Prove that constant disturbances at the input are rejected if and only the integrating action is localized in the compensator.5 (Integral control of a MIMO plant). 68 .: Plant with input disturbance Exercise 2.7. In this case the transfer function from the disturbances v to the control system output z is R= P . Bandwidth improvement without peaking may be achieved by introducing proportional gain. This adversely affects robustness. (See Exercise 2. 2.6. disturbance v C P y Figure 2.3 (p. Decreasing Ti beyond 2θ2 / T does not increase the bandwidth but makes the sensitivity function S peak.3.6.3. as schematically indicated in Fig.

φ = arg L ( jω). Individually. When the frequency and magnitude scales are logarithmic the combined set of the two graphs is called the Bode diagram of L.9 shows the Bode diagrams of a second-order frequency response function for different values of the relative damping. results in the steady-state sinusoidal output y (t ) = y ˆ sin (ωt + φ). This results in large magnitude | L ( jω)| at this frequency. the graphs are referred to as the Bode magnitude plot and the Bode phase plot.8. · · · .8. Frequency response plots 2. t ≥ 0. The magnitude of L follows by appropriately multiplying and dividing the lengths. Bode plots A frequency response function L ( jω). z1 . 2.4.4.: Frequency response determined from the pole-zero pattern s = jω the magnitude and phase of L ( jω) may be determined by measuring the vector lengths and angles from the pole-zero pattern as in Fig. Figure 2.1. · · · .36) The magnitude | L ( jω)| of the frequency response function L ( jω) is the gain at the frequency ω. may be plotted in two separate graphs. magnitude as a function of frequency. (s − p1 )(s − p2 ) · · · ( z − pn ) (2. t ≥ 0. Introduction In classical as in modern control engineering the graphical representation of frequency responses is an important tool in understanding and studying the dynamics of control systems. In this section we review three well-known ways of presenting frequency responses: Bode plots. A sinusoidal input u (t ) = u ˆ sin (ωt ). z2 .37) with k a constant. The phase of L follows by adding and subtracting the angles.4. Nyquist plots. 2. and Nichols plots. p2. (2.2. pn the poles of the system. Frequency response plots 2. Consider a stable linear time-invariant system with input u. A pole p i that is close to the imaginary axis leads to a short vector length s − pi at values of ω in the neighborhood of Im p i . and explains why a pole that is close to the imaginary axis leads to a resonance peak in the frequency response.4. ω ∈ R. Write L (s ) = k (s − z1 )(s − z2 ) · · · (s − zm ) . and p1 . 69 . Its argument arg L ( jω) is the phase shift. The amplitude y ˆ and phase φ of the output are given by y ˆ = | L ( jω)| u ˆ. Then for any p1 jω resonance peak | L| p3 pole zero z1 p2 0 Re 0 Im p1 ω Im p1 Figure 2. and phase as a function of frequency. zm the zeros.2. output y and transfer function L.

The phase moves from arg(ω0 ) (0◦ if ω0 is positive) at low frequencies to 90◦ (π/2 rad) at high frequencies. Write the frequency response function in the factored form L ( jω) = k ( jω − z1 )( jω − z2 ) · · · ( jω − zm ) .10 shows the amplitude and phase curves for the first order factor and their asymptotes (for ω0 positive).41) (2.42) The asymptotes for the doubly logarithmic amplitude plot are straight lines.01 ζ0 = 1 ζ0 = . If the amplitude is plotted in decibels then the slope is 20 dB/decade.39) and (2.: Bode diagram of the transfer function ω2 0 /( s + 2ζ0 ω0 + ω0 ) for various values of ζ0 The construction of Bode diagrams for rational transfer functions follows simple steps.2. 70 . The amplitude asymptotes intersect at the frequency |ω0 |.38) It follows for the logarithm of the amplitude and for the phase m n log | L ( jω)| = log |k| + i=1 m log | jω − zi | − i=1 n log | jω − p i |.1 ζ0 = 1 40 dB 0 dB -40 dB -80 dB -100 dB magnitude 10 10 -2 10-4 10 -6 10 -8 10 -4 0 10 -2 10 0 ζ0 = . (2. (2.40) may be drawn without computation.1 10 2 10 4 -120 dB phase [deg] -45 -90 -145 -180 10 -4 ζ0 = 100 ζ0 = 10 10 -2 10 0 10 2 10 4 normalized angular frequency ω/ω0 2 2 Figure 2. (2. for ω | ω 0 |. For a first-order factor s + ω0 we have log | jω + ω0 | ≈ arg( jω + ω0 ) ≈ log |ω0 | log ω arg(ω0 ) 90◦ for 0 ≤ ω |ω0 |. ( jω − p1 )( jω − p2 ) · · · ( jω − pn ) (2.40) The asymptotes of the individual terms of (2. The high frequency asymptote has slope 1 decade/decade. Classical Control System Design 10 2 0 ζ0 = 100 ζ0 = 10 ζ0 = . for 0 ≤ ω |ω0 |.01 ζ0 = . for ω |ω0 |.9. Figure 2.39) arg L ( jω) = arg k + i=1 arg( jω − zi ) − i=1 arg( jω − pi ). The low frequency asymptote is a constant.

Consider the factor s2 + 2ζ0 ω0 s + ω2 0 . Factors corresponding to complex conjugate pairs of poles or zeros are best combined to a second-order factor of the form s 2 + 2 ζ 0 ω0 s + ω2 0.10.and second-order factors. The low frequency amplitude asymptote is again a constant.: Bode amplitude and phase plots for the factor s + ω0 . for ω |ω0 |. As shown in Fig. Asymptotically. Bode plots of high-order transfer functions. are obtained by adding log magnitude and phase contributions from first.12 the gain and phase margins of a stable feedback loop may easily be identified from the Bode diagram of the loop gain frequency response function.44) (2. 2.11 shows amplitude and phase plots for different values of the relative damping ζ0 (with ω0 positive). The “asymptotic Bode plots” of first. The asymptotes intersect at the frequency |ω0 |. positive number ω0 is the characteristic frequency and ζ0 the relative damping.2. for ω | ω 0 |. Frequency response plots 60 dB magnitude 10 2 40 dB 20 dB/decade 10 0 10 -3 20 dB 0 dB 10 -2 10 -1 10 0 10 1 10 2 10 3 90 phase [deg] 0 10 -3 10 -2 10 -1 10 0 10 1 10 2 10 3 normalized angular frequency ω/ω0 Figure 2. In the Bode magnitude plot the high frequency amplitude asymptote is a straight line with slope 2 decades/decade (40 dB/decade).and high-frequency asymptotes.and second-order factors follow by replacing the low frequency values of magnitude and phase by the low frequency asymptotes at frequencies below the break frequency.and second-order factors that make up the transfer function.45) (2. in particular asymptotic Bode plots.43) for 0 ≤ ω |ω0 |. Exercise 2.4.1 (Complex conjugate pole pair).4. and similarly using the high-frequency asymptotes above the break frequency. The phase goes from 0◦ at low frequencies to 180◦ at high frequencies. (2. Figure 2. log |( jω)2 + 2ζ0 ω0 ( jω) + ω2 0| ≈ arg(( jω)2 + 2ζ0 ω0 ( jω) + ω2 0) ≈ 2 log |ω0 | 2 log ω 0 180◦ for 0 ≤ ω |ω0 |. Dashed: low. High-order asymptotic Bode plots follow by adding and subtracting the asymptotic plots of the first. The 71 .

05 0 10-2 10-1 100 101 102 normalized angular frequency ω/ω0 Figure 2.13 the distance of the complex conjugate pole pair to the origin is ω0 . 34) to conclude from the Bode plot that the loop gain is (probably) minimum-phase.and high-frequency asymptotes 1. 1. Fit the Bode diagram as best as possible by a rational pole-zero representation.14. stable.46) For |ζ0 | ≥ 1 the roots are real. Use Bode’s gain-phase relationship (§ 1.and high-frequency asymptotes inferred from the plot.5 40 dB/decade -60 dB 10-1 100 101 102 phase [deg] ζ0 = 1 90 ζ0 = . Slow dynamics that change the number of encirclements of the Nyquist plot invalidate the result. Classical Control System Design 104 80 dB 60 dB 0 dB ζ0 = . Prove that for |ζ0 | < 1 the roots of the factor are the complex conjugate pair 2 .2 ζ0 = . At which frequency has the amplitude plot of the factor ( jω)2 + 2ζ0 ω0 ( jω) + ω2 0 . The conclusion of (a) is correct as long as outside the frequency range considered the frequency response behaves in agreement with the low.: Bode plots for a second-order factor s2 + 2ζ0 ω0 s + ω2 0 for different values of the relative damping ζ0 .6. 2. 3. Assume that 0 < ζ0 < 1. Prove that in the diagram of Fig. hence. 2. ω0 − ζ 0 ± j 1 − ζ 0 (2. 2. has the amplitude plot of its reciprocal its maximum)? Note that this frequency is not precisely ω0 . This is especially important for the low-frequency behavior. its minimum (and.11. Exercise 2. hence.2 ζ0 = .5 ζ0 = . Consider the loop gain L whose Bode dia- gram is given in Fig. 72 .2 (Transfer function and Bode plot).05 magnitude 10 2 10 0 10-2 10-2 180 ζ0 = 1 ζ0 = .2. ω ∈ R.4. 2. and that the angle φ equals arccos ζ0 . Dashed: low. Next argue that the corresponding closed-loop system is stable. and. p.

2.4. Frequency response plots

Loop gain gain margin (dB)

0 dB

magnitude

ω1 0◦
phase

ω2

frequency (log scale)

−180

phase margin

Figure 2.12.: Gain and phase margins from the Bode plot of the loop gain Im j ω0 1 − ζ 2

φ −ζω0
0

Re

Figure 2.13.: Complex conjugate root pair of the factor s2 + 2ζ0 ω0 + ω2 0
50

Gain [dB]

0 -50 -100 -2 10 -90

10

-1

10

0

10

1

10

2

Phase [deg]

-120 -150 -180 -2 10

10

-1

10

0

10

1

10

2

Frequency [rad/s] Figure 2.14.: Bode diagram of a loop gain L

73

2. Classical Control System Design

2.4.3. Nyquist plots
0
ω/ω0 = 10 ω/ω0 = 1.4 ζ0 = 1 ζ0 = 10 ω/ω0 = .5

-0.5

ω/ω0 = .7

-1 Im -1.5

ζ 0 = .5

-2
ζ 0 = .2

-2.5

ω/ω0 = 1

-3

-1.5

-1

-0.5

0 Re

0.5

1

1.5

2

2 2 Figure 2.15.: Nyquist plots of the transfer function ω2 0 /( s + 2ζ0 ω0 s + ω0 ) for different values of the relative damping ζ0

In § 1.3 (p. 11) we already encountered the Nyquist plot, which is a polar plot of the frequency response function with the frequency ω as parameter. If frequency is not plotted along the locus — a service that some packages fail to provide — then the Nyquist plot is less informative than the Bode diagram. Figure 2.15 shows the Nyquist plots of the second-order frequency response functions of Fig. 2.9. Normally the Nyquist plot is only sketched for 0 ≤ ω < ∞. The plot for negative frequencies follows by mirroring with respect to the real axis. If L is strictly proper then for ω → ∞ the Nyquist plot approaches the origin. Write L in terms of its poles and zeros as in (2.37). Then asymptotically L ( jω) ≈ k ( jω)n−m for ω → ∞. (2.47)

If k is positive then the Nyquist plot approaches the origin at an angle −(n − m ) × 90◦ . The number n − m is called the pole-zero excess or relative degree of the system. In control systems of type k the loop gain L has a pole of order k at the origin. Hence, at low frequencies the loop frequency response asymptotically behaves as L ( jω) ≈ c ( jω)k for ω ↓ 0, (2.48)

with c a real constant. If k = 0 then for ω ↓ 0 the Nyquist plot of L approaches a point on the real axis. If k > 0 and c is positive then the Nyquist plot goes to infinity at an angle −k × 90◦ . Figure 2.16 illustrates this.

74

2.4. Frequency response plots

2 1.5 1 0.5

ω

ω

k=2

k=3 ω L (s ) = 1 s k (1 + s )

Im

0 -0.5 -1

k=1

k=0

-1.5 -2 -2 -1.5 -1

ω
-0.5 0 0.5 1 1.5 2

Re Figure 2.16.: Nyquist plots of the loop gain for different values of system type
Exercise 2.4.3 (Nyquist plots). Prove the following observations.

1. The shape of the Nyquist plot of L (s ) = 1/(1 + sT ) is a circle whose center and radius are independent of T . 2. The shape of the Nyquist plot of L (s ) = 1/(1 + sT1 )(1 + sT2 ) only depends on the ratio T1 / T2 . The shape is the same for T1 / T2 = α and T2 / T1 = α.
2 2 3. The shape of the Nyquist plot of L (s ) = ω2 0 /(ω0 + 2ζω0 s + s ) is independent of ω0 .

2.4.4.

M - and N -circles

Consider a simple unit feedback loop with loop gain L as in Fig. 2.17. The closed-loop transfer function of the system equals the complementary sensitivity function H=T= L . 1+ L (2.49)

M - and N -circles are a graphical tool — typical for the classical control era — to determine the closed-loop frequency response function from the Nyquist plot of the loop gain L. r L y

Figure 2.17.: Unit feedback loop

75

2. Classical Control System Design

An M -circle is the locus of points z in the complex plane where the magnitude of the complex number z 1+z is constant and equal to M . An M -circle has center and radius center M2 ,0 , 1 − M2 radius M . 1 − M2 (2.51) (2.50)

An N -circle is the locus of points in the complex plane where the argument of the number (2.50) is constant and equal to arctan N . An N -circle has center and radius center 1 1 , − , 2 2N radius 1 1 1+ 2. 2 N (2.52)

Figure 2.18 shows the arrangement of M - and N -circles in the complex plane. The magnitude of the closed-loop frequency response and complementary sensitivity function T may be found from the points of intersection of the Nyquist plot of the loop gain L with the M -circles. Likewise, the phase of T follows from the intersections with the N -circles. M -circles
1dB .5dB -.5dB -1dB -2dB

N -circles
3 2

3
2dB

2
3dB -3dB

30 15
o

o

60

o

1

6dB

-6dB

1

Im

0 -1 -2 -3 -4

Im

5

o

0
-5

o

-1 -2 -3 -3 -2 -1 0 1 2 -4 -3

-15

o o

-30

-60

o

-2

-1

0

1

2

Re

Re

Figure 2.18.: M - circles (left) and N -circles (right) Figure 2.18 includes a typical Nyquist plot of the loop gain L. These are some of the features of the closed-loop response that are obtained by inspection of the intersections with the M -circles: • The height of the resonance peak is the maximum value of M encountered along the Nyquist plot. • The resonance frequency ωm is the frequency where this maximum occurs. • The bandwidth is the frequency at which the Nyquist plot intersects the 0.707 circle (the −3 dB circle).

76

2.4. Frequency response plots

These and other observations provide useful indications how to modify and shape the loop frequency response to improve the closed-loop properties. The M - and N -loci are more often included in Nichols plots (see the next subsection) than in Nyquist plots.
Exercise 2.4.4 ( M - and N -circles). Verify the formulas (2.51) and (2.52) for the centers and radii of the M - and N -circles.

2.4.5. Nichols plots
The linear scales of the Nyquist plot sometimes obscure the large range of values over which the magnitude of the loop gain varies. Also, the effect of changing the compensator frequency response function C or the plant frequency response function P on the Nyquist plot of the loop gain L = PC cannot always easily be predicted. Both difficulties may be overcome by plotting the loop gain in the form of a Nichols plot (James, Nichols, and Philips 1947). A Nichols plot is obtained by plotting the log magnitude of the loop gain frequency response function versus its phase. In these coordinates, the M - en N -circles transform to M - and N - loci. The phase–log magnitude plane together with a set of M - and N -loci is called a Nichols chart. In Fig. 2.19 Nichols plots are given of the second-order frequency response functions whose Bode diagrams and Nyquist plots are shown in Figs. 2.9 (p. 70) and 2.15 (p. 74), respectively. In a Nichols diagram, gain change corresponds to a vertical shift and phase change to a horizontal shift. This makes it easy to assess the effect of changes of the compensator frequency response function C or the plant frequency response function P on the loop gain L = PC .
Exercise 2.4.5 (Gain and phase margins in the Nichols plot). Explain how the gain margin and phase margin of a stable feedback loop may be identified from the Nichols plot. Exercise 2.4.6 (Lag-lead compensator). Consider a compensator with the second-order trans-

fer function C (s ) = (1 + sT1 )(1 + sT2 ) . (1 + sT1 )(1 + sT2 ) + sT12 (2.53)

T1 , T2 and T12 are time constants. The corresponding frequency response function is C ( jω) = (1 − ω2 T1 T2 ) + jω( T1 + T2 ) , (1 − ω2 T1 T2 ) + jω( T1 + T2 + T12 ) ω ∈ R. (2.54)

By a proper choice of the time constants the network acts as a lag network (that is, subtracts phase) in the lower frequency range and as a lead network (that is, adds phase) in the higher frequency range. Inspection of the frequency response function (2.54) shows that numerator and denominator √ simultaneously become purely imaginary at the frequency ω = 1/ T1 T2 . At this frequency the frequency response function is real. This frequency is the point where the character of the network changes from lag to lead, and where the magnitude of the frequency response is minimal. 1. Plot the Bode, Nyquist, and Nichols diagrams of this frequency response function. 2. Prove that the Nyquist plot of C has the shape of a circle in the right half of the complex plane with its center on the real axis. Since C (0 ) = C ( j∞ ) = 1 the plot begins and ends in the point 1.

77

2. Classical Control System Design

.25 dB

20
0 dB -1 dB

.5 dB 1 dB

3 dB

10
-3 dB 6 dB

open-loop gain [dB]

0

-6 dB

ω ω0 ω ω0 ω ω0
-20 dB

= .2

-10

-12 dB

= .5

=1

-20

ω ω0

=2

-30

ζ0 = .01 ζ 0 = .1
-40 dB

ω ω0

=5

ζ0 = 1 ζ0 = 2
-180 -90 0

-40 -360

-270

open-loop phase [deg]
2 2 Figure 2.19.: Nichols plots of the transfer function ω2 0 /( s + 2ζ0 ω0 s + ω0 ) for different values of ζ0

78

These are the basic requirements for a wellr C P y Figure 2. It is defined as 180◦ plus the phase angle φ1 of the loop frequency response L at the frequency ω1 where the gain is unity. evaluated at the frequency ωπ at which the phase angle is −180 degrees.20. The transient response shows satisfactory damping. 4. Figures 2. 79 . It is defined as the reciprocal of the magnitude of the loop frequency response L. The frequency ωπ is called the phase crossover frequency. 3. 73) and 2.20. These basic requirements may be further specified in terms of both a number of frequency-domain specifications and certain time-domain specifications.2 (p. 2.4 — also measures relative stability. The system is sufficiently insensitive to external disturbances and variations of internal parameters. Phase margin.: Basic feedback system designed control system: 1. 60). Design goals and criteria For SISO systems we have the following partial list of typical classical performance specifications.12 (p. Classical control system design 2. The transient response satisfies accuracy requirements.5. The phase margin — again see § 1.5. 2. The frequency ω1 is called the gain crossover frequency.21 illustrate several important frequency-domain quantities: | H| 0 dB -3 dB M ωr B frequency Figure 2.5. Classical control system design 2. The transient response is sufficiently fast. The gain margin — see § 1.4 (p.1. Consider the feedback loop of Fig. 20) — measures relative stability.2.: Frequency-domain performance quantities Gain margin. often expressed in terms of the error constants of § 2.21.

3 (p. The rise time expresses the “sharpness” of the leading edge of the response. M and the phase at the frequency B on the other. 80 . Rise time Tr . Cruise control system Evaluate the various time and frequency performance indicators for the integral cruise control system design of Example 2. Percentage overshoot PO. The bandwidth B measures the speed of response in frequency-domain terms. The FVE is the steady-state position error.5. pp.2. Var- ious definitions exist. Horowitz (1963. Exercise 2. The settling time is often defined as time required for the response to a step input to reach and remain within a specified percentage (typically 2 or 5%) of its final value. These specifications can not be easily expressed in general terms. occurring at the resonance frequency ωr . delay time measures the total average delay between reference and output. Classical Control System Design Bandwidth. One defines TR as the time needed to rise from 10% to 90% of the final value. It may for instance be defined as time where the response is at 50% of the step amplitude. This quantity expresses the maximum difference (in % of the steady-state value) between the transient and the steady-state response to a step input.: Time-domain quantities Delay time Td . Settling time Ts . Final value of error FVE.707 (3 dB) of its value at zero frequency.22. Relative stability may also be measured in terms of the peak value M of the magnitude of the closed-loop frequency response H (in dB). Tr .22 shows five important time-domain quantities that may be used for performance specifications for the response of the control system output to a step in the reference input: Step response PO 100 % 90 % FVE Td 50 % 10 % 0% Tr Ts time Figure 2. They should be considered individually for each application. It includes no specifications of the disturbance attenuating properties. Ts and the overshoot on the one hand and the frequency domain parameters B. The author advises to use them with caution.1.3. 190–194) lists a number of quasi-empirical relations between the time domain parameters Td . It is defined as the range of frequencies over which√ the closed-loop frequency response H has a magnitude that is at least within a factor 1 2 2 = 0. Resonance peak. 67). This list is not exhaustive. Figure 2.

Compensator design In the classical control engineering era the design of feedback compensation to a great extent relied on trial-and-error procedures. 2. single-output systems.6. Introduction In this section we discuss the classical techniques of lead. ω ∈ R. In § 2.2. 2.6. Nyquist or Nichols diagram of the loop frequency response of the compensated system. The design sequence summarizes the main ideas of classical control theory developed in the period 1940–1960. lag. Choose a compensator structure — for instance by introducing integrating action or lag compensation — that provides the required steady-state error characteristics of the compensated system. In this section we consider the basic goals that may be pursued from a classical point of view. It is presented in terms of shaping loop transfer functions for single-input. Lead. Lead. Use lag.2. 92) is devoted to Horowitz’s Quantitative Feedback Theory (Horowitz and Sidi 1972).and N -circles are useful tools. An extensive account of these techniques is given by Dorf (1992). • If the specifications are not met then determine the adjustment of the loop gain frequency response function that is required. • Consider the desired steady-state error properties of the system (see § 2. lag-lead or other compensation to realize the necessary modification of the loop frequency response function. and lag-lead compensation 2.8 (p. M . Section 2.9 (p.2. Even if the 81 . • Investigate the shape of the frequency response P ( jω). • Plot the Bode.6.6 (p. which allows to impose and satisfy quantitative bounds on the robustness of the feedback system. Adjust the gain to obtain a desired degree of stability of the system. Lead compensation Making the loop gain L large at low frequencies — by introducing integrating action or making the static gain large — may result in a Nyquist plot that shows unstable behavior. Experience and engineering sense were as important as a thorough theoretical understanding of the tools that were employed. 81) we consider techniques for loop shaping using simple controller structures — lead. In the classical view the following series of steps leads to a successful control system design: • Determine the plant transfer function P based on a (linearized) model of the plant.1. 89) we discuss the Guillemin-Truxal design procedure. The graphic tools essential to go through these steps that were developed in former time now are integrated in computer aided design environments. and lead-lag compensators. p.6. In § 2. The gain and phase margins are measures for the success of the design. lag.5. to understand the properties of the system fully. lag. 60).2. and lag-lead compensation 2. lag. The Bode gain-phase relation sets the limits. lead. and lag-lead compensation.

closed-loop system is stable the gain and phase margins may be unacceptably small, resulting in nearly unstable, oscillatory behavior. Figure 2.23 shows an instance of this. To obtain satisfactory stability we may reshape the loop gain in such a way that its Nyquist plot remains outside an M-circle that guarantees sufficient closed-loop damping. A minimal value of M = 1.4 (3 dB) might be a useful choice. Keeping the Nyquist plot away from the critical point −1 has the effect of improving the transient response.

Figure 2.23.: Nyquist plot of uncompensated and compensated plant

The required phase advance in the resonance frequency region may be obtained by utilizing a phase-advance network in series with the plant. The network may be of first order with frequency response function

C(jω) = α (1 + jωT)/(1 + jωαT).    (2.55)

For 0 < α < 1 we obtain a lead compensator and for α > 1 a lag compensator. In the first case the compensator creates phase advance, in the second it creates extra phase lag.

Over the frequency interval (1/T, 1/αT) the phase advance compensator has the character of a differentiating network. By making α sufficiently small the compensator may be given the character of a differentiator over a large enough frequency range. Phase lead compensation, also used in PD control, increases the bandwidth and, hence, makes the closed-loop system faster. Phase lead compensation results in an increase of the resonance frequency. If very small values of α are used then the danger of undesired amplification of measurement noise in the loop exists. The bandwidth increase associated with making α small may aggravate the effect of high frequency parasitic dynamics in the loop.

The characteristics of phase-lead compensation are reviewed in Table 2.2. An application of lead compensation is described in Example 2.6.3 (p. 85). Figure 2.24 shows the Bode diagrams.
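The character of the phase-advance network (2.55) is easy to visualize numerically. The sketch below (MATLAB Control Toolbox assumed; the values of α and the normalized time constant T are illustrative choices, not taken from the text) plots the compensator for a few values of α and evaluates the frequency of maximum phase lead discussed in Exercise 2.6.1 below.

    % First-order lead compensator C(s) = alpha*(1 + s*T)/(1 + s*alpha*T), 0 < alpha < 1
    s = tf('s');
    T = 1;                                   % normalized time constant (assumed)
    for alpha = [0.5 0.1 0.01]
        C = alpha*(1 + s*T)/(1 + s*alpha*T);
        bode(C), hold on                     % phase lead between 1/T and 1/(alpha*T)
    end
    % Peak phase lead at the normalized frequency 1/sqrt(alpha) (Exercise 2.6.1)
    alpha  = 0.1;
    wpeak  = 1/(T*sqrt(alpha));
    phimax = atan(0.5*(1/sqrt(alpha) - sqrt(alpha)));   % radians
    fprintf('peak lead %.1f deg at w = %.2f rad/s\n', phimax*180/pi, wpeak)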

Figure 2.24.: Log magnitude and phase of lead and lag compensators C(s) = α(1 + sT)/(1 + sαT)

Exercise 2.6.1 (Specifics of the first-order lead or lag compensator). Inspection of Fig. 2.24 shows that the maximum amount of phase lead or lag that may be obtained with the compensator (2.55) is determined by α. Also the width of the frequency window over which significant phase lead or lag is achieved depends on α. Finally, the low frequency gain loss (for lead compensation) or gain boost (for lag compensation) depend on α. Figure 2.25 shows plots of the peak phase lead or lag, the window width, and the low-frequency gain loss or boost.

1. Prove that the peak phase lead or lag occurs at the normalized frequency

ωpeak T = 1/√α,    (2.56)

and that the peak phase lead or lag equals

φmax = arctan ½(1/√α − √α).    (2.57)

2. Show that the low frequency gain loss or boost is 20 log10 α dB.

3. Show that the width of the window over which phase lead or lag is effected is roughly log10 α decades.

First-order phase advance compensation is not effective against resonant modes in the plant corresponding to second order dynamics with low damping. The rapid change of phase from 0 to −180 degrees caused by lightly damped second-order dynamics cannot adequately be countered. This requires compensation by a second order filter (called a notch filter) with zeros near the lightly damped poles and stable poles on the real line at a considerable distance from the imaginary axis.
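A minimal sketch of such a notch filter follows (MATLAB Control Toolbox assumed). The lightly damped resonance at ω = 1 with ζ = 0.05 is an assumed illustration, not a plant from the text; the filter places its zeros on the resonant poles and its two real poles well into the left-half plane, with unit static gain.

    % Assumed lightly damped plant resonance
    s    = tf('s');
    w0   = 1;  zeta = 0.05;
    P    = w0^2/(s^2 + 2*zeta*w0*s + w0^2);

    % Notch: zeros near the lightly damped poles, real poles far to the left, unit static gain
    Cnotch = (s^2 + 2*zeta*w0*s + w0^2)/(s + 5*w0)^2 * (5*w0)^2/w0^2;
    bode(P, P*Cnotch)        % the resonance peak is removed from the compensated loop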

Figure 2.25.: Peak phase lead or lag

2.6.3. Lag compensation

The loop gain may be increased at low frequencies by a lag compensator. If the time constant T in

C(jω) = α (1 + jωT)/(1 + jωαT)    (2.58)

is chosen such that 1/T is much greater than the resonance frequency ωR of the loop gain then there is hardly any additional phase lag in the crossover region. In the limit α → ∞ the compensator frequency response function becomes

C(jω) = 1 + 1/(jωT).    (2.59)

This is a compensator with proportional and integral action.

Increasing the low frequency gain by lag compensation reduces the steady-state errors. It also has the effect of decreasing the bandwidth, and, hence, of making the closed-loop system slower. On the other hand, the effect of high frequency measurement noise is reduced. Table 2.2 reviews and summarizes the characteristics of lead and lag compensation. Lag compensation is fully compatible with phase-lead compensation as the two compensations affect frequency regions that are widely apart.

An example of phase lag compensation is the integral compensation scheme for the cruise control system of Example 2.3.3 (p. 67). The first-order plant requires a large gain boost at low frequencies for good steady-state accuracy. This gain is provided by integral control. As we also saw in Example 2.3.3 (p. 67) pure integral control limits the bandwidth. To speed up the response additional phase lead compensation is needed. To accomplish this modify the pure integral compensation scheme to the PI compensator

C(s) = k (1 + sTi)/(sTi).    (2.60)

This provides integrating action up to the frequency 1/Ti. At higher frequencies the associated 90° phase lag vanishes. A suitable choice for the frequency 1/Ti is, say, half a decade below the desired bandwidth.
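The rule of thumb for 1/Ti is easy to try out numerically. The sketch below (MATLAB Control Toolbox assumed) uses the first-order plant of the cruise control example, but the desired bandwidth of 0.3 rad/s is only an assumed value for illustration.

    s     = tf('s');
    T     = 10;  theta = 10;
    P     = (1/T)/(s + 1/theta);          % first-order plant as in the cruise control example
    wB    = 0.3;                          % desired bandwidth [rad/s] (assumed)
    Ti    = sqrt(10)/wB;                  % 1/Ti half a decade below the desired bandwidth
    C     = (1 + s*Ti)/(s*Ti);            % PI compensator, gain k = 1 to start with
    k     = 1/abs(evalfr(P*C, 1j*wB));    % adjust k so that the crossover is at wB
    margin(k*C*P)                         % inspect the resulting gain and phase margins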

Exercise 2.6.2 (Phase lag compensation). Suppose that the desired bandwidth is 0.3 [rad/s]. Select Ti as recommended, and choose the gain k such that the loop gain crossover frequency is 0.3 [rad/s]. Check whether the resulting design is satisfactory.

Table 2.2.: Characteristics of lead and lag compensation design

Phase-lead compensation
  Method:         Addition of phase-lead angle near the crossover frequency
  Purpose:        Improve the phase margin and the transient response
  Applications:   When a fast transient response is desired
  Results:        Increases the system bandwidth
  Advantages:     Yields desired response; speeds dynamic response
  Disadvantages:  Increases the bandwidth and thus the susceptibility to measurement noise
  Not applicable: If the phase decreases rapidly near the crossover frequency

Phase-lag compensation
  Method:         Increase the gain at low frequencies
  Purpose:        Increase the error constants while maintaining the phase margin and transient response properties
  Applications:   When error constants are specified
  Results:        Decreases the system bandwidth
  Advantages:     Suppresses high frequency noise; reduces the steady-state error
  Disadvantages:  Slows down the transient response
  Not applicable: If no low frequency range exists where the phase is equal to the desired phase margin

2.6.4. Lag-lead compensation

We illustrate the design of a lag-lead compensator by an example. Note the successive design steps.

Example 2.6.3 (Lag-lead compensator). Consider the simple second-order plant with transfer function

P(s) = ω0²/(s² + 2ζ0ω0 s + ω0²),    (2.61)

with ω0 = 0.1 [rad/s] and ζ0 = 0.2. The system is poorly damped. The design specifications are

• Constant disturbance rejection by integral action.
• A closed-loop bandwidth of 1 [rad/s].
• Satisfactory gain and phase margins.

Step 1: Lag compensation. To achieve integral control we introduce lag compensation of the form

C0(s) = k0 (1 + sTi)/(sTi).    (2.62)

The phase lag compensation may be extended to 1 decade below the desired bandwidth by choosing 1/Ti = 0.1 [rad/s], that is, Ti = 10 [s]. Letting k0 = 98.6 makes sure that the crossover frequency of the loop gain is 1 [rad/s]. Figure 2.26 shows the Bode diagram of the resulting loop gain. Inspection reveals a negative phase margin, so that the closed-loop system is unstable.

Figure 2.26.: Bode diagrams of the loop gain

Step 2: Phase lead compensation. We stabilize the closed loop by lead compensation of the form

C1(s) = k1 α (1 + sT)/(1 + sαT).    (2.63)

Phase advance is needed in the frequency region between, say, 0.1 and 10 [rad/s]. Inspection of Fig. 2.24 or 2.25 and some experimenting leads to the choice α = 0.1 and T = 3 [s]. Setting k1 = 3.3 makes the crossover frequency equal to 1 [rad/s]. The corresponding loop gain is shown in Fig. 2.26. The closed-loop system is stable with infinite gain margin (because the phase never goes below −180°) and a phase margin of more than 50°.

Step 3: High-frequency roll-off. For measurement noise reduction and high-frequency robustness we provide high-frequency roll-off of the compensator by including additional lag compensation of the form

C2(s) = ω1²/(s² + 2ζ1ω1 s + ω1²).    (2.64)

Setting ω1 = 10 [rad/s] and ζ1 = 0.5 makes the roll-off set in at 10 [rad/s] without unnecessary peaking and without appreciable effect in the crossover region. The resulting Bode diagram of the loop gain is included in Fig. 2.26. The gain margin is now about 17 dB and the phase margin about 45°. They are quite adequate.

Figure 2.27 shows the Bode magnitude plot of the closed-loop frequency response function and of the closed-loop step response. Inspection of Fig. 2.27 shows the extra roll-off of the closed-loop frequency response. Enhancing high-frequency roll-off slightly increases the overshoot of the closed-loop step response.
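The three design steps can be regenerated numerically with the values quoted in the example. The fragment below is a sketch for reproducing Fig. 2.26 and Fig. 2.27 (MATLAB Control Toolbox assumed); it is not part of the original design text.

    s  = tf('s');
    w0 = 0.1;  zeta0 = 0.2;
    P  = w0^2/(s^2 + 2*zeta0*w0*s + w0^2);      % poorly damped second-order plant (2.61)

    % Step 1: lag (integral) compensation, crossover at 1 rad/s
    k0 = 98.6;  Ti = 10;
    C0 = k0*(1 + s*Ti)/(s*Ti);

    % Step 2: phase lead between 0.1 and 10 rad/s
    k1 = 3.3;  alpha = 0.1;  T = 3;
    C1 = k1*alpha*(1 + s*T)/(1 + s*alpha*T);

    % Step 3: high-frequency roll-off
    w1 = 10;  zeta1 = 0.5;
    C2 = w1^2/(s^2 + 2*zeta1*w1*s + w1^2);

    margin(P*C0)            % negative phase margin: unstable with lag compensation only
    margin(P*C0*C1)         % infinite gain margin, phase margin above 50 degrees
    margin(P*C0*C1*C2)      % about 17 dB gain margin and 45 degrees phase margin
    step(feedback(P*C0*C1, 1), feedback(P*C0*C1*C2, 1))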

Figure 2.27.: Closed-loop frequency and step responses

2.7. The root locus approach to parameter selection

2.7.1. Introduction

The root locus technique was conceived by Evans (1950) — see also Evans (1954). It consists of plotting the loci of the roots of the characteristic equation of the closed-loop system as a function of a proportional gain factor in the loop transfer function. This graphical approach yields a clear picture of the stability properties of the system as a function of the gain. It leads to a design decision about the value of the gain.

The root locus method is not a complete design procedure. First the controller structure, including its pole and zero locations, should be chosen. The root locus method then allows to adjust the gain. Inspection of the loci often provides useful indications how to revise the choice of the compensator poles and zeros.

Let the loop transfer function of a feedback system be given in the form

L(s) = k (s − z1)(s − z2) ··· (s − zm) / ((s − p1)(s − p2) ··· (s − pn)).    (2.65)

For physically realizable systems the loop transfer function L is proper, that is, m ≤ n. The constant k is the gain. The roots z1, z2, ..., zm of the numerator polynomial are the open-loop zeros of the system. The roots p1, p2, ..., pn of the denominator polynomial are the open-loop poles.

2.7.2. Root loci rules

We review the basic construction rules for root loci.

Computer calculations based on subroutines for the calculation of the roots of a polynomial are commonly used to provide accurate plots of the root loci. The graphical rules that follow provide useful insight into the general properties of root loci.

Summary 2.7.1 (Basic construction rules for root loci).

1. The closed-loop poles are those values of s for which 1 + L(s) = 0, or, equivalently,

(s − p1)(s − p2) ··· (s − pn) + k (s − z1)(s − z2) ··· (s − zm) = 0.    (2.66)

Under the assumption that m ≤ n there are precisely n closed-loop poles. The root loci are the loci of the closed-loop poles as k varies from 0 to +∞.

2. For k = 0 the closed-loop poles coincide with the open-loop poles p1, p2, ..., pn. If k → ∞ then m of the closed-loop poles approach the (finite) open-loop zeros z1, z2, ..., zm. The remaining n − m closed-loop poles tend to infinity.

3. There are as many locus branches as there are open-loop poles. Each branch starts for k = 0 at an open-loop pole location and ends for k = ∞ at an open-loop zero (which thus may be at infinity).

4. As we consider real-rational functions only the loci are symmetrical about the real axis.

5. Those sections of the real axis located to the left of an odd total number of open-loop poles and zeros on this axis belong to a locus.

6. If m < n then n − m branches approach infinity along straight line asymptotes. The directions of the asymptotes are given by the angles

αi = (2i + 1)π/(n − m) [rad],  i = 0, 1, ..., n − m − 1.    (2.67)

Thus, for n − m = 1 we have α = π, for n − m = 2 we have α = ±π/2, and so on. The angles are evenly distributed over [0, 2π].

7. All asymptotes intersect the real axis at a single point at a distance s0 from the origin, with

s0 = [(sum of open-loop poles) − (sum of open-loop zeros)]/(n − m).    (2.68)

8. There may exist points where a locus breaks away from the real axis and points where a locus arrives on the real axis. Breakaway points occur only if the part of the real axis located between two open-loop poles belongs to a locus. Arrival points occur only if the part of the real axis located between two open-loop zeros belongs to a locus.

Figure 2.28 illustrates several typical root loci plots.

Figure 2.28.: Examples of root loci: (a) L(s) = k/(s(s + 2)), (b) L(s) = k(s + 2)/(s(s + 1)), (c) L(s) = k/(s(s + 1)(s + 2))
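The computed loci are readily obtained with the Control Toolbox. The short sketch below uses the three loop gains of the caption of Fig. 2.28 as reconstructed here; treat the pole-zero data as illustrative.

    s  = tf('s');
    L1 = 1/(s*(s + 2));              % two poles, no zeros: asymptotes at +/-90 degrees
    L2 = (s + 2)/(s*(s + 1));        % one excess pole: one branch ends in the zero
    L3 = 1/(s*(s + 1)*(s + 2));      % three excess poles: asymptotes at 60, 180, 300 degrees
    subplot(1,3,1), rlocus(L1)
    subplot(1,3,2), rlocus(L2)
    subplot(1,3,3), rlocus(L3)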

Exercise 2.7.2 (Root loci). Check for each of the root locus diagrams of Fig. 2.28 which of the rules (a)–(h) of Summary 2.7.1 applies.

The application of the root locus method in control design is described in almost any basic control engineering book — see for instance Dorf (1992), Franklin, Powell, and Emami-Naeini (1986), Franklin, Powell, and Emami-Naeini (1991), and Van de Vegte (1990). The root locus method has received much attention in the literature subsequent to Evans' pioneering work. Its theoretical background has been studied by Föllinger (1958), Berman and Stanton (1963), Krall (1961), Krall (1963), Krall (1970), and Krall and Fornaro (1967).

2.8. The Guillemin-Truxal design procedure

2.8.1. Introduction

A network-theory oriented approach to the synthesis of feedback control systems was proposed by Truxal (1955). The idea is simple. Instead of designing a compensator on the basis of an analysis of the open-loop transfer function the closed-loop transfer function H is directly chosen such that it satisfies a number of favorable properties. Next, the compensator that realizes this behavior is computed. Generally an approximation is necessary to arrive at a practical compensator of low order. Sometimes H may be selected on the basis of a preliminary analysis of the behavior of a closed-loop system with a low-order compensator.

2.8.2. Procedure

Let H be the chosen closed-loop transfer function. For unit feedback systems as in Fig. 2.29 with plant and compensator transfer functions P and C, respectively, we have

H = PC/(1 + PC).    (2.69)

Solving for the compensator transfer function C we obtain

C = (1/P) · H/(1 − H).    (2.70)

Figure 2.29.: Unit feedback system

The determination of the desired H is not simple.

A classical approach is to consider the steady-state errors in selecting the closed-loop system. Suppose that the closed-loop system is desired to be of type k (see § 2.2, p. 60). Then the loop gain L needs to be of the form

L(s) = N(s)/(s^k D(s)),    (2.71)

with N and D polynomials that have no roots at 0. It follows that

H(s) = L(s)/(1 + L(s)) = N(s)/(s^k D(s) + N(s)) = (a_m s^m + ··· + a_k s^k + b_{k−1} s^{k−1} + ··· + b_0)/(c_n s^n + ··· + c_k s^k + b_{k−1} s^{k−1} + ··· + b_0).    (2.72)

Conversely, choosing the first k coefficients b_j in the numerator polynomial equal to that of the denominator polynomial ensures the system to be of type k. This still leaves considerable freedom to achieve other goals.

Exercise 2.8.1 (Zero steady-state error). Suppose that we select the closed-loop transfer function as

H(s) = b0/(s^n + b_{n−1} s^{n−1} + ··· + b0),    (2.73)

which implies a zero steady-state error for step inputs r(t) = 1(t). Prove this.

One way to choose the coefficients b0, b1, ..., b_{n−1} is to place the closed-loop poles evenly distributed on the left half of a circle with center at the origin and radius ω0. This yields closed-loop responses with a desired degree of damping. The resulting polynomials are known as Butterworth polynomials. Table 2.3 shows the coefficients for increasing orders. For general ω0 the polynomials follow by substituting s := s/ω0. For the normalized case ω0 = 1 the reference step responses are given in Fig. 2.30(a).

Another popular choice is to choose the coefficients such that the integral of the time multiplied absolute error

∫0^∞ t |e(t)| dt    (2.74)

is minimal, with e the error for a step input (Graham and Lathrop 1953). The resulting step responses and the corresponding so-called ITAE standard forms are shown in Fig. 2.30(b) and Table 2.4, respectively. The ITAE step responses have a shorter rise time and less overshoot than the corresponding Butterworth responses.

2.8.3. Example

We consider the Guillemin-Truxal design procedure for the cruise control system of Example 2.3.3 (p. 67). The plant has the first-order transfer function

P(s) = (1/T)/(s + 1/θ),    (2.75)

with T = θ = 10 [s]. We specify the desired closed-loop transfer function

H(s) = ω0²/(s² + 1.4ω0 s + ω0²).    (2.76)

Figure 2.30.: Step responses for Butterworth (left) and ITAE (right) denominator polynomials (normalized time, orders n = 1, ..., 8)

Table 2.3.: Normalized denominator polynomials for Butterworth pole patterns

Order   Denominator
1       s + 1
2       s² + 1.4s + 1
3       s³ + 2.0s² + 2.0s + 1
4       s⁴ + 2.6s³ + 3.4s² + 2.6s + 1
5       s⁵ + 3.24s⁴ + 5.24s³ + 5.24s² + 3.24s + 1
6       s⁶ + 3.86s⁵ + 7.46s⁴ + 9.14s³ + 7.46s² + 3.86s + 1
7       s⁷ + 4.49s⁶ + 10.1s⁵ + 14.6s⁴ + 14.6s³ + 10.1s² + 4.49s + 1
8       s⁸ + 5.13s⁷ + 13.14s⁶ + 21.85s⁵ + 25.69s⁴ + 21.85s³ + 13.14s² + 5.13s + 1

Table 2.4.: Normalized denominator polynomials for ITAE criterion

Order   Denominator
1       s + 1
2       s² + 1.4s + 1
3       s³ + 1.75s² + 2.15s + 1
4       s⁴ + 2.1s³ + 3.4s² + 2.7s + 1
5       s⁵ + 2.8s⁴ + 5.0s³ + 5.5s² + 3.4s + 1
6       s⁶ + 3.25s⁵ + 6.60s⁴ + 8.60s³ + 7.45s² + 3.95s + 1
7       s⁷ + 4.475s⁶ + 10.42s⁵ + 15.08s⁴ + 15.54s³ + 10.64s² + 4.58s + 1
8       s⁸ + 5.20s⁷ + 12.80s⁶ + 21.60s⁵ + 25.75s⁴ + 22.20s³ + 13.30s² + 5.15s + 1
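The Butterworth rows of Table 2.3 can be regenerated by placing n poles evenly on the left half of a circle of radius ω0 = 1 and expanding the polynomial, as described above. A minimal sketch (plain MATLAB, order n = 4 taken as an example):

    n  = 4;  w0 = 1;
    k  = 1:n;
    poles = w0*exp(1j*pi*(2*k + n - 1)/(2*n));   % evenly spaced on the left half circle
    p  = real(poly(poles));                      % denominator coefficients
    disp(p)                                      % 1  2.6131  3.4142  2.6131  1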

The denominator is a second-order ITAE polynomial. The numerator has been chosen for a zero position error, that is, type 1 control. It is easy to find that the required compensator transfer function is

C(s) = (1/P(s)) · H(s)/(1 − H(s)) = ω0² T (s + 1/θ)/(s(s + 1.4ω0)).    (2.77)

The integrating action is patent. As seen in Example 2.3.3 (p. 67) the largest obtainable bandwidth with pure integral control is about 1/√200 ≈ 0.07 [rad/s]. For the Guillemin-Truxal design we aim for a closed-loop bandwidth ω0 = 1/√2 ≈ 0.7 [rad/s]. Figure 2.31 shows the resulting sensitivity function and the closed-loop step response. It confirms that the desired bandwidth has been obtained.

Inspection of (2.75) and (2.77) shows that in the closed loop the plant pole at −1/θ is canceled by a compensator zero at the same location. This does not bode well for the design, even though the sensitivity function and the closed-loop step response of Fig. 2.31 look quite attractive. The canceling pole at −1/θ is also a closed-loop pole. It causes a slow response (with the open-loop time constant θ) to nonzero initial conditions of the plant and slow transients (with the same time constant) in the plant input. This cancelation phenomenon is typical for naïve applications of the Guillemin-Truxal method. Inspection of (2.70) shows that cancelation may be avoided by letting the closed-loop transfer function H have a zero at the location of the offending pole. This constrains the choice of H, and illustrates what is meant by the comment that the selection of the closed-loop transfer function is not simple.

Figure 2.31.: Closed-loop transfer function H and closed-loop step response of a Guillemin-Truxal design for the cruise control system
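The compensator (2.77) and the pole-zero cancelation can be exhibited numerically. The sketch below (MATLAB Control Toolbox assumed) takes the plant in the form P(s) = (1/T)/(s + 1/θ) used in the example.

    s  = tf('s');
    T  = 10;  theta = 10;
    P  = (1/T)/(s + 1/theta);                 % cruise control plant (2.75)
    w0 = 1/sqrt(2);                           % target closed-loop bandwidth
    H  = w0^2/(s^2 + 1.4*w0*s + w0^2);        % desired closed-loop transfer function (2.76)
    C  = minreal(H/(P*(1 - H)));              % Guillemin-Truxal compensator (2.70)
    zpk(C)                                    % note the zero at -1/theta canceling the plant pole
    step(feedback(P*C, 1))                    % closed-loop step response as in Fig. 2.31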

2.9. Quantitative feedback theory (QFT)

2.9.1. Introduction

Quantitative feedback theory (QFT) is a term coined by Horowitz (1982) (see also Horowitz and Sidi (1972)). The method is deeply rooted in classical control. It aims at satisfying quantitative bounds that are imposed on the variations in the closed-loop transfer function as a result of specified variations of the loop gain. The design method relies on the graphical representation of the loop gain in the Nichols chart. A useful account is given by Lunze (1989).

2.9.2. Effect of parameter variations

Nichols plots may be used to study the effect of parameter variations and other uncertainties in the plant transfer function on a closed-loop system. In particular, it may be checked whether the closed-loop system remains stable. In the Nichols chart, the critical point that is to be avoided is the point (−180°, 0 dB), located at the heart of the chart. By the Nyquist criterion, closed-loop stability is retained as long as the loop gain does not cross the point −1 under perturbation. The effect of the perturbations on the closed-loop transfer function may be assessed by studying the width of the track that is swept out by the perturbations among the M-loci.

Example 2.9.1 (Uncertain second-order system). As an example we consider the plant with transfer function

P(s) = g/(s²(1 + sθ)).    (2.78)

Nominally g = 1 and θ = 0.1 [s]. Under perturbation the gain g varies between 0.5 and 2. The parasitic time constant θ may independently vary from 0 to 0.2 [s]. We assume that a preliminary study has led to a tentative design in the form of a lead compensator with transfer function

C(s) = (k + Td s)/(1 + T0 s),    (2.79)

with k = 1, Td = √2 [s] and T0 = 0.1 [s]. The nominal system has closed-loop poles −0.7652 ± j0.7715 and −8.4697. The closed-loop bandwidth is 1 [rad/s]. Figure 2.32 shows the nominal and perturbed complementary sensitivity function and closed-loop step response.

Figure 2.32.: Nominal and perturbed complementary sensitivity functions and step responses for the nominal design

Figure 2.33 shows the Nichols plot of the nominal loop gain L0 = P0 C, with P0(s) = 1/s². The figure also shows the uncertainty regions caused by the parameter variations at a number of fixed frequencies. These diagrams are constructed by calculating the loop gain L(jω) with ω fixed as a function of the uncertain parameters g and θ along the edges of the uncertainty regions.
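The uncertainty regions can be generated by gridding the uncertain parameters and evaluating the loop gain at each critical frequency. The sketch below (MATLAB Control Toolbox assumed; the grid density is an arbitrary choice) plots the template points in the Nichols plane.

    s     = tf('s');
    k     = 1;  Td = sqrt(2);  T0 = 0.1;
    C     = (k + Td*s)/(1 + T0*s);                    % tentative lead compensator (2.79)
    wcrit = [0.2 1 2 5 10];                           % critical frequencies [rad/s]
    hold on
    for g = linspace(0.5, 2, 9)
        for th = linspace(0, 0.2, 9)
            P = g/(s^2*(1 + th*s));                   % perturbed plant (2.78)
            L = squeeze(freqresp(P*C, wcrit));
            plot(angle(L)*180/pi, 20*log10(abs(L)), '.')   % template points
        end
    end
    xlabel('open-loop phase [degrees]'), ylabel('open-loop gain [dB]')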

2.2 20 -1 dB 1 dB ω = 0.: Nominal Nichols plot and uncertainty regions.5 dB ω = 0. Classical Control System Design 30 0. (Shaded: forbidden region) 94 .25 dB 0 dB 0.33.5 3 dB 10 open-loop gain (dB) -3 dB 6 dB ω=1 ω=2 0 -6 dB D -10 -12 dB C ω=5 -20 dB A B ω = 10 -20 -30 -360 -270 -180 -90 0 open-loop phase (degrees) Figure 2.

The corners of the uncertainty regions, as marked in the plot for ω = 10 with the labels A, B, C and D, correspond to extreme values of the parameters: the four combinations of θ = 0 or θ = 0.2 [s] with g = 0.5 or g = 2.

Example 2.9.2 (Performance robustness of the design example). Inspection shows that no perturbation makes the Nichols plot cross over the center of the chart. This means that the closed-loop system remains stable under all perturbations. Inspection of the plots of Fig. 2.33 reveals that the perturbations sweep out a very narrow band of variations of |T| at frequencies less than 0.2, a band with a width of about 5 dB at frequency 1, and a band with a width of about 10 dB between the frequencies 2 and 10, while the width of the band further increases for higher frequencies. This is borne out by Fig. 2.32.

2.9.3. Stability and performance robustness

Robust stability of the closed-loop system is guaranteed if perturbations do not cause the Nichols plot of the loop gain to cross over the center of the chart. In the QFT approach, destabilization caused by unmodeled perturbations is prevented by specifying a forbidden region about the origin for the loop gain as in Fig. 2.33. The forbidden region is a region enclosed by an M-locus, for instance the 6 dB locus. If the Nichols plot of L never enters the forbidden region, not even under perturbation, then the modulus margin is always greater than 6 dB. Besides providing stability robustness, the guaranteed distance of L from the critical point prevents ringing.

In the QFT approach, in the simplest situation performance robustness is specified in the form of bounds on the variation of the magnitude of the closed-loop frequency response function H. Typically, for each frequency ω the maximally allowable variation of |H(jω)|, called the tolerance band, is specified. Since H = TF, with T the complementary sensitivity function and F the prefilter transfer function, it follows after taking logarithms that

log|H| = log|T| + log|F|.    (2.80)

For simplicity we suppress the angular frequency ω. Inspection of (2.80) shows that if F is not subject to uncertainty then robust performance is obtained if and only if for each frequency log|T| varies by at most the tolerance band on the uncertainty region. Whether this condition is satisfied may be verified graphically by checking in the Nichols chart whether the uncertainty region fits between two M-loci whose values differ by less than the tolerance band.

In the next subsection we discuss how to design the feedback loop such that T satisfies the stability and performance robustness conditions.

2.9.4. QFT design of robust feedback systems

A feedback system design may easily fail to satisfy the performance robustness specifications. This often may be remedied by re-shaping the loop gain L. By visual inspection the shape of the Nichols plot of L may be adjusted by suitable shifts at the various frequencies so that the plot fits the tolerance bounds on T.

The translations needed to shift the Nichols plot to make it fit the tolerance requirements are achieved by a frequency dependent correction of the compensator frequency response C. Changing the compensator frequency response C(jω) for some frequency ω amounts to shifting the loop gain L(jω) at that same frequency in the Nichols plot. Part of the technique is to prepare templates of the uncertainty regions at a number of frequencies (usually not many more than five), and shifting these around in the Nichols chart. Note that changing the loop gain by changing the compensator frequency response function does not affect the shapes of the templates. The procedure is best explained by an example.

Example 2.9.3 (QFT design). We continue the second-order design problem of the previous examples, and begin by specifying the tolerance band for a number of critical frequencies as in Table 2.5. The desired bandwidth is 1 rad/s.

Table 2.5.: Tolerance band specifications.

frequency [rad/s]   tolerance band
0.2                 0.5 dB
1                   2 dB
2                   5 dB
5                   10 dB
10                  18 dB

Determination of the performance boundaries. The first step of the procedure is to trace for each selected critical frequency the locus of the nominal points such that the tolerance band is satisfied with the tightest fit. This locus is called the performance boundary. Points on the performance boundary may for instance be obtained by fixing the nominal point at a certain phase, and shifting the template up or down until the lowest position is found where the tolerance band condition is satisfied. Figure 2.34 shows the performance boundaries thus obtained for the critical frequencies 1, 2 and 5 rad/s to the right in the Nichols chart. The performance boundary for the frequency 0.2 rad/s is above the portion that is shown and that for 10 rad/s below it.

Determination of the robustness boundaries. Next, by shifting the template around the forbidden region so that it touches it but does not enter it the robustness boundary is traced for each critical frequency. The robustness boundaries are shown for all five critical frequencies to the right of the center of the chart.

Loop gain shaping. The crucial step in the design is to shape the loop gain such that

1. at each critical frequency ω the corresponding loop gain L(jω) is on or above the corresponding performance boundary;
2. at each critical frequency ω the corresponding loop gain L(jω) is to the right of the corresponding stability robustness boundary.

If it is on the boundaries then the bounds are satisfied tightly. A feedback design satisfies the performance bounds and robustness bounds if for each critical frequency the corresponding value of the loop gain is on or above the performance boundary and to the right of or on the robustness boundary. Inspection shows that the nominal design satisfies the specifications for the critical frequencies ω = 2, 5 and 10 rad/s, but not for ω = 1 rad/s. Also for ω = 0.2 it may be shown that the specifications are not satisfied.
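Whether a candidate loop gain meets the tolerance bands of Table 2.5 can also be checked directly by computing the spread of |T| over the uncertainty region at each critical frequency. A small sketch (MATLAB Control Toolbox assumed; the parameter grid is the same arbitrary choice as before):

    s     = tf('s');
    C     = (1 + sqrt(2)*s)/(1 + 0.1*s);              % compensator under test
    wcrit = [0.2 1 2 5 10];
    tol   = [0.5 2 5 10 18];                          % tolerance bands of Table 2.5 [dB]
    Tmag  = [];
    for g = linspace(0.5, 2, 9)
        for th = linspace(0, 0.2, 9)
            P = g/(s^2*(1 + th*s));
            T = feedback(P*C, 1);                     % complementary sensitivity
            Tmag(end+1, :) = 20*log10(abs(squeeze(freqresp(T, wcrit)))).';
        end
    end
    spread = max(Tmag) - min(Tmag);                   % dB variation at each critical frequency
    disp([wcrit; spread; tol])                        % compare spread with allowed tolerance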

Figure 2.34.: Performance and robustness boundaries

This target should be achieved with a compensator transfer function of least complexity and without overdesign (that is, the loop gain should be on the boundaries rather than above or to the right). This stage of the design requires experience and intuition, and is the least satisfactory from a methodical point of view. In the problem at hand a design may be found in the following straightforward manner.

The vertical line (a) in Fig. 2.35 is the Nichols plot of the nominal plant transfer function P(s) = 1/s². Obviously phase lead is needed. This is provided with a compensator with transfer function

C(s) = 1 + sT1.    (2.81)

The curves marked T1 = 1, T1 = 3 and T1 = 9 represent the corresponding loop gains L = PC. The loop gains for T1 = 3 and T1 = 9 satisfy the requirements, the latter with wide margins. We choose T1 = 3. To reduce the high-frequency compensator gain we modify its transfer function to

C(s) = (1 + sT1)/(1 + sT2).    (2.82)

The resulting loop gain for T2 = 0.02 is also included in Fig. 2.35. It very nearly satisfies the requirements¹. Figure 2.36 gives plots of the resulting nominal and perturbed step responses and complementary sensitivity functions. The robustness improvement is evident.

Figure 2.35.: Redesign of the loop gain

Exercise 2.9.4 (Performance and robustness boundaries). Traditional QFT relies on shifting paper templates around on Nichols charts to determine the performance and robustness bounds such as in Fig. 2.34. Practice these ideas by re-creating Fig. 2.34. Think of ways to do this using routines from the MATLAB Control Toolbox.

¹ The requirements may be completely satisfied by adding a few dB to the loop gain.
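The nominal curves of Fig. 2.35 can be regenerated as follows (MATLAB Control Toolbox assumed); this is a sketch for comparison, not part of the original design text.

    s  = tf('s');
    P0 = 1/s^2;                              % nominal plant
    hold on
    for T1 = [1 3 9]
        nichols(P0*(1 + T1*s))               % pure phase lead compensation (2.81)
    end
    T1 = 3;  T2 = 0.02;
    nichols(P0*(1 + T1*s)/(1 + T2*s))        % with high-frequency roll-off (2.82)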

Figure 2.36.: Nominal and perturbed complementary sensitivity functions and step responses of the revised design

2.9.5. Prefilter design

Once the feedback compensator has been selected the QFT design needs to be completed with the design of the prefilter.

Example 2.9.5 (Prefilter design). We continue the design example, and complete it as a 2½-degree-of-freedom design as proposed in § 1.8 (p. 47). Figure 2.37 shows the block diagram. The choice of the numerator polynomial F provides half a degree of freedom and the rational transfer function F0 of the prefilter constitutes another degree of freedom.

Figure 2.37.: 2½-degree-of-freedom feedback system

The closed-loop transfer function (from the reference input r to the controlled output z) is

H = (N F/Dcl) F0.    (2.83)

Here P = N/D is the plant transfer function and Dcl = DX + NY the closed-loop characteristic polynomial. In the problem at hand N(s) = 1 and D(s) = s². The compensator Y(s) = 3s + 1, X(s) = 0.02s + 1 constructed in Example 2.9.3 (p. 96) results in the closed-loop characteristic polynomial

Dcl(s) = 0.02s³ + s² + 3s + 1.    (2.84)

Its roots are −0.3815, −2.7995 and −46.8190. Completing the design amounts to choosing the correct polynomial F and transfer function F0 to provide the necessary compensation in

H(s) = F(s) F0(s)/[0.02(s + 0.3815)(s + 2.7995)(s + 46.8190)].    (2.85)

The pole at −0.3815 slows the response down, so we cancel it by selecting the polynomial F — whose degree can be at most 1 — as F(s) = s/0.3815 + 1. To reduce the bandwidth to the desired 1 rad/s and to obtain a critically damped closed-loop step response we let

F0(s) = ω0²/(s² + 2ζ0ω0 s + ω0²),    (2.86)

with ω0 = 1 rad/s and ζ0 = ½√2. Figure 2.38 displays the ensuing nominal and perturbed step responses and closed-loop transfer functions. Comparison with Fig. 2.32 makes it clear that the robustness has been drastically improved.
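The characteristic polynomial (2.84) and the final nominal closed-loop transfer function are easy to verify numerically. A minimal sketch (MATLAB Control Toolbox assumed):

    s    = tf('s');
    N    = 1;                                   % plant numerator, P = N/D with D = s^2
    X    = 0.02*s + 1;  Y = 3*s + 1;            % compensator C = Y/X of Example 2.9.3
    Dcl  = 0.02*s^3 + s^2 + 3*s + 1;            % closed-loop characteristic polynomial (2.84)
    roots([0.02 1 3 1])                         % -46.8190, -2.7995, -0.3815

    F    = s/0.3815 + 1;                        % cancels the slow closed-loop pole
    w0   = 1;  zeta0 = sqrt(2)/2;
    F0   = w0^2/(s^2 + 2*zeta0*w0*s + w0^2);    % prefilter (2.86)
    H    = minreal(N*F/Dcl*F0);                 % closed-loop transfer function (2.83)
    step(H)                                     % nominal step response as in Fig. 2.38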

2.9.6. Concluding remark

The QFT method has been extended to open-loop unstable plants and non-minimum phase plants, and also to MIMO and nonlinear plants. Horowitz (1982) provides references and a review. Recently a MATLAB toolbox for QFT design has appeared².

² Quantitative Feedback Theory Toolbox, The MathWorks Inc., Natick, MA, USA, 1995 release.

2.10. Concluding remarks

This chapter deals with approaches to classical compensator design. The focus is on compensation by shaping the open-loop frequency response. The design goals in terms of shaping the loop gain are extensively considered in the classical control literature. The classical design techniques cope with them in an ad hoc and qualitative manner. It requires profound experience to handle the classical techniques, but if this experience is available then for single-input single-output systems it is not easy to obtain the quality of classical design results by the analytical control design methods that form the subject of the later chapters of this book. If the design problem has a much more complex structure, for instance with multi-input multi-output plants or when complicated uncertainty descriptions and performance requirements apply, then the analytical techniques are the only reliable tools. Even in this case a designer needs considerable expertise and experience with classical techniques to appreciate and understand the design issues involved.

Figure 2.38.: Nominal and perturbed step responses and closed-loop transfer functions of the final QFT design


and is dealt with at some length in Section 3. the choice of the weighting matrices. Standard references are Anderson and Moore (1971). 115) deals with the LQG problem.1.2 (p. The theoretical results obtained for this class of design methods are known under the general name of LQG theory. Glover. The system may be affected by disturbances and measurement noise represented as stochastic processes. In the period since 1980 the theory has been further refined under the name of H2 theory (Doyle. Besides presenting the solution to the LQ problem we discuss its properties. by Gaussian white noise. 3. with quadratic performance criteria. in the wake of the attention for the so-called H∞ control theory. It permits a frequency domain view and allows the introduction of frequency dependent weighting functions. Kwakernaak and Sivan (1972) and Anderson and Moore (1990). Disturbances and measurement noise are modeled as stochastic processes. Introduction The application of optimal control theory to the practical design of multivariable control systems attracted much attention during the period 1960–1980. MIMO problems can be handled almost as easily as SISO problems. LQG and H2 Control System Design Overview – LQ and LQG design methods convert control system design problems to an optimization problem with quadratic time-domain performance criteria. LQ theory is basic to the whole chapter.3. The H2 formulation of the same method eliminates the stochastic element. In the present chapter we present a short overview of a number of results of LQG and H2 theory with an eye to using them for control system design. in particular. This theory considers linear finitedimensional systems represented in state space form. and Francis 1989). LQ. The LQ paradigm leads to state feedback. Khargonekar.3 (p. By using optimal observers — Kalman filters — compensators based on output feedback may be 103 . Using the notion of return difference and the associated return difference equality we discuss the asymptotic properties and the guaranteed gain and phase margins associated with the LQ solution. and how to obtain systems with a prescribed degree of stability. The section concludes with a subsection on the numerical solution of Riccati equations. 104). The deterministic part is called LQ theory. Section 3.

(3. Multi-input multi-output systems are handled almost (but not quite) effortlessly in the LQG and H2 framework.7 (p. A useful application is the design of feedback systems with integral control.6 (p. that is. To this end we introduce the performance index J= 0 ∞ [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt . The second term measures the accumulated amplitude of the control input. Consider a linear time-invariant system represented in state space form as x ˙(t ) = Ax (t ) + Bu (t ). For well-behaved plants — specifically. If the matrices are diagonal then this means that their diagonal entries should be nonnegative. This is known as loop transfer recovery. and reduces the role of the intensity matrices that describe the white noise processes in the LQG problem to that of design parameters. that is. Q = QT and R = RT . (3. 139) a number of proofs for this chapter are collected. The first term in the integral criterion (3. 133) we present both a SISO and a MIMO design example. LQG and H2 Control System Design designed.2. The two terms zT (t ) Qz (t ) and uT (t ) Ru (t ) are quadratic forms in the components of the output z and the input u.2) Q and R are symmetric weighting matrices. The acronym refers to Linear systems with Quadratic performance criteria. the input u (t ) a k-dimensional vector. We wish to control the system from any initial state x (0 ) such that the output z is reduced to a very small value as quickly as possible without making the input u unduly large. t ≥ 0. Often it is adequate to let the two matrices simply be diagonal. It is most sensible to choose the weighting matrices Q and R such that the two terms are nonnegative.2) measures the accumulated deviation of the output from zero. plants that have no right-half plane zeros — the favorable properties of the LQ solution may asymptotically be recovered by assuming very small measurement noise. The H2 interpretation naturally leads to H2 optimization with frequency dependent weighting functions. LQ. and the output z (t ) an m-dimensional vector. These permit a great deal of extra flexibility and make H2 theory a tool for shaping closed-loop system functions. LQ theory 3.2) is minimal along all possible trajectories of the system is the optimal linear regulator problem .1) For each t ≥ 0 the state x (t ) is an n-dimensional vector. R is positivedefinite if xT Rx > 0 for all nonzero x. respectively.3. This interpretation removes the stochastic ingredient from the LQG framework.2. 124) it is demonstrated that LQG optimization amounts to minimization of the H2 -norm of the closed-loop system. to take Q and R nonnegative-definite1. In Section 3.4 (p. In Section 3. The problem of controlling the system such that the performance index (3. Introduction In this section we describe the LQ paradigm. 1 An n × n symmetric matrix R is nonnegative-definite if xT Rx ≥ 0 for every n-dimensional vector x. 104 .1. In Section 3. 3. z (t ) = Dx (t ).

3 (p.2. Assumptions: • The system (3. Solution of the LQ problem There is a wealth of literature on the linear regulator problem. 2 That 3 That t ≥ 0. The minimal value of the performance index (3. There are finitely many other solutions of the ARE. with F = R−1 BT X .1) is stabilizable2 and detectable3. An optimal trajectory is generated by choosing the input for t ≥ 0 as u (t ) = − Fx (t ).7 (p.3. The algebraic Riccati equation (ARE) AT X + X A + DT QD − X B R−1 BT X = 0 (3. 3. z (t ) = Dx (t ) is observable then X is positive-definite. The reason why it attracted so much attention is that its solution may be represented in feedback form. An outline of the proof is given in § 3. 114). LQ theory 3.6) has a unique nonnegative-definite symmetric solution X . the appendix to this chapter.7) is. We summarize a number of well-known important facts about the solution of the LQ problem. 105 . 2. If the system x ˙(t ) = Ax (t ).3) This solution requires that the state x (t ) be fully accessible for measurement at all times.1 (p. The following facts are well documented (see for instance Kwakernaak and Sivan (1972) and Anderson and Moore (1990)). (3. 142). 139). 115).2. • The weighting matrices Q and R are positive-definite.1 (Properties of the solution of the optimal linear regulator problem).5) The proof is sketched in § 3. The k × n state feedback gain matrix F is given by F = R−1 BT X .4) The symmetric n × n matrix X is the nonnegative-definite solution of the algebraic matrix Riccati equation (ARE) AT X + X A + DT QD − X B R−1 BT X = 0. Sufficient for stabilizability is that the system is controllable. Summary 3.7. Sufficient for detectability is that it is observable.2.9 (p. 1. (3. there exists a state feedback u ( t ) = − Fx ( t ) such that the closed-loop system x ˙( t ) = ( A − B F ) x ( t ) is stable.2) is Jmin = xT (0 ) Xx (0 ).2.2. (3. (3. is. The minimal value of the performance index is achieved by the feedback control law u (t ) = − Fx (t ). The solution of the algebraic Riccati equation is discussed in § 3. there exists a matrix K such that the system e ˙( t ) = ( A − K D )e ( t ) is stable. We return to this unreasonable assumption in § 3.

Determine the ARE.2. · · · . If Q is not positive-definite then there may be unstable closed-loop modes that have no effect on the performance index. Thus. The linearized dynamics of the vehicle of Example 1. Ri = 1 . The reasons for the assumptions may be explained as follows. or until the designer is satisfied with the result. Plot the closed-loop response of the state x (t ) and input u (t ) to the initial state x (0 ) = 1 in dependence on ρ. Compute the closed-loop pole of the resulting optimal feedback system and check that the closed-loop system is always stable. m .10) · · · · · · · · · · · · · · ·  . Exercise 3. 3. LQ. The closed-loop system x ˙(t ) = ( A − B F ) x (t ). An initial guess is to choose both Q and R diagonal     Q1 0 0 ··· 0 0 ··· 0 R1 0  0 Q2 0 · · · 0   0 R2 0 · · · 0    Q= R= (3. Increasing both Q and R by the same factor leaves the optimal solution invariant.11) The number zmax denotes the maximally acceptable deviation value for the ith component of the i output z. Explain the way the closed-loop responses vary with ρ. only relative values are relevant. The Q and R parameters generally need to be tuned until satisfactory behavior is obtained. (3. Consider finding the linear state feedback that minimizes the criterion ∞ 0 [ x2 (t ) + ρu2 (t )] dt (3.3.2. stability of the optimal solution is not guaranteed. Choice of the weighting matrices The choice of the weighting matrices Q and R is a trade-off between control performance ( Q large) and low input energy ( R large).8) is stable.2 (Cruise control system). find its positive solution. The other quantity umax has a similar meaning for the ith component of the input u. umax i i = 1. · · · . 106 . Without loss of generality we may take a = 1. all the eigenvalues of the matrix A − BF have strictly negative real parts. How does the closed-loop pole vary with ρ? Explain. and compute the optimal state feedback gain.1 (p. i Starting with this initial guess the values of the diagonal entries of Q and R may be adjusted by systematic trial and error. 3) may be described by the equation x ˙(t ) = −ax (t ) + au (t ).3. If the system is not stabilizable then obviously it cannot be stabilized.9) with ρ a positive constant. 2. LQG and H2 Control System Design 4.2. k . R needs to be positive-definite to prevent infinite input amplitudes. 2. (3. zmax i i = 1. t ≥ 0. If it is not detectable then there exist state feedback controllers that do not stabilize the system but hide the instability from the output—hence. where a = 1/θ is a positive constant. that is. · · · · · · · · · · · · · · · . 0 · · · · · · 0 Qm 0 · · · · · · 0 Rk where Q and R have positive diagonal entries such that Qi = 1 .
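For the scalar system of this exercise the ARE can be solved by hand, but it is also instructive to let MATLAB do it. A minimal sketch (Control Toolbox assumed; a = 1 as suggested in the exercise, and the values of ρ are an arbitrary sweep):

    a = 1;
    A = -a;  B = a;  Q = 1;
    for rho = [0.01 0.1 1 10]
        R = rho;
        [F, X, clpole] = lqr(A, B, Q, R);   % F = R^{-1} B' X, X solves the ARE
        fprintf('rho = %5.2f   F = %6.3f   closed-loop pole = %7.3f\n', rho, F, clpole)
    end
    % By hand: the ARE  -2*a*X + 1 - a^2*X^2/rho = 0  has the positive solution
    % X = (rho/a)*(sqrt(1 + 1/rho) - 1), and the closed-loop pole is -a*sqrt(1 + 1/rho),
    % which is always in the left-half plane and moves further left as rho decreases.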

16) with Fα = R−1 BT Xα . t ≥ 0. 106) while explaining and interpreting the effect of α. n .17) (3. Define x α ( t ) = x ( t ) eα t .19) that its system (3. (3.13) (3. (3. Application of the control law (3. choosing α positive results in an optimal closed-loop system with a prescribed degree of stability. The stabilizing property of the optimal solution of the modified problem implies that Re λi ( A + α I − B Fα ) < 0. Re λi ( A (3. The modified performance index is Jα = 0 ∞ e2αt [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt . 107 . (3.19) (3. Rework Exercise 3. Exercise 3. (3.2. It follows from (3.1) creates a closed-loop system matrix A eigenvalues satisfy ¯ α ) < −α.20) Thus.3 (Cruise control system). (3.9) to ∞ 0 e2αt [ x2 (t ) + ρu2 (t )] dt .3.2. (3. or u (t ) = − Fα x (t ).2 (p.18) with λi ( A + α I − B Fα ) denoting the ith eigenvalue.2. i = 1.4.21) with α a positive constant. · · · . Xα is the positive-definite solution of the modified algebraic Riccati equation ( AT + α I ) Xα + Xα ( A + α I ) + DT QD − Xα B R−1 BT Xα = 0. These signals satisfy x ˙α (t ) = ( A + α I ) xα (t ) + Bu α (t ) and ∞ 0 u α ( t ) = u ( t ) eα t . Prescribed degree of stability By including a time-dependent weighting function in the performance index that grows exponentially with time we may force the optimal solutions to decay faster than the corresponding exponential rate. the minimizing u α is u α (t ) = − R−1 BT Xα xα (t ). 2.12) with α a real number. Modify the criterion (3.17) to the ¯ α = A − BFα .15) Consequently. LQ theory 3.14) Jα = T [ zT α ( t ) Qzα ( t ) + u α ( t ) Ru α ( t )] dt .2.
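Numerically, the modified problem is solved with the same machinery by shifting A. A minimal sketch (Control Toolbox assumed), reusing the scalar example; the value of α is an assumed design choice:

    a = 1;  rho = 1;  alpha = 2;                       % prescribed degree of stability (assumed)
    A = -a;  B = a;  Q = 1;  R = rho;
    Falpha = lqr(A + alpha*eye(size(A)), B, Q, R);     % LQ problem for the shifted system
    clpole = A - B*Falpha;                             % closed-loop pole of the original system
    % By construction the real part of clpole is smaller than -alpha.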

with ω ∈ R.2. 3.2. 108 .2. Consider det J (s ) = det[ I + L (s )] = det[ I + F (sI − A )−1 B]. (3. 4 If (3.and the closedloop characteristic polynomial. Several properties of the closed-loop system may be related to the return difference.23) is known as the return difference. J (s )u is the difference between the signal u in Fig.1(b) and the “returned” signal v = − L (s )u. The quantity J (s ) = I + L (s ) (3. (3.3 (p.1(b) then the loop gain is L (s ) = F (sI − A )−1 B.1). we obtain the return difference inequality4 J T (− jω) R J ( jω) ≥ R for all ω ∈ R.2 (p. LQ.22) (3. If the loop is broken as in Fig. respectively.3.24) P and Q are symmetric matrices of the same dimensions then P ≥ Q means that xT Px ≥ xT Qx for every real n-dimensional vector x. The relation (3. 3. It is proved in § 3.25) det(sI − A ) χol (s ) The quantities χol (s ) = det(sI − A ) and χcl (s ) = det(sI − A + BF ) are the open. 109) we use the return difference equality to study the root loci of the optimal closed-loop poles. By setting s = jω. LQG and H2 Control System Design (a) u − (sI−A)−1B x − v u (b) (sI−A)−1B x F F Figure 3.5. We found the same result in § 1.26) G (s ) = D (sI − A )−1 B is the open-loop transfer matrix of the system (3. 143) by manipulation of the algebraic Riccati equation (3.6 (p. (3. In Subsection 3. Suppose that the gain matrix F is optimal as in Summary 3.1(a) shows the feedback connection of the system x ˙ = Ax + Bu with the state feedback controller u = − Fx. 11) in the form χcl (s ) = χol (s ) det[ I + L (s )].6) that the corresponding return difference satisfies the equality = J T (−s ) R J (s ) = R + GT (−s ) QG (s ).26) is known as the return difference equality or as the Kalman-Jakuboviˇ cPopov (KJP) equality.7 (p.2.27) In Subsection 3.1.1. Return difference equality and inequality Figure 3. after its discoverers. Using the well-known matrix equality det( I + M N ) = det( I + N M ) we obtain det J (s ) = det[ I + (sI − A )−1 BF ] = det (sI − A )−1 det(sI − A + BF ) χcl (s ) det(sI − A + BF ) = .: State feedback 3. 112) we apply the return difference inequality to establish a well-known robustness property of the optimal state feedback system.7.

31) with χcl the closed-loop characteristic polynomial and χol the open-loop characteristic polynomial. Infinite weight on the input term.6. From (3.32) 1 G (s ) G (−s ). because by stability the closed-loop poles are always in the left-half complex plane. (3.30) with ρ a positive number. Under these assumptions the return difference equality (3. (3. z (t ) = dx (t ). ρ (3. LQ theory 3.29) with f a row vector. the coefficient of the term of highest degree is 1.31–3. 109 . Asymptotic performance weighting For simplicity we first consider the case that (3.26) reduces to J (−s ) J ( s ) = 1 + From (3.1) is a SISO system. From the right-hand side of (3.28) with b a column vector and d a row vector. It is easy to separate the two sets of poles. Without loss of generality we consider the performance index J= 0 ∞ [z2 (t ) + ρu2 (t )] dt . Similarly. ρ (3.1) in the form x ˙(t ) = Ax (t ) + bu (t ). the loop gain L (s ) = f (sI − A )−1 b and the return difference J (s ) = 1 + L (s ) now all are scalar functions.34) we may determine the following facts about the loci of the closed-loop poles as the weight ρ on the input varies. This amounts to setting Q = 1 and R = ρ. If ρ → ∞ then the closed-loop poles and their mirror images approach the roots of χol (s )χol (−s ).2.33) The constant k is chosen such that the polynomial ψ is monic5 . The open-loop transfer function G (s ) = d (sI − A )−1 b. we represent the optimal state feedback controller as u (t ) = − f x (t ). We furthermore write G (s ) = k ψ(s ) .3. χol (s ) (3. This means that the closed-loop poles approach 5 That is. To reflect this in the notation we rewrite the system equations (3.2. χol (s ) (3.33) we now obtain χcl (−s )χcl (s ) = χol (−s )χol (s ) + k2 ψ(−s )ψ(s ).27) we have J (s ) = χcl (s ) . (3.34) The left-hand side χcl (−s )χcl (s ) of this relation defines a polynomial whose roots consists of the closed-loop poles (the roots of χcl (s )) together with their mirror images with respect to the imaginary axis (the roots of χcl (−s )).

the number of open-loop zeros. (3. They form a Butterworth pattern of order n − q and radius ωc = k2 ρ 1 2(n−q ) . Let L (s ) = f (sI − A )−1 b and G (s ) = d (sI − A )−1 b. From this the 2(n − q ) roots that go to infinity as ρ ↓ 0 may be estimated as the roots of s 2 ( n − q ) + ( −1 ) n − q k2 = 0. • A further q+ closed-loop poles approach the mirror images in the left-half plane of the q+ right-half plane open-loop zeros. 3. Exercise 3.7 (p. If the open-loop system is stable to begin with then the closed-loop poles approach the open-loop poles as the input is more and more heavily penalized. In fact.37) If the plant has no right-half plane zeros then this radius is an estimate of the closed-loop bandwidth. 87). We first estimate the radius of the Butterworth pole configuration that is created as ρ decreases. The bandwidth we refer to is the bandwidth of the closed-loop system with z as output.2. This agrees with the limits of performance established in § 1. and • the mirror images of those open-loop poles that lie in the right-half complex plane (the “unstable” open-loop poles). Suppose that q− open-loop zeros lie in the left-half complex plane or on the imaginary axis.35) with q the degree of ψ. • The remaining n − q− − q+ closed-loop poles approach infinity according to a Butterworth pattern of order n − q− − q+ (see § 2. As ρ ↓ 0 the closed-loop poles the open-loop zeros (the roots of ψ) come into play.36) The n − q left-half plane roots are approximations of the closed-loop poles. LQG and H2 Control System Design • those open-loop poles that lie in the left-half complex plane (the “stable” open-loop poles). in the limit ρ → ∞ all entries of the gain F become zero—optimal control in this case amounts to no control at all. Taking leading terms only. The smaller ρ is the more accurate the estimate is. Verify these statements by considering the SISO configuration of Fig. the right-hand side of (3. Vanishing weight on the input term. and.3. hence.4 (Closed-loop frequency response characteristics). and q+ zeros in the right-half plane. • If ρ ↓ 0 then q− closed-loop poles approach the q− left-half plane open-loop zeros.34) reduces to (−s )n s n + k2 (−s )q s q . LQ.2. Just how small or large ρ should be chosen depends on the desired or achievable bandwidth. If the plant has right-half plane open-loop zeros then the bandwidth is limited to the magnitude of the right-half plane zero that is closest to the origin. If the open-loop system is unstable then in the limit ρ → ∞ the least control effort is used that is needed to stabilize the system but no effort is spent on regulating the output. 110 . ρ (3. ρ (3.7. p. 40).
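The optimal closed-loop poles follow directly from the factorization (3.34) by forming the polynomial χol(−s)χol(s) + (k²/ρ)ψ(−s)ψ(s) and retaining its left-half plane roots. A minimal numerical sketch (plain MATLAB; the open-loop data below, a double integrator with one left-half plane zero, are illustrative and not taken from the text):

    pneg  = @(p) p .* (-1).^(length(p)-1:-1:0);   % coefficients of p(-s) from those of p(s)
    k     = 1;
    psi   = [1 1];                  % psi(s) = s + 1 (one left-half plane zero)
    chiol = [1 0 0];                % chi_ol(s) = s^2 (double open-loop pole at the origin)
    pad   = zeros(1, 2*(length(chiol) - length(psi)));
    for rho = [1 0.01 1e-4]
        lhs = conv(pneg(chiol), chiol) + (k^2/rho)*[pad, conv(pneg(psi), psi)];
        r   = roots(lhs);
        disp(r(real(r) < 0).')      % optimal closed-loop poles for this rho
    end
    % As rho decreases, one pole approaches the zero at -1 and the other recedes
    % along the negative real axis with radius roughly (k^2/rho)^(1/2).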

the frequency response function H ( jω)/ H (0 ) − 1. with ρ a positive number. and R0 a fixed positive-definite symmetric matrix. 1 + L (s ) H (s ) It follows that the closed-loop transfer function H is H (s ) = k ψ(s ) . • If ρ ↓ 0 then those closed-loop poles that remain finite approach the left-half plane zeros of det GT (−s ) QG (s ). Let R = ρ R0 .3. 3.2. In this case the closed-loop poles approach the lefthalf plane open-loop zeros and the left-half plane mirror images of the right-half plane open-loop zeros. The results may be summarized as follows.3. is small over the frequency range [0. For the MIMO case the situation is more complex. Prove that as ρ ↓ 0 the closed-loop transfer function behaves as H (s ) ≈ k .41) Argue that this means that asymptotically the bandwidth is ζ (that is. the closedloop bandwidth is ωc .40) Bk denotes a Butterworth polynomial of order k (see Table 2.38) 2. p. If the open-loop transfer matrix G (s ) = D (sI − A )−1 B is square then we define the zeros of det G (s ) as the open-loop zeros. LQ theory r − u (sI−A)−1b x d z f Figure 3. s + ζ Bn−q (s/ωc ) (3. 91). Prove that u= 1 r. 111 . Next assume that the open-loop transfer function has a single (real) right-half plane zero ζ. ω ∈ R.2. Hence. ζ]). Bn−q (s/ωc ) (3.: State feedback with reference input r and output z 1. Prove that asymptotically the closed-loop transfer function behaves as H (s ) ≈ s−ζ k . Assume that the open-loop transfer function G has no right-half plane zeros. χcl (s ) (3. 1 + L (s ) z= G (s ) r. We study the root loci of the closed-loop poles as a function of ρ.39) (3. • As ρ → ∞ the closed-loop poles approach those open-loop poles that lie in the left-half plane and the mirror images in the left-half plane of the right-half plane open-loop poles.

27) takes the form |1 + L ( jω)| ≥ 1.4. Correspondingly. The gain margin is infinite and the phase margin is at least 60◦ . = unit circle Inspection shows that the modulus margin of the closed-loop system is 1. LQ.3. (a): Open-loop stable plant. some or all of its entries go to infinity. (3.: Examples of Nyquist plots of optimal loop gains.6 (see Eqn. For the single-input case discussed in Subsection 3. and that all its zeros are in the left-half plane. We note some further facts: • As ρ ↓ 0 the gain matrix F approaches ∞. Figure 3.7.: Loop gain for state feedback 3. The number of patterns and their radii depend on the open-loop plant (Kwakernaak 1976). that is. for open-loop stable systems the gain 112 . (b): Open-loop unstable plant with one right-half plane pole.42) This inequality implies that the Nyquist plot of the loop gain stays outside the circle with center at −1 and radius 1. ω ∈ R. More precisely. the solution X of the Riccati equation approaches the zero matrix. (3. Then as we saw the closed-loop bandwidth inreases without bound as ρ ↓ 0.2.2. • Assume that the open-loop transfer matrix D (sI − A )−1 B is square.31)) the return difference inequality (3. v − u (sI−A)−1B x F Figure 3. LQG and H2 Control System Design The closed-loop poles that do not remain finite as ρ ↓ 0 go to infinity according to several Butterworth patterns of different orders and different radii. Understanding the asymptotic behavior of the closed-loop poles provides insight into the properties of the closed-loop systems. Im Im Re Re (a) (b) Figure 3. 3. Guaranteed gain and phase margins If the state feedback loop is opened at the plant input as in Fig.4 shows two possible behaviors of the Nyquist plot.3 then the loop gain is L (s ) = F (sI − A )−1 B.3.
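The guaranteed margins can be checked numerically. The sketch below is not from the text; it assumes a double integrator with an arbitrary LQ gain and verifies that the loop gain F(sI − A)⁻¹B keeps the Nyquist plot outside the unit circle centred at −1, as the return difference inequality requires.

% Sketch (assumed example): the LQ loop gain L(s) = F(sI-A)^(-1)B satisfies
% |1 + L(jw)| >= 1, so its Nyquist plot avoids the unit disc centred at -1.
A = [0 1; 0 0];   B = [0; 1];            % double integrator, for illustration
F = lqr(A, B, eye(2), 1);                % any LQ gain with R = 1
L = ss(A, B, F, 0);                      % loop broken at the plant input
w = logspace(-2, 2, 400);
m = squeeze(abs(1 + freqresp(L, w)));
disp(min(m))                             % >= 1: modulus margin 1, phase margin >= 60 degrees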

5 (Phase margin). z (t ) = Dx (t ). If the i th diagonal entry is Wi ( jω) = e ◦ then the closed-loop system remains stable as long as the angle φ is less than π . 3. 144) of § 3.7. Some caution in interpreting these results is needed. (3. For open-loop unstable systems it may vary between 1 2 and ∞. The guaranteed stability margins are very favorable. however. ω ∈ R.7.47) is positive-definite.45) we may consider the generalized quadratic performance index J= 0 ∞ zT ( t ) uT (t ) Q ST S R z (t ) dt . that is. (3. The closed-loop system may well be very sensitive to perturbations at any other point.43) If both R and W are diagonal then this reduces to W ( jω) + W T (− jω) > I. Prove that the phase margin is at least 60◦ . It is proved in Subsection 3.2.2. Exercise 3. Then minimization of J is equivalent to minimizing J= 0 ∞ [zT (t )( Q − S R−1 ST )z (t ) + vT (t ) Rv(t )] dt (3.48) for the system x ˙(t ) = ( A − B R−1 ST D ) x (t ) + Bv(t ). 3 Thus. (3. Assume that the loop gain L (s ) is perturbed to W (s ) L (s ).2. that is.49) 113 . the SISO results applies to each input channel separately.3. The margins only apply to perturbations at the point where the loop is broken. 60 .44) This shows that if the ith diagonal entry Wi of W is real then it may have any value in the interval jφ (1 2 . ω ∈ R.27). u (t ) (3. t ≥ 0.3 (p. at the plant input. that the closed-loop remains stable provided RW ( jω) + W T (− jω) R > R . (3. LQ theory may vary between 0 and ∞ without destabilizing the system. Define v(t ) = u (t ) + R−1 ST z (t ). ∞ ) without destabilizing the closed-loop system. Cross term in the performance index In the optimal regulator problem for the stabilizable and detectable system x ˙(t ) = Ax (t ) + Bu (t ). with W a stable transfer matrix.8.46) We assume that Q ST S R (3. Suppose that the loop gain satisfies the return difference inequality (3. the appendix to this chapter. The SISO results may be generalized to the multi-input case.

LQG and H2 Control System Design The condition that (3. p.2. For all but the simplest problem recourse needs to be taken to numerical computer calculation. Thus we satisfy the conditions of Summary 3. Prove that the condition that the matrix (3. The system (3.2. 3.2. By redefining DT QD as Q and DT S as S the ARE (3. Hence.3.50) (3. H has exactly n eigenvalues 114 .9.55) H = A − B R−1 ST − Q + S R−1 ST − B R−1 BT −( A − B R−1 ST )T (3. 114).50) is the most general form of the Riccati equation.50).1. Exercise 3.7 (Positive-definiteness and Schur complement). Q − S R−1 ST is called the Schur complement of R. R − ST Q−1 S is the Schur complement of Q. Hint: Q ST S I = R 0 S R−1 I Q − S R−1 ST 0 0 R I R−1 ST 0 . Under the assumptions of Summary 3. Both R and Q − S R−1 ST are positive-definite. Both Q and R − S T Q−1 S are positive-definite. 112) the Hamiltonian matrix H has no eigenvalues on the imaginary axis.52) has what is called “direct feedthrough” because of the term with u in the output z.53) for this system may be converted into the cross term problem of this subsection.6 (LQ problem for system with direct feedthrough).47) be positive-definite is equivalent to either of the following two conditions: 1.6) or (3.50) reduces to AT X + X A + Q − ( X B + S ) R−1 ( BT X + ST ) = 0.2. Show that the problem of minimizing J= 0 ∞ [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt (3. LQ.47) be positive-definite is equivalent to the condition that both R and Q − S R−1 ST be positive-definite (see Exercise 3. z (t ) = Dx (t ) + Eu (t ). (3. Exercise 3.54) 2.1 (p.2.7 (p. If λ is an eigenvalue of the 2n × 2n matrix H then −λ is also an eigenvalue. I (3. 105) or § 3.7. The most dependable solution method relies on the Hamiltonian matrix (3.2.51) x ˙(t ) = Ax (t ) + Bu (t ). Equation (3. with F = R−1 ( BT X + S T D ). t ≥ 0.2. The Riccati equation now is AT X + X A + DT QD − ( X B + DT S ) R−1 ( BT X + ST D ) = 0.45) is u (t ) = − Fx (t ). Solution of the ARE There are several algorithms for the solution of the algebraic Riccati equation (3. The optimal input for the system (3.56) associated with the LQ problem.

7.1. The idea is to reconstruct the state as accurately as possible using an observer or Kalman filter.57) with E1 and E2 both square. U is unitary. we may take E = U1 .2. It is proved in § 3. Let the columns of the real 2n × n matrix E form a basis for the ndimensional space spanned by the eigenvectors and generalized eigenvectors of H corresponding to the eigenvalues with strictly negative real parts. 115) we see how instead of using state feedback control systems may be designed based on feedback of selected output variables only.58) is the desired solution of the algebraic Riccati equation. that is. An up-to-date account of the numerical aspects of the solution of the ARE may be found in Sima (1996). Hence. By including Gaussian white noise in the LQ paradigm linear optimal feedback systems based on output feedback rather than state feedback may be found.8 (“Cruise control system”).3. T is upper triangular.3. 115 . they may be ordered such that the eigenvalues with negative real part precede those with positive real parts. 106).3. Concluding remarks The LQ paradigm would appear to be useless as a design methodology because full state feedback is almost never feasible. Normally it simply is too costly to install the instrumentation needed to measure all the state variables. 145) that −1 X = E2 E1 (3. E2 (3. 3. Partition U = [U1 U2 ].2. Partition E= E1 . E may efficiently be computed by Schur decomposition (Golub and Van Loan 1983) H = UTU H (3. This is the algorithm implemented in most numerical routines for the solution of algebraic Riccati equations. The diagonal entries of T may be arranged in any order. Exercise 3. By basing feedback on estimates of the state several of the favorable properties of state feedback may be retained or closely recovered.59) of the Hamiltonian matrix H .3. LQG stands for Linear Quadratic Guassian. all entries below the main diagonal of T are zero. In particular. Sometimes it is actually impossible to measure some of the state variables. where U1 and U2 both have n columns. In Section 3. The diagonal entries of T are the eigenvalues of H . UU H = U HU = I . I is a unit matrix and the superscript H denotes the complex-conjugate transpose.10. Then the columns of U1 span the same subspace as the eigenvectors and generalized eigenvectors corresponding to the eigenvalues of H with negative real parts. 3. Use the method of this subsection to solve the ARE that arises in Exercise 3.4 (p. that is.3 (p. LQG Theory with negative real part. LQG Theory 3.2 (p.2. Introduction In this section we review what is known as LQG theory.
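As an illustration of the Schur-decomposition method for the algebraic Riccati equation described above, the following minimal sketch (with assumed example data and no cross term) builds the Hamiltonian matrix, reorders the real Schur form so that the stable eigenvalues come first, and forms X = E2 E1⁻¹ from the resulting basis of the stable invariant subspace.

% Sketch of the Schur method for A'X + XA + Q - X B R^(-1) B' X = 0
% (no cross term; example data assumed).
A = [0 1; 0 0];   B = [0; 1];   Q = eye(2);   R = 1;
H = [A, -B/R*B'; -Q, -A'];               % Hamiltonian matrix, cf. (3.56) with S = 0
[U, T] = schur(H, 'real');
[U, T] = ordschur(U, T, 'lhp');          % stable eigenvalues first on the diagonal of T
n = size(A, 1);
X = U(n+1:2*n, 1:n) / U(1:n, 1:n);       % X = E2*inv(E1), cf. (3.58)
disp(norm(A'*X + X*A + Q - X*B/R*B'*X))  % residual, should be close to zero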

is assumed to be available for feedback. (3. t ∈ R. v u Plant z + + w y Controller Figure 3. 64) the output z is the controlled output. The signals v and w are vector-valued Gaussian white noise processes with  E v(t )vT (s ) = V δ(t − s )  E v(t )wT (s ) = 0 t . t ∈ R. Observers Consider the observed system x ˙(t ) = Ax (t ) + Bu (t ). called the intensity matrices of the two white noise processes. LQG and H2 Control System Design We consider the system  x ˙(t ) = Ax (t ) + Bu (t ) + Gv(t )  y (t ) = Cx (t ) + w(t )  z (t ) = Dx (t ) t ∈ R. (3.60) The measured output y is available for feedback. y (t ) = Cx (t ). As a result. LQ. The problem of controlling the system such that the integrated expected value T 0 E [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt (3. Figure 3. We do not go into the theory of stochastic processes in general and that of white noise in particular. At any time t the entire past measurement signal y (s ). The noise signal v models the plant disturbances and w the measurement noise.3.5 clarifies the situation. T ] at this point is taken to be finite but eventually we consider the case that T → ∞.2.63) is minimal is the stochastic linear regulator problem. (3. (3. and the controlled output z (t ). t ≥ 0.3 (p. s ≤ t. also the quadratic error expression zT (t ) Qz (t ) + uT (t ) Ru (t ).62) is a random process. but refer to texts such as Wong (1983) and Bagchi (1993). The initial state x (0 ) is assumed to be a random vector. The time interval [0. as random processes. s ∈ R. The various assumptions define the state x (t ). t ∈ R.64) 116 .5. As in § 2.3.: LQG feedback 3.61)  E w(t )wT (s ) = W δ(t − s ) V and W are nonnegative-definite symmetric constant matrices.

t ∈ R. (3. (3.69) 117 . The observation error y (t ) − C x ˆ(t ) is the difference between the actual measured output y (t ) and the output y ˆ (t ) = C x ˆ(t ) as reconstructed from the estimated state x ˆ (t ). Figure 3. The Kalman filter Suppose that we connect the observer ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ ( t )] . The state x of the system (3.64) that the error satisfies the differential equation e ˙(t ) = ( A − KC )e (t ). K is the observer gain matrix. The extra input term K [ y (t ) − C x ˆ(t )] on the right-hand side of (3. Hence.70) t ∈ R. We may reconstruct the state with arbitrary precision by connecting an observer of the form ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ] .3.65) The signal x ˆ is meant to be an estimate of the state x (t ). LQG Theory This is the system (3.60) but without the state noise v and the measurement noise w. If the error system is stable then e (t ) → 0 as t → ∞ for any initial error e (0 ). t →∞ (3.6 u Plant + K z y − Plant model Observer Figure 3. (3.65) provides a correction that is active as soon as the observation error is nonzero.: Structure of an observer shows the structure of the observer. t ∈ R.3.6.66) as the state estimation error.3. (3.65) and (3. Differentiation of e yields after substitution of (3.64) with an additional input term K [ y (t ) − C x ˆ(t )] on the right-hand side.3. It satisfies the state differential equation of the system (3. t ∈ R. Define e (t ) = x ˆ(t ) − x (t ) (3. x ˆ(t ) −→ x (t ).64) is detectable then there always exists a gain matrix K such that the error system (3.67) If the system (3. It needs to be suitably chosen.68) 3.64) is not directly accessible because only the output y is measured.67) is stable. y (t ) = Cx (t ) + w(t ). so that the estimated state converges to the actual state. to the noisy system x ˙(t ) = Ax (t ) + Bu (t ) + Gv(t ).

7. Assumptions: • The system x ˙(t ) = Ax (t ) + Gv(t ). (3.73) (3. Substitution of the optimal gain matrix (3. This inequality is to be taken in the sense that Y nonnegative-definite. 149) that as t → ∞ the error covariance matrix Ee (t )eT (t ) converges to a constant steady-state value Y that satisfies the linear matrix equation ( A − KC )Y + Y ( A − KC )T + GV GT + K W K T = 0.7 (p. (3. and the closed-loop system matrix A − B F is the transpose of the error system matrix A − KC. They are the duals of the properties listed in Summary 3. t ∈ R.73) yields AY + Y AT + GV GT − Y C T W −1 CY = 0.74) into the Lyapunov equation (3.77) optimal regulator and the Kalman filter are dual in the following sense. the gain minimizes the weighted mean square construction error limt→∞ EeT (t ) We e (t ) for any nonnegative-definite weighting matrix We .74) and the covariance matrix Y the nonnegative-definite solution of the Riccati equation (3. 104). 6 The t ∈ R. and R with W .2 (p.76) (3.6) becomes the observer Riccati equation (3.3. even if the error system is stable. We review several properties of the Kalman filter. 118 .1 (Properties of the Kalman filter). t ∈ R. Given the regulator problem of § 3. 105) for the Riccati equation associated with the regulator problem6 . This is another matrix Riccati equation. Q with V .75) is the famous Kalman filter (Kalman and Bucy 1961). Summary 3. y (t ) = Cx (t ). As a matter of fact. its solution X becomes Y . LQ. By matching substitutions the observer problem may be transposed to a regulator problem. Then the regulator Riccati equation (3.72) This type of matrix equation is known as a Lyapunov equation. A consequence of this result is that the gain (3.2.7. replace A with AT . (3. LQG and H2 Control System Design Differentiation of e (t ) = x ˆ(t ) − x (t ) leads to the error differential equation e ˙(t ) = ( A − KC )e (t ) − Gv(t ) + K w(t ). is stabilizable and detectable. the state feedback gain F is the transpose of the observer gain K .74) minimizes the steady-state mean square state reconstruction error limt→∞ EeT (t )e (t ).75).5 that as a function of the gain matrix K the steady-state error covariance matrix Y is minimal if K is chosen as K = Y C T W −1 .74) ¯ is the steady-state error covariance matrix corresponding to any “Minimal” means here that if Y ¯ ¯ ¯ − Y is other observer gain K then Y ≥ Y . It is proved in § 3. (3.1 (p.71) Owing to the two noise terms on the right-hand side the error now no longer converges to zero. D with GT . B with CT . (3.75) with the gain chosen as in (3.3. The observer ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ] . Suppose that the error system is stable. It is made plausible in Subsection 3.
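The duality just listed can be exploited directly in computations: the observer gain may be obtained from the same routine that solves the regulator problem by transposing the data. A minimal sketch with assumed example data:

% Sketch (assumed data): Kalman gain by duality, A -> A', B -> C', Q -> G V G', R -> W.
A = [0 1; 0 0];   G = [0; 1];   C = [1 0];
V = 1;   W = 0.01;                       % noise intensities (design parameters here)
K = lqr(A', C', G*V*G', W)';             % observer gain K = Y C' W^(-1)
Y = lyap(A - K*C, G*V*G' + K*W*K');      % steady-state error covariance, cf. (3.72)
disp(eig(A - K*C))                       % error system A - KC is stable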

If the system is not stabilizable (with v as input) then there exist observers that are not stable but are immune to the state noise v.2 (Asymptotic results). The minimal value of the mean square reconstruction error is achieved by the observer gain matrix K = Y C T W −1 . Define the “observer return difference” J f (s ) = I + C (sI − A )−1 K . The minimal value of the steady-state weighted mean square state reconstruction error limt→∞ EeT (t ) We e (t ) is7 tr Y We .81) where χol (s ) = det(sI − A ) is the system characteristic polynomial and χ f (s ) = det(sI − A + KC ) the observer characteristic polynomial.3. i.2. 3.82) (3. 2. Prove that det J f (s ) = χ f (s ) . all the eigenvalues of the matrix A − KC have strictly negative real parts. W needs to be positive-definite to prevent the Kalman filter from having infinite gain.77) is controllable rather than just stabilizable then Y is positive-definite. The reasons for the assumptions may be explained as follows. As a consequence also the observer ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ(t )]. The asymptotic results for the regulator problem may be t ∈ R. 2.80) “dualized” to the Kalman filter. LQG Theory • The noise intensity matrices V and W are positive-definite. m. j = 1. t ∈ R. that is. are not stabilized in the error system. χol (s ) (3. (3. Exercise 3.3. is stable.1 (p. 119 . Hence. 4. If the system (3. 105) by duality: 1. hence. · · · . 1.3. (3.78) has a unique nonnegative-definite symmetric solution Y . If the system (3. stability of the error system is not guaranteed. The algebraic Riccati equation AY + Y AT + GV GT − Y C T W −1 CY = 0 (3. The following facts follow from Summary 3.77) is not detectable then no observer with a stable error system exists. 7 The quantity tr M = m i=1 Mii is called the trace of the m × m matrix M with entries Mij .79) is stable. If V is not positive-definite then there may be unstable modes that are not excited by the state noise and. The error system e ˙(t ) = ( A − KC )e (t ).

Exercise 3.86) Assume that V UT U W (3. 3. 3.3. Suppose that E v(t ) w(t ) vT ( s ) wT ( s ) = V UT U δ(t − s ).83) where M is the open-loop transfer matrix M (s ) = C (sI − A )−1 G. W t . (3. Kalman filter with cross correlated noises A useful generalization of the Kalman filter follows by assuming cross correlation of the white noise processes v and w. Prove this result. as before. LQ. J f and M are now scalar functions. • Prove that as σ → ∞ the optimal observer poles approach the open-loop poles that lie in the left-half plane and the mirror images of the open-loop poles that lie in the right-half plane. s ∈ R. y (t ) = Cx (t ) is stabilizable and detectable.87) is positive-definite. and. Then the optimal observer gain is K = (Y C T + GU ) W −1 . Prove that the return difference J f of the Kalman filter satisfies T J f (s ) W J T f (−s ) = W + M ( s ) V M (−s ). Write M (s ) = g ζ(s ) .84) with ζ a monic polynomial and g a constant.3 (Cross correlated noises).89) 120 .88) where the steady-state error covariance matrix Y is the positive-definite solution of the Riccati equation AY + Y AT + GV GT − (Y C T + GU ) W −1 (CY + U T GT ) = 0. • Establish the asymptotic pattern of the optimal observer poles that approach ∞ as σ ↓ 0. σ (3. Consider the SISO case with V = 1 and W = σ . (3. χol (s ) (3.3. (3.4. (3.85) 1 M (s ) M (−s )]. that the system x ˙ (t ) = Ax (t ) + Gv(t ).3. • Prove that as σ ↓ 0 the optimal observer poles that do not go to ∞ approach the open-loop zeros that lie in the left-half plane and the mirror images of the open-loop zeros that lie in the right-half plane. Prove that χ f (s )χ f (−s ) = χol (s )χol (−s )[1 + 4. LQG and H2 Control System Design 2.

Using the estimated state as if it were the actual state is known as certainty equivalence. ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ(t )]. 148) that the state feedback law (3. however. (3. u (t ) = − F x ˆ(t ).92) State feedback. Solution of the stochastic linear regulator problem The stochastic linear regulator problem consists of minimizing T 0 E [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt (3.1 (p.5. It is proved in Subsection 3.7.91) We successively consider the situation of no state noise. This idea is often referred to as the separation principle.92) with the estimated state x ˆ(t ).90) does not converge to a finite number as T → ∞. 121 .3. The state may be optimally estimated. and the integrated generalized square error (3.95) The controller minimizes the steady-state mean square error (3. t ∈ R.92) minimizes the rate at which (3.7 shows the arrangement of the closed-loop system. it minimizes 1 T →∞ T lim T 0 E [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt .3.94) under output feedback. From § 3. with the help of the Kalman filter. 105).93) This limit equals the steady-state mean square error indexsteady-state mean square error t →∞ lim E [zT (t ) Qz (t ) + uT (t ) Ru (t )]. If the white noise disturbance v is present then the state and input cannot be driven to 0. the state feedback law minimizes the steady-state mean square error.2. (3.3. (3. (3.2 (p. with the feedback gain F as in Summary 3.90) for the system  x ˙(t ) = Ax (t ) + Bu (t ) + Gv(t )  y (t ) = Cx (t ) + w(t )  z (t ) = Dx (t ) t ∈ R. and output feedback. the optimal controler is given by Output feedback. 139) we know that if the disturbance v is absent and the state x (t ) may be directly and accurately accessed for measurement. respectively. Figure 3.2.90) approaches ∞. LQG Theory 3. that is. 118). The feedback gain F and the observer gain K follow from Summaries 3. 105) and 3. state feedback.1 (p. No state noise. (3. It divorces state estimation and control input selection.6 (p.1 (p. then for T → ∞ the performance index is minimized by the state feedback law u (t ) = − Fx (t ). Then the solution of the stochastic linear regulator problem with output feedback (rather than state feedback) is to replace the state x (t ) in the state feedback law (3.94) Hence. Thus. We next consider the situation that the state cannot be accessed for measurement.3.

71) we thus have x ˙ (t ) A − BF = e ˙( t ) 0 −BF A − KC x (t ) Gv(t ) + . Inspection shows that these eigenvalues consist of the eigenvalues of A − BF (the regulator poles) together with the eigenvalues of A − KC (the observer poles). This is most easily recognized as follows.1. 2.8.3. The configuration of Fig. 3. of course.1 and 3.97) are the eigenvalues of the closed-loop system.: Equivalent unit feedback configuration 3.2.91) has order n then the compensator also has order n. Show (most easily by a counterexample) that the fact that the observer and the closed-loop system are stable does not mean that the compensator (3.91) with the compensator (3. − Ce u z P y Figure 3.4 (Closed-loop eigenvalues). Together with (3. with W0 a fixed symmetric positive-definite weighting matrix and σ a 122 .7. Asymptotic analysis and loop transfer recovery In this subsection we study the effect of decreasing the intensity W of the measurement noise. Suppose that W = σ W0 . Exercise 3.8.3. If the plant (3.96) The eigenvalues of this system are the eigenvalues of the closed-loop system. 1.95) is stable — under the assumptions of Summaries 3. Hence.3.6.5 (Compensator transfer function).97) (3.95) by itself is stable. LQG and H2 Control System Design u z − Plant y F Observer Figure 3. Show that the equivalent compensator Ce has the transfer matrix Ce (s ) = F (sI − A + B F + KC )−1 K .3.7 may be rearranged as in Fig. e (t ) − Gv(t ) + K w(t ) (3.: Observer based feedback control The closed-loop system that results from interconnecting the plant (3. 3. there are 2n closed-loop poles. Substitution of u (t ) = − F x ˆ (t ) into x ˙(t ) = Ax (t ) + Bu (t ) + Gv(t ) yields with the further substitution x ˆ(t ) = x (t ) + e (t ) x ˙(t ) = ( A − B F ) x (t ) − B Fe (t ) + Gv(t ). Exercise 3. Prove that the eigenvalues of (3. LQ.3.
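A short sketch (with assumed example data) of the observer-based compensator Ce(s) = F(sI − A + BF + KC)⁻¹K and of the fact that the 2n closed-loop poles are the regulator poles together with the observer poles:

% Sketch (assumed data): observer-based compensator and closed-loop poles.
A = [0 1; 0 0];   B = [0; 1];   C = [1 0];
F = lqr(A, B, C'*C, 1);                  % regulator gain (D = C, Q = 1, R = 1)
K = lqr(A', C', B*B', 1e-4)';            % observer gain (G = B, V = 1, W = 1e-4)
Ce = ss(A - B*F - K*C, K, F, 0);         % compensator (3.97)
P  = ss(A, B, C, 0);
disp(pole(feedback(P*Ce, 1)))            % 2n closed-loop poles ...
disp([eig(A - B*F); eig(A - K*C)])       % ... = regulator poles plus observer poles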

9. and its zeros all have negative real parts. Extensive treatments may be found in Anderson and Moore (1990) and Saberi. Breaking the loop at the plant input as in Fig. 127). Chen.7 (p. Make it plausible that the loop gain approaches L0 (s ) = C (sI − A )−1 K . 122. (3.: Breaking the loop loop gain are each provided with a subscript. G = B. § 5. As σ ↓ 0 the gain Kσ approaches ∞. that is.2. LQG Theory positive number. • The open-loop plant transfer matrix C (sI − A )−1 B is square. We use the method in the design examples of § 3.7. We let R = ρ R0 and consider the asymptotic behavior for ρ ↓ 0. Accordingly. 109) that the state feedback gain F goes to ∞ and the solution X of the Riccati equation approaches the zero matrix as the weight on the input decreases. At the same time the error covariance matrix Yσ approaches the zero matrix.3. the guaranteed gain and phase margins of Subsection § 3.3.6).100) 123 .6 (Dual loop transfer recovery). the controlled output is measured.7 (p.6 (p.3. Dual loop recovery provides an alternative approach to loop recovery. Dual loop recovery results when the loop is broken at the plant output y rather than at the input (Kwakernaak and Sivan 1972. (3. (3.2. Before doing this we need to introduce two assumptions: • The disturbance v is additive to the plant input u. The fact that Yσ ↓ 0 indicates that in the limit the observer reconstructs the state with complete accuracy. This allows the tightest control of the disturbances.99) The asymptotic loop gain L0 is precisely the loop gain for full state feedback.98) (Compare Exercise 3. that is. In this case it is necessary to assume that D = C . 149) that as σ ↓ 0 the loop gain Lσ approaches the expression L0 (s ) = F (sI − A )−1 B. p.5.) To emphasize the dependence on σ the observer gain and the u z − Plant y F ^ x Observer Figure 3. We investigate the asymptotic behavior of the closed-loop system as σ ↓ 0. It is proved in § 3. and Sannuti (1993). This is called loop transfer recovery (LTR). Exercise 3. 1. The term loop transfer recovery appears to have been coined by Doyle and Stein (1981). 112) are recouped.9 we obtain the loop gain Lσ (s ) = Ce (s ) P (s ) = F (sI − A + BF + Kσ C )−1 Kσ C (sI − A )−1 B.3. Again we need C (sI − A )−1 B to be square with left-half plane zeros only. This is the dual of the conclusion of Subsection 3.6 (p. 3.

t ∈ R. Prove that the H2 -norm of the stable system (3. the mean square output is EyT (t ) y (t ) = tr ∞ −∞ f ∈ R.1. (3.7 (p.101) 3. LQ. H2 optimization 3. Very often in the application of the LQG problem to control system design the noise intensities V and W play the role of design parameters rather than that they model reality.3.102) is T given by H 2 2 = tr CY C . consider the stable system x ˙(t ) = Ax (t ) + Bv(t ). The quantity H = ∞ 2 tr −∞ H ( j2π f ) H ∼ ( j2π f ) d f (3.1 (Lyapunov equation).4. The stochastic element is eliminated by recognizing that the performance index for the LQG problem may be represented as a system norm—the H2 -norm. this approach allows to remove the stochastic ingredient of the LQG formulation. 3.104) Here we introduce the notation H ∼ (s ) = H T (−s ). 124 . Suppose that the signal v is white noise with covariance function E v(t )vT (s ) = V δ(t − s ). (3. Show that the corresponding return difference J0 (s ) = I + L0 (s ) satisfies the return difference inequality T (− jω) ≥ W . As a result. Most importantly. J0 ( jω) W J0 ω ∈ R. 112). Then the output y of the system is a stationary stochastic process with spectral density matrix S y ( f ) = H ( j2π f ) V H T (− j2π f ).4.103) S y ( f ) d f = tr ∞ −∞ H ( j2π f ) V H ∼ ( j2π f ) d f. (3. LQG and H2 Control System Design 2. In many applications it is difficult to establish the precise stochastic properties of disturbances and noise signals. If the white noise v has intensity V = I then the mean square output EyT (t ) y (t ) equals precisely the square of the H2 -norm of the system. (3. Introduction In this section we define the LQG problem as a special case of a larger class of problems. Exercise 3. y (t ) = Cx (t ). Show that gain and phase margins apply that agree with those found in Subsection 3. where the matrix Y is the unique symmetric solution of the Lyapunov T equation AY + Y A + B BT = 0. To introduce this point of view. which lately has become known as H2 optimization. H2 refers to the space of square integrable functions on the imaginary axis whose inverse Fourier transform is zero for negative time.4.102) The system has the transfer matrix H (s ) = C (sI − A )−1 B.105) is called the H2 -norm of the system.2.
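The computation suggested in the exercise is easily carried out numerically. The sketch below uses assumed example data and compares the square root of tr CYCᵀ, with Y obtained from the Lyapunov equation, against the H2-norm returned by the Control Toolbox.

% Sketch (assumed data): H2-norm via the Lyapunov equation of the exercise.
A = [-1 1; 0 -2];   B = [1; 1];   C = [1 0];
Y = lyap(A, B*B');                       % solves A*Y + Y*A' + B*B' = 0
disp(sqrt(trace(C*Y*C')))                % sqrt(tr C Y C')
disp(norm(ss(A, B, C, 0), 2))            % built-in H2-norm, same value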

(3.4. Dx. For the open-loop system x ˙ z y = = = = = Ax + Bu + Gv. 3. Cx + w D (sI − A )−1 G v + D (sI − A )−1 B u .109) we have in terms of transfer matrices z (3. H11 (s ) A more compact notation is z H11 (s ) = H21 (s ) u H12 (s ) H22 (s ) v .4. H21 (s ) From z = P11 v + P12 u we obtain z = P11 − P12 ( I + Ce P22 )−1 Ce P21 v − P12 ( I + Ce P)−1 Ce w. P11 (s ) y −1 (3.113) H22 (s ) (3. H2 optimization 3. that is.110) (3. H2 optimization In this subsection we rewrite the time domain LQG problem into an equivalent frequency domain H2 optimization problem.106) This assumption causes no loss of generality because by scaling and transforming the variables z and u the performance index may always be brought into this form. so that u = −( I + Ce P22 )−1 Ce P21 v −( I + Ce P22 )−1 Ce w.111) P12 (s ) P22 (s ) u v z C (sI − A ) G v + C (sI − A )−1 B u + w. the H2 optimization problem is in terms of transfer matrices.107) (3. w (3. P21 (s ) Interconnecting the system as in Fig.: Feedback system with stochastic inputs and outputs u = −Ce y = −Ce ( P21 v + P22 u + w).2.3.10 with a compensator Ce we have the signal balance − Ce P + + y w Figure 3.10.114) H12 (s ) (3.112) H (s ) 125 . the LQG performance index is t →∞ lim E [zT (t )z (t ) + uT (t )u (t )]. To simplify the expressions to come we assume that Q = I and R = I . While the LQG problem requires state space realizations.108) (3.

122) 126 .4.11. Note that the sign reversion at the compensator input in Fig. u ) as output. In the latter diagram w is the external input (v and w in Fig. C1 x (t ) + D11 w(t ) + D12 u (t ). The standard H2 problem and its solution The standard H2 optimization problem is the problem of choosing the compensator K in the block diagram of Fig.11 such that it 1.10). 3. LQ.10 with (v. 3. Furthermore. stabilizes the closed-loop system.10 has been absorbed into the compensator Ce . =  0  x ( t ) + 0 0 w(t ) 0 I C 0 = (3. 3.115) (3.11. and y the observed output. Show that for the LQG problem the generalized plant in state space form may be represented as x ˙(t )   z (t ) u (t ) y (t ) v(t ) Ax (t ) + G 0 + Bu (t ). The signal z is the error signal.117) tr −∞ H 2 2. solving the LQG problem amounts to minimizing the H2 norm of the closed-loop system of Fig. H ( j2π f ) H ∼ ( j2π f ) d f Hence.4. The configuration of Fig. 3.3. w) as input and ( z.3. LQG and H2 Control System Design From this we find for the steady-state mean square error lim E zT (t )z (t ) + uT (t )u (t ) = = = lim E ∞ t →∞ t →∞ z (t ) u (t ) T z (t ) u (t ) (3.120) (3. and Ce the compensator.118) (3. C2 x (t ) + D21 w(t ) + D22 u (t ).10 is a special case of the configuration of Fig. We represent the generalized plant G of Fig. which ideally should be zero (z and u in Fig.2 (Generalized plant for the LQG problem).11 in state space form as x ˙(t ) z (t ) y (t ) = = = Ax (t ) + B1 w(t ) + B2 u (t ). (3. w(t )       0 0 0 D v( t ) +  I  u (t ). w u z G y Ce Figure 3. 3.121) (3. 3. minimizes the H2 -norm of the closed-loop system (with w as input and z as output).116) (3. The block G is the generalized plant. 3. and 2. 3. u is the control input.: The standard H2 problem Exercise 3.119) 3.10).

5.127) The observer and state feedback gain matrices are T T T D12 )−1 ( B2 X + D12 C1 ). Consider the standard H2 optimization problem for the generalized plant x ˙(t ) z (t ) y (t ) = = = Ax (t ) + B1 w(t ) + B2 u (t ).129) The condition that D12 has full column rank means that there is “direct feedthrough” from the input u to the error signal z. which are listed in the summary that follows. y (t ) = C2 x (t ) is stabilizable and detectable. + D12 u (t ).5 (p.123) (3. Under these assumptions the optimal output feedback controller is ˙ x ˆ(t ) u (t ) = = Ax ˆ(t ) + B2 u (t ) + K [ y (t ) − C2 x ˆ (t ) − D22 u (t )] −Fx ˆ(t ). C1 x (t ) C2 x (t ) + D21 w(t ) + D22 u (t ). and D12 has full column rank. Introduction In this section we review how LQG and H2 optimization may be used to design SISO and MIMO linear beedback systems. the condition that D21 has full row rank means that the noise w is directly fed through to the observed output y. They are natural assumptions for LQG problems.5. The derivation necessitates the introduction of some assumptions.6 (p. (3.1. In §§ 3.124) (3. has full column rank for every s = jω. 133) we discuss the application of H2 optimization to control system design. (3. 127) and 3.4. Feedback system design by H2 optimization 3. 149). (3.3 (Solution of the H2 problem).126) (3. This is done in § 3. and Chen (1995).128) The symmetric matrices X and Y are the unique positive-definite solutions of the algebraic Riccati equations T T T T T C1 − ( X B2 + C1 D12 )( D12 D12 )−1 ( B2 X + D12 C1 ) AT X + X A + C1 T T T T −1 T − (Y C2 + B1 D21 )( D21 D21 ) (C2 Y + D21 B1 ) AY + AY T + B1 B1 = 0.8 (p. 127 . Sannuti. Feedback system design by H2 optimization The H2 problem may be solved by reducing it to an LQG problem. = 0. The H2 optimization problem and its solution are discussed at length in Saberi. • The matrix • The matrix A−s I B1 C2 D21 A−s I B2 C1 D12 has full row rank for every s = jω.125) Assumptions: • The system x ˙ (t ) = Ax (t ) + B2 u (t ). Summary 3. Likewise. (3. 3.7. and D21 has full row rank.5.3. F = ( D12 T T T −1 K = (Y C2 + B1 D21 )( D21 D21 ) .
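For completeness, the standard H2 problem can also be solved numerically with the routine h2syn of the Robust Control Toolbox. The sketch below is an assumed example: it encodes an LQG problem for a double integrator (G = B, Q = 1, R = ρ, V = 1, W = σ) as a generalized plant of the form above and lets h2syn compute the optimal controller.

% Sketch (assumed example): LQG for a double integrator posed as a standard
% H2 problem and solved with h2syn (Robust Control Toolbox).
A = [0 1; 0 0];   B = [0; 1];   C = [1 0];   G = B;
rho = 1;   sigma = 1e-4;
B1 = [G, zeros(2,1)];   B2 = B;                 % w = [v; measurement noise]
C1 = [C; zeros(1,2)];   D12 = [0; sqrt(rho)];   % z = [C*x; sqrt(rho)*u]
C2 = C;   D21 = [0, sqrt(sigma)];   D11 = zeros(2,2);   D22 = 0;
Pgen = ss(A, [B1, B2], [C1; C2], [D11, D12; D21, D22]);
[K, CL, gam] = h2syn(Pgen, 1, 1);               % 1 measured output, 1 control input
disp(gam)                                       % minimal H2-norm of the closed loop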

the design parameters for the observer part are determined. LQ. First the parameters D.130) for the system  x ˙(t ) = Ax (t ) + Bu (t ) + Gv(t )  z (t ) = Dx (t )  y (t ) = Cx (t ) + w(t ) t ∈ R. In the MIMO case one may let R = ρ R0 . It is adjusted by trial and error until the desired bandwidth is achieved (see also § 3. as described in § 3. or only has right-half plane zeros whose magnitudes are sufficiently much greater than the desired bandwidth.6 (p. c) In the SISO case R is a scalar design parameter. The case where z is not y is called inferential control. where the fixed matrix R0 is selected according to the rules of § 3. 2. We discuss some rules of thumb for selecting the design parameters.3 (p. 106). b) In the SISO case we let V = 1. This amounts to choosing each diagonal entry of V proportional to the inverse of the square root of the amplitude of the largest disturbance that may occur at the corresponding input channel. a) To take advantage of loop transfer recovery we need to take G = B. In the MIMO case we may select V to be diagonal by the “dual” of the rules of § 3.3 (p. p.3. 122). Q and R for the regulator part are selected. The LQG problem consists of the minimization of t →∞ lim E [zT (t ) Qz (t ) + uT (t ) Ru (t )] (3. Often the controlled output z is also the measured output y.2.2.2.2. 106). There may be compelling engineering reasons for selecting z different from y.2. 1. Next. In the MIMO case Q is best chosen to be diagonal according to the rules of § 3.3 (p. LTR is only effective if the open-loop transfer function P (s ) = C (sI − A )−1 B has no right-half plane zeros.6. They are chosen to achieve loop transfer recovery. These quantities determine the dominant characteristics of the closed-loop system.5. Finally there usually is some freedom in the selection of the control output z. 106) and ρ is selected to achieve the desired bandwidth. LQG and H2 Control System Design 3. Parameter selection for the LQG problem We discuss how to select the design parameters for the LQG problem without a cross term in the performance index and without cross correlation between the noises. a) D determines the controlled output z. In the absence of specific information about the nature of the disturbances also the noise input matrix G may be viewed as a design parameter.3. They are based on the assumption that we operate in the asymptotic domain where the weighting matrices R (the weight on the input) and W (the measurement noise intensity) are small. this means that also the matrix D may be considered a design parameter.131) Important design parameters are the weighting matrices Q and R and the intensities V and W . 109)). b) In the SISO case Q may be chosen equal to 1. (3. 128 .

Asymptotically the finite observer poles are the zeros of det C (sI − A )−1 B. −1 (3. The scalar σ is chosen small enough to achieve LTR. (3. 3. The magnitude of the dominant observer poles should be perhaps a decade larger than the magnitude of the dominant regulator poles.132) (3. − T v − U w.133) The corresponding block diagram is represented in Fig. Feedback system design by H2 optimization c) In the SISO case W is a scalar parameter that is chosen small enough to achieve loop transfer recovery. These closed-loop poles correspond to canceling pole-zero pairs between the plant and the compensator.3. If P is invertible then by block v u R z + − Ce + P + + y + w Figure 3. In the MIMO case we let W = σ W0 .137) (3.5.: Block diagram for loop recovery design diagram substitution Fig. W0 is chosen diagonally with each diagonal entry proportional to the inverse of the square root of the largest expected measurement error at the corresponding output channel. 3. T = ( I + Ce P ) Ce P. 3. where W0 = R P−1 . 3.12. The far-away observer poles determine the bandwidth of the compensator and the high-frequency roll-off frequency for the complementary sensitivity.13.12 may be redrawn as in Fig.111) may be rewritten as z y where P (s ) = C (sI − A )−1 B. By setting up two appropriate signal balances it is easy to find that z u Here S = ( I + PCe )−1 . In the case of non-inferential control W0 = I .138) 129 .135) (3. (3.136) (3.3. H2 Interpretation In this subsection we discuss the interpretation of the LQG problem as an H2 optimization problem.13. We consider the frequency domain interpretation of the arrangement of Fig. If we choose G = B to take advantage of loop transfer recovery then the open-loop equations (3. P (s )(u + v) + w. U = Ce ( I + PCe ) . 3. T = ( I + PCe )−1 PCe .12. R (s ) = D (sI − A )−1 B.134) = = R (s )(u + v).5.110–3. −1 = = W0 S Pv − W0 Tw.

13.142).: Equivalent block diagram S is the sensitivity matrix. It is not difficult to establish that z1 z2 = = W1 S PV1 v − W1 T V2 w. the complementary sensitivities T and T . T is the complementary sensitivity function if the loop is broken at the plant input rather than at the plant output. In the SISO case T = T . and the input sensitivity U .4. For more freedom we generalize the block diagram of Fig. ∞ (3. − W2 T V1 v − W2 UV2 w. For simplicity we only consider the SISO case.1 (Transfer matrices). From (3.5. 3. 3.142) In the next subsections some applications of this generalized problem are discussed.14. which requires making S (0 ) = 0. The importance given to each of the system functions depends on W0 and P.139) arise from the LQG problem and are not very flexible. Integral control aims at suppressing constant disturbances. (3. the performance index now takes the form H 2 2 = tr −∞ ∼ ∼ ∼ ∼ ∼ ∼ ∼ ( W1 S PV1 V1 P S W1 + W1 T V2 V2 T W1 ∼ ∼ ∼ ∼ + W2 T V1 V1∼ T ∼ W2 + W2 UV2 V2 U W2 ) d f. and U the input sensitivity matrix.135–3. (3. Design for integral control There are various ways to obtain integrating action in the LQG framework. T the complementary sensitivity matrix.139) The argument j2π f is omitted from each term in the integrand.135–3.136) we find for the performance index H 2 2 = tr ∞ −∞ ∼ ∼ ( W0 S P P∼ S∼ W0 + W0 T T ∼ W0 + T T ∼ + UU ∼ ) d f. For the MIMO case the idea may be carried through similarly by introducing integrating action in each input channel as for the SISO case. The weighting functions in (3. LQG and H2 Control System Design v u + − Ce + P + W o z + + y w Figure 3. V1 and V2 are shaping filters and W1 and W2 weighting filters that may be used to modify the design. LQ. which act as frequency dependent weighting functions.13 to that of Fig. We discuss a solution that follows logically from the frequency domain interpretation (3.138). 3. Exercise 3. If the system has no natural integrating action then integrating action needs to be introduced in 130 . Inspection reveals that the performance index involves a trade-off of the sensitivity S .5. Derive (3.3.140) (3.141) As a result.



the compensator.

Figure 3.14.: Generalized configuration

Inspection of (3.142) shows that taking V1(0) = ∞ forces S(0) to be zero; otherwise the integral cannot be finite. Hence, we take V1 as a rational function with a pole at 0. In particular, we let

V1(s) = (s + α)/s.    (3.143)

V1 may be seen as a filter that shapes the frequency distribution of the disturbance. The positive constant α models the width of the band over which the low-frequency disturbance extends. Further inspection of (3.142) reveals that the function V1 also enters the third term of the integrand. In the SISO case the factor T (the complementary sensitivity with the loop broken at the plant input) in this term reduces to T. If S(0) = 0 then by complementarity T(0) = 1. This means that this third term is infinite at frequency 0, unless W2 has a factor s that cancels the corresponding factor s in the numerator of V1. This has a clear interpretation: if the closed-loop system is to suppress constant disturbances then we need to allow constant inputs; hence we need W2(0) = 0. More in particular we could take

W2(s) = [s/(s + α)] W2o(s),    (3.144)

where W2o remains to be chosen but usually is taken constant. This choice of W2 reduces the weight on the input over the frequency band where the disturbances are large. This allows the gain to be large in this frequency band. A practical disadvantage of choosing V1 as in (3.143) is that it makes the open-loop system unstabilizable, because of the integrator outside the loop. This violates one of the assumptions of § 3.4.3 (p. 126) required for the solution of the H2 problem. The difficulty may be circumvented by a suitable partitioning of the state space and the algebraic Riccati equations (Kwakernaak and Sivan 1972). We prefer to eliminate the problem by the block diagram substitutions (a) → (b) → (c) of Fig. 3.15. The end result is that an extra factor (s + α)/s is included in both the plant transfer function and the weighting function for the input. The extra factor in the weighting function on the input cancels against the factor s/(s + α) that we include according to (3.144); W2o remains. Additionally an extra factor s/(s + α) is included in the compensator. If the modified problem leads to an optimal compensator C0 then the optimal compensator for the original problem is

C(s) = [(s + α)/s] C0(s).    (3.145)



Figure 3.15.: Block diagram substitutions for integral control (a)→(b)→(c)

This compensator explicitly includes integrating action. Note that by the block diagram substitutions this method of obtaining integral control comes down to including an integrator in the plant. After doing the design for the modified plant the extra factor is moved over to the compensator. This way of ensuring integrating action is called the integrator in the loop method. We apply it in the example of § 3.6.3 (p. 135). The method is explained in greater generality in § 6.7 (p. 279) in the context of H∞ optimization.
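A minimal sketch of the substitution, with an assumed first-order example plant and a placeholder compensator C0 standing in for the optimal H2 compensator of the modified problem; the point is only the bookkeeping of the factor (s + α)/s.

% Sketch of the bookkeeping only; P, alpha and C0 are assumed placeholders.
alpha = 1;
P  = tf(1, [1 1]);                      % example plant (not from the text)
Po = tf([1 alpha], [1 0]) * P;          % modified plant with the factor (s+alpha)/s
C0 = 5;                                 % stand-in for the optimal compensator of the
C  = tf([1 alpha], [1 0]) * C0;         %   modified problem; C follows (3.145)
S  = 1/(1 + P*C);                       % the loop now has integrating action,
bodemag(S)                              %   so S(0) = 0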

3.5.5. High-frequency roll-off
Solving the LQG problems leads to compensators with a strictly proper transfer matrix. This means that the high-frequency roll-off of the compensator and of the input sensitivity is 1 decade/decade (20 dB/decade). Correspondingly the high-frequency roll-off of the complementary sensitivity is at least 1 decade/decade.
Exercise 3.5.2 (High-frequency roll-off). Prove that LQG optimal compensators are strictly proper. Check for the SISO case what the resulting roll-off is for the input sensitivity function and the complementary sensitivity function, dependent on the high-frequency roll-off of the plant.

For some applications it may be desirable to have a steeper high-frequency roll-off. Inspection of (3.142) shows that extra roll-off may be imposed by letting the weighting function W2 increase with frequency. Consider the SISO case and suppose that V2 (s ) = 1. Let W2 (s ) = ρ(1 + rs ), (3.146)

with r a positive constant. Then by inspecting the fourth term in the integrand of (3.142) we conclude that the integral can only converge if at high frequencies the input sensitivity U , and, hence, also the compensator transfer function Ce , rolls off at at least 2 decades/decade.



The difficulty that there is no state space realization for the block W2 with the transfer function (3.146) may be avoided by the block diagram substitution of Fig. 3.16. If the modified problem


is solved by the compensator C0 then the optimal compensator for the original problem is

Ce(s) = C0(s)/W2(s) = C0(s)/[ρ(1 + rs)].    (3.147)

Figure 3.16.: Block diagram substitution for high-frequency roll-off

The extra roll-off is apparent. Even more roll-off may be obtained by letting W2 (s ) = O (sm ) as |s| → ∞, with m ≥ 2. For a more general exposition of the block diagram substitution method see § 6.7 (p. 279).

3.6. Examples and applications
3.6.1. Introduction
In this section we present two design applications of H2 theory: A simple SISO system and a not very complicated MIMO system.

3.6.2. LQG design of a double integrator plant
We consider the double integrator plant

P(s) = 1/s².    (3.148)


The design target is a closed-loop system with a bandwidth of 1 rad/s. Because the plant has a natural double integration there is no need to design for integral control and we expect excellent low-frequency characteristics. The plant has no right-half plane zeros that impair the achievable performance. Neither do the poles. In state space form the system may be represented as

ẋ = [ 0 1 ; 0 0 ] x + [ 0 ; 1 ] u,    y = [ 1 0 ] x.    (3.149)

Completing the system equations we have

ẋ = Ax + Bu + Gv,    (3.150)
y = Cx + w,    (3.151)
z = Dx.    (3.152)

We choose the controlled variable z equal to the measured variable y, so that D = C. To profit from loop transfer recovery we let G = B. In the SISO case we may choose Q = V = 1 without loss of generality. Finally we write R = ρ and W = σ, with the constants ρ and σ to be determined.

We first consider the regulator design. In the notation of § 3.2.6 (p. 109) we have k = 1, ψ(s) = 1 and χol(s) = s². It follows from (3.34) that the closed-loop characteristic polynomial χcl for state feedback satisfies

χcl(−s)χcl(s) = χol(−s)χol(s) + (k²/ρ) ψ(−s)ψ(s) = s⁴ + 1/ρ.    (3.153)

The roots of the polynomial on the right-hand side are ½√2 (±1 ± j)/ρ^{1/4}. To determine the closed-loop poles we select those two roots that have negative real parts. They are given by

½√2 (−1 ± j)/ρ^{1/4}.    (3.154)

Figure 3.17 shows the loci of the closed-loop poles as ρ varies.
Figure 3.17.: Loci of the closed-loop poles

As the magnitude of the closed-loop pole pair is 1/ρ^{1/4}, the desired bandwidth of 1 rad/s is achieved for ρ = 1.

We next turn to the observer design. By Exercise 3.3.2(c) (p. 119) the observer characteristic polynomial χf satisfies

χf(−s)χf(s) = χol(s)χol(−s)[1 + (1/σ) M(s)M(−s)] = s⁴ + 1/σ,    (3.155)

where M (s ) = C (sI − A )−1 G = 1/s2 and χol (s ) = s2 . This expression is completely similar to that for the regulator characteristic polynomial, and we conclude that the observer characteristic values are
½√2 (−1 ± j)/σ^{1/4}.    (3.156)

By the rule of thumb of § 3.5.2 (p. 128) we choose the magnitude 1/σ^{1/4} of the observer pole pair 10 times greater than the bandwidth, that is, 10 rad/s. It follows that σ = 0.0001. By numerical computation⁸ it is found that the optimal compensator transfer function is

Ce(s) = 155.6 (s + 0.6428) / (s² + 15.56 s + 121.0).    (3.157)

This is a lead compensator.
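The design is easily reproduced numerically. The sketch below follows the parameter choices of this example (Q = V = 1, R = ρ = 1, W = σ = 0.0001, D = C, G = B) and, up to rounding, returns the compensator (3.157).

% Sketch reproducing the double integrator design with the Control Toolbox.
A = [0 1; 0 0];   B = [0; 1];   C = [1 0];     % plant 1/s^2, D = C, G = B
rho = 1;   sigma = 0.0001;
F = lqr(A, B, C'*C, rho);                % regulator gain
K = lqr(A', C', B*B', sigma)';           % observer (Kalman) gain
Ce = ss(A - B*F - K*C, K, F, 0);         % compensator (3.97)
zpk(Ce)                                  % about 155.6 (s+0.643)/(s^2 + 15.56 s + 121)
L = tf(1, [1 0 0]) * Ce;                 % loop gain P*Ce
bodemag(1/(1 + L), 1 - 1/(1 + L))        % sensitivity and complementary sensitivity
step(feedback(L, 1))                     % closed-loop step response, cf. Fig. 3.19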

Figure 3.18.: Sensitivity function and complementary sensitivity function for the H2 design

Figure 3.18 shows the magnitude plots of the sensitivity and complementary sensitivity functions for σ = .01, σ = .0001 and σ = .000001. The smaller σ is the better loop transfer is recovered. Since the high-frequency roll-off of the complementary sensitivity of 40 dB/decade sets in at the angular frequency 1/σ^{1/4} it is advantageous not to choose σ too small. Taking σ large, though, results in extra peaking at crossover. The value σ = .0001 seems a reasonable compromise. Figure 3.19 gives the closed-loop step response. The value σ = .000001 gives the best response but that for σ = .0001 is very close.

⁸ This may be done very conveniently with MATLAB using the Control Toolbox (Control Toolbox 1990).

3.6.3. A MIMO system

As a second example we consider the two-input two-output plant with transfer matrix (Kwakernaak 1986)

P(s) = [ 1/s²   1/(s+2)
         0      1/(s+2) ].    (3.158)



Figure 3.19.: Closed-loop step response for the H2 design

Figure 3.20 shows the block diagram. The plant is triangularly coupled. It is easy to see that it may be represented in state space form as

ẋ = [ 0 1 0 ; 0 0 0 ; 0 0 −2 ] x + [ 0 0 ; 1 0 ; 0 1 ] u,    y = [ 1 0 1 ; 0 0 1 ] x.    (3.159)

The first two components of the state represent the block 1/s² in Fig. 3.20, and the third the block 1/(s + 2).

Figure 3.20.: MIMO system

The plant has no right-half plane poles or zeros, so that there are no fundamental limitations to its performance.
Exercise 3.6.1 (Poles and zeros). What are the open-loop poles and zeros? Hint: For the definition of the zeros see § 3.2.6 (p. 109).

We aim at a closed-loop bandwidth of 1 rad/s on both channels, with good low- and high-frequency characteristics. We complete the system description to

ẋ = Ax + Bu + Gv,    (3.160)
y = Cx + w,    (3.161)
z = Dx.    (3.162)

To take advantage of loop transfer recovery we let G = B. As the controlled output z is available for feedback we have D = C . Assuming that the inputs and outputs are properly scaled we choose Q = I, R = ρ I, V = I, W = σ I, (3.163)


First we consider the design of the regulator 2 1 1/ρ 1/ρ Im 0 -1 -2 -4 1/ρ -3 -2 Re -1 0 Figure 3. 3. Again following the rule that the dominant observer poles have magnitude 10 times that of the dominant regulator poles we let σ = 5 × 10−5 .5.: Loci of the regulator poles part.23.22 shows the magnitudes of the four entries of the resulting 2 × 2 sensitivity matrix S . After completion of the design the extra block is absorbed into the compensator. which determines the dominant characteristics. as indicated in Fig.2 (Identical loci). Check that this is a consequence of choosing Q = V = I . Application of the integrator-in-the-loop method of § 3. This results in the optimal observer poles −200. R = ρ I . Representing the extra block by the state space realization 137 . For ρ = 0.6. 130) amounts to including an extra block s+α s (3.21.21. As ρ decreases the closed-loop poles move away from the double open-loop pole at 0 and the open-loop pole at −2. We improve the performance of the second channel by introducing integrating action. thanks to the double integrator in the corresponding input channel.8 the closed-loop poles are −2.7162 ± j . By numerical solution of the appropriate algebraic Riccati equation for a range of values of ρ we obtain the loci of closed-loop poles of Fig.6. however. D = C and G = B. which is the correct value for a closed-loop bandwidth of 1 rad/s. Using standard software the H2 solution may now be found.0038.067. Like in the double integrator example.5424 and −. 3.4 (p.076 ± j 7.01 and −7. W = σ I . The latter pole pair has magnitude 10.164) in the second channel.7034. Figure 3.3. Next we consider the loci of the optimal observer poles as a function of σ . The attenuation of disturbances that affect the second output (represented by S12 and S22 ) is disappointing. Examples and applications with the positive scalars ρ and σ to be determined. Exercise 3. they are identical to those of the regulator poles. The reason is that the low-frequency disturbances generated by the double integrator completely dominate the disturbances generated in in the other channel. The latter pole pair is dominant with magnitude 1. The attenuation of disturbances that enter the system at the first output correponds to the entries S11 and S21 and is quite adequate.

Figure 3.22.: Magnitudes of the entries of the sensitivity matrix S

Figure 3.23.: Expanded plant for integrating action in the second channel

which has magnitude 1.: Loci of the regulator poles of the extended system Three of the loci move out to ∞ from the open-loop poles 0. The pole pair −0. 1 1 y= 1 0 0 0 C 1 0 x. Appendix: Proofs x ˙4 = u2 . The latter pole turns out to be nondominant.0998. Figure 3.6079. Appendix: Proofs In this appendix we provide sketches of several of the proofs for this chapter. 2 1 1/ρ Im 0 -1 -2 -4 1/ρ -1 1/ρ -3 -2 Re 0 Figure 3. This means that the feedback compensator to an extent achieves decoupling. Infinite gain at all frequencies with unit feedback would make the closed-loop transfer matrix equal to the unit matrix. x2 . hence. determines the bandwidth. 1 0 (3.3. G = B. The latter pole is nondominant and the pole pair −7.24 shows the loci of the regulator poles with ρ as parameter. The fourth moves out from the open-loop pole at 0 to the open-loop zero at −α = −1. and. 3. Again we let D = C .067 and −1.076 ± j 7. −0. Q = V = I .24. x3 .7200. j = 1. 3. For ρ = 0. 2 of the closed-loop response to unit steps on the two inputs.7394. The loci of the optimal observer poles are again identical to those for the regulator poles.8141 ± j 0. This is a consequence of the high feedback gain at low frequencies. Note that the off-diagonal entries of S and T are small (though less so in the crossover region). completely decouple the system.26. The decoupling effect is also visible in Fig. Figure 3. R = ρ I and W = σ I .25 shows the magnitudes of the four entries of the sensitivity and complementary sensitivity matrices S and T that follow for ρ = .7394 and −0.5 the regulator poles are −2. i. The results are now much more acceptable.7.01.067 has magnitude 10. and x has the components x1 .7. with ρ and σ to be determined. For σ = 5 × 10−5 the observer poles are −200.076 ± j 7.5 and σ = 5 × 10−5 .165) The input u now has the components u1 and u2 . and −2.8141 ± j 0. u2 = α x4 + u2 we obtain the modified plant  0 0 x ˙= 0 0 1 0 0 0 0 −2 0 0 A   0 0 1 0 x+ 0 α 0 0 B  0 0  u. We choose α = 1 so that the low-frequency disturbance at the second input channel extends up to the desired bandwidth. −7. and x4 . which shows the entries si j. 139 . 0.

Figure 3.25.: Magnitudes of the entries of the sensitivity matrix S and the complementary sensitivity matrix T

Figure 3.26.: Entries of the closed-loop unit step response matrix

minimizes the performance criterion t1 0 [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt + xT (t1 ) X1 x (t1 ) = 0 t1 (3. X1 is a nonnegative-definite symmetric matrix. It follows that ∞ 0 [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt + xT (t1 ) X1 x (t1 ) = xT (0 ) X (0 ) x (0 ). 0 = AT X (3. Define the time-dependent matrix X (t ) = t t1 T (s.172) ˙ (t ) = AT (t ) X (t ) + X (t ) A (t ) + Q (t ). 0 ≤ t ≤ t1 . t ) Q (s ) (s. (3.168) for solutions of the time-varying state differential equation x ˙ (t ) = A (t ) x (t ). We consider how to determine the time-dependent gain F (t ) such that the state feedback u (t ) = − F (t ) x (t ). t ) X1 (t1 . z (t ) = Dx (t ). Substitution of x (s ) = (s.174) xT (t ) DT Q D + F T (t ) RF (t ) x (t ) dt + xT (t1 ) X1 x (t1 ) for the system x ˙(t ) = Ax (t ) + Bu (t ) = [ A − BF (t )] x (t ).166) for the stabilizable and detectable system x ˙(t ) = Ax (t ) + Bu (t ). −X X (t1 ) = X1 . 0 ) x (0 ). 0 ) X1 (t1 . yields t1 0 xT (s ) Q (s ) x (s ) ds + xT (t1 ) X1 x (t1 ) = xT ( 0 ) 0 t1 (3.173) Solution of the regulator problem.169) T (s.1. Outline of the solution of the regulator problem We consider the problem of minimizing ∞ 0 [zT (t ) Qz (t ) + uT (t ) Ru (t )] dt (3. This equation is a Lyapunov matrix differential equation.170) Then t1 0 xT (s ) Q (s ) x (s ) ds + xT (t1 ) X1 x (t1 ) = xT (0 ) X (0 ) x (0 ).7. t ) = (s. If A is constant and stable and R is constant then ¯ that is independent of X1 as t1 → ∞ the matrix X (t ) approaches a constant nonnegative-definite matrix X and is the unique solution of the algebraic Lyapunov equation ¯+X ¯ A + Q. t ) ds + T (t1 . LQG and H2 Control System Design 3. LQ. 0 ) ds + T (t1 . 0 ) Q (s ) (s. (3.171) Differentiation of X (t ) with respect to t using differential equation and terminal condition ( s.175) 142 . ∂ ∂t (3. We first study how to compute the quantity t1 0 xT (s ) Q (s ) x (s ) dt + xT (t1 ) X1 x (t1 ) (3. t ) A (t ) shows that X (t ) satisfies the matrix (3. t ).167) Lyapunov equation.3. Q (t ) is a time-dependent nonnegative-definite symmetric matrix and X1 is a nonnegative-definite symmetric matrix. 0 ) x (0 ). (3. with the state transition matrix of the system.

177) (3. By detectability this implies that x (t ) → 0. Then R + G∼ (s ) QG (s ) = J ∼ (s ) L J (s ).181) (3. also X (t ) has a well-defined limit as t1 → ∞.183) (3.177) reduces to the matrix differential Riccati equation ˙ (t ) = AT X (t ) + X (t ) A + DT Q D − X (t ) B R−1 BT X (t ). J (s ) = I + F (sI − A )−1 B.3.179) (3.7.2.1 (Kalman-Jakuboviˇ c-Popov equality).184) with F = L−1 ( BT X + DT QC ). The zeros of the numerator of det J are the eigenvalues of the matrix A − BF .178) If the system is stabilizable then the minimum of the performance criterion on the left-hand side of (3. (3. Because of the time ¯ is independent of t. (3.5 (p.176) Tracing the solution X (t ) backwards from t1 to 0 we see that at each time t the least increase of X (t ) results if the gain F (t ) is selected as F (t ) = − R−1 BT X (t ). −X X (t1 ) = X1 . F = − R −1 B T X (3. −X X (t1 ) = X1 . Consider the linear time-invariant system x ˙ (t ) = Ax (t ) + Bu (t ). Kalman-Jakubovic-Popov equality The Kalman-Jakuboviˇ c-Popov equality is a generalization of the return difference equality that we use in § 3. Completion of the square on the right-hand side (with respect to F (t ) results in ˙ (t ) −X = [ F ( t ) − R − 1 B T X ( t ) ]T R [ F ( t ) − R − 1 B T X ( t ) ] + AT X (t ) + X (t ) A + DT Q D − X (t ) B R−1 BT X (t ). Because it is bounded from below (by 0) the criterion has a well-defined limit as t1 → ∞. (3. otherwise there exist nonzero initial states such that z (t ) = Dx (t ) = 0 for t ≥ 0. X (t1 ) = X1 . If the system is observable ¯ is positive-definite. The limit is obviously nonnegative-definite. Correspondingly. Summary 3.174) is a nonincreasing function of the terminal time t1 . The KJP equality establishes the connection between factorizations and algebraic Riccati equations.180) The left-hand side of (3.174) can only converge to a finite limit if z (t ) = Dx (t ) → 0 as t → ∞. y (t ) = Cx (t ) + Du (t ). then X ˇ 3. Hence.182) (3. that is. £ 143 . The constant symmetric matrix L and the rational matrix function J are given by L = R + DT Q D .2. independence of the system this limit X and satisfies the algebraic Riccati equation ¯ B R −1 B T X ¯+X ¯ A + DT Q D − X ¯. and let Q and R be given symmetric constant matrices. Appendix: Proofs where X (t ) is the solution of ˙ (t ) = [ A − BF (t )]T X (t ) + X (t )[ A − BF (t )] + DT Q D + F T (t ) RF (t ). 108). with transfer matrix G (s ) = C (sI − A )−1 B + D. the closed-loop system is stable.7. 0 = AT X Correspondingly the optimal feedback gain is time-invariant and equal to ¯. Suppose that the algebraic matrix Riccati equation 0 = AT X + X A + C T QC − ( X B + C T Q D )( DT Q D + R )−1 ( BT X + DT QC ) has a symmetric solution X .7. (3.

(3. and u (t ) = − Fx (t ) is the corresponding optimal state feedback law. The KJP equality is best known for the case D = 0 (see for instance Kwakernaak and Sivan (1972)). y (t ) = Cx (t ) + Du (t ). (3. LQG and H2 Control System Design We use the notation G∼ (s ) = GT (−s ).193) We prove that if the loop gain is perturbed to W L.190) (3.191) −1 B = G (s ) − D and F (sI − A ) B = J (s ) − I (3.7.188) (3. From the relation F = L−1 ( BT X + DT QC ) we have BT X + DT QC = LF . (3.192) 3. It follows from the return difference inequality that L∼ + L + L∼ L ≥ 0 on the imaginary axis. First consider the case that R = I .195) 144 . substitution of C (sI − A ) and simplification lead to the desired result R + G∼ (s ) QG (s ) = J ∼ (s ) L J (s ).194) The proof follows Anderson and Moore (1990). −1 (3. The proof starts from the algebraic Riccati equation 0 = AT X + X A + C T QC − ( X B + C T Q D ) L−1 ( BT X + DT QC ). The KJP equality arises in the study of the regulator problem for the system x ˙ (t ) = Ax (t ) + Bu (t ). then the closed-loop system remains stable as long as RW + W ∼ R > R on the imaginary axis.3. Robustness under state feedback We consider an open-loop stable system with loop gain matrix L that satisfies the return difference inequality ( I + L )∼ R ( I + L ) ≥ R on the imaginary axis. or L −1 + ( L −1 )∼ + I ≥ 0 on the imaginary axis.189) (3.3.187) (3. Kalman-Jakuboviˇ c-Popov equality.182) is the algebraic Riccati equation associated with this problem. (3. It then reduces to the return difference equality J ∼ (s ) R J (s ) = R + G∼ (s ) QG (s ). Premultiplication by BT (−sI − AT )−1 and postmultiplication by (sI − A )−1 B results in 0 = − BT X (sI − A )−1 B − BT (−sI − AT )−1 X B + BT (−sI − AT )−1 (C T QC − F T LF )(sI − A )−1 B.185) The equation (3. so that the Riccati equation may be written as 0 = AT X + X A + C T QC − F T LF. with W stable rational. Substituting BT X = L F − DT QC we find 0 = ( DT QC − L F )(sI − A )−1 B + BT (−sI − AT )−1 (C T Q D − F T L ) + BT (−sI − AT )−1 (C T QC − F T LF )(sI − A )−1 B.186) with L = R + DT Q D. LQ. (3. with the criterion ∞ 0 [ yT (t ) Qy (t ) + uT (t ) Ru (t )] dt . This in turn we rewrite as 0 = −(−sI − AT ) X − X (sI − A ) + C T QC − F T LF. Expansion of this expression.

105) — then the eigenvalues of H consist of these n eigenvalues of A − BF and their negatives. if W + W ∼ > I on the imaginary axis then the perturbed system is stable.196) is nonsingular on the imaginary axis for all 0 ≤ ε ≤ 1.203) Hence.7.3 (Riccati equation and the Hamiltonian matrix).7. the perturbed system is stable if for 0 ≤ ε ≤ 1 no zeros of I + [(1 − ε) I + ε W ] L cross the imaginary axis. 1.3.201) Summary 3. £ 3.202) 3. 2.198) Inspection shows that if W + W∼ > I (3. − ( A − B R −1 S T )T (3. the perturbed system is stable if only if L−1 + (1 − ε) I + ε W = Mε (3.7. Exercise 3. which means that Mε is on the imaginary axis then Mε + Mε nonsingular on the imaginary axis. Equivalently. Then H I I = ( A − BF ). If λ is an eigenvalue of A − BF corresponding to the eigenvector x then λ is also an eigenvalue of H . Work this out to prove that the closed-loop system remains stable under perturbation satisfying RW + W ∼ R > R on the imaginary axis.1 (p. If λ is an eigenvalue of H then also −λ is an eigenvalue of H .195) yields ∼ Mε + Mε ≥ 2(1 − ε) I + ε( W + W ∼ ) = (2 − ε) I + ε( W + W ∼ − I ). Appendix: Proofs The perturbed system is stable if I + W L has no zeros in the right-half complex plane. if the n × n matrix A − BF has n eigenvalues with negative real parts — such as in the solution of the LQ problem of Summary 3.2 (The case R = I ). Riccati equations and the Hamiltonian matrix We consider the algebraic Riccati equation A T X + X A + Q − ( X B + S ) R −1 ( B T X + S T ) = 0 and the associated Hamiltonian matrix (3.2. 145 . X (3.199) ∼ > 0 on the imaginary axis for all 0 ≤ ε ≤ 1. Show that the case that R is not necessarily the unit matrix may be reduced to the previous case by factoring R — for instance by Choleski factorization — as R = RT 0 R0 . Hence. (3. define F = R−1 ( BT X + ST ). corresponding to the eigenvector I x.197) (3. Hence. X X (3.7.4. Given a solution X of the Riccati equation.200) H = A − B R −1 S T − Q + S R −1 S T − B R −1 B T . Substitution of L−1 = Mε − (1 − ε) I − ε W into (3.

207) with all blocks square.3.2. 1.204) where the eigenvalues of the n × n diagonal block T11 all have negative real parts and those of T22 have positive real parts.208) = = (−1 )n det det R −λ I + A − λ I − A T (3 ) −λ I − A T = det Q Q (4 ) −λ I + A R Q −λ I + A (5 ) = det −λ I − A T −R −Q −λ I − A T In step (1) we multiply the matrix by −1. £ For the transformation under 4 there are several possibilities. if λ is an eigenvalue. We have for the characteristic polynomial of H det (λ I − H ) = det (2 ) λI − A −R −Q −λ I + A (1 ) = det λ I + AT R Q −λ I − A T R −λ I + A (3.206) is the unique solution of the Riccati equation such that the eigenvalues of A − BF all have strictly negative real part. 105) U11 is nonsingular. For numerical computation it is to great advantage to use the Schur transformation. T22 (3.211) 146 . X A − X BF X H = = (3. Then there is a similarity transformation U that brings H into upper triangular form T such that H = UTU −1 = U T11 0 T12 U. − AT (3. LQ. U22 (3. Inspection shows that the characteristic polynomial does not change if λ is replaced with λ.205) where each of the subblocks has dimensions n × n. In step (5) we multiply the second row and the second column of blocks by −1. The Hamiltonian matrix is of the form H = A R Q . LQG and H2 Control System Design 4. If ( A − BF ) x = λ x then H I I I x= ( A − BF ) x = λ x. Riccati equation and the Hamiltonian matrix (sketch). One is to bring H into Jordan normal form. if and only if U11 is nonsingular. 2.210) 3.209) (3.1 (p.201) I X A − BF − Q + S R −1 S T − ( A − B R −1 S T )T X A − BF I = ( A − BF ). Assume that H has no eigenvalues with zero real part. Using the Riccati equation we obtain from (3. For the LQ problem of Summary 3. In step (2) we interchange the first and second rows of blocks and in step (3) the firt and second columns of blocks. Hence. In that case −1 X = U21 U11 (3. X X X (3. and Q and R symmetric. so is −λ. Then there is a solution X of the Riccati equation such that the eigenvalues of A − BF all have strictly negative real part. Write U= U11 U21 U12 . In step (4) we transpose the matrix.

212) After multiplying on the right by U11 it follows that H I I −1 −1 = −1 U11 T11 U11 . if the error system is stable then the error covariance matrix Y = Ee (t )eT (t ) is the unique solution of the Lyapunov equation ( A − KC )Y + Y ( A − KC )T + GT VG + K T W K = 0. U21 U21 11 (3. Hence. (3. (3. (3. For the LQ problem the nonsingularity of U11 follows by the existence of X such that A − BF is stable. t ∈ Ê. Appendix: Proofs 4.219) (3. The state of the system t x (t ) = −∞ e A(t−s) v(s ) ds. From H U = UT we obtain H U11 U11 = T . The Kalman filter In this subsection we outline the derivation of the Kalman filter. Linear system driven by white noise Consider the stable linear system x ˙(t ) = Ax (t ) + v(t ).7.214) is a stationary stochastic process with covariance matrix t t −∞ Y = = Ex (t ) xT (t ) = t −∞ −∞ T ( t −s ) e A(t−s1 ) E v(s1 )vT (s2 ) e A ∞ 0 T ( t −s ) 2 ds1 ds2 (3. The estimation error e (t ) = x ˆ (t ) − x (t ) of the observer ˙ x ˆ(t ) = A x ˆ(t ) + Bu (t ) + K [ y (t ) − C x ˆ ( t )] satisfies the differential equation e ˙(t ) = ( A − KC )e (t ) − Gv(t ) + K w(t ). y (t ) = Cx (t ) + w(t ). 3. (3.7. where v is white noise with intensity V and w white noise with intensity W . It follows that AY + Y AT = 0 ∞ A e Aτ V e A τ + e Aτ V e A τ AT d τ T T = 0 ∞ d T e Aτ V e A τ dτ d τ = e Aτ V e A Tτ ∞ 0 = −V.216) Hence.3. the covariance matrix Y is the unique solution of the Lyapunov equation AY + Y AT + V = 0. We consider the system x ˙(t ) = Ax (t ) + Bu (t ) + v(t ).215) e A ( t −s ) V e A ds = e Aτ V e A Tτ d τ. U21 U11 U21 U11 (3. E v(t )vT (s ) = V δ(t − s ).218) The noise process −Gv(t ) + K w(t ) is white noise with intensity GT VG + K T W K . (3. driven by white noise with intensity V .213) −1 −1 We identify X = U21 U11 and A − BF = U11 T11 U11 . that is.5.217) Observer error covariance matrix.220) 147 .

226) We rewrite this in the form E zT (t ) Qz (t ) + uT (t ) Ru (t ) = tr Y DT Q D + F T RF = = ∞ tr 0 e( A− BF )s V e( A− BF ) ∞ Ts ds DT Q D + F T RF (3. (3. tr V X . with X the solution of the Riccati equation AT X + X A + DT Q D − X B R−1 BT X = 0.221) Suppose that there exists a gain K that stabilizes the error system and minimizes the error variance matrix Y .228) X and.227) tr V 0 e( A− BF ) Ts DT Q D + F T RF e( A− BF )s ds = tr V X . 148 . Inspection of (3. LQ. (3.7. We discuss how to choose the observer gain K to minimize the error covariance matrix Y . The white noise v has intensity V . ˜ .222) With this gain the observer reduces to the Kalman filter. (3.3.224) for the system x ˙ (t ) = Ax (t ) + Bu (t ) + v(t ).220) as ( K − Y C T W −1 ) W ( K − Y C T W −1 )T + AY + Y AT + GVGT − YC T W −1 CY = 0. should only affect Y quadratically in ε.221) shows that this implies K = Y C T W −1 .6. Minimization of the steady-state mean square error under state feedback We consider the problem of choosing the gain F of the state feedback law u (t ) = − Fx (t ) to minimize the steady state mean square error E zT (t ) Qz (t ) + uT (t ) Ru (t ) (3. To this end we complete the square (in K ) and rewrite the Lyapunov equation (3. hence. X X is the solution of the Lyapunov equation ( A − BF )T X + X ( A − BF ) + DT Q D + F T RF = 0. If the feedback law stabilizes the system x ˙ (t ) = ( A − BF ) x (t ) + v(t ) then the steady-state covariance matrix Y of the state is given by Y = Ex (t ) xT (t ) = 0 ∞ e( A− BF )s V e( A− BF ) Ts ds. with ε a small scalar and K ˜ an arbitrary matrix of the same dimensions Then changing the gain to K + ε K as K . LQG and H2 Control System Design The Kalman filter. (3. is minimized by choosing F = R−1 BT X . (3.223) 3. (3.225) Hence we have for the steady-state mean square error E zT (t ) Qz (t ) + uT (t ) Ru (t ) = E xT (t ) DT Q Dx (t ) + xT (t ) F T RFx (t ) = E tr x (t ) xT (t ) DT Q D + x (t ) xT (t ) F T RF = tr Y DT Q D + F T RF . The minimal error variance matrix Y satisfies the Riccati equation AY + Y AT + GVGT − Y C T W −1 CY = 0.

234) In step (1) we use the well-known matrix identity ( I + A B ) A = A ( I + BA ) . that is. Loop transfer recovery We study an LQG optimal system with measurement noise intensity W = σ W0 as σ ↓ 0 under the assumptions that G = B and that the plant transfer matrix G (s ) = C (sI − A )−1 B is square with stable inverse G −1 ( s ).7. we set out to minimize the steady-state value of EzT (t )z (t ) under the assumption that w is a white noise input with intensity matrix I . (3. We rewrite this as T AYσ + Yσ AT + BV BT − σ Kσ W0 Kσ = 0. Inspection shows that if Yσ ↓ 0 then necessarily Kσ → ∞. −1 (3.7.8.230) with sufficient accuracy. Under the assumption G = B we have in the absence of any measurement noise w y = C (sI − A )−1 B (u + v) = G (s )(u + v). Inspection of the final equality shows that Lσ (s ) −→ F (sI − A )−1 B.233) Kσ ≈ √ BUσ as σ ↓ 0. 126) as if it is an LQG problem. −1 −1 (3.235) 3.7.4.231) (3. σ T = V. we expect that as σ decreases to 0 the covariance Yσ of the estimation error decreases to the zero matrix. which is the loop gain under full state feedback. where Uσ is a square nonsingular matrix (which may depend on σ ) such that Uσ W0 Uσ We study the asymptotic behavior of the loop gain Lσ ( s ) = ≈ ≈ = (1 ) F (sI − A + BF + Kσ C )−1 Kσ C (sI − A )−1 B 1 1 F (sI − A + BF + √ BUσ C )−1 √ BUσ C (sI − A )−1 B σ σ 1 −1 1 F (sI − A + √ BUσ C ) √ BUσ C (sI − A )−1 B σ σ 1 F (sI − A )−1 I + √ BUσ C (sI − A )−1 σ −1 1 √ BUσ C (sI − A )−1 B σ −1 = = 1 1 F (sI − A )−1 B √ Uσ I + √ C (sI − A )−1 BUσ C (sI − A )−1 B σ σ √ −1 F (sI − A )−1 BUσ I σ + C (sI − A )−1 BUσ C (sI − A )−1 B. In fact we may write 1 (3. Under the assumption G = B the Riccati equation for the optimal observer is AYσ + Yσ AT + BV BT − Yσ C T W −1 CYσ = 0. Appendix: Proofs 3.232) with Kσ = Yσ C T W −1 the gain.3. Hence.7.3 (p. Solution of the H2 optimization problem We solve the standard H2 optimization problem of § 3. σ ↓0 (3. From the noise v and the known input u the state x may in turn be reconstructed with arbitrary precision. (3.229) Because by assumption G is stable the input noise v may be recovered with arbitrary precision by approximating the inverse relation v = G −1 ( s ) y − u (3.236) 149 .

249) 150 .3. C2 x (t ) + D21 w(t ) + D22 u (t ). 118). (3. u(t ) (3. C1 x (t ) + D11 w(t ) + D12 u (t ). (3.245) (3.238) If D11 = 0 then the output z has a white noise component that may well make the mean square output (3. and the weighting matrix I T D12 D12 T D12 D12 (3. Under this assumption we have z (t ) = C1 x (t ) + D12 u (t ) = I where z0 (t ) = C1 x (t ). 113) and is given in Summary 3. We therefore assume that D11 = 0.244) The second equation may be put into the standard form for the Kalman filter if we consider y (t ) − D22 u (t ) as the observed variable rather than y (t ).240) D12 C1 x (t ) = I u(t ) D12 z0 ( t ) . Output feedback.236) infinite.242) The gain matrix F may easily be found from the results of § 3.246) defines a stochastic system with cross correlated noise terms. To this end we consider the equations x ˙ (t ) y (t ) = = Ax (t ) + B1 w(t ) + B2 u (t ). EzT (t )z (t ) = = E zT 0 (t ) E zT 0 (t ) uT ( t ) uT ( t ) I T D12 I T D12 I D12 z0 u(t ) z0 . C2 x (t ) + v(t ) (3. z0 (t ) = C1 x (t ) is stabilizable and detectable.241) T D12 be nonsingular. For this it is enough to study the equations x ˙(t ) z (t ) = = Ax (t ) + B1 w(t ) + B2 u (t ). We have E w(t ) v(t ) = wT ( s ) I D21 vT ( s ) = E I w(t )wT (s ) I D21 T D21 (3.4. We first consider the solution with state feedback.248) T D21 T δ( t − s ). y (t ) = C2 x (t ) is stabilizable and detectable.243) (3.237) (3. and the intensity matrix I D21 T D21 T D21 D21 (3. If the state is not available for feedback then it needs to be estimated with a Kalman filter. LQ. The is positive-definite.239) D12 T D12 D12 This defines a linear regulator problem with a cross term in the output and input.8 (p.247) (3. As a result. D21 D21 Suppose that the system x ˙(t ) = Ax (t ) + B1 w(t ). u(t ) (3. It has a solution if the system x ˙ (t ) = Ax (t ) + B2 u (t ).3 (p. LQG and H2 Control System Design State feedback. If we denote the observation noise as v(t ) = D21 w(t ) then x ˙ (t ) y (t ) − D22 u (t ) = = Ax (t ) + B1 w(t ) + B2 u (t ). A necessary and sufficient condition for the latter is that D12 solution to the regulator problem is a state feedback law of the form u (t ) = − Fx (t ).2. (3.

242).7. F is the same state feedback gain as in (3.3. (3.251) 151 .3. Then there exists a well-defined Kalman filter of the form ˙ x ˆ(t ) = A x ˆ(t ) + B2 u (t ) + K [ y (t ) − C2 x ˆ(t ) − D22 u (t )]. Once the Kalman filter (3.250) is in place the optimal input for the output feedback problem is obtained as u(t ) = − F x ˆ (t ).4 (p.250) The gain matrix K may be solved from the formulas of § 3. (3. 120). Appendix: Proofs T is positive-definite. A necessary and sufficient condition for the latter is that D21 D21 be nonsingular.


may be used to design for multivariable integral action.1 (Two-tank liquid flow process). Example 4. Multivariable Control System Design Overview – Design of controllers for multivariable systems requires an assessment of structural properties of transfer matrices. The internal model principle applies to multivariable systems and. Examples are decentralized control and decoupling control. Examples of multivariable feedback systems The following two examples discuss various phenomena that specifically occur in MIMO feedback systems and not in SISO systems.1. Introduction Many complex engineering systems are equipped with several actuators that may influence their static and dynamic behavior. Consider the flow process of Fig. The zeros and gains in multivariable systems have directions. Systems with more than one actuating control input and more than one sensor output may be considered as multivariable systems or multi-input-multi-output (MIMO) systems. in cases where some form of automatic control is required over the system. The control objective for multivariable systems is to obtain a desirable behavior of several output variables by simultaneously manipulating several input channels. 4. for example. The control objective is to keep the liquid levels h1 and h2 between acceptable limits by applying feedback from measurements of these liquid levels. With norms of multivariable signals and systems it is possible to obtain bounds for gains. such as interaction between loops and multivariable non-minimum phase behavior. bandwidth and other system properties.1. and the outgoing flow φ3 acts as a disturbance. The incoming flow φ1 and recycle flow φ4 act as manipulable input variables to the system.1. 4. A possible approach to multivariable controller design is to reduce the problem to a series of single loop controller design problems. Commonly. also several sensors are available to provide measurement information about important system variables that may be used for feedback control purposes. 4.1. while 153 .4.1.

Figure 4.1.: Two-tank liquid flow process with recycle (flows φ1, φ2, φ3, φ4 and levels h1, h2)
Figure 4.2.: Two-loop feedback control of the two-tank liquid flow process (controllers k1, k2, plant P, references h1,ref and h2,ref)

as a consequence the transfer matrix P (s ) = 1 s+1 1 s ( s+1 ) 1 s+1 1 − s+ 1 (4. is an example of a decentralized control structure.1) Here the h i and φi denote the deviations from the steady state levels and flows. a dynamic model. • A possible feedback scheme using two separate loops is shown in Fig. A control structure using individual loops. is ˙ 1 (t ) −1 h = ˙ 1 h2 (t ) 0 0 h1 ( t ) 1 + h2 ( t ) 0 1 −1 φ1 ( t ) 0 + φ (t ). Laplace transformation yields s+1 H1 (s ) = H2 (s ) −1 0 s −1 1 0 1 −1 1 (s ) 4 (s ) + 0 −1 3 (s ) .1. which results in the transfer matrix relationship H1 (s ) = H2 (s ) 1 s+1 1 s ( s+1 ) 1 s+1 1 − s+ 1 1 (s ) 4 (s ) + 0 −1 s 3 ( s ).3) has nonzero entries both at the diagonal and at the off-diagonal entries. 155 . the plant transfer function in the open upper loop from φ1 to h1 . As a result. Owing to the negative steady state gain in the lower loop. Note that in this control scheme. s+1 s ( s+2 ) . there exists a coupling between both loops due to the non-diagonal terms in the transfer matrix (4. if we control the two output variables h1 and h2 using two separate control loops. 1 s. linearized about a given steady state. depends on k2 .4. φ4 ( t ) −1 3 (4.6. The phenomenon that the loop gain in one loop also depends on the loop gain in another loop is called interaction.2. as opposed to a centralized control structure where all measured information is available for feedback in all feedback channels. negative values of k2 stabilize the lower loop. The dynamics of P11 (s ) llc changes with k2 : k2 = 0 : k 2 = −1 : k2 = −∞ : P11 (s ) P11 (s ) P11 (s ) llc llc llc = = = 1 s+1 . with the lower loop closed with proportional gain k2 .2) The example demonstrates the following typical MIMO system phenomena: • Each of the manipulable inputs φ1 and φ4 affects each of the outputs to be controlled h1 and h2 . and which one to control h2 .4) Here llc indicates that the lower loop is closed. As derived in Appendix 4. Introduction accommodating variations in the output flow φ3 . it is not immediately clear which input to use to control h1 . • Consequently. 4. (4. P11 (s ) llc = k2 1 1− . The multiple loop control structure used here does not explicitly acknowledge interaction phenomena.3). This issue is called the input/output pairing problem. such as multiple loop control. s+1 s ( s + 1 − k2 ) (4.

−1 s (4. Multivariable Control System Design h1. Consider the two input. s+1 s+3 (s + 1 )(s + 3 ) (4. u2 (s ) (4. two output linear system y1 ( s ) = y2 ( s ) 1 s+1 1 s+1 2 s+3 1 s+1 u1 ( s ) . In Example 4.ref k1 K pre k2 controller K φ1 φ4 P h1 h2 Figure 4.2 (Multivariable non-minimum phase behavior). Figure 4.7) Closing both loops with proportional gains k1 > 0 and k2 < 0 leads to a stable system. pre K21 (s ) = pre The compensated plant PK pre step.8) This approach of removing interaction before actually closing multiple individual loops is called decoupling control. reads P (s ) K pre (s ) = 1 s 1 . a multivariable controller K has been designed having transfer matrix K (s ) = K pre (s ) k1 0 0 k1 −k2 = .4. (4. The precompensator K pre is to be designed such that the transfer matrix PK pre is diagonal. u2 (s ) = −k2 y2 (s ). y1 ( s ) = 2 s−1 1 − u1 ( s ) = − u1 (s ).1 a precompensator K pre in series with the plant can be found which removes the interaction completely.9) If the lower loop is closed with constant gain.1.ref h2.3 shows the feedback structure.3.1.: Decoupling feedback control of two-tank liquid flow process • A different approach to multivariable control is to try to compensate for the plant interaction. The next example shows that a multivariable system may show non-minimum phase behavior. k1 / s k2 k2 (4.6) s for which a multi-loop feedback is to be designed as a next 0 0 .5) then PK pre is diagonal by selecting K12 (s ) = −1.10) 156 . In doing so. then for high gain values (k2 → ∞) the lower feedback loop is stable but has a zero in the right-half plane. Suppose that the precompensator K pre is given the structure K pre (s ) = 1 pre K21 (s ) K12 (s ) 1 pre (4. Example 4.

as it is the result of our model formulation that determines which inputs and which outputs are ordered as one. K22 (s ) = 1. Thus the selection of the most useful input-output pairs is a non-trivial issue in multiloop control or decentralized control. This approach of decoupling control may have some advantages in certain practical situations. Inserting the matrix (4.e. In the next section it will be shown that s = 1 is an unstable transmission zero of the multivariable system and this limits the closed loop behavior irrespective of the controller used. A classical approach towards dealing with multivariable systems is to bring a multivariable system to a structure that is a collection of one input. In the example the input-output pairing has been the natural one: output i is connected by a feedback loop to input i. then y2 ( s ) = − s−1 u2 (s ). Consider the method of decoupling precompensation. (4.12) shows that it loses rank at s = 1.4.11) Apparently. i. Express the precompensator as K pre (s ) = K11 (s ) K21 (s ) K12 (s ) . u1 = −k1 y1 with k1 → ∞. one output control problems.12) into PK pre . (s + 1 )(s + 3 ) (4.13) Decoupling means that PK pre is diagonal. The example suggests that the occurrence of non-minimum phase phenomena is a property of the transfer matrix and exhibits itself in a matrix sense. two and so on. and was thought to lead to a simpler design approach.14) Then the compensated system is P (s ) K pre (s ) = 1 s+1 1 s+1 2 s+3 1 s+1 s−1 s+1 − (s+1 1 −2 s )( s+3 ) +3 = 0 −1 1 0 .1. Conversely if the upper loop is closed under high gain feedback. s+3 K21 (s ) = −1. 157 . One loop can be closed with high gains under stability of the loop. Analysis of the transfer matrix P (s ) = 1 s+1 1 s+1 2 s+3 1 s+1 = 1 s+3 (s + 1 )(s + 3 ) s + 3 2s + 2 s+3 (4. K22 (s ) (4. in control configurations where one has individual loops as in the example.15) The requirement of decoupling has introduced a right-half plane zero in each of the loops. so that either loop now only can have a restricted gain in the feedback loop. and the other loop is then restricted to have limited gain due to the non-minimum phase behavior. Introduction Thus under high gain feedback of the lower loop. This is however quite an arbitrary choice. s−1 − (s+1 )( s+3 ) (4. a solution for a decoupling K pre is K11 (s ) = 1. not connected to a particular input-output pair. the upper part of the system exhibits nonminimum phase behavior. K12 (s ) = −2 s+1 . but shows up in both relations. decoupling may introduce additional restrictions regarding the feedback properties of the system. as the above example showed. However. It shows that a zero of a multivariable system has no relationship with the possible zeros of individual entries of the transfer matrix. the non-minimum phase behavior of the system is not connected to one particular input-output relation.

if and only if its inverse is polynomial as well. The systems are analyzed both in the time domain and frequency domain. and (Skogestad and Postlethwaite 1995).2. Gohberg. or. (1985).1.g. Poles and zeros of multivariable systems In this section. Survey of developments in multivariable control The first papers on multivariable control systems appeared in the fifties and considered aspects of noninteracting control. a matrix whose entries are polynomials. e. Lancaster. Modern approaches to frequency domain methods can be found in (Raisch 1993).2. The matrix generalization is as follows. Definition 4. the books by MacDuffee (1956).1. The polynomial representation was also studied by Wolovich (1974).4. Any rational matrix P may be written as a fraction P= 1 N d where N is a polynomial matrix1 and d is the monic least common multiple of all denominator polynomials of the entries of P. process-control oriented approach to multivariable control is presented in (Morari and Zafiriou 1989). The geometric approach to multivariable state-space control design is contained in the classical book by Wonham (1979) and in the book by Basile and Marro (1992). In the sixties. a number of structural properties of multivariable systems are discussed. A survey of classical design methods for multivariable control systems can be found in (Korn and Wilfert 1982). Unimodular matrices are considered having no zeros and. In the scalar case a polynomial has no zeros if and only if it is a nonzero constant. (Lunze 1988) and in the two books by Tolle (1983). Real-rational means that every entry Pi j of P is a ratio of two polynomials having real coefficients. It may be shown that a square polynomial matrix U is unimodular if and only if det U is a nonzero constant.2. multiplication by unimodular matrices does not affect the zeros.2. Multivariable Control System Design 4. 4. Polynomial matrices will be studied first. to put it differently. The generalization of the Nyquist criterion and of root locus techniques to the multivariable case can be found in the work of Postlethwaite and MacFarlane (1979). A modern. the work of Rosenbrock (1970) considered matrix techniques to study questions of rational and polynomial representation of multivariable systems.. 1 That is.1 (Unimodular polynomial matrix). A polynomial matrix is unimodular if it square and its inverse exists and is a polynomial matrix. and Rodman (1982) and Kailath (1980). These properties are important for the understanding of the behavior of the system under feedback. Interaction phenomena in multivariable process control systems are discussed in terms of a process control formulation in (McAvoy 1983). The use of Nyquist techniques for multivariable control design was developed by Rosenbrock (1974a). 4. 158 . hence. The numerical properties of several computational algorithms relevant to the area of multivariable control design are discussed in (Svaricek 1995). Polynomial and rational matrices Let P be a proper real-rational transfer matrix. The books by Kailath (1980) and Vardulakis (1991) provide a broad overview. (Maciejowski 1989). Many results about polynomial matrices may be found in.

Example 4. . we interchanged the rows and then divided the (new) second row by a factor 2.2. The matrix S is known as the Smith form of N and the polynomials εi the invariant polynomials of N. For every polynomial matrix N there exist unimodular U and V such that  ε1 0 0 0 0  0 ε2 0 0 0   0 0 . The Smith form S of N may be obtained by applying to N a sequence of elementary row and column operations. Finally. r ) are monic polynomials with the property that εi /εi−1 is polynomial. which are: • multiply a row/column by a nonzero constant. .16) where εi . and the zeros of N are defined as the zeros of one or more of its invariant polynomials. In this case r is the normal rank of the polynomial matrix N .2.3 (Elementary row/column operations). Elementary row operations correspond the premultiplication by unimodular matrices. . 1 1 . and elementary column operations correspond the postmultiplication by unimodular matrices. For the above four elementary operations these are (1) premultiply by U L1 (s ) := (2) postmultiply by V R2 (s ) := (3) premultiply by U L3 (s ) := 1 −1 1 0 1 0 0 . in step (2) we added the first column to the second column. s + 1 times the second row was subtracted from row 1. • add a row/column multiplied by a polynomial to another row/column. Poles and zeros of multivariable systems Summary 4. in step (4). 1 159 . s Here. 0 0 U NV =  0 0 0 ε 0  r 0 0 0 0 0 0 0 0 0 0 S  0 0  0  0  0 0 (4. • interchange two rows or two columns. in step (3).2 (Smith form of a polynomial matrix).2..4. in step (1) we subtracted row 1 from row 2.. (i = 1. 1 −(s + 1 ) . . The following sequence of elementary operations brings the polynomial matrix N (s ) = s+1 s+2 s−1 s−2 to diagonal Smith-form: s+1 s+2 s−1 s−2 ⇒ (1 ) s+1 1 s−1 −1 ⇒ (2 ) s + 1 2s 1 0 ⇒ (3 ) 0 2s 1 0 ⇒ (4 ) 1 0 0 .

. Example 4. s N (s ) S (s ) This clearly shows that S is the Smith-form of N . We immediately get a generalization of the Smith form. . Consider the transfer matrix P (s ) = 1 s 1 s+1 1 s ( s+1 ) 1 s+1 = 1 1 s+1 1 N (s ). (i = 1. The matrix M is known as the Smith-McMillan form of P. Now consider a rational matrix P.18) (4.2. 1 /2 0 Instead of applying sequentially multiplication on N we may also first combine the sequence of unimodular matrices into two unimodular matrices U = U L4 U L3 U L1 and V = V R2 . and then apply the U and V . and in (Schrader and Sain 1989).2.4. = s s s (s + 1 ) d (s ) The Smith form of N is obtained by the following sequence of elementary operations: s+1 1 s+1 ⇒ s s − s2 1 1 ⇒ 0 0 1 s+1 ⇒ 0 s2 0 s2 160 .19) and the εi . .  0 .. Multivariable Control System Design (4) premultiply by U L4 (s ) := 0 1 . The transmission zeros of P are r defined as the zeros of r i=1 ε i and the poles are the zeros of i=1 ψi .5 (Smith-McMillan form).17) where N is a polynomial matrix and d a scalar polynomial. In this case r is the normal rank of the rational matrix P. −1 1 1 + s / 2 −1 / 2 − s / 2 U (s ) = U L4 (s )U L3 (s )U L1 (s ) s+1 s+2 s−1 s−2 1 1 0 1 V (s ) = V R2 (s ) = 1 0 0 . .4 (Smith-McMillan form of a rational matrix). 0 0 0 0 U PV =   εr  0 0 0 0 0 ψr   0 0 0 0 0 0 0 0 0 0 0 0 M where the εi and ψi are coprime such that ε εi = i ψi d (4. We write P as P= 1 N d (4. The polynomial matrix N has rank 2 and has one zero at s = 0. Definitions for other notions of zeros may be found in (Rosenbrock 1970). Summary 4. For every rational matrix P there exist unimodular polynomial matrices U and V such that   ε1 0 0 0 0 0 ψ1  0 ε2 0 0 0 0   ψ2   . (Rosenbrock 1973) and (Rosenbrock 1974b). r ) are the invariant polynomials of N = d P.

If the normal rank of an n y × nu plant P is r then at most r entries of the output y = Pu can be given independent values by manipulating the input u. This indicates an undesired situation which is to be prevented by ascertaining the rank of P to equal the number of the be controller outputs. For square invertible matrices P there further holds that s0 is a transmission zero of P if and only it is pole if P−1 . This generally means that r entries of the output are enough for feedback control. T = ( I + PK )−1 PK . (4. (Generally P need not be square so its determinant may not even exist.6 (Squaring down). −1. as required by unity feedback. Let K be any nu × n y controller transfer matrix. For example P (s ) = 1 1/ s 0 1 1 −1/s has has a transmission zero at s = 0 — even though det P (s ) = 1 — because P−1 (s ) = 0 1 a pole at s = 0. so that S has one or more eigenvalues λ = 1 for any s in the complex plane. In such cases S ( jω) in whatever norm does not converge to zero as ω → 0. Poles and zeros of multivariable systems Division of each diagonal entry by d yields the Smith-McMillan form 1 s ( s+1 ) 0 s s+1 0 . The plant has no transmission zeros and has a double pole at s = 0. Consider P (s ) = 1 s 1 s2 . then the n y × n y loop gain PK is singular. 0}. in particular everywhere on the imaginary axis. In the determinant they do cancel. Therefore from the determinant det P (s ) = 1 ( s + 1 )2 we may not always uncover all poles and transmission zeros of P.2.2.4) S = ( I + PK )−1 . then the sensitivity matrix S and complementary sensitivity matrix T as previously defined are for the multivariable case (see Exercise 1. a 161 . Feedback around a nonsquare P can only occur in conjunction with a compensator K which makes the series connection PK square. As pre-compensator we propose K0 (s ) = 1 . An s0 ∈ C that is not a pole of P is a transmission zero of P if and only if the rank of P (s0 ) is strictly less than the normal rank r as defined by the Smith-McMillan form of P.) An s0 ∈ C is a pole of P if and only if it is a pole of one or more entries Pi j of P. We investigate down squaring. the set of poles is {−1. The set of transmission zeros is {0}.20) If r < n y . Its Smith-McMillan form is M (s ) = 1/s2 0 .5. Example 4. This must be realized by proper selection and application of actuators and sensors in the feedback control system.4. This shows that transmission zeros of a multivariable transfer matrix may coincide with poles without being canceled if they occur in different diagonal entries of the Smith-McMillan form. Such controllers K are said to square down the plant P.

In the example. det d e f s 2 −1 where the second matrix in this expression are the first two columns of U −1 . Multivariable Control System Design This results in the squared-down system P (s ) K0 (s ) = s+a . 0 (4. A general 1 × 2 plant P (s ) = a (s ) b (s ) has transmission zeros if and only if a and b have common zeros. In this example the choice a > 0 creates a zero in the left-half plane. and (Shaked 1976).21) 1 . (Stoorvogel and Ludlage 1994). so that these cannot be considered as belonging to the class of generic systems. and results regarding the squaring problem are described in (Horowitz and Gera 1979). However. Example 4. Consider the transfer matrix P and its Smith-McMillan form 1 P (s ) =  1 0 s  1 s ( s2−1 ) 1  s2 −1 s s2 −1 = =  s2 − 1 1 1 s ( s 2 − 1 ) s  s ( s2 − 1 ) 0 s2   1  0 1 0 0 s ( s2−1 ) s2 − 1 s 0 1  0 s 1 s 2 −1 0 0 0 U −1 ( s ) M (s ) V −1 ( s )  (4. many physical systems bear structure in the entries of their transfer matrix. This leads for example to the choice a = 0. c = 1. The theory of transmission zero placement by squaring down is presently not complete. b = 0. f We obtain the newly formed set of transmission zeros as the set of zeros of   1 0 a b c  s 0  = (cd − a f ) + (ce − b f )s. f = 0. ElSingaby. Generally it holds that nonsquare plants have no transmission zeros assuming the entries of Pi j are in some sense uncorrelated with other entries.4. allowing a stable closed-loop system with high loop gain. P does not have transmission zeros. Choose this zero at s = −1. (Karcanias and Giannakopoulos 1989). 0}. and ElArabawy 1986).2.7 (Squaring down). If a and b are in some sense randomly chosen then it is very unlikely that a and b have common zeros. e = 1. which we parameterize as K0 = a d b e c . Many physically existing nonsquare systems actually turn out to possess transmission zeros. so that subsequent high gain feedback can be applied.22) There is one transmission zero {0} and the set of poles is {−1. Sain and Schrader (1990) describe the general problem area. d = 1. Thus one single transmission zero can be assigned to a desired location. (Sebakhy. (Le and Safonov 1992). s2 Apparently squaring down may introduces transmission zeros. 1. although a number of results are available in the literature. The postcompensator K0 is now 2 × 3. K0 (s ) := 0 1 0 1 1 0 162 .

and the squared-down system

    K0(s) P(s) =  [ 0           s/(s²−1)   ]   =      1       [ 0              s²  ]
                  [ (s+1)/s     1/(s(s−1)) ]       s(s²−1)    [ (s+1)(s²−1)    s+1 ]

This squared-down matrix may be shown to have Smith-McMillan form

    [ 1/(s(s²−1))    0      ]
    [ 0              s(s+1) ]

and consequently the set of transmission zeros is {0, −1}, while the set of poles remains unchanged as {−1, 1, 0}. These values are as expected, i.e., the zero at s = 0 has been retained, and the new zero at −1 has been formed. Note that s = 0 is both a zero and a pole.

4.2.2. Transmission zeros of state-space realizations
Any proper n y × nu rational matrix P has a state-space realization P (s ) = C (sIn − A )−1 B + D, A ∈ Rn× n , B ∈ Rn× nu , C ∈ Rn y × n , D ∈ Rn y × nu .

Realizations are commonly denoted as a quadruple ( A, B, C, D ). There are many realizations ( A, B, C, D ) that define the same transfer matrix P. For one, the order n is not fixed. A realization of order n of P is minimal if no realization of P exists that has a lower order. Realizations are minimal if and only if the order n equals the degree of the polynomial ψ1 ψ2 · · · ψr formed from the denominator polynomials in (4.18), see (Rosenbrock 1970). The poles of P then equal the eigenvalues of A, which, incidentally, shows that computation of the poles is a standard eigenvalue problem. Numerical computation of the transmission zeros may be more difficult. In some special cases computation can be done by standard operations; in other cases specialized numerical algorithms have to be used.
Lemma 4.2.8 (Transmission zeros of a minimal state-space system). Let ( A, B, C, D ) be a minimal state-space realization of order n of a proper real-rational ny × nu transfer matrix P of rank r. Then the transmission zeros s0 ∈ C of P as defined in Summary 4.2.4 are the zeros s0 of the polynomial matrix

    [ A − s0 I    B ]
    [     C       D ]                                                         (4.23)

That is, s0 is a transmission zero if and only if

    rank [ A − s0 I    B ]  <  n + r.
         [     C       D ]

The proof of this result may be found in Appendix 4.6. Several further properties may be derived from this result.
Summary 4.2.9 (Invariance of transmission zeros). The zeros s0 of (4.23) are invariant under the following operations:

• nonsingular input space transformations ( T2 ) and output space transformations ( T3 ):

    [ A − sI    B ]      ⇒      [ A − sI    B T2    ]
    [    C      D ]             [ T3 C      T3 D T2 ]

• nonsingular state space transformations ( T1 ):

    [ A − sI    B ]      ⇒      [ T1 A T1^{-1} − sI    T1 B ]
    [    C      D ]             [ C T1^{-1}            D    ]

• static output feedback operations, i.e. for the system x˙ = Ax + Bu, y = Cx + Du we apply the feedback law u = −Ky + v, where v acts as the new input. Assuming that det( I + DK ) ≠ 0, this involves the transformation:

    [ A − sI    B ]      ⇒      [ A − BK( I + DK )^{-1}C − sI    B( I + KD )^{-1}   ]
    [    C      D ]             [ ( I + DK )^{-1}C                ( I + DK )^{-1}D  ]

• state feedback ( F ) and output injection ( L ) operations:

    [ A − sI    B ]   ⇒   [ A − BF − sI    B ]   ⇒   [ A − BF − LC + LDF − sI    B − LD ]
    [    C      D ]       [ C − DF         D ]       [ C − DF                    D      ]

The transmission zeros have a clear interpretation as that of blocking certain exponential inputs, (Desoer and Schulman 1974), (MacFarlane and Karcanias 1976).
Example 4.2.10 (Blocking property of transmission zeros). Suppose that the system realization P(s) = C( sI − A )^{-1}B + D is minimal and that P is left-invertible, that is, rank P = nu. Then s0 is a transmission zero if and only if

    [ A − s0 I    B ]
    [     C       D ]

has less than full column rank, in other words if and only if there exist vectors xs0, us0 — not both zero² — that satisfy

    [ A − s0 I    B ] [ xs0 ]   =  0.                                          (4.24)
    [     C       D ] [ us0 ]

Then the state and input defined as

    x(t) = xs0 e^{s0 t},      u(t) = us0 e^{s0 t}

satisfy the system equations

    x˙(t) = A x(t) + B u(t),      C x(t) + D u(t) = 0.                         (4.25)

We see that y = Cx + Du is identically zero. Transmission zeros s0 hence correspond to certain exponential inputs whose output is zero for all time.

Under constant high gain output feedback u = Ky the finite closed-loop poles generally approach the transmission zeros of the transfer matrix P. In this respect the transmission zeros of a MIMO system play a similar role in determining performance limitations as do the zeros in the SISO case. See e.g. (Francis and Wonham 1975).

If P is square and invertible, then the transmission zeros are the zeros of the polynomial

    det [ A − sI    B ]                                                        (4.26)
        [    C      D ]

If in addition D is invertible, then

    det [ A − sI    B ] [ I            0 ]   =   det [ ( A − BD^{-1}C ) − sI    B ]          (4.27)
        [    C      D ] [ −D^{-1}C     I ]           [ 0                        D ]

So then the transmission zeros are the eigenvalues of A − BD^{-1}C. If D is not invertible then computation of the transmission zeros is less straightforward. We may have to resort to computation of the Smith-McMillan form, but preferably we use methods based on state-space realizations.
² In fact here neither xs0 nor us0 is zero.
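The rank test of Lemma 4.2.8 is straightforward to carry out numerically. The following MATLAB fragment is only a sketch (it is not part of the original text); it assumes that the matrices A, B, C, D, a candidate zero s0 and the normal rank r of P (called rankP below) are available.

    % Rank test for a candidate transmission zero s0 (Lemma 4.2.8).
    n = size(A, 1);
    pencil = [A - s0*eye(n), B; C, D];
    if rank(pencil) < n + rankP        % rankP = normal rank r of P (assumed known)
        disp('s0 is a transmission zero')
    end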



Example 4.2.11 (Transmission zeros of the liquid flow system). Consider Example 4.1.1 with plant transfer matrix

    P(s) =  [ 1/(s+1)        1/(s+1)  ]   =      1      [ s    s  ]
            [ 1/(s(s+1))    −1/(s+1)  ]       s(s+1)    [ 1   −s  ]

Elementary row and column operations successively applied to the polynomial part lead to the Smith form

    [ s    s  ]   ⇒(1)   [ s+1    0  ]   ⇒(2)   [ 1      −s ]   ⇒(3)   [ 1    0      ]
    [ 1   −s  ]          [ 1     −s  ]          [ s+1     0 ]          [ 0    s(s+1) ]

The Smith-McMillan form hence is

    M(s) =     1      [ 1    0      ]   =   [ 1/(s(s+1))    0 ]
            s(s+1)    [ 0    s(s+1) ]       [ 0             1 ]

The liquid flow system has no zeros but has two poles {0, −1}. We may compute the poles and zeros also from the state space representation of P,

    P(s) = C( sI − A )^{-1}B + D,        [ A − sI    B ]   =   [ −1−s    0     1     1 ]
                                         [    C      D ]       [   1    −s     0    −1 ]
                                                                [   1     0     0     0 ]
                                                                [   0     1     0     0 ]

The realization is minimal because both B and C are square and nonsingular. The poles are therefore the eigenvalues of A, which indeed are {0, −1} as before. There are no transmission zeros because det [ A − sI  B; C  D ] is a nonzero constant (verify this).
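For this example the computations are easily checked numerically. The following MATLAB fragment is a sketch (not part of the original text); it uses the state-space matrices given above and only built-in functions.

    % Liquid flow example: poles and (absence of) transmission zeros.
    A = [-1 0; 1 0];  B = [1 1; 0 -1];  C = eye(2);  D = zeros(2);
    poles = eig(A)                          % returns 0 and -1, the poles found above
    s = [0, -1, 1, 2i];                     % a few test points in the complex plane
    d = arrayfun(@(s0) det([A - s0*eye(2), B; C, D]), s)
    % d equals the same nonzero constant (-1) at every test point, consistent with
    % the pencil determinant being a nonzero constant: no transmission zeros.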
Example 4.2.12 (Transmission zeros via state space and transfer matrix representation). Consider the system with state space realization ( A, B, C, D ), with

    A = [ 0  0  0 ]     B = [ 0  1 ]     C = [ 0  1  0 ]     D = [ 0  0 ]
        [ 1  0  0 ]         [ 1  0 ]         [ 0  0  1 ]         [ 0  0 ]
        [ 0  1  0 ]         [ 0  0 ]

The system's transfer matrix P equals

    P(s) = C( sI − A )^{-1}B + D = [ 1/s     1/s²  ]   =   1   [ s²   s ]
                                   [ 1/s²    1/s³  ]      s³   [ s    1 ]

Elementary operations on the numerator polynomial matrix result in

    [ s²   s ]   ⇒   [ s    1 ]   ⇒   [ 1   0 ]
    [ s    1 ]       [ 0    0 ]       [ 0   0 ]

so the Smith-McMillan form of P is

    M(s) = 1/s³ [ 1   0 ]  =  [ 1/s³   0 ]
                [ 0   0 ]     [ 0      0 ]

The system therefore has three poles, all at zero: {0, 0, 0}. As the matrix A ∈ R3×3 has an equal number of eigenvalues, it must be that the realization is minimal and that its eigenvalues



coincide with the poles of the system. From the Smith-McMillan form we see that there are no transmission zeros. This may also be verified from the matrix pencil

    [ A − sI    B ]   =   [ −s    0    0    0    1 ]
    [    C      D ]       [  1   −s    0    1    0 ]
                          [  0    1   −s    0    0 ]
                          [  0    1    0    0    0 ]
                          [  0    0    1    0    0 ]

Elementary row and column operations do not affect the rank of this matrix pencil. Using the unit entries in the B and C blocks, all terms involving s can be eliminated by such operations, so the rank of the pencil equals that of a constant matrix.

As s has disappeared from the matrix pencil, it is immediate that the rank of the matrix pencil does not depend on s. The system hence has no transmission zeros.

4.2.3. Numerical computation of transmission zeros
If a transfer matrix P is given, the numerical computation of the transmission zeros is in general based on a minimal state space realization of P, because numerical algorithms for rational and polynomial matrix operations are not yet as reliable as state space methods. We briefly discuss two approaches.
Algorithm 4.2.13 (Transmission zeros via high gain output feedback). The approach has been proposed by Davison and Wang (1974), (1978). Let ( A , B, C, D ) be a minimal realization of P and assume that P is either left-invertible or right-invertible. Then:

1. Determine an arbitrary full rank output feedback matrix K ∈ R^{nu×ny}, e.g. using a random number generator.

2. Determine the eigenvalues of the matrix

       Zρ := A + B K ( (1/ρ) I − D K )^{-1} C

   for various large real values of ρ, e.g. ρ = 10^10, . . . , 10^20 at double precision computation. As ρ goes to ∞ we observe that some eigenvalues go to infinity while others converge (the so-called finite eigenvalues).

3. For square systems, the transmission zeros equal the finite eigenvalues of Zρ. For nonsquare systems, the transmission zeros equal the finite eigenvalues of Zρ for almost all choices of the output feedback matrix K.

4. In cases of doubt, vary the values of ρ and K and repeat the calculations.

A numerical sketch of this procedure is given below. See Section 4.6 for a sketch of the proof.
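The following MATLAB fragment is a minimal sketch of this procedure (it is not part of the original text); the matrices A, B, C, D are assumed to be given and only built-in functions are used.

    % Sketch of Algorithm 4.2.13: zeros via high gain output feedback.
    nu = size(B, 2);  ny = size(C, 1);
    K = randn(nu, ny);                                % arbitrary full rank feedback matrix
    for rho = 10.^(10:2:20)
        Zrho = A + B*K/((1/rho)*eye(ny) - D*K)*C;     % A + B*K*((1/rho)*I - D*K)^(-1)*C
        disp(eig(Zrho).')                             % finite eigenvalues approximate the zeros
    end

Comparing the eigenvalues for successive values of ρ shows which of them converge (the finite eigenvalues) and which ones drift off to infinity.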



Algorithm 4.2.14 (Transmission zeros via generalized eigenvalues). The approach has been proposed in (Laub and Moore 1978) and makes use of the QZ algorithm for solving the generalized eigenvalue problem. Let ( A , B, C, D ) be a minimal realization of P of order n having the additional property that it is left-invertible (if the system is right-invertible, then use the dual system ( A T , C T , B T , D T )). Then:

1. Define the matrices M and L as

       M = [ In    0 ]          L = [  A     B ]
           [ 0     0 ]              [ −C    −D ]

2. Compute a solution to the generalized eigenvalue problem, i.e. determine all values s ∈ C and r ∈ C^{n+nu} satisfying

       [ sM − L ] r = 0

3. The set of finite values for s are the transmission zeros of the system.

Although reliable numerical algorithms exist for the solution of generalized eigenvalue problems, a major problem is to decide which values of s belong to the finite transmission zeros and which values are to be considered as infinite. However, this problem is inherent in all numerical approaches to transmission zero computation. A numerical sketch of the procedure is given below.
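As an illustration, the following MATLAB fragment (a sketch, not part of the original text; A, B, C, D are assumed given and the plant is assumed square so that the pencil is square) sets up the pencil and calls eig with two arguments, which invokes the QZ algorithm.

    % Sketch of Algorithm 4.2.14: zeros as generalized eigenvalues.
    n = size(A, 1);  nu = size(B, 2);  ny = size(C, 1);   % here ny == nu is assumed
    M = blkdiag(eye(n), zeros(ny, nu));                   % M = [I 0; 0 0]
    L = [A, B; -C, -D];
    s = eig(L, M);                                        % solves L*r = s*M*r by the QZ algorithm
    tzeros = s(isfinite(s))                               % keep the finite generalized eigenvalues

Deciding whether any remaining large but finite values should still be regarded as infinite is the difficulty mentioned above.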

4.3. Norms of signals and systems
For a SISO system with transfer function H the interpretation of | H ( jω)| as a “gain” from input to output is clear, but how may we define “gain” for a MIMO system with a transfer matrix H? One way to do this is to use norms ‖ H ( jω)‖ instead of absolute values. There are many different norms and in this section we review the most common norms for both signals and systems. The theory of norms leads to a re-assessment of stability.

4.3.1. Norms of vector-valued signals
We consider continuous-time signals z defined on the time axis R. These signals may be scalar-valued (with values in R or C) or vector-valued (with values in Rn or Cn). They may be added and multiplied by real or complex numbers and, hence, are elements of what is known as a vector space. Given such a signal z, its norm is a mathematically well-defined notion that is a measure for the “size” of the signal.
Definition 4.3.1 (Norms). Let X be a vector space over the real or complex numbers. Then a function

    ‖ · ‖ : X → R                                                              (4.28)

that maps X into the real numbers R is a norm if it satisfies the following properties:

1. ‖x‖ ≥ 0 for all x ∈ X (nonnegativity),

2. ‖x‖ = 0 if and only if x = 0 (positive-definiteness),

3. ‖λx‖ = |λ| · ‖x‖ for every scalar λ and all x ∈ X (homogeneity with respect to scaling),

4. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x ∈ X and y ∈ X (triangle inequality).

The pair ( X, ‖ · ‖ ) is called a normed vector space. If it is clear which norm is used, X by itself is often called a normed vector space. A well-known norm is the p-norm of vectors in Cn.
Example 4.3.2 (Norms of vectors in Cn). Suppose that x = ( x1, x2, · · · , xn ) is an n-dimensional complex-valued vector, that is, x is an element of Cn. Then for 1 ≤ p ≤ ∞ the p-norm of x is defined as

    ‖x‖p  =  ( Σ_{i=1}^{n} |xi|^p )^{1/p}      for 1 ≤ p < ∞,
    ‖x‖∞  =  max_{i=1,2,···,n} |xi|            for p = ∞.                      (4.29)

Well-known special cases are the norms

    ‖x‖1 = Σ_{i=1}^{n} |xi|,     ‖x‖2 = ( Σ_{i=1}^{n} |xi|² )^{1/2},     ‖x‖∞ = max_{i=1,2,···,n} |xi|.     (4.30)

‖x‖2 is the familiar Euclidean norm.
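As a quick numerical illustration (a sketch, not part of the original text), the three norms in (4.30) can be evaluated with the built-in MATLAB function norm:

    x = [3; -4; 0];            % an example vector in R^3
    n1   = norm(x, 1);         % 1-norm:  |3| + |-4| + |0| = 7
    n2   = norm(x, 2);         % 2-norm (Euclidean norm):  sqrt(9 + 16) = 5
    ninf = norm(x, Inf);       % infinity-norm:  largest absolute entry = 4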

The p-norm for constant vectors may easily be generalized to vector-valued signals.
Definition 4.3.3 (Lp-norm of a signal). For any 1 ≤ p ≤ ∞ the p-norm or Lp-norm ‖z‖Lp of a continuous-time scalar-valued signal z is defined by

    ‖z‖Lp  =  ( ∫_{−∞}^{∞} |z(t)|^p dt )^{1/p}      for 1 ≤ p < ∞,
    ‖z‖L∞  =  sup_{t∈R} |z(t)|                       for p = ∞.                (4.31)

If z(t) is vector-valued with values in Rn or Cn, this definition is generalized to

    ‖z‖Lp  =  ( ∫_{−∞}^{∞} ‖z(t)‖^p dt )^{1/p}      for 1 ≤ p < ∞,
    ‖z‖L∞  =  sup_{t∈R} ‖z(t)‖                       for p = ∞,                (4.32)

where ‖ · ‖ is any norm on the n-dimensional space Rn or Cn.

The signal norms that we mostly need are the L2-norm, defined as

    ‖z‖L2  =  ( ∫_{−∞}^{∞} ‖z(t)‖₂² dt )^{1/2}                                 (4.33)

and the L∞-norm, defined as

    ‖z‖L∞  =  sup_{t∈R} ‖z(t)‖∞.                                               (4.34)

The square ‖z‖²L2 of the L2-norm is often called the energy of the signal z, and the L∞-norm ‖z‖L∞ its amplitude or peak value.
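In numerical work these signal norms are usually approximated from samples. The following MATLAB fragment is a rough sketch (not part of the original text) that approximates the energy by the trapezoidal rule and the amplitude by the largest sample, using for illustration the two-entry signal of Example 4.3.4 below.

    t = 0:0.001:20;                        % time grid; the signal is negligible beyond t = 20
    z = [exp(-t); 2*exp(-3*t)];            % sampled two-entry signal, one row per entry
    energy    = trapz(t, sum(z.^2, 1));    % approximates the squared L2-norm (exact value 7/6)
    amplitude = max(max(abs(z)));          % approximates the L-infinity norm (exact value 2)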

Example 4.3.4 (L2 and L∞ -norm). Consider the signal z with two entries

    z(t) =  [ e^{−t} ½(t)    ]
            [ 2 e^{−3t} ½(t) ]


3. min(n .3. i = 1. Then there exist square unitary matrices U and V such that A = U V H. Moreover. if any.. Singular values of vectors and matrices We review the notion of singular values of a matrix3 . ordered in decreasing magnitude. It is known as the spectral norm. . . are zero. 2. and are denoted /2 1 /2 H H σ i ( A ) = λ1 i ( A A ) = λi ( A A ). 2. m ) largest eigenvalues λi ( AH A ). m ). with I a unit matrix of correct dimensions. From t = 0 onwards both entries of z (t ) decay to zero as t increases. Norms of signals and systems Here ½(t ) denotes the unit step. σ( A ) = σmin(n. m ). There exist numerically reliable methods to compute the matrices U and V and the singular values. The square roots of these min(n . The representation (4. (4. min(n. . 3 See for instance Section 10. . . The largest and smallest singular value σ1 ( A ) and σmin(n. . they are nonnegative real numbers. m ) eigenvalues are called the singular values of the matrix A. so z (t ) is zero for negative time.5 (Singular value decomposition). As a result. .35) of AH A and A AH . be the diagonal n × m matrix whose diagonal elements are σi ( A ). 2. t =∞ t =0 = 7 1 4 + = 2 6 6 The energy of z equals 7/6. If A is an n × m complex-valued matrix then the matrices AH A and A AH are both nonnegative-definite Hermitian. (4.m ) ( A ) respectively are commonly denoted by an overbar and an underbar σ( A ) = σ1 ( A ). . (4. 169 .4. min(n . λi ( A AH ). are equal.2. The remaining eigenvalues. The square of the L2 -norm follows as z L2 2 = 0 ∞ e−2t + 4e−6t dt = e−6t e−2t +4 −2 −6 √ 7/6.37) is known as the singular value decomposition (SVD) of the matrix A. the eigenvalues of both AH A and A AH are all real and nonnegative.37) Summary 4. respectively.8 of Noble (1969).3. m ). the largest singular value σ( A ) is a norm of A. i = 1. its L2 -norm is 4. Given n × m matrix A let A (complex-valued) matrix U is unitary if U H U = UU H = I . . . i = 1. The number of nonzero singular values equals the rank of A. .m ) ( A ). Therefore z L∞ = 1 2 ∞ = 2. The min(n.36) Obviously. These methods are numerically more stable than any available for the computation of eigenvalues.

equal to 5. for i = 1. j | Ai j|.3.7 (Singular value decomposition). min(n . 7.48 0. For i = 1. j | Ai j| ≤ σ( A ) ≤ n maxi. x=0 Ax 2 x 2 .6 (Singular value decomposition).4. √ 9.39) The matrix A has a single nonzero singular value (because it has rank 1). n 2 i=1 σi ( A ) = tr( AH A ). maxi. σ( A ) = min x∈Cn . with α any complex number. min(n . Exercise 4. 0. σ(α A ) = |α|σ( A ).40) Exercise 4. m ) the column vector ui is an eigenvector of A AH corresponding to the eigenvalue σi2 . R-5 of (Chiang and Safonov 1988)): 1. For i = 1. σ( B )).3. 170 . 10. Similarly. m. AH u i = σ i vi . 3.48 . Prove the following statements: 1. · · · . 0 (4.38) V = 1. where   0 −0 . min(n . (4. i = 1. (4. 5. Let A = U V H be the singular value decom- position of the n × m matrix A. σ( A B ) ≤ σ( A )σ( B ). 2. 2. · · · . 6. 8. Prove the following prop- erties (compare p.3. 3. · · · . 6 −0 . · · · . 2. 2. and those of the m × m unitary matrix V as vi .64 −0. Any remaining columns are eigenvectors corresponding to the eigenvalue 0. max(σ( A ). Denote the columns of the n × n unitary matrix U as u i . Any remaining columns are eigenvectors corresponding to the eigenvalue 0. 8 U = 0. 11. Hence. σ( A ) ≤ |λi ( A )| ≤ σ( A ). σ( B )) ≤ σ([ A B] ) ≤ 2 max(σ( A ). 2. σ( A ) = maxx∈Cn . · · · . i = 1.36   5 = 0 . with λi ( A ) the ith eigenvalue of A. the spectral norm of A is σ 1 ( A ) = 5.8 −0. Given is a square n × n matrix A. with singular values σi . 2. n. Ax 2 x 2 . j ) element of A.8 (Singular values). 2. m ) the vectors ui and vi satisfy Avi = σi ui . · · · . σ( A ) − σ( B ) ≤ σ( A + B ) ≤ σ( A ) + σ( B ). i = 1. 4. with Ai j the (i. The singular value decomposition of the 3 × 1 matrix A given by   0 A = 3 4 is A = U V H. m ). m ) the column vector vi is an eigenvector of AH A corresponding to the eigenvalue σi2 . min(n . x=0 2.6 0. σ( A + B ) ≤ σ( A ) + σ( B ). If A−1 exists then σ( A ) = 1/σ( A−1 ) and σ( A ) = 1/σ( A−1 ). Multivariable Control System Design Example 4.

3. to a normed space Y with norm · norm of the operator φ induced by the norms · U and · Y is defined by U Definition 4.4.11 (Matrix norms induced by vector norms). Prove that the norm of the matrix M induced by the ∞-norm (both for U and Y ) is the maximum absolute row sum M = max i j | Mi j|.1. then y Y ≤ φ · u U for any u ∈ U . Prove that this norm has the following additional properties: 2.3. Matrix norm induced by the 1-norm.9 (Induced norm of a linear operator). Constant matrices represent linear maps.3.3.41) Exercise 4. with U . Prove these statements. Exercise 4. Then φ1 φ2 ≤ φ1 · φ2 . The spectral norm σ( M ) is the norm induced by the 2-norm (see Exercise 4.3.8(1)). Matrix norm induced by the ∞-norm. Then depending on the norms defined on Ck and Cm we obtain different matrix norms.41) may be used to define norms of matrices. V and Y normed spaces. Consider the m × k complex-valued matrix M as a linear map Ck → Cm defined by y = Mu. Prove that the norm of the matrix M induced by the 1-norm (both for U and Y ) is the maximum absolute column sum M = max j i | Mi j|.43) with Mi j the (i. u U Y. If y = φu.41) is indeed a norm on this space satisfying the properties of Definition 4. The space of linear operators from a vector space U to a vector space Y is itself a vector space.3.3. Norms of linear operators and systems On the basis of normed signal space we may define norms of operators that act on these signals. with all the norms induced. Show that the induced norm · as defined by (4. Suppose that φ is a linear map φ : Y from a normed space U with norm · φu Y .3. 1. 1. 3. Norms of signals and systems 4. so that (4. j ) entry of M . (4. 2. (4. U→ Then the φ = sup u U =0 (4.42) with Mi j the (i. j ) entry of M . 171 . Submultiplicative property: Let φ1 : U → V and φ2 : V → Y be two linear operators.10 (Induced norms of linear operators).

4.3.4. Norms of linear systems

We next turn to a discussion of the norm of a system. Consider a system as in Fig. 4.4, which maps the input signal u into an output signal y. Given an input u, we denote the output of the system as y = φu. If φ is a linear operator the system is said to be linear. The norm of the system is now defined as the norm of this operator.

Figure 4.4.: Input-output mapping system (block diagram: u → φ → y)

We establish formulas for the norms of a linear time-invariant system induced by the L2- and L∞-norms of the input and output signals.

Summary 4.3.12 (Norms of linear time-invariant systems). Consider a MIMO convolution system with impulse response matrix h, so that its IO map y = φu is determined as

    y(t) = ∫_{−∞}^{∞} h(τ) u(t − τ) dτ,    t ∈ R.                       (4.44)

Moreover let H denote the transfer matrix, i.e., H is the Laplace transform of h.

1. L∞-induced norm. The norm of the system induced by the L∞-norm (4.33), if it exists, is given by

       max_{i=1,···,m} ∫_{−∞}^{∞} Σ_{j=1}^{k} |h_ij(t)| dt,             (4.45)

   where h_ij is the (i, j) entry of the m × k impulse response matrix h.
2. L2-induced norm. The norm of the system induced by the L2-norm (4.34) exists if and only if H is proper and has no poles in the closed right-half plane. In that case the L2-induced norm equals the H∞-norm of the transfer matrix, defined as

       ‖H‖_{H∞} = sup_{ω∈R} σ̄(H(jω)).                                  (4.46)

A sketch of the proof is found in § 4.6. If H has unstable poles then the induced norms do not exist.

Summary 4.3.13 (Norms for SISO systems). For SISO systems the expressions for the two norms obtained in Summary 4.3.12 simplify considerably. The norms of a SISO system with (scalar) impulse response h and transfer function H induced by the L∞- and L2-norms are successively given by the L1-norm of the impulse response

    ‖h‖_{L1} = ∫_{−∞}^{∞} |h(t)| dt,                                    (4.47)

and the peak value on the Bode plot if H has no unstable poles,

    ‖H‖_{H∞} = sup_{ω∈R} |H(jω)|.                                       (4.48)

If H has unstable poles then the induced norms do not exist.
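For a concrete SISO case the two induced norms of Summary 4.3.13 can be estimated with the Control Toolbox. The system in the sketch below is a hypothetical one, chosen only for illustration, and the integral of |h(t)| is approximated on a finite time grid.

    H = tf(1, [1 0.4 1]);               % hypothetical stable SISO system
    t = 0:1e-3:200;
    h = impulse(H, t);
    L1_norm   = trapz(t, abs(h))        % integral of |h(t)|, cf. (4.47)
    Hinf_norm = norm(H, inf)            % peak of the Bode magnitude plot, cf. (4.48)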

Example 4.3.14 (Norms of a simple system). As a simple example, consider a SISO first-order system with transfer function

    H(s) = 1 / (1 + sθ),                                                (4.49)

with θ a positive constant. The corresponding impulse response is

    h(t) = (1/θ) e^{−t/θ}  for t ≥ 0,    h(t) = 0  for t < 0.           (4.50)

It is easily found that the norm of the system induced by the L∞-norm is

    ‖h‖_1 = ∫_0^∞ (1/θ) e^{−t/θ} dt = 1.                                (4.51)

The norm induced by the L2-norm follows as

    ‖H‖_{H∞} = sup_{ω∈R} 1/|1 + jωθ| = sup_{ω∈R} 1/√(1 + ω²θ²) = 1.     (4.52)

For this example the two system norms are equal. Usually they are not.

Two further norms of linear time-invariant systems are commonly encountered.

Exercise 4.3.15 (H2-norm and Hankel norm of MIMO systems). Consider a stable MIMO system with transfer matrix H and impulse response matrix h.

1. H2-norm. Show that the H2-norm defined as

       ‖H‖_{H2} = ( ∫_{−∞}^{∞} tr[ H^T(−j2πf) H(j2πf) ] df )^{1/2} = ( ∫_{−∞}^{∞} tr[ h^T(t) h(t) ] dt )^{1/2}      (4.53)

   is a norm. The notation tr indicates the trace of a matrix.
2. Hankel norm. The impulse response matrix h defines an operator

       y(t) = ∫_{−∞}^{0} h(t − τ) u(τ) dτ,    t ≥ 0,                    (4.54)

   which maps continuous-time signals u defined on the time axis (−∞, 0] to continuous-time signals y defined on the time axis [0, ∞). The Hankel norm ‖H‖_H of the system with impulse response matrix h is defined as the norm of the map given by (4.54), induced by the L2-norms of u : (−∞, 0] → R^{n_u} and y : [0, ∞) → R^{n_y}. Prove that this is indeed a norm.
3. Compute the H2-norm and the Hankel norm of the SISO system of Example 4.3.14. Hint: To compute the Hankel norm, first show that if y satisfies (4.54) and h is given by (4.50), then

       ‖y‖_2² = (1/(2θ)) ( ∫_{−∞}^{0} e^{τ/θ} u(τ) dτ )².

   From this, prove that ‖H‖_H = 1/2.
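Item 3 of the exercise can be checked numerically via the controllability and observability gramians; the state-space formulas used in the sketch below are those of Summary 4.3.16 further on, and θ is given an arbitrary value.

    theta = 0.5;                                   % arbitrary positive time constant
    sys = ss(-1/theta, 1/theta, 1, 0);             % realization of H(s) = 1/(1 + s*theta)
    X = gram(sys, 'c');                            % controllability gramian
    Y = gram(sys, 'o');                            % observability gramian
    H2_norm     = sqrt(trace(sys.B' * Y * sys.B))  % expected: 1/sqrt(2*theta)
    Hankel_norm = sqrt(max(eig(X * Y)))            % expected: 1/2, independent of theta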

Summary 4.3.16 (State-space computation of common system norms). Suppose a system has proper transfer matrix H and let H(s) = C(sI_n − A)^{−1}B + D be a minimal realization of H. Then:

1. ‖H‖_{H2} is finite if and only if D = 0 and A is a stability matrix⁴. In that case

       ‖H‖_{H2} = √(tr B^T Y B) = √(tr C X C^T),

   where X and Y are the solutions of the linear equations

       A X + X A^T = −B B^T,    A^T Y + Y A = −C^T C.

   The solutions X and Y are unique and are respectively known as the controllability gramian and observability gramian.
2. The Hankel norm ‖H‖_H is finite only if A is a stability matrix. In that case

       ‖H‖_H = √(λ_max(XY)),

   where X and Y are the controllability gramian and observability gramian. The matrix XY has real nonnegative eigenvalues only, and λ_max(XY) denotes the largest of them.
3. The H∞-norm ‖H‖_{H∞} is finite if and only if A is a stability matrix. Then γ ∈ R is an upper bound, ‖H‖_{H∞} < γ, if and only if σ̄(D) < γ and the 2n × 2n matrix

       [ A  0 ; −C^T C  −A^T ] − [ B ; −C^T D ] (γ²I − D^T D)^{−1} [ D^T C   B^T ]      (4.55)

   has no imaginary eigenvalues. Computation of the H∞-norm involves iteration on γ.

⁴ A constant matrix A is a stability matrix if it is square and all its eigenvalues have negative real part.

Proofs are listed in Appendix 4.6.

4.4. BIBO and internal stability

Norms shed a new light on the notions of stability and internal stability. Consider the MIMO input-output mapping system of Fig. 4.4 with linear input-output map φ. In § 1.3.2 we defined the system to be BIBO stable if any bounded input u results in a bounded output y = φu. Now bounded means having finite norm, and so different norms may yield different versions of BIBO stability. Normally the L∞- or L2-norms of u and y are used.

We review a few fairly obvious facts.

Summary 4.4.1 (BIBO stability).

1. If ‖φ‖ is finite, with ‖·‖ the norm induced by the norms of the input and output signals, then the system is BIBO stable.
2. Suppose that the system is linear time-invariant with rational transfer matrix H. Then the system is BIBO stable (both when the L∞- and the L2-norms of u and y are used) if and only if H is proper and has all its poles in the open left-half complex plane.
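The state-space tests of Summary 4.3.16 and the stability condition of Summary 4.4.1(2) can be combined in a short script. The 2 × 2 system below is hypothetical, and the Hamiltonian test follows (4.55); this is a sketch, not a substitute for the Robust Control Toolbox routines.

    % hypothetical stable 2x2 system with D = 0
    A = [-1 2; 0 -3];  B = eye(2);  C = [1 0; 1 1];  D = zeros(2);
    max(real(eig(A))) < 0                       % A is a stability matrix

    % H2-norm and Hankel norm via the gramians
    X = lyap(A, B*B');                          % A X + X A' = -B B'
    Y = lyap(A', C'*C);                         % A' Y + Y A = -C' C
    H2     = sqrt(trace(B'*Y*B));               % equals sqrt(trace(C*X*C'))
    Hankel = sqrt(max(eig(X*Y)));

    % H-infinity bound: gamma is an upper bound iff (4.55) has no imaginary eigenvalues
    gamma = 2*norm(ss(A,B,C,D), inf);
    R   = gamma^2*eye(2) - D'*D;
    Ham = [A, zeros(2); -C'*C, -A'] - [B; -C'*D]*(R\[D'*C, B']);
    all(abs(real(eig(Ham))) > 1e-8)             % expected: true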

Exercise 4.4.2 (Proof).

1. Prove the statements 1 and 2 of Summary 4.4.1.
2. Show by a counterexample that the converse of 1 is not true, that is, for certain systems and norms the system may be BIBO stable in the sense as defined while ‖φ‖ is not necessarily finite.
3. In the literature the following better though more complicated definition of BIBO stability is found: The system is BIBO stable if for every positive real constant N there exists a positive real constant M such that ‖u‖ ≤ N implies ‖y‖ ≤ M. With this definition the system is BIBO stable if and only if ‖φ‖ < ∞. Prove this.

Given the above it will be no surprise that we say that a transfer matrix is stable if it is proper and has all its poles in the open left-half complex plane.

In § 1.3.2 we already defined the notion of internal stability for MIMO interconnected systems: An interconnected system is internally stable if the system obtained by adding external input and output signals in every exposed interconnection is BIBO stable. Here we specialize this for the MIMO system of Fig. 4.5.

Figure 4.5.: Multivariable feedback structure (block diagram: r → e → K → u → P → y)

Figure 4.6.: Inputs r, w and outputs e, u defining internal stability (the loop of Fig. 4.5 with the external signal w injected at the plant input)

Assume that the feedback system shown in Fig. 4.5 is multivariable, i.e., u or y have more than one entry. The closed loop of Fig. 4.5 is by definition internally stable if in the extended closed loop of Fig. 4.6 the maps from (w, r) to (e, u) are BIBO stable. In terms of the transfer matrices, these maps are

    [ e ; u ] = [ I  P ; −K  I ]^{−1} [ r ; w ]                         (4.56)
              = [ I − P(I + KP)^{−1}K   −P(I + KP)^{−1} ; (I + KP)^{−1}K   (I + KP)^{−1} ] [ r ; w ].      (4.57)

Necessary for internal stability is that det(I + P(∞)K(∞)) ≠ 0.
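A minimal SISO illustration of this definition, with a hypothetical plant and controller that are not those of the examples that follow: when an unstable controller pole is cancelled by a plant zero, the map from r to y can look stable while the map from r to u is not, so the loop is not internally stable.

    s = tf('s');
    P = (s - 1)/(s + 1);              % plant with a right half-plane zero
    K = 1/(s - 1);                    % controller cancelling it with an unstable pole

    S  = minreal(feedback(1, P*K));   % map r -> e: (I + PK)^(-1)
    KS = minreal(feedback(K, P));     % map r -> u: K (I + PK)^(-1)
    pole(S)                           % poles in the open left-half plane
    pole(KS)                          % contains s = 1: not internally stable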

Lemma 4. The unstable pole s = 1 of the controller does not appear in the product PK . Consider the loop gain L (s ) = 1 s+1 0 1 s−1 1 s+1 . C P . Example 4. The unity feedback around L gives as closed-loop characteristic polynomial: det( I + L (s )) det(sI − A L ) = s+2 s+1 2 (s + 1 )2 (s − 1 ) = (s + 2 )2 (s − 1 ). Multivariable Control System Design Feedback systems that satisfy this nonsingularity condition are said to be well-posed . The following is a variation of Eqn. K (s ) = 1 0 2s s−1 s − s2 −1 . Example 4. P (s ) K (s ) = 1 s 2 s+1 0 2 s+1 . but the complementary sensitivity matrix ( I + L (s ))−1 L (s ) = 1 s+2 s2 +2s+3 ( s−1 )( s+2 )2 1 s+2 0 is unstable. Consider P (s ) = 1 s 2 s+1 1 s 1 s .3 (Internal stability). D P ) and ( A K . Then the feedback system of Fig. C K . If the n y × nu system P is not square.4. D K ). B P . ∗ 176 . Indeed the lower-left entry of this map is unstable. having minimal realizations ( A P . 4.5 (Internal stability and pole-zero cancellation). Evaluation of the sensitivity matrix ( I + L (s ))−1 = s+1 s+2 +1 − (ss+ 2 )2 s+1 s+2 0 shows that this transfer matrix in the closed-loop system is stable. K ( I + PK )−1 = ∗ s2 − (s−1)(s4+ 1 )( s+3 ) ∗ . (4. Suppose P and K are proper. (Hsu and Chen 1968).5 is internally stable if and only if det( I + D P D K ) = 0 and the polynomial det( I + P (s ) K (s )) det(sI − A P ) det(sI − A K ) has all its zeros in the open left-half plane.4 (Internal stability of closed-loop system). This may suggest as in the SISO case that the map from r to u will be unstable. (1.44). confirming that the closed-loop system indeed is not internally stable. B K .4. then the dimension n y × n y of I + PK and dimension n u × nu of I + K P differ.4.4. The fact that det( I + PK ) = det( I + K P ) allows to evaluate the stability of the system at the loop location having the lowest dimension.58) The closed loop system hence is not internally stable.

2. Let P be the 2 × 2 system P (s ) = s+ 1 −s 1 and suppose that v is a persistent harmonic iω0 t 1 for some known ω0 . MIMO structural requirements and design methods Instability may also be verified using Lemma 4. s+1 s 2 s+1 s+3 s+1 0 s2 (s + 1 )(s − 1 ) = s (s − 1 )(s + 1 )(s + 3 ) v r e K u P z Figure 4.1. In Example 4. 177 .2 one decoupling precompensator is found that diagonalizes the loop gain PK with the property that both diagonal elements of PK have a right half-plane zero at s = 1.5. Why is it useful to express K and S in terms of P and Q? 1 1 s 3.4. disturbance v(t ) = 2 e a) Assume ω0 = ±1. Express K and S := ( I + PK )−1 in terms of P and Q. it is readily verified that det( I + P (s ) K (s )) det(sI − A P ) det(sI − A K ) = det and this has a zero at s = 1.3..6 (Non-minimum phase loop gain). 1. Exercise 4.e.4. the choice of placement of the actuators and sensors may affect the number of righthalf plane zeros of the plant and thereby may limit the closed loop performance.7 (Internal model control & MIMO disturbance rejection). For example.4.: Disturbance rejection for MIMO systems Exercise 4. MIMO structural requirements and design methods During both the construction and the design of a control system it is mandatory that structural requirements of plant and controller and the control configuration are taken into account. Find a stabilizing K (using the parameterization by stable Q) such that in closed loop the effect of v(t ) on z (t ) is asymptotically rejected (i.7. 4. Show that a controller with transfer matrix K stabilizes if and only if Q := K ( I + PK )−1 is a stable transfer matrix. These requirements show up in the MIMO design methods that are discussed in this section. b) Does there exist such a stabilizing K if ω0 = ±1? (Explain.5.4. Show that every internally stabilizing decoupling controller K has this property. irrespective which controller is used.7 and suppose that the plant P is stable. e (t ) → 0 as t → ∞ for r (t ) = 0).) 4. Consider the MIMO system shown in Fig. In this section a number of such requirements will be identified.

4.5.1. Output controllability and functional reproducibility

Various concepts of controllability are relevant for the design of control systems. A minimal requirement for almost any control system, to be included in any control objective, is the requirement that the output of the system can be steered to any desired position in output space by manipulating the input variables. This property is formalized in the concept of output controllability.

Summary 4.5.1 (Output controllability). A time-invariant linear system with input u(t) and output y(t) is said to be output controllable if for any y_1, y_2 ∈ R^p there exists an input u(t), t ∈ [t_1, t_2] with t_1 < t_2, that brings the output from y(t_1) = y_1 to y(t_2) = y_2. In case the system has a strictly proper transfer matrix and is described by a state-space realization (A, B, C), the system is output controllable if and only if the constant matrix

    [ CB   CAB   CA²B   ···   CA^{n−1}B ]                               (4.59)

has full row rank.

If in the above definition C = I_n then we obtain the definition of state controllability. The property of output controllability is an input-output property of the system while (state) controllability is not. Note that the concept of output controllability only requires that the output can be given a desired value at each instant of time. The concept of output controllability is generally weaker: a system (A, B, C) may be output controllable and yet not be (state) controllable⁵.

A stronger requirement is to demand the output to be able to follow any preassigned trajectory in time over a given time interval. A system capable of satisfying this requirement is said to be output functional reproducible or functional controllable. Functional controllability is a necessary requirement for output regulator and servo/tracking problems. Brockett and Mesarovic (1965) introduced the notion of reproducibility, which was termed functional controllability by Rosenbrock (1970).

Summary 4.5.2 (Output functional reproducibility). A system having proper real-rational n_y × n_u transfer matrix P is said to be functionally reproducible if rank P = n_y. In particular, for functional reproducibility it is necessary that n_y ≤ n_u.

Example 4.5.3 (Output controllability and functional reproducibility). Consider the linear time-invariant state-space system with

    A = [ 0 0 0 ; 1 0 0 ; 0 1 0 ],    B = [ 0 1 ; 1 0 ; 0 0 ],    C = [ 0 1 0 ; 0 0 1 ].      (4.60)

This forms a minimal realization of the transfer matrix

    P(s) = C(sI − A)^{−1}B = [ 1/s   1/s² ; 1/s²   1/s³ ].              (4.61)

The system (A, B, C) is controllable and observable, and C has full rank. Thus the system also is output controllable. However, the rank of P is 1. Thus the system is not functionally reproducible.

⁵ Here we assume that C has full row rank, which is a natural assumption because otherwise the entries of y are linearly dependent.
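Both rank tests are one-liners in MATLAB. The sketch below uses the realization of Example 4.5.3 as given above and evaluates P(s) at a generic complex point to estimate its normal rank.

    A = [0 0 0; 1 0 0; 0 1 0];
    B = [0 1; 1 0; 0 0];
    C = [0 1 0; 0 0 1];
    n = size(A, 1);

    M = [];
    for k = 0:n-1
        M = [M, C*A^k*B];          % [CB  CAB  CA^2B], cf. (4.59)
    end
    rank(M)                        % 2 = n_y: output controllable

    s0 = 1 + 1i;                   % generic point, not a pole of P
    P0 = C/(s0*eye(n) - A)*B;
    rank(P0, 1e-8)                 % 1 < n_y: not functionally reproducible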

The subject of noninteracting or decoupling control as discussed in this section is based on the works of Silverman (1970). This allows the tuning of individual controllers in separate feedback loops. s fm ) is such that (4. MIMO structural requirements and design methods 4. with subsequent design of the actual feedback loops for the various single-input. and it is thought to provide an acceptable control structure providing ease of survey for process operators and maintenance personnel. m ) such that D f defined as D f (s ) = diag(s f1 . Exercise 4.4. Suppose that P has state-space representation x ˙(t ) y (t ) = = Ax (t ) + Bu (t ). n y = nu = m. . The f i equal the maximum of the relative degrees of the entries in the ith row of P. Cx (t ) + Du (t ) with u and y having equally many entries. .2.5. Verify this.62) D := lim D f (s ) P (s ) |s|→∞ (4. . i. Decoupling control A classical approach to multivariable control design consists of the design of a precompensator that brings the system transfer matrix to diagonal form. In this case we proceed as follows. It may be shown that f i ≤ n.5. In this section we investigate whether and how a square P can be brought to diagonal form by applying feedback and/or static precompensation.5. then by construction the largest relative degree in each row of D f P is zero. Indeed. If D is singular then the inverse P−1 —assuming it exists—is not proper. then the inverse P−1 is proper and has realization P−1 (s ) = − D−1 C (sI − A + BD−1 C )−1 BD−1 + D−1 . . . P (s ) = C (sI − A )−1 B + D square. and as a consequence lim|s|→∞ D f (s ) P (s ) has a nonzero value at precisely the entries of relative degree zero. Summary 4. . . If the direct feedthrough term D = P (∞ ) of P is invertible.63) is nonsingular. This identifies the indices f i uniquely. Assume in what follows that P is invertible. The presentation follows that of Williams and Antsaklis (1996). The inverse transfer matrix P−1 —necessarily a rational matrix—may be viewed as a precompensator of P that diagonalizes the series connection P P−1 . otherwise f i = min{ k > 0 | row i of C Ak−1 B is not identical to zero }. Define the indices f i ≥ 0 (i = 1.. single-output channels separately. then the regular state feedback u (t ) = D −1 (−C x (t ) + v(t )) 179 .4 (Realization of inverse). s f2 .5. The indices f i so defined are known as the decoupling indices. If D as defined in (4. The indices f i may also be determined using the realization of P: If the ith row of D is not identical to zero then f i = 0.5 (Decoupling state feedback).e. Williams and Antsaklis (1986). where n is the state dimension.63) is defined and every row of D has at least one nonzero entry.

Consider a plant with transfer matrix P (s ) = 6 7A 1/ s −1 / s 2 1 / s2 .6 (Decoupling. Let ( A . We also know sI B that regular state-feedback does not change the zeros of the matrix A− C D . In other words. . 4. Its realization is 1 . s− fm ). .7. A proof is given in Appendix 4. . after state feedback the realization has unobservable modes at the open loop transmission zeros. .8(b). controllable and detectable and has a stable transfer matrix diag( p 1 Example 4. p1m ).4. Then there is a regular state feedback u (t ) = Fx (t ) + W v(t ) for which the loop from v(t ) to y (t ) is decoupled. . This is undesirable if P has unstable transmission zeros as it precludes closed loop stability using output feedback. The compensated plant is shown in Fig. monic polynomial is a polynomial whose highest degree coefficient equals 1. 180 . . B.8. If P has no unstable transmission zeros. C. (Diagonal decoupling by state feedback precompensation). . pm ) where the pi are strictly Hurwitz6 if monic7 polynomials of degree f i . The 1 decoupled plant D− f has all its poles at s = 0 and it has no transmission zeros. stabilizing state feedback).   .5. A decoupled plant C e K u P y e K v x u P y D −1 1 Decoupled plant D− f (a) (b) Figure 4. (b) with decoupled plant that is stable may offer better perspectives for further performance enhancement by multiloop feedback in individual channels.: (a) closed loop with original plant. To obtain a stable decoupled plant we take instead of (4.  Cm∗ A fm  renders the system with output y and new input v decoupled with diagonal transfer matrix 1 − f1 − f2 D− . . . . Multivariable Control System Design where  C1∗ A f1  C2∗ A f2   C =  .6. The formulae are now more messy.8. −1 / s 2 A strictly Hurwitz polynomial is a polynomial whose zeros have strictly negative real part. . but the main results holds true also for this choice of D f : Summary 4.5. f ( s ) = diag( s Here Ci∗ is denotes the ith row of the matrix C . . square m × m and invertible. This implies that after state feedback all non-zero transmission zeros of the system are canceled by poles. then we may proceed with the 1 decoupled plant D− f and use (diagonal) K to further the design. s . see Fig.62) a D f of the form D f = diag( p1 . Suppose the transfer matrix P of the plant is proper. D ) be a minimal realization of P and let f i be the decoupling indices and suppose that pi are Hurwitz polynomials of degree f i . 4. and suppose it has no unstable transmission zeros. .

MIMO structural requirements and design methods A minimal realization of P is  0 0  1 0   0 0 A B =  0 0 C D   0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 1 0 0 0 1 0 −1 0 0 0     .5. Likewise  1 0 = 0 0 . A realization of D f (s ) P (s ) = is ( A . (s + 1 )/s2 Thus P has one transmission zeros at s = −1. As it is nonzero we have f 2 = 2. −1 The matrix D is nonsingular so a decoupling state feedback exists. C . D ) with (s + 1 )/s ( s + 1 )2 / s 2 (s + 1 )/s2 − ( s + 1 )2 / s 2 C= 1 0 1 0 0 2 0 . 1 D= 1 0 .4. C2∗ A B = 1 −1 . It is a stable zero hence we may proceed. B.    Its transmission zeros follow after a sequence of elementary row and column operations P (s ) = 1 s s2 1 1 −1 ⇒ 1 s+1 1 0 1 s2 ⇒ 1 1 s2 0 0 1 / s2 = 0 s+1 0 . ( s + 1 )2 Its diagonal entries have the degrees f 1 = 1 and f 2 = 2 as required. The direct feedthrough term D is the zero matrix so the decoupling indices f i are all greater than zero.   0 1 1 0   C1∗ B = 0 1 0 0   1 −1  = 1 0 0 0 it is not identically zero so that f 1  0 1 C2∗ B = 0 0 0 1  1 0 = 1. Take D f (s ) = s+1 0 0 . We need to compute Ci∗ Ak−1 B. −1  0 It is zero hence we must compute C2∗ AB. 1 −1 181 . This gives us D= C1∗ B 1 = C2∗ A B 1 0 .
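The construction of Summary 4.5.5 can be scripted. The sketch below rebuilds a minimal realization of the plant of this example (its state coordinates will in general differ from those used in the text), computes the decoupling indices and the decoupling state feedback, and verifies that the closed loop from v to y is diag(1/s, 1/s²).

    s = tf('s');
    P = [1/s, 1/s^2; 1/s^2, -1/s^2];        % plant of Example 4.5.6
    G = minreal(ss(P));                     % some minimal realization
    A = G.A;  B = G.B;  C = G.C;  n = size(A,1);

    f = zeros(2,1);                         % decoupling indices
    for i = 1:2
        for k = 1:n
            if norm(C(i,:)*A^(k-1)*B) > 1e-8, f(i) = k; break, end
        end
    end

    Dmat = [C(1,:)*A^(f(1)-1)*B; C(2,:)*A^(f(2)-1)*B];   % the matrix D of (4.63)
    Ctil = [C(1,:)*A^f(1);        C(2,:)*A^f(2)];
    F = -(Dmat\Ctil);  W = inv(Dmat);                    % u = F*x + W*v

    zpk(minreal(tf(ss(A + B*F, B*W, C, 0))))             % expected: diag(1/s, 1/s^2)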

A trivial example is the 2 × 2 system with constant transfer matrix P (s ) = 1 0 . Multivariable Control System Design Now the regular state feedback u (t ) = 1 1 0 −1 −1 − 1 0 1 0 0 2 0 x (t ) + v(t ) 1 renders the system with input v and output y. Exercise 4. a MIMO system generally does not have a unique gain.5. This is typical for decoupling procedures. After applying this state feedback we arrive at the state-space system     −1 −1 −2 1 1 −1  0 −1 0   0  x (t ) + 1 0  v(t ) x ˙(t ) = A + BD −1 (−C x (t ) + v(t )) =  0 0 1  0 −2 −1 0 0 1 0 0 0 0 1 0 0 y (t ) = Cx (t ) + Dv(t ) = x (t ) 0 0 0 1 (4. 0 2 The gain is in between 1 and 2 depending on the direction of the input. 182 .5. 1/(s − 1 ) 1/s 1.4. 2.9 (Bounds on gains). A natural generalization of the SISO gain | y ( jω)|/|u ( jω)| from input u ( jω) to output y ( jω) is to use norms instead of merely absolute values.5.64) and its transfer matrix by construction is 1 D− f (s ) = 1/(s + 1 ) 0 0 . Therefore (4. decoupled. 1/(s + 1 )2 1 Note that a minimal realization of D− f has order 3 whereas the realization (4.64) is not minimal.5 and Summary 4.8 (An invertible system that is not decouplable by state feedback).6 are not applicable here. For any given input u and fixed frequency ω there holds σ( P ( jω)) ≤ P ( jω)u ( jω) u ( jω) 2 2 ≤ σ( P ( jω)) (4.5. Directional properties of gains In contrast to a SISO system. Summary 4. Show that the methods of Summary 4. Find the transmission zeros.65) and the lower bound σ( P ( jω)) and upper bound σ( P ( jω)) are achieved for certain inputs u.5. 3. Find a stable precompensator K0 that renders the product PK0 diagonal.3. Consider the plant with transfer matrix P (s ) = 1/(s + 1 ) 1/s . There are various ways to define gains for MIMO systems. We take the 2-norm.64) has order 4. 4.

and MacFarlane 1981) and they have the special property that their gains are precisely the corresponding singular values P ( jω)uk ( jω) uk ( jω) 2 2 (4.10 (Upper and lower bounds on gains).5. The columns uk ( jω) of U ( jω) the input principal directions. The columns of Y ( jω) are the output principal directions and the σk ( jω) the principal gains. So at each frequency ω the matrix T ( jω) has two singular values. Edmunds. σ2 ( jω). .nu ) ( jω) ≥ 0.9 shows the two singular values σ( jω) and σ( jω) as a function of frequency. (Postlethwaite. . that is. Let P ( jω) = Y ( jω) ( jω)U ∗ ( jω) be an SVD (at each frequency) of P ( jω).67) 183 .9.5.4. .: Lower and upper bound on the gain of T ( jω) Example 4. MIMO structural requirements and design methods 10 1 10 0 10 -1 10 -2 10 -3 10 -4 10 -2 10 -1 10 0 10 1 10 2 Figure 4. Near the crossover frequency the singular values differ considerably. σmin(n y .66) = σk ( jω) and the response yk ( jω) = P ( jω)uk ( jω) in fact equals σk ( jω) times the kth column of Y ( jω). Consider the plant with transfer matrix P (s ) = 1 s − 2(s1 +1 ) 1 √ s2 + 2s+1 1 s . Figure 4. Y ( jω) and U ( jω) are unitary and ( jω) = diag(σ1 ( jω). . The condition number κ defined as κ( P ( jω)) = σ( P ( jω)) σ( P ( jω)) (4. In unity feedback the complementary sensitivity matrix T = P ( I + P )−1 has dimension 2 × 2. . . ≥ σmin(n y .nu ) ( jω)) with σ1 ( jω) ≥ σ2 ( jω) ≥ .
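The principal gains over frequency are what the MATLAB sigma command computes. Below is a sketch for a hypothetical 2 × 2 plant (not the plant of Example 4.5.10) under unity feedback.

    s = tf('s');
    P = [10/(s+1), 1/(s+2); -2/(s+1), 5/(s^2+s+1)];   % hypothetical 2x2 plant
    T = feedback(P, eye(2));                          % complementary sensitivity P(I+P)^(-1)

    w  = logspace(-2, 2, 300);
    sv = sigma(T, w);                                 % singular values versus frequency
    loglog(w, sv(1,:), w, sv(end,:))                  % upper and lower principal gain
    xlabel('frequency [rad/s]'), ylabel('principal gains of T')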

The upperbound shows that where σ( P ( jω)) is not too large that σ( K ( jω)) 1 guarantees a small gain from r ( jω) to u ( jω). A high condition number indicates that the system is “close” to losing its full rank. u ( jω) r ( jω) 2 2 = ≤ ≤ ≤ ≤ ≤ ( I + K ( jω) P ( jω))−1 K ( jω)r ( jω) r ( jω) 2 σ(( I + K ( jω) P ( jω))−1 K ( jω)) σ(( I + K ( jω) P ( jω))−1 )σ( K ( jω)) σ( K ( jω)) σ( I + K ( jω) P ( jω)) σ( K ( jω)) 1 − σ( K ( jω) P ( jω)) σ( K ( jω)) .69) (4. The performance of a multivariable feedback system can be assessed using the notion of principal gains as follows. (4. 1 − σ( K ( jω))σ( P ( jω)) 2 (4.e. At the same time the restriction of the propagation of measurement noise requires σ( T ( jω)) 1. then κ( jω) = 1.11 (Multivariable control system—design goals). Indeed. The upper bound (4. The disturbance rejecting properties of the loop transfer require the sensitivity matrix to be small: σ( S ( jω)) 1. The basic idea follows the ideas previously discussed for single input.8 are used and in the last two inequalities it is assumed that σ( K ( jω))σ( P ( jω)) < 1. and between σ( S ) and σ( S ). We must also be concerned to make the control inputs not too large. Thus the transfer matrix ( I + K P )−1 K from r to u must not be too large. Example 4. 184 . As in the SISO case these two requirements are conflicting. not too large in the cross over region.70) Here several properties of Exercise 4.5.70) of this gain is easily determined numerically. Multivariable Control System Design is a possible measure of the difficulty of controlling the system.4. using the triangle inequality we have the bounds |1 − σ( S ( jω))| ≤ σ( T ( jω)) ≤ 1 + σ( S ( jω)) and |1 − σ( T ( jω))| ≤ σ( S ( jω)) ≤ 1 + σ( T ( jω)). Describing the same system in terms of a different set of physical units leads to a different condition number. single output systems.3. Note that the condition number is not invariant under scaling. close to not satisfying the property of functional controllability. Making them equal would imply that the whole plant behaves identical in all directions and this is in general not possible. such as when P ( jω) is unitary.68) It is useful to try to make the difference between σ( T ) and σ( T ). i. If κ( jω) 1 then the loop gain L = PK around the crossover frequency may be difficult to shape. If for a plant all principal gains are the same.

respectively.67). (4. Example 4. that is. and similarly u2 is controlled only by y2 .4. Consider a two-input-two- output system y1 u =P 1 y2 u2 and suppose we decide to close the loop by a diagonal controller u j = K j (r j − y j ). Decentralized control structures Dual to decoupling control where the aim is make the plant P diagonal..12 (Relative gain of a two-input-two-output system). The most and least favorable disturbance directions are those for which v is into the direction of the principal output directions corresponding to σ( P ( jω)) and σ( P ( jω)).   K1 0   K2   (4.5. It follows that 1 ≤ κv ( P ) ≤ κ( P ) and κv ( P ) generalizes the notion of condition number of the plant defined in (4.74) 4. κv ( P ( jω)) = P−1 ( jω)v( jω) v( jω) 2 2 σ( P ( jω)) (4.71) = P−1 ( jω)v( jω) v( jω) 2 2 (4. MIMO structural requirements and design methods The direction of an important disturbance can be taken into consideration to advantage. This leads to the disturbance condition number of the plant. Complete rejection of the disturbance would require an input u ( jω) = − P−1 ( jω)v( jω). 0 Km This control structure is called multiloop control and it assumes that there as many control inputs as control outputs. . Let v( jω) be a disturbance acting additively on the output of the system.72) measures the magnitude of u needed to reject a unit magnitude disturbance acting in the direction v.5.   . is decentralized control where it is the controller K that is restricted to be diagonal or block diagonal. This is the input-output pairing problem. Thus u ( jω) v( jω) 2 2 (4. it is save to say that we can design the second control loop independent 185 . the first component of u is only controlled by the first component of y.4.75) K = diag{ Ki } =  .73) which measures the input needed to reject the disturbance v relative to the input needed to reject the disturbance acting in the most favorable direction. If the open loop gain from u2 to y2 does not change a lot as we vary the first loop u1 = K1 (r1 − y1 ). In a multiloop control structure it is in general desirable to make an ordering of the input and output variables such that the interaction between the various control loops is as small as possible.5. Suppose we leave the second control loop open for the moment and that we vary the first control loop.

k = j closed 2. Now if the first loop is open then the first input entry u1 is zero (or constant).77) where ◦ denotes the Hadamard product which is the entry-wise product of two matrices of the same dimensions.76) as λ= dy2 /du2 dy2 /du2 u1 y1 where u1 expresses that u1 is considered constant in the differentiation. To make this λ more explicit we assume now that the reference signal r is constant and that all signals have settled to their constant steady state values. Multivariable Control System Design from the first. In summary. yk ). we want the ratio λ := gain from u2 to y2 if first loop is open gain from u2 to y2 if first loop is closed (4. yk ). This is desirable. This expression for λ exists and may be computed if P (0 ) is invertible: dy2 du2 = u1 d P21 (0 )u1 + P22 (0 )u2 du2 = P22 (0 ) u1 and because u = P−1 (0 ) y we also have dy2 du2 = y1 1 du2 /dy2 = y1 1 −1 −1 d P21 (0 ) y1 + P22 (0 ) y2 y1 dy2 = 1 −1 (0 ) P22 −1 = P22 (0 ). This allows us to express (4. Summary 4. The relative gain λ hence equals P22 (0 ) P22 For general square MIMO systems P we want to consider the relative gain array (RGA) which is the (rational matrix) defined element-wise as ij 1 = Pi j P− ji or. equivalently as a matrix. −1 (0 ). k = j open dyi du j all loops ( uk . as = P ◦ ( P−1 )T (4. However if the first loop is closed then—assuming perfect control—it is the output y1 that is constant (equal to r1 ). that the second control loop is insensitive to tuning of the first. 1.5. In constant steady state there holds that i j (0 ) = dyi du j all loops ( uk .13 (Properties of the RGA). The sum of the elements of each row or each column of is 1 3.4.76) preferably close to 1. Any permutation of rows and columns in P results in the same permutations of rows and columns in 186 . or to put it differently.

λ We immediate see that rows and columns add up to one. If P is diagonal or upper or lower triangular then 6. In the fourth example all entries of the RGA are the same. and (Skogestad and Morari 1987). two-output case P= P11 P21 P12 . P11 P22 − P21 P12 1−λ . In the first example this pairing is also direct from the fact that P is diagonal.5 0.4.5. is invariant under diagonal input and output scaling = Im 5. It 1 indicates that P becomes singular by an element perturbation from Pi j to Pi j[1 − λ− i j ].5 0. hence no pairing can be deduced from the RGA. For the two-input. Some interesting special cases are P= P= P= P= P11 0 P11 P21 0 P21 p −p 0 P22 0 P22 P12 0 p p ⇒ ⇒ ⇒ ⇒ = = = = 1 0 1 0 0 1 0 1 0 1 1 0 0. so we want to pair u1 with y2 and u2 with y1 . For example P= with δ ≈ 0.5. If deviates a lot from Im then this is an indication that interaction is present in the system. Morari. P11 the relative gain array = λ 1−λ = P ◦ P−T equals λ := P11 P22 . and possibly no sensible pairing exists.5 0. The corresponding pairings are to be avoided. P22 ( P−1 )T = 1 P22 P11 P22 − P21 P12 − P12 − P21 . Example 4. (Grosdidier. The norm σ( ( jω)) of the RGA is believed to be closely related to the minimized condition number of P ( jω) under diagonal scaling. 7. and thus serves as an indicator for badly conditioned systems. and u2 with y1 . The following i/o pairing rules have been in use on the basis of an heuristic understanding of the properties of the RGA: a a a a (1 + δ) ⇒ = 1+δ δ −1 δ −1 δ 1+δ δ 187 . Although the RGA has been derived originally by Bristol (1966) for the evaluation of P (s ) at steady-state s = 0 assuming stable steady-state behavior. the relative gain suggests to pair u1 with y1 .14 (Relative Gain Array for 2 × 2 systems).5 In the first and second example. In the third example the RGA is antidiagonal. as claimed. If P is close to singular then several entries of are large. and Holt 1985). MIMO structural requirements and design methods 4. The RGA measures the sensitivity of P to relative element-by-element uncertainty. See (Nett and Manousiouthakis 1987). the RGA may be useful over the complete frequency range.

The following result by Hovd and Skogestad (1992) makes more precise the statement in (Bristol 1966) relating negative entries in the RGA.15 (Relative Gain Array and RHP transmission zeros). The loop with the negative steady-state relative gain is unstable by itself 3. If i j shows different signs when evaluated at s = 0 and s = ∞. Let the plant P be stable and square. The following result has been shown by Grosdidier. If the RGA (s ) of the plant contains a negative diagonal value for s = 0 then the closed-loop system satisfies at least one of the following properties: 1. The overall closed-loop system is unstable 2. and nonminimum-phase behavior. Multivariable Control System Design 1. Avoid pairings with negative steady-state or low-frequency values of λii ( jω) on the diagonal. In a problem setting employing integral feedback. and Holt 1985). Summary 4. Assume that the entries of are nonzero and finite for s → ∞. 2. See also the counterexample against Bristol’s claim in (Grosdidier and Morari 1987). a requirement arises regarding the ability to take one loop out of operation without destabilizing the other loops. Assume further that the loop gain PK is strictly proper. then at least one of the following statements holds: 1. The entry Pi j has a zero in the right half plane 2. and (Morari 1985). this leads to the notion of decentralized integral controllability (Campo and Morari 1994). The subsystem of P with input j and output i removed has a transmission zero in the right half plane As decentralized or multiloop control structures are widely applied in industrial practice. It is also assumed that the integrator in the control loop is put out of order when εi = 0 in loop i.5. and Holt (1985).17 (Steady-state RGA and stability). The closed-loop system is unstable if the loop corresponding to the negative steady-state relative gain is opened A further result regarding stability under multivariable integral feedback has been provided by Lunze (1985). see also (Grosdidier.16 (Decentralized integral controllability (DIC)). The system P is DIC if there exists a decentralized (multiloop) controller having integral action in each loop such that the feedback system is stable and remains stable when each loop i is individually detuned by a factor ε i for 0 ≤ εi ≤ 1. P has a transmission zero in the right half plane 3. This prevents undesirable stability interaction with other loops. Summary 4. Morari. The steady-state RGA provides a tool for testing on DIC. Suppose that P has stable elements having no poles nor zeros at s = 0.5. 188 . Definition 4.4. Prefer those pairing selections where ( jω) is close to unity around the crossover frequency region. and consider a multiloop controller K with diagonal transfer matrix having integral action in each diagonal element. The definition of DIC implies that the system P must be open-loop stable. Morari.5.
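The RGA itself is a one-line computation. The sketch below evaluates Λ = P ∘ (P^{−1})^T, cf. (4.77), for an arbitrary steady-state gain matrix and for a hypothetical dynamic plant at one frequency; note the ordinary transpose, not the conjugate transpose.

    P0   = [2 1; 1 3];                       % arbitrary steady-state gain matrix
    RGA0 = P0 .* transpose(inv(P0));         % Lambda = P o (P^{-1})^T
    sum(RGA0, 1), sum(RGA0, 2)               % rows and columns sum to 1

    s = tf('s');
    P = [2/(s+1), 1/(s+2); 1/(2*s+1), 3/(s+1)];   % hypothetical 2x2 plant
    Pw   = freqresp(P, 1);                   % frequency response at omega = 1 rad/s
    RGAw = Pw .* transpose(inv(Pw))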

Frazho.81) yields x ˙(t ) x ˙s (t ) y (t ) = = A − Bs C C 0 0 As x (t ) B 0 (r (t ) − v(t )) + u (t ) + xs (t ) 0 Bs x (t ) + v(t ). Cv xv (t ). and Frazho 1991) and in (Nwokah.5. Without loss of generality we further assume that ( Av . and Lundstr¨ om (1991). Internal model principle and the servomechanism problem The servomechanism problem is to design a controller – called servocompensator – that renders the plant output as insensitive as possible to disturbances while at the same asymptotically track certain reference inputs r .5. Bs ) is controllable. and in (Hovd and Skogestad 1994). x v ( 0 ) = x v0 (4. consisting of the plant (4. but otherwise arbitrary. Cv ) is an observable pair. xs (t ) 189 .82) and where Bs is of compatible dimensions and such that ( As . y ∈ Rn y .81) where A s is block-diagonal with n y blocks As = diag( Av . Consider the closed loop system consisting of the open-loop stable plant P and the controller K having transfer matrix K (s ) := α Kss s (4. Hovd.18 (Steady-state gain and DIC). The inherent control limitations of decentralized control have been studied by Skogestad. . MIMO structural requirements and design methods Summary 4.5.79) and assume that v is a disturbance that is generated by a dynamic system having all of its poles on the imaginary axis x ˙v ( t ) = v(t ) = Av xv (t ). Av ) (4. Av . x (0 ) = x0 ∈ Rn (4. and Le 1993). We discuss a state space solution to this problem. Further results on decentralized integral controllability can be found in (Le.78) Assume unity feedback around the loop and assume that the summing junction in the loop introduces a minus sign in the loop. Then a necessary condition for the closed loop system to be stable for all α in an interval 0 ≤ α ≤ α with α > 0 is that the matrix P (0 ) Kss must have all its eigenvalues in the open right half plane. Then a possible choice of the servocompensator is the compensator (controller) with realization x ˙s (t ) = As xs (t ) + Bs e (t ). .80) The disturbance is persistent because the eigenvalues fof Av are assumed to lie on the imaginary axis. a solution that works for SISO as well as MIMO systems. Consider a plant with state space realization x ˙(t ) y (t ) = = Ax (t ) + Bu (t ) Cx (t ) + v(t ) u ( t ) ∈ R nu . . Further results on decentralized i/o pairing can be found in (Skogestad and Morari 1992).79) and the servocompensator (4. .4. e (t ) = r (t ) − y (t ) x s (0 ) = x s 0 (4. The composite system.5. Nwokah. 4.

the disturbance v(t ) is still present and the servocompensator acts in an open loop fashion as the autonomous generator of a compensating disturbance at the output y (t ). B ) is stabilizable and ( A . but it may be stabilized by an appropriate state feedback u (t ) = Fx (t ) + Fs xs (t ) (4. it is an example of the more general internal model principle. so that the disturbance model used for v can also be utilized for r . Hence the output y (t ) approaches r (t ) as t → ∞. 1. rank A−λi B C D = n + n y for every eigenvalue λi of Av . This means that the servocompensator contains the mechanism that generates v. there are at least as many inputs as there are outputs 3. Then F . Multivariable Control System Design As its stands the closed loop is not stable. ( A . • No transmission zero of the system should coincide with one of the imaginary axis poles of the disturbance model • The system should be functionally controllable • The variables r and v occur in the equations in a completely interchangeable role. such as LQ theory. The conditions as mentioned imply the following.81). r (t ) = Cr xv (t ) the error signal e (t ) converges to zero.10. n u ≥ n y . i.84) has all its eigenvalues in the open left-half plane. 190 . when e (t ) is zero. • Asymptotically. for any r that satisfies x ˙v (t ) = Av xv (t ).83) is a servocompensator in that e (t ) → 0 as t → ∞ for any of the persistent v that satisfy (4. Fs can be selected such that the closed loop system matrix A + BF − Bs C B Fs As (4. That is. 2.: Servomechanism configuration Summary 4.(4.e.19 (Servocompensator). v r e K xs Fs F u P x y Figure 4. Suppose that the following conditions are satisfied.5. of equal form as v(t ) but of opposite sign.80). 4. C ) is detectable.4. The system matrix As of the servocompensator contains several copies of the system matrices Av that defines v. In that case the controller (4.10.83) where F and Fs are chosen by any method for designing stabilizing state feedback. The resulting closed loop is shown in Fig. see (Davison 1996) and (Desoer and Wang 1980).

5. ( A . MIMO structural requirements and design methods • If the state of the system is not available for feedback. ˙ x ˆ(t ) u (t ) = = ( A − KC ) x ˆ(t ) + Bu (t ) + Ky (t ) Fx ˆ (t ) + Fs xs (t ). We shall verify the conditions of Summary 4. rank A C B = n + n y. x ˙(t ) = Ax (t ) + Bu (t ).20 (Integral action). This is the case if and only if 0 0 B 0 AB CB A2 B · · · C AB · · · has full row rank.19 states. 191 . 0 I xs Closing the loop with state feedback u (t ) = Fx (t ) + Fs xs (t ) renders the closed loop state space representation x ˙(t ) x ˙ s (t ) y (t ) = = A + BF −C C 0 B Fs 0 x (t ) 0 + (r (t ) − v(t )) xs (t ) I x (t ) + v(t ). 2. Combined with the integrating action of the servocompensator x ˙ s (t ) = e (t ). B ) is controllable. The above matrix may be decomposed as A C B 0 0 Im B 0 AB 0 ··· ··· and therefore has full row rank if and only if 1. Example 4. then the use of a state estimator is possible without altering the essentials of the preceding results. we obtain x ˙(t ) x ˙ s (t ) = = A 0 A −C 0 0 0 0 B x (t ) 0 + u (t ) + e (t ) 0 xs (t ) I B 0 x + u (t ) + (r (t ) − v(t )).) The three conditions are exactly what Summary 4. 3. 0 (In fact the third condition implies the second.4.5. As before the plant P is assumed strictly proper with state space realization and corrupted with noise v. xs (t ) All closed-loop poles can be arbitrarily assigned by state feedback u (t ) = Fx (t ) + Fs xs (t ) if and A 0 . with e (t ) = r (t ) − y (t ). y (t ) = Cx (t ) + v(t ). B only if C is controllable.19 for this case. If the servocompensator is taken to be an integrator x ˙s = e then constant disturbances v are asymptotically rejected and constant reference inputs r asymptotically tracked. n u ≥ n y .5. If the full state x is not available for feedback then we may replace the feedback law u (t ) = Fx (t ) + Fs xs (t ) by an approximating observer.5.
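A compact numerical version of this example, for a hypothetical second-order SISO plant: augment the plant with the integrating servocompensator, check the rank condition, and stabilize the augmented system with any state feedback design (LQR is used here purely for convenience).

    A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];      % hypothetical plant
    rank([A, B; C, 0])                              % 3 = n + n_y: condition satisfied

    Aaug = [A, zeros(2,1); -C, 0];                  % augmented states [x; xs], xs' = e
    Baug = [B; 0];
    K  = lqr(Aaug, Baug, eye(3), 1);                % any stabilizing design will do
    F  = -K(1:2);  Fs = -K(3);                      % u = F*x + Fs*xs

    Acl = [A + B*F, B*Fs; -C, 0];                   % closed loop driven by r - v
    step(ss(Acl, [zeros(2,1); 1], [C, 0], 0))       % settles at 1: e(t) -> 0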

(4.0 ). then the mass balances over each vessel and a relation for outflow are: ˙ 1 (t ) A1 ρ h ˙ 2 (t ) A2 ρh = = ρ[φ1 (t ) + φ4 (t ) − φ2 (t )] ρ[φ2 (t ) − φ4 (t ) − φ3 (t )] (4. φi (t ) h k (t ) = = ˜ i (t ).87) where k is a valve dependent constant.1 is derived under the following assumptions: • The liquid is incompressible.0 . As ρ goes to infinity.0 2 (4. φi. Define the deviations from the equilibrium values with tildes.13.0 . Proof of Algorithm 4. and h1. Let ρ denote density of the liquid expressed in kg/m3 . • The pressure in both vessels is atmospheric.0 = 1 leads to Equation (4.0 h h 2 h 1.0 ) gives ˜ 2 (t ) = φ k 1˙ ˙ ˜ 1 (t ).0 ˙ ˜ 2 (t ) h 2 A1 h1.0 h 1. • The flow φ4 is the instantaneous result of a flow-controlled recycle pump.e. Appendix: Proofs and Derivations Derivation of process dynamic model of the two-tank liquid flow process. • φ3 is determined by a valve or pump outside the system boundary of the process under consideration.90) Assuming that φ4.0 0 0 ˜ 1 (t ) h + ˜ h2 ( t ) 1 A1 0 1 A1 1 −A 2 ˜ 1 (t ) 0 ˜ φ + 1 φ3 ( t ) ˜ −A φ4 ( t ) 2 (4. 2. • The flow φ2 depends instantaneously on the static pressure defined by the liquid level H1 .0 = 1. φi. φi..86) yields the linearized equations φ 2. and thus φ2. ˜ 1 ( t ) = φ 2.85). Multivariable Control System Design 4.88) Linearization of equation (4.4 k = 1.4. Next we linearize these equations around an assumed equilibrium state (h k .0 + h i = 1 . that is. We only prove the square case. h k .0 .0 = φ1.0 ˙ ˜ 1 (t ) − 2A h 1 h 1. and let A1 and A2 denote the crosssectional areas (in m2 ) of each vessel.0 + φ ˜ k (t ). .1).0 = 2φ1. independent of the variables under consideration in the model. (4.86) The flow rate φ2 (t ) of the first tank is assumed to be a function of the level h1 of this tank. (4. The common model is that φ2 (t ) = k h1 (t ). . i. A dynamic process model for the liquid flow process of figure 4.0 . the zeros s of   A − sI B 0  C D I 1 K 0 ρI 192 . and taking numerical values φ1.87) around (h k .6.85) (4. A1 = A2 = 1. where D and K are sqaure.0 = φ 2.89) Inserting this relation into equations (4.2.

2.6. Then we get the matrix   A − sI −ρ BK B  C I − ρ DK D  . Appendix: Proofs and Derivations converge to the transmission zeros. So D = 0 and A is a stability matrix. 1. If A is not a stability matrix then h (t ) is unbounded. Y satisfies the Lyapunov equation AT Y + Y A = −C T C .91) is well known to be unique. However. (4.16.4. By the property of state. Now apply the following two elementary operations to the above matrix: subtract ρ K times the second column from the third column and then interchange these columns.91) That is. Then ∞ 0 T y (t )T y (t ) dt = 0 ∞ A t T xT C C e At x0 dt = xT 0e 0 Y x0 . 193 .3. So D = 0 and h (t ) = C e At B½(t ). and as A is a stability matrix the solution Y of (4. T If X satisfies A X + X A = − BBT then its inverse Z := X −1 satisfies Z A + AT Z = − Z B BT Z . the output y for positive time is a function of the state x0 at t = 0. T The so defined matrix Y satisfies AT Y + Y A = 0 ∞ AT e A t C T C e At + e A t C T C e At A dt = e A t C T C e At T T T t =∞ t =0 = −C T C. 1 0 0 ρI In this form it is clear that its zeros are the zeros of A − sI C −ρ BK I − ρ DK I −( I − ρ DK )−1 C nonsingular 1 I − DK )−1 C . By completion of the square we may write d T x Zx dt = = = 2 xT Z x ˙ = 2 xT ( Z Ax + Z Bu ) = xT ( Z A + AT Z ) x + 2 xT Z Bu xT (− Z B BT Z ) xT + 2 xT Z Bu uT u − (u − BT Zx )T (u − BT Zx ). hence the H2 -norm is infinite. since ρB I − ρ DK 1 I − DK )−1 C − sI A + BK ( ρ 0 = I 0 Proof of Summary 4. If D is not zero then the impulse response matrix h (t ) = C e At B½(t ) + δ(t ) D contains Dirac delta functions and as a result the H2 -norm can not be finite. Let X and Y be the controllability and observability gramians. we see that these zeros are simply the eigenvalues of A + BK ( ρ A−s I −rho B C I −ρ D K . Then H 2 H2 = = = ∞ tr −∞ ∞ h (t )T h (t ) dt BT e A t C T C e At B dt ∞ 0 Y T tr 0 tr BT e A t C T C e At dt B .

which in turn is equivalent to that λi (Y X ) = λi ( X 1/2 Y X 1/2 ) < γ 2 . σ( D ) < γ . . Proof of Summary 4. (4. T x0 X −1 x0 Now this supremum is less then some γ 2 for some x0 if and only if 2 T −1 xT 0 Y x0 < γ · x0 X x0 . γ 2 I − DT D > 0) and nowhere on the imaginary axis γ 2 I − H ∼ H is singular.e.92) 194 . − AT − sI where Aham is the Hamiltonian matrix (4. Multivariable Control System Design Therefore.e. then the derivative y ˙ (t ) satisfies y ˙ (t ) = Cx ˙(t ) = C Ax (t ) + C Bu (t ) The matrix D f (s ) = diag(s f1 .5. Cx (t ) + Du (t ) x (0 ) = 0 . We will show that γ 2 I − H ∼ H has imaginary zeros iff (4. .55). i. We make use of the fact that if x ˙(t ) y (t ) = = Ax (t ) + Bu (t ). This is the case iff it is positive definite at infinity (i. H H∞ < γ holds if and only if H is stable and γ 2 I − H ∼ H is positive definite on jR ∪ ∞. . assuming x (−∞ ) = 0. This proves the result.. It is readily verified that a realization of γ 2 I − H ∼ H is   A − sI 0 −B  −C T C − AT − sI  CT D T T 2 T D C B γ I−D D By the Schur complement results applied to this matrix we see that det A − sI −C T C 0 · det(γ 2 I − H ∼ H ) = det(γ 2 I − DT D ) · det( Aham − sI ). This then shows that sup u ∞ T 0 y ( t ) y ( t ) dt 0 T −∞ u ( t )u ( t ) dt = xT 0 Y x0 .4. There exist such x0 iff Y < γ 2 X −1 .5. . As A has no imaginary eigenvalues we have that γ 2 I − H ∼ H has no zeros on the imaginary axis iff Aham has no imaginary eigenvalues.55) has imaginary eigenvalues. Cx (t ) x (0 ) = 0 . 3. Since the plant y = Pu satisfies x ˙(t ) y (t ) = = Ax (t ) + Bu (t ). s fm ) is a diagonal matrix of differentiators. From this expression it follows that the smallest u (in norm) that steers x from x (−∞ ) = 0 to x (0 ) = x0 is u (t ) = BT Zx (t ) (verify this). 0 −∞ u (t ) 2 2 − u (t ) − BT Zx (t ) 2 2 dt = xT (t ) Zx (t ) t =0 t =−∞ T −1 = xT 0 Zx0 = x0 X x0 .

(4.  . x (0 ) = 0 . 195 .12 are obtained.2. ym (t ) Ax (t ) + Bu (t ). Proof of Norms of linear time-invariant systems.  x (t ) + D u (t )  .3. (4. so u (t ) = D −1 (−C x (t ) + v(t )). the ith component yi of the output y = h ∗ u of the convolution system may be bounded as | y i ( t )| = ≤ so that y L∞ ∞ −∞ ∞ −∞ j j hi j (τ)u j (t − τ) d τ ≤ |h i j (τ)| d τ · u L∞ . Proof of Lemma 4. 1.   C1∗ A f1 C2∗ A f2     .95) The proof that the inequality may be replaced by equality follows by showing that there exists an input u such that (4.  . We only prove it for the case that none of the transmission zeros s0 are poles. ∞ −∞ j |h i j (τ)| · |u j (t − τ)| d τ t ∈ R.  Cm A fm C By assumption D is nonsingular.4. Since A − s 0 In C B D In 0 (s0 In − A )−1 B A − s 0 In = I C 0 P ( s0 ) s0 I B we see that the rank of A− C D equals n plus the rank of P ( s0 ). We indicate how the formulas for the system norms as given in Summary 4. Hence s0 is a transmission zero A−s0 In B of P if and only if C D drops rank. L∞ -induced norm. Appendix: Proofs and Derivations it follows that the component-wise differentiated v defined as v := D f y satisfies    v(t ) =    d f1 dt f 1 d f2 dt f 2 d fm dt fm x ˙(t ) =  y1 ( t )  y2 ( t )   = .6. C ) x (t ) + DD v(t ) −1 1 Since v = D f y it must be that the above is a realization of the D− f .8 (sketch). .94) is achieved with equality. Then ( A − s0 In )−1 exists for any transmission zero.94) This shows that φ ≤ max i ∞ −∞ j |h i j (τ)| d τ. Inserting this in the state realization (4. In terms of explicit sums. (C − DD −1 x (0 ) = 0 . (4.92) yields x ˙(t ) y (t ) = = ( A − BD −1C ) x (t ) + BD −1 v(t ).93) ≤ max i ∞ −∞ j |h i j (τ)| d τ · u L∞ . and s0 is a transmission zero if and only if P (s0 ) drops rank.

ω (4.97) where y ˆ is the Laplace transform of y and u ˆ that of u.4. The proof that the inequality may be replaced by equality follows by showing that there exists an input u for which (4.96) (4.g. L2 -induced norm. By Parseval’s theorem (see e. For any fixed frequency ω we have that u ˆH ( jω) H H ( jω) H ( jω)u ˆ ( jω) = σ 2 ( H ( jω)).98) The right hand side of this inequality is by definition H 2 H∞ . H ( jω)u u ˆ ˆ ( j ω) u ( jω) sup Therefore sup u y u L2 L2 2 2 ≤ sup σ 2 ( H ( jω)). Multivariable Control System Design 2. 196 .98) is achieved with equality within an arbitrarily small positive margin ε. (Kwakernaak and Sivan 1991)) we have that y u L2 L2 2 2 = = = ∞ H −∞ y ( t ) y ( t ) dt ∞ H −∞ u ( t )u ( t ) dt ∞ 1 ˆH ( jω) y ˆ ( jω) d ω 2π −∞ y ∞ 1 H ˆ ( jω)u ˆ ( jω) d ω 2π −∞ u ∞ H H ˆ ( jω) H ( jω) H ( jω)u ˆ ( jω) d ω −∞ u ∞ H ˆ ( jω)u ˆ ( jω) d ω −∞ u (4.

Next. Moreover.2 is devoted to parametric uncertainty models. Section 5. We discuss some methods to analyze closed-loop stability under this type of uncertainty. The most famous result is Kharitonov’s theorem. The idea is to assume that the equations that describe the dynamics of the plant and compensator (in particular. Parametric and nonparametric uncertainty with varying degree of structure may be captured by the basic perturbation model. It is introduced in § 5. Unstructured perturbations may involve changes in the order of the dynamics and are characterized by norm bounds. The uncertainty is characterized by an interval of possible values.6 are relatively crude.3 we introduce the so-called basic perturbation model for linear systems. The perturbation models of § 5. Uncertainty Models and Robustness Overview – Various techniques exist to examine the stability robustness of control systems subject to parametric uncertainty.5 sufficient and necessary conditions for the robust stability of the basic perturbation model. 5.6 the basic stability robustness result is applied to proportional. in § 5. we present methods to analyze the effect of uncertainty on closed-loop stability and performance. With the help of that we establish in § 5. which admits a much wider class of perturbations. In § 5.5. Norm bounds are bounds on the norms of operators corresponding to systems.1. In § 5.7. Introduction In this chapter we discuss various paradigms for representing uncertainty about the dynamic properties of a plant. The small gain theorem provides the tool to analyze the stability robustness of this model. proportional inverse perturbations and fractional perturbations of feedback loops. These methods allow to generalize the various stability robustness results for SISO systems of Chapter 2 in several ways. their transfer functions or matrices) are known. including unstructured perturbations. but that there is uncertainty about the precise values of various parameters in these equations. The size of the perturbation is characterized by bounds on the norm of the perturbation. An appealing feature of structured 197 .4 we review the fixed point theorem of functional analysis to the small gain theorem of system theory. Doyle’s structured singular value allows much finer structuring.

and such that the closed-loop system is stable at the nominal plant parameter values g = g0 = 1 and θ = 0.1 (Third-order uncertain plant). The question is whether the system remains stable under variations of g and θ. 198 . reference input r prefilter F error signal e + − compensator C plant input u plant P output z Figure 5. From this point on they are usually omitted.1. (5. 5. 5.4) Presumably.8. 1 + T0 s (5.2. that is. and T0 are accurately known. The number θ is a parasitic time constant and nominally 0 [s]1 .1) The gain g is not precisely known but is nominally equal to g0 = 1 [s−2 ]. Uncertainty Models and Robustness singular value analysis is that it allows studying combined stability and performance robustness in a unified framework. 1 All physical units are SI. the return difference J = 1 + L = 1 + PC is given by J (s ) = 1 + θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk k + Td s g = . whether the roots of the closed-loop characteristic polynomial remain in the open left-half complex plane.1 where the plant transfer function is given by g . A system of this type can successfully be controlled by a PD controller with transfer function C (s ) = k + Td s. With these plant and compensator transfer functions.2) where the time constant T0 is small so that the PD action is not affected at low frequencies. In § 5. but contain several parameters whose values are not precisely known. We illustrate this by an example. the stability of the feedback system is determined by the locations of the roots of the closed-loop characteristic polynomial χ(s ) = θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk .5. This is explained in § 5. In this view of uncertainty the plant and compensator transfer functions are assumed to be given. Parametric robustness analysis 5.9 a number of proofs for this chapter are presented.2. Introduction In this section we review the parametric approach to robustness analysis. P (s ) = 2 s (1 + θ s ) (5. The latter transfer function is not proper so we modify it to C (s ) = k + Td s .2.: Feedback system Example 5.3) Hence. the compensator can be constructed with sufficient precision so that the values of the parameters k. Consider a feedback configuration as in Fig. s2 (1 + θ s ) 1 + T0 s s2 (1 + θ s )(1 + T0 s ) (5. Td .1.

We assume that the ground rules for the construction of root loci are known. Varying k from 0 to ∞ or from 0 to −∞ results in the root locus of Fig.7071.: Root locus for compensator consisting of a simple gain Example 5.6) s2 + g0 Td s + g0 k . Parametric robustness analysis Because we pursue this example at some length we use the opportunity to demonstrate how the compensator may be designed by using the classical root locus design tool. 5.3 (only the part for k ≥ 0 is shown). This may be achieved by placing the two closed-loop poles at √ 1 2 ( − 1 ± j ).2.: Root locus for a PD compensator The closed-loop characteristic polynomial corresponding to the PD compensator transfer function C (s ) = k + sTd is easily found to be given by (5. 5.e. Modification to a PD controller with transfer function C (s ) = k + sTd amounts to addition of a zero at −k / Td . For no value of k stability is achieved. Accordingly.2.2 (Compensator design by root locus method).2. (5. √ √ poles at the Choosing g0 Td = 2 and g0 k = 1 (i. The zero in Fig.. Setting the ratio of the imaginary to the real part of this pole pair equal to 1 ensures an adequate time response with a good compromise between rise time and overshoot. We assume that a closed-loop bandwidth of 1 is required.3. Given the nominal plant with transfer function g0 (5. Im Re zero at − k/Td double pole at 0 Figure 5.5) P0 (s ) = 2 .2. k= Im k=0 8 k =− Re Figure 5.2 is altered to that of Fig.3 may now be found at −k / Td = − 2 The final step in the design is to make the compensator transfer function proper by changing it to k + Td s . desired locations. 5. Td = 2 and k = 1) places the closed-loop √ 1 2 = −0. keeping −k / Td fixed while varying k the root locus of Fig. resulting in the desired closed2 loop bandwidth.7) C (s ) = 1 + T0 s 8 199 . 5. The distance of this pole pair from the origin is 1.5. s the simplest choice for a compensator would be a simple gain C (s ) = k.
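The pole locations quoted in this example are quickly verified numerically; the script below uses the design values g0 = 1, Td = √2, k = 1 and T0 = 1/10.

    g0 = 1;  Td = sqrt(2);  k = 1;  T0 = 1/10;

    roots([1, g0*Td, g0*k])          % ideal PD design: poles at (-1 +/- j)/sqrt(2)
    roots([T0, 1, g0*Td, g0*k])      % proper PD, theta = 0: dominant pair slightly
                                     % shifted, plus a far-away real pole near -1/T0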

Figure 5.4.: Root locus for the modified PD compensator (pole at −1/T0, zero at −k/Td, double pole at 0)

This amounts to adding a pole at the location −1/T0. Assuming that T0 is small the root locus now takes the appearance shown in Fig. 5.4. The corresponding closed-loop characteristic polynomial is

T0 s^3 + s^2 + g0 Td s + g0 k.   (5.8)

Keeping the values Td = √2 and k = 1 and choosing T0 somewhat arbitrarily equal to 1/10 (which places the additional pole in Fig. 5.4 at −10) results in the closed-loop poles

−0.7652 ± j0.7715   and   −8.4697.   (5.9)

The dominant pole pair at (1/2)√2 (−1 ± j) has been slightly shifted and an additional non-dominant pole at −8.4697 appears.

5.2.2. Routh-Hurwitz Criterion

In the parametric approach, stability robustness analysis comes down to investigating the roots of a characteristic polynomial of the form

χ(s) = χn(p) s^n + χn−1(p) s^(n−1) + · · · + χ0(p),   (5.10)

whose coefficients χn(p), χn−1(p), · · ·, χ0(p) depend on the parameter vector p. Usually it is not possible to determine the dependence of the roots on p explicitly.

Sometimes, if the problem is not very complicated, the Routh-Hurwitz criterion may be invoked for testing stability. For completeness we summarize this celebrated result (see for instance Chen (1970)). Recall that a polynomial is Hurwitz if all its roots have zero or negative real part. It is strictly Hurwitz if all its roots have strictly negative real part.

Summary 5.2.3 (Routh-Hurwitz criterion). Consider the polynomial

χ(s) = a0 s^n + a1 s^(n−1) + · · · + a_{n−1} s + a_n   (5.11)

with real coefficients a0, a1, · · ·, an. From these coefficients we form the Hurwitz tableau

a0   a2   a4   a6   · · ·
a1   a3   a5   a7   · · ·
b0   b2   b4   · · ·
b1   b3   b5   · · ·
· · ·                          (5.12)

The first two rows of the tableau are directly taken from the "even" and "odd" coefficients of the polynomial χ, respectively. The third row is constructed from the two preceding rows by letting

[b0  b2  b4  · · ·] = [a2  a4  a6  · · ·] − (a0/a1) [a3  a5  a7  · · ·].   (5.13)

Note that the numerator of b0 is obtained as minus the determinant of the 2 × 2 matrix formed by the first column of the two rows above and the column to the right of and above b0. Similarly, the numerator of b2 is obtained as minus the determinant of the 2 × 2 matrix formed by the first column of the two rows above and the column to the right of and above b2. All further entries of the third row are formed this way. We stop adding entries to the third row when they all become zero. The fourth row b1, b3, · · · is formed from the two rows above it in the same way the third row is formed from the first and second rows. All further rows are constructed in this manner. Missing entries at the end of the rows are replaced with zeros. We stop after n + 1 rows.

Given this tableau, a necessary and sufficient condition for the polynomial χ to be strictly Hurwitz is that all the entries a0, a1, b0, b1, · · · of the first column of the tableau are nonzero and have the same sign.

A useful necessary condition for stability is that all the coefficients a0, a1, · · ·, an of the polynomial χ are nonzero and all have the same sign.

In principle, the Routh-Hurwitz criterion allows establishing the stability region of the parameter dependent polynomial χ as given by (5.10), that is, the set of all parameter values p for which the roots of χ all have strictly negative real part. In practice, this is often not simple.

Example 5.2.4 (Stability region for third-order plant). By way of example we consider the third-order plant of Examples 5.2.1 and 5.2.2. Using the numerical values established in Example 5.2.2 we have from Example 5.2.1 that the closed-loop characteristic polynomial is given by

χ(s) = (θ/10) s^4 + (θ + 1/10) s^3 + s^2 + g√2 s + g.   (5.14)

The parameter vector p has components g and θ. The Hurwitz tableau may easily be found to be given by

θ/10        1      g
θ + 1/10    g√2
b0          g
b1
g                              (5.15)

where

b0 = [(θ + 1/10) − (√2/10) g θ] / (θ + 1/10),   (5.16)

b1 = [√2 − (θ + 1/10)^2 / ((θ + 1/10) − (√2/10) g θ)] g.   (5.17)
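For given numerical values of g and θ the tableau is easily generated and tested by machine. The following MATLAB sketch (plain script, no toolbox; the function name and interface are our own) implements the construction of (5.12)-(5.13) and the first-column test; it assumes that no first-column entry happens to vanish during the recursion.

    function stable = routh_tableau(a)
    % a = [a0 a1 ... an]: coefficients of chi(s) = a0*s^n + ... + an, with a0 ~= 0.
    % Builds the Hurwitz tableau of Summary 5.2.3 and checks that all first-column
    % entries are nonzero and of the same sign (strict Hurwitz condition).
    n    = numel(a) - 1;
    cols = ceil((n + 1)/2) + 1;                 % one spare column of zeros
    R    = zeros(n + 1, cols);
    ev = a(1:2:end); R(1, 1:numel(ev)) = ev;    % "even" coefficients a0, a2, a4, ...
    od = a(2:2:end); R(2, 1:numel(od)) = od;    % "odd"  coefficients a1, a3, a5, ...
    for i = 3:n + 1                             % rows b, c, ... as in (5.13)
        R(i, 1:cols-1) = R(i-2, 2:cols) - (R(i-2, 1)/R(i-1, 1))*R(i-1, 2:cols);
    end
    c1     = R(:, 1);
    stable = all(c1 ~= 0) && (all(c1 > 0) || all(c1 < 0));
    end

    % Example call for (5.14):  routh_tableau([theta/10, theta + 1/10, 1, g*sqrt(2), g])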

Inspection of the coefficients of the closed-loop characteristic polynomial (5.14) shows that a necessary condition for closed-loop stability is that both g and θ be positive. This condition ensures the first, second and fifth entries of the first column of the tableau to be positive. The third entry b0 is positive if g < g3(θ), where g3 is the function

g3(θ) = 5√2 (θ + 1/10) / θ.   (5.18)

The fourth entry b1 is positive if g < g4(θ), with g4 the function

g4(θ) = 5 (θ + 1/10)(√2 − 1/10 − θ) / θ.   (5.19)

Figure 5.5 shows the graphs of g3 and g4 and the resulting stability region. The case θ = 0 needs to be considered separately. Inspection of the root locus of Fig. 5.4 (which applies if θ = 0) shows that for θ = 0 closed-loop stability is obtained for all g > 0.

Figure 5.5.: Stability region (graphs of g3 and g4 in the (θ, g)-plane)

Exercise 5.2.5 (Stability margins). The stability of the feedback system of Example 5.2.4 is quite robust with respect to variations in the parameters θ and g. Inspect the Nyquist plot of the nominal loop gain to determine the various stability margins of § 1.3.2 and Exercise 1.4.9(b) of the closed-loop system.

5.2.3. Gridding

If the number of parameters is larger than two or three then it seldom is possible to establish the stability region analytically as in Example 5.2.4. A simple but laborious alternative method is known as gridding. It consists of covering the relevant part of the parameter space with a grid, and testing for stability in each grid point. Each point may be obtained by applying the Routh-Hurwitz criterion or any other stability test. Clearly this is a task that easily can be coded for a computer. The grid does not necessarily need to be rectangular. The number of grid points increases exponentially with the dimension of the parameter space, however, and the computational load for a reasonably fine grid may well be enormous.

Example 5.2.6 (Gridding the stability region for the third-order plant). Figure 5.6 shows the results of gridding the parameter space in the region defined by 0.5 ≤ g ≤ 4 and 0 ≤ θ ≤ 1. The small circles indicate points where the closed-loop system is stable. Plus signs correspond to unstable points.
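A sketch of the gridding computation of Example 5.2.6 in plain MATLAB (here stability at each grid point is simply tested by computing the roots of (5.14); the grid density is arbitrary):

    k = 1; Td = sqrt(2); T0 = 0.1;
    clf; hold on;
    for g = linspace(0.5, 4, 15)
        for theta = linspace(0, 1, 15)
            chi = [theta*T0, theta + T0, 1, g*Td, g*k];
            if all(real(roots(chi)) < 0)
                plot(theta, g, 'o');              % stable grid point
            else
                plot(theta, g, '+');              % unstable grid point
            end
        end
    end
    xlabel('\theta'); ylabel('g');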

Figure 5.6.: Stability region obtained by gridding

5.2.4. Kharitonov's theorem

Gridding is straightforward but may involve a tremendous amount of repetitious computation. In 1978, Kharitonov (1978a) (see also Kharitonov (1978b)) published a result that subsequently attracted much attention in the control literature² because it may save a great deal of work. Kharitonov's theorem deals with the stability of a system with closed-loop characteristic polynomial

χ(s) = χ0 + χ1 s + · · · + χn s^n.   (5.20)

Each of the coefficients χi is known to be bounded in the form

χ̲i ≤ χi ≤ χ̄i,   (5.21)

with χ̲i and χ̄i given numbers for i = 0, 1, · · ·, n. Kharitonov's theorem allows verifying whether all these characteristic polynomials are strictly Hurwitz by checking only four special polynomials out of the infinite family.

Summary 5.2.7 (Kharitonov's theorem). Each member of the infinite family of polynomials

χ(s) = χ0 + χ1 s + · · · + χn s^n,   (5.22)

with

χ̲i ≤ χi ≤ χ̄i,   i = 0, 1, 2, · · ·, n,   (5.23)

is strictly Hurwitz if and only if each of the four Kharitonov polynomials

k1(s) = χ̲0 + χ̲1 s + χ̄2 s^2 + χ̄3 s^3 + χ̲4 s^4 + χ̲5 s^5 + χ̄6 s^6 + · · ·,   (5.24)
k2(s) = χ̄0 + χ̄1 s + χ̲2 s^2 + χ̲3 s^3 + χ̄4 s^4 + χ̄5 s^5 + χ̲6 s^6 + · · ·,   (5.25)
k3(s) = χ̄0 + χ̲1 s + χ̲2 s^2 + χ̄3 s^3 + χ̄4 s^4 + χ̲5 s^5 + χ̲6 s^6 + · · ·,   (5.26)
k4(s) = χ̲0 + χ̄1 s + χ̄2 s^2 + χ̲3 s^3 + χ̲4 s^4 + χ̄5 s^5 + χ̄6 s^6 + · · ·   (5.27)

is strictly Hurwitz. Note the repeated patterns of under- and overbars.

A simple proof of Kharitonov's theorem is given by Minnichelli, Anagnost, and Desoer (1989). We see what we can do with this result for our example.

² For a survey see Barmish and Kang (1993).

From Example 5.2. This shows that the polynomial χ0 + χ1 s + χ2 s2 + χ3 s3 + χ4 s4 is not strictly Hurwitz for all variations of the coefficients within the bounds (5. k3 . 5 + 5 2 + s2 + 0 .5 and g = 5.31) (5.1s3 + 0. 0 ≤ θ ≤ θ. however. so that the closed-loop system is stable for all parameter variations. √ = 0 .28) shows that the coefficients χ0 .1 we know that the stability of the third-order uncertain feedback system is determined by the characteristic polynomial χ(s ) = θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk . √ = 5 + 0. Uncertainty Models and Robustness Example 5. 5 2s + s 2 + 0 . because the coefficients χi of the polynomial do not vary independently within the bounds (5.5 and g = 2.02s4 .29) Inspection of (5. (5.3s3 + 0. 3 s 3 . 0. Td = 2 and T0 = 1/10.2. Suppose that the variations of the plant parameters g and θ are known to be bounded by g ≤ g ≤ g. T0 ≤ χ3 ≤ T0 + θ. We assume that 0 ≤ θ ≤ 0. √ = 5 + 5 2s + s2 + 0. 5 + 0 . (5.5.34) (5.5 2s + s2 + 0.2.30). Repeating the calculations we find that each of the four Kharitonov polynomials is strictly Hurwitz.2 we took k = 1. χ2 .5 shows that this region of variation is well within the stability region. and k4 are not Hurwitz. (5.28) √ where in Example 5. Inspection of Fig.2. so that g = 0. χ3 . The gain now only varies by a factor of four. It is easily found that the four Kharitonov polynomials are k1 ( s ) k2 ( s ) k3 ( s ) k4 ( s ) √ = 0 .8 (Application of Kharitonov’s theorem). 0. 5. 204 .5 ≤ g ≤ 2. This corresponds to a variation in the gain by a factor of ten. √ √ g 2 ≤ χ 1 ≤ g 2. and χ4 are correspondingly bounded by g ≤ χ0 ≤ g.35) (5. 1 ≤ χ2 ≤ 1 .29).30) By the Routh-Hurwitz test or by numerical computation of the roots using M ATLAB or another computer tool it may be found that only k1 is strictly Hurwitz. so that θ = 0.32) (5. while k2 .5 ≤ g ≤ 5. This does not prove that the closed-loop system is not stable for all variations of g and θ within the bounds (5. 0 ≤ χ4 ≤ T0 θ. but consider two cases for the gain g: 1. 1 s3 .30). χ1 . 2.02s4 .2. so that g = 0.33) (5.
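One way to carry out the check reported in Example 5.2.8 numerically is sketched below (plain MATLAB, no toolbox): the four Kharitonov polynomials of Summary 5.2.7 are assembled from the coefficient bounds for the case 0.5 ≤ g ≤ 5, 0 ≤ θ ≤ 0.2, and their roots are inspected. The bound vectors and the bar pattern follow the ordering of (5.24)-(5.27).

    k = 1; Td = sqrt(2); T0 = 0.1;
    glo = 0.5; ghi = 5; thi = 0.2;
    lo = [glo, glo*Td, 1, T0,       0     ];    % lower bounds on chi_0 ... chi_4
    hi = [ghi, ghi*Td, 1, T0 + thi, T0*thi];    % upper bounds on chi_0 ... chi_4
    pat = [0 0 1 1 0;                           % k1: which coefficients take the upper bound
           1 1 0 0 1;                           % k2
           1 0 0 1 1;                           % k3
           0 1 1 0 0];                          % k4
    for m = 1:4
        c = lo; c(pat(m,:) == 1) = hi(pat(m,:) == 1);   % ascending coefficients chi_0 ... chi_4
        r = roots(fliplr(c));                           % roots() expects descending powers
        fprintf('k%d strictly Hurwitz: %d\n', m, all(real(r) < 0));
    end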

(5.2. a contour that does not intersect itself) inside the domain encloses only points of the domain. θ T0 s + (θ + T0 )s + s + gTd s + gk . we need to check the locations of the roots of the four “exposed edges” θ=0: θ=θ: g=g: g=g: T0 s3 + s2 + gTd s + gk .e.37) are contained in D if and only if the roots of each polynomial on the exposed edges of the polytope are contained in D .. and varying the remaining parameter over its interval. g ≤ g ≤ g. the family of polynomials (5. g = 5. We therefore consider characteristic polynomials of the form χ(s ) = χ0 ( p )sn + χ1 ( p )sn−1 + · · · + χn ( p ).2. The exposed edges are obtained by fixing N − 1 of the parameters pi at their minimal or maximal value.40) θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk . Then all the roots of each polynomial (5. 0 ≤ θ ≤ θ. · · · . g = 0. The edge theorem Example 5. (5. A simply connected domain is a domain such that every simple closed contour (i. · · · .2.8 shows that Kharitonov’s theorem usually only yields sufficient but not necessary conditions for robust stability in problems where the coefficients of the characteristic polynomial do not vary independently. The edge theorem actually is stronger than this.2.39) Assuming that 0 ≤ θ ≤ θ and g ≤ g ≤ g this forms a polytope. N are fixed and given. (5.2. i = 0. Let D be a simply connected domain in the complex plane.5. the example we are pursuing is of this type.2. From Example 5. and θ = 0. 1. Hollot. Assuming that each of the parameters pi lies in a bounded interval of the form pi ≤ p i ≤ pi . Td and k as determined in Example 5. By the edge theorem. Then the edge theorem (Bartlett.2.2. (5.37) forms a polytope of polynomials. N . · · · . Example 5. We may rearrange the characteristic polynomial in the form N χ(s ) = φ0 (s ) + i=1 pi φi (s ). 2. i = 1.5.37) where the polynomials φi .36) where the parameter vector p of uncertain coefficients p i . indeed. i = 1.2. i = 0.38) (5. This is the case where in Example 205 . N .10 (Application of the edge theorem).1 we know that the closed-loop characteristic polynomial is given by χ(s ) = = θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk ( T0 s3 + s2 ) + θ( T0 s4 + s3 ) + g ( Td s + k ). Summary 5. θ T0 s4 + (θ + T0 )s3 + s2 + gTd s + gk . Although obviously application of the edge theorem involves much more work than that of Kharitonov’s theorem it produces more results. 0 ≤ θ ≤ θ.7 shows the patterns traced by the various root loci so defined with T0 . · · · . and Lin 1988) states that to check whether each polynomial in the polytope is Hurwitz it is sufficient to verify whether the polynomials on each of the exposed edges of the polytope are Hurwitz. Parametric robustness analysis 5. 1. 2. Such problems are quite common. 4 3 2 g ≤ g ≤ g.5. Figure 5. enters linearly into the coefficients χi ( p ).9 (Edge theorem). We apply the edge theorem to the thirdorder uncertain plant. n.

This is the case where in Example 5.2.8 application of Kharitonov's theorem was not successful in demonstrating robust stability. Figure 5.7 shows that the four root loci are all contained within the left-half complex plane. By the edge theorem, the closed-loop system is stable for all parameter perturbations that are considered.

Figure 5.7.: Loci of the roots of the four exposed edges

5.2.6. Testing the edges

Actually, to apply the edge theorem it is not necessary to "grid" the edges, as we did in Example 5.2.10. By a result of Białas (1985) the stability of convex combinations of stable polynomials³ p1 and p2 of the form

p = λ p1 + (1 − λ) p2,   λ ∈ [0, 1],   (5.41)

may be established by a single test. Given a polynomial

q(s) = q0 s^n + q1 s^(n−1) + q2 s^(n−2) + · · · + qn,   (5.42)

define its Hurwitz matrix Q (Gantmacher 1964) as the n × n matrix

Q = [ q1   q3   q5   · · ·   · · ·   · · ·
      q0   q2   q4   · · ·   · · ·   · · ·
      0    q1   q3   q5   · · ·   · · ·
      0    q0   q2   q4   · · ·   · · ·
      0    0    q1   q3   q5   · · ·
      · · ·                            ].   (5.43)

Białas' result may be rendered as follows.

³ Stable polynomial is the same as strictly Hurwitz polynomial.

Summary 5.2.11 (Białas' test). Suppose that the polynomials p1 and p2 are strictly Hurwitz with their leading coefficient nonnegative and the remaining coefficients positive. Let P1 and P2 be their Hurwitz matrices, and define the matrix

W = −P1 P2^(−1).   (5.44)

Then each of the polynomials

p = λ p1 + (1 − λ) p2,   λ ∈ [0, 1],   (5.45)

is strictly Hurwitz if and only if the real eigenvalues of W all are strictly negative. Note that no restrictions are imposed on the non-real eigenvalues of W.

Example 5.2.12 (Application of Białas' test). We apply Białas' test to the third-order plant. In Example 5.2.10 we found that by the edge theorem we need to check the locations of the roots of the four "exposed edges"

T0 s^3 + s^2 + g Td s + g k,   g̲ ≤ g ≤ ḡ,
θ̄ T0 s^4 + (θ̄ + T0) s^3 + s^2 + g Td s + g k,   g̲ ≤ g ≤ ḡ,
θ T0 s^4 + (θ + T0) s^3 + s^2 + g̲ Td s + g̲ k,   0 ≤ θ ≤ θ̄,
θ T0 s^4 + (θ + T0) s^3 + s^2 + ḡ Td s + ḡ k,   0 ≤ θ ≤ θ̄.   (5.46)

The first of these families of polynomials is the convex combination of the two polynomials

p1(s) = T0 s^3 + s^2 + g̲ Td s + g̲ k,   (5.47)
p2(s) = T0 s^3 + s^2 + ḡ Td s + ḡ k,   (5.48)

which both are strictly Hurwitz for the given numerical values. The Hurwitz matrices of these two polynomials are

P1 = [ 1    g̲k    0
       T0   g̲Td   0
       0    1     g̲k ],

P2 = [ 1    ḡk    0
       T0   ḡTd   0
       0    1     ḡk ],   (5.49)

respectively. Numerical evaluation, with T0 = 1/10, Td = √2, k = 1, g̲ = 0.5, and ḡ = 5, yields

W = −P1 P2^(−1) = [ −1.068      0.6848    0
                    −0.09685   −0.03152   0
                     0.01370   −0.1370   −0.1 ].   (5.50)

The eigenvalues of W are −1, −0.1, and −0.1. They are all real and negative. Hence, by Białas' test, the system is stable on the edge under investigation. Similarly, it may be checked that the system is stable on the other edges. By the edge theorem, the system is stable for the parameter perturbations studied.

Exercise 5.2.13 (Application of Białas' test). Test the stability of the system on the remaining three edges as given by (5.46).
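The computation in Example 5.2.12 is easily reproduced; a minimal sketch in plain MATLAB:

    T0 = 0.1; Td = sqrt(2); k = 1; glo = 0.5; ghi = 5;
    % Hurwitz matrices (5.43) of p1 and p2, written out for the cubic case:
    P1 = [1, glo*k, 0;  T0, glo*Td, 0;  0, 1, glo*k];
    P2 = [1, ghi*k, 0;  T0, ghi*Td, 0;  0, 1, ghi*k];
    W  = -P1/P2;          % -P1*inv(P2), cf. (5.44)
    eig(W)                % Summary 5.2.11: the real eigenvalues must all be negative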

p H q ∆H Figure 5. 5. Application: Perturbations of feedback systems The stability robustness of a unit feedback loop is of course of fundamental interest.2. the feedback system may be rep4 It may be attributed to Doyle (1984).3. q ∆L p + − H L q ∆L p − L − L + + + (a) (b) (c) H Figure 5. The block “ H ” represents a perturbation of the dynamics of the system.: Basic perturbation model 5. Unstructured perturbations are perturbations that may involve essential changes in the dynamics of the system.8 shows the block diagram of the basic perturbation model.3.3. and then apply it to a number of special cases. Its transfer matrix H is sometimes called the interconnection matrix. The block marked “ H ” is the system whose stability robustness under investigation. The basic perturbation model 5. (c) Multiplicative perturbation Example 5. 208 .9. Introduction This section presents a quite general uncertainty model. This perturbation model is simple but the same time powerful4.8.1 (The additive perturbation model). characterized by norm bounds.: Perturbation models of a unit feedback loop.3.3. (b) Additive perturbation.9(a) shows a feedback loop with loop gain L.1.5. We illustrate it by some applications. although it is more suited to the latter. Uncertainty Models and Robustness 5. We first consider the fundamental perturbation paradigm. The basic perturbation model Figure 5. (a) Nominal system. It deals with both parametric — or structured — and unstructured perturbations. Figure 5.3. After perturbing the loop gain from L to L + L .

209 .3.1 with transfer function P (s ) = g . the system satisfies the signal balance equation q = − p − Lq. s2 (1 + sθ) (5.10 shows one way of doing this. denote the input to the perturbation block L as q and its output as p. 5. (5. is shown in Fig. Consider the third-order plant of Example 5.52) (5. From the signal balance q = L (− p − q ) we obtain q = −( I + L )−1 Lp.8.9(b) shows that with the perturbation block L taken away. and L is the perturbation H in the basic model. (5. It follows that the interconnection matrix H is H = −( I + L )−1 = − S . as in Fig. 5.3. The block within the dashed lines is the unperturbed system.55) The parameters g and θ are uncertain.9(c). The transfer matrix of the perturbed loop gain is (1 + L )L. To compute H .5. Example 5. 5.53) which is equivalent to an additive perturbation L L. so that q = −( I + L )−1 p . It is not difficult to represent this transfer function by a block diagram where the parameters g and θ are found in distinct blocks.3. Note the way − u g 1/ s 2 s θ y + Figure 5.4. The model of Fig. Application: Parameter perturbations The basic model may also be used for perturbations that are much more structured than those in the previous application. Figure 5.9(b) is known as the additive perturbation model.51) with S the sensitivity matrix of the feedback system. so that the interconnection matrix is H = −( I + L )−1 L = − T .3 (Parameter perturbation).: Block diagram for third-order plant the factor 1 + sθ in the denominator is handled by incorporating a feedback loop.54) 5.2 (The multiplicative perturbation model). (5.10. Example 5. called multiplicative or proportional perturbation model. Inspection of the diagram of Fig. 5. An alternative perturbation model.3. The quantity L may be viewed as the relative size of the perturbation.2. The basic perturbation model resented as in Fig. 5. T is the complementary sensitivity matrix of the feedback system. It does not matter that one of the blocks is a pure differentiator.9(b). denoted H .

the system satisfies the signal balance equation q2 = −s ( p2 + θ q2 ) + Solution for q2 yields q2 = 1 / s2 s p1 − p2 . Uncertainty Models and Robustness q1 ∆g p1 p2 + ∆θ + q2 C(s) − u g + + 1/s 2 − + s θ y H Figure 5.59) (5.5. one of the 210 . s2 (5.11. 5. The small gain theorem is an application of the fixed point theorem. 1 + s θ + C ( s ) g / s2 1 + s θ + C ( s ) g / s2 − Cs(2s ) C(s )g s2 1 s2 (5.: Application of the the basic perturbation model to the thirdorder plant After perturbing g to g + g and θ to θ + θ and including a feedback compensator with transfer function C the block diagram may be arranged as in Fig. 5. 5.11. the interconnection matrix is H (s ) = = 1 1 + sθ + 1 + L (s ) 1 1+s θ sC (s ) −s s .4.57) 1 ( p1 − gC (s )q2 ) . (5.58) Consequently.5 we analyze the stability of the basic perturbation model using what is known as the small gain theorem. so that q1 = − C (s )/s2 sC (s ) p1 + p2 .11. To compute the interconnection matrix H we consider the block diagram of Fig.1. Inspection shows that with the perturbation blocks g and θ omitted.4.60) C (s ) −1 − s12 where L = PC is the (unperturbed) loop gain.56) Further inspection reveals that q1 = −C (s )q2. The large block inside the dashed lines is the interconnection matrix H of the basic perturbation model. The small gain theorem 5. Introduction In § 5. 1 + s θ + C ( s ) g / s2 1 + s θ + C ( s ) g / s2 (5.

12(a). defined by un+1 = L (u n ). 1.62) converges to u∗ for every initial point u0 ∈ U . If L is linear then it is a contraction if and only if L < 1. 2. (5. The sequence un . · · · . 5.63) (5. In the feedback loop of Fig. suppose that L : U → U is a BIBO stable system for some complete normed space U . This method of approaching the fixed point is called successive approximation. · · · .4. Summary 5.61) for some real constant 0 ≤ k < 1. Then: 1.: Left: feedback loop. 2.4. 2.4. 211 . Let (5. 5.12(a) to be internally stable is that the input-output map L is a contraction. Such a map L is called a contraction. 1. 5 For (5. p + + q L L (a) (b) Figure 5. Then a sufficient condition for the feedback loop of Fig. The point u∗ is called a fixed point of the equation. we are now ready to establish the small gain theorem (see for instance Desoer and Vidyasagar (1975)). v ∈ U .m→∞ un − um = 0 has a limit in U .2. The small gain theorem With the fixed point theorem at hand. Summary 5. 6 The normed space U is complete if every sequence u ∈ U . There exists a unique solution u∗ ∈ U of the equation u = L (u ).5.4. U is a complete normed space6 .1 (Fixed point theorem).2 (Small gain theorem).64) an easy introduction to this material see for instance the well-known though no longer new book by Luenberger (1969). Right: feedback loop with internal input and output. · · · such that lim i n. 2. i = 1. n = 0. 5. The small gain theorem most important results of functional analysis5 . Suppose that L : U → U be a map such that L (u ) − L (v) ≤ k u − v for all u. n = 0.12.
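The successive approximation scheme mentioned above is easy to try out; a toy sketch in plain MATLAB (the contraction chosen here is an arbitrary example with contraction constant 0.5):

    L = @(u) 0.5*cos(u);          % a contraction on the real line (|L(u)-L(v)| <= 0.5|u-v|)
    u = 0;                        % arbitrary initial point
    for n = 1:30
        u = L(u);                 % successive approximation u_{n+1} = L(u_n)
    end
    u                             % approximates the unique fixed point u* = L(u*)
    abs(u - L(u))                 % residual; small after a few iterations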

67) Hence. First consider BIBO stability in the sense of the L∞ norm on the input and output signals.70) 212 . Norm induced by the L2 -norm.68) 2. Uncertainty Models and Robustness where · is the norm induced by the signal norm.69) Again we find from the small gain theorem that (5. for these norms the small gain theorem applies. The signal spaces of signals of finite energy u L2 and the signals of finite amplitude u L∞ are two examples of complete normed spaces. where L is a linear time-invariant system with transfer function L (s ) = − k .13 the L2 -induced norm exists and equals L H∞ = sup | L ( jω)| = sup | ω ∈R ω ∈R k | = |k |. 1− L (5.65) θ is a positive time constant and k a positive or negative gain.9. we determine the exact stability region.12(b) is w = v + Lw. The power of the small gain theorem is that it does not only apply to linear time-invariant systems but also to nonlinear time-varying systems. Example 5.4. Norm induced by the L∞ -norm. 5.66) The norm L of the system induced by the L∞ -norm is (see Summary 4.3.68) is a sufficient condition for closedloop stability. Interpreting this in terms of Laplace transforms we may solve for w to obtain w= 1 v.3. by the small gain theorem a sufficient condition for internal stability of the feedback system is that −1 < k < 1 . Consider a feedback system as in Fig. 5. (5. hence. otherwise.12(a).3 (Small gain theorem). The small gain theorem gives a sufficient condition for internal stability that is very simple but often also very conservative. 1 + sθ (5. For positive θ the system L is stable so according to Summary 4. 0 (5.5. u U The proof may be found in § 5. In conclusion. The feedback equation of the system of Fig. L := sup u∈U Lu U . (5. We investigate the stability of the feedback system with the help of the small gain theorem. 1 + jωθ (5.13) L = l L1 = ∞ −∞ |l (t )| dt = |k|. 1. By inverse Laplace transformation it follows that the impulse response corresponding to L is given by l (t ) = k −t /θ −θ e for t ≥ 0.
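For Example 5.4.3 the induced norm and the conservatism of the small gain condition can also be examined numerically. A sketch (requires the Control System Toolbox; the parameter values are arbitrary):

    theta = 1;
    for k = [-1.5, -0.5, 0.5, 2]
        L  = tf(-k, [theta 1]);                  % L(s) = -k/(1 + s*theta)
        gamma = norm(L, inf);                    % L2-induced norm of L, equals |k|
        cl = feedback(tf(1), -L);                % closed-loop map v -> w = (1 - L)^(-1) v
        fprintf('k = %5.2f: ||L||_inf = %4.2f, small gain satisfied: %d, stable: %d\n', ...
                k, gamma, gamma < 1, isstable(cl));
    end

The run illustrates that the small gain condition is sufficient but conservative: for k = 2 it fails although the loop is internally stable, while for k = -1.5 (outside the exact region k > -1) the loop is indeed unstable.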

5. In the robust control literature the H∞ -norm of a transfer matrix H is commonly denoted by H ∞ and not by H H∞ as we have done so far. that is. = k 1 − L (s ) ( 1 + k ) + sθ 1 + 1+sθ (5. 213 . In the remainder of the lecture notes we will adopt the notation predominantly used and hence write H ∞ to mean the H∞ norm.5.72) 5. internally stable if H = 0. 5. which is repeated in Fig. (b) Arrangement for internal stability 5. Stability robustness of the basic perturbation model so that the closed-loop transfer function is 1 1 + sθ 1 = . 5.71) Inspection shows that the closed-loop transfer function has a single pole at −(1 + k )/θ . This stability region is much larger than that indicated by (5. w1 + + + ∆H ∆H w2 + v2 H v1 H (a) (b) Figure 5. Sufficient conditions We assume that the model is nominally stable. The corresponding system is BIBO stable if and only if 1 + k > 0. Introduction We study the internal stability of the basic perturbation model of Fig. The closed-loop system is internally stable if and only if k > −1 .13(a). Remark.2.5.8. (5. Stability robustness of the basic perturbation model 5.13. we assume that also H by itself is BIBO stable.5. and we refer to this norm simply as the ∞-norm. This is equivalent to the assumption that the interconnection system H by itself is BIBO stable.1.: (a) Basic perturbation model. This will not lead to confusion as the p-norm for p = ∞ defined for vectors in Chapter 4 will not return in the remainder of the lecture notes.68).5. Similarly.

10(3).78) This follows from the submultiplicative property of the induced norm of Exercise 4. (5. Inspection shows that by the fixed point theorem the equations have bounded solutions w1 and w2 for any bounded v1 and v2 if the inequalities HH < 1 and H H <1 (5.77) but at the same time do not destabilize the system. It may be proved.77) within an arbitrarily small margin but destabilizes the system.73) Rearrangement yields w1 w2 = = H H H w1 H w2 + v1 + H v2 .75) + H v1 + v2 . 214 . that if robust stability is desired for all perturbations satisfying (5. Uncertainty Models and Robustness Introducing internal inputs and outputs as in Fig. 5.77) then the condition is also necessary.5. Hence. though8.74) (5. This means that it is always possible to find a perturbation that violates (5. (5.77) is a sufficient condition for stability of the perturbed system. w2 = v2 + H (v1 + H w1 ).12.77) is a sufficient condition for internal stability. It provides a frequency-dependent bound of the perturbations that are guaranteed not to destabilize the system. 5. (5. (5. Then a necessary and sufficient condition for robust stability is that H ∞ < 1.77) The same condition implies H H ∞ < 1. The condition (5. since H = sup σ( ¯ H ( j ω) ( j ω)) and by property 7 of 4. Dahleh and Ohda (1988)).3. Following the work of Doyle we use the system norm induced by the L2 -norm7. we obtain the signal balance equations w 1 = v1 + H (v2 + H w1 ).5. 7 Use of the norm 8 See induced by the L∞ -norm has also been considered in the literature (see e. Vidysagar (1985) for a proof for the rational case.3. (5. This is the ∞-norm of the system’s transfer matrix as introduced in Summary 4.g.76) both hold. Necessity The inequality (5. Another way of reformulating the conditions H H ∞ < 1 and H H ∞ < 1 is to replace them with the sufficient condition H ∞ · H ∞ < 1.3.8 σ( ¯ H ( jω) H ( jω)) ≤ σ( ¯ H ( jω)) · H ∞ H ω H < 1 is implied by σ( ¯ H ( jω)) it follows that H ∞ σ( ¯ H ( jω)) · σ( ¯ H ( jω)) <1 for all ω ∈ R. First. (5. It is easy to find examples that admit perturbations which violate (5.3.78) is equivalent to H ∞ < 1 H ∞ .79) Also this condition is necessary for robust stability if stability is required for all perturbations satisfying the bound. The latter condition is particularly elegant if the perturbations are scaled such that they satisfy H ∞ ≤ 1.1(b). We reformulate the conditions H H ∞ < 1 and H H ∞ < 1 in two ways.

1. 5. 3.13(a) both H and H are BIBO stable. Example 5. (b) Perturbation model the nominal value of θ and θ its perturbation.5. Another sufficient condition for internal stability is that H ∞ < 1 H ∞ . Because the system should be nominally stable we need θ0 to be positive. If stability is required for all perturbations satisfying this bound then the condition is also necessary.5.5. By way of example.4.5.80) with σ ¯ denoting the largest singular value. 1 + sθ0 (5.2 (Stability under perturbation). Solution for q yields q= −s p.: (a) Block diagram. with θ0 p θ + y (a) u − + + ∆θ θο q − u + s s y (b) H Figure 5.14(a) we obtain the perturbation model of Fig. Suppose that in the basic pertur- bation model of Fig. 5.81) If stability is required for all perturbations satisfying this bound then the condition is also necessary. (5.5. 5. Examples We consider two applications of the basic stability robustness result.1 (Stability of the basic perturbation model). Stability robustness of the basic perturbation model Summary 5.82) Arranging the system as in Fig. 2.83) 215 . a sufficient and necessary condition for robust stability under all perturbations satisfying H ∞ ≤ 1 is that H ∞ < 1. 1 + sθ (5. 5. Sufficient for internal stability is that σ( ¯ H ( jω)) < 1 σ( ¯ H ( jω)) for all ω ∈ R. (5.14. we study which perturbations of the real parameter θ preserve the stability of the system with transfer function P (s ) = 1 . The interconnection matrix H of the standard perturbation model is obtained from the signal balance equation q = −s ( p + θ0 q ) (omitting u and y).14(b). In particular.

3. such as θ (s ) = θ0 α .90) respectively. s2 (1 + sθ) C (s ) = k + Td s . θ0 (5. or −θ0 < θ < θ0 . Hence. 5. θ (5. 1 + T0 s (5.3 (Two-parameter perturbation).5.85) = sup σ( ¯ H ( jω)) = ω 1 .87) such that (5. The scaled perturbations are denoted δ g and δθ .2. 1 + sτ (5. 5. for instance. A more complicated example is the feedback system discussed in Examples 5. On the other hand.5.87) with an arbitrarily small margin.87). the system is stable for all θ > 0.88) Obviously. 1 + sθ0 1 + sθ0 θ0 (5.87) and destabilizes the system.15.1 imply that internal stability is guaranteed for all perturbations such that | θ | < θ0 .5.89) with τ any positive time constant.91) with g0 and θ0 denoting the nominal values of the two uncertain parameters g and θ . that is. respectively. the estimate for the stability region is quite conservative.1. for all perturbations −θ0 < θ < ∞.3. and destabilizes the system (because it makes θ negative). This model includes scaling factors α1 and β1 for the perturbation g and scaling factors α2 and β2 for the perturbation θ .3 we found that the interconnection matrix H with respect to perturbations in the parameters g and θ is H (s ) = 1 1+s θ0 1 + L0 ( s ) C (s ) −1 − s12 s . In Example 5.84) 1 θ0 2 ω2 θ 0 2 1 + ω2 θ 0 (5.11 to that of Fig. (5. and α a real number with magnitude less than 1.2 and 5. It consists of a plant and a compensator with transfer functions P (s ) = g . Example 5. both (1) and (2) of Summary 5. violates (5. though.86) Since σ( ¯ H ( jω)) = | θ | = H ∞ . The perturbation θ = −θ0 (1 + ε). θ could be a transfer function. For instance. with ε positive but arbitrarily small. 5. it is easy to find a perturbation that violates (5. The 216 . This “parasitic” perturbation leaves the system stable. Uncertainty Models and Robustness It follows that H (s ) = so that σ( ¯ H ( jω)) = | H ( jω)| = and H ∞ −s −s θ0 1 = . Before continuing the analysis we modify the perturbation model of Fig.2.3. Note that the class of perturbations such that θ ∞ ≤ θ0 is much larger than just real perturbations satisfying (5.

we need to choose θ0 positive but such that the nominal closed-loop system is stable.15 is H (s ) = 1 1+s θ0 1 + L0 ( s ) β1 C ( s ) −β2 1 −α s2 α2 s . 217 . The nonnegative quantity σ(ω) ¯ is the largest singular value of H ( jω).5. By substitution of L0 and C into (5. Reflection reveals that this phenomenon is caused by the fact that the stability robustness test is based on full perturbations H . On first sight this seems surprising. (5. this situation admits negative values of θ . + α2 2ω ω4 ω ∈ R.5. (5. Second. while α2 β2 = ε2 is the largest possible perturbation in θ.94) shows that for fixed uncertainties ε1 = α1 β1 and ε2 = α2 β2 the function σ ¯ depends on the way the individual values of the scaling constants α1 . which destabilize the closed-loop system.95) First we note that if we choose the nominal value θ0 of the parasitic time constant θ equal to ¯ ∞ ) = ∞. Hence.93) it follows that σ ¯ 2 (ω) = 2 2 2 2 2 2 2 2 6 β2 1 ( k + ω Td ) + β2 (1 + ω T0 ) (α1 + α2 ω ) . Stability robustness of the basic perturbation model q1 δg p1 p2 δθ q2 β1 α1 + 1/ s2 − + s α2 β2 + + + θ y C(s) − u g H Figure 5. α2 .92) It may also easily be worked out that the largest eigenvalue of H T (− jω) H ( jω) is σ ¯ (ω) = 2 1 1+ω2 θ2 0 |1 + L0 ( jω)|2 2 2 β2 1 |C ( jω)| + β2 α2 1 2 . 5. Full perturbations are perturbations such that all entries of the perturbation transfer matrix H are filled with dynamical perturbations.93) The other eigenvalue is 0. Indeed. It is easily verified that the interconnection matrix corresponding to Fig. β1 . |χ( jω)|2 (5. and β2 are chosen. inspection of (5.: Scaled perturbation model for the third-order plant product α1 β1 = ε1 is the largest possible perturbation in g. of course. so that stability is not zero — which is a natural choice — and ε2 = α2 β2 = 0 then σ( guaranteed.15.94) with χ the closed-loop characteristic polynomial χ(s ) = θ0 T0 s4 + (θ0 + T0 )s3 + s2 + g0 Td s + g0 k . (5.

2. Td = 2.: Plots of σ( ¯ H ( jω)) for the two-parameter plant Less conservative results may be obtained by allowing the scaling factors α1 . The reasons that the test fails are that (i) the test is based on full perturbations rather than the structured perturbations implied by the variations in the two parameters. Correspondingly.1 and ε2 = 0.2. 0. α2 . (5. α2 = β 2 = √ ε2 .75. ω ≥ 0.2.25.98) and that for this value of ρ σ(ω) ¯ = 2 + ε ω3 1 + ω2 T 2 ε1 k2 + ω2 Td 2 0 |χ( jω)| . (5. Correspondingly. 80 σ(H( jω)) 40 (a) (b) 0 10 −2 10 −1 10 0 10 1 ω 10 2 10 3 Figure 5. Uncertainty Models and Robustness √ We choose the numerical values k = 1.5. For the variations in the gain g we study two cases as in Example 5. and to choose them so as to minimize σ(ω) ¯ for each ω.75 and ε1 = 2.16 shows the resulting plots of σ(ω) ¯ for the cases (1) and (2).2. we let θ0 = 0.2. β1 .2.1.4 we know that in both situations stability is ensured.5 ≤ g ≤ 5. Correspondingly. 0. we let g0 = 2.7. Both cases fail the stability robustness test by a very wide margin. It is easy to establish that for fixed ω the quantity σ ρ = ω3 ε1 ε2 2 k2 + ω2 Td .16. and β2 to be frequency-dependent. and (ii) the test also allows dynamical perturbations rather than the real perturbations implied by the parameter variations.96) Figure 5.1. although from Example 5. and T0 = 1/10 as in Example 5.5 ≤ g ≤ 2. we take g0 = 1. 1 + ω2 T02 (5. For lack of a better choice we select for the time being α1 = β 1 = √ ε1 . (5. while in case (2) it is approximately 38.97) 2 ¯ 2 (ω) is minimized for where ρ = α2 1 /α2 . 2.8 we consider variations in the time constant θ between 0 and 0.8: 1.25 and ε1 = 0. In case (1) the peak value is about 67.99) 218 . In Example 5. Substituting β1 = ε1 /α1 and β2 = ε2 /α2 we obtain σ ¯ (ω) = 2 2 2 2 ε2 1 ( k + ω Td ) + 2 2 2 6 ε2 1 ( k +ω Td )ω ρ 2 2 2 2 2 6 + ε2 2 (1 + ω T0 )ρ + ε2 (1 + ω T0 )ω |χ( jω)|2 .

83. If only real-valued perturbations are considered then r ( A ) is the real stability radius. 219 .17.5. If the elements of may also assume complex values then r ( A ) is the complex stability radius. Assume that the nominal system x ˙(t ) = Ax (t ) is stable. only in case (b) robust stability is established. while in case (b) it is 1.17 shows plots of σ ¯ for the same two cases (a) and (b) as before.5. In case (a) the peak value of σ ¯ is about 1.4 (Stability radius).100) unstable. t ∈ R.11 in the section on the structured singular value this example is further discussed. denoted rC ( A ). Hence. + + + 1 sI x A p ∆ q Figure 5.5. Stability robustness of the basic perturbation model 2 σ(H( jω)) 1 (a) (b) 0 10 −2 10 −1 10 0 10 1 ω 10 2 10 3 Figure 5.7. The number r( A) = A + ∈ Rn×n has at least one eigenvalue in the closed right-half plane inf σ( ) (5.: Perturbation of the state space system Exercise 5. Only if the size of the perturbations is downsized by a factor of almost 2 robust stability is guaranteed. The matrix norm used is the spectral norm.18.: Plots of σ( ¯ H ( jω)) with optimal scaling for the two-parameter plant Figure 5. In Example 5.101) is called the stability radius of the matrix A. The reason that in case (a) robust stability is not proved is that the perturbation model allows dynamic perturbations. denoted rR ( A ). Hinrichsen and Pritchard (1986) investigate what perturbations of the matrix A make the system described by the state differential equation x ˙(t ) = Ax (t ). (5.

C ) ≥ 1/ H ∞ . B. explicit formulas are available for the complex stability radius. It follows that the perturbed system is stable for any static nonlinear perturbation satisfying (5. 5. Hint: Represent the system by the block diagram of Fig.107) with c a nonnegative constant. Uncertainty Models and Robustness 1. B. A more structured perturbation model results by assuming that A is perturbed as A −→ A + B C.18. We use the L2 -norm for signals. Rantzer. (5. 220 .102) with F the rational matrix given by F (s ) = (sI − A )−1 .13(a) is stable if k H ∞ < 1. and the perturbation. Suppose that the perturbation H in the block diagram of Fig. Davison. (5. C ) = 1/ H ∞ . H (5. 5. We call f a sector bounded function.18 and prove that the associated complex stability radius rC ( A .5.104) holds with k = c. 5.104) for every two input e1 . C ) satisfies rC ( A . Let f satisfy the inequality f (e ) ≤ c. t ∈ R. t ∈ R. B. e ∈ R. Hinrichsen and Pritchard (1986) prove that actually rC ( A ) = 1/ F ∞ and rC ( A .106) Suppose for instance that the signals are scalar. into the signal ( H e )(t ). 5. and that ( H e )( t ) = f (e (t )). Only very recently results have become available (Qiu.103) with B and C matrices of suitable dimensions. Similarly.5. Nonlinear perturbations The basic stability robustness result also applies for nonlinear perturbations. Bernhardsson. Young. provided that these perturbations are suitably bounded. Thus. F ∞ (5. It is not difficult to see that (5. By application of the fixed point theorem it follows that the perturbed system of Fig. Adapt the block diagram of Fig. 2.13(a) is a nonlinear operator that maps the signal e (t ). with H (s ) = C (sI − A )−1 B. with f : R → R.105) is a static nonlinearity described by (5.107) as long as c ≤ 1/ H ∞ .5. e e = 0 .19 illustrates the way the function f is bounded. 5. Figure 5. the basic stability robustness result holds for time-varying perturbations and perturbations that are both nonlinear and time-varying. and Doyle 1995). Prove that rR ( A ) ≥ rC ( A ) ≥ 1 . e2 to the nonlinearity. Assume that there exists a positive constant k such that H e1 − H e2 ≤ k e1 − e2 (5. The real stability radius is more difficult to determine.

5. the interconnection matrix is H = −( I + L )−1 L = − T . The proportional model is preferred because it represents the relative size of the perturbation. 5. The results of § 1.109) 221 .4 emerge as special cases.20. so that q = −( I + L )−1 Lp.6. (5.: Sector bounded function 0 5. Stability robustness of feedback systems 5.2.: Proportional and scaled perturbations of a unit feedback loop L −→ ( I + L)L (5.3 we consider additive and multiplicative (or proportional) perturbation models for singledegree-of-freedom feedback systems. It represents a perturbation q δL p q ∆L + p + − L W + V + − L (a) H (b) H Figure 5. The corresponding block diagram is repeated in Fig.5.6.1. Stability robustness of feedback systems ce f (e) e −ce Figure 5.108) Since q = − L ( p + q ).19.6. Proportional perturbations In § 5.5 to various perturbation models for single-degreeof-freedom feedback systems.6. Introduction In this section we apply the results of § 5. We successively discuss proportional. proportional inverse and fractional perturbations for MIMO systems.20(a).

(5. 2. The model represents a perturbation L −→ ( I + L−1 ) −1 L.1 leads to the following conclusions. Summary 5. 5. As- sume that the feedback system of Fig. Summary 5.112) with T = ( I + L )−1 L = L ( I + L )−1 the complementary sensitivity matrix of the feedback loop.111) be stable with L ( jω)) < 1 σ( ¯ T ( jω)) for all ω ∈ R. A sufficient condition for the closed-loop system to be stable under proportional perturbations of the form L −→ (1 + is that σ( ¯ L L)L (5.1 (Robust stability of feedback systems for proportional perturbations). consider the perturbation model of Fig. this may be rewritten as the perturbation L−1 −→ L−1 ( I + L−1 ).1 is the original MIMO version of Doyle’s robustness criterion (Doyle 1979). 1.20(b). where the perturbation L−1 is included in a feedback loop. If stability is required for all perturbations satisfying the bound then the condition is also necessary. For SISO systems part (1) of this result reduces to Doyle’s robustness criterion of § 1. This diagram represents a perturbation L −→ ( I + V δ L W ) L . Application of the results of Summary 5. For this configuration the interconnection matrix is H = − W T V . Proportional inverse perturbations The result of the preceding subsection confirms the importance of the complementary sensitivity matrix (or function) T for robustness.3. We complement it with a “dual” result involving the sensitivity function S. (5.6.116) 222 . (5.115) Assuming that L has an inverse.5.110) where V and W are available to scale such that δ L ∞ ≤ 1.6.20(a) is nominally stable.20(a) to that of Fig. 5. 5. (5. To scale the proportional perturbation we modify the block diagram of Fig.4.114) < 1.113) ≤ 1 if and only if (5. 5.6. 5. To this end.21(a). Uncertainty Models and Robustness with T = ( I + L )−1 L = L ( I + L )−1 the complementary sensitivity matrix of the closed-loop system. In fact. Under scaled perturbations L −→ (1 + V δ L W ) L the system is stable for all δ L WTV ∞ ∞ (5.5.

If stability is required for all perturbations satisfying the bound then the condition is also necessary.118) where V and W provide freedom to scale such that δ L−1 ∞ ≤ 1.: Proportional inverse perturbation of unit feedback loop Hence.119) be stable with (5. The interconnection matrix now is H = − W SV . The interconnection matrix H follows from the signal balance q = − p − Lq so that q = −( I + L )−1 p.6.2 (Robust stability of feedback systems for proportional inverse perturbations). with S = ( I + L )−1 the sensitivity matrix of the feedback loop.1 yields the following conclusions.117) S = ( I + L )−1 is the sensitivity matrix of the feedback loop. Application of the results of Summary 5.120) a sufficient condition for stability is that σ( ¯ L−1 ( jω)) < 1 σ( ¯ S ( jω)) for all ω ∈ R. Under scaled inverse perturbations of the form L−1 −→ L−1 ( I + V δ L−1 W ) the closed-loop system is stable for all δ L−1 W SV ∞ ∞ (5.5. We allow for scaling by modifying the model to that of Fig.21(a) and (b) is nominally stable.5. 2. Stability robustness of feedback systems p δL−1 q p ∆L−1 − q V + − W − L + − H L (a) (b) H Figure 5. (5.122) < 1. Hence. the interconnection matrix is H = −( I + L )−1 = − S . Assume that the feedback system of Fig.6.21. 223 . Summary 5. 5.21(b).121) ≤ 1 if and only if (5. Under proportional inverse perturbations of the form L−1 −→ L−1 ( I + L−1 ) L−1 (5. the perturbation represents a proportional perturbation of the inverse loop gain. (5. 5. 1. This model represents perturbations of the form L−1 −→ L−1 ( I + V δ L−1 W ).

125) Figures 5.2. Therefore.2. In Example 5. is constant.3 (Robustness of a SISO closed-loop system). 1. Example 5. This is where the proportional perturbations of the loop gain need to be the smallest.2 the gain g may vary between about 0. 224 . hence.127) Figure 5. The reason is of course that the perturbation model allows a much wider class of perturbations than just those caused by changes in the parameters g and θ .1 we considered a SISO single-degree-of-freedom feedback system with plant and compensator transfer functions P (s ) = g .6. Td = 2 and T0 = 1/10.745.1 permits values of θ up to about 1. For smaller values of the parasitic time constant θ a wider range of gains is permitted.6.4.75. (5.5. For g = g0 .2 by application to an example.22(b) demonstrate that for θ = 0. on the other hand. Nominally the gain g equals g0 = 1.123) respectively.126) and.125) of the loop gain reduces to L (s ) = −s θ . Inspection shows that 1/ T0 assumes relatively small values in the low-frequency region.6. Uncertainty Models and Robustness 5. In Example 5. Example We illustrate the results of Summaries 5. For θ = 0 the proportional perturbation (5. 1 + T0 s (5. g0 (5.1 and 5.6. We use the results of Summaries 5. so that 0. s2 (1 + sθ) C (s ) = k + Td s . For this value of θ the gain g can be neither increased nor decreased without violating the criterion. the proportional perturbation (5. The stability bounds on g and θ are conservative. The plots of of | L | of Fig.25 < g < 1. Loop gain perturbation model.124) it is easy to find that the proportional loop gain perturbations are L (s ) = L ( s ) − L0 ( s ) = L0 ( s ) g− g0 g0 − sθ 1 + sθ . The robustness criterion (a) of Summary 5.6.75. while the parasitic time constant θ is nominally √ 0.2 to study what perturbations of the parameters g and θ leave the closed-loop system stable. 5.255 and 1.1 allows relative perturbations of g up to 0.22(a) and (b) show the magnitude plot of the inverse 1/ T0 of the nominal complementary sensitivity function.125) of the loop gain reduces to L (s ) = g − g0 .22(a) shows magnitude plots of this perturbation for several values of θ .1 and 5.75.2 we chose the compensator parameters as k = 1. The minimal value of the function 1/| T0 | is about 0.6.6. for θ = 0 the robustness criterion 1 of Summary 5.6. Starting with the expression L ( s ) = P ( s )C ( s ) = g s2 (1 + sθ) k + Td s 1 + T0 s (5.15. 1 + sθ (5.
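The test of Summary 5.6.1(1) in Example 5.6.3 can be reproduced as follows (a sketch; requires the Control System Toolbox; the perturbed parameter values used for the plot are our own choices):

    g0 = 1; k = 1; Td = sqrt(2); T0 = 0.1;
    s    = tf('s');
    L0   = (g0/s^2) * (k + Td*s)/(1 + T0*s);          % nominal loop gain
    Tnom = L0/(1 + L0);                               % nominal complementary sensitivity T0 of the text
    w    = logspace(-2, 2, 300);
    loglog(w, 1./squeeze(abs(freqresp(Tnom, w)))); hold on;   % robustness bound 1/|T0(jw)|
    for par = [1.7, 0; 1, 0.2]'                       % columns: perturbed [g; theta]
        g = par(1); theta = par(2);
        L  = (g/(s^2*(1 + theta*s))) * (k + Td*s)/(1 + T0*s);
        dL = (L - L0)/L0;                             % proportional loop gain perturbation
        loglog(w, squeeze(abs(freqresp(dL, w))));
    end
    xlabel('\omega');

Stability is guaranteed as long as the perturbation curves stay below the bound 1/|T0(jω)| at all frequencies.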

15 1 ω 100 1 g = 1. The derivation of the unstructured robust stability tests of Summary 5. when the plant by itself is unstable).2 require the perturbation of the inverse of the loop gain to be stable. Likewise. Inspection shows that for the inverse loop gain perturbation model the high-frequency region is the most critical.2.001 .129) and. This is highly restrictive if the loop gain by itself 225 .6.001 . Stability robustness of feedback systems (a) 1000 1/|To| g = 1. The magnitude of 1/ S0 has a minimum of about 0.2.01 1 ω 100 Figure 5. = − 1 g g L0 ( s ) (5.6 θ = 1. This is highly restrictive if the loop gain L by itself represents an unstable system. Inverse loop gain perturbation model. This range of for θ = 0 stability is ensured for | g0g variation is larger than that found by application of the proportional loop gain perturbation model.128) We apply the results of Summary 5. Figure 5.2 g = 0. It is well-known (Vidysagar 1985) that the stability assumption on the perturbation may be relaxed to the assumption that both the nominal loop gain and the perturbed loop gain have the same number of right-half plane poles. is constant.6.536 < g < 7. By inspection of (5.23 gives the magnitude plot of the inverse 1/ S0 of the nominal sensitivity function. 2. the result is conservative. the proofs of the inverse perturbation tests of Summary 5.: Robust stability test for the loop gain perturbation model: (a) 1/| T0 | and the relative perturbations for g = 1.6. this model cannot handle high-frequency perturbations caused by parasitic dynamics. so that −g | < 0.523. g (5.128) we see that if θ = 0 then | L−1 (∞ )| = ∞.01 1000 (b) 1/|To| . For θ = 0 the proportional inverse loop gain perturbation reduces to L−1 ( s ) = g0 − g . and presumes the perturbations L to be BIBO stable.2 . It is disturbing that the model does not handle parasitic perturbations. so that robust stability is not ensured.6. Again.867. which may easily occur (in particular.22.5.867. hence.2 θ = 0. Apparently. (b) 1/| T0| and the relative perturbations for θ = 0.745 1 θ = 0. or 0.1 is based on the small gain theorem. The proportional inverse loop gain perturbation is given by L−1 ( s ) = 1 L−1 (s ) − L− g0 g0 − g 0 (s ) + sθ .

For a SISO system the fractional representation is obvious. the loop gain L is represented as L = N D−1 .5. Example 5. 5. and N a rational or polynomial matrix. may be relaxed to the assumption that the nominal loop gain and the perturbed loop gain have the same number of right-half plane zeros. however. The fractional representation may be made rational by letting D (s ) = s2 (1 + sθ) . but often overly conservative.: Magnitude plot of 1/ S0 has right-half plane zeros. Another model that is encountered in the literature9 relies on what we term here fractional perturbations. (5. Vidysagar 1985). d (s ) N (s ) = g . when the plant has right-half plane zeros.6. If D and N are rational and proper with all their poles in the open left-half complex plane then the representation is known as a (right) rational matrix fraction representation . Schneider. It combines. loop gain perturbations and inverse loop gain perturbations.23. in a way. Suppose that g . This occurs.4 (Fractional representations).131) Clearly we have the polynomial fractional representation L = N D−1 with N (s ) = g and D (s ) = s2 (1 + sθ).6. idea originates from Vidyasagar (Vidyasagar.132) with d any strictly Hurwitz polynomial. 10 Because the denominator is on the right. L (s ) = 2 s (1 + sθ) (5. 9 The 226 . In this analysis.5. in particular. It is elaborated in McFarlane and Glover (1990). Uncertainty Models and Robustness 10 4 10 1 2 1/|So| 10 −2 10 −1 10 0 10 1 10 2 ω Figure 5.130) where the denominator D is a square nonsingular rational or polynomial matrix. and Francis 1982. d (s ) (5. Any rational transfer matrix L may be represented like this in many ways. If D and N are polynomial then the representation is known as a (right)10 polynomial matrix fraction representation. The requirement. Fractional representation The stability robustness analysis of feedback systems based on perturbations of the loop gain or its inverse is simple.

(5. D and N represent proportional perturbations of the denominator and of the numerator. D −→ ( I + D ) D.135) Thus.133) (I + D) −1 . 5. (5.24. 5. (5.24 shows the fractional perturbation model. in turn.139) (5. Stability robustness of feedback systems Right fractional representation may also be obtained for MIMO systems (see for instance Vidysagar (1985)). The latter.6. 5.136) = [− D N] . p1 ∆D q1 q2 ∆N + p2 + − − L + H Figure 5. Fractional perturbations Figure 5.: Fractional perturbation model 5.25(b). q2 (5.138) 227 . respectively.24 is equivalent to that of Fig.137) To establish the interconnection matrix H of the configuration of Fig.6. Inspection shows that the perturbation is given by L −→ ( I + or N D−1 −→ ( I + N )N D −1 N ) L( I + D) −1 . may be rearranged as in Fig.25(b) we consider the signal balance equation q1 = p − Lq1. By block diagram substitution it is easily seen that the configuration of Fig.25(a).6. (5. Since q2 = − Lq1 we have q2 = − L ( I + L )−1 p = − T p. Here p=− with L D q1 + N q2 = L q1 . It follows that q1 = ( I + L )−1 p = Sp . the numerator and denominator are perturbed as N −→ ( I + N )N.134) Hence. 5. (5. with S = ( I + L )−1 the sensitivity matrix.5.

Uncertainty Models and Robustness q2 ∆N + p2 p1 − + ∆D q1 − L (a) + H q2 ∆L + p q1 − L (b) + H Figure 5. (5.27. It is useful to allow for scaling by modifying the configuration of Fig.: Equivalent configurations with T = L ( I + L )−1 the complementary sensitivity matrix.140) Investigation of the frequency dependence of the greatest singular value of H ( jω) yields information about the largest possible perturbations L that leave the loop stable.26. − W2 T V (5.139) shows that the interconnection matrix is H= S . This modification corresponds to representing the perturbations as D = V δ D W1 .5.6.142) Accordingly. 5.138–5. Summary 5. suppose that the loop gain is perturbed as L = N D−1 −→ [( I + V δ N W2 ) N ][( I + V δ D W1 ) D]−1 (5.143) Application of the results of Summary 5. 5. N = V δ N W2 . −T (5. Inspection of (5.25. with δ L = [−δ D δ N ]. (5.5 (Numerator-denominator perturbations). 5.141) where V .144) 228 .25(b) to that of Fig. and W2 are suitably chosen (stable) rational matrices such that δL ∞ ≤ 1.1 yields the following conclusions. W1 . In the stable feedback configuration of Fig. the interconnection matrix changes to H= W1 SV .5.

6.146).: Perturbation model with scaling with V .7. without loss of generality we may take the scaling function V equal to 1. − L Figure 5.5. (5. equals the square root of the largest eigenvalue of H T (− jω) H ( jω) = T V T (− jω)[ S T (− jω) W1 (− jω) W1 ( jω) S ( jω) T (− jω) W2 ( jω) T ( jω)] V ( jω).: Feedback loop The largest singular value of H ( jω). − W2 T V (5.6. with H given by (5.146) with S = ( I + L )−1 the sensitivity matrix and T = L ( I + L )−1 the complementary sensitivity matrix.27. Then the closed-loop system is stable for all stable perturbations δ P = [−δ D δ N ] such that δ P ∞ ≤ 1 if and only if H ∞ < 1. W1 and W2 stable transfer matrices. Then W1 represents the scaling factor 229 .145) Here we have H= W1 SV .26.147) For SISO systems this is the scalar function H T (− jω) H ( jω) = | V ( jω)|2 [| S ( jω)|2 | W1 ( jω)|2 + | T ( jω)|2 | W2 ( jω)|2 ]. + T T (− jω) W2 (5. (5. Discussion We consider how to arrange the fractional perturbation model. Stability robustness of feedback systems q2 δL p q1 W2 V + W1 − L + H Figure 5. In the SISO case.148) 5.

This means that at low frequencies we may allow large denominator perturbations.001 . Likewise.01 1 ω 100 Figure 5. Since we may trivially write L= 1 1 L . (5.150) modeling low frequency perturbations as pure denominator perturbations implies modeling low frequency perturbations as inverse loop gain perturbations. This amounts to taking | W1 ( jω)| | W2 ( jω)| = = WL−1 (ω) 0 at low frequencies.01 .28. near the frequency where the loop gain crosses over the zero dB line. that is.01 1 ω 100 .: Sensitivity function and complementary sensitivity function for a well-designed feedback system the robustness condition H ∞ < 1. WL (ω) at high frequencies.001 .149) For well-designed control systems the sensitivity function S is small at low frequencies while the complementary sensitivity function T is small at high frequencies. Uncertainty Models and Robustness for the denominator perturbations and W2 that for the numerator perturbations.01 . (5. and WL a bound on the size of the loop perturbations. Hence.5. and at high frequencies large numerator perturbations. the boundary between “low” and “high” frequencies lies in the “crossover” region. W1 may be large at low frequencies and W2 large at high frequencies without violating 1 1 |S| |T| . We accordingly have H 2 ∞ = sup | S ( jω)|2 | W1 ( jω)|2 + | T ( jω)|2 | W2 ( jω)|2 .28 illustrates this. at high frequencies. In this frequency region neither S nor T is small. Obviously.152) with WL−1 a bound on the size of the inverse loop perturbations. 230 . ω ∈R (5.151) 0 at low frequencies. and all high frequency perturbations are numerator perturbations. (5. modeling high frequency perturbations as pure numerator perturbations implies modeling high frequency perturbations as loop gain perturbations. The most extreme point of view is to structure the perturbation model such that all low frequency perturbations are denominator perturbations. Figure 5.
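A sketch in the spirit of Fig. 5.28, plotting |S| and |T| for the nominal design of Example 5.6.3, which shows where denominator (low-frequency) and numerator (high-frequency) perturbations may be allowed to be large (Control System Toolbox):

    s  = tf('s');
    L0 = (1/s^2) * (1 + sqrt(2)*s)/(1 + 0.1*s);     % nominal loop gain of Example 5.6.3
    S0 = 1/(1 + L0);                                % sensitivity
    T0 = L0/(1 + L0);                               % complementary sensitivity
    bodemag(S0, T0, logspace(-2, 2, 300)); grid on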

8. Again consider the feedback system that we studied before.01 1 ω 100 .2. for instance. but passes the test (5. Plots of the magnitudes of the nominal sensitivity and complementary sensitivity functions S and T are given in Fig. On the other hand. perturbed plant g . Stability robustness of feedback systems Another way of dealing with this perturbation model is to modify the stability robustness test to checking whether for each ω ∈ R | L−1 ( jω)| < 1 or | | S ( jω)| L ( jω)| < 1 . its results are less conservative than the individual tests.28. fails the two separate tests.29. S is small up to a frequency of almost 1.01 1 ω 100 Figure 5. In the crossover region neither sensitivity is small. The crossover region is located about the frequency 1. with nominal plant P0 (s ) = g0 /s2 . unstructured uncertainty — deriving from neglected dynamics and parasitic effects — should be kept within bounds by adequate modeling. the perturbation leaves the closed-loop system stable. Hence. 5.6. Obviously.6.: (a). (c) Combined test.01 1 ω 100 .6 (Numerator-denominator perturbations).153).2.2.153) This test amounts to verifying whether either the proportional loop gain perturbation test succeeds or the proportional inverse loop gain test. the feedback system is not robust for perturbations that strongly affect the crossover region. On the one hand this limitation restricts structured uncertainty — caused by load variations and environmental changes — that the system can handle. It is important to 231 .5. Figure 5. such as S and T .6. Plant perturbation models In the previous subsections we modeled the uncertainty as an uncertainty in the loop gain L. Feedback systems are robustly stable for perturbations in the frequency regions where either the sensitivity is small (at low frequencies) or the complementary sensitivity is small (at high frequencies). We use the design values of Example 5. | T ( jω)| (5. Hence. Example 5.29 illustrates that the perturbation from the nominal parameter values g0 = 1 and θ0 = 0 to g = 5. In the crossover region the uncertainty therefore should be limited. (a) 10000 1/|S| (b) (c) 100 1/|T| ∆L 1/|S| ∆L−1 1/|T| ∆L 1 ∆L−1 . while T is small starting from a slightly higher frequency. (b) Separate stability tests. which results in interconnection matrices in terms of this loop gain. θ = 0. most recently in Example 5. 5.154) P (s ) = 2 s (1 + sθ) and compensator C (s ) = (k + sTd )/(1 + sT0 ). (5.3.6.

215) guarantees robustness under all perturbations such that σ( ¯ ( jω)) · σ( ¯ H ( jω)) < 1 for all ω ∈ R. Otherwise.1. The overall perturbation has the block diagonal form   0 ··· 0 1 0 0 ··· 2  (5. it makes sense to model the uncertainty as such. Introduction We return to the basic perturbation model of Fig. If the unit matrix I has dimension 1 this represents a real parameter variation.7.5. Table 5.155) = · · · · · · · · · · · ·  .1. which we repeat in Fig. Several examples that we considered in the preceding sections confirm this. however.. In this approach. 232 . 5. Perturbations ( jω) whose norm σ( ¯ ( jω)) does not exceed the number 1/σ( ¯ H ( jω)). 5.30(b) (Safonov and Athans 1981). As demonstrated in § 5.5. In particular as the uncertainty is usually present in the plant P only. this is a repeated real scalar perturbation .1 (p. The stability robustness theorem of Summary 5. the model is very flexible in representing both structured perturbations (i.3.1 lists several ways to model plant uncertainty.8. the perturbation structure is detailed as in Fig.e. often quite conservative estimates are obtained of the stability region for structured perturbations. and has one of the following forms: i = δ I . 5. plant P P P P P D −1 N ¯D ¯ −1 N (D + V ¯ + W1 (N perturbed plant P+V (I + V P( I + V (I + V P( I + V D W1 ) −1 PW PW)P PW) −1 interconnection matrix H − W K SV −W T V − W K S PV − W SV − W ( I + K P )−1 V N W2 ) DV) −1 perturbation matrix P P P PW) P P PW) −1 P (N + V W2 − ¯ −1 −V D W1 S W2 K S K SW1 D −1 V ( I + K P )−1 W2 D N N V )( D + ¯ N D Table 5. Structured singular value robustness analysis 5.7. and not in controller K . 0 ··· 0 K Each diagonal block entry • i has fixed dimensions.: A list of perturbation models with plant P and controller K 5. As a result. Doyle (1982) proposed another measure for stability robustness based on the notion of what he calls structured singular value. are completely unstructured. variations of well-defined parameters) and unstructured perturbations.30(a). with δ a real number. Uncertainty Models and Robustness realize that we have a choice in how to model the uncertainty and that we need not necessarily do that in terms of the loop gain.

i i is a (not necessarily square) stable transfer matrix. with ε a real number that varies between 0 and 1.13 the Nyquist plot of det( I − H ) encircles the origin at least once. Im 0 1 ω Re Figure 5. For ε = 1 the modified Nyquist plot coincides with that of Fig. there must exist a value of ε in the interval (0.31. Suppose that a given perturbation destabilizes the system. with δ a stable11 scalar transfer matrix.7.30. Consider the Nyquist plot of det ( I − ε H ). This represents a multivariable dynamic perturbation .: (a) Basic perturbation model. Structured singular value robustness analysis H (a) ∆ H (b) ∆1 ∆2 ∆K Figure 5. A wide variety of perturbations may be modeled this way.31.30(a).31. This represents a scalar or repeated scalar dynamic perturbation . 5. if destabilizes the perturbed 11 A transfer function or matrix is “stable” if all its poles are in the open left-half plane.3. Then by the generalized Nyquist criterion of Summary 1. Since obviously the plot depends continuously on ε. model • • (b) Structured perturbation = δ I .5. Hence.: Nyquist plot of det( I − H ) for a destabilizing perturbation We study which are the largest perturbations with the given structure that do not destabilize the system of Fig. as illustrated in Fig. 5. while for ε = 0 the plot reduces to the point 1. 233 . 1] such that the Nyquist plot of det( I − ε H ) passes through the origin. 5.

system, there exist ε ∈ (0, 1] and ω ∈ ℝ such that det(I − εH(jω)∆(jω)) = 0. Therefore, a given perturbation ∆(jω), ω ∈ ℝ, does not destabilize the perturbed system if and only if there do not exist ε ∈ (0, 1] and ω ∈ ℝ such that det(I − εH(jω)∆(jω)) = 0.

Fix ω, and let κ(H(jω)) be the largest real number such that det(I − H(jω)∆(jω)) ≠ 0 for all ∆(jω) (with the prescribed structure) such that σ̄(∆(jω)) < κ(H(jω)). Then, obviously, if a perturbation ∆ satisfies

   σ̄(∆(jω)) < κ(H(jω))   for all ω ∈ ℝ,      (5.156)

then det(I − εH(jω)∆(jω)) ≠ 0 for ε ∈ (0, 1] and ω ∈ ℝ, so that the Nyquist plot of det(I − H∆) does not encircle the origin, and, hence, ∆ does not destabilize the system. Conversely, it is always possible to find a perturbation that violates (5.156) within an arbitrarily small margin that destabilizes the perturbed system. The number κ(H(jω)) is known as the multivariable robustness margin of the system H at the frequency ω (Safonov and Athans 1981). If no structure is imposed on the perturbation then κ(H(jω)) = 1/σ̄(H(jω)). We note that for fixed ω

   κ(H(jω)) = sup{ γ | σ̄(∆(jω)) < γ ⇒ det(I − H(jω)∆(jω)) ≠ 0 }
            = inf{ σ̄(∆(jω)) | det(I − H(jω)∆(jω)) = 0 }.      (5.157)

This led Doyle to terming the number µ(H(jω)) = 1/κ(H(jω)) the structured singular value of the complex-valued matrix H(jω). Besides being determined by H(jω) the structured singular value is of course also dependent on the perturbation structure.

5.7.2. The structured singular value

We are now ready to define the structured singular value of a complex-valued matrix M. Given the perturbation structure of the structured perturbation model, define D as the class of constant complex-valued matrices of the block diagonal form

   ∆ = diag(∆1, ∆2, …, ∆K).      (5.158)

The diagonal block ∆i has the same dimensions as the corresponding block of the dynamic perturbation, and has one of the following forms:

• ∆i = δ I, with δ a real number, if the dynamic perturbation is a scalar or repeated real scalar perturbation;
• ∆i = δ I, with δ a complex number, if the dynamic perturbation is a scalar dynamic or a repeated scalar dynamic perturbation;
• ∆i is a complex-valued matrix if the dynamic perturbation is a multivariable dynamic perturbation.

Definition 5.7.1 (Structured singular value). Let M be an n × m complex-valued matrix, and D a set of m × n structured perturbation matrices. Then the structured singular value of M given the set D is the number µ(M) defined by

   1/µ(M) = inf{ σ̄(∆) | ∆ ∈ D, det(I − M∆) = 0 }.      (5.159)

If det(I − M∆) ≠ 0 for all ∆ ∈ D then µ(M) = 0.

The structured singular value µ(M) is the inverse of the norm of the smallest perturbation (within the given class D) that makes I + M∆ singular. Thus, the smaller the perturbation is that is needed to make I + M∆ singular, the larger µ(M) is.

Example 5.7.2 (Structured singular value). By way of example we consider the computation of the structured singular value of a 2 × 2 complex-valued matrix

   M = [ m11  m12 ;  m21  m22 ]      (5.160)

under the real perturbation structure

   ∆ = [ δ1  0 ;  0  δ2 ],      (5.161)

with δ1 ∈ ℝ and δ2 ∈ ℝ. It is easily found that

   σ̄(∆) = max(|δ1|, |δ2|),   det(I + M∆) = 1 + m11 δ1 + m22 δ2 + m δ1 δ2,      (5.162)

with

   m = det(M) = m11 m22 − m12 m21.      (5.163)

Hence, the structured singular value of M is the inverse of

   κ(M) = inf{ max(|δ1|, |δ2|) | δ1 ∈ ℝ, δ2 ∈ ℝ, 1 + m11 δ1 + m22 δ2 + m δ1 δ2 = 0 }.      (5.164)

Elimination of, say, δ2 results in the equivalent expression

   κ(M) = inf_{δ1 ∈ ℝ} max( |δ1|, |1 + m11 δ1| / |m22 + m δ1| ).      (5.165)

Suppose that M is numerically given by

   M = [ 1  2 ;  3  4 ].      (5.166)

Then it may be found (the details are left to the reader) that

   κ(M) = (−5 + √33)/4,      (5.167)

so that

   µ(M) = 4/(−5 + √33) = (5 + √33)/2 = 5.3723.

Exercise 5.7.3 (Structured singular value). Fill in the details of the calculation.

Exercise 5.7.4 (Alternative characterization of the structured singular value). Prove (Doyle 1982) that if all perturbations are complex then

   µ(M) = max{ ρ(M∆) | ∆ ∈ D, σ̄(∆) ≤ 1 },      (5.168)

with ρ denoting the spectral radius — that is, the magnitude of the largest eigenvalue (in magnitude). Show that if on the other hand some of the perturbations are real then

   µ(M) = max{ ρR(M∆) | ∆ ∈ D, σ̄(∆) ≤ 1 }.      (5.169)

Here ρR(A) is the largest of the magnitudes of the real eigenvalues of the complex matrix A. If A has no real eigenvalues then ρR(A) = 0.
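The computation of Example 5.7.2 is easily verified numerically. The following MATLAB sketch performs the one-dimensional search of (5.165) on a fine grid; the matrix is the one of the example.

   % Numerical check of Example 5.7.2: mu of M = [1 2; 3 4] under a real
   % diagonal perturbation structure Delta = diag(delta1, delta2).
   M = [1 2; 3 4];
   m11 = M(1,1);  m22 = M(2,2);  m = det(M);

   % For each delta1, solve 1 + m11*delta1 + m22*delta2 + m*delta1*delta2 = 0
   % for delta2 and record the size max(|delta1|, |delta2|) of the perturbation.
   d1 = linspace(-2, 2, 200001);
   d2 = -(1 + m11*d1)./(m22 + m*d1);
   kappa = min(max(abs(d1), abs(d2)));       % multivariable robustness margin
   mu = 1/kappa;

   fprintf('numerical: kappa = %8.5f   mu = %8.5f\n', kappa, mu);
   fprintf('analytic:  kappa = %8.5f   mu = %8.5f\n', (sqrt(33)-5)/4, (5+sqrt(33))/2);

The grid search reproduces κ(M) ≈ 0.18614 and µ(M) ≈ 5.3723.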

3.7 (Principal properties of the structured singular value).170) with µ the structured singular value with respect to the perturbation structure D . Then Summary 5. Upper and lower bounds: Suppose that M is square. ω ∈R Suppose that the perturbations are scaled. The structured singular value µ( M ) of a matrix M under a structured perturbation set D has the following properties: 1. 5. we have (5. but first summarize its application to robustness analysis. Even more can be said: Summary 5. 5.5 (Structured robust stability).6 (Structured robust stability for scaled perturbations). (5. ω ∈R ≤ 1.7. µ( H ( jω)) (5. Given the stable unperturbed system H the perturbed system of Fig. Properties of the structured singular value Before discussing the numerical computation of the structured singular value we list some of the principal properties of the structured singular value (Doyle 1982.4. Packard and Doyle 1993). so that the perturbed system is stable if µ H < 1. Scaling property: µ(α M ) = |α| µ( M ) (5. Then ρR ( M ) ≤ µ( M ) ≤ σ( ¯ M ).174) 236 . where µ H = sup µ( H ( jω)). If robust stability is required with respect to all perturbations within the perturbation class that satisfy the bound (5. such that ( jω) ∈ D for all ω ∈ R and Clearly. if the perturbations are scaled to a maximum norm of 1 then a necessary and sufficient condition for robust stability is that µ H < 1.5. Uncertainty Models and Robustness 5.171) ∞ = sup σ( ¯ ( jω)). Summary 5.7.173) for every α ∈ R.7.170) then the condition is besides sufficient also necessary. we call µ H the structured singular value of H .5 implies that (5.4.30(b) is stable for all stable perturbations ∞ ≤ 1 if and only if µ H < 1.7. Summary 5.172) With some abuse of terminology. 5. If none of the perturbations are real then this holds as well for every complex α. Structured singular value and robustness We discuss the structured singular value in more detail in § 5.7. 2. Given a structured perturbation ∞ such that ( jω) ∈ D for every ω ∈ R.30(b) is stable for all perturbations such that ( jω) ∈ D for all ω ∈ R if σ( ¯ ( jω)) < 1 for all ω ∈ R.7.7. The perturbed system of Fig.

with d i a positive real number.180) 237 .7. Then ¯ −1 ).179) with a1 . ¯ i = d i Im i .7. a2 .7.5. Ik denotes a k × k unit matrix.177) 4.7. Summary 5. A useful formula The following formula for the structured singular value of a 2 × 2 dyadic matrix has useful applications. The proof is given in § 5. 1 ∈ C. If none of the perturbations is real then ρ( M ) ≤ µ( M ) ≤ σ( ¯ M ).178) The scaling property is obvious.181) 1 0 2 0 .7. Exercise 5.9.8 (Proof of the principal properties of the structured singular value). Preservation of the structured singular value under unitary transformation: Suppose that the ith diagonal block of the perturbation structure has dimensions mi × ni . Preservation of the structured singular value under diagonal scaling: Suppose that the ith diagonal block of the perturbation structure has dimensions mi × ni . (5. µ( M ) = µ( DM D (5. The properties 3 and 4 are easily checked. Form the ¯ whose ith diagonal blocks are given by the mi × mi block diagonal matrices Q and Q ¯ i . 2 ∈ C. with respect to the perturbation structure = is µ( M ) = |a1 b1 | + |a2 b2 |. Form two block ¯ whose ith diagonal blocks are given by diagonal matrices D and D D i = d i In i . Structured singular value robustness analysis with ρR defined as in Exercise 5. b1 . with δ a real or complex number.7. The upper bounds also apply if M is not square. 3.4. (5. The structured singular value of the 2 × 2 dyadic matrix M= a1 b1 a2 b1 a1 b2 a = 1 a2 b2 a2 b1 b2 . 5. Then unitary matrix Qi and the ni × ni unitary matrix Q ¯ ). The upper bound in 2 follows by considering unrestricted perturbations of the form ∈ Cm×n (that is. (5. D (5. (5. Fill in the details of the proof of Summary 5.175) with ρ denoting the spectral radius. respectively. µ( M ) = µ( QM ) = µ( M Q (5.176) respectively. and b2 complex numbers.9 (Structured singular value of a dyadic matrix).5. is a full m × n complex matrix).7. The lower bound in 2 is obtained by considering restricted perturbations of the form = δ I .
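The dyadic formula of Summary 5.7.9 may be checked numerically. For a rank one matrix M = a bᵀ with a diagonal perturbation ∆ = diag(δ1, δ2) the determinant reduces to det(I − M∆) = 1 − a1 b1 δ1 − a2 b2 δ2, so a smallest destabilizing ∆ can be written down explicitly. The sketch below does this in MATLAB; the numbers a and b are arbitrary test values.

   % Check of Summary 5.7.9: mu of the dyadic matrix M = a*b.' equals
   % |a1*b1| + |a2*b2| for the diagonal perturbation structure.
   a = [1+2i; -0.5];   b = [0.3-1i; 2];     % arbitrary complex test values
   M  = a*b.';
   mu = abs(a(1)*b(1)) + abs(a(2)*b(2));

   % A perturbation of norm 1/mu that makes I - M*Delta singular:
   ab = [a(1)*b(1); a(2)*b(2)];
   d  = conj(ab) ./ (abs(ab) * mu);
   Delta = diag(d);
   fprintf('sigma_max(Delta)    = %g (should equal 1/mu = %g)\n', max(abs(d)), 1/mu);
   fprintf('|det(I - M*Delta)|  = %g (should be close to 0)\n', abs(det(eye(2) - M*Delta)));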

and Doyle (1995a)). Example 5. Summary 5. bers d i that determine D and D Suppose that the perturbation structure consists of m repeated scalar dynamic perturbation blocks and M full multivariable perturbation blocks. provided that we use a generalization of Dscaling called ( D . Then with property (b) we have ¯ −1 ).7.7. Let the diagonal matrices D and D 5. The closeness of the bounds is a measure of the reliability of the calculation. By way of example.7. Let Q be a unitary matrix as constructed in (d) of Summary 5. ¯ be chosen as in (c) of Summary 1. Then if 2m + M ≤ 3 the minimized upper bound actually equals the structured singular value12 2.7. and Doyle (1995b) and Young. Newlin. The M ATLAB Robust Control Toolbox (Chiang and Safonov 1992) provides routines for calculating upper bounds on the structured singular value for both complex and real perturbations. Lower bound. µ( M ) = max ρ( M Q ). The numerical methods that are presently available for approximating the structured singular value are based on calculating upper and lower bounds for the structured singular value. Real perturbations are not handled by this version of the µ-toolbox.7.7. 238 . Glover. Packard. consider the SISO feedback system that was studied in Example 5.7.11 (SISO system with two parameters).183) (5.184) with Q varying over the set of matrices as constructed in (d) of Summary 5.2. Numerical approximation of the structured singular value Exact calculation of the structured singular value is often not possible and in any case computationally intensive. Uncertainty Models and Robustness 5. Algorithms for real and mixed complex-real perturbations are the subject of current research (see for instance Young.185) P (s ) = 2 s (1 + sθ) 12 If there are real parametric perturbations then this result remains valid.7.7. Example We apply the singular value method for the analysis of the stability robustness to an example. Actually (Doyle 1982). 5. Doyle. Q (5. This is a generalization that allows to exploit realness of perturbations.182) The rightmost side may be numerically minimized with respect to the free positive num¯. ¯ −1 ) ≤ σ( ¯ DM D µ( M ) = µ( DM D (5.5.1 and several other places. Practical algorithms for computing lower and upper bounds on the structured singular value for complex perturbations have been implemented in the M ATLAB µ-Analysis and Synthesis Toolbox (Balas.7. D-scaling upper bound.7. and Smith 1991). With property (b) we have (for complex perturbations only) µ( M ) = µ( M Q ) ≥ ρ( M Q ). Newlin.10 (Upper and lower bounds for the structured singular value). G )-scaling. A plant with transfer function g (5.6.
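For two scalar perturbation blocks the D-scaling upper bound of Summary 5.7.10 involves only one free scaling parameter, since D = diag(d, 1) may be taken without loss of generality. The sketch below minimizes σ̄(DMD⁻¹) over d for the matrix of Example 5.7.2 and compares the result with the spectral radius lower bound. Note that these bounds refer to complex perturbations, so the resulting number need not coincide with the real-perturbation value found in Example 5.7.2; the grids used are illustrative only.

   % D-scaling upper bound and spectral radius lower bound for complex
   % diagonal perturbations of the matrix of Example 5.7.2.
   M = [1 2; 3 4];

   d  = logspace(-2, 2, 2000);
   ub = zeros(size(d));
   for i = 1:length(d)
       D = diag([d(i), 1]);
       ub(i) = max(svd(D*M/D));            % sigma_max( D*M*inv(D) )
   end
   mu_ub = min(ub);

   % Lower bound: rho(M*Q) over diagonal unitary Q = diag(1, e^{j*phi}).
   phi = linspace(0, 2*pi, 721);
   lb  = zeros(size(phi));
   for i = 1:length(phi)
       Q = diag([1, exp(1i*phi(i))]);
       lb(i) = max(abs(eig(M*Q)));
   end
   mu_lb = max(lb);

   fprintf('lower bound %.4f <= mu(M) <= upper bound %.4f\n', mu_lb, mu_ub);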

Inspection shows that the right-hand side of this expression is identical to that of (5.3. α2 .2. 5.7. The interconnection matrix H has a dyadic structure. and β2 are scaling factors such that | g − g0 | ≤ ε1 with ε1 = α1 β1 .10(1) this is no coincidence.187) where the subscript o indicates nominal values.17 of the structured singular value for the two cases of Example 5.7. and L0 = P0 C is the nominal loop gain. (5. which was obtained by a singular value analysis based on optimal scaling. β1 . = 2 + ε ω3 1 + ω2 T 2 ε1 k2 + ω2 Td 2 0 |χ0 ( jω)| (5.2.3 it was found that the interconnection matrix for scaled perturbations in the gain g and time constant θ is given by C (s ) = H (s ) = 1 1+s θ0 β1 C ( s ) −β2 1 + L0 ( s ) 1 −α s2 α2 s . 5. Exercise 5.99) in Example 5. Only in case (b) robust stability is certain.32. Application 2 µ(H( j ω)) 1 (a) (b) 0 10 −2 10 −1 10 0 10 1 ω 10 2 10 3 Figure 5.32 repeats the plots of of Fig. In view of the statement in Summary 5. Structured singular value robustness analysis is connected in feedback with a compensator with transfer function k + sTd . Since the perturbation analysis is based on dynamic rather than parameter perturbations the results are conservative.5.3. For case (a) the peak value of the structured singular value is greater than 1 so that robust stability is not guaranteed.9 shows that its structured singular value with respect to complex perturbations in the parameters is µ( H ( jω)) = 1 1+ω2 θ2 0 |1 + L0 (ω)| α1 β 1 |C ( jω)| + α2 β 2 ω ω2 .5.5. Figure 5.32 using the appropriate numerical routines from the µ-Toolbox or the Robust Control Toolbox for the computation of structured singular values.188) with χ0 (s ) = θ0 T0 s4 + (θ0 + T0 )s3 + s2 + g0 Td s + g0 k the nominal closed-loop characteristic polynomial. (5.3.: Structured singular values for the two-parameter plant for two cases of the result of Summary 5.7. In Example 5.12 (Computation of structured singular values with M ATLAB). 239 . and |θ − θ0 | ≤ ε2 with ε2 = α2 β2 . Reproduce the plots of Fig.186) 1 + sT0 We use the design values of the parameters established in Example 5.7. The numbers α1 .

34. and reference signals. We continue in this section by considering performance and its robustness. H ∞ = 0.33. Considering the combined signal   r w = v m as the external input.33 represents a control system with external input w. (5.189) where S is the sensitivity matrix of the feedback loop and T the complementary sensitivity matrix. Combined performance and stability robustness 5. it follows that the transfer matrix of the control system is H = TF− I S −T . To demonstrate how performance may be specified this way.5. or. The tracking error z = y − r is given by z = y − r = ( T F − I )r + S v − T m. m the measurement noise. Ideals cannot always be obtained.8.1 (Two-degree-of-freedom feedback system). and y the control system output. 5.33 is ideal if H = 0. For simplicity we assume that it is a SISO system.190) (5. Introduction In the preceding section we introduced the structured singular value to study the stability robustness of the basic perturbation model. By introducing suitable frequency dependent scaling functions (that is. C the compensator.” rather than zero. again consider the two-degree-of-freedom feedback system of Fig. and that the reference signal and measurement noise 240 . 5.193) Example 5. P is the plant. (5.191) The performance of the control system of Fig. measurement noise. v the disturbance.: Control system Example 5.8. (5. by modifying H to W HV ) we may arrange that “satisfactory” performance is obtained if and only if H ∞ < 1. To illustrate this model. and F a precompensator. such as disturbances.34.2 (Performance specification). and ideally should be zero. w H z Figure 5. and external output z. The output signal z represents an error signal. The transfer matrix H is the transfer matrix from the external input w to the error signal z.192) (5.8. consider the two-degree-of-freedom feedback configuration of Fig. so we settle for the norm H ∞ to be “small. u the plant input. equivalently. Figure 5.1. 5.8. The signal r is the reference signal. It is easy to find that y = = ( I + PC )−1 PC Fr + ( I + PC )−1 v − ( I + PC )−1 PCm T Fr + S v − T m. Uncertainty Models and Robustness 5.

are absent. It follows from Example 5.8.1 that the control system transfer function reduces to the sensitivity function S of the feedback loop:

   H = 1/(1 + PC) = S.      (5.194)

   Figure 5.34.: Two-degree-of-freedom feedback control system

Performance is deemed to be adequate if the sensitivity function satisfies the requirement

   |S(jω)| < |Sspec(jω)|,   ω ∈ ℝ,      (5.195)

with Sspec a specified rational function, such as

   Sspec(s) = s² / (s² + 2ζω0 s + ω0²).      (5.196)

The parameter ω0 determines the minimal bandwidth, while the relative damping coefficient ζ specifies the allowable amount of peaking. A doubly logarithmic magnitude plot of Sspec is shown in Fig. 5.35 for ω0 = 1 and ζ = ½.

The inequality (5.195) is equivalent to

   |S(jω) V(jω)| < 1   for ω ∈ ℝ,      (5.197)

with V = 1/Sspec. Redefining H as H = SV this reduces to ‖H‖∞ < 1.

   Figure 5.35.: Magnitude plot of Sspec
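The scaled specification is easy to check numerically once a candidate design is available. The MATLAB sketch below evaluates |S(jω)V(jω)| on a frequency grid for the values ω0 = 1 and ζ = ½ used in Fig. 5.35; the plant and compensator are hypothetical placeholders, not a design from the text.

   % Checking |S(jw)| < |Sspec(jw)| by verifying |S(jw) V(jw)| < 1, V = 1/Sspec.
   w0 = 1;  zeta = 0.5;
   omega = logspace(-2, 2, 500);
   s = 1i*omega;

   Sspec = s.^2 ./ (s.^2 + 2*zeta*w0*s + w0^2);
   V     = 1./Sspec;

   P = 1./s.^2;                    % hypothetical plant (double integrator)
   C = 2 + 2*s;                    % hypothetical compensator
   S = 1./(1 + P.*C);              % sensitivity of the closed loop

   if all(abs(S.*V) < 1)
       disp('|S(jw)| < |Sspec(jw)| at all grid frequencies.');
   else
       disp('Specification violated at some grid frequencies.');
   end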

p (5. 5. (b) Doubly perturbed system. ∆0 w p z q w p z q H H ∆ (a) ∆ (b) Figure 5. Uncertainty Models and Robustness By suitable scaling we thus may arrange that the performance of the system of Fig. and is scaled such that ∞ ≤ 1.199) H11 is the nominal control system transfer matrix. 5.: (a) Perturbed control system. and 2.37(a).: Artificially perturbed system 5. We describe the system by the interconnection equations z w H11 =H = H21 q p H12 H22 w . 242 .1 this specification is equivalent to the requirement that the artificially perturbed system of Fig. Performance robustness Performance is said to be robust if H ∞ remains less than 1 under perturbations.37. the perturbed system remains stable under all perturbations. (5.5. 5. the ∞-norm of the transfer matrix of the perturbed system remains less than 1 under all perturbations. To make this statement more specific we consider the configuration of Fig.8.198) In what follows we exploit the observation that by (1) of Summary 5. We define performance to be robust if 1. The perturbation may be structured.36 remains stable under all stable perturbations 0 such that 0 ∞ ≤ 1.33 is considered satisfactory if H ∞ < 1.5.2. ∆0 w H z Figure 5.36.

(5. Robust performance of the system of Fig. and Tannenbaum (1992)). in turn. This.8. (5.37(a) is achieved if and only if µ H < 1.: SISO feedback system.38(a).3 (Robust performance and stability).201) Summary 5.8.200) with µ the structured singular value of H with respect to the perturbation structure defined by 0 0 0 . 243 . structured as specified. (b) Perturbed. SISO design for robust stability and performance We describe an elementary application of robust performance analysis using the structured singular value (compare Doyle.3.203) is a “full” perturbation. 5.8.38.37(a) is less than 1 for every perturbation with norm less than or equal to 1. where µ H is the structured singular value of H under perturbations of the form 0 (5. and w u + + z (a) − C P w q W2 − C u P δP + p + + + y W1 z (b) Figure 5.37(b) is stable for every “full” perturbation 0 and every structured perturbation . both with norm less than or equal to 1. is equivalent to the condition that the augmented perturbation model of Fig. Necessary and sufficient for this is that µ H < 1. Consider the feedback control system configuration of Fig. A SISO plant P is connected in feedback with a compensator C. (5. Combined performance and stability robustness Necessary and sufficient for robust performance is that the norm of the perturbed transfer matrix from w to z in the perturbation model of Fig. 5. Francis. 5. (a) Nominal. 5.202) 0 0 0 . 5.5.

• Performance is measured by the closed-loop transfer function from the disturbance w to the control system output z, that is, by the sensitivity function S = 1/(1 + PC). Performance is considered satisfactory if

   |S(jω) W1(jω)| < 1,   ω ∈ ℝ,      (5.204)

with W1 a suitable weighting function.

• Plant uncertainty is modeled by the scaled uncertainty model

   P → P(1 + δP W2),      (5.205)

with W2 a stable function representing the maximal uncertainty, and δP a scaled stable perturbation such that ‖δP‖∞ ≤ 1.

The block diagram of Fig. 5.38(b) includes the weighting filter W1 and the plant perturbation model. By inspection we obtain the signal balance equation y = w + p − PCy, so that

   y = (1/(1 + PC)) (w + p).      (5.206)

By further inspection of the block diagram it follows that

   z = W1 y = W1 S (w + p),      (5.207)
   q = −W2 PCy = −W2 T (w + p).      (5.208)

Here

   S = 1/(1 + PC),   T = PC/(1 + PC)      (5.209)

are the sensitivity function and the complementary sensitivity function of the feedback system, respectively. Thus, the transfer matrix H in the configuration of Fig. 5.37(b) follows from

   [ q ; z ] = [ −W2 T  −W2 T ; W1 S  W1 S ] [ p ; w ].      (5.210)

H has the dyadic structure

   H = [ −W2 T ; W1 S ] [ 1  1 ].      (5.211)

With the result of Summary 5.7.9 we obtain the structured singular value of the interconnection matrix H as

   µH = sup_{ω ∈ ℝ} µ(H(jω)) = sup_{ω ∈ ℝ} ( |W1(jω) S(jω)| + |W2(jω) T(jω)| )      (5.212)
      = ‖ |W1 S| + |W2 T| ‖∞.      (5.213)

By Summary 5.8.3, robust performance and stability are achieved if and only if µH < 1.
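Because of the dyadic structure (5.211), the robust performance test only requires the scalar functions S and T and the two weights. A MATLAB sketch follows; the plant, compensator and weighting functions are placeholders chosen for illustration, not the design of the example below.

   % Robust performance test (5.212)-(5.213): mu_H = sup_w ( |W1 S| + |W2 T| ) < 1.
   omega = logspace(-2, 2, 600);
   s = 1i*omega;

   P  = 1./s.^2;                          % hypothetical plant
   C  = 2 + 2*s;                          % hypothetical compensator
   L  = P.*C;
   S  = 1./(1 + L);   T = L./(1 + L);

   W1 = 0.5*(s.^2 + s + 1)./s.^2;         % hypothetical performance weight (large at low frequencies)
   W2 = 0.1*(1 + 0.5*s);                  % hypothetical uncertainty weight (large at high frequencies)

   mu_H = max(abs(W1.*S) + abs(W2.*T));
   fprintf('mu_H = %.3f; robust performance and stability iff mu_H < 1\n', mu_H);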

This comes down to choosing the weighting function W1 in (5.39 magnitude plots are given of the nominal sensitivity and complementary sensitivity functions of this feedback system. 245 .40(a) shows a plot of the right-hand side of (5.: Nominal sensitivity and complementary sensitivity functions Example 5. Combined performance and stability robustness 10 1 . 1+ε ε 1+ε ω ∈ R.39.217) with T0 the nominal complementary sensitivity function.25.8.216) We consider how to select the weighting function W2 .001 . In Fig.204) as W1 = 1 . (5.001 .4 (Robust performance of SISO feedback system).01 |T | .215) with S0 the nominal sensitivity function and the positive number ε a tolerance.218) for ε = 0. s2 C (s ) = k + sTd . We consider the SISO feedback system we studied in Example 5. ω ∈ R. This is equivalent to | W2 ( jω)| < | T0 ( jω)| .2. This right-hand side is a frequency dependent bound for the maximally allowable scaled perturbation δ P . 198) and on several other occasions.2. (5. (1 + ε) S0 (5. 199). 1 + sT0 (5.1 . We use the design values of Example 5. which specifies the allowable perturbations. 5.01 |S| 10 1 .212) reduces to 1 + | W2 ( jω) T0 ( jω)| < 1. With W1 chosen as in (5.214) respectively.5. so we impose as design specification that under perturbation S ( jω) ≤ 1 + ε.2.01 1 ω 100 1 ω 100 Figure 5.8.2 (p. The nominal sensitivity function is completely acceptable.1 (p. S0 ( jω) ω ∈ R.218) Figure 5. The plot shows that for low frequencies the admissible proportional uncertainty is limited to ε/(1 + ε) = 0. (5.1 . with nominal plant and compensator transfer functions P0 (s ) = g0 .01 .216) the robust performance criterion µ H < 1 according to (5.
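The bound (5.218) and the actual proportional perturbation are easily compared over frequency. The sketch below does this for a perturbation of the gain and the parasitic time constant; the nominal compensator is a placeholder, so the numbers only illustrate the procedure, not the curves of Fig. 5.40.

   % Frequency dependent bound (5.218) on the allowable scaled perturbation,
   % compared with the actual proportional perturbation (5.220).
   eps_tol = 0.25;
   g0 = 1;  g = 1.1;  theta = 0.1;        % perturbed parameter values
   omega = logspace(-2, 2, 600);
   s = 1i*omega;

   C  = 2 + 2*s;                          % hypothetical compensator
   L0 = C./s.^2;                          % nominal loop gain (g0 = 1)
   T0 = L0./(1 + L0);                     % nominal complementary sensitivity

   bound  = eps_tol./((1 + eps_tol)*abs(T0));
   deltaP = ((g - g0)/g0 - s*theta)./(1 + s*theta);

   if all(abs(deltaP) < bound)
       disp('The perturbation satisfies the bound (5.218) at all grid frequencies.');
   else
       disp('The bound (5.218) is violated, typically in the crossover region.');
   end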

25 then the latter number is about 0. which is about 1+ ε 0. Suppose that g varies between 0. For performance robustness this number definitely needs to be less than the minimum of the bound on the rightε hand side of (5.5. we may use the uncertainty specification (in the form of W2 ) to estimate the largest possible performance variations.1. as we did before.1 . Conversely. If ε = 0.1 10 bound 1 .01 1 ω 100 1 ω 100 (a) Uncertainty bound and admissible perturbation (b) Effect of perturbation on S Figure 5.9 or g = 1. s2 (1 + sθ) g− g0 g0 (5. Figure 5.2.40(b) shows the magnitude plot of the sensitivity function S for g = 1. 1. that the plant is perturbed to P (s ) = g .218) to determine the smallest value of ε for which robust performance is guaranteed.220) At low frequencies the perturbation has approximate magnitude | g − g0 |/ g0 .15.1 and θ = 0.01 |(1 + ε )S o | | S| | ∆P | .5 (Bound on performance robustness). Comparison with the bound |(1 + ε) S0 | shows that performance is robust. (5. For this perturbation the performance robustness test is just satisfied.1 (that is.75.1.219) The corresponding proportional plant perturbation is P (s ) = P (s ) − P0 (s ) = P0 (s ) − sθ 1 + sθ .5 and 1.001 .2. g = 0. Uncertainty Models and Robustness 10 1 . Determine a function W2 that tightly bounds the uncertainty P of (5.8.1 then the perturbation fails the test.4 we used the performance specification (in the form of W1 ) to find bounds on the maximally allowable scaled perturbations δ P .01 .5 and θ between 0 and 0.218). In the crossover frequency region the allowable perturbation is even less than 0.001 .220). In Example 5.01 .40. 2. as expected.1 or θ is increased beyond 0.9 or larger than 1. 246 .: The bound and its effect on S but that the system is much more robust with respect to high-frequency perturbations. If g is made smaller than 0.40(a) includes a magnitude plot of P for | g − g0 |/ g0 = 0.1) and θ = 0. Figure 5.8. Use (5. Exercise 5. Suppose.

Given M as the product of a column and row vector M = ab we have det ( I − M ) = det ( I − ab ) = det (1 − b a ) = 1 − i ai bi i. (5.224) For det ( I − M ) to be zero we need at least that the lower bound is not positive. for any u and v in U we have Lu − L v U = L (u − v) U ≤ L · u − v U . . for every v ∈ U the feedback equation has a unique solution w ∈ U . 5. Proof of Small gain theorem.7.223) This gives a lower bound for the real part of det ( I − M ). Appendix: Proofs 3. Appendix: Proofs In this Appendix to Chapter 5 the proofs of certain results in are sketched. ··· ··· 247 .225)  . If L is a contraction with contraction constant k then it follows from Lu U ≤ k u U that L ≤ k. This proves that L is a contraction if and only if its norm is strictly less than 1. Proof of structured singular value of dyadic matrix. For any fixed input v ∈ U . Now take equal to   0 ··· sgn (a1 b1 ) 1  0 sgn (a2 b2 ) · · · = (5. if L is a contraction. so is the map defined by w → v + L w. if L < 1. It follows that the system with input v and output w is BIBO stable. Structured singular value of a dyadic matrix We prove Summary 5. Denote the normed signal space that L operates on as U . 5. .4. Check how tight the bound on performance is that follows from the value of ε obtained in 2. Conversely.. 5.9. that σ( ) ≥ 1/ i |ai bi |. (5. 5.2. 2 ··· ··· ··· there holds that µ( M ) = i (5.9. Small gain theorem We prove Summary 5.9. Hence. . The proof of the small gain theorem follows by applying the fixed point theorem to the feedback equation w = v + L w that follows from Fig. and perturbation structure   0 ··· 1 = 0 · · · .5. .9. In fact we prove the generalization that for rank 1 matrices   a1 a2  ai .222) |ai bi |. b j ∈ M =   b1 b2 · · · . det ( I − M ) = 1 − i ai bi i ≥1− i |ai bi | max | i i| =1− i |ai bi |σ( ).221) i ∈ (5. that is. if L is a contraction.9.12(b).1. Hence.2. i |ai bi | . Compute the sensitivity function S for a number of values of the parameters g and θ within the uncertainty region. so that L is a contraction. L < 1.

226) Hence a smallest (in norm) that makes I − M singular has norm 1/ i |ai bi |. Uncertainty Models and Robustness Then σ( ) = 1/ i |ai bi | and it is such that a jb j j j det ( I − M ) = 1 − = 1− j a jb j sgn a j b j = 0. It may be shown that the D-scaling upper bound for rank 1 matrices equals µ( M ) in this case. 248 . Therefore µ( M ) = i |ai bi |.5. i |ai bi | (5.

6. H∞-Optimization and µ-Synthesis

Overview – Design by H∞-optimization involves the minimization of the peak magnitude of a suitable closed-loop system function. It is very well suited to frequency response shaping. Moreover, robustness against plant uncertainty may be handled more directly than with H2 optimization.
Design by µ-synthesis aims at reducing the peak value of the structured singular value. It accomplishes joint robustness and performance optimization.

6.1. Introduction

In this chapter we introduce what is known as H∞-optimization as a design tool for linear multivariable control systems. H∞-optimization amounts to the minimization of the ∞-norm of a relevant frequency response function. The name derives from the fact that mathematically the problem may be set in the space H∞ (named after the British mathematician G. H. Hardy), which consists of all bounded functions that are analytic in the right-half complex plane. We do not go to this length, however.

H∞-optimization resembles H2-optimization, where the criterion is the 2-norm. Because the 2- and ∞-norms have different properties the results naturally are not quite the same. An important aspect of H∞ optimization is that it allows to include robustness constraints explicitly in the criterion.

In § 6.2 (p. 250) we discuss the mixed sensitivity problem. This special H∞ problem is an important design tool. We show how this problem may be used to achieve the frequency response shaping targets enumerated in § 1.5 (p. 27).

In § 6.3 (p. 258) we introduce the standard problem of H∞-optimization, which is the most general version. The mixed sensitivity problem is a special case. Several other special cases of the standard problem are exhibited. Closed-loop stability is defined and a number of assumptions and conditions needed for the solution of the H∞-optimization problem are reviewed.

The next few sections are devoted to the solution of the standard H∞-optimization problem. In § 6.4 (p. 264) we derive a formula for so-called suboptimal solutions and establish a lower bound for the ∞-norm. We also establish how all stabilizing suboptimal solutions, if any exist, may be obtained. The key tool in solving H∞-optimization problems is J-spectral factorization. In § 6.5 (p. 272) we show that by using state space techniques the J-spectral factorization that is re-

6. 6. H∞ -Optimization and µ-Synthesis quired for the solution of the standard problem may be reduced to the solution of two algebraic Riccati equations.3) ∞ < 1. Introduction In § 5. 275) we discuss optimal (as opposed to suboptimal) solutions and highlight some of their peculiarities.4) ∞ 250 . 292) is given over to a rather elaborate description of an application of µ-synthesis. Given a feedback system with compensator C and corresponding loop gain L = PC that does not satisfy the inequality (6. Section 6.10 (p. and W2 are so chosen that the scaled perturbation δ p = [−δ D δ N ] satisfies δ P ∞ ≤ 1. contains a number of proofs.6 (p.1. The mixed sensitivity problem 6.2) S = ( I + L )−1 and T = L ( I + L )−1 are the sensitivity matrix and the complementary sensitivity matrix of the closed-loop system. Section 6.2. An effective way of doing this is to consider the problem of minimizing H ∞ with respect to all compensators C that stabilize the system. In this case. − W2 T V (6.7 (p. Stability robustness is guaranteed if H where H= W1 SV . 279) explains how integral control and high-frequency rolloff may be handled.: Feedback loop L = N D−1 −→ ( I + V δ N W2 ) N D−1 ( I + V δ D W1 )−1 .2. finally. with L the loop gain L = PC . Section 6. W1 . stability robustness is only obtained for perturbations satisfying δ P ∞ ≤ 1/γ . We considered fractional perturbations of the type − C P Figure 6.2) one may look for a different compensator that does achieve inequality. The problem of minimizing W1 SV − W2 T V (6.1.6. 304). Section 6. 227) we studied the stability robustness of the feedback configuration of Fig.1.6 (p.1) The frequency dependent matrices V . (6. 288) is devoted to an introductory exposition of µ-synthesis.8 (p.9 (p. If the minimal value γ of H ∞ is greater than 1 then no compensator exists that stabilizes the systems for all perturbations such that δ P ∞ ≤ 1. In § 6. This approximate technique for joint robustness and performance optimization uses H∞ optimization to reduce the peak value of the structured singular value. 6. respectively. (6.

If we denote the constant as γ 2 . The criterion (6. p. and consider the modified problem of minimizing W1 SV W2 UV (6. The mixed sensitivity problem (Kwakernaak 1983. | W2 ( jω) V ( jω)| ω ∈ R. ω ∈ R. but also to achieve a number of other important design targets for the one-degree-of-freedom feedback configuration of Fig. 275). we introduce a useful modification of the problem. We mainly consider the SISO mixed sensitivity problem. (6.1. The reason is that the solution of the mixed sensitivity sensitivity problem often has the equalizing property (see § 6. In what follows we explain that the mixed sensitivity problem cannot only be used to verify stability robustness for a class of perturbations.7) Many of the conclusions also hold for the MIMO case. although their application may be more involved. This property implies that the frequency dependent function | W1 ( jω) S ( jω) V ( jω)|2 + | W2 ( jω)U ( jω) V ( jω)|2 . The name derives from the fact that the optimization involves both the sensitivity and the complementary sensitivity function. For a fixed plant we may absorb the plant transfer matrix P (and the minus sign) into the weighting matrix W2 . then it immediately follows from | W1 ( jω) S ( jω) V ( jω)|2 + | W2 ( jω)U ( jω) V ( jω)|2 = γ 2 that for the optimal solution | W1 ( jω) S ( jω) V ( jω)|2 ≤ γ 2 .1.5) is the input sensitivity matrix introduced in § 1. We may write W2 T V = W2 L ( I + L )−1 V = W2 PC ( I + L )−1 V = W2 PUV .10) (6.6) ∞ with respect to all stabilizing compensators. Hence. Before starting on this. | W1 ( jω) V ( jω)| γ .2. 6.9) 251 . (6. Kwakernaak 1985) is a version of what is known as the mixed sensitivity problem (Verma and Jonckheere 1984). ω ∈R (6.2.11) (6. (6.6) then reduces to the square root of the scalar quantity sup | W1 ( jω) S ( jω) V ( jω)|2 + | W2 ( jω)U ( jω) V ( jω)|2 .5 (p. however. actually is a constant (Kwakernaak 1985). Frequency response shaping The mixed sensitivity problem may be used for simultaneously shaping the sensitivity and input sensitivity functions. We redefine the problem of minimizing (6.2.6. | S ( jω)| ≤ |U ( jω)| ≤ γ . ω ∈ R. where U = C ( I + PC )−1 (6. 6. | W2 ( jω)U ( jω) V ( jω)|2 ≤ γ 2 . 27) .6) as the mixed sensitivity problem.12) ω ∈ R.6.8) whose peak value is minimized. with γ nonnegative.

This is also true if the optimal solution does not have the equalizing property. This is the case if W1 (s ) V (s ) includes a factor sk in the denominator. Because the ∞-norm involves the supremum frequency response shaping based on minimization of the ∞-norm is more direct than for the H2 optimization methods of § 3. 6. Likewise. 130) and § 3.2. with W1 V large at low frequencies and W2 V large at high frequencies). Type k control. 1 The pole excess of a rational transfer function P is the difference between the number of poles and the number of zeros. that the limits of performance discussed in § 1. the weighting functions must be chosen with the respect due to these limits. H∞ -Optimization and µ-Synthesis By choosing the functions W1 . 40) can never be violated. 132) . and.15). Then | S ( jω)| behaves as ωk as ω → 0.5. Hence. suppose that | W2 ( jω) V ( jω)| behaves as ωm as ω → ∞.5.5 (p. W2 . for ω large. with e the pole excess1 of P. over the performance of the feedback system. If k = 1 then the system has integrating action. if the degree of the numerator of W2 V exceeds that of the denominator (by m).5 (p. however. If the weighting functions are suitably chosen (in particular. and the roll-offs of the complementary and input sensitivity functions. with k a nonnegative integer. for H2 optimization. hence. Then |U ( jω)| behaves as ω−m as ω → ∞. equality may often be achieved asymptotically.14–6.3. This is the case if W2 V is nonproper. This is important for robustness against high-frequency unstructured plant perturbations. Suppose that | W1 ( jω) V ( jω)| behaves as 1/ωk as ω → 0.6. with excellent low-frequency disturbance attenuation if k ≥ 1.14) (6. Note. From U = C /(1 + PC ) it follows that C = U/(1 + U P ). Hence. 127).4 (p. Similar techniques to obtain type k control and high-frequency roll-off are used in § 3. Hence.6 (p.15) + | W2 ( jω)U ( jω) V ( jω)|2 dominates at high frequencies = γ2. This number is also known as the relative degree of P 252 . and T = PC /(1 + PC ) behaves as ω−(m+e ). Type k control and high-frequency roll-off In (6. and V correctly the functions S and U may be made small in appropriate frequency regions. by choosing m we pre-assign the high-frequency roll-off of the compensator transfer function. | S ( jω)| ≈ |U ( jω)| ≈ γ | W1 ( jω) V ( jω)| γ | W2 ( jω) V ( jω)| for ω small. if P is strictly proper and m ≥ 0 then also C behaves as ω−m . that is. which implies a type k control system. High-frequency roll-off. respectively.13) This result allows quite effective control over the shape of the sensitivity and input sensitivity functions. then often the solution of the mixed sensitivity problem has the property that the first term of the criterion dominates at low frequencies and the second at high frequencies: | W1 ( jω) S ( jω) V ( jω)|2 dominates at low frequencies As a result. (6. (6.

2. This means that the open-loop poles (the roots of D). reappear as closed-loop poles. DX + NY (6.2. Since the right-hand side of (6. In particular. the polynomial E cancels against D in the left-hand side of (6. D B1 B2 E (6.21) If A is any rational or polynomial function then A∼ is defined by A∼ (s ) = A (−s ). ∼ ∼ ∼ E ∼ E · B1 B1 · B2 B2 · Dcl Dcl (6. The equalizing property of § 6. may be prevented by choosing the denominator polynomial E of V equal to the plant denominator polynomial D.6. The mixed sensitivity problem 6. hence. possibly after having been mirrored into the left-half plane. and V in rational form as P= N A1 A2 M . Substituting S and U as given by (6.2 (p.19) The denominator Dcl = DX + NY (6. Further inspection of (6.17) with all numerators and denominators polynomials. so that V = M / D. all polynomial factors in the numerator of the rational function on the left cancel against corresponding factors in the denominator. If we take V proper (which ensures V ( jω) to be finite at high frequencies) then the polynomial M has the same degree as D.21) shows that if there are no cancellations between M ∼ M and ∼ ∼ ∼ B1 B2 B2 .4. If there are no cancellations between D∼ D and E ∼ E B1 B1 B2 B2 then the closed-loop characteristic polynomial Dcl (which by stability has left-half plane roots only) necessarily has among its roots those roots of D that lie in the left-half plane. We write the transfer function P and the weighting functions W1 . and the mirror images with respect to the imaginary axis of those roots of D that lie in the right-half plane. DX + NY U= DY .21).2. W2 . W1 = .18) then it easily follows that S= DX . This phenomenon. This involves a pole cancellation phenomenon that is sometimes misunderstood. Partial pole placement There is a further important property of the solution of the mixed sensitivity problem that needs to be discussed before considering an example.16) we easily obtain ∼ ∼ ∼ ∼ ∼ D∼ D · M ∼ M · ( A∼ 1 A1 B2 B2 X X + A2 A2 B1 B1 Y Y ) = γ2.19) into (6. ∼ ∼ the factor D∼ D cancels. and.21) is a constant. 253 . so that the open-loop poles do not reappear as closed-loop poles. If also the compensator transfer function is represented in rational form as C= Y X (6. which is not propitious for good feedback system design. E E B1 then the polynomial M cancels against a corresponding factor in Dcl . With this special choice of the denominator of V . 251) implies that W1 (s ) W1 (−s ) S (s ) S (−s ) V (s ) V (−s ) + W2 (s ) W2 (−s )U (s )U (−s ) V (s ) V (−s ) = γ 2 (6. V= . has the same number of roots as D.16) for all s in the complex plane. W2 = . and we assume without loss of generality that M has left-half plane roots only.20) is the closed-loop characteristic polynomial of the feedback system.

Hence. Tsai. This technique.24) where g is nominally 1 and the nonnegative parasitic time constant θ is nominally 0. Example: Double integrator We apply the mixed sensitivity problem to the same example as in § 3. Because the plant is minimum-phase trouble-free crossover may be achieved (that is. The variations in the parasitic time constant θ mainly cause high-frequency perturbations. (6. without undue peaking of the sensitivity and complementary sensitivity functions). Consider a SISO plant with nominal transfer function P0 (s ) = 1 .26) The relative perturbation of the denominator is constant over all frequencies. the relative perturbations of the denominator and the numerator are 1 D ( s ) − D0 ( s ) = − 1. 2 Much of the text of this subsection has been taken from Kwakernaak (1993).25) Correspondingly. It is very useful in designing for a specified bandwidth and good time response properties. Perturbation analysis. the open-loop poles (the roots of D) are reassigned to the locations of the roots of M . D (6.22) where the polynomial M has the same degree as the denominator polynomial D of the plant. we model the effect of the parasitic time constant as a numerator perturbation. N0 (s ) 1 + sθ (6. Accordingly. s2 (6. and write P (s ) = N (s ) = D (s ) 1 1+s θ s2 g . and the gain variations as denominator perturbations. while the low-frequency perturbations are primarily the effect of the variations in the gain g. also in the crossover region. and Gu 1990) allows further control over the design. By suitably choosing the remaining weighting functions W1 and W2 these roots may often be arranged to be the dominant poles. 254 .23) The actual. perturbed plant has the transfer function P (s ) = g .5. we expect that — in the absence of other perturbations — values of |1/ g − 1| up to almost 1 may be tolerated.2. H∞ -Optimization and µ-Synthesis All this means that by letting V= M . D0 ( s ) g N (s ) − N0 (s ) −s θ = . 133). 6.6.6. s2 (1 + sθ) (6. known as partial pole placement (Kwakernaak 1986. In the design examples in this chapter it is illustrated how the ideas of partial pole placement and frequency shaping are combined.2 (p. where H2 optimization is illustrated2 . A discussion of the root loci properties of the mixed sensitivity problem may be found in Choi and Johnson (1996). Postlethwaite. We start with a preliminary robustness analysis.

both for robustness and for performance. of the polynomial M ). the largest possible value of θ dictates the bandwidth. The mixed sensitivity problem The size of the relative perturbation of the numerator is less than 1 for frequencies below 1/θ .: Bode magnitude plot of 1/ V frequency behavior of the sensitivity function.2. Then if the first of the two terms of the mixed sensitivity criterion dominates at low frequencies from we have from (6. Owing to the presence of the double open-loop 255 . Choice of the weighting functions. To obtain a good time response corresponding to the bandwidth 1. To prevent destabilization it is advisable to make the complementary sensitivity small for frequencies greater than 1/θ . which does not suffer from √ sluggishness or excessive overshoot. we assign two dominant poles to the locations 1 2 2 (−1 ± j ). W1 and W2 .1 .01 .29) Figure 6.6.14) that | S ( jω)| ≈ ( jω)2 γ =γ √ | V ( jω)| ( jω)2 + jω 2 + 1 at low frequencies.1 1 ω 10 100 Figure 6.28) We tentatively choose the weighting function W1 equal to 1.27) √ s2 + s 2 + 1 .001 . The plot implies a very good low1/|V | 10 1 . we expect that — in the absence of other perturbations — values of the parasitic time constant θ up to 1 do not destabilize the system.01 . To accomplish this with a mixed sensitivity design. 2 2 (6. and equal to 1 for high frequencies. Thus. Assuming that performance requirements specify the system to have a closedloop bandwidth of 1.0001 . This is achieved by choosing the polynomial M as M ( s ) = [s − so that V (s ) = √ 1√ 1√ 2(−1 + j )][s − 2(−1 − j )] = s2 + s 2 + 1. s2 (6. we aim at a closed-loop bandwidth of 1 with small sensitivity at low frequencies and a sufficiently fast decrease of the complementary sensitivity at high frequencies with a smooth transition in the crossover region. (6. As the complementary sensitivity starts to decrease at the closed-loop bandwidth.2 shows the magnitude plot of the factor 1 / V .2. we successively consider the choice of the functions V = M / D (that is.

pole at the origin the feedback system is of type 2. This makes the sensitivity small at low frequencies, with little peaking at the cut-off frequency. There is no need to correct this low frequency behavior by choosing W1 different from 1.

Next contemplate the high-frequency behavior. For high frequencies V is constant and equal to 1. Consider choosing W2 as

   W2(s) = c(1 + rs),      (6.30)

with c and r nonnegative constants such that c ≠ 0. Then for high frequencies the magnitude of W2(jω) asymptotically behaves as c if r = 0, and as crω if r ≠ 0. Hence, if r = 0 then the high-frequency roll-off of the input sensitivity function U and the compensator transfer function C is 0 and that of the complementary sensitivity T is 2 decades/decade (40 dB/decade). If r ≠ 0 then U and C roll off at 1 decade/decade (20 dB/decade) and T rolls off at 3 decades/decade (60 dB/decade).

Solution of the mixed sensitivity problem. We first study the case r = 0, which results in a proper but not strictly proper compensator transfer function C. Figure 6.3 shows³ the optimal sensitivity function S and the corresponding complementary sensitivity function T for c = 1/100, c = 1/10, c = 1, and c = 10. Inspection shows that as c increases, |T| decreases and |S| increases. The smaller c is, the closer the shape of |S| is to that of the plot of Fig. 6.2, which conforms to expectation. We choose c = 1/10. This gives a sensitivity function close to the target shape and a high-frequency roll-off of T of 2 decades/decade. The corresponding optimal compensator has the transfer function

   C(s) = 1.2586 (s + 0.61967) / (1 + 0.15563 s),      (6.31)

which results in the closed-loop poles ½√2(−1 ± j) and −5.0114. The two former poles dominate the latter pole, as planned. The minimal ∞-norm is ‖H‖∞ = 1.2861.

³ The actual computation of the compensator is discussed in Example 6.6.1 (p. 276).

   Figure 6.3.: Bode magnitude plots of S and T for r = 0
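The equalizing property mentioned in § 6.2.1 may be verified numerically for this design. The MATLAB sketch below evaluates the mixed sensitivity criterion for the compensator (6.31); it should be (nearly) constant over frequency and equal to γ², with γ = 1.2861 the minimal ∞-norm.

   % Numerical check of the equalizing property for the r = 0 design.
   omega = logspace(-3, 3, 800);
   s = 1i*omega;

   P  = 1./s.^2;                                     % nominal plant
   C  = 1.2586*(s + 0.61967)./(1 + 0.15563*s);       % optimal compensator (6.31)
   V  = (s.^2 + sqrt(2)*s + 1)./s.^2;
   W1 = 1;   W2 = 1/10;

   S = 1./(1 + P.*C);
   U = C./(1 + P.*C);

   crit = abs(W1*S.*V).^2 + abs(W2*U.*V).^2;         % |W1 S V|^2 + |W2 U V|^2
   fprintf('criterion ranges over [%.4f, %.4f];  gamma^2 = %.4f\n', ...
           min(crit), max(crit), 1.2861^2);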

Robustness against high-frequency perturbations may be improved by making the complementary sensitivity function T decrease faster at high frequencies. This is accomplished by taking the constant r nonzero. Inspection of W2 as given by (6.30) shows that by choosing r = 1 the resulting extra roll-off of U, C, and T sets in at the frequency 1. For r = 1/10 the break point is shifted to the frequency 10. For r = 1/10 the corresponding optimal compensator transfer function is

   C(s) = 1.2107 (s + 0.5987) / (1 + 0.20355 s + 0.01267 s²),      (6.32)

which results in the closed-loop poles ½√2(−1 ± j) and −7.3281 ± j1.8765. Again the former two poles dominate the latter. The minimal ∞-norm is ‖H‖∞ = 1.3833. Figure 6.4 shows the resulting magnitude plots. The sensitivity function has little extra peaking, while starting at the frequency 10 the complementary sensitivity function rolls off at a rate of 3 decades/decade.

Inspection of the two compensators (6.31) and (6.32) shows that both basically are lead compensators with high-frequency roll-off. The compensator (6.31) is similar to the modified PD compensator that is obtained in Example 5.2.2 (p. 199) by root locus design.

Robustness analysis. We conclude this example with a brief analysis to check whether our expectations about robustness have come true. Given the compensator C = Y/X the closed-loop characteristic polynomial of the perturbed plant is

   Dcl(s) = D(s)X(s) + N(s)Y(s) = (1 + sθ)s²X(s) + gY(s).

By straightforward computation, which involves fixing one of the two parameters g and θ and varying the other, the stability region of Fig. 6.5 may be established for the compensator (6.31). That for the other compensator is similar. The diagram shows that for θ = 0 the closed-loop system is stable for all g > 0, that is, for all −1 < 1/g − 1 < ∞. This stability interval is larger than predicted. For g = 1 the system is stable for 0 ≤ θ < 1.179, which also is a somewhat larger interval than expected.

   Figure 6.4.: Bode magnitude plots of S and T for c = 1/10
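The stability region itself is readily mapped out numerically by checking the roots of the perturbed closed-loop characteristic polynomial on a grid of parameter values, as in the MATLAB sketch below for the compensator (6.31); the grid limits are arbitrary.

   % Stability region of the perturbed system (cf. Fig. 6.5) for compensator (6.31):
   % Dcl(s) = (1 + s*theta) s^2 X(s) + g Y(s),  with C = Y/X.
   X = [0.15563 1];                  % X(s) = 1 + 0.15563 s   (descending powers)
   Y = 1.2586*[1 0.61967];           % Y(s) = 1.2586 (s + 0.61967)

   gv  = linspace(0.1, 6, 120);
   thv = linspace(0, 1.5, 120);
   stable = false(length(thv), length(gv));
   for i = 1:length(thv)
       DX = conv(conv([thv(i) 1], [1 0 0]), X);     % (1 + s*theta) s^2 X(s)
       for j = 1:length(gv)
           Dcl = DX + [zeros(1, length(DX)-2), gv(j)*Y];
           stable(i, j) = all(real(roots(Dcl)) < 0);
       end
   end
   fprintf('theta = 0: %d of %d grid values of g give a stable loop\n', ...
           sum(stable(1,:)), length(gv));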

The output z has the meaning of control error.: The standard H∞ optimal regulation problem. and ideally should be zero. so that y = ( I − G22 K )−1 G21 w. From z = G11 w + G12 u = G11 x + G12 Ky we then have z = [ G11 + G12 K ( I − G22 K )−1 G21 ] w. 6.1.34) we obtain from y = G21 w + G22 u the closed-loop signal balance equation y = G21 w + G22 Ky. The standard H∞ optimal regulation problem 6.5 θ Figure 6. measurement noise. The output y. H (6.6.3.: Stability region 6. u (6. This standard problem is defined by the configuration of Fig. It is often referred to as the generalized plant.6.3. and represents driving signals that generate disturbances. Introduction The mixed sensitivity problem is a special case of the so-called standard H∞ optimal regulation problem (Doyle 1984). finally. is the observed output.33) By connecting the feedback compensator u = Ky (6.35) 258 .6. The plant w u G z y K compensator Figure 6. The plant has an open-loop transfer matrix G such that z w G11 =G = G21 y u G12 G22 w . H∞ -Optimization and µ-Synthesis 6 4 g 2 0 0 .5 1 1. “plant” is a given system with two sets of inputs and two sets of outputs. The signal u is the control input. and is available for feedback.5. and reference inputs. The signal w is an external input.

where the compensator transfer function now is denoted K rather than C . the external signal w generates the disturbance v after passing through a shaping filter with transfer matrix V . (6.7 we denote the input to the compensator as y.36) The standard H∞ -optimal regulation problem is the problem of determining a compensator with transfer matrix K that 1. − W2 UV z2 H so that minimization of the ∞-norm of the closed-loop transfer matrix H amounts to minimization of W1 SV W2 UV . We read off from the block diagram that z1 z2 y = = = −V w − W1 V w + W1 Pu . minimizes the ∞-norm H w to the control error z.7.39) 259 . and 2.6. W2 u . The “control error” z has two components. 6.38) (6.37) ∞ In the block diagram of Fig. The second component z2 is the plant input u after passing through a weighting filter with transfer matrix W2 .6. (6. In this diagram. Pu . It is easy to verify that for the closed-loop system z= z1 W1 SV = w. 6. To explain why the mixed sensitivity problem is a special case of the standard problem we consider the block diagram of Fig.3. z1 and z2 . the closed-loop transfer matrix H is H = G11 + G12 K ( I − G22 K )−1 G21 . 6. stabilizes the closed-loop system. The first component z1 is the control system output after passing through a weighting filter with transfer matrix W1 . ∞ of the closed-loop transfer matrix H from the external input z2 W2 y − w V + v K u P + W1 z1 Figure 6.: The mixed sensitivity problem. (6. The standard H∞ optimal regulation problem Hence. as in the standard problem of Fig.7.
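The correspondence between the standard problem and the mixed sensitivity problem may be checked numerically. The sketch below evaluates, at a single test frequency, the closed-loop formula (6.36) for the generalized plant (6.40) and compares the result with [W1 S V; −W2 U V]; the plant, weights and compensator are those of the double integrator example of § 6.2, and the test frequency is arbitrary.

   % Check that H = G11 + G12 K (I - G22 K)^{-1} G21 applied to the mixed
   % sensitivity generalized plant reproduces [W1 S V; -W2 U V].
   s  = 1i*0.7;                                     % arbitrary test frequency
   P  = 1/s^2;
   V  = (s^2 + sqrt(2)*s + 1)/s^2;
   W1 = 1;   W2 = 1/10;
   K  = 1.2586*(s + 0.61967)/(1 + 0.15563*s);       % compensator (6.31)

   G11 = [W1*V; 0];    G12 = [W1*P; W2];
   G21 = -V;           G22 = -P;
   H   = G11 + G12*K/(1 - G22*K)*G21;               % scalar (1 - G22*K)^{-1}

   S = 1/(1 + P*K);    U = K/(1 + P*K);
   disp(max(abs(H - [W1*S*V; -W2*U*V])))            % should be numerically zero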

It was solved in a famous paper by Zames (1981). respectively. −P (6.42) The problem is also known as the (generalized) Nehari problem.41) with P a given (unstable) transfer matrix. Exercise 6.1 (Closed-loop transfer matrix).3.3 (Minimum sensitivity problem).: Two-degree. As a further application consider the two-degree-of-freedom configuration of Fig.9 the diagram is expanded with v + zo + + m r F + − C u P + Figure 6. a shaping filter 260 .8.40) Other well-known special cases of the standard problem H∞ problem are the model matching problem and the minimum sensitivity problem (see Exercises 6. Exercise 6.8.4 (Two-degree-of-freedom feedback system). 6. A special case of the mixed sensitivity problem is the minimization of the infinity norm W SV ∞ of the weighted sensitivity matrix S for the closed-loop system of Fig.2 (The model matching problem). Show that this minimum sensitivity problem is a standard problem with open-loop plant G= WV −V WP .3.1.2 and 6. 6. In Fig.of-freedom feedback configuration a shaping filter V1 that generates the disturbance v from the driving signal w1 . Exercise 6.3.43) The SISO version of this problem historically is the first H∞ optimization problem that was studied.3. the open-loop transfer matrix G for the standard problem is   W1 V W1 P W2  G= 0 −V −P (6. Verify (6.6.) The H∞ -optimal regulation problem is treated in detail by Green and Limebeer (1995) and Zhou. Find a block diagram that represents this problem and show that it is a standard problem with generalized plant G= P I −I .3.3. 6. Exercise 6.37).3. The model matching problem consists of find- ing a stable transfer matrix K that minimizes P−K L∞ := sup σ( P ( jω) − K ( jω)) ω (6. 0 (6. Doyle. H∞ -Optimization and µ-Synthesis Hence. and Glover (1996).

6. ( I − G22 K )−1 . a weighting filter W1 that produces the weighted tracking error z1 = W1 ( z0 − r ). First.5 (Closed-loop stability).44) Prove furthermore that if the loop is SISO with scalar rational transfer functions K = Y / X .3. 6. as defined in § 1. (6.9.11 is internally stable.3.11 is internally stable if and only if the four following transfer matrices have all their poles in the open left-half complex plane: ( I − K G22 )−1 . The standard H∞ optimal regulation problem V2 that generates the measurement noise m = V2 w2 from the driving signal w2 . Closed-loop stability We need to define what we mean by the requirement that the compensator K “stabilizes the closed-loop system” of Fig. Second. ( I − K G22 )−1 K . 261 . the compensator C and the prefilter F are combined into a single block K . and w3 . X and Y have no common polynomial factors. Determine the open-loop generalized plant transfer matrix G and the closed-loop transfer matrix H . ( I − G22 K )−1 G22 . with X and Y coprime polynomials4 and G22 = N0 / D0 . 6. with D0 and N0 coprime. Prove that the configuration of Fig. the external input w with components w1 . a shaping filter V0 that generates the reference signal r from the driving signal w3 .10 shows that we certainly may require the compensator to stabilize the loop around the block G22 . 4 That is.2 (p.2.3.6. 6. and the control input u. z2 w1 W2 V1 − w3 y2 V0 r y1 K u P + + v z0 + + V2 w2 + W1 z1 Figure 6. we take stability in the sense of internal stability. the observed output y with components y1 and y2 as indicated in the block diagram. then the loop is internally stable if and only if the closed-loop characteristic polynomial Dcl = D0 X − N0 Y has all its roots in the open left-half complex plane. w2 .3. 12). but that we cannot expect that it is always possible to stabilize the entire closed-loop system.: Block diagram for a standard problem Define the “control error” z with components z1 and z2 . Exercise 6. inspection of the block diagram of Fig. Moreover. “The loop around G22 is stable” means that the configuration of Fig.6. 6. and a weighting filter W2 that produces the weighted plant input z2 = W2 u.
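The SISO internal stability criterion of Exercise 6.3.5 is straightforward to apply with polynomial arithmetic, as the MATLAB sketch below illustrates. The polynomial coefficients used are illustrative values only, not data from the text.

   % Internal stability test: with K = Y/X and G22 = N0/D0 in coprime
   % polynomial fractions, the loop around G22 is internally stable iff
   % Dcl = D0*X - N0*Y has all its roots in the open left-half plane.
   D0 = [1 0 0];    N0 = 1;                 % hypothetical G22 = 1/s^2
   X  = [0.2 1];    Y  = -[1.5 1];          % hypothetical K = -(1.5 s + 1)/(0.2 s + 1)

   p1 = conv(D0, X);   p2 = conv(N0, Y);
   n  = max(length(p1), length(p2));
   Dcl = [zeros(1, n-length(p1)) p1] - [zeros(1, n-length(p2)) p2];

   if all(real(roots(Dcl)) < 0)
       disp('Internally stable: Dcl is strictly Hurwitz.');
   else
       disp('Not internally stable.');
   end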

Figure 6.10.: Feedback arrangement for the standard problem (generalized plant G with blocks G11, G12, G21, G22; external input w, control error z, and compensator K between the observed output y and the control input u)

Figure 6.11.: The loop around G22

Hence. 272).5 (p. (6. 260). Example 6. that is.3. 5A rational matrix function has full normal column rank if it has full column rank everywhere in the complex plane except at a finite number of points. These assumptions are less restrictive than those usually found in the H∞ literature (Doyle.3. Then K ( I − G22 K )−1 = Y ( X − G22 Y )−1 . where G11 has dimensions m1 × k1 . This is what we assume at this point: (a) G is rational but not necessarily proper. and Francis 1989. Show that for the mixed sensitivity problem “stability of the loop around G22 ” is equivalent to stability of the feedback loop. Let K = Y X −1 . G12 has dimensions m1 × k2 . Khargonekar. This formalism allows considering infinite-gain feedback systems. If the rank conditions are not satisfied then often the plant input u or the observed output y or both may be transformed so that the rank conditions hold. Glover. Indeed the assumptions need to be tightened when we present the state space solution of the H∞ problem in § 6. has all its zeros6 in the open left-half plane. Discuss what additional requirements are needed to ensure that the entire closed-loop system be stable.45) The feedback system is said to be well-posed as long as X − G22 Y is nonsingular (except at a finite number of points in the complex plane). G21 has dimensions m2 × k1 . The present assumptions allow nonproper weighting matrices in the mixed sensitivity problem. iff K is stable. and G22 has dimensions m2 × k2 . 263 .6 (Stability for the model matching problem). Trentelman and Stoorvogel 1993). Assumptions on the generalized plant and well-posedness Normally. Note that if P is unstable then the closed-loop transfer function H = P − K can never be stable if K is stable. necessary to stabilize the entire closed-loop system we interpret the requirement that the compensator stabilizes the closed-loop system as the more limited requirement that it stabilizes the loop around G22 .3. that is. This implies that k1 ≥ m2 and m1 ≥ k2 . We accordingly reformulate the H∞ problem as the problem of minimizing the ∞-norm of H = G11 + G12 X ( X − G22 Y )−1 G21 with respect to all stable rational X and Y such that X − G22 Y is strictly Hurwitz. so that the closed-loop transfer matrix is H = G11 + G12 Y ( X − G22 Y )−1 G21 . for the model matching problem stability of the entire closed-loop system can never be achieved (except in the trivial case that P by itself is stable). (b) G12 has full normal column rank5 and G21 has full normal row rank. with X and Y stable rational matrices (possibly polynomial matrices). A feedback system may be well-posed even if D0 or X is singular. even if such systems cannot be realized.3. 6. Assumption (b) causes little loss of generality. various conditions are imposed on the generalized plant G to ensure its solvability. This is consistent with the problem formulation.2 (p. indeed. Stoorvogel 1992.7 (Mixed sensitivity problem). We conclude with discussing well-posedness of feedback systems.6. Since for this problem G22 = 0. 6 The zeros of a polynomial matrix or a stable rational matrix are those points in the complex plane where it loses rank.3. internal stability of the loop around G22 is achieved iff K has all its poles in the open left-half plane. so that G and K are not well-defined. which is important for obtaining high frequency roll-off. The standard H∞ optimal regulation problem Since we cannot expect that it is always possible or. 
To illustrate the point that it is not always possible or necessary to stabilize the entire system consider the model matching problem of Exercise 6.3. The compensator K is stabilizing iff X − G22 Y is strictly Hurwitz. Exercise 6.

4. H ∼ is called the adjoint of H . 304). If A and B are two complex-valued matrices then A ≤ B means that B − A is a nonnegative-definite matrix. 272) we specialize the results to the well-known state space solution. As long as H has no poles on the imaginary axis the right-hand side of (6. ω ∈R (6. This solution method applies to the wide class of problems defined in § 6. ω ∈ R. In Subsection 6. Substituting H = P − K into H ∼ H ≤ γ 2 I we obtain from Summary 6.8 (An infinite-gain feedback system may be well-posed).10. In what follows the ∞-norm notation is to be read as H ∞ = sup σ( ¯ H ( jω)). so that the corresponding system is not stable and. H∞ -Optimization and µ-Synthesis Example 6.2 (p. Frequency domain solution of the standard problem 6. Summary 6. Once suboptimal solutions may be obtained optimal solutions may be approximated by searching on the number γ . We first need the following result. Suboptimal solutions may be determined by spectral factorization in the frequency domain. This notation is also used if H has right-half plane poles. Some of the proofs of the results of this section may be found in the appendix to this chapter (§ 6.48) 264 . Let γ be a nonnegative number and H a rational transfer matrix. From (6.10 (p.5 (p. The proof of Summary 6. Consider the SISO mixed sensitivity problem defined by Fig. The details are described in Kwakernaak (1996). with P a given transfer matrix.1 may be found in § 6. To motivate the next result we consider the model matching problem of Exercise 6. Hence.3. 258).1(a) that H ∞ ≤ γ is equivalent to P∼ P − P∼ K − K ∼ P + K ∼ K ≤ γ 2 I on the imaginary axis. Introduction In this section we outline the main steps that lead to an explicit formula for so-called suboptimal solutions to the standard H∞ optimal regulation problem.4. For X = 0 the system has infinite gain.4. 270) an example is presented. the system is well-posed even if X = 0 (as long as Y = 0). “On the imaginary axis” means for all s = jω. has no ∞-norm.47) is well-defined. For this problem we have H = P − K . 304). In § 6. These are solutions such that H ∞ < γ.4.7 (p. (6.3 (p. (6.47) with σ ¯ the largest singular value.7.3.6. Then H ∞ ≤ γ is equivalent to either of the following statements: (a) H ∼ H ≤ γ 2 I on the imaginary axis. The system is well-posed if X + PY is not identical to zero.4.1 (Inequality for ∞-norm).46) with γ a given nonnegative number.4. and K a stable transfer matrix to be determined. 6. H ∼ is defined by H ∼ (s ) = H T (−s ). 260). (b) H H ∼ ≤ γ 2 I on the imaginary axis.40) we have G22 = − P. p. 6. hence.1. The reason for determining suboptimal solutions is that strictly optimal solutions cannot be found directly.

2 (Basic inequality). (6.49) This expression defines the rational matrix γ .52) An outline of the proof is found in § 6. 304).53) with J a signature matrix. if and only if [ X ∼ Y ∼] where γ γ Summary 6. Write K = Y X −1 . if one exists.4. 6. ∼ has a γ .50) The inequality (6.6. A para-Hermitian rational matrix γ = spectral factorization if there exists a nonsingular square rational matrix Z such that both Z and its inverse Z −1 have all their poles in the closed left-half complex plane.2. and = Z∼ J Z. with γ a given nonnegative number.4.49) on the right by X and on the left by X ∼ we obtain [ X ∼ Y ∼] γ 2 I − P∼ P P γ P∼ −I X ≥0 Y on the imaginary axis. (6.4. The compensator K = Y X −1 achieves H ∞ ≤ γ for the X ≥0 Y on the imaginary axis. − Im (6. with X and Y rational matrices with all their poles in the left-half complex plane.10 (p.2 follows by finding a spectral factorization of the rational matrix γ .4. Frequency domain solution of the standard problem This in turn we may rewrite as I K∼ γ 2 I − P∼ P P γ P∼ −I I ≥0 K on the imaginary axis. Spectral factorization A much more explicit characterization of all suboptimal compensators (stabilizing or not) than that of Summary 6. Z is called a spectral factor.54) 265 . that is.50) characterizes all suboptimal compensators K . J is a signature matrix if it is a constant matrix of the form J= In 0 0 . holds for the general standard problem: standard problem. (6. (6. Note that γ is para-Hermitian. (6. A similar inequality.51) is the rational matrix = 0 ∼ − G12 I ∼ − G22 γ ∼ G11 G11 − γ2 I ∼ G21 G11 ∼ G11 G21 ∼ G21 G21 −1 0 − G12 I − G22 . Then by multiplying (6. less easily obtained than for the model matching problem.

There are many such matrices. − Ik 2 (6.58) It may be proved that such a spectral factorization generally exists provided γ is sufficiently large.4. (6. then so is U Z . This characterization of all suboptimal solutions has been obtained under the assumption that γ has a spectral factorization. If J= 1 0 0 . where U is any rational or constant matrix such that U ∼ JU = J . All suboptimal solutions Suppose that γ γ has a spectral factorization (6. Summary 6. Given γ . γ ∼ = Zγ J Zγ . if it exists. Y B (6. − Ik 2 −1 . with Im 2 0 (6. If has no poles or zeros7 on the imaginary axis then n is the number of positive eigenvalues of on the imaginary axis and m the number of negative eigenvalues.60).56) α ∈ R.3 (Suboptimal solutions). Then (6. Write X −1 A = Zγ . −1 (6. 6.55) for instance. it may be shown that if suboptimal solutions exist then γ has a spectral factorization (Kwakernaak 1996). with U= c1 cosh α c1 sinh α c2 sinh α . Conversely. On the imaginary axis is Hermitian. c2 cosh α (6. characterizes all suboptimal compensators. c1 = ±1. then all constant matrices U . satisfy U T JU = J . c2 = ±1.59). U is said to be J-unitary.57) ∼ = Zγ J Zγ .61) zeros of are the poles of 266 .4. (6. is not at all unique.59) with A an m2 × m2 rational matrix and B a k2 × m2 rational matrix. It may be proved that the feedback system defined this way is well-posed iff A is nonsingular.51) reduces to A∼ A ≥ B∼ B on the imaginary axis.6. both with all their poles in the closed left-half plane. with J= Im 2 0 0 . The factorization = Z ∼ J Z . a suboptimal solution of the standard prob- lem exists iff J= 7 The γ has a spectral factorization 0 . with A and B chosen so that they satisfy (6. H∞ -Optimization and µ-Synthesis with In and Im unit matrices of dimensions n × n and m × m. If Z is a spectral factor. respectively.3.60) Hence.

6. There are many matrices of stable rational functions A and B that satisfy A∼ A ≥ B∼ B on the imaginary axis. We call this the central solution The central solution as defined here is not at all unique.4. 6. while the zeros of the determinant of (6.64) has a γ -dependent8 zero on the imaginary axis.2 (p. It follows that γ0 is also a lower bound for the ∞-norm. H ∞ ≥ γ0 (6. γ2 ). Similarly. = Zγ B Y (6.4. There exists a lower bound γ0 so that γ is factorizable for all γ > γ0 . with X −1 A . Frequency domain solution of the standard problem If this factorization exists then all suboptimal compensators are given by K = Y X −1 .65) is spectrally factorizable for γ > γ0 . Define γ1 as the first value of γ as γ decreases from ∞ for which the determinant of ∼ G11 G11 − γ2 I ∼ G21 G11 ∼ G11 G21 ∼ G21 G21 (6. Also.4. (6.62) with A nonsingular square rational and B rational.64) 1 are the poles of γ .4. B = 0. 267 . Lower bound Suboptimal solutions exist for those values of γ for which γ is spectrally factorizable. both with all their poles in the closed left-half plane.57) is not unique. An obvious choice is A = I . Note that the zeros of the determinant (6. 265) the spectral factorization (6.4. (6. 8 That is. The proof may be found in Kwakernaak (1996).67) The lower bound γ0 may be achieved. H ∞ ≥ γ0 . and such that A∼ A ≥ B∼ B on the imaginary axis. let γ2 be the first value of γ as γ decreases from ∞ for which the determinant of ∼ G11 − γ 2 I G11 ∼ G12 G11 ∼ G11 G12 ∼ G12 G12 γ (6. that is. Summary 6. because as seen in § 6. Then with γ0 = max(γ1 .4 (Lower bound).65) are the poles of − γ . a zero that varies with γ .63) for every compensator.66) has a γ -dependent zero on the imaginary axis.

4. all suboptimal compensators that make the closed-loop system stable (as defined in § 6. 261).11 − G22 Mγ. that is.21 ) A are in the open left-half complex plane.72) 268 .62) all suboptimal compensators are characterized by X = Mγ. The spectral factorization = Z ∼ J Z of a biproper rational matrix is said to be canonical if the spectral factor Z is biproper.4. (6.11 A + Mγ. The assumptions on the state space version of standard problem that are introduced in § 6.21 Mγ. The latter condition corresponds to the condition that the central compensator.69) Here T = ( Mγ. we are led to the following conclusion: Summary 6.5 (p. 272) ensure that γ is biproper.21 )( I + TU ) A .22 (6.3.21 A + Mγ.5. Correspondingly.3 (p.71) crosses the imaginary axis. 2.12 Mγ.11 − G22 Mγ. 264) the matrix γ is not necessarily biproper.5 (All stabilizing suboptimal compensators). Suppose that the factorization of γ is canonical. Suppose that the factorization γ ∼ = Zγ J Zγ is canonical. 263) closed-loop stability depends on whether the zeros of X − G22 Y all lie in the open left-half plane.4. Thus.11 Mγ. but in all stabilizing suboptimal solutions.21 )( I + ε TU ) A (6. Actually.21 have all their roots in the open left-half plane.12 A − G22 Mγ.22 ) and U = BA−1 .4 a representation is established of all suboptimal solutions of the standard problem for a given performance level γ .22 B. According to § 6. (6. A stabilizing suboptimal compensator exists if and only if the central compensator is stabilizing. Under the general assumptions of § 6.11 − G22 Mγ. M is biproper iff lim|s|→∞ M (s ) is finite and nonsingular. H∞ -Optimization and µ-Synthesis 6.21 ) A + ( Mγ. = Zγ B Y (6. Stabilizing suboptimal compensators In § 6. It follows that a suboptimal compensator is stabilizing iff both A and Mγ. none of the zeros of ( Mγ. that is.68) Then by (6. 1. p. X − G22 Y = = ( Mγ.11 − G22 Mγ. be stabilizing. This means that all the zeros of X − G22 Y are in the open left-half complex plane iff all the zeros of ( Mγ.70) Y = Mγ. If it is not biproper then the definition of what a canonical factorization is needs to be adjusted (Kwakernaak 1996).11 − G22 Mγ. Then it may be proved (Kwakernaak 1996) that as the real number ε varies from 0 to 1. the compensator obtained for A = I and B = 0.12 − G22 Mγ. We consider the stability of suboptimal solutions. First.6. If a stabilizing suboptimal compensator exists then all stabilizing suboptimal compensators K = Y X −1 are given by X −1 A . we are not really interested in all solutions.3.12 B.22 ) B ( Mγ. a square rational matrix M is said to be biproper if both M and its inverse M −1 are proper.4 (p.11 − G22 Mγ. Before discussing how to find such compensators we introduce the notion of a canonical spectral factorization.2.21 )−1 ( Mγ. Denote −1 = Mγ = Zγ Mγ.

6. both with all their poles in the open left-half complex plane.4. 275).6.4. Prove that if the factorization is canonical then all stabilizing suboptimal compensators K = Y X −1 . or guess a lower bound Choose a value of γ > γ o Decrease γ Does a canonical spectral factorization of Π γ exist? Yes No Increase γ Compute the central compensator Is the compensator stabilizing? Yes No Is the compensator close enough to optimal? No Yes Exit Figure 6. such that A has all its zeros in the open left-half complex plane and A∼ A ≥ B∼ B on the imaginary axis. Compute the lower bound γ o .12.12 serves to obtain near-optimal solutions. 6. Optimal solutions are discussed in § 6.4. Y U with U rational stable such that U ∞ (6.: Search procedure 269 . if any exist. Assuming that we know how to do this numerically — analytically it can be done in rare cases only — the search procedure of Fig. Exercise 6.73) ≤ 1. Frequency domain solution of the standard problem with A nonsingular square rational and B rational.6 (All stabilizing suboptimal solutions). 6.6 (p. may be characterized by X −1 I = Zγ . Search procedure for near-optimal solutions In the next section we discuss how to perform the spectral factorization of γ .

It may be verified that for γ = signature matrix  2 3    Zγ (s ) =   γ −4 γ2 − 1 4 the matrix γ has the spectral factor and +s 1+s 1 γ 1 γ2 − 1 4 1+s − The inverse of Zγ is    −1 Zγ (s ) =   1 4 γ(γ 2 − 1 ) 4 1+s γ2 + 1 4 γ2 − 1 4 +s   .78) 1+s 1 4 γ2 − 1 4 1+s γ2 − 3 4 γ2 − 1 4 +s ) 1+s −1 is biproper. the “compensator” K (s ) = P (s ) = 1/(1 − s ) achieves H ∞ = P − K ∞ = 0. so that the factorization is canonical.79) with U = B/ A such that A∼ A ≥ B∼ B on the imaginary axis. Indeed.77) 1+s γ2 + 1 4 γ2 − 1 4 +s 1+s − γ( γ γ2 − 1 4    . −1 (6.7. G (s ) = −1 0 1 .  J= 1 0 0 . (6.65) are both equal to −γ 2 . γ It is straightforward to find from (6. we consider the solution of the model matching problem of Exercise 6.3 (p. Example: Model matching problem By way of example.75) It may easily be found that the determinants of the matrices (6. Noncanonical factorization do exist for this value of γ . Spectral factorization. 270 .52) and (6. γ 1 2 Suboptimal compensators. For γ = both Zγ and Zγ are proper.76) − γ12 We need γ > 0. 266) after some all suboptimal “compensators” have the transfer function γ2− 3 4 γ2− 1 4 +γ + s U (s ) γ2− 1 4 γ γ2+ 1 4 γ2− 1 4 +s − U (s ) . We consider two special cases. It may be proved that for γ = 1 2 actually no canonical factorization exists. H∞ -Optimization and µ-Synthesis 6.6. For 1 γ = 2 the spectral factor Zγ is not defined.  (6.4. algebra that for γ = K (s ) = γ2− 1 4 1 4 1 2 It follows by application of Summary 6.64) and (6. It follows that γ1 = γ2 = 0 so that also γ0 = 0.75) that for this problem 1 γ2 is given by  γ (s ) γ2 1 − 1−s2 = 1 1−s 1 γ2   .4. 260) with P (s ) = 1 .74) Lower bound.2 (p. though. 1−s By (6. (6. Note that this compensator is not stable.42) we have 1 s−1 (6. 1 2 1+s (6.3.

4. according to Summary 6. we conclude that no stabilizing . so that U = 1.6. is stabilizing.79) the compensator γ s+ K (s ) = s+ 1 γ+ 1 2 − 2γ γ+ 1 2 γ− 1 2 γ+ 1 2 . the compensator (6. Frequency domain solution of the standard problem “Central” compensator. We then have the “central” com- pensator K (s ) = 1 4 2 γ −1 4 γ2+ 1 4 γ2− 1 4 +s . After simplification we obtain from (6. Hence.5 (p. Choose A = 1. The central compensator (6.83).84) The magnitude of the corresponding frequency response function H ( jω) is constant and equal to γ . Let A = 1.83) For this compensator we have the closed-loop transfer function H (s ) = γ (1 + s )(s − (1 − s )(s + γ− 1 2 γ+ 1 2 γ− 1 2 γ+ 1 2 ) ) . ≤ γ . The spectral factorization of γ exists for γ > 0. B = 1. the equalizing compensator (6. Since the central compensator is not stabilizing for γ < 1 2 . (6.82) It may be checked that H Equalizing compensator. compensators exist such that H ∞ < 1 2 271 . The resulting “closed-loop (1 + s ) ( γ2+ 1 4 γ2− 1 4 + s )(1 − s ) .80) For γ > 1 2 the compensator is stable and for 0 ≤ γ < transfer function” may be found to be H (s ) = P (s ) − K (s ) = γ2 γ2− 1 4 it is unstable. and is canonical for γ = 1 . (6. Hence. 1 2 (6.83) is equalizing with H ∞ = γ . (6.85) with U scalar stable rational such that U ∞ ≤ 1. and we find H ∞ = | H ( 0 )| = γ2 . They are given by K (s ) = γ2− 1 4 1 4 +γ γ2− 3 4 γ2− 1 4 + s U (s ) γ γ2− 1 4 γ2+ 1 4 γ2− 1 4 +s − U (s ) .81) The magnitude of the corresponding frequency response function takes its peak value at the frequency 0. γ2 + 1 4 ∞ (6.80) is stabilizing iff the compensator itself is stable. 2 .4. (6. Stabilizing compensators. 268) for γ > 1 which is the case for γ > 1 2 2 stabilizing suboptimal compensators exist. with equality only for γ = 0 and γ = 1 2. B = 0. so that U = 0. obtained by letting U (s ) = 1. In particular.

272 .87) with A.86) 1 so that H0 ∞ = 1 2 . and of all suboptimal compensators approaches K0 (s ) = 1 2 the closed-loop transfer function is The optimal compensator.1. 6. is assumed to be given by z= Hx . In this and other state space versions of the standard H∞ problem it is usual to require that the compensator stabilize the entire closed-loop system. H0 (s ) = P (s ) − K0 (s ) = 1 2 1+s . This necessitates the assumption that the system x ˙ = Ax + Bu. The observed output y is y = Cx + w2 . and not just the loop around G22 . Since for γ < 2 no stabilizing compensators exist we conclude that the 1 compensator K0 = 2 is optimal.2. finally. It is easy to see that the transfer matrix of the standard plant is   H (sI − A )−1 F 0 H (sI − A )−1 B   (6. State space solution 6. that is. This compensator is (trivially) stable. and F constant matrices. State space formulation Consider a standard plant that is described by the state differential equation x ˙ = Ax + Bu + F w1 . Otherwise. 274) a more general version is considered. u (6. and C another constant matrix.5.5. Introduction In this section we describe how to perform the spectral factorization needed for the solution of the standard H∞ optimal regulator problem. The state space approach leads to a factorization algorithm that relies on the solution of two algebraic Riccati equations.88) with w2 the second component of w. and the vector-valued signal w1 one component of the external input w.5. y = Hx be stabilizable and detectable. In § 6. 1−s (6.90) G (s ) =  . closed-loop stabilization is not possible. The error signal z.89) with also H a constant matrix.5 (p. I 0 0 C (sI − A )−1 F I C (sI − A )−1 B This problem is a stylized version of the standard problem that is reminiscent of the LQG regulator problem. (6. 6. To this end. B.5. (6. H∞ -Optimization and µ-Synthesis Investigation of (6. we first introduce a special version of the problem in state space form. minimizes H ∞ with respect to all stable compensators.79) shows that for γ → 1 2 the transfer function .6.

The matrix X2 is a symmetric solution — again.91) ¯ 2 is defined as The constant matrix A ¯ 2 = A + ( 1 F F T − B BT ) X2 .94) is a stability matrix9 . The order of the compensator is the same as that of the standard plant.2 the rational matrix γ as defined in Summary 6.2 (p.98) (6. with X −1 I = Zγ = Mγ Y 0 I Mγ.92) The matrix X1 is a symmetric solution — if any exists — of the algebraic Riccati equation (6. if any exists — of the algebraic Riccati equation 0 = AT X2 + X2 A + H T H − X2 ( BBT − 1 F F T ) X2 γ2 (6.3. γ γ u = − BT X2 x ˆ.11 .5.93) (6.5. γ (6.21 Mγ. (sI − A + γI γ2 − BT X2 (6. A γ2 0 = A X1 + X1 AT + F F T − X1 (C T C − such that ¯ 1 = A + X1 ( 1 H T H − C T C ) A γ2 1 T H H ) X1 γ2 (6. 304) it is proved that for the state space standard problem described in § 6. 114). (6.96) −1 After some algebra we obtain from (6. we may obtain the transfer matrix of the central compensator as K = Y X −1 .10 (p.95) ¯ 2 is a stability matrix.21 0 (6. 265) may be spectrally factored as γ = ∼ −1 J Zγ . The central suboptimal compensator Once the spectral factorization has been established. 9A matrix is a stability matrix if all its eigenvalues have strictly negative real part. such that A The numerical solution of algebraic Riccati equations is discussed in § 3.2.4. where the inverse Mγ = Zγ of the spectral factor is given by Zγ Mγ (s ) = I 0 C 0 ¯ 2 )−1 ( I − 1 X1 X2 )−1 [ X1 C T γ B].99) The compensator consists of an observer interconnected with a state feedback law.9 (p.97) In state space form the compensator may be represented as 1 1 ˙ x ˆ = ( A + 2 F F T X2 ) x ˆ + ( I − 2 X1 X2 )−1 X1 C T ( y − C x ˆ ) + Bu.4. Factorization of γ In § 6. = Mγ. 273 . 11 of the central compensator in the form K (s ) = − BT X2 sI − A − 1 F F T X2 + BBT X2 γ2 −1 1 + ( I − 2 X1 X2 )−1 X1 C T C γ 1 ( I − 2 X1 X2 )−1 X1 C T .6.91) the transfer matrix K = Mγ.5.5. State space solution 6. 6.

A more general state space standard problem A more general state space representation of the standard plant is x ˙ z y = = = Ax + B1 w + B2 u .5. Doyle. ( A . Glover. D12 B1 has full row rank for all ω ∈ R (hence. Packard. The eigenvalues of the Hamiltonian corresponding to the Riccati equation (6.4 may be determined by finding the first value of γ as γ decreases from ∞ such that any of these eigenvalues reaches the imaginary axis. 1.93) are the poles of γ . The assumption of stabilizability and detectability ensures that the whole feedback system is stabilized.5. cannot be solved with this algorithm. Stoorvogel (1992) discusses a number of further refinements of the problem. and Smith 1991) and the Robust Control Toolbox (Chiang and Safonov 1992) a solution of the corresponding H∞ problem based on Riccati equations is implemented that requires the following conditions to be satisfied: 1. D12 and D21 have full rank. It consists of two algebraic Riccati equations whose solutions define an observer cum state feedback law. Hence.5. More extensive treatments may be found in a celebrated paper by Doyle.4. the number γ1 defined in the discussion on the lower bound in § 6. the number γ2 defined in the discussion on the lower bound in § 6. 274 . are considerably more complex than those in § 6. C2 x + D21 w + D22 u . (6. The solution is documented in a paper by Glover and Doyle (1988). and Francis (1989) and in Glover and Doyle (1989). 6.3 and § 6. 10 A matrix is tall if it has at least as many rows as columns. There are interesting H∞ problems that do not satisfy the stabilizability or detectability assumption. The required solutions X1 and X2 of the Riccati equations (6.93) and (6. 3.95) are nonnegative-definite. and. 4. hence. Glover.5. Khargonekar. 2. The expressions. Hence.101) (6. A − jω I C1 A − jω I C2 B2 has full column rank for all ω ∈ R (hence.5. 3.100) (6.6.95) are the poles of the inverse of γ . not just the loop around G22 . the eigenvalues of the Hamiltonian corresponding to the Riccati equation (6. Likewise. C1 x + D11 w + D12 u . B2 ) is stabilizable and (C2 .102) In the µ-Tools MATLAB toolbox (Balas. It is wide if it has at least as many columns as rows. H∞ -Optimization and µ-Synthesis 6. though. Characteristics of the state space solution We list a few properties of the state space solution. D21 is wide). D21 The solution of this more general problem is similar to that for the simplified problem.6. D12 is tall10 ). 2.4 may be determined by finding the first value of γ as γ decreases from ∞ such that any of these eigenvalues reaches the imaginary axis. A ) is detectable.


4. The central compensator is stabilizing iff λmax ( X2 X1 ) < γ 2 , (6.103)

where λmax denotes the largest eigenvalue. This is a convenient way to test whether stabilizing compensators exist. 5. The central compensator is of the same order as the generalized plant (6.100–6.102). 6. The transfer matrix K of the central compensator is strictly proper. 7. The factorization algorithm only applies if the factorization is canonical. If the factorization is noncanonical then the solution breaks down. This is described in the next section.
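The two-Riccati-equation solution and the test (6.103) are easily coded. The fragment below is a sketch for the simplified standard problem of § 6.5.2, assuming that the Control Toolbox Riccati solver care (or its successor icare) accepts the indefinite weighting matrices used here, and that A, B, C, F, H and the performance level gamma are given.

% Sketch: central compensator of Section 6.5 for a given gamma.
n  = size(A, 1);
% Control Riccati equation (6.95):
%   0 = A'X2 + X2*A + H'*H - X2*(B*B' - F*F'/gamma^2)*X2,
% written with an augmented input matrix and an indefinite weight.
R2 = blkdiag(eye(size(B, 2)), -gamma^2*eye(size(F, 2)));
X2 = care(A, [B F], H'*H, R2);
% Filter Riccati equation (6.93), the dual equation:
%   0 = A*X1 + X1*A' + F*F' - X1*(C'*C - H'*H/gamma^2)*X1.
R1 = blkdiag(eye(size(C, 1)), -gamma^2*eye(size(H, 1)));
X1 = care(A', [C' H'], F*F', R1);
% Existence test (6.103) for a stabilizing suboptimal compensator.
if max(real(eig(X2*X1))) < gamma^2
    % Central compensator (6.98)-(6.99): observer combined with state feedback.
    L  = (eye(n) - X1*X2/gamma^2) \ (X1*C');
    Ak = A + F*F'*X2/gamma^2 - B*(B'*X2) - L*C;
    K  = ss(Ak, L, -B'*X2, 0);      % strictly proper, same order as the plant
else
    disp('No stabilizing suboptimal compensator exists for this gamma.')
end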

6.6. Optimal solutions to the H∞ problem
6.6.1. Optimal solutions
In § 6.4.6 (p. 269) a procedure is outlined for finding optimal solutions that involves a search on the parameter γ . As the search advances the optimal solution is approached more and more closely. There are two possibilities for the optimal solution:
Type A solution. The central suboptimal compensator is stabilizing for all γ ≥ γ0 , with γ0 the

lower bound discussed in 6.4.4 (p. 267). In this case, the optimal solution is obtained for γ = γ0 .
Type B solution. As γ varies, the central compensator becomes destabilizing as γ decreases

below some number γopt with γopt ≥ γ0 .

In type B solutions a somewhat disconcerting phenomenon occurs. In the example of § 6.4.7 (p. 270) several of the coefficients of the spectral factor and of the compensator transfer matrix grow without bound as γ approaches γopt . This is generally true. In the state space solution the phenomenon is manifested by the fact that as γ approaches γopt either of the two following eventualities occurs (Glover and Doyle 1989):
(B1.) The matrix I − (1/γ²) X2 X1 becomes singular.

(B2.) In solving the Riccati equation (6.93) or (6.95) or both the “top coefficient matrix” E1 of

§ 3.2.9 (p. 114) becomes singular, so that the solution of the Riccati equation ceases to exist.

In both cases large coefficients occur in the equations that define the central compensator. The reason for these phenomena for type B solutions is that at the optimum the factorization of γ is non-canonical. As illustrated in the example of § 6.4.7, however, the compensator transfer matrix approaches a well-defined limit as γ → γopt , corresponding to the optimal compensator. The type B optimal compensator is of lower order than the suboptimal central compensator. Also this is generally true. It is possible to characterize and compute all optimal solutions of type B (Glover, Limebeer, Doyle, Kasenally, and Safonov 1991; Kwakernaak 1991). An important characteristic of optimal solutions of type B is that the largest singular value σ( ¯ H ( jω)) of the optimal closed-loop frequency response matrix H is constant as a function of the frequency ω ∈ R. This is known as the equalizing property of optimal solutions.


Straightforward implementation of the two-Riccati equation algorithm leads to numerical difficulties for type B problems. As the solution approaches the optimum several or all of the coefficients of the compensator become very large. Because eventually the numbers become too large the optimum cannot be approached too closely. One way to avoid numerical degeneracy is to represent the compensator in what is called descriptor form (Glover, Limebeer, Doyle, Kasenally, and Safonov 1991). This is a modification of the familiar state representation and has the appearance Ex ˙ y = = Fx + Gu, Hx + Lu . (6.104) (6.105)

If E is square nonsingular then by multiplying (6.104) on the left by E −1 the equations (6.104– 6.105) may be reduced to a conventional state representation. The central compensator for the H∞ problem may be represented in descriptor form in such a way that E becomes singular as the optimum is attained but all coefficients in (6.104–6.105) remain bounded. The singularity of E may be used to reduce the dimension of the optimal compensator (normally by 1). General descriptor systems (with E singular) are discussed in detail in Aplevich (1991).
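In MATLAB a descriptor realization may be entered directly; the following minimal sketch (assuming the Control Toolbox function dss) uses an arbitrary example in which E is nearly singular, as happens when a type B optimum is approached.

% Sketch: a descriptor realization E*xdot = F*x + G*u, y = H*x + L*u.
E = [1 0; 0 1e-8];             % nearly singular
F = [-1 2; 0 -3];   G = [1; 1];
H = [1 0];          L = 0;
sysd = dss(F, G, H, L, E);     % descriptor state-space object
tf(sysd)                       % the transfer function remains well defined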

6.6.2. Numerical examples
We present the numerical solution of two examples of the standard problem.
Example 6.6.1 (Mixed sensitivity problem for the double integrator). We consider the mixed sensitivity problem discussed in § 6.2.5 (p. 254). With P, V , W1 and W2 as in 6.3, by (6.39) the standard plant transfer matrix is

\[ G(s) = \begin{bmatrix} \dfrac{M(s)}{s^2} & \dfrac{1}{s^2} \\[1ex] 0 & c(1+rs) \\[1ex] -\dfrac{M(s)}{s^2} & -\dfrac{1}{s^2} \end{bmatrix}, \qquad (6.106) \]

where M(s) = s² + s√2 + 1. We consider the case r = 0 and c = 0.1. It is easy to check that for r = 0 a state space representation of the plant is

\[ \dot x = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}_{A} x + \underbrace{\begin{bmatrix} \sqrt{2} \\ 1 \end{bmatrix}}_{B_1} w + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{B_2} u, \qquad (6.107) \]

\[ z = \underbrace{\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}}_{C_1} x + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{D_{11}} w + \underbrace{\begin{bmatrix} 0 \\ c \end{bmatrix}}_{D_{12}} u, \qquad (6.108) \]

\[ y = \underbrace{\begin{bmatrix} -1 & 0 \end{bmatrix}}_{C_2} x + \underbrace{(-1)}_{D_{21}} w. \qquad (6.109) \]

Note that we constructed a joint minimal realization of the blocks P and V in the diagram of Fig. 6.7. This is necessary to satisfy the stabilizability condition of § 6.5.5 (p. 274). A numerical solution may be obtained with the help of the µ-Tools M ATLAB toolbox (Balas, Doyle, Glover, Packard, and Smith 1991). The search procedure for H∞ state space problems


implemented in µ-Tools terminates at γ = 1.2861 (for a specified tolerance in the γ-search of 10⁻⁸). The state space representation of the corresponding compensator is

\[ \dot{\hat x} = \begin{bmatrix} -0.3422\times 10^9 & -1.7147\times 10^9 \\ 1 & -1.4142 \end{bmatrix}\hat x + \begin{bmatrix} -0.3015 \\ -0.4264 \end{bmatrix} y, \qquad (6.110) \]

\[ u = \begin{bmatrix} -1.1348\times 10^9 & -5.6871\times 10^9 \end{bmatrix}\hat x. \qquad (6.111) \]

Numerical computation of the transfer function of the compensator results in

\[ K(s) = \frac{2.7671\times 10^9\, s + 1.7147\times 10^9}{s^2 + 0.3422\times 10^9\, s + 2.1986\times 10^9}. \qquad (6.112) \]

The solution is of type B as indicated by the large coefficients. By discarding the term s2 in the denominator we may reduce the compensator to K (s ) = 1.2586 s + 0.6197 , 1 + 0.1556s (6.113)

which is the result stated in (6.31)(p. 256). It may be checked that the optimal compensator possesses the equalizing property: the mixed sensitivity criterion does not depend on frequency. The case r = 0 is dealt with in Exercise 6.7.2 (p. 288).
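The example may be reproduced along the following lines in MATLAB, assuming the Robust Control Toolbox routine hinfsyn (the present-day counterpart of the µ-Tools routine used here) and the state space realization as reconstructed in (6.107)–(6.109).

% Sketch: mixed sensitivity design for the double integrator (r = 0, c = 0.1).
c  = 0.1;
A  = [0 1; 0 0];
B1 = [sqrt(2); 1];    B2  = [0; 1];
C1 = [1 0; 0 0];      D11 = [1; 0];   D12 = [0; c];
C2 = [-1 0];          D21 = -1;       D22 = 0;
G  = ss(A, [B1 B2], [C1; C2], [D11 D12; D21 D22]);   % generalized plant (6.107)-(6.109)
[K, CL, gam] = hinfsyn(G, 1, 1);     % one measurement, one control; gam should be near 1.2861
% Equalizing property: the largest singular value of the closed-loop frequency
% response is (nearly) flat at the level gam.
w  = logspace(-2, 3, 400);
sv = sigma(CL, w);
semilogx(w, 20*log10(sv(1, :)))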
Example 6.6.2 (Disturbance feedforward). Hagander and Bernhardsson (1992) discuss the example of Fig. 6.13. A disturbance w acts on a plant with transfer function 1/(s + 1 ). The plant

is controlled by an input u through an actuator that also has the transfer function 1/(s + 1). The disturbance w may be measured directly, and it is desired to control the plant output by feedforward control from the disturbance. Thus, the observed output y is w. The "control error" has as components the weighted control system output z1 = x2/ρ, with ρ a positive coefficient, and the control input z2 = u. The latter is included to prevent overly large inputs.

Figure 6.13.: Feedforward configuration (the disturbance w and the actuator output x1 are summed at the input of the plant 1/(s + 1), whose state is x2; the control error components are z1 = x2/ρ and z2 = u, and the compensator K(s) generates u from the observed output y = w)

In state space form the standard plant is given by

\[ \dot x = \underbrace{\begin{bmatrix} -1 & 0 \\ 1 & -1 \end{bmatrix}}_{A} x + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{B_1} w + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{B_2} u, \qquad (6.114) \]

\[ z = \underbrace{\begin{bmatrix} 0 & 1/\rho \\ 0 & 0 \end{bmatrix}}_{C_1} x + \underbrace{\begin{bmatrix} 0 \\ 0 \end{bmatrix}}_{D_{11}} w + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{D_{12}} u, \qquad (6.115) \]

\[ y = \underbrace{\begin{bmatrix} 0 & 0 \end{bmatrix}}_{C_2} x + \underbrace{1}_{D_{21}} w. \qquad (6.116) \]

The lower bound of § 6.4.4 (p. 267) may be found to be given by γ0 = 1/√(1 + ρ²). (6.117)

Define the number ρc = √((√5 + 1)/2) ≈ 1.272. (6.118)

Hagander and Bernhardsson prove this (see Exercise 6.6.3):

1. For ρ > ρc the optimal solution is of type A.

2. For ρ < ρc the optimal solution is of type B.

For ρ = 2, for instance, numerical computation using µ-Tools with a tolerance 10⁻⁸ results in

γopt = γ0 = 1/√5 = 0.4472,   (6.119)

with the central compensator

\[ \dot{\hat x} = \begin{bmatrix} -1.6229 & -0.4056 \\ 1 & -1 \end{bmatrix}\hat x + \begin{bmatrix} 0 \\ 0.7071 \end{bmatrix} y, \qquad (6.120) \]

\[ u = \begin{bmatrix} -0.8809 & -0.5736 \end{bmatrix}\hat x. \qquad (6.121) \]

The coefficients of the state representation of the compensator are of the order of unity, and the compensator transfer function is K(s) = (−0.4056 s − 0.4056)/(s² + 2.6229 s + 2.0285). (6.122)

There is no question of order reduction. The optimal solution is of type A. For ρ = 1 numerical computation leads to

γopt = 0.7167 > γ0 = 0.7071,   (6.123)

with the central compensator

\[ \dot{\hat x} = \begin{bmatrix} -6.9574\times 10^7 & -4.9862\times 10^7 \\ 1 & -1 \end{bmatrix}\hat x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} y, \qquad (6.124) \]

\[ u = \begin{bmatrix} -6.9574\times 10^7 & -4.9862\times 10^7 \end{bmatrix}\hat x. \qquad (6.125) \]


The compensator transfer function is K(s) = (−4.9862×10⁷ s − 4.9862×10⁷)/(s² + 0.6957×10⁸ s + 1.1944×10⁸). (6.126)

By discarding the term s² in the denominator this reduces to the first-order compensator K(s) = −0.7167 (s + 1)/(s + 1.7167). (6.127)

The optimal solution now is of type B.
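Both cases of this example may be recomputed with a few lines of MATLAB, again assuming hinfsyn from the Robust Control Toolbox and the state space model (6.114)–(6.116) as reconstructed above.

% Sketch: disturbance feedforward example for rho = 2 and rho = 1.
for rho = [2 1]
    A  = [-1 0; 1 -1];
    B1 = [0; 1];            B2  = [1; 0];
    C1 = [0 1/rho; 0 0];    D11 = [0; 0];   D12 = [0; 1];
    C2 = [0 0];             D21 = 1;        D22 = 0;
    G  = ss(A, [B1 B2], [C1; C2], [D11 D12; D21 D22]);
    [K, CL, gopt] = hinfsyn(G, 1, 1);
    g0 = 1/sqrt(1 + rho^2);                  % lower bound (6.117)
    fprintf('rho = %g: gamma_opt = %6.4f, gamma_0 = %6.4f\n', rho, gopt, g0)
end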
Exercise 6.6.3 (Disturbance feedforward).

1. Verify that the lower bound γ0 is given by (6.117).

2. Prove that ρc as given by (6.118) separates the solutions of type A and type B.

3. Prove that if ρ < ρc then the minimal ∞-norm γopt is the positive solution of the equation γ⁴ + 2γ³ = 1/ρ² (Hagander and Bernhardsson 1992).

4. Check the frequency dependence of the largest singular value of the closed-loop frequency response matrix for the two types of solutions.

5. What do you have to say about the optimal solution for ρ = ρc?

6.7. Integral control and high-frequency roll-off
6.7.1. Introduction
In this section we discuss the application of H∞ optimization, in particular the mixed sensitivity problem, to achieve two specific design targets: integral control and high-frequency roll-off. Integral control is dealt with in § 6.7.2. In § 6.7.3 we explain how to design for high-frequency roll-off. Subsection 6.7.4 is devoted to an example. The methods to obtain integral control and high frequency roll-off discussed in this section for H∞ design may also be used with H2 optimization. This is illustrated for SISO systems in § 3.5.4 (p. 130) and § 3.5.5 (p. 132).

6.7.2. Integral control
In § 2.3 (p. 64) it is explained that integral control is a powerful and important technique. By making sure that the control loop contains “integrating action” robust rejection of constant disturbances may be obtained, as well as excellent low-frequency disturbance attenuation and command signal tracking. In § 6.2.3 (p. 252) it is claimed that the mixed sensitivity problem allows to

Figure 6.14.: One-degree-of-freedom feedback system

design for integrating action. Consider the SISO mixed sensitivity problem for the one-degree-offreedom configuration of Fig. 6.14. The mixed sensitivity problem amounts to the minimization of the peak value of the function | V ( jω) W1 ( jω) S ( jω)|2 + | V ( jω) W2 ( jω)U ( jω)|2 , ω ∈ R. (6.128)

S and U are the sensitivity function and input sensitivity function S = 1/(1 + PC), U = C/(1 + PC), (6.129)

respectively, and V, W1 and W2 suitable frequency dependent weighting functions. If the plant P has "natural" integrating action, that is, P has a pole at 0, then no special provisions are needed. In the absence of natural integrating action we may introduce integrating action by letting the product V W1 have a pole at 0. This forces the sensitivity function S to have a zero at 0, because otherwise (6.128) is unbounded at ω = 0. There are two ways to introduce such a pole into V W1:

1. Let V have a pole at 0, that is, take V(s) = V0(s)/s, (6.130)

with V0(0) ≠ 0. We call this the constant disturbance model method.

2. Let W1 have a pole at 0, that is, take W1(s) = W1o(s)/s, (6.131)

with W1o(0) ≠ 0. This is the constant error suppression method. We discuss these two possibilities.
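The effect of placing the pole at 0 in V (and, correspondingly, a zero at 0 in W2, as discussed below) may be inspected numerically. The sketch below evaluates the criterion (6.128) on a frequency grid for hypothetical illustrative choices of P, C, V, W1 and W2; none of these particular transfer functions is taken from the text.

% Sketch: evaluating the mixed sensitivity criterion (6.128) on a grid.
s  = tf('s');
P  = 1/(s + 1);                    % hypothetical plant without integrating action
C  = (2*s + 1)/s;                  % trial compensator with integrating action
V  = (s + 1)/s;                    % constant disturbance model: V has a pole at 0
W1 = 1;
W2 = 0.1*s/(1 + 0.01*s);           % zero at 0, as required when V has a pole at 0
S  = 1/(1 + P*C);   U = C/(1 + P*C);
w  = logspace(-4, 3, 400);
J  = abs(squeeze(freqresp(V*W1*S, w))).^2 + abs(squeeze(freqresp(V*W2*U, w))).^2;
semilogx(w, 10*log10(J))           % remains bounded as w -> 0 because S(0) = 0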
Constant disturbance model. We first consider letting V have a pole at 0. Although V W1 needs to have a pole at 0, the weighting function V W2 cannot have such a pole. If V W2 has a pole at 0 then U would be forced to be zero at 0. S and U cannot vanish simultaneously at 0. Hence, if V has a pole at 0 then W2 should have a zero at 0. This makes sense. Including a pole at 0 in V means that V contains a model for constant disturbances. Constant disturbances can only be rejected by constant inputs to the plant. Hence, zero frequency inputs should not be penalized by W2 . Therefore, if V is of the form (6.130) then we need

W2(s) = s W2o(s), (6.132)

with W2o(0) ≠ ∞. Figure 6.15(a) defines the interconnection matrix G for the resulting standard problem. Inspection shows that owing to the pole at 0 in the block V0(s)/s outside the feedback loop this standard problem does not satisfy the stabilizability condition of § 6.5.5 (p. 274). The first step towards resolving this difficulty is to modify the diagram to that of Fig. 6.15(b), where the plant transfer matrix has been changed to s P(s)/s. The idea is to construct a simultaneous minimal state realization of the blocks V0(s)/s and s P(s)/s. This brings the unstable pole at 0 "inside the loop."


Figure 6.15.: Constant disturbance model (block diagrams (a), (b) and (c) of the successive modifications described in the text)

6.3 (p. and doing a mixed sensitivity design for this augmented system. 6. 282 .16(b). 252). of course. The resulting interconnection matrix   W1 V W1 P W2  (6.16(c) shows that both the constant disturbance model method and the constant error rejection method may be reduced to the integrator in the loop method. cannot be realized as a state system of the form needed for the state space solution of the H∞ -problem.133) Constant error suppression.134) G= 0 −V −P is also nonproper and.17(a). Suppose that the compensator K0 solves this modified problem. The block diagram of Fig.5. 6.15(c) and 6.6.16(c). Suppose by way of example that the desired form of W2 is W2 (s ) = c (1 + rs ). with c and r 11 That is.7. hence. To prevent the pole at 0 of the integrator from reappearing in the closed-loop system it is necessary to let V have a pole at 0. 6. Normally V is chosen biproper11 such that V (∞ ) = 1.15(c) and 6.16(c). Further block diagram substitution yields the arrangement of Fig.17(a) produces the block diagrams of Figs.15(c). to achieve a roll-off of m decades/decade it is necessary that W2 behaves as sm for large s. If the desired roll-off m is greater than 0 then the weighting function W2 needs to be nonproper. high-frequency roll-off in the compensator transfer function K and the input sensitivity function U may be pre-assigned by suitably choosing the high-frequency behavior of the weighting function W2 .15(c) defines a modified mixed sensitivity problem. both V and 1/ V are proper. that owing to the cancellation in s P (s )/s there is no minimal realization that is controllable from the plant input. 6. By a minimal joint realization of the blocks V0 (s )/s and P (s )/s the interconnection system may now be made stabilizable. Figure 6.5 (p. we remove the factor s from the numerator of s P (s )/s by modifying the diagram to that of Fig. 6.16(a) shows the corresponding block diagram. If W2 (s ) V (s ) is of order sm as s approaches ∞ then K and U have a high-frequency roll-off of m decades/decade. 6. Hence. 274). 6. High-frequency roll-off As explained in § 6. Again straightforward realization results in violation of the stabilizability condition of § 6. Contemplation of the block diagrams of Figs. H∞ -Optimization and µ-Synthesis The difficulty now is.4. We next consider choosing W1 (s ) = W1o (s )/s. Then the original problem is solved by compensator K (s ) = K0 (s ) . This corresponds to penalizing constant errors in the output with infinite weight. 6. After a compensator K0 has been obtained for the augmented plant the actual compensator that is implemented consists of the series connection K (s ) = K0 (s )/s of the compensator K0 and the integrator as in Fig. This is illustrated in the example of § 6. Hence. This means that V (∞ ) is finite and nonzero. Integrator in the loop method. The makeshift solution usually seen in the literature is to cut off the roll-on at high frequency. 6.3. The offending factor 1/s may be pulled inside the loop as in Fig. This method amounts to connecting an integrator 1/s in series with the plant P as in Fig.7.2. s (6. Replacing V (s ) with V0 (s )/s in the diagram of Fig.17(b).

: Constant error suppression 283 . Integral control and high-frequency roll-off (a) z2 W2 (s) w V(s) + v − y K(s) u P(s) + W1o(s) s z1 z2 (b) W2 (s) w V(s) + v − y K(s) u P(s) + W1o(s) s s W1o (s) z 1 z2 (c) W2 (s) w V(s) W 1o (s) s P(s)W1o (s) s + v + z1 − y sK(s) W1o (s) Ko(s) u Figure 6.6.16.7.

7. The example of § 6. This increase in complexity is the price of using the state space solution. W2 difficulty in finding a state realization of the equivalent mixed sensitivity problem defined by −1 P. These are the design specifications: 1.17. 6.18 with the modified plant P0 = W2 Suppose that the modified problem is solved by the compensator K0 .7. s+ p (6.1.6.4. 1 + sτ (6. Constant disturbance rejection.137) with p = 0.: Integrator in the loop method positive constants. 6. Example We present a simple example to illustrate the mechanics of designing for integrating action and high-frequency roll-off. Fig. (6. Then the original problem is solved by −1 K = K0 W2 . and there is no 1992). Consider the first-order plant P (s ) = p .4 illustrates the procedure. 284 . H∞ -Optimization and µ-Synthesis (a) z2 w V(s) Po (s) + P(s) + v W1(s) z1 W2 (s) − y Ko (s) u 1 s (b) − Ko(s) K(s) 1 s P(s) Figure 6. Then W2 may be made proper by modifying it to W2 (s ) = c (1 + rs ) .18 (Krause −1 is necessarily proper.136) Inspection suggests that the optimal compensator K has the zeros of W2 as its poles. This contrivance may be avoided by the block diagram substitution of Fig. 6. Because this is in fact not the case the extra poles introduced into K cancel against corresponding zeros.135) with τ r a small positive time constant. in the SISO case. If W2 is nonproper then.

143) also Exercise 6.17(a) with the substitution of Fig.1 (p.140) 2 of the numerator of V . 6.19 defines a standard mixed sensitivity problem with P (s ) V (s ) 12 See = = p rc s (s + 1 r )( s + p ) . 6. (6.139) s (s + p ) √ The choice a = 2 and b = 1 places two closed-loop poles at the roots 1√ 2 ( −1 ± j ) (6. 4.6.7. We plan these to be the dominant closed-loop poles in order to achieve a satisfactory time response and the required bandwidth.138) P0 (s ) = s (s + p ) Partial pole placement (see § 6. 3.18.142) 2 (s + 1 s2 + as + b r )( s + as + b ) = .141) with the positive constants c and r to be selected12 . We use the integrator in the loop method of § 6. 285 .7.2. Accordingly. p.19 shows the block diagram of the mixed sensitivity problem thus defined. The diagram of Fig. Figure 6. s (s + p ) s (s + 1 r )( s + p ) (6. (6.18. To satisfy the high-frequency roll-off specification we let V (s ) = W2 (s ) = c (1 + rs ). the plant is modified to p . 6. A closed-loop bandwidth of 1. Integral control and high-frequency roll-off w z2 Ko y K W2 u W2−1 Po P + V + v W1 z1 − Figure 6.7. (6. It does not look as if anything needs to be accomplished with W1 so we simply let W1 (s ) = 1. (6. It is obtained from Fig.: Block diagram substitution for nonproper W2 2. A suitable time response to step disturbances without undue sluggishness or overshoot.4. 253) is achieved by letting s2 + as + b . We can now see why in the eventual design the compensator K has no pole at the zero −1/ r of the weighting factor W2 . 287).2 to design for integrating action. High-frequency roll-off of the compensator and input sensitivity at 1 decade/decade.

If V is in the form of the rightmost side of (6.6.147) = = 0 0 1 −p 0 1 x1 b p + w+ u. 253) this zero reappears as a closed-loop pole.148) (6.149) Combining the state differential equations (6. cancels in the transfer function K of (6.19 has the transfer matrix representation 2 z1 = s + as + b s (s + p ) p s (s + p ) w . r (6. a− p 0 x2 x1 + w. This zero. in turn.143) then its numerator has a zero at −1/ r .148) and arranging the output equations we obtain the equations for the interconnection system that defines the standard problem as 286 .146) may be represented in state form as x ˙3 u = = 1 1 − x3 + u 0 . 6.145) and (6. H∞ -Optimization and µ-Synthesis w z2 Po(s) − y Ko(s) uo 1 c(1+rs) u p s(s+p) s2+as+1 s(s+p) + + z1 Figure 6.2. r c 1 x3 .136).4 (p. By the partial pole assignment argument of § 6.: Modified block diagram for the example and W1 (s ) = W2 (s ) = 1. Because this zero is also an open-loop pole it is necessarily a zero of the compensator K0 that solves the mixed sensitivity problem of Fig. 6.19. x2 (6.19.145) (6. The system within the larger shaded block of Fig. u (6.144) This system has the minimal state realization x ˙1 x ˙2 z1 The block u= 1 u0 c (1 + rs ) (6.

The desired roll-off of 1 decade/decade may be obtained with the constant weighting function W2 (s ) = c.7049 ) (6. 287 .153) Neglecting the term s3 in the denominator results in K0 (s ) = 1.056 · sW2 (s ) (s + 9. (c) The roll-off of 2 decades/decade is more than called for by the design specifications. (s + 9.3207 × 1010 s + 1.7. Integral control and high-frequency roll-off follows:   x ˙1 x ˙2  = x ˙3  0 0 1 − p 0 0 A       0 x1 b 0   x 2  +  a − p w +  0  u 0 . Exercise 6.2047 ) 10 = 12.7049 ) K0 (s ) (s + 10 )(s + 0. s (s + 9. the optimal compensator for the original problem is K (s ) = = (6. (a) For the compensator (6.055s + 2. Numerical computation of the optimal compensator yields the type B solution K0 (s ) = 1. and the complementary sensitivity function T . as expected. in particular.6.9195 × 1010 (6.8552 )(s + 1.154) (6.7049 ) s (s + 10 ) 120. x3 D21 C2 B2 (6.150) z = y = B1   x 0 0 1 0  1 1 x2 + w+ u 1 0 0 0 0 0 x3 D12 D11 C1   x1 0 −1 0  x2  + (−1 ) w. if S and T do not peak too much. The compensator has a high-frequency roll-off of 2 decades/decade.56(s + 0.820 0.1142s2 + 1.1 (Completion of the design).1142 × 1010 s2 + 1. (d) Repeat (a) and (b) for the design of (c). Complete the design of this example.2047 ) . the input sensitivity function U .157).152) Let c = 1 and r = 0. as planned.2047 ) = 12.156) (6.377 × 1010 s2 + 14.91195 (s + 10 )(s + 0.1. compute and plot the sensitivity function S .8552 )(s + 1.151) (6. See if these functions may be improved by changing the values of c and r by trial and error. (b) Check whether these functions behave satisfactorily. Recompute the optimal compensator for this weighting function with c = 1.377s2 + 14.8552 )(s + 1.056 .820 × 1010 s3 + 0. 1 x3 0 −1 r c p r (6.7.3207s + 1.155) Hence.157) The pole at −10 cancels.055 × 1010 s + 2. and the compensator has integrating action.

1 (p.8.158) ∞ such that ∞ ≤ 1. µ-Synthesis 6.1.20 for the designs selected in (b) and (d).8 (p.32). Let H denote the transfer function from w to z (under perturbation ). the transfer function is assumed to have been scaled in such a way that the control system performance is deemed satisfactory if H ∞ ≤ 1. is stable under all structured perturbations and 2. In Example 6. The central result established in § 5. Introduction In § 5. The signal w is the external input to the control system and z the control error.5 (p. Suppose that the control system performance and stability robustness may be modified by feedback through a compensator with transfer matrix K .8. that is. that is. We briefly review the conclusions.21(a). 276) the numerical solution of the double integrator example of § 6. (6.freedom system with disturbance v Exercise 6. as in Fig. if and only if µ H < 1. ≤ 1 under all structured perturbations such that The quantity µ H .6. 6.8 is that the control system 1.172).: One-degree-of. 6. and verify (6.7. 6. H ∞ ≤ 1.20. v + z C − u P + Figure 6. defined in (5. with structured perturbations represented by the block .22(a). 6.2 (Roll-off in the double integrator example).21(b). The input and output signals of the perturbation block are denoted p and q. 6. Denote by µ H K the structured singular 288 . has robust performance.6. Discuss the results. is robustly stable. is the structured singular value of H with respect to perturbations as in Fig. The perturbations are assumed to have been scaled such that all possible structured perturbations are bounded by ∞ ≤ 1. 254) is presented for r = 0. Again. Extend this to the case r = 0. the interconnection block H represents a control system. In Fig. with structured and 0 unstructured. respectively. 240) the use of the structured singular value for the analysis of stability and performance robustness is explained. The compensator feeds the measured output y back to the control input u. H∞ -Optimization and µ-Synthesis (e) Compute and plot the time responses of the output z and the plant input u to a unit step disturbance v in the configuration of Fig.2.

We analyze its robustness with respect to proportional plant perturbations of the form P −→ P (1 + δ p W2 ) with δ P ∞ (6.8.21.159) (a) (b) ∆0 ∆ q w H ∆ p z w q H p z Figure 6. 289 . 6. (6.161) for all perturbations. 6.23 is considered. Then µ-synthesis is any procedure to construct a compensator K (if any exists) that stabilizes the nominal feedback system and makes µ H K < 1. Approximate µ-synthesis for SISO robust performance In § 5. µ-Synthesis value of the closed-loop transfer matrix H K of the system of Fig.: µ-Synthesis for robust stability and performance. W1 and W2 are suitable frequency dependent functions. 6. The system is deemed to have satisfactory performance robustness if ∞ W1 S ≤1 (6.8.22(b) with respect to structured perturbations and unstructured perturbations 0 . with S the sensitivity function of the closed-loop system.: µ-Analysis for robust stability and performance.22.2. (a) (b) ∆0 ∆ q w u K G y p z w u q ∆ p G y K z Figure 6.160) ≤ 1.6.8 the SISO single-degree-of-freedom configuration of Fig.

8 that the closed-loop system is robustly stable and has robust performance if and only if µ H K = sup (| W1 ( jω) S ( jω)| + | W2 ( jω) T ( jω)| ) < 1. Thus.6. if we can find a compensator K such that sup (| W1 ( jω) S ( jω)|2 + | W2 ( jω) T ( jω)|2 )1/2 = W1 S W2 T < 1 2 √ ω ∈R 2 (6.9.23. b) and (1. 290 .2 (Minimization of upper bound). H∞ -Optimization and µ-Synthesis w K u P + + z − Figure 6. (6. 1).165) ∞ then this compensator achieves robust performance and stability.162) T is the complementary sensitivity function of the closed-loop system. ω ∈R (6. Unfortunately. The problem cannot be solved by the techniques available for the standard H∞ problem. 294). 236). may be obtained by minimizing W1 S W2 T .1 (p. Exercise 6.7. the robust performance and stability problem consists of determining a feedback compensator K (if any exists) that stabilizes the nominal system and satisfies µ H K < 1.166) ∞ which is nothing but a mixed sensitivity problem. this is not a standard H∞ -optimal control problem. no solution is known to this problem.: Single-degree.163) may also be obtained from Summary 5. Exercise 6. Show that the upper bound on the right-hand side of (6. In fact. Such a compensator.164) implies µ H K < 1. sup (| W1 ( jω) S ( jω)|2 + | W2 ( jω) T ( jω)|2 ) < ω ∈R 1 2 (6. for instance by application of the Cauchy-Schwarz inequality to the vectors (a. if any exists. Hence.1 (Proof of inequality). One way of doing this is to minimize µ H K with respect to all stabilizing compensators K .8. the robust design problem has been reduced to a mixed sensitivity problem. Prove the inequality (a + b )2 ≤ 2 (a2 + b2 ) for any two real numbers a and b.of-freedom configuration By determining the structured singular value it follows in § 5.163) (6.8. This reduction is not necessarily successful in solving the robust design problem — see Exercise 6. Hence.7(2) (p. Therefore. By the well-known inequality (a + b )2 ≤ 2(a2 + b2 ) for any two real numbers a and b it follows that (| W1 S | + | W2 T | )2 ≤ 2(| W1 S |2 + | W2 T |2 ).

µ-Synthesis 6. Minimizing (6. for a number of values of ω on a with D and D suitable frequency grid. On the frequency grid. Because of the scaling property it is sufficient to fit their magnitudes only. where H K is the closed-loop transfer matrix of the configuration of Fig. The problem of minimizing with D and D µ H K = sup µ( H K ( jω)) ω ∈R (6. of D and D The resulting extra freedom (in the phase) is used to improve the fit.167) (6. and compute the corresponding nominal closed-loop transfer matrix H K . 291 . Various approximate solution methods to the µsynthesis problem have been suggested.6.22 from w to z with = 0 = 0.8. µ( M ) ≤ σ ¯ [ DM D ¯ suitable diagonal matrices. The best known of these (Doyle 1984) relies on the property (see Summary 5.K iteration).7(c). σ ¯ [ D ( jω) H K ( jω) D (6. fit stable minimum-phase rational functions to the diagonal entries ¯ . Replace the original ¯ with their rational approximations. If µ H K is small enough then stop.8. 6. matrix functions D and D ¯ . 3.169) ¯ ( jω) are chosen so that the bound where for each frequency ω the diagonal matrices D ( jω) and D is the tightest possible. D min ¯ −1 ( jω)].3 ( D. 2. The maximum of this upper bound over the frequency grid is an estimate of µ H K . (6. 1. Denote the minimizing compensator as K and the corresponding closed-loop transfer matrix as H K . continue. Otherwise.8.170) ¯ diagonal matrices as in Summary 5. provided ¯ are rational stable matrix functions. Given the rational approximations D and D stabilizing compensators K .7. 0 One way of finding an initial compensator is to minimize H K ∞ with respect to all sta0 bilizing K .168) is now replaced with the problem of minimizing the bound ¯ −1 ( jω)] = DH K D ¯ −1 ¯ [ D ( jω) H K ( jω) D sup σ ω ∈R ∞. Evaluate the upper bound ¯ ( jω) D ( jω). Choose an initial compensator K that stabilizes the closed-loop system.3. Approximate solution of the µ-synthesis problem More complicated robust design problems than the simple SISO problem we discussed cannot be reduced to a tractable problem so easily. D and D Doyle’s method is based on D-K iteration: Summary 6.169) with respect to K is a standard H∞ problem. minimize DH K D ¯ −1 ∞ with respect to all 4. Return to 2.7(2–3) ¯ −1 ].7.

Doyle. Td = √ 2 = 1. V S0 is the sensitivity function of the nominal closed-loop system and is given by S0 ( s ) = s2 (1 + sT0 ) . Introduction To illustrate the application of µ-synthesis we consider the by now familiar SISO plant of Example 5. “Real” perturbations. s2 (6. Performance specification In Example 5. In this section we explore how D– K iteration may be used to obtain an improved design that achieves robust performance.7652 ± j 0.2.2 (p. Packard.9.2.6.175) with χ0 (s ) = T0 s3 + s2 + g0 Td s + g0 k the nominal closed-loop characteristic polynomial. Glover.4679.7715 and −8.4 (p. perturbations corresponding to dynamic uncertainties.9. so that the nominal plant transfer function is P0 (s ) = g0 . g = g0 = 1 and θ = 0.9. The procedure is continued until a satisfactory solution is obtained. and Smith 1991).1.1.1 (p. (6. 6. 6.174) The function V is chosen as 1 = (1 + ε) S0 . caused by uncertain parameters. An application of µ-synthesis 6. | V ( jω)| ω ∈ R.176) (6. 292 .414. Convergence is not guaranteed. need to be overbounded by dynamic uncertainties. that is. A lengthy example is discussed in § 6.9. (6.2. The method may be implemented with routines provided in the µ-Tools toolbox (Balas.8. T0 = 0. 1 + sT0 k = 1. 198) with transfer function P (s ) = g s2 (1 + sθ) . H∞ -Optimization and µ-Synthesis Any algorithm that solves the standard H∞ -problem exactly or approximately may be used in step (d).172) In Example 5. 199) root locus techniques are used to design a modified PD compensator with transfer function C (s ) = k + sTd .173) The corresponding closed-loop poles are −0.171) Nominally. χ0 ( s ) (6. The number ε is a tolerance. The method is essentially restricted to “complex” perturbations. (6. 245) the robustness of the feedback system is studied with respect to the performance specification | S ( jω)| ≤ 1 .

In Example 5.5.4 it is found that robust performance is guaranteed with a tolerance ε = 0.25 for parameter perturbations that satisfy 0.9 ≤ g ≤ 1.1 and 0 ≤ θ ≤ 0.1. In the present example we wish to redesign the system such that performance robustness is achieved for a modified range of parameter perturbations, namely for

   0.5 ≤ g ≤ 1.5,  0 ≤ θ ≤ 0.05.   (6.177)

For the purposes of this example we moreover change the performance specification model. Instead of relating performance to the nominal performance of a more or less arbitrary design we choose the weighting function V such that

   1/V(s) = (1 + ε) s² / (s² + 2ζω0 s + ω0²).   (6.178)

Again, ε is a tolerance. The constant ω0 specifies the minimum bandwidth and ζ determines the maximally allowable peaking. Numerically we choose ε = 0.05, ω0 = 1 and ζ = ½.

Figure 6.24 shows plots of the bound 1/|V| together with plots of the perturbed sensitivity functions S for the four extreme combinations of parameter values in (6.177). The design (6.173) does not meet the specification (6.174) with V given by (6.178) for this range of perturbations. In particular, the bound is violated when the value of the gain g drops to too small values.

Figure 6.24. Magnitude plot of S for different parameter value combinations and the bound 1/V

6.9.3. Setting up the problem

To set up the µ-synthesis problem we first define the perturbation model. The robustness test used in this example is based on a proportional loop perturbation model. As in Example 5.5.4 the loop gain has proportional perturbations

   δP(s) = (P(s) − P0(s)) / P0(s) = ((g − g0)/g0 − sθ) / (1 + sθ).   (6.179)

We bound the perturbations as

   |δP(jω)| ≤ |W0(jω)| for all ω ∈ R,   (6.180)

with

   W0(s) = (α + sθ0) / (1 + sθ0).   (6.181)
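The violation of the bound reported above is easy to reproduce numerically. The fragment below sweeps the four extreme parameter combinations of (6.177) and tests |S| against 1/|V|; it uses only standard Control System Toolbox calls and is an illustrative sketch, not part of the original text.

  g0 = 1; k = 1; Td = sqrt(2); T0 = 0.1;
  eps_ = 0.05; w0 = 1; zeta = 0.5;
  C    = tf([Td k], [T0 1]);
  Vinv = (1 + eps_) * tf([1 0 0], [1 2*zeta*w0 w0^2]);   % 1/V of (6.178)
  w    = logspace(-1, 2, 400);
  bnd  = squeeze(abs(freqresp(Vinv, w)));
  ok   = true;
  for g = [0.5 1.5]
      for theta = [0 0.05]
          if theta == 0, P = tf(g, [1 0 0]);
          else           P = tf(g, conv([1 0 0], [theta 1])); end
          S  = feedback(1, P*C);                 % perturbed sensitivity
          ok = ok & all(squeeze(abs(freqresp(S, w))) <= bnd);
      end
  end
  disp(ok)   % expected to be false: the bound is violated for small g (Fig. 6.24)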

The number α, with α < 1, is a bound |g − g0|/g0 ≤ α for the relative perturbations in g, and θ0 is the largest possible value for the parasitic time constant θ. Numerically we let α = 0.5 and θ0 = 0.05. Figure 6.25 shows plots of the perturbation |δP| for various combinations of values of the uncertain parameters together with a magnitude plot of the bound W0.

Figure 6.25. Magnitude plots of the perturbations δP for different parameter values and of the bound W0

Figure 6.26 shows the corresponding scaled perturbation model. It also includes the scaling function V used to normalize performance.

Figure 6.26. Perturbation and performance model
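The comparison of Fig. 6.25 may be reproduced as follows. The sketch plots |δP| of (6.179) for several parameter combinations against |W0| of (6.181); it is illustrative only and uses plain Control System Toolbox calls.

  g0 = 1; alpha = 0.5; theta0 = 0.05;
  W0 = tf([theta0 alpha], [theta0 1]);            % (alpha + s*theta0)/(1 + s*theta0)
  w  = logspace(-1, 3, 400);
  loglog(w, squeeze(abs(freqresp(W0, w))), 'k'); hold on
  for g = [0.5 1.5]
      for theta = [0 0.05]
          if theta == 0, deltaP = tf((g - g0)/g0, 1);
          else           deltaP = tf([-theta (g - g0)/g0], [theta 1]); end
          loglog(w, squeeze(abs(freqresp(deltaP, w))));
      end
  end
  xlabel('angular frequency [rad/s]'); ylabel('|\delta_P| and |W_0|')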

Exercise 6.9.1 (Reduction to mixed sensitivity problem). In § 6.8.2 (p. 289) it is argued that robust performance such that ‖SV‖∞ ≤ 1 is achieved under proportional plant perturbations bounded by (6.180) if and only if

   sup_{ω∈R} ( |S(jω)V(jω)| + |T(jω)W0(jω)| ) < 1.   (6.182)

A sufficient condition for this is that

   sup_{ω∈R} ( |S(jω)V(jω)|² + |T(jω)W0(jω)|² )^{1/2} < ½√2.   (6.183)

To see whether the latter condition may be satisfied we consider the mixed sensitivity problem consisting of the minimization of ‖H‖∞, with

   H = [ SV ; −TW0 ].   (6.184)

(a) Show that the mixed sensitivity problem at hand may be solved as a standard H∞ problem with generalized plant

   G = [ V  P0 ; 0  W0 V⁻¹ P0 ; −V  −P0 ].   (6.185)

(b) Find a (minimal) state representation of G. (Hint: compare (6.190) and (6.191) below.) Check that condition (b) of § 6.5 (p. 274) needed for the state space solution of the H∞ problem is not satisfied. To remedy this, modify the problem by adding a third component z3 = ρu to the error signal z, with ρ small.

(c) Solve the H∞-optimization problem for a small value of ρ, say ρ = 10⁻⁶ or even smaller. Use the numerical values introduced earlier: g0 = 1, α = 0.5, θ0 = 0.05, ω0 = 1, ζ = 0.5 and ε = 0.05. Check that ‖H‖∞ cannot be made less than about 0.8, and, hence, the condition (6.183) cannot be met.

(d) Verify that the solution of the mixed sensitivity problem does not satisfy (6.182). Also check that the solution does not achieve robust performance for the real parameter variations (6.177).

The fact that a robust design cannot be obtained this way does not mean that a robust design does not exist.

The next step in preparing for the D–K iteration is to compute the interconnection matrix G of the configuration of Fig. 6.22 from the interconnection equations

   [ p ; z ; y ] = G [ q ; w ; u ].   (6.186)

Inspection of the block diagram of Fig. 6.26 shows that

   p = W0 u,
   z = V w + P0 (q + u),
   y = −V w − P0 (q + u),   (6.187)

so that

   G = [ 0  0  W0 ; P0  V  P0 ; −P0  −V  −P0 ].   (6.188)

For the output z we have from the block diagram of Fig. 6.26

   z = V w + P0 u0 = (1/(1+ε)) · (s² + 2ζω0 s + ω0²)/s² · w + (g0/s²) u0.   (6.189)
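For Exercise 6.9.1(c) the generalized plant (6.185), augmented with the small regularizing channel ρu, may be assembled and optimized as sketched below. The routine hinfsyn from the Robust Control Toolbox is assumed; the remaining calls are standard Control System Toolbox functions, and the numerical outcome quoted in the comment is the value reported in the exercise text, not a guaranteed result of this sketch.

  g0 = 1; alpha = 0.5; theta0 = 0.05; w0 = 1; zeta = 0.5; eps_ = 0.05; rho = 1e-6;
  s  = tf('s');
  P0 = g0/s^2;
  V  = (s^2 + 2*zeta*w0*s + w0^2) / ((1 + eps_)*s^2);
  W0 = (alpha + s*theta0) / (1 + s*theta0);
  % rows: z1 = V w + P0 u,  z2 = (W0/V) P0 u,  z3 = rho u,  y = -V w - P0 u
  G  = minreal(ss([ V,  P0;
                    0,  W0*P0/V;
                    0,  rho;
                   -V, -P0]));
  [K, CL, gamma] = hinfsyn(G, 1, 1);   % one measurement, one control input
  gamma                                 % the text reports about 0.8 as the limit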

The transfer function representation (6.189) may be realized by the two-dimensional state space system

   ẋ1 = x2 + (2ζω0/(1+ε)) w,
   ẋ2 = (ω0²/(1+ε)) w + g0 u0,
   z  = x1 + (1/(1+ε)) w.   (6.190)

The weighting filter W0 may be realized as

   ẋ3 = −(1/θ0) x3 + ((α−1)/θ0) u,
   p  = x3 + u.   (6.191)

Using the interconnection equation u0 = q + u we obtain from (6.190) and (6.191) the overall state differential and output equations

   ẋ = [ 0 1 0 ; 0 0 0 ; 0 0 −1/θ0 ] x + [ 0  2ζω0/(1+ε)  0 ; g0  ω0²/(1+ε)  g0 ; 0  0  (α−1)/θ0 ] [ q ; w ; u ],

   [ p ; z ; y ] = [ 0 0 1 ; 1 0 0 ; −1 0 0 ] x + [ 0 0 1 ; 0 1/(1+ε) 0 ; 0 −1/(1+ε) 0 ] [ q ; w ; u ].   (6.192)

Exercise 6.9.2 (State space representation). Derive or prove the state space representations (6.190)–(6.192).

6.9.4. D–K iteration

To initialize the D–K iteration we select the compensator that was obtained in Example 5.2.2 (p. 199), with transfer function

   K0(s) = (k + sTd) / (1 + sT0).   (6.193)

This compensator has the state space representation

   ẋ = −(1/T0) x + (1/T0) y,  u = (k − Td/T0) x + (Td/T0) y.   (6.194)

The D–K iteration procedure is initialized by defining a frequency axis, calculating the frequency response of the initial closed-loop system H0 on this axis, and computing and plotting the lower and upper bounds of the structured singular value of H0 on this frequency axis. The calculation of the upper bound also yields the "D-scales." The calculations in this section were done with the MATLAB µ-Tools toolbox; the manual (Balas, Doyle, Glover, Packard, and Smith 1991) offers a step-by-step explanation of the iterative process.
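With the interconnection model (6.192) and the initial compensator (6.193) available, the initialization described above can be carried out as follows. The functions lft, frd and mussv are assumed to be those of the Robust Control Toolbox (the modern counterpart of the µ-Tools routines used in the original computations); the block structure encodes one scalar uncertainty channel and one scalar performance channel.

  g0 = 1; k = 1; Td = sqrt(2); T0 = 0.1;
  alpha = 0.5; theta0 = 0.05; w0 = 1; zeta = 0.5; eps_ = 0.05;
  A = [0 1 0; 0 0 0; 0 0 -1/theta0];
  B = [0 2*zeta*w0/(1+eps_) 0; g0 w0^2/(1+eps_) g0; 0 0 (alpha-1)/theta0];
  C = [0 0 1; 1 0 0; -1 0 0];
  D = [0 0 1; 0 1/(1+eps_) 0; 0 -1/(1+eps_) 0];
  G   = ss(A, B, C, D);                         % inputs (q,w,u), outputs (p,z,y)
  K0  = tf([Td k], [T0 1]);                     % initial compensator (6.193)
  H0  = frd(lft(G, K0), logspace(-2, 3, 300));  % closed loop (q,w) -> (p,z)
  blk = [1 1; 1 1];                             % perturbation and performance blocks
  bounds = mussv(H0, blk);                      % upper/lower mu bounds, cf. Fig. 6.27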

The plot of Fig. 6.27 shows that the structured singular value of the initial design peaks at a value that exceeds 1, so that the closed-loop system does not have robust performance, as we already know. Note that the computed lower and upper bounds coincide to the extent that they are indistinguishable in the plot.

Figure 6.27. The structured singular value of the initial design

The next step is to fit rational functions to the D-scales. Because we have 1 × 1 perturbation blocks only, the second diagonal entry is normalized to be equal to 1 and, hence, does not need to be fitted. A quite good-looking fit may be obtained on the first diagonal entry of D with a transfer function of order 2. Figure 6.28 shows the calculated and fitted scales for this first diagonal entry.

Figure 6.28. Calculated and fitted D-scales

The following step in the µ-synthesis procedure is to perform an H∞ optimization on the scaled system G1 = DGD⁻¹. In the search procedure for the H∞-optimal solution as described in § 6.6 (p. 269) the search is terminated when the optimal value of the search parameter γ has been reached within a prespecified tolerance. This tolerance should be chosen with care. Taking it too small not only prolongs the computation but — more importantly — results in a solution that is close to but not equal to the actual H∞-optimal solution. Solutions that are close to optimal often have large coefficients. These large coefficients make the subsequent calculations, in particular that of the frequency response of the optimal closed-loop system, ill-conditioned, and easily lead to erroneous results. This difficulty is a weak point of algorithms for the solution of the H∞ problem that do not provide the actual optimal solution.

The result is a sub-optimal solution corresponding to the bound γ1. Figure 6.29 shows the structured singular value of the closed-loop system that corresponds to the compensator that has been obtained after the first iteration.
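The fitting step mentioned above can be done with a magnitude-fit routine. The fragment below assumes the frequency grid omega and the scale data dopt computed in the sketch following Summary 6.8.3, and it assumes the Robust Control Toolbox function fitmagfrd; absorbing the fitted scale into the plant also assumes that the fit is biproper.

  Dfit = fitmagfrd(frd(dopt, omega), 2);                 % second-order fit, cf. Fig. 6.28
  bodemag(frd(dopt, omega), 'b', Dfit, 'r--')            % compare data and fit
  Gscaled = append(Dfit, 1, 1) * G * append(inv(Dfit), 1, 1);   % plant for the next K-step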

The peak value of the structured singular value of this new design is less than 1, which means that the design achieves robust performance. The margin may be improved by further D–K iterations.

Figure 6.29. Structured singular values for the three successive designs

To increase the robustness margin another D–K iteration may be performed. Again the D-scale may be fitted with a second-order rational function. The H∞ optimization results in a suboptimal solution with level γ2 = 0.97. For the third design the structured singular value has a peak value of about 0.91. The plot of the structured singular value for the final design is quite flat, at least in the frequency region shown. This appears to be typical for minimum-µ designs (Lin, Postlethwaite, and Gu 1993). Figure 6.29 shows the structured singular value plots of the three designs we now have.

6.9.5. Assessment of the solution

We pause to analyze the solution that has been obtained. The compensator K2 achieves a peak structured singular value of about 0.91, well below the critical value 1. It therefore has robust performance with a good margin. Figure 6.30 shows plots of the sensitivity function of the closed-loop system with the compensator K2 for the four extreme combinations of the values of the uncertain parameters g and θ. The plots confirm that the system has robust performance.

Figure 6.30. Perturbed sensitivity functions for K2 and the bound 1/V

Figure 6.31 gives plots of the nominal system functions S and U. The plots show that robust performance is obtained at the cost of a large controller gain-bandwidth product. The input sensitivity function U increases to quite a large value and has no high-frequency roll-off, at least not in the frequency region shown. The reason for this is that the design procedure has no explicit or implicit provision that restrains the bandwidth.

Figure 6.31. Nominal sensitivity S and input sensitivity U for the compensator K2

6.9.6. Bandwidth limitation

We revise the problem formulation to limit the bandwidth explicitly. One way of doing this is to impose bounds on the high-frequency behavior of the input sensitivity function U. This, in turn, may be done by bounding the high-frequency behavior of the weighted input sensitivity UV. Figure 6.32 shows a magnitude plot of UV for the compensator K2. We wish to modify the design so that UV is bounded as

   |U(jω)V(jω)| ≤ 1/|W2(jω)|,  ω ∈ R,   (6.195)

where

   1/W2(s) = β/(s + ω1).   (6.196)

Numerically, we tentatively choose β = 100 and ω1 = 1. Figure 6.32 includes the corresponding bound 1/|W2|.

Figure 6.32. Magnitude plots of UV and the bound 1/W2

To handle the performance specifications ‖SV‖∞ ≤ 1 and ‖W2UV‖∞ ≤ 1 jointly we impose the performance specification

   ‖ [ W1 S V ; W2 U V ] ‖∞ ≤ 1,   (6.197)

where W1 = 1. This is the familiar performance specification of the mixed sensitivity problem. To include this extra performance specification we modify the block diagram of Fig. 6.26 to that of Fig. 6.33. The diagram includes an additional output z2 while z has been renamed to z1. The closed-loop transfer function from w to z2 is −W2UV.
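The bandwidth check (6.195)–(6.196) for a given design can be visualized as follows. Here P0 and V are the models constructed in the earlier sketches, and K2 is a placeholder for the compensator of the previous section; the calls are standard Control System Toolbox functions.

  beta = 100; w1 = 1;
  w  = logspace(-2, 3, 400);
  U  = feedback(K2, P0);                    % input sensitivity K2/(1 + P0*K2)
  UV = U * V;
  loglog(w, squeeze(abs(freqresp(UV, w))), ...
         w, beta ./ abs(1i*w + w1));
  legend('|UV|', '1/|W_2|')                 % compare with Figure 6.32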

Figure 6.33. Modified block diagram for the extra performance specification

By inspection of Fig. 6.33 we see that the structured perturbation model used to design for robust performance now is

   [ q ; w ] = [ δP  0  0 ; 0  δ1  δ2 ] [ p ; z1 ; z2 ],   (6.198)

with δP, δ1 and δ2 independent scalar complex perturbations, each with ∞-norm at most 1.

Because W2 is a nonproper rational function we modify the diagram to the equivalent diagram of Fig. 6.34 (compare § 6.7, p. 279). The compensator K and the filter W2 are combined to a new compensator K̃ = K W2, and any nonproper transfer functions now have been eliminated.

Figure 6.34. Revised modified block diagram

It is easy to check that the open-loop interconnection system according to Fig. 6.34 may be represented by a four-dimensional state space model, obtained by augmenting (6.192) with the state of W2⁻¹ and replacing the control input u by ũ = W2 u (6.199). To let the structured perturbation have square diagonal blocks we rename the external input w to w1 and expand the interconnection system with a void additional external input w2 (6.200). The perturbation model now is

   [ q ; w ] = [ δP  0 ; 0  Δ0 ] [ p ; z ],   (6.201)

with δP a scalar block and Δ0 a full 2 × 2 block, where w = [w1; w2] and z = [z1; z2].

To obtain an initial design we do an H∞-optimal design on the system with q and p removed, that is, on the nominal, unperturbed system. This amounts to the solution of a mixed sensitivity problem. Nine iterations are needed to find a suboptimal solution; the corresponding level γ0 is less than 1, which means that nominal performance is achieved. The peak value of the structured singular value of this initial design, however, turns out to be well above 1, so that we do not have robust performance. One D–K iteration reduces the peak value, but further reduction does not seem possible. This means that the bandwidth constraint is too severe.

To relax the bandwidth constraint we change the value of β in the bound 1/W2 as given by (6.196) from 100 to 1000. Starting again with an H∞-optimal initial design, one D–K iteration results in a compensator whose peak structured singular value is still greater than 1; robust performance is again not feasible. Finally, after choosing β = 10000 the same procedure results after one D–K iteration in a design with a peak structured singular value of 1.02. Figure 6.35 shows the structured singular value plots for the three compensators that are successively obtained for β = 100, β = 1000 and β = 10000.
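The study of the three β values can be organized as a small loop over the nominal design step and the µ analysis; the D-step would then follow as in the earlier sketch. Here Gof(beta) is a hypothetical helper (not defined in the text) that assembles the open-loop interconnection of Fig. 6.34 for the given β; hinfsyn, lft, frd and mussv are assumed as before.

  blk   = [1 1; 2 2];              % scalar delta_P and a full 2x2 performance block
  omega = logspace(-2, 3, 300);
  for beta = [100 1000 10000]
      G2       = Gof(beta);                      % hypothetical plant builder
      [K, CL]  = hinfsyn(G2, 1, 1);              % nominal (mixed sensitivity) design
      bounds   = mussv(frd(lft(G2, K), omega), blk);
      ub       = bounds(1,1);                    % upper-bound channel
      fprintf('beta = %6g : peak mu <= %5.2f\n', beta, max(abs(ub.ResponseData(:))));
  end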

Figure 6.35. Structured singular value plots for three successive designs

6.9.7. Order reduction of the final design

The limited-bandwidth robust compensator has order 9. This number may be explained as follows. The generalized plant (6.200) has order 4. In the (single) D–K iteration that yields the compensator the "D-scale" is fitted by a rational function of order 2. Premultiplying the plant by D and postmultiplying by its inverse increases the plant order to 4 + 2 + 2 = 8. Central suboptimal compensators K̃ for this plant hence also have order 8. This means that the corresponding compensators K = K̃ W2⁻¹ for the configuration of Fig. 6.33 have order 9. It is typical for the D–K algorithm that compensators of high order are obtained. This is caused by the process of fitting rational scales.

Figure 6.36(a) shows the Bode magnitude and phase plots of the limited-bandwidth robust compensator. The compensator is minimum-phase and has pole excess 2. Its Bode plot has break points near the frequencies 1, 60 and 1000 rad/s. Often the order of the compensator may be decreased considerably without affecting performance or robustness. We construct a simplified compensator in two steps:

Figure 6.36. Exact (a) and approximate (b, c) frequency responses of the compensator

1. The break point at 1000 rad/s is caused by a large pole. This large pole corresponds to a pole at ∞ of the H∞-optimal compensator. We remove it by omitting the leading term s⁹ from the denominator of the compensator. Figure 6.36(b) is the Bode plot of the resulting simplified compensator. It shows that the simplified compensator has pole excess 1, with break points at 1 rad/s (corresponding to a single zero) and at 60 rad/s (corresponding to a double pole).

2. We therefore attempt to approximate the compensator transfer function by a rational function with a single zero and two poles. A suitable numerical routine from MATLAB yields the approximation

   K(s) = 6420 (s + 0.6234) / ((s + 22.43)² + 45.312²).   (6.202)

Figure 6.36(c) shows that the approximation is quite good.

Figure 6.37 gives the magnitudes of the perturbed sensitivity functions corresponding to the four extreme combinations of parameter values that are obtained for this reduced-order compensator. The plots confirm that performance is very nearly robust.

Figure 6.37. Perturbed sensitivity functions of the reduced-order limited-bandwidth design

Figure 6.38 displays the nominal performance. Comparison of the plot of the input sensitivity U with that of Fig. 6.31 confirms that the bandwidth has been reduced.

Figure 6.38. Nominal sensitivity S and input sensitivity U of the reduced-order limited-bandwidth design

Exercise 6.9.3 (Peak in the structured singular value plot). Inspection of Fig. 6.35 shows that the structured singular value plot for the limited-bandwidth design has a small peak of magnitude slightly greater than 1 near the frequency 1 rad/s. Inspection of Fig. 6.37, though, reveals no violation of the performance specification on S near this frequency. What is the explanation for this peak?
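A quick numerical alternative to the manual two-step reduction described above is balanced truncation. The fragment below, which is illustrative only, reduces the full-order compensator (here denoted K9 as a placeholder) to order 2 with balred from the Control System Toolbox and compares the frequency responses, much as in Fig. 6.36.

  Kred = balred(K9, 2);           % reduced-order approximation of the 9th-order compensator
  bode(K9, 'b', Kred, 'r--')
  legend('order 9', 'order 2')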

Exercise 6.9.4 (Comparison with other designs). The compensator (6.202) is not very different from the compensators found by the root locus method in Example 5.2.2 (p. 199), by H2 optimization in Chapter 3, and by solution of a mixed sensitivity problem earlier in this chapter.

1. Compare the four designs by computing their closed-loop poles and plotting the sensitivity and complementary sensitivity functions in one graph.

2. Compute the stability regions for the four designs and plot them in one graph.

3. Discuss the comparative robustness and performance of the four designs.

6.9.8. Conclusion

The example illustrates that µ-synthesis makes it possible to design for robust performance while explicitly and quantitatively accounting for plant uncertainty. The method cannot be used naïvely, though, and considerable insight is needed. A severe shortcoming is that the design method inherently leads to compensators of high order, often unnecessarily high. At this time only ad hoc methods are available to decrease the order. The usual approach is to apply an order reduction algorithm to the compensator.

6.10. Appendix: Proofs

6.10.1. Introduction

In this Appendix to Chapter 6 we present several proofs for § 6.4 (p. 264) and § 6.5 (p. 272).

6.10.2. Proofs for § 6.4

Inequality for the ∞-norm. We first prove the result of Summary 6.4.1(a). By the definition of the ∞-norm the inequality ‖H‖∞ ≤ γ is equivalent to the condition that σ̄(H(jω)) ≤ γ for ω ∈ R, with σ̄ the largest singular value. This in turn is the same as the condition that λi(H∼(jω)H(jω)) ≤ γ² for ω ∈ R and for all i, with λi the ith largest eigenvalue. This finally amounts to the condition that H∼H ≤ γ²I on the imaginary axis. The proof of (b) is similar.

Preliminary to the proof of the basic inequality for H∞-optimal control of Summary 6.4.2 (p. 250) we establish the following lemma. The lemma presents, in polished form, a result originally obtained by Meinsma (Kwakernaak 1991).

Summary 6.10.1 (Meinsma's lemma). Let Π be a nonsingular para-Hermitian rational matrix function that admits a factorization

   Π = Z∼ J Z,  J = [ In  0 ; 0  −Im ],   (6.203)

with Z square, rational and nonsingular. Suppose that V is an (n+m) × n rational matrix of full normal column rank and W an m × (n+m) rational matrix of full normal row rank such that W V = 0. Then

(a) V∼ Π V ≥ 0 on the imaginary axis if and only if W Π⁻¹ W∼ ≤ 0 on the imaginary axis;

(b) V∼ Π V > 0 on the imaginary axis if and only if W Π⁻¹ W∼ < 0 on the imaginary axis.

(A rational matrix has full normal column rank if it has full column rank everywhere in the complex plane except at a finite number of points. A similar definition holds for full normal row rank.)

Proof of Meinsma's lemma. (a) Since V has full normal column rank and Z is nonsingular we may write

   V = Z⁻¹ [ A ; B ],   (6.204)

with A an n × n nonsingular rational matrix and B an m × n rational matrix. Then V∼ΠV = A∼A − B∼B, so that V∼ΠV ≥ 0 on the imaginary axis is equivalent to A∼A ≥ B∼B on the imaginary axis. Define an m × m nonsingular rational matrix Ā and an m × n rational matrix B̄ such that

   [ −B̄  Ā ] [ A ; B ] = 0,  that is,  B̄ = Ā B A⁻¹.   (6.205)

Since W V = 0 and W has full normal row rank it follows that

   W Z⁻¹ = U [ −B̄  Ā ],   (6.206)

with U some square nonsingular rational matrix. Hence

   W Π⁻¹ W∼ = W Z⁻¹ J Z⁻∼ W∼ = U ( B̄B̄∼ − ĀĀ∼ ) U∼.   (6.207)

From A∼A ≥ B∼B on the imaginary axis it follows that ‖BA⁻¹‖∞ ≤ 1. Because Ā⁻¹B̄ = BA⁻¹ this implies B̄B̄∼ ≤ ĀĀ∼ on the imaginary axis. This proves that W Π⁻¹ W∼ ≤ 0 on the imaginary axis. The proof of (b) is similar.

We next consider the proof of the basic inequality of Summary 6.4.2 (p. 264).

Proof of the basic inequality. By Summary 6.4.1(b) (p. 264) the inequality ‖H‖∞ ≤ γ is equivalent to H H∼ ≤ γ²I on the imaginary axis. Substitution of

   H = G11 + G12 K (I − G22 K)⁻¹ G21 = G11 + L G21,  L := G12 K (I − G22 K)⁻¹,

leads to

   G11G11∼ + G11G21∼L∼ + L G21G11∼ + L G21G21∼ L∼ ≤ γ²I   (6.211)

on the imaginary axis, which, in turn, is equivalent to

   [ I  L ] [ G11G11∼ − γ²I  G11G21∼ ; G21G11∼  G21G21∼ ] [ I ; L∼ ] ≤ 0   (6.213)

on the imaginary axis.

Write M for the para-Hermitian matrix in (6.213), with blocks M11 = G11G11∼ − γ²I, M12 = G11G21∼, M21 = G21G11∼ and M22 = G21G21∼. For the suboptimal solution to exist we need that

   M12 M22⁻¹ M21 − M11 = G11G21∼ (G21G21∼)⁻¹ G21G11∼ − G11G11∼ + γ²I ≥ 0   (6.214)

on the imaginary axis. This is certainly the case for γ sufficiently large. Also, obviously M22 = G21G21∼ ≥ 0 on the imaginary axis. Then by a well-known theorem (see for instance Wilson (1978)) there exist square rational matrices V1 and V2 such that

   M12 M22⁻¹ M21 − M11 = V1∼ V1,  M22 = V2∼ V2.   (6.215)

As a result we may write

   M = Z∼ [ −I  0 ; 0  I ] Z,  Z = [ V1  0 ; 0  V2 ] [ I  0 ; M22⁻¹M21  I ],   (6.216)

so that M — and therefore also M⁻¹ — admits a factorization of the form required in Meinsma's lemma. Application of Summary 6.10.1(a) to (6.213), with W = [I  L] and V = [−L ; I] (note that W V = 0), yields

   [ −L∼  I ] M⁻¹ [ −L ; I ] ≥ 0   (6.219)

on the imaginary axis. Since

   [ −L ; I ] = [ 0  −G12 ; I  −G22 ] [ I ; K ] (I − G22K)⁻¹,

and (I − G22K)⁻¹ is nonsingular, (6.219) is equivalent to

   [ I  K∼ ] [ 0  I ; −G12∼  −G22∼ ] M⁻¹ [ 0  −G12 ; I  −G22 ] [ I ; K ] ≥ 0   (6.220)

on the imaginary axis. This is what we set out to prove.

6.10.3. Proofs for § 6.5

Our derivation of the factorization of Πγ of § 6.5.3 (p. 273) for the state space version of the standard problem is based on the Kalman-Jakubovič-Popov equality of Summary 3.7.1 (p. 143). This equality establishes the connection between factorizations and algebraic Riccati equations.

Proof (Factorization of Πγ). We are looking for a factorization of

   Πγ = [ 0  I ; −G12∼  −G22∼ ] [ G11G11∼ − γ²I  G11G21∼ ; G21G11∼  G21G21∼ ]⁻¹ [ 0  −G12 ; I  −G22 ].

First factorization. As a first step, we determine a factorization

   [ G11G11∼ − γ²I  G11G21∼ ; G21G11∼  G21G21∼ ] = Z1 J1 Z1∼   (6.221)

such that Z1 and Z1⁻¹ have all their poles in the closed left-half complex plane. Such a factorization is sometimes called a spectral cofactorization. A spectral cofactor of a rational matrix may be obtained by first finding a spectral factorization of the transposed rational matrix and then transposing the resulting spectral factor.

For the problem at hand it is easily found that the rational matrix to be cofactorized may be written, after transposition, as R1 + H1∼(s) W1 H1(s), with

   H1(s) = Fᵀ (sI − Aᵀ)⁻¹ [ Hᵀ  0  Cᵀ ],  R1 = diag(−γ²I, −γ²I, I),  W1 = I.   (6.223)

Appending the distinguishing subscript 1 to all matrices, we apply the Kalman-Jakubovič-Popov equality of Summary 3.7.1 with A1 = Aᵀ, B1 = [Hᵀ 0 Cᵀ], C1 = Fᵀ, D1 = 0, and R1 and W1 as displayed above. This leads to the algebraic Riccati equation

   0 = A X1 + X1 Aᵀ + F Fᵀ − X1 ( CᵀC − (1/γ²) HᵀH ) X1.   (6.225)

We need a symmetric solution X1 of (6.225) such that

   Ā1 = A + X1 ( (1/γ²) HᵀH − CᵀC )   (6.226)

is a stability matrix. The corresponding gain F1 and the factor V1 are

   F1 = [ −(1/γ²)H ; 0 ; C ] X1,  V1(s) = I + [ −(1/γ²)H ; 0 ; C ] X1 (sI − Aᵀ)⁻¹ [ Hᵀ  0  Cᵀ ],   (6.227)

with J1 = R1. By transposing the factor V1 we obtain the cofactor Z1 of (6.221).

Second factorization. The next step in completing the factorization of Πγ is to show, by invoking some algebra, that Z1⁻¹ [ 0  −G12 ; 0  −I ; I  −G22 ] can be expressed explicitly in terms of (sI − Ā1)⁻¹; its entries involve −H(sI − Ā1)⁻¹X1Cᵀ, −H(sI − Ā1)⁻¹B, I − C(sI − Ā1)⁻¹X1Cᵀ and −C(sI − Ā1)⁻¹B (6.230). Substituting this expression into the definition of Πγ reduces the remaining task to a second spectral factorization, which is again handled with the Kalman-Jakubovič-Popov equality.

To obtain the desired factorization of Πγ we again resort to the KJP equality, now with (appending the distinguishing subscript 2)

   A2 = Ā1,  B2 = [ X1Cᵀ  B ],  C2 = [ −(1/γ²)H ; −C ],  D2 = [ 0  0 ; 0  −I ],  W2 = J1,  R2 = 0.   (6.231)

This leads to an algebraic Riccati equation in X̄2,

   0 = Ā1ᵀ X̄2 + X̄2 Ā1 − (1/γ²) HᵀH + CᵀC − (X̄2X1 − I) CᵀC (X1X̄2 − I) + γ² X̄2 B Bᵀ X̄2.   (6.233)

Substitution of Ā1 from (6.226) and of X1((1/γ²)HᵀH − CᵀC)X1 from (6.225), followed by rearrangement, shows that

   X̄2 = X2 ( X1 X2 − γ²I )⁻¹,   (6.235)

where X2 is a symmetric solution of the algebraic Riccati equation

   0 = Aᵀ X2 + X2 A + HᵀH − X2 ( B Bᵀ − (1/γ²) F Fᵀ ) X2.   (6.236)

The gain F̄2 and the factor V2 for the spectral factorization of Πγ are

   F̄2 = [ (1/γ²) C (X1X̄2 − I) ; −γ² Bᵀ X̄2 ],  V2(s) = I + F̄2 (sI − A2)⁻¹ B2.   (6.237)

We need to select the solution X̄2 of (6.233) such that A2 − B2F̄2 is a stability matrix. It is proved in the next subitem that A2 − B2F̄2 is a stability matrix if and only if

   Ā2 = A + ( (1/γ²) F Fᵀ − B Bᵀ ) X2   (6.238)

is a stability matrix, Ā2 being the closed-loop matrix corresponding to the algebraic Riccati equation (6.236). After solving (6.236) for X2 we obtain X̄2 from (6.235). The desired spectral factor of Πγ then is

   Zγ(s) = diag( I, (1/γ)I ) V2(s).   (6.241)

Invoking an amount of algebra it may be found that the inverse Mγ of the spectral factor Zγ may be expressed as

   Mγ(s) = [ I  0 ; 0  γI ] + [ C ; −BᵀX2 ] (sI − Ā2)⁻¹ ( I − (1/γ²) X1X2 )⁻¹ [ X1Cᵀ  γB ],   (6.242)

provided the indicated inverse exists.

Choice of the solution of the algebraic Riccati equation. We require the solution of (6.233) such that A2 − B2F̄2 is a stability matrix. Consider

   A2 − B2 F̄2 = Ā1 − X1CᵀC (X1X̄2 − I) + γ² B Bᵀ X̄2.   (6.243)

Substitution of Ā1 = A + X1((1/γ²)HᵀH − CᵀC) and of X̄2 = X2(X1X2 − γ²I)⁻¹, followed by substitution of X1((1/γ²)HᵀH − CᵀC)X1 from (6.225) and of HᵀH + AᵀX2 from (6.236) and simplification, yields

   A2 − B2 F̄2 = ( X1X2 − γ²I ) ( A − B BᵀX2 + (1/γ²) F FᵀX2 ) ( X1X2 − γ²I )⁻¹.   (6.246)

Inspection shows that A2 − B2F̄2 has the same eigenvalues as Ā2 = A − B BᵀX2 + (1/γ²) F FᵀX2. Hence, A2 − B2F̄2 is a stability matrix if and only if Ā2 is a stability matrix.


A. Matrices

This appendix lists several matrix definitions, formulae and results that are used in the lecture notes. In what follows capital letters denote matrices, lower case letters denote column or row vectors or scalars. The element in the ith row and jth column of a matrix A is denoted by A_ij. Whenever sums A + B and products AB etcetera are used then it is assumed that the dimensions of the matrices are compatible.

A.1. Basic matrix results

Eigenvalues and eigenvectors

A column vector v ∈ Cⁿ is an eigenvector of a square matrix A ∈ Cⁿˣⁿ if v ≠ 0 and Av = λv for some λ ∈ C. In that case λ is referred to as an eigenvalue of A. Often λi(A) is used to denote the ith eigenvalue of A (which assumes an ordering of the eigenvalues, an ordering that should be clear from the context in which it is used). The eigenvalues are the zeros of the characteristic polynomial

   χ_A(λ) = det(λ In − A),  λ ∈ C,

where In denotes the n × n identity matrix or unit matrix. The eigenvalues of a square matrix A and of T A T⁻¹ are the same for any nonsingular T; in particular χ_A = χ_{TAT⁻¹}.

An eigenvalue decomposition of a square matrix A is a decomposition of the form A = V D V⁻¹, where V and D are square and D is diagonal. In this case the diagonal entries of D are the eigenvalues of A and the columns of V are the corresponding eigenvectors. Not every square matrix A has an eigenvalue decomposition.

Rank, trace, determinant, singular and nonsingular matrices

The rank of a (possibly nonsquare) matrix A is the maximal number of linearly independent rows (or, equivalently, columns) in A. It also equals the rank of the square matrix AᵀA, which in turn equals the number of nonzero eigenvalues of AᵀA.

The trace tr(A) of a square matrix A ∈ Cⁿˣⁿ is defined as tr(A) = Σ_{i=1}^{n} A_ii. It may be shown that tr(A) = Σ_{i=1}^{n} λi(A).

The determinant of a square matrix A ∈ Cⁿˣⁿ is usually defined (but not calculated) recursively by

   det(A) = Σ_{j=1}^{n} (−1)^{j+1} A_{1j} det(A^minor_{1j})  if n > 1,  det(A) = A  if n = 1.

Here A^minor_{ij} is the (n−1) × (n−1) matrix obtained from A by removing its ith row and jth column. For square matrices A and B of the same dimension we have det(AB) = det(A) det(B). The determinant of a matrix equals the product of its eigenvalues, det(A) = Π_{i=1}^{n} λi(A). A square matrix is singular if det(A) = 0 and is regular or nonsingular if det(A) ≠ 0.

Symmetric, Hermitian and positive definite matrices, the transpose and unitary matrices

A matrix A ∈ Rⁿˣⁿ is (real) symmetric if Aᵀ = A. Here Aᵀ is the transpose of A, defined elementwise as (Aᵀ)_ij = A_ji. A matrix A ∈ Cⁿˣⁿ is Hermitian if Aᴴ = A. Here Aᴴ is the complex conjugate transpose of A, defined as (Aᴴ)_ij = conj(A_ji) (i, j = 1, . . . , n). Overbars denote the complex conjugate: the conjugate of x + jy is x − jy.

Every real-symmetric and every Hermitian matrix A has an eigenvalue decomposition A = V D V⁻¹, with the special property that the matrix V may be chosen unitary, that is, the columns of V have unit length and are mutually orthogonal: Vᴴ V = I.

A symmetric or Hermitian matrix A is said to be positive definite if xᴴ A x > 0 for all nonzero column vectors x. We denote this by A > 0. It is said to be nonnegative definite or positive semi-definite if xᴴ A x ≥ 0 for all column vectors x. We denote this by A ≥ 0. For Hermitian matrices A and B the inequality A ≥ B is defined to mean that A − B ≥ 0.

Lemma A.1.1 (Nonnegative definite matrices). Let A ∈ Cⁿˣⁿ be a Hermitian matrix. Then

1. All eigenvalues of A are real valued.
2. A ≥ 0 ⟺ λi(A) ≥ 0 (∀ i = 1, . . . , n).
3. A > 0 ⟺ λi(A) > 0 (∀ i = 1, . . . , n).
4. If T is nonsingular then A ≥ 0 if and only if Tᴴ A T ≥ 0.

A.2. Three matrix lemmas

Lemma A.2.1 (Eigenvalues of matrix products). Suppose A and Bᴴ are matrices of the same dimension n × m. Then for any λ ∈ C there holds

   det(λ In − A B) = λ^{n−m} det(λ Im − B A).   (A.1)

Proof. On the one hand we have

   [ λIm  B ; A  In ] = [ Im  0 ; (1/λ)A  In ] [ λIm  B ; 0  In − (1/λ)AB ],

and on the other hand

   [ λIm  B ; A  In ] = [ Im  B ; 0  In ] [ λIm − BA  0 ; A  In ].

Taking determinants of both of these factorizations shows that λᵐ det(In − (1/λ)AB) = det(λIm − BA), which is (A.1).

So the nonzero eigenvalues of AB and BA are the same. This gives the two very useful identities:

1. det(In − AB) = det(Im − BA);
2. tr(AB) = Σi λi(AB) = Σj λj(BA) = tr(BA).

Lemma A.2.2 (Sherman-Morrison-Woodbury & rank-one update).

1. (A + U Vᴴ)⁻¹ = A⁻¹ − A⁻¹ U ( I + Vᴴ A⁻¹ U )⁻¹ Vᴴ A⁻¹.

2. This formula is used mostly if U = u and V = v are column vectors. Then U Vᴴ = u vᴴ has rank one, and

   (A + u vᴴ)⁻¹ = A⁻¹ − (A⁻¹ u)(vᴴ A⁻¹) / (1 + vᴴ A⁻¹ u),

which shows that a rank-one update of A corresponds to a rank-one update of its inverse.

Lemma A.2.3 (Schur complement). Suppose a Hermitian matrix A is partitioned as

   A = [ P  Q ; Qᴴ  R ],

with P and R square. Then

   A > 0 ⟺ P is invertible, P > 0 and R − Qᴴ P⁻¹ Q > 0.

The matrix R − Qᴴ P⁻¹ Q is referred to as the Schur complement of P (in A).
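Lemmas A.2.1 and A.2.2 are easy to verify numerically; the short fragment below, which is illustrative only and uses plain MATLAB with random test matrices, checks that AB and BA share their nonzero eigenvalues and that the rank-one update formula holds to rounding accuracy.

  n = 5; m = 3;
  A = randn(n, m); B = randn(m, n);
  e1 = sort(eig(A*B)); e2 = sort(eig(B*A));
  disp([e1(n-m+1:end), e2])            % nonzero eigenvalues coincide; AB has n-m extra zeros
  M = randn(n) + n*eye(n);             % well-conditioned test matrix
  u = randn(n, 1); v = randn(n, 1);
  lhs = inv(M + u*v');
  rhs = inv(M) - (inv(M)*u)*(v'*inv(M)) / (1 + v'*inv(M)*u);
  norm(lhs - rhs)                      % of the order of rounding errors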


A. Graphs. Natick. The asymptotes of the root locus. and R. R. O. I. W. H. E. and E. B. Simultaneous Stabilization of Linear Systems. C. Relations between attenuation and phase in feedback amplifier design. (1934). A survey of extreme point results for robustness of control systems. Prentice-Hall. NJ. Automatica 29. Jury (1976). Springer-Verlag. Bode. (1966). Berman. Lin (1988). Dover Publications. Berlin.. and Mathematical Tables. H. Volume 152 of Lecture Notes in Control and Information Sciences. Handbook of Mathematical Functions with Formulas. and H. Anderson.References Abramowitz. (1991). M. Bagchi. Springer-Verlag. G. 387–388. Basile. Linear Controller Design: Limits of Performance. V. and R. Implicit Linear Systems. Bristol. Linear Optimal Control. S. D. A note on the Youla-Bongiorno-Lu condition. 133–134. S. and G. The MathWorks. O. D. Prentice Hall. ISBN 0-387-53537-3. H. and J. Stanton (1963). Mass. J. Barmish. J. Control Signals Systems 1. Inc. Root locations of an entire polytope of polynomials: It suffices to check the edges. Math. B. SIAM Review 5. International Series in Systems and Control Engineering. 13–35. 421–454.. P. Marro (1992). Inc. Hempstead. (1993). C. O. and C. New York. V. and I. Englewood Cliffs. C. Packard. and H. Białas. D. New York. Berlin etc. Englewood Cliffs. On a new measure of interaction for multivariable process control. and J. G. Blondel. B. (1985). Hollot. A. K. A. 1–18. G. Smith (1991). D. 61–71.. Bode. Englewood Cliffs. 315 . NJ. S. H. Van Nostrand. of Sciences 33. NJ. (1945). Anderson. H. NJ. J. Automatica 12. Prentice Hall. Prentice Hall. Black. Doyle. A necessary and sufficient condition for the stability of convex combinations of polynomials or matrices. Balas. (1994). UK. Stegun (1965). G. B. Controlled and Conditioned Invariance in Linear System Theory. 209–218. Stabilized feedback amplifiers. W. (1940). Optimal control of stochastic systems. IEEE Transactions on Automatic Control 11. User’s Guide. Glover. Bulletin Polish Acad. Network Analysis and Feedback Amplifier Design. Volume 191 of Lecture Notes in Control and Information Sciences. Bell System Technical Journal 19. B. Optimal Control: Linear Quadratic Methods. µAnalysis and Synthesis Toolbox. A. 473–480. Barratt (1991). B. Kang (1993). Moore (1971). Aplevich. Bell System Technical Journal 13. Bartlett. Boyd. I. Moore (1990). Englewood Cliffs.. Anderson. Prentice Hall International.

Natick. South Natick. Desoer. Linear Control System Analysis and Design: Conventional and Modern.D) using high gain output feedback. Third Edition. 12–18. Properties and calculation of transmission zeros of linear multivariable systems.). pp. and A. and M. A. Minn. Minneapolis. J.A. Wang (1978). Doyle. (1970).. R. Introduction to Linear Systems Theory. R. C.-G. Analysis of feedback systems with structured uncertainties. B. Desoer. and M. J. 932–943. C. Campo. J. Macmillan. J. User’s Guide.. D’Azzo. Johnson (1996). 548–563. and S. Safonov (1988).B. Robust-Control Toolbox. CRC press. G. Dorf. Robust Control Toolbox. (1984). New York. 643–658. Ohda (1988). (1979). Francis. Automatica 10. An algorithm for the calculation of transmission zeros of a the system (C. IEEE Transactions on Automatic Control 39. and J. pp. Control Toolbox (1990). New York. Wang (1980). Linear time-invariant robust servomechanism problem: a self-contained exposition. Vidyasagar (1975). T. T. Robust servomechanism problem. E. A.. The MathWorks. C. NY. D. Optimal Control Applications and Methods 17. R. The MathWorks. A. and C. In The Control Handbook. W.-T. and M.References Brockett. S. G. New York. A. Decision & Control. Y. pp. The root loci of H∞ -optimal control: A polynomial approach. IEEE Transactions on Circuits and Systems 21 . 242–250. Journal of Mathematical Analysis and Applications 11. R. M. E.. Robustness of multiloop linear feedback systems. USA. C. NY. P. Chiang. Doyle. C.. (1992). H. J. ONR/Honeywell Workshop on Advances in Multivariable Control. In C. D. Wang (1974). Chiang. Academic Press. MA. Academic Press. Natick. Davison. R. McGraw-Hill Book Co. H. Mass. Part D 129. Addison-Wesley Publishing Co. 738–741. J. J. E. The MathWorks. Choi. Lecture Notes. Leondes (Ed. USA. Feedback Control Theory. New York. (1982). IEEE Transactions on Automatic Control 23. Rinehart and Winston. Mesarovic (1965). (1996). 17th IEEE Conf.. Tannenbaum (1992). The reproducibility of multivariable systems. Schulman (1974). Zeros and poles of matrix transfer functions and their dynamical interpretations. User’s Guide. Reading. J. Dahleh. and S. Doyle. In Proc. C. User’s Guide. Doyle. Systems & Control Letters 11. J. Morari (1994). C. J. Chen. A. H. A. C. Desoer. Davison. IEE Proc. A necessary and sufficient condition for robust BIBO stability. Y. and M. Safonov (1992). Mass. Sixth Edition. Mass. 731– 747. 79–105. 81–129. and Y. and M. Davison. Holt. Modern Control Systems. 316 . 271–275. New York. Feedback Systems: Input-Output Properties. Volume 16. Achievable closed-loop properties of systems under decentralized control: conditions involving the steady-state gain. Houpis (1988). Control and Dynamic Systems. and M. 3–8. C. and Y.

Co. (1950). M. and A.. Frequency Domain Properties of Scalar and Multivariable Feedback Systems. 66–69. ¨ F¨ ollinger. M. Glover. Lathrop (1953). Baltimore. K. Automatic Control 30. Emami-Naeini (1986). Systems & Control Letters 11. R.References Doyle. and C. Berlin. MA. S. R. J. 547–551. The Theory of Matrices. F. Control and Optimization 29. Optimale lineare Regelung: Grenzen der erreichbaren Regelg¨ ute in linearen zeitinvarianten Regelkreisen. 4–16. Maryland. Doyle (1989). Uber die Bestimmung der Wurzelortskurve. C. 657–681. Springer-Verlag. J. Chelsea Publishing Company. and L. Freudenberg. K. W. Co. M. N. Institute of Electrical Engineers 67. Schumacher (Eds. Matrix Computations.. Evans. Golub. 831–847. P. McGraw-Hill Book Co. F. The role of transmission zeros in linear multivariable regulators. The synthesis of ‘optimum’ transient response: Criteria and standard forms. and D. Khargonekar. Glover. International Journal of Control 22. and G. (1988). Springer-Verlag. NY. Volume 18 of Fachberichte Messen–Steuern–Regeln. New York. J. Feedback Control of Dynamic Systems. Trans. AIEE Part II (Applications and Industry) 72.. 555–565. B. and M. Looze (1985). Graphical analysis of control systems.. Three Decades of Mathematical System Theory.. S. D. Heidelberg. Amer. Multivariable feedback design: concepts for a classical/modern synthesis. J. C. NY.. IEEE Transactions on Automatic Control 26 . Wonham (1975). P. The Johns Hopkins University Press. Franklin. 167–172. Francis (1989). G. J. Trans. Doyle. A state space approach to H∞ optimal control. D. Gantmacher. Emami-Naeini (1991). etc. O. New York. Van Loan (1983). K. Limebeer. Aut.. Graham. R. and D. Doyle (1988). State-space solutions to standard H2 and H∞ control problems. Matrix polynomials. P. Regelungstechnik 6.. G. (1948). Feedback Control of Dynamic Systems. Second Edition. Addison-Wesley Publ. Volume 104 of Lecture Notes in Control and Information Sciences. P. 283–324. Freudenberg. Safonov (1991). Amer. Kasenally. and B. Nijmeijer and J. M. Reprinted. Volume 135 of Lecture Notes in Control and Information Sciences. 273–288. Institute of Electrical Engineers 69. Academic Press. F. State-space formulae for all stabilizing controllers that satisfy an H∞ -norm bound and relations to risk sensitivity. Control System Dynamics. 317 . and W. A. and A. Looze (1988). Franklin. A characterization of all solutions to the four block general distance problem. and R. and J. Evans. E. Trans. I. MA. etc. and J. P. J. W. J. (1964). G. Glover. etc. In H. (1954). Powell. Lancaster. Rodman (1982). (1958). C. Stein (1981). Reading. Control 34 . G. Doyle. S. Control system synthesis by root locus method. Reading. SIAM J. C. Glover. Powell. Addison-Wesley Publ. D. C. NY. K. Gohberg. IEEE Trans. 442– 446. A. IEEE Trans. J. D. Springer-Verlag. Berlin.). Evans. R. C. Right half plane poles and zeros and design tradeoffs in feedback systems. New York. Engell. Francis. W.

and C. J. P. Systems & Control Letters 7. 989–996. Computers and Chemical Engineering 11. and R. Mehrgr¨ ossenregelungen. Korn. and R. Horowitz. T. Berlin. Horowitz. (1963). Synthesis of of Feedback Systems. Verlag Technik. Chicago. Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Gera (1979). Illinois. New York. November). A computer aided methodology for the design of decentralized controllers. Control 126. E. Moderne Entwurfsprinzipien im Zeit.und Frequenzbereich. U. 2134–2139. I. Sidi (1972). Differential’nye Uraveniya 14. Necessary and sufficient conditions for zero assignment by constant squaring down. A proof of the stability of multivariable feedback systems. Simple frequency-dependent tools for control system analysis. Academic Press. I. Proceedings of the IEEE 56. (1961). P. M. V. and C. Kharitonov. Hovd. 1–10. S. Aut. Morari. Fiziko-matematicheskaia 1. Skogestad (1992). Synthesis of feedback systems with large plant ignorance for prescribed time domain tolerances. 287–309. Quantitative feedback theory. Limebeer (1995). Grimble. H. 1992 American Control Conference. 215–226. Pt. (1974).. In Preprints. N. M. 644–653. Karcanias. Basic Engineering. A simple test example for H∞ optimal control. and M. Blending of uncertain nonminimum-phase plants for elimination or reduction of nonminimum-phase property. Nichols. and S. M. Grosdidier. N. New York. 318 . Fundamentals 24. New York. Pairing criteria for decentralized control of unstable plants. M. Int. T. 53–57. and H.References Green. Chen (1968). and B. D. 423–433. Applied Mathematatics 9. 2061–2062. Kalman. Englewood Cliffs. M. L. Hsu. Holt (1985). (1978b). I. Linear Systems. (1980). Kailath. Englewood Cliffs. and D. Vol. Journal Soc. A. (1978a). 1. Akademii Nauk Kahskoi SSR. ASME Series D 83. Industr. Krall. L. H. Applied and Computational Complex Analysis. P.. J. and A. Kharitonov. Philips (1947). V. Linear Algebra and its Applications 122/123/124. J. Trans. IEE Proc. On a generalization of a stability criterion. J. (1982. 95–108. B. C. Wilfert (1982). Hinrichsen. S. 221–235. Control 39(1). Bernhardsson (1992). Horowitz. R. Stability radii of linear systems. and A. and B. International Journal of Systems Science 10. 415–446. M. Bucy (1961). James. Industrial and Engineering Chemistry Research 33. Hagander. M. Industrial and Engineering Chemistry. Closed-loop properties from steady-state gain information. (1994). N. 122–127. Theory of Servomechanisms. structure selection and design. Automatica 28. P. Pritchard (1986). Linear Robust Control. Morari (1987). N. Skogestad (1994). R. Grosdidier. J. I. and M. Wiley-Interscience. Hovd. and S. New results in linear filtering and prediction theory. Two and a half degrees of freedom LQG controller and application to wind turbines. Horowitz. J. H. Giannakopoulos (1989). An extension and proof of the root-locus method. M. 1483–1485. McGrawHill. D 129. Henrici. 1007–1024. IEEE Trans. Prentice Hall. Prentice Hall.

and M. Industr. Oustaloup (Ed. 378–382. 17–51. µ– K Iteration: A new algorithm for µsynthesis. Rolland. and R. and A. K. (1983). I.). D. H. International Journal of Control 54. (1996). M. H. 481–496. I. Rational matrix GCD’s and the design of squaringdown compensators. (1963). In Preprints. J. Applied Mathematics 11. A. Int. Lin. Grimble and V. Mosca and L. Control 37. Westdijk (1985). Multivariable decentralized integral controllability. pp. Springer-Verlag. Kwakernaak. Moore (1978). 994–1004. Automatica 14. (1976). 700–704. and J. Regulability of a multiple inverted pendulum system. Kwakernaak. Wiley. A closed expression for the root locus method. Symmetries in control system design. In A. Isidori (Ed. Polynomial Methods for Control Systems Design. (1993). Hermes. Auto. M.. H. 22nd IEEE Conference of Decision and Control. New York. D. NJ. H. H. 64–72. 255–273. A polynomial approach to minimax frequency domain optimization of multivariable systems. The polynomial approach to H∞ -optimal regulation. Laub. In A. Krall. 557–566. Kwakernaak. Auto. USA.). Aut. Kwakernaak. SIAM Review 12.). Heidelberg. A.References Krall. D. Safonov (1992). R´ egulation num´ erique robuste: Le placement de pˆ oles avec calibrage de la fonction de sensibilit´ e. M. Control – Theory and Advanced Technology 1. D. Fornaro (1967). C. G. F. An algorithm for generating root locus diagrams. 141–221. Kwakernaak. Volume 1496 of Lecture Notes in Mathematics. Minimax frequency domain performance and robustness optimization of linear feedback systems. Linear Optimal Control Systems. Kwakernaak. Le. In E. Summer School on Automatic Control of Grenoble. 186–188. Sivan (1972). 384–392. Le.-L. Automatica 29. SpringerVerlag. IEEE Transactions on Automatic Control 37. 219–224. E. Nwokah. IEEE Trans. Kwakernaak. Pandolfi (Eds. 319 . Kwakernaak. and B. Sivan (1991). X. Landau. J. 1–9. San Antonio. (1969). H∞ -Control Theory. Robust control and H ∞ -optimization. Postlethwaite. Frazho (1991). John Wiley. (1991). 702. H. Prentice Hall. Control 30. Englewood Cliffs. and D. Modern Signals and Systems. Kuˇ cera (Eds. and A. (1970). (1992). H. (1986). and R. Heidelberg. Kwakernaak. Journal Soc. Cyrot. Comments on Grimble’s comments on Stein’s comments on rolloff of H ∞ optimal controllers. H. V. G. Kwakernaak. Optimization by Vector Space Methods. Automatica 29. Calculation of transmission zeros using QZ techniques. H. Asymptotic root loci of optimal linear regulators. pp. Kwakernaak. 117–156. H. New York. etc. Control 44. H. Gu (1993). Voda (1993). A.). The root-locus method: A survey. etc. Robustness optimization of linear feedback systems. (1995). J. In M.a state-space theory. Communications of the ACM 10.-W. and R. H. C. Luenberger. IEEE Trans. IEEE Trans. Trends in Control — A European Perspective.. O. Krause.. (1985). Frequency domain solution of the standard H∞ problem. Krall. M. To be published. A. Springer. J. J. Control 21(3). I. Texas.

London. UK.Y. M. and J. etc. J. UK. International Journal of Control 24. Principles and Applications. New York. The Theory of Matrices. Middleton.References Lunze. 574–577. 227–249. Control 34. Ogata. Englewood Cliffs. Bell System Technical Journal 11. (1991). 485–494. Robust Process Control. Nett. Robust Multivariable Feedback Control. Prentice Hall International Ltd. Interaction Analysis. NJ. Regeneration theory. A Complex Variable Approach to the Analysis of Linear Multivariable Feedback Systems. (1985). I. M. 405–407. Oxford. geometric and complex-variable theory. Minnichelli. NC. M. Tallinn. A. Springer-Verlag. Packard. (1988). C. and N. and A. H. IEEE Trans. In Proceedings. Edmunds.. and D. Berlin. C. J. Morari. Automatica 29. M. Volume 12 of Lecture Notes in Control and Information Sciences. O. N. R. C. MacFarlane. M. J. J. conjectures. (1932). J.. Le (1993). Amer. Manousiouthakis (1987). The complex structured singular value. IEEE Transactions on Automatic Control 26. McAvoy. London. Volume 146. Maciejowski. Desoer (1989). IEEE Transactions on Automatic Control 32 . Systems Analysis Modelling Simulation 2. Multivariable Feedback Design. McFarlane. G. Berlin. A. (1970). I. Noble. C. International Series in Systems and Control Engineering. (1989). UK. Determination of robust multivariable I-controllers by means of experiments and simulation. Robust Controller Design Using Normalized Coprime Factor Plant Descriptions. and K. (1983). G. R.. C. Instrument Soc. 320 . UK. and E. and V. A note on decentralized integral controllability.. Lunze. 995–1001. Addison-Wesley Publishing Company. J. An elementary proof of Kharitonov’s stability theorem with extensions. J. Prentice Hall. E. NJ. G. Tsai. Prentice Hall International Ltd. Prentice Hall. Robust stability of systems with integral control. Postlethwaite. K. Weighting function selection in H ∞ design. and A. A.. 126–147. MacFarlane (1981). IEEE Transactions on Automatic Control 30. Englewood Cliffs. Research Triangle Park. T. Pergamon Press. MacFarlane (1979). (1989). J. and clarifications. N. etc. J. K. I. Springer-Verlag. B. A. and D. Postlethwaite. (1969). MacDuffee. Frazho. (1956). Gu (1990). Modern Control Engineering. Nwokah. Nyquist. D. 71–109. Postlethwaite. C. 32–46. Zafiriou (1989). Trade-offs in linear control system design. Doyle (1993). USA. Automatica 27. J. and C. Aut. 281–292. UK. Applied Linear Algebra. Anagnost. 33–74. 11th IFAC World Congress.. Prentice Hall International. Wokingham. Karcanias (1976). H. Principal gains and principal phases in the analysis of linear multivariable feedback systems. I.-W. Chelsea Publishing Company. D. Robust Multivariable Feedback Control. Glover (1990). Hempstead. Lunze. Euclidean condition and block relative gain: connections. J. International Journal of Control 57. Morari. Poles and zeros of linear multivariable systems: a survey of the algebraic. (1985). J.
