
# Solutions Manual: *Essentials of Robust Control*

Kemin Zhou

January 9, 1998

## Preface

This solutions manual contains two parts. The first part contains some intuitive derivations of $H_2$ and $H_\infty$ control. The derivations given here are not strictly rigorous, but I feel that they are helpful (at least to me) in understanding $H_2$ and $H_\infty$ control theory. The second part contains solutions to problems in the book. Most problems are solved in detail, but not all of them. It should also be noted that many problems do not have unique solutions, so the manual should only be used as a reference. It is also possible that there are errors in the manual. I would very much appreciate your comments and suggestions.

Kemin Zhou



## Contents

**Part I: Understanding $H_2$/LQG/$H_\infty$ Control**

1. Understanding $H_2$/LQG Control
   - 1.1 $H_2$ and LQG Problems
   - 1.2 A New $H_2$ Formulation
   - 1.3 Traditional Stochastic LQG Formulation
2. Understanding $H_\infty$ Control
   - 2.1 Problem Formulation and Solutions
   - 2.2 An Intuitive Derivation

**Part II: Solutions Manual**

1. Introduction
2. Linear Algebra
3. Linear Dynamical Systems
4. $H_2$ and $H_\infty$ Spaces
5. Internal Stability
6. Performance Specifications and Limitations
7. Balanced Model Reduction
8. Model Uncertainty and Robustness
9. Linear Fractional Transformation
10. Structured Singular Value
11. Controller Parameterization
12. Riccati Equations
13. $H_2$ Optimal Control
14. $H_\infty$ Control
15. Controller Reduction
16. $H_\infty$ Loop Shaping
17. Gap Metric and $\nu$-Gap Metric

# Part I: Understanding $H_2$/LQG/$H_\infty$ Control


# Chapter 1: Understanding $H_2$/LQG Control

We present a natural and intuitive approach to $H_2$/LQG control theory from the state feedback and state estimation point of view.

## 1.1 $H_2$ and LQG Problems

Consider the following dynamical system:

$$\dot{x} = Ax + B_1 w + B_2 u, \qquad x(0) = 0 \tag{1.1}$$
$$z = C_1 x + D_{12} u \tag{1.2}$$
$$y = C_2 x + D_{21} w \tag{1.3}$$

We shall make the following assumptions:

(i) $(A, B_2)$ is stabilizable and $(C_2, A)$ is detectable;

(ii) $R_1 = D_{12}^* D_{12} > 0$ and $R_2 = D_{21} D_{21}^* > 0$;

(iii) $\begin{bmatrix} A - j\omega I & B_2 \\ C_1 & D_{12} \end{bmatrix}$ has full column rank for all $\omega$;

(iv) $\begin{bmatrix} A - j\omega I & B_1 \\ C_2 & D_{21} \end{bmatrix}$ has full row rank for all $\omega$.

Let $T_{zw}$ denote the transfer matrix from $w$ to $z$.

**$H_2$ Control Problem:** find a control law $u = K(s)y$ that stabilizes the closed-loop system and minimizes $\|T_{zw}\|_2$, where

$$\|T_{zw}\|_2 := \sqrt{\frac{1}{2\pi} \int_{-\infty}^{\infty} \operatorname{trace}\left[T_{zw}^*(j\omega) T_{zw}(j\omega)\right] d\omega}.$$

**LQG Control Problem:** assume $w(t)$ is a zero mean, unit variance, white Gaussian noise, i.e.,

$$E\{w(t)\} = 0, \qquad E\{w(t)w^*(\tau)\} = I\,\delta(t-\tau).$$

Find a control law $u = K(s)y$ that stabilizes the closed-loop system and minimizes

$$J = E\left\{\lim_{T\to\infty} \frac{1}{T} \int_0^T \|z\|^2\, dt\right\}.$$

Define

$$A_x = A - B_2 R_1^{-1} D_{12}^* C_1, \qquad A_y = A - B_1 D_{21}^* R_2^{-1} C_2.$$

Then assumptions (iii) and (iv) guarantee that the following algebraic Riccati equations have stabilizing solutions $X_2 \ge 0$ and $Y_2 \ge 0$, respectively:

$$X_2 A_x + A_x^* X_2 - X_2 B_2 R_1^{-1} B_2^* X_2 + C_1^* (I - D_{12} R_1^{-1} D_{12}^*) C_1 = 0$$
$$Y_2 A_y^* + A_y Y_2 - Y_2 C_2^* R_2^{-1} C_2 Y_2 + B_1 (I - D_{21}^* R_2^{-1} D_{21}) B_1^* = 0$$

Define

$$F_2 := -R_1^{-1}(D_{12}^* C_1 + B_2^* X_2), \qquad L_2 := -(B_1 D_{21}^* + Y_2 C_2^*) R_2^{-1}.$$

It is well known that the $H_2$ and LQG problems are equivalent, and the optimal controller is given by

$$K_2(s) := \left[\begin{array}{c|c} A + B_2 F_2 + L_2 C_2 & -L_2 \\ \hline F_2 & 0 \end{array}\right]$$

with

$$\min J = \min \|T_{zw}\|_2^2 = \operatorname{trace}(B_1^* X_2 B_1) + \operatorname{trace}(F_2 Y_2 F_2^* R_1).$$
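As a purely illustrative sketch (not part of the original manual), the two Riccati equations and the controller formula above can be evaluated numerically with SciPy for a made-up plant, assuming the simplified conditions $D_{12}^* C_1 = 0$, $B_1 D_{21}^* = 0$, and $R_1 = R_2 = I$, under which the equations reduce to the familiar LQR/Kalman-filter pair:

```python
# Sketch: H2/LQG controller for a toy plant (all numbers illustrative),
# assuming D12'C1 = 0, B1 D21' = 0 and R1 = R2 = I, so the two AREs reduce
# to the standard control/filter Riccati equations.
import numpy as np
from scipy.linalg import solve_continuous_are

A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])   # disturbance input
B2 = np.array([[0.0], [1.0]])   # control input
C1 = np.array([[1.0, 0.0]])     # state weight in z
C2 = np.array([[1.0, 0.0]])     # measurement

# Control ARE:  X2 A + A' X2 - X2 B2 B2' X2 + C1' C1 = 0
X2 = solve_continuous_are(A, B2, C1.T @ C1, np.eye(1))
F2 = -B2.T @ X2                 # state-feedback gain

# Filter ARE (dual):  A Y2 + Y2 A' - Y2 C2' C2 Y2 + B1 B1' = 0
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, np.eye(1))
L2 = -Y2 @ C2.T                 # observer gain

# Observer-based controller K2: xhat' = (A + B2 F2 + L2 C2) xhat - L2 y
Ak = A + B2 @ F2 + L2 @ C2

# Both A + B2 F2 and A + L2 C2 should be Hurwitz (stabilizing solutions)
assert np.all(np.linalg.eigvals(A + B2 @ F2).real < 0)
assert np.all(np.linalg.eigvals(A + L2 @ C2).real < 0)

# Optimal cost: trace(B1' X2 B1) + trace(F2 Y2 F2')   (R1 = I here)
cost = float(np.trace(B1.T @ X2 @ B1) + np.trace(F2 @ Y2 @ F2.T))
print(cost)
```

Here `solve_continuous_are(A, B, Q, R)` solves $A^*X + XA - XBR^{-1}B^*X + Q = 0$; the dual call with $(A^*, C_2^*)$ yields the filter equation.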

## 1.2 A New $H_2$ Formulation

In this section, we shall look at the $H_2$ problem from a time domain point of view, which will lead to a simple proof of the result. The following lemma gives a time domain characterization of the $H_2$ norm of a stable transfer matrix.

**Lemma 1.1** Consider a stable dynamical system

$$\dot{x} = Ax + Bw, \qquad z = Cx.$$

Let $T_{zw}$ be the transfer matrix from $w$ to $z$, let $x(0) = 0$, and let $w(t) = w_0 \delta(t)$ with a random direction $w_0$ such that $E\{w_0\} = 0$ and $E\{w_0 w_0^*\} = I$. Then

$$\|T_{zw}\|_2^2 = E \int_0^\infty \|z(t)\|^2\, dt.$$

**Proof.** Let the impulse response of the system be denoted by $g(t) = Ce^{At}B$. It is well known by Parseval's theorem that

$$\|T_{zw}\|_2^2 = \int_0^\infty \operatorname{trace}\left[g^*(t)g(t)\right] dt = \operatorname{trace}(B^* Q B)$$

where

$$Q = \int_0^\infty e^{A^* t} C^* C e^{At}\, dt$$

is the solution of the following Lyapunov equation:

$$A^* Q + Q A + C^* C = 0.$$

Next, note that $z(t) = Ce^{At}Bw_0$ and

$$E \int_0^\infty \|z(t)\|^2\, dt = E \int_0^\infty w_0^* B^* e^{A^* t} C^* C e^{At} B w_0\, dt = \int_0^\infty \operatorname{trace}\left[B^* e^{A^* t} C^* C e^{At} B\, E\{w_0 w_0^*\}\right] dt = \operatorname{trace}(B^* Q B).$$

This completes the proof. $\Box$
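Lemma 1.1's identity $\|T_{zw}\|_2^2 = \operatorname{trace}(B^*QB)$ can be spot-checked numerically on an arbitrary stable example (the system below is made up): the Lyapunov-equation value should match a direct quadrature of the impulse-response energy.

```python
# Sketch: compare trace(B'QB) from the Lyapunov equation with a direct
# quadrature of trace(g(t)'g(t)), g(t) = C e^{At} B (toy stable system).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# A' Q + Q A + C' C = 0  (observability Gramian)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq = float(np.trace(B.T @ Q @ B))

# Trapezoidal quadrature of the impulse-response energy over a long horizon
ts = np.linspace(0.0, 20.0, 4001)
vals = np.array([float(np.trace((C @ expm(A * t) @ B).T @ (C @ expm(A * t) @ B)))
                 for t in ts])
h2_sq_quad = float(np.sum(0.5 * (vals[1:] + vals[:-1])) * (ts[1] - ts[0]))

print(h2_sq, h2_sq_quad)   # the two values should agree to several decimals
```

For this particular example $g(t) = 2e^{-t} - e^{-3t}$, so both values equal $\int_0^\infty g(t)^2 dt = 7/6$.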

In view of the above lemma, the $H_2$ control problem can be regarded as that of finding a controller $K(s)$ for the system described by equations (1.1)–(1.3), with $w = w_0\delta(t)$ and $x(0) = 0$, that minimizes

$$J_1 := E \int_0^\infty \|z\|^2\, dt.$$

Now we are ready to consider the output feedback $H_2$ problem. We shall need the following fact.

**Lemma 1.2** Suppose $K(s)$ is strictly proper and stabilizes the closed-loop system. Then $x(0^+) = B_1 w_0$.

**Proof.** Let $K(s)$ be described by

$$\dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}y, \qquad u = \hat{C}\hat{x}.$$

Then the closed-loop system becomes

$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & B_2\hat{C} \\ \hat{B}C_2 & \hat{A} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B_1 \\ \hat{B}D_{21} \end{bmatrix} w =: \bar{A}\begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \bar{B}w$$

and

$$\begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix} = e^{\bar{A}t}\bar{B}w_0, \qquad t > 0,$$

which gives $x(0^+) = B_1 w_0$. $\Box$

Note that $x(\infty) = 0$ because the closed-loop system is stable. Hence

$$J_1 = E \int_0^\infty \left( \|z\|^2 + \frac{d}{dt}\left(x^* X_2 x\right) \right) dt = E \int_0^\infty \left( \|z\|^2 + 2x^* X_2 \dot{x} \right) dt.$$

Substituting $\dot{x} = Ax + B_1 w + B_2 u$ and $z = C_1 x + D_{12} u$, and completing the square using the $X_2$ Riccati equation, we get

$$J_1 = E \int_0^\infty \left( (u - F_2 x)^* R_1 (u - F_2 x) + 2x^* X_2 B_1 w_0 \delta(t) \right) dt = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + E\left\{x^*(0^+) X_2 B_1 w_0\right\}.$$

By Lemma 1.2, $x(0^+) = B_1 w_0$, so

$$J_1 = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + E\left\{w_0^* B_1^* X_2 B_1 w_0\right\} = E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt + \operatorname{trace}(B_1^* X_2 B_1).$$

Obviously, an optimal control law would be $u = F_2 x$ if the full state were available for feedback. Since it is not, we have to implement the control law using the estimated state:

$$u = F_2 \hat{x} \tag{1.4}$$

where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the system equations (1.1) and (1.3) as

$$\dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2 \hat{x} - y) \tag{1.5}$$

where $L$ is the observer gain, to be determined such that $A + LC_2$ is stable and $J_1$ is minimized. Let $e := x - \hat{x}$. Then

$$\dot{e} = (A + LC_2)e + (B_1 + LD_{21})w =: A_L e + B_L w, \qquad u - F_2 x = -F_2 e,$$

and $e(t) = e^{A_L t} B_L w_0$. Hence

$$J_2 := E \int_0^\infty (u - F_2 x)^* R_1 (u - F_2 x)\, dt = E \int_0^\infty e^* F_2^* R_1 F_2 e\, dt = E \int_0^\infty w_0^* B_L^* e^{A_L^* t} F_2^* R_1 F_2 e^{A_L t} B_L w_0\, dt$$

$$= \operatorname{trace}\left( F_2^* R_1 F_2 \int_0^\infty e^{A_L t} B_L\, E\{w_0 w_0^*\}\, B_L^* e^{A_L^* t}\, dt \right) = \operatorname{trace}\left\{ F_2^* R_1 F_2 Y \right\}$$

where

$$Y = \int_0^\infty e^{(A+LC_2)t} (B_1 + LD_{21})(B_1 + LD_{21})^* e^{(A+LC_2)^* t}\, dt,$$

or equivalently

$$(A + LC_2)Y + Y(A + LC_2)^* + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$

Subtracting the equation for $Y_2$ from the above equation gives

$$(A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2)R_2(L - L_2)^* = 0.$$

It is then clear that $Y \ge Y_2$, with equality if $L = L_2$. Hence $J_2$ is minimized by $L = L_2$. In summary, the optimal $H_2$ controller can be written as

$$\dot{\hat{x}} = A\hat{x} + B_2 u + L_2(C_2 \hat{x} - y), \qquad u = F_2 \hat{x},$$

i.e.,

$$K_2(s) := \left[\begin{array}{c|c} A + B_2 F_2 + L_2 C_2 & -L_2 \\ \hline F_2 & 0 \end{array}\right], \qquad \min \|T_{zw}\|_2^2 = \operatorname{trace}(B_1^* X_2 B_1) + \operatorname{trace}(F_2 Y_2 F_2^* R_1).$$

This is exactly the $H_2$ controller formula we are familiar with.

## 1.3 Traditional Stochastic LQG Formulation

The traditional stochastic LQG formulation assumes that $w(t)$ is a zero mean, unit variance, white Gaussian stochastic process:

$$E\{w(t)\} = 0, \qquad E\{w(t)w^*(\tau)\} = I\,\delta(t - \tau).$$

In this case, we have the following relationship.

**Lemma 1.3** Consider a stable dynamical system

$$\dot{x} = Ax + Bw, \qquad x(0) = 0, \qquad z = Cx.$$

Then

$$\|T_{zw}\|_2^2 = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2\, dt \right\}.$$

**Proof.** Note that

$$z(t) = \int_0^t Ce^{A(t-\tau)} B w(\tau)\, d\tau$$

and

$$E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2 dt \right\} = \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-\tau)} C^* C e^{A(t-s)} B\, E\{w(s)w^*(\tau)\} \right] d\tau\, ds\, dt$$

$$= \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-\tau)} C^* C e^{A(t-s)} B\, \delta(\tau - s) \right] d\tau\, ds\, dt = \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \operatorname{trace}\left[ B^* e^{A^*(t-s)} C^* C e^{A(t-s)} B \right] ds\, dt$$

$$= \operatorname{trace}\left( B^* \left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t e^{A^* s} C^* C e^{A s}\, ds\, dt \right\} B \right) = \operatorname{trace}(B^* Q B) = \|T_{zw}\|_2^2. \qquad \Box$$

Now consider the LQG control problem, and suppose that there exists an output feedback controller such that the closed-loop system is stable.

**Lemma 1.4** Suppose that $K(s)$ is a strictly proper stabilizing controller. Then $E\{x(t)w^*(t)\} = B_1/2$.

**Proof.** Let $K(s)$ be described by

$$\dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}y, \qquad u = \hat{C}\hat{x}.$$

Then the closed-loop system becomes

$$\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & B_2\hat{C} \\ \hat{B}C_2 & \hat{A} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B_1 \\ \hat{B}D_{21} \end{bmatrix} w =: \bar{A}\begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \bar{B}w,$$

so

$$\begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix} = \int_0^t e^{\bar{A}(t-\tau)} \bar{B} w(\tau)\, d\tau.$$

Hence

$$E\left\{ \begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix} w^*(t) \right\} = \int_0^t e^{\bar{A}(t-\tau)} \bar{B}\, E\{w(\tau)w^*(t)\}\, d\tau = \int_0^t e^{\bar{A}(t-\tau)} \bar{B}\, \delta(t - \tau)\, d\tau = \bar{B}/2,$$

where the factor $1/2$ arises because the delta function sits at the boundary of the integration interval, i.e., $\int_0^t e^{\bar{A}\sigma}\bar{B}\,\delta(\sigma)\, d\sigma = \bar{B}/2$. The plant-state component gives $E\{x(t)w^*(t)\} = B_1/2$. $\Box$

Now

$$J := E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \|z(t)\|^2 dt \right\} = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( \|z\|^2 + \frac{d}{dt}\left(x^* X_2 x\right) \right) dt \right\},$$

since the added term contributes $E\{x^*(T)X_2 x(T)\}/T \to 0$ for a stable closed loop. Substituting $\dot{x} = Ax + B_1 w + B_2 u$ and $z = C_1 x + D_{12} u$, and completing the square using the $X_2$ equation, we get

$$J = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T \left( (u - F_2 x)^* R_1 (u - F_2 x) + 2x^* X_2 B_1 w(t) \right) dt \right\}$$

$$= E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\} + \lim_{T\to\infty} \frac{1}{T} \int_0^T \operatorname{trace}\left\{ 2 X_2 B_1 E\{w(t)x^*(t)\} \right\} dt$$

$$= E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\} + \operatorname{trace}(B_1^* X_2 B_1),$$

where the last equality uses Lemma 1.4. Again, an optimal control law would be $u = F_2 x$ if the full state were available for feedback. Since it is not, we will have to implement the control law using the estimated state:

$$u = F_2 \hat{x} \tag{1.6}$$

where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the system equations as

$$\dot{\hat{x}} = A\hat{x} + B_2 u + L(C_2 \hat{x} - y) \tag{1.7}$$

where $L$ is the observer gain, to be determined such that $A + LC_2$ is stable and $J$ is minimized. Let $e := x - \hat{x}$. Then

$$\dot{e} = (A + LC_2)e + (B_1 + LD_{21})w =: A_L e + B_L w, \qquad u - F_2 x = -F_2 e,$$

and

$$e(t) = \int_0^t e^{A_L(t-\tau)} B_L w(\tau)\, d\tau.$$

Hence

$$J_3 := E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T (u - F_2 x)^* R_1 (u - F_2 x)\, dt \right\} = E\left\{ \lim_{T\to\infty} \frac{1}{T} \int_0^T e^* F_2^* R_1 F_2 e\, dt \right\}$$

$$= \lim_{T\to\infty} \frac{1}{T} \int_0^T \int_0^t \int_0^t \operatorname{trace}\left[ F_2^* R_1 F_2\, e^{A_L(t-s)} B_L\, E\{w(s)w^*(\tau)\}\, B_L^* e^{A_L^*(t-\tau)} \right] d\tau\, ds\, dt$$

$$= \operatorname{trace}\left\{ F_2^* R_1 F_2 \int_0^\infty e^{A_L t} B_L B_L^* e^{A_L^* t}\, dt \right\} = \operatorname{trace}\left\{ F_2^* R_1 F_2 Y \right\}$$

where $Y$ satisfies

$$(A + LC_2)Y + Y(A + LC_2)^* + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$

Subtracting the equation for $Y_2$ from the above equation gives

$$(A + LC_2)(Y - Y_2) + (Y - Y_2)(A + LC_2)^* + (L - L_2)R_2(L - L_2)^* = 0.$$

It is then clear that $Y \ge Y_2$, with equality if $L = L_2$. Hence $J_3$ is minimized by $L = L_2$.

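The observer-gain argument of this chapter ($Y \ge Y_2$ with equality at $L = L_2$) can be confirmed numerically on a toy example (made-up data, assuming $B_1 D_{21}^* = 0$ and $R_2 = D_{21}D_{21}^* = I$, so $L_2 = -Y_2 C_2^*$ and $B_L B_L^* = B_1 B_1^* + LL^*$):

```python
# Sketch: confirm numerically that the Kalman gain L2 = -Y2 C2' minimizes
# trace(W Y(L)), where Y(L) solves the error-covariance Lyapunov equation
# (assumes B1 D21' = 0 and D21 D21' = I, so B_L B_L' = B1 B1' + L L').
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B1 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])
W  = np.eye(2)          # stand-in for the weight F2' R1 F2 >= 0

Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, np.eye(1))
L2 = -Y2 @ C2.T

def err_cost(L):
    # Solve (A + L C2) Y + Y (A + L C2)' + B1 B1' + L L' = 0
    AL = A + L @ C2
    Y = solve_continuous_lyapunov(AL, -(B1 @ B1.T + L @ L.T))
    return float(np.trace(W @ Y))

J_opt = err_cost(L2)
# Any stabilizing perturbation of the gain can only increase the cost
J_pert = err_cost(L2 + np.array([[0.3], [-0.2]]))
print(J_opt, J_pert)
```

At $L = L_2$ the Lyapunov solution coincides with $Y_2$, so `err_cost(L2)` equals $\operatorname{trace}(Y_2)$ for this choice of weight.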

# Chapter 2: Understanding $H_\infty$ Control

We give an intuitive derivation of the $H_\infty$ controller.

## 2.1 Problem Formulation and Solutions

Consider again the standard feedback configuration: the generalized plant $G$ has exogenous input $w$ and control input $u$, and produces the controlled output $z$ and the measurement $y$; the controller closes the loop through $u = K(s)y$. The plant is

$$G(s) = \left[\begin{array}{c|cc} A & B_1 & B_2 \\ \hline C_1 & 0 & D_{12} \\ C_2 & D_{21} & 0 \end{array}\right].$$

The following assumptions are made:

(i) $(A, B_1)$ is controllable and $(C_1, A)$ is observable;

(ii) $(A, B_2)$ is stabilizable and $(C_2, A)$ is detectable;

(iii) $D_{12}^* \begin{bmatrix} C_1 & D_{12} \end{bmatrix} = \begin{bmatrix} 0 & I \end{bmatrix}$;

(iv) $\begin{bmatrix} B_1 \\ D_{21} \end{bmatrix} D_{21}^* = \begin{bmatrix} 0 \\ I \end{bmatrix}$.

**Theorem 2.1** There exists an admissible controller such that $\|T_{zw}\|_\infty < \gamma$ iff the following three conditions hold:

(i) there exists a stabilizing solution $X_\infty > 0$ to

$$X_\infty A + A^* X_\infty + X_\infty (B_1 B_1^*/\gamma^2 - B_2 B_2^*) X_\infty + C_1^* C_1 = 0;$$

(ii) there exists a stabilizing solution $Y_\infty > 0$ to

$$A Y_\infty + Y_\infty A^* + Y_\infty (C_1^* C_1/\gamma^2 - C_2^* C_2) Y_\infty + B_1 B_1^* = 0;$$

(iii) $\rho(X_\infty Y_\infty) < \gamma^2$.

Moreover, when these conditions hold, one central controller is

$$K_{\mathrm{sub}}(s) := \left[\begin{array}{c|c} \hat{A}_\infty & -Z_\infty L_\infty \\ \hline F_\infty & 0 \end{array}\right]$$

where

$$\hat{A}_\infty := A + \gamma^{-2} B_1 B_1^* X_\infty + B_2 F_\infty + Z_\infty L_\infty C_2,$$
$$F_\infty := -B_2^* X_\infty, \qquad L_\infty := -Y_\infty C_2^*, \qquad Z_\infty := (I - \gamma^{-2} Y_\infty X_\infty)^{-1}.$$

## 2.2 An Intuitive Derivation

Most existing derivations and proofs of the $H_\infty$ control results given in Theorem 2.1 are mathematically quite complex. Some algebraic derivations (such as the one given in the book) are simple, but they provide no insight into the theory for control engineers. Here we shall present an intuitive but nonrigorous derivation of the $H_\infty$ results using only basic system-theoretic concepts such as state feedback and state estimation. In fact, we shall construct the output feedback $H_\infty$ central controller intuitively by combining an $H_\infty$ state feedback and an observer.

A key fact we shall use is the so-called bounded real lemma, which states that, for a system $z = G(s)w$ with state space realization $G(s) = C(sI - A)^{-1}B \in H_\infty$, the condition $\|G\|_\infty < \gamma$, which is essentially equivalent to

$$\int_0^\infty \left( \|z\|^2 - \gamma^2 \|w\|^2 \right) dt < 0, \qquad \forall\, w \ne 0,$$

holds if and only if there is an $X = X^* \ge 0$ such that

$$XA + A^* X + XBB^* X/\gamma^2 + C^* C = 0$$

and $A + BB^* X/\gamma^2$ is stable. Dually, there is a $Y = Y^* \ge 0$ such that

$$YA^* + AY + YC^* C Y/\gamma^2 + BB^* = 0$$

and $A + YC^* C/\gamma^2$ is stable.
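To make Theorem 2.1 concrete, the two $\gamma$-dependent Riccati equations can be solved by extracting the stable invariant subspace of the associated Hamiltonian matrices, and the three conditions checked for a made-up plant and a chosen $\gamma$. The sketch below is illustrative only; `stabilizing_solution` is a hypothetical helper, not a library routine.

```python
# Sketch: check the three conditions of Theorem 2.1 on a toy plant by
# solving X A + A' X + X R X + Q = 0 via the Hamiltonian matrix.
# All plant data below is made up for illustration.
import numpy as np

def stabilizing_solution(A, R, Q):
    """Stabilizing solution of X A + A' X + X R X + Q = 0 (hypothetical helper)."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    S = V[:, w.real < 0]              # basis of the stable invariant subspace
    X = np.real(S[n:, :] @ np.linalg.inv(S[:n, :]))
    return (X + X.T) / 2              # symmetrize against rounding noise

A  = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1 = np.array([[1.0], [0.5]])
B2 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])
C2 = np.array([[0.0, 1.0]])
gamma = 2.0

Xinf = stabilizing_solution(A, B1 @ B1.T / gamma**2 - B2 @ B2.T, C1.T @ C1)
Yinf = stabilizing_solution(A.T, C1.T @ C1 / gamma**2 - C2.T @ C2, B1 @ B1.T)

cond1 = bool(np.all(np.linalg.eigvalsh(Xinf) > 0))          # X_inf > 0
cond2 = bool(np.all(np.linalg.eigvalsh(Yinf) > 0))          # Y_inf > 0
rho = float(max(abs(np.linalg.eigvals(Xinf @ Yinf))))
cond3 = rho < gamma**2                                      # spectral radius test
print(cond1, cond2, cond3)
```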

Note that the system has the following state space realization:

$$\dot{x} = Ax + B_1 w + B_2 u, \qquad z = C_1 x + D_{12} u, \qquad y = C_2 x + D_{21} w.$$

We shall first consider state feedback $u = Fx$. Then the closed-loop system becomes

$$\dot{x} = (A + B_2 F)x + B_1 w, \qquad z = (C_1 + D_{12} F)x.$$

By the bounded real lemma, $\|T_{zw}\|_\infty < \gamma$ implies that there exists an $X = X^* \ge 0$ such that

$$X(A + B_2 F) + (A + B_2 F)^* X + XB_1 B_1^* X/\gamma^2 + (C_1 + D_{12} F)^*(C_1 + D_{12} F) = 0,$$

which is equivalent, by completing the square with respect to $F$, to

$$XA + A^* X + XB_1 B_1^* X/\gamma^2 - XB_2 B_2^* X + C_1^* C_1 + (F + B_2^* X)^*(F + B_2^* X) = 0.$$

Intuition suggests that we can take $F = -B_2^* X$, which gives

$$XA + A^* X + XB_1 B_1^* X/\gamma^2 - XB_2 B_2^* X + C_1^* C_1 = 0.$$

This is exactly the $X_\infty$ Riccati equation under the preceding simplified conditions. Hence we can take $F = F_\infty$ and $X = X_\infty$.

Next, suppose that there is an output feedback stabilizing controller such that $\|T_{zw}\|_\infty < \gamma$. Then $x(\infty) = 0$ because the closed-loop system is stable, and

$$\int_0^\infty \left( \|z\|^2 - \gamma^2\|w\|^2 \right) dt = \int_0^\infty \left( \|z\|^2 - \gamma^2\|w\|^2 + \frac{d}{dt}\left(x^* X_\infty x\right) \right) dt.$$

Substituting $\dot{x} = Ax + B_1 w + B_2 u$ and $z = C_1 x + D_{12} u$ into the above integral, using the $X_\infty$ equation, and finally completing the squares with respect to $u$ and $w$, we get

$$\int_0^\infty \left( \|z\|^2 - \gamma^2\|w\|^2 \right) dt = \int_0^\infty \left( \|v\|^2 - \gamma^2\|r\|^2 \right) dt$$

where $v = u + B_2^* X_\infty x = u - F_\infty x$ and $r = w - B_1^* X_\infty x/\gamma^2$. This also suggests intuitively that the state feedback control can be taken as $u = F_\infty x$ and that a worst state feedback disturbance would be $w = \gamma^{-2} B_1^* X_\infty x$. Substituting $w = r + \gamma^{-2} B_1^* X_\infty x$ into the system equations, we have the new system equations

$$\dot{x} = (A + B_1 B_1^* X_\infty/\gamma^2)x + B_1 r + B_2 u$$
$$v = -F_\infty x + u$$
$$y = C_2 x + D_{21} r.$$
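The completing-the-square manipulation above is purely algebraic and can be spot-checked with random data (an illustrative sketch; the identity holds for any $F$ and any symmetric $X$, given $D_{12}^*D_{12} = I$ and $D_{12}^*C_1 = 0$):

```python
# Sketch: numeric spot-check of the completing-the-square identity
#   X(A+B2 F) + (A+B2 F)'X + X B1 B1' X / g^2 + (C1+D12 F)'(C1+D12 F)
# = X A + A'X + X B1 B1' X / g^2 - X B2 B2' X + C1'C1 + (F+B2'X)'(F+B2'X)
# under D12'D12 = I and D12'C1 = 0 (random data, fixed seed).
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A  = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, 1))
B2 = rng.standard_normal((n, m))
C1 = np.vstack([rng.standard_normal((2, n)), np.zeros((m, n))])
D12 = np.vstack([np.zeros((2, m)), np.eye(m)])  # D12'D12 = I, D12'C1 = 0
F  = rng.standard_normal((m, n))
X  = rng.standard_normal((n, n)); X = X + X.T   # any symmetric X
g2 = 4.0                                        # gamma squared

lhs = (X @ (A + B2 @ F) + (A + B2 @ F).T @ X + X @ B1 @ B1.T @ X / g2
       + (C1 + D12 @ F).T @ (C1 + D12 @ F))
rhs = (X @ A + A.T @ X + X @ B1 @ B1.T @ X / g2 - X @ B2 @ B2.T @ X
       + C1.T @ C1 + (F + B2.T @ X).T @ (F + B2.T @ X))
print(np.allclose(lhs, rhs))
```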

Hence the original $H_\infty$ control problem is equivalent to finding a controller so that $\|T_{vr}\|_\infty < \gamma$, i.e.,

$$\int_0^\infty \left( \|u - F_\infty x\|^2 - \gamma^2 \|r\|^2 \right) dt < 0.$$

Obviously, $u = F_\infty x$ would do if the full state were available for feedback. Since it is not, we have to implement the control law using the estimated state:

$$u = F_\infty \hat{x}$$

where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the new system equations as

$$\dot{\hat{x}} = (A + B_1 B_1^* X_\infty/\gamma^2)\hat{x} + B_2 u + L(C_2 \hat{x} - y)$$

where $L$ is the observer gain to be determined. Let $e := x - \hat{x}$. Then

$$\dot{e} = (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)e + (B_1 + LD_{21})r, \qquad v = -F_\infty e.$$

Since it is assumed that $\|T_{vr}\|_\infty < \gamma$, it follows from the dual version of the bounded real lemma that there exists a $Y \ge 0$ such that

$$Y(A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)^* + (A + B_1 B_1^* X_\infty/\gamma^2 + LC_2)Y + YF_\infty^* F_\infty Y/\gamma^2 + (B_1 + LD_{21})(B_1 + LD_{21})^* = 0.$$

The above equation can be written as

$$Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + YF_\infty^* F_\infty Y/\gamma^2 + B_1 B_1^* - YC_2^* C_2 Y + (L + YC_2^*)(L + YC_2^*)^* = 0.$$

Again, intuition suggests that we can take $L = -YC_2^*$, which gives

$$Y(A + B_1 B_1^* X_\infty/\gamma^2)^* + (A + B_1 B_1^* X_\infty/\gamma^2)Y + YF_\infty^* F_\infty Y/\gamma^2 - YC_2^* C_2 Y + B_1 B_1^* = 0.$$

It is easy to verify that

$$Y = Y_\infty (I - \gamma^{-2} X_\infty Y_\infty)^{-1}$$

where $Y_\infty$ is as given in Theorem 2.1. Since $Y \ge 0$, we must have $\rho(X_\infty Y_\infty) < \gamma^2$.

Hence $L = -YC_2^* = Z_\infty L_\infty$, and the controller is given by

$$\dot{\hat{x}} = (A + B_1 B_1^* X_\infty/\gamma^2)\hat{x} + B_2 u + Z_\infty L_\infty (C_2 \hat{x} - y), \qquad u = F_\infty \hat{x},$$

which is exactly the $H_\infty$ central controller given in Theorem 2.1. We can see that the $H_\infty$ central controller is obtained by connecting a state feedback with a state estimate under the worst state feedback disturbance.
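As a final sanity check, one can assemble the central controller for a made-up plant satisfying the simplified conditions and confirm closed-loop stability together with $\bar{\sigma}(T_{zw}(j\omega)) < \gamma$ on a frequency grid. This is an illustrative sketch only; `riccati` is a hypothetical helper and all plant numbers are invented.

```python
# Sketch: assemble the central controller for a toy plant and spot-check
# closed-loop stability and ||Tzw||_inf < gamma on a frequency grid.
# z = [x1; u] and w = [w1; noise], so that D12'[C1 D12] = [0 I] and
# [B1; D21] D21' = [0; I] hold by construction (made-up data).
import numpy as np

def riccati(A, R, Q):
    """Stabilizing solution of X A + A' X + X R X + Q = 0 (Hamiltonian method)."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    S = V[:, w.real < 0]
    X = np.real(S[n:, :] @ np.linalg.inv(S[:n, :]))
    return (X + X.T) / 2

A   = np.array([[-1.0, 1.0], [0.0, -2.0]])
B1  = np.array([[1.0, 0.0], [0.5, 0.0]])      # [disturbance, measurement noise]
B2  = np.array([[0.0], [1.0]])
C1  = np.array([[1.0, 0.0], [0.0, 0.0]])      # z = [x1; u]
D12 = np.array([[0.0], [1.0]])
C2  = np.array([[0.0, 1.0]])
D21 = np.array([[0.0, 1.0]])
gamma = 2.0

Xinf = riccati(A, B1 @ B1.T / gamma**2 - B2 @ B2.T, C1.T @ C1)
Yinf = riccati(A.T, C1.T @ C1 / gamma**2 - C2.T @ C2, B1 @ B1.T)
Finf = -B2.T @ Xinf
Linf = -Yinf @ C2.T
Zinf = np.linalg.inv(np.eye(2) - Yinf @ Xinf / gamma**2)

Ak = A + B1 @ B1.T @ Xinf / gamma**2 + B2 @ Finf + Zinf @ Linf @ C2
Bk = -Zinf @ Linf
Ck = Finf

# Closed-loop Tzw with u = K y
Acl = np.block([[A, B2 @ Ck], [Bk @ C2, Ak]])
Bcl = np.vstack([B1, Bk @ D21])
Ccl = np.hstack([C1, D12 @ Ck])

assert np.all(np.linalg.eigvals(Acl).real < 0)   # closed loop is stable

freqs = np.logspace(-2, 2, 200)
peak = max(np.linalg.norm(
    Ccl @ np.linalg.inv(1j * wv * np.eye(4) - Acl) @ Bcl, 2)
    for wv in freqs)
print(peak, gamma)   # peak gain on the grid should sit below gamma
```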


# Part II: Solutions Manual