ESSENTIALS OF
ROBUST CONTROL
Kemin Zhou
January 9, 1998
Preface
This solutions manual contains two parts. The first part contains some intuitive derivations of $H_2$ and $H_\infty$ control. The derivations given here are not strictly rigorous, but I feel that they are helpful (at least to me) in understanding $H_2$ and $H_\infty$ control theory. The second part contains the solutions to problems in the book. Most, but not all, problems are solved in detail. It should also be noted that many problems do not have unique solutions, so the manual should only be used as a reference. It is also possible that there are errors in the manual. I would very much appreciate your comments and suggestions.
Kemin Zhou
Contents

Preface

II Solutions Manual

1 Introduction
2 Linear Algebra
3 Linear Dynamical Systems
4 H2 and H∞ Spaces
5 Internal Stability
6 Performance Specifications and Limitations
7 Balanced Model Reduction
8 Model Uncertainty and Robustness
9 Linear Fractional Transformation
10 Structured Singular Value
11 Controller Parameterization
12 Riccati Equations
13 H2 Optimal Control
14 H∞ Control
15 Controller Reduction
16 H∞ Loop Shaping
17 Gap Metric and ν-Gap Metric
Part I

Understanding H2/LQG/H∞ Control

Chapter 1

Understanding H2/LQG Control
We present a natural and intuitive approach to $H_2$/LQG control theory from the state feedback and state estimation point of view.
LQG Control Problem: Assume that $w(t)$ is a zero-mean, unit-variance, white Gaussian noise:
\[ E\{w(t)\} = 0, \qquad E\{w(t)w^*(\tau)\} = I\delta(t-\tau). \]
Find a control law
\[ u = K(s)y \]
that stabilizes the closed-loop system and minimizes
\[ J = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z\|^2\, dt\right\}. \]
Define
\[ A_x := A - B_2R_1^{-1}D_{12}^*C_1, \qquad A_y := A - B_1D_{21}^*R_2^{-1}C_2. \]
Then assumptions (iii) and (iv) guarantee that the following algebraic Riccati equations have stabilizing solutions $X_2 \geq 0$ and $Y_2 \geq 0$, respectively:
\[ X_2A_x + A_x^*X_2 - X_2B_2R_1^{-1}B_2^*X_2 + C_1^*(I - D_{12}R_1^{-1}D_{12}^*)C_1 = 0 \]
\[ Y_2A_y^* + A_yY_2 - Y_2C_2^*R_2^{-1}C_2Y_2 + B_1(I - D_{21}^*R_2^{-1}D_{21})B_1^* = 0. \]
Define
\[ F_2 := -R_1^{-1}(D_{12}^*C_1 + B_2^*X_2), \qquad L_2 := -(B_1D_{21}^* + Y_2C_2^*)R_2^{-1}. \]
It is well known that the $H_2$ and LQG problems are equivalent and that the optimal controller is given by
\[ K_2(s) := \left[\begin{array}{c|c} A + B_2F_2 + L_2C_2 & -L_2 \\ \hline F_2 & 0 \end{array}\right] \]
and
\[ \min J = \min\|T_{zw}\|_2^2 = \operatorname{trace}(B_1^*X_2B_1) + \operatorname{trace}(F_2Y_2F_2^*R_1). \]
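The formulas above can be computed numerically. A minimal sketch, assuming an illustrative two-state plant (the matrices below are not from the book) and using scipy's ARE solver, whose cross term `s` absorbs the $A_x$ (resp. $A_y$) shift:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: xdot = A x + B1 w + B2 u,  z = C1 x + D12 u,  y = C2 x + D21 w
A   = np.array([[0., 1.], [-1., -1.]])
B1  = np.array([[1., 0.], [0., 0.]])
B2  = np.array([[0.], [1.]])
C1  = np.array([[1., 0.], [0., 0.]])
D12 = np.array([[0.], [1.]])
C2  = np.array([[1., 0.]])
D21 = np.array([[0., 1.]])

R1 = D12.T @ D12            # assumed nonsingular
R2 = D21 @ D21.T

# The two AREs in scipy's (a, b, q, r, s) form; the cross term s = C1'D12
# (resp. B1 D21') is equivalent to shifting A to A_x (resp. A_y).
X2 = solve_continuous_are(A,   B2,   C1.T @ C1, R1, s=C1.T @ D12)
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)

F2 = -np.linalg.solve(R1, D12.T @ C1 + B2.T @ X2)
L2 = -(B1 @ D21.T + Y2 @ C2.T) @ np.linalg.inv(R2)

cost = np.trace(B1.T @ X2 @ B1) + np.trace(F2 @ Y2 @ F2.T @ R1)
print("F2 =", F2, " L2 =", L2.ravel(), " min J =", cost)
```

Since scipy returns the stabilizing solutions, $A + B_2F_2$ and $A + L_2C_2$ are automatically stable.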
1.2 A New H2 Formulation
Proof. Let the impulse response of the system be denoted by $g(t) = Ce^{At}B$. It is well known, by Parseval's theorem, that
\[ \|T_{zw}\|_2^2 = \int_0^\infty \operatorname{trace}\left[g^*(t)g(t)\right] dt = \operatorname{trace}(B^*QB) \]
where
\[ Q = \int_0^\infty e^{A^*t}C^*Ce^{At}\, dt \geq 0. \]
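The Gramian characterization can be checked numerically. A small sketch with an illustrative stable system (not from the book), comparing $\operatorname{trace}(B^*QB)$, with $Q$ obtained from the Lyapunov equation $A^*Q + QA = -C^*C$, against direct quadrature of $\operatorname{trace}(g^*(t)g(t))$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm
from scipy.integrate import quad

# Illustrative stable system; here g(t) = C e^{At} B = e^{-t}, so the norm is 1/2.
A = np.array([[-1., 1.], [0., -2.]])
B = np.array([[1.], [0.]])
C = np.array([[1., 0.]])

# Q = int_0^inf e^{A*t} C*C e^{At} dt solves A'Q + QA = -C'C (observability Gramian)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
h2_sq = np.trace(B.T @ Q @ B)

# Direct quadrature of trace(g*(t) g(t)) with g(t) = C e^{At} B
g = lambda t: C @ expm(A * t) @ B
h2_sq_num, _ = quad(lambda t: np.trace(g(t).T @ g(t)), 0, 40)
print(h2_sq, h2_sq_num)
```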
Now we are ready to consider the output feedback $H_2$ problem. We shall need the following fact.

Lemma 1.2 Suppose $K(s)$ is strictly proper and $w(t) = w_0\delta(t)$. Then $x(0^+) = B_1w_0$.

Proof. Let $K(s)$ be described by
\[ \dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}y, \qquad u = \hat{C}\hat{x}. \]
Then the closed-loop system becomes
\[ \begin{bmatrix}\dot{x}\\ \dot{\hat{x}}\end{bmatrix} = \begin{bmatrix}A & B_2\hat{C}\\ \hat{B}C_2 & \hat{A}\end{bmatrix}\begin{bmatrix}x\\ \hat{x}\end{bmatrix} + \begin{bmatrix}B_1\\ \hat{B}D_{21}\end{bmatrix}w =: \tilde{A}\begin{bmatrix}x\\ \hat{x}\end{bmatrix} + \tilde{B}w \]
and
\[ \begin{bmatrix}x(t)\\ \hat{x}(t)\end{bmatrix} = e^{\tilde{A}t}\tilde{B}w_0; \]
hence $x(0^+) = B_1w_0$ since the top block of $\tilde{B}$ is $B_1$.
With $w(t) = w_0\delta(t)$,
\[
\begin{aligned}
E\left\{\int_0^\infty \|z\|^2\, dt\right\}
&= E\left\{\int_0^\infty \left(\|z\|^2 + \frac{d}{dt}\left(x^*(t)X_2x(t)\right)\right) dt\right\} \\
&= E\left\{\int_0^\infty \left(\|z\|^2 + 2x^*X_2\dot{x}\right) dt\right\} \\
&= E\left\{\int_0^\infty \left(\|C_1x+D_{12}u\|^2 + 2x^*X_2(Ax+B_1w+B_2u)\right) dt\right\} \\
&= E\left\{\int_0^\infty \left((u-F_2x)^*R_1(u-F_2x) + 2x^*(t)X_2B_1w(t)\right) dt\right\} \\
&= E\left\{\int_0^\infty \left((u-F_2x)^*R_1(u-F_2x) + 2x^*(t)X_2B_1w_0\delta(t)\right) dt\right\} \\
&= E\left\{\int_0^\infty (u-F_2x)^*R_1(u-F_2x)\, dt\right\} + E\left\{x^*(0^+)X_2B_1w_0\right\} \\
&= E\left\{\int_0^\infty (u-F_2x)^*R_1(u-F_2x)\, dt\right\} + E\left\{w_0^*B_1^*X_2B_1w_0\right\} \\
&= E\left\{\int_0^\infty (u-F_2x)^*R_1(u-F_2x)\, dt\right\} + \operatorname{trace}(B_1^*X_2B_1)
\end{aligned}
\]
where the first equality holds because $x(0^-) = 0$ and $x(\infty) = 0$, the fourth uses the Riccati equation satisfied by $X_2$, and the penultimate uses Lemma 1.2.
or
\[ (A+LC_2)Y + Y(A+LC_2)^* + (B_1+LD_{21})(B_1+LD_{21})^* = 0. \]
Subtracting the equation for $Y_2$ from the above equation gives
\[ (A+LC_2)(Y-Y_2) + (Y-Y_2)(A+LC_2)^* + (L-L_2)R_2(L-L_2)^* = 0. \]
It is then clear that $Y \geq Y_2$, with equality if $L = L_2$. Hence $J_2$ is minimized by $L = L_2$.
In summary, the optimal $H_2$ controller can be written as
\[ \dot{\hat{x}} = A\hat{x} + B_2u + L_2(C_2\hat{x} - y), \qquad u = F_2\hat{x}, \]
i.e.,
\[ K_2(s) := \left[\begin{array}{c|c} A + B_2F_2 + L_2C_2 & -L_2 \\ \hline F_2 & 0 \end{array}\right] \]
and
\[ \min\|T_{zw}\|_2 = \sqrt{\operatorname{trace}(B_1^*X_2B_1) + \operatorname{trace}(F_2Y_2F_2^*R_1)}. \]
This is exactly the $H_2$ controller formula we are familiar with.
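The cost formula can also be checked against a closed-loop Gramian computation. The sketch below assembles the observer-form controller for an illustrative two-state plant (not from the book) and compares $\min\|T_{zw}\|_2^2$ with $\operatorname{trace}(B_{cl}^*Q_{cl}B_{cl})$ for the resulting closed loop:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative plant data (same structure as before: R1 = R2 = 1, no cross terms)
A   = np.array([[0., 1.], [-1., -1.]])
B1  = np.array([[1., 0.], [0., 0.]])
B2  = np.array([[0.], [1.]])
C1  = np.array([[1., 0.], [0., 0.]])
D12 = np.array([[0.], [1.]])
C2  = np.array([[1., 0.]])
D21 = np.array([[0., 1.]])
R1, R2 = D12.T @ D12, D21 @ D21.T

X2 = solve_continuous_are(A,   B2,   C1.T @ C1, R1, s=C1.T @ D12)
Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)
F2 = -np.linalg.solve(R1, D12.T @ C1 + B2.T @ X2)
L2 = -(B1 @ D21.T + Y2 @ C2.T) @ np.linalg.inv(R2)

# Closed loop with  xhat' = A xhat + B2 u + L2 (C2 xhat - y),  u = F2 xhat
Acl = np.block([[A, B2 @ F2], [-L2 @ C2, A + B2 @ F2 + L2 @ C2]])
Bcl = np.vstack([B1, -L2 @ D21])
Ccl = np.hstack([C1, D12 @ F2])

Qcl = solve_continuous_lyapunov(Acl.T, -Ccl.T @ Ccl)  # closed-loop observability Gramian
h2_sq_cl = np.trace(Bcl.T @ Qcl @ Bcl)
h2_sq_formula = np.trace(B1.T @ X2 @ B1) + np.trace(F2 @ Y2 @ F2.T @ R1)
print(h2_sq_cl, h2_sq_formula)
```

For the optimal controller the two numbers agree; any other stabilizing controller would give a larger Gramian-based value.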
1.3 Traditional Stochastic LQG Formulation

Note that
\[ \|T_{zw}\|_2^2 = E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\} \]
and
\[
\begin{aligned}
E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\}
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t w^*(\tau)B^*e^{A^*(t-\tau)}C^*Ce^{A(t-s)}Bw(s)\, d\tau\, ds\, dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t \operatorname{trace}\left\{B^*e^{A^*(t-\tau)}C^*Ce^{A(t-s)}Bw(s)w^*(\tau)\right\} d\tau\, ds\, dt\right\} \\
&= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t \operatorname{trace}\left\{B^*e^{A^*(t-\tau)}C^*Ce^{A(t-s)}B\, E\{w(s)w^*(\tau)\}\right\} d\tau\, ds\, dt \\
&= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t \operatorname{trace}\left\{B^*e^{A^*(t-\tau)}C^*Ce^{A(t-s)}B\,\delta(\tau-s)\right\} d\tau\, ds\, dt \\
&= \lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t \operatorname{trace}\left\{B^*e^{A^*(t-s)}C^*Ce^{A(t-s)}B\right\} ds\, dt \\
&= \operatorname{trace}\left(B^*\left[\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t e^{A^*s}C^*Ce^{As}\, ds\, dt\right]B\right) \\
&= \operatorname{trace}(B^*QB) = \|T_{zw}\|_2^2.
\end{aligned}
\]
Now consider the LQG control problem. Suppose that there exists an output feedback controller such that the closed-loop system is stable. Then $x(\infty) = 0$.

Lemma 1.4 Suppose that $K(s)$ is a strictly proper stabilizing controller. Then
\[ E\{x(t)w^*(t)\} = B_1/2. \]

Proof. Let $K(s)$ be described by
\[ \dot{\hat{x}} = \hat{A}\hat{x} + \hat{B}y, \qquad u = \hat{C}\hat{x}. \]
Then the closed-loop system becomes
\[ \begin{bmatrix}\dot{x}\\ \dot{\hat{x}}\end{bmatrix} = \begin{bmatrix}A & B_2\hat{C}\\ \hat{B}C_2 & \hat{A}\end{bmatrix}\begin{bmatrix}x\\ \hat{x}\end{bmatrix} + \begin{bmatrix}B_1\\ \hat{B}D_{21}\end{bmatrix}w =: \tilde{A}\begin{bmatrix}x\\ \hat{x}\end{bmatrix} + \tilde{B}w. \]
Then
\[ \begin{bmatrix}x(t)\\ \hat{x}(t)\end{bmatrix} = \int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}w(\tau)\, d\tau. \]
Hence
\[
\begin{aligned}
E\left\{\begin{bmatrix}x(t)\\ \hat{x}(t)\end{bmatrix}w^*(t)\right\}
&= E\left\{\int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}w(\tau)w^*(t)\, d\tau\right\} \\
&= \int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}\, E\{w(\tau)w^*(t)\}\, d\tau \\
&= \int_0^t e^{\tilde{A}(t-\tau)}\tilde{B}\,\delta(t-\tau)\, d\tau \\
&= \int_0^t e^{\tilde{A}\tau}\tilde{B}\,\delta(\tau)\, d\tau = \tilde{B}/2.
\end{aligned}
\]
In particular, $E\{x(t)w^*(t)\} = B_1/2$ since the top block of $\tilde{B}$ is $B_1$.
\[
\begin{aligned}
J := E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T \|z(t)\|^2\, dt\right\}
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\left(\|z\|^2 + \frac{d}{dt}\left(x^*(t)X_2x(t)\right)\right) dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\left(\|z\|^2 + 2x^*X_2\dot{x}\right) dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\left(\|C_1x+D_{12}u\|^2 + 2x^*X_2(Ax+B_1w+B_2u)\right) dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\left((u-F_2x)^*R_1(u-F_2x) + 2x^*(t)X_2B_1w(t)\right) dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T (u-F_2x)^*R_1(u-F_2x)\, dt\right\} + \operatorname{trace}(B_1^*X_2B_1)
\end{aligned}
\]
where the first equality holds because $\lim_{T\to\infty}x^*(T)X_2x(T)/T = 0$, the fourth uses the Riccati equation satisfied by $X_2$, and the last follows from Lemma 1.4 since $E\{2x^*X_2B_1w\} = 2\operatorname{trace}\left(X_2B_1E\{wx^*\}\right) = \operatorname{trace}(B_1^*X_2B_1)$.
Now let $u = F_2\hat{x}$, where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the system equations as
\[ \dot{\hat{x}} = A\hat{x} + B_2u + L(C_2\hat{x} - y) \qquad (1.7) \]
where $L$ is the observer gain, to be determined such that $A + LC_2$ is stable and $J$ is minimized. Let
\[ e := x - \hat{x}. \]
Then
\[ \dot{e} = (A+LC_2)e + (B_1+LD_{21})w =: A_Le + B_Lw, \qquad u - F_2x = -F_2e, \]
\[ e(t) = \int_0^t e^{A_L(t-\tau)}B_Lw(\tau)\, d\tau, \]
and
\[
\begin{aligned}
J_3 &:= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T (u-F_2x)^*R_1(u-F_2x)\, dt\right\} \\
&= E\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t w^*(\tau)B_L^*e^{A_L^*(t-\tau)}F_2^*R_1F_2\,e^{A_L(t-s)}B_Lw(s)\, d\tau\, ds\, dt\right\} \\
&= \operatorname{trace}\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t F_2^*R_1F_2\,e^{A_L(t-s)}B_L\,E\{w(s)w^*(\tau)\}\,B_L^*e^{A_L^*(t-\tau)}\, d\tau\, ds\, dt\right\} \\
&= \operatorname{trace}\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t\!\!\int_0^t F_2^*R_1F_2\,e^{A_L(t-s)}B_L\,\delta(\tau-s)\,B_L^*e^{A_L^*(t-\tau)}\, d\tau\, ds\, dt\right\} \\
&= \operatorname{trace}\left\{\lim_{T\to\infty}\frac{1}{T}\int_0^T\!\!\int_0^t F_2^*R_1F_2\,e^{A_Ls}B_LB_L^*e^{A_L^*s}\, ds\, dt\right\} \\
&= \operatorname{trace}\left\{F_2^*R_1F_2\,Y\right\}
\end{aligned}
\]
where
\[ Y = \int_0^\infty e^{A_Lt}B_LB_L^*e^{A_L^*t}\, dt \geq 0 \]
or
\[ (A+LC_2)Y + Y(A+LC_2)^* + (B_1+LD_{21})(B_1+LD_{21})^* = 0. \]
Subtracting the equation for $Y_2$ from the above equation gives
\[ (A+LC_2)(Y-Y_2) + (Y-Y_2)(A+LC_2)^* + (L-L_2)R_2(L-L_2)^* = 0. \]
It is then clear that $Y \geq Y_2$, with equality if $L = L_2$. Hence $J_3$ is minimized by $L = L_2$.
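The argument above can be illustrated numerically: for any stabilizing observer gain $L$, the Lyapunov solution $Y(L)$ dominates $Y_2$, so $\operatorname{trace}\{F_2^*R_1F_2Y\}$ is minimized at $L = L_2$. The plant data below are illustrative, and $F_2$, $R_1$ are arbitrary weights here (the comparison holds for any choice):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative data with B1 D21' = 0 and R2 = D21 D21' = I
A   = np.array([[-1., 2.], [0., -3.]])
B1  = np.array([[1., 0., 0.], [0., 1., 0.]])
C2  = np.array([[1., 0.]])
D21 = np.array([[0., 0., 1.]])
R2  = D21 @ D21.T
F2, R1 = np.array([[1., 1.]]), np.eye(1)   # arbitrary weights for the comparison

Y2 = solve_continuous_are(A.T, C2.T, B1 @ B1.T, R2, s=B1 @ D21.T)
L2 = -(B1 @ D21.T + Y2 @ C2.T) @ np.linalg.inv(R2)

def J3(L):
    """Y(L) from the Lyapunov equation above and the cost trace{F2' R1 F2 Y}."""
    BL = B1 + L @ D21
    Y = solve_continuous_lyapunov(A + L @ C2, -BL @ BL.T)
    return Y, np.trace(F2.T @ R1 @ F2 @ Y)

Ybest, Jbest = J3(L2)
print("L2 =", L2.ravel(), " min J3 =", Jbest)
```

Perturbing $L_2$ by any stabilizing increment and re-solving the Lyapunov equation gives a cost no smaller than `Jbest`, with $Y - Y_2$ positive semidefinite.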
Chapter 2

Understanding H∞ Control

We give an intuitive derivation of the $H_\infty$ controller.
Note that the system has the following state-space realization:
\[ \dot{x} = Ax + B_1w + B_2u, \qquad z = C_1x + D_{12}u, \qquad y = C_2x + D_{21}w. \]
We shall first consider state feedback $u = Fx$. Then the closed-loop system becomes
\[ \dot{x} = (A+B_2F)x + B_1w, \qquad z = (C_1+D_{12}F)x. \]
By the bounded real lemma, $\|T_{zw}\|_\infty < \gamma$ implies that there exists an $X = X^* \geq 0$ such that
\[ X(A+B_2F) + (A+B_2F)^*X + XB_1B_1^*X/\gamma^2 + (C_1+D_{12}F)^*(C_1+D_{12}F) = 0, \]
which is equivalent, by completing the square with respect to $F$, to
\[ XA + A^*X + XB_1B_1^*X/\gamma^2 - XB_2B_2^*X + C_1^*C_1 + (F + B_2^*X)^*(F + B_2^*X) = 0. \]
Intuition suggests that we can take
\[ F = -B_2^*X, \]
which gives
\[ XA + A^*X + XB_1B_1^*X/\gamma^2 - XB_2B_2^*X + C_1^*C_1 = 0. \]
This is exactly the $X_\infty$ Riccati equation under the preceding simplified conditions. Hence we can take $F = F_\infty$ and $X = X_\infty$.
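The $X_\infty$ equation above can be solved through its Hamiltonian matrix (stable invariant subspace). The sketch below uses an illustrative two-state plant satisfying the simplified conditions ($D_{12}^*D_{12} = I$, $D_{12}^*C_1 = 0$, with $z = [x_1;\, u]$) and checks that $F = -B_2^*X$ keeps the closed-loop gain below $\gamma$:

```python
import numpy as np

# Illustrative plant and level gamma (not from the book)
A  = np.array([[0., 1.], [-2., -3.]])
B1 = np.array([[1.], [0.]])
B2 = np.array([[0.], [1.]])
C1 = np.array([[1., 0.], [0., 0.]])
gamma = 2.0
n = A.shape[0]

# X A + A' X + X (B1 B1'/g^2 - B2 B2') X + C1' C1 = 0  via the Hamiltonian
R = B1 @ B1.T / gamma**2 - B2 @ B2.T
H = np.block([[A, R], [-C1.T @ C1, -A.T]])
w, V = np.linalg.eig(H)
Vs = V[:, w.real < 0]                       # basis of the stable invariant subspace
X = np.real(Vs[n:] @ np.linalg.inv(Vs[:n]))
X = (X + X.T) / 2
F = -B2.T @ X                               # the suggested state feedback

res = X @ A + A.T @ X + X @ R @ X + C1.T @ C1   # Riccati residual
Acl = A + B2 @ F
D12F = np.vstack([np.zeros((1, n)), F])         # D12 @ F with D12 = [0; 1]
tzw = lambda om: (C1 + D12F) @ np.linalg.inv(1j * om * np.eye(n) - Acl) @ B1
peak = max(np.linalg.norm(tzw(om), 2) for om in np.logspace(-2, 2, 200))
print("X =", X, " sampled peak gain =", peak)
```

The sampled peak of $\sigma_{\max}(T_{zw}(j\omega))$ stays below $\gamma$, consistent with the bounded real lemma.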
Next, suppose that there is an output feedback stabilizing controller such that $\|T_{zw}\|_\infty < \gamma$. Then $x(\infty) = 0$ because the closed-loop system is stable. Consequently, we have
\[
\begin{aligned}
\int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2\right) dt
&= \int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2 + \frac{d}{dt}(x^*X_\infty x)\right) dt \\
&= \int_0^\infty \left(\|z\|^2 - \gamma^2\|w\|^2 + \dot{x}^*X_\infty x + x^*X_\infty\dot{x}\right) dt \\
&= \int_0^\infty \left(\|u - F_\infty x\|^2 - \gamma^2\|w - \gamma^{-2}B_1^*X_\infty x\|^2\right) dt
\end{aligned}
\]
where the last equality follows by substituting the system equations, using the $X_\infty$ Riccati equation, and completing the squares.
Obviously, this also suggests intuitively that the state feedback control can be taken as $u = F_\infty x$ and that a worst state feedback disturbance would be $w = \gamma^{-2}B_1^*X_\infty x$. Since the full state is not available for feedback, we have to implement the control law using the estimated state:
\[ u = F_\infty\hat{x} \]
where $\hat{x}$ is the estimate of $x$. A standard observer can be constructed from the new system equations as
\[ \dot{\hat{x}} = (A + B_1B_1^*X_\infty/\gamma^2)\hat{x} + B_2u + L(C_2\hat{x} - y) \]
where $L$ is the observer gain to be determined. Let $e := x - \hat{x}$. Then, with $r := w - \gamma^{-2}B_1^*X_\infty x$ denoting the deviation of the disturbance from its worst-case value,
\[ \dot{e} = (A + B_1B_1^*X_\infty/\gamma^2 + LC_2)e + (B_1 + LD_{21})r, \qquad v := F_\infty e = F_\infty x - u. \]
Since it is assumed that $\|T_{vr}\|_\infty < \gamma$, it follows from the dual version of the bounded real lemma that there exists a $Y \geq 0$ such that
\[ Y(A + B_1B_1^*X_\infty/\gamma^2 + LC_2)^* + (A + B_1B_1^*X_\infty/\gamma^2 + LC_2)Y + YF_\infty^*F_\infty Y/\gamma^2 + (B_1+LD_{21})(B_1+LD_{21})^* = 0. \]
The above equation can be written as
\[ Y(A + B_1B_1^*X_\infty/\gamma^2)^* + (A + B_1B_1^*X_\infty/\gamma^2)Y + YF_\infty^*F_\infty Y/\gamma^2 + B_1B_1^* - YC_2^*C_2Y + (L + YC_2^*)(L + YC_2^*)^* = 0. \]
Again, intuition suggests that we can take
\[ L = -YC_2^*, \]
which gives
\[ Y(A + B_1B_1^*X_\infty/\gamma^2)^* + (A + B_1B_1^*X_\infty/\gamma^2)Y + YF_\infty^*F_\infty Y/\gamma^2 - YC_2^*C_2Y + B_1B_1^* = 0. \]
It is easy to verify that
\[ Y = Y_\infty(I - \gamma^{-2}X_\infty Y_\infty)^{-1} \]
where $Y_\infty$ is as given in Theorem 2.1. Since $Y \geq 0$, we must have
\[ \rho(X_\infty Y_\infty) < \gamma^2. \]
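The last two claims can be checked numerically. The sketch below (illustrative data; the same Hamiltonian-based ARE solver is used for $X_\infty$ and, in transposed form, for $Y_\infty$, which is assumed here to be the stabilizing solution of the dual Riccati equation) verifies that $Y = Y_\infty(I - \gamma^{-2}X_\infty Y_\infty)^{-1}$ satisfies the final equation above and that $\rho(X_\infty Y_\infty) < \gamma^2$:

```python
import numpy as np

def solve_are(A, R, Q):
    """Stabilizing solution of X A + A' X + X R X + Q = 0 via the Hamiltonian."""
    n = A.shape[0]
    H = np.block([[A, R], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]
    X = np.real(Vs[n:] @ np.linalg.inv(Vs[:n]))
    return (X + X.T) / 2

# Illustrative plant satisfying the simplified conditions
A  = np.array([[0., 1.], [-2., -3.]])
B1 = np.array([[1.], [0.]])
B2 = np.array([[0.], [1.]])
C1 = np.array([[1., 0.], [0., 0.]])
C2 = np.array([[1., 0.]])
gamma = 2.0
g2 = gamma**2

Xinf = solve_are(A,   B1 @ B1.T / g2 - B2 @ B2.T, C1.T @ C1)
Yinf = solve_are(A.T, C1.T @ C1 / g2 - C2.T @ C2, B1 @ B1.T)
Finf = -B2.T @ Xinf

Y = Yinf @ np.linalg.inv(np.eye(2) - Xinf @ Yinf / g2)
Ash = A + B1 @ B1.T @ Xinf / g2           # the shifted "A" of the estimator
res = Y @ Ash.T + Ash @ Y + Y @ (Finf.T @ Finf) @ Y / g2 \
      - Y @ C2.T @ C2 @ Y + B1 @ B1.T
rho = np.max(np.abs(np.linalg.eigvals(Xinf @ Yinf)))
print("residual =", np.max(np.abs(res)), " rho(Xinf Yinf) =", rho)
```

The residual vanishes to numerical precision, and the spectral radius condition $\rho(X_\infty Y_\infty) < \gamma^2$ holds for this (well-posed) example.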
2.2 An Intuitive Derivation