
Lecture Note on Robust Control

Jyh-Ching Juang
Department of Electrical Engineering
National Cheng Kung University
Tainan, TAIWAN
juang@mail.ncku.edu.tw
1
Motivation of Robust Control
J.C. Juang
For a physical process, uncertainties may appear as
Parameter variation
Unmodeled or neglected dynamics
Change of operating condition
Environmental factors
External disturbances and noises
Others
Robust control, in short, aims to ensure the performance and stability of feedback
systems in the presence of uncertainties.
Objectives of the course
Recognize and respect the uncertainty effects in dynamic systems
Understand how to analyze dynamic systems in the presence of uncertainties
Use of computer aided tools for control system analysis and design
Design of robust controllers to accommodate uncertainties
2
Course Requirements
Prerequisite:
Control engineering
Linear system theory
Multivariable systems
Working knowledge of Matlab (or other equivalent tools)
References:
Kemin Zhou with John Doyle, "Essentials of Robust Control," Prentice Hall, 1998.
Doyle, Francis, and Tannenbaum, "Feedback Control Theory," Maxwell Macmillan, 1992.
Boyd, El Ghaoui, Feron, and Balakrishnan, "Linear Matrix Inequalities in System and Control Theory," SIAM, 1994.
Zhou, Doyle, and Glover, "Robust and Optimal Control," Prentice Hall, 1996.
Maciejowski, "Multivariable Feedback Design," Addison-Wesley, 1989.
Skogestad and Postlethwaite, "Multivariable Feedback Control," John Wiley & Sons, 1996.
Dahleh and Diaz-Bobillo, "Control of Uncertain Systems," Prentice Hall, 1995.
Green and Limebeer, "Linear Robust Control," Prentice Hall, 1995.
Grading:
30% : Homework: approximately five assignments
40% : Examination: one final examination around mid-December
30% : Term project: paper studying and/or design example
3
Course Outline
1. Introduction
2. Preliminaries
(a) Linear algebra and matrix theory
(b) Function space and signal
(c) Linear system theory
(d) Measure of systems
(e) Design equations
3. Internal Stability
(a) Feedback structure and well-posedness
(b) Coprime factorization
(c) Linear fractional map
(d) Stability and stabilizing controllers
4. Performance and Robustness
(a) Feedback design tradeoffs
(b) Bode's relation
(c) Uncertainty model
(d) Small gain theorem
(e) Robust stability
(f) Robust performance
(g) Structured singular values
5. State Feedback Control
(a) H₂ state feedback
(b) H∞ state feedback
6. Output Feedback Control
(a) Observer-based controller
(b) Output injection and separation principle
(c) Loop transfer recovery
(d) H∞ controller design
7. Miscellaneous Topics
(a) Modeling issues
(b) Gain scheduling
(c) Multi-objective control
(d) Controller reduction
4
Review of Linear Algebra and Matrix Theory
5
Linear Subspaces
Let R and C denote the real and complex scalar fields, respectively.
A linear space V over a field F consists of a set on which two operations are defined
so that for each pair of elements x and y in V there is a unique element x + y in V,
and for each scalar α in F and each element x in V there is a unique element αx
in V, such that the following conditions hold:
1. For all x, y in V, x + y = y + x.
2. For all x, y, z in V, (x + y) + z = x + (y + z).
3. There exists an element in V, denoted by 0, such that x + 0 = x for each x in V.
4. For each element x in V there exists an element y in V such that x + y = 0.
5. For each element x in V, 1x = x.
6. For each α, β in F and each element x in V, (αβ)x = α(βx).
7. For each α in F and each pair of elements x and y in V, α(x + y) = αx + αy.
8. For each pair α, β in F and each element x in V, (α + β)x = αx + βx.
The elements x + y and αx are called the sum of x and y and the product of α and
x, respectively.
A subset S of a vector space V over a field F is called a subspace of V if S is itself
a vector space over F under the operations of addition and scalar multiplication
defined on V.
Let x₁, x₂, …, x_k be vectors in S. An element of the form α₁x₁ + α₂x₂ + ⋯ + α_k x_k
with α_i ∈ F is a linear combination over F of x₁, x₂, …, x_k.
The set of all linear combinations of x₁, x₂, …, x_k ∈ V is a subspace called the span
of x₁, x₂, …, x_k, denoted by
span{x₁, x₂, …, x_k} = {x | x = α₁x₁ + α₂x₂ + ⋯ + α_k x_k; α_i ∈ F}
A set of vectors x₁, x₂, …, x_k is said to be linearly dependent over F if there
exist α₁, α₂, …, α_k ∈ F, not all zero, such that α₁x₁ + α₂x₂ + ⋯ + α_k x_k = 0; otherwise
the vectors are linearly independent.
Let S be a subspace of a vector space V. A set of vectors {x₁, x₂, …, x_k} ⊂ S
is said to be a basis for S if x₁, x₂, …, x_k are linearly independent and S =
span{x₁, x₂, …, x_k}.
The dimension of a vector subspace equals the number of basis vectors.
A set of vectors x₁, x₂, …, x_k in V are mutually orthogonal if x_i* x_j = 0 for all i ≠ j,
where x_i* is the complex conjugate transpose of x_i.
A collection of subspaces S₁, S₂, …, S_k of V is mutually orthogonal if x* y = 0
whenever x ∈ S_i and y ∈ S_j for i ≠ j.
6
The vectors are orthonormal if x_i* x_j = δ_ij, the Kronecker delta, where δ_ij = 1 when
i = j and 0 otherwise.
Let S be a subspace of V. The set of all vectors in V that are orthogonal to every
vector in S is the orthogonal complement of S and is denoted by
S^⊥ = {y ∈ V : y* x = 0 for all x ∈ S}
Each vector x in V can be expressed uniquely in the form x = x_S + x_{S⊥} for x_S ∈ S
and x_{S⊥} ∈ S^⊥.
A set of vectors {u₁, u₂, …, u_k} is said to be an orthonormal basis for a k-dimensional
subspace S if the vectors form a basis of S and are orthonormal. Suppose that the
dimension of V is n; it is then possible to find a set of orthonormal vectors
{u_{k+1}, …, u_n} such that
S^⊥ = span{u_{k+1}, …, u_n}
Let M ∈ R^{m×n} be a linear transformation from R^n to R^m. M can be viewed as an
m×n matrix.
The kernel or null space of a linear transformation M is defined as
ker M = N(M) = {x ∈ R^n : Mx = 0}
The image or range of M is
img M = {y ∈ R^m : y = Mx for some x ∈ R^n}
The rank of a matrix M is defined as the dimension of img M.
An m×n matrix M is said to have full row rank if m ≤ n and rank(M) = m. It has
full column rank if m ≥ n and rank(M) = n.
Let M be an m×n real, full-rank matrix with m > n. The orthogonal complement
of M is a matrix M^⊥ of dimension m×(m−n) such that [M  M^⊥] is a square,
nonsingular matrix with the following property: M^T M^⊥ = 0.
The following properties hold: (ker M)^⊥ = img M^T and (img M)^⊥ = ker M^T.
7
Eigenvalues and Eigenvectors
A scalar λ is an eigenvalue of the square matrix M ∈ C^{n×n} if
det(λI − M) = 0
There are n eigenvalues, which are denoted by λ_i(M), i = 1, 2, …, n.
The spectrum of M, denoted by spec(M), is the collection of all eigenvalues of M,
i.e., spec(M) = {λ₁, λ₂, …, λ_n}.
The spectral radius is defined as
ρ(M) = max_i |λ_i(M)|
where λ_i is an eigenvalue of M.
If M is a Hermitian matrix, i.e., M = M*, then all eigenvalues of M are real. In
this case, λ_max(M) is used to represent the maximal eigenvalue of M and λ_min(M) is
used to represent the minimal eigenvalue of M.
If λ_i ∈ spec(M), then any nonzero vector x_i ∈ C^n that satisfies
M x_i = λ_i x_i
is a (right) eigenvector of M. Likewise, any nonzero vector w_i ∈ C^n that satisfies
w_i* M = λ_i w_i*
is a left eigenvector of M.
A cyclic matrix M admits the spectral decomposition or modal decomposition
M = Σ_{i=1}^n λ_i x_i w_i* = X Λ W
where
W = [w₁*; w₂*; …; w_n*] (rows), X = [x₁ x₂ ⋯ x_n] (columns), and Λ = diag(λ₁, λ₂, …, λ_n)
Here λ_i is an eigenvalue of M with x_i and w_i being the right and left eigenvectors,
respectively, normalized such that w_i* x_j = δ_ij.
If M is Hermitian, then there exists a unitary matrix U, i.e., U* U = I, and a real
diagonal Λ such that
M = U Λ U*
In this case, the columns of U are the (right) eigenvectors of M.
8
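As an illustration (not part of the original notes; the matrices are made-up examples), the modal decomposition and the Hermitian eigendecomposition above can be checked numerically with numpy:

```python
import numpy as np

# A matrix with distinct eigenvalues admits M = sum_i lam_i x_i w_i^*.
M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, X = np.linalg.eig(M)          # right eigenvectors are the columns of X
W = np.linalg.inv(X).conj().T      # columns of W are left eigenvectors, scaled
# With this choice the normalization w_i^* x_j = delta_ij holds automatically.

# Rebuild M from the dyads lam_i * x_i * w_i^*
M_rebuilt = sum(lam[i] * np.outer(X[:, i], W[:, i].conj()) for i in range(2))

# Hermitian case: eigh returns a unitary U and real eigenvalues, M = U Lam U^*.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, U = np.linalg.eigh(H)
```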
Matrix Inversion and Pseudo-Inverse
For a square matrix M, its inverse, denoted by M⁻¹, is the matrix such that
M M⁻¹ = M⁻¹ M = I.
Let M be a square matrix partitioned as follows:
M = [M₁₁ M₁₂; M₂₁ M₂₂]
Suppose that M₁₁ is nonsingular; then M can be decomposed as
M = [I 0; M₂₁M₁₁⁻¹ I] [M₁₁ 0; 0 S₁₁] [I M₁₁⁻¹M₁₂; 0 I]
where S₁₁ = M₂₂ − M₂₁M₁₁⁻¹M₁₂ is the Schur complement of M₁₁ in M.
If M (as partitioned above) is nonsingular, then
M⁻¹ = [M₁₁⁻¹ + M₁₁⁻¹M₁₂S₁₁⁻¹M₂₁M₁₁⁻¹   −M₁₁⁻¹M₁₂S₁₁⁻¹;
       −S₁₁⁻¹M₂₁M₁₁⁻¹                    S₁₁⁻¹]
If both M₁₁ and M₂₂ are invertible,
(M₁₁ − M₁₂M₂₂⁻¹M₂₁)⁻¹ = M₁₁⁻¹ + M₁₁⁻¹M₁₂(M₂₂ − M₂₁M₁₁⁻¹M₁₂)⁻¹M₂₁M₁₁⁻¹
The pseudo-inverse (also called Moore–Penrose inverse) of a matrix M is the matrix
M⁺ that satisfies the following conditions:
1. M M⁺ M = M
2. M⁺ M M⁺ = M⁺
3. (M M⁺)* = M M⁺
4. (M⁺ M)* = M⁺ M
Significance of the pseudo-inverse: consider the linear matrix equation with unknown
X,
A X B = C
The equation is solvable if and only if
A A⁺ C B⁺ B = C
All solutions can be characterized as
X = A⁺ C B⁺ − A⁺ A Y B B⁺ + Y
for some Y .
In the case of no solutions, the best approximation is
X_appr = A⁺ C B⁺
9
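The four Moore–Penrose conditions and the solvability test above can be verified numerically; the following sketch (my own example data, not from the notes) uses numpy's `pinv`:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 2))      # a tall random matrix
Mp = np.linalg.pinv(M)               # Moore-Penrose pseudo-inverse

# AXB = C with a consistent right-hand side: pick X0 and set C = A X0 B,
# so the solvability condition A A+ C B+ B = C must hold.
A = rng.standard_normal((3, 2))
B = rng.standard_normal((2, 3))
X0 = rng.standard_normal((2, 2))
C = A @ X0 @ B
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
X = Ap @ C @ Bp                      # the best-approximation solution
```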
Invariant Subspaces
Let M be a square matrix. A subspace S ⊂ C^n is invariant for the transformation
M, or M-invariant, if Mx ∈ S for every x ∈ S.
S is invariant for M means that the image of S under M is contained in S: MS ⊂ S.
Examples of M-invariant subspaces: any eigenspace of M, ker M, and img M.
If S is a nontrivial subspace and is M-invariant, then there exist x ∈ S and λ such
that Mx = λx.
Let λ₁, …, λ_k be eigenvalues of M (not necessarily distinct), and let x_i be the cor-
responding (generalized) eigenvectors. Then S = span{x₁, …, x_k} is an M-invariant
subspace provided that all the lower-rank generalized eigenvectors are included.
More specifically, let λ₁ = λ₂ = ⋯ = λ_l be eigenvalues of M, and let x₁, x₂, …, x_l be the
corresponding eigenvector and generalized eigenvectors obtained through the
following equations:
(M − λ₁I)x₁ = 0
(M − λ₁I)x₂ = x₁
⋮
(M − λ₁I)x_l = x_{l−1}
Then a subspace S with x_j ∈ S for some j ≤ l is an M-invariant subspace if and
only if all lower-rank eigenvectors and generalized eigenvectors of x_j are in S, i.e.,
x_i ∈ S for i ≤ j.
An M-invariant subspace S ⊂ C^n is called a stable invariant subspace if all the
eigenvalues of M constrained to S have negative real parts.
Example:
M [x₁ x₂ x₃ x₄] = [x₁ x₂ x₃ x₄] [λ₁ 1 0 0; 0 λ₁ 0 0; 0 0 λ₃ 0; 0 0 0 λ₄]
with Re λ₁ < 0, Re λ₃ < 0, and Re λ₄ > 0.
M-invariant subspace? Stable invariant subspace?
span{x₁}
span{x₂}
span{x₃}
span{x₄}
span{x₁, x₂}
span{x₁, x₂, x₃}
span{x₁, x₂, x₄}
span{x₂, x₃}
10
Vector Norm and Matrix Norm
Consider a linear space V over the field F. A real-valued function defined on all
elements x from V is called a norm, written ‖·‖, if it satisfies the following axioms:
1. (Nonnegativity) ‖x‖ ≥ 0 for all x ∈ V, and ‖x‖ = 0 if and only if x = 0.
2. (Homogeneity) ‖αx‖ = |α| ‖x‖ for all x ∈ V and α ∈ F.
3. (Triangle inequality) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x and y in V.
A linear space together with a norm defined on it becomes a normed linear space.
Vector norms: let x = [x₁ x₂ ⋯ x_n]^T be a vector in C^n. The following are norms on C^n:
1. Vector p-norm (for 1 ≤ p < ∞): ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p}
2. Vector 1-norm: ‖x‖₁ = Σ_{i=1}^n |x_i|
3. Vector 2-norm or Euclidean norm: ‖x‖₂ = √(x* x) = (Σ_{i=1}^n |x_i|²)^{1/2}
4. Vector ∞-norm: ‖x‖_∞ = max_{1≤i≤n} |x_i|
If ‖·‖_p is a vector norm on C^n and M ∈ C^{n×n}, the matrix norm induced by the
vector p-norm ‖·‖_p is
‖M‖_p = sup_{x∈C^n, x≠0} ‖Mx‖_p / ‖x‖_p
Matrix norms: let M = [m_ij] be a matrix in C^{m×n}.
1. Matrix 1-norm (column sum): ‖M‖₁ = max_j Σ_{i=1}^m |m_ij|
2. Matrix 2-norm: ‖M‖ = ‖M‖₂ = √(λ_max(M* M))
3. Matrix ∞-norm (row sum): ‖M‖_∞ = max_i Σ_{j=1}^n |m_ij|
4. Frobenius norm: ‖M‖_F = (Σ_{i=1}^m Σ_{j=1}^n |m_ij|²)^{1/2} = √(trace(M* M))
11
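A quick numerical sketch of these definitions (my own example vectors and matrices) in numpy, cross-checked against `np.linalg.norm`:

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])
norm1 = np.sum(np.abs(x))                  # vector 1-norm
norm2 = np.sqrt(np.sum(np.abs(x)**2))      # vector 2-norm
norminf = np.max(np.abs(x))                # vector inf-norm

M = np.array([[1.0, -2.0],
              [3.0,  4.0]])
col_sum = np.max(np.sum(np.abs(M), axis=0))              # induced 1-norm
row_sum = np.max(np.sum(np.abs(M), axis=1))              # induced inf-norm
two_norm = np.sqrt(np.max(np.linalg.eigvalsh(M.T @ M)))  # sqrt(lambda_max(M* M))
fro = np.sqrt(np.trace(M.T @ M))                         # Frobenius norm
```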
Singular Value Decomposition
The singular values of a matrix M are defined as
σ_i(M) = √(λ_i(M* M))
Here the σ_i(M) are real and nonnegative.
The maximal singular value, σ_max, can be shown to be
σ_max(M) = ‖M‖ = max_{x≠0} ‖Mx‖₂ / ‖x‖₂
When M is invertible, the maximal singular value of M⁻¹ is related to the minimal
singular value of M, σ_min(M), by
σ_max(M⁻¹) = 1 / σ_min(M)
The rank of M is the same as the number of nonzero singular values of M.
The matrix M and its complex conjugate transpose M* have the same singular
values, i.e., σ_i(M) = σ_i(M*).
Let M ∈ C^{m×n}. There exist unitary matrices U = [u₁ u₂ ⋯ u_m] ∈ C^{m×m} and
V = [v₁ v₂ ⋯ v_n] ∈ C^{n×n} such that
M = U Σ V*
where
Σ = [Σ₁ 0; 0 0] with Σ₁ = diag(σ₁, σ₂, …, σ_r)
and σ₁ ≥ σ₂ ≥ ⋯ ≥ σ_r ≥ 0, where r = min{m, n}.
The matrix admits the dyadic decomposition
M = Σ_{i=1}^r σ_i u_i v_i* = [u₁ ⋯ u_r] diag(σ₁, …, σ_r) [v₁ ⋯ v_r]*
ker M = span{v_{r+1}, …, v_n} and (ker M)^⊥ = span{v₁, …, v_r}.
img M = span{u₁, …, u_r} and (img M)^⊥ = span{u_{r+1}, …, u_m}.
The Frobenius norm of M equals ‖M‖_F = √(σ₁² + σ₂² + ⋯ + σ_r²).
The condition number of a matrix M is defined as κ(M) = σ_max(M) σ_max(M⁻¹).
12
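The SVD facts above (rank from nonzero singular values, dyadic reconstruction, Frobenius norm, condition number) can be checked on a small made-up matrix:

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])        # rank 1 by construction (row 2 = 2 * row 1)

U, s, Vh = np.linalg.svd(M)
r = int(np.sum(s > 1e-10))             # numerical rank = number of nonzero sigma_i

# Dyadic reconstruction M = sum_i sigma_i u_i v_i^*
M_rebuilt = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(r))

fro_from_sv = np.sqrt(np.sum(s**2))    # ||M||_F = sqrt(sigma_1^2 + ... + sigma_r^2)

A = np.array([[3.0, 0.0],
              [0.0, 0.5]])             # invertible; kappa = sigma_max(A) sigma_max(A^{-1})
kappa = np.linalg.svd(A)[1][0] * np.linalg.svd(np.linalg.inv(A))[1][0]
```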
Semidenite Matrices
A square Hermitian matrix M = M* is positive definite (semidefinite), denoted by
M > 0 (M ≥ 0), if x* M x > 0 (≥ 0) for all x ≠ 0.
All eigenvalues of a positive definite matrix are positive.
Let M = [M₁₁ M₁₂; M₂₁ M₂₂] be a Hermitian matrix. Then M > 0 if and only if M₁₁ > 0
and S₁₁ = M₂₂ − M₂₁M₁₁⁻¹M₁₂ > 0.
Let M ≥ 0. There exists a unitary matrix U and a nonnegative diagonal Λ such that
M = U Λ U*.
The square root of a positive semidefinite matrix M, denoted by M^{1/2}, satisfies
M^{1/2} M^{1/2} = M and M^{1/2} ≥ 0. It can be shown that M^{1/2} = U Λ^{1/2} U*.
Let M₁ and M₂ be two matrices such that M₁ = M₁* > 0 and M₂ = M₂* ≥ 0. Then
M₁ > M₂ if and only if ρ(M₂ M₁⁻¹) < 1
For a positive definite matrix M, its inverse M⁻¹ exists and is positive definite.
For two positive definite matrices M₁ and M₂, we have αM₁ + βM₂ > 0 when α > 0
and β ≥ 0.
13
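The square-root construction M^{1/2} = U Λ^{1/2} U* and the Schur-complement test can be sketched numerically (example matrices are my own):

```python
import numpy as np

# A positive semidefinite matrix built as G^T G.
G = np.array([[1.0, 2.0],
              [0.0, 1.0]])
M = G.T @ G

evals, U = np.linalg.eigh(M)                 # M = U diag(evals) U^T, evals >= 0
M_half = U @ np.diag(np.sqrt(evals)) @ U.T   # M^{1/2} = U Lambda^{1/2} U^T

# Schur-complement test on a partitioned Hermitian matrix:
# M > 0 iff M11 > 0 and S11 = M22 - M21 M11^{-1} M12 > 0.
M11, M12 = np.array([[2.0]]), np.array([[1.0]])
M21, M22 = np.array([[1.0]]), np.array([[3.0]])
S11 = M22 - M21 @ np.linalg.inv(M11) @ M12
big = np.block([[M11, M12], [M21, M22]])
```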
Matrix Calculus
Let x = [x_i] be an n-vector and f(x) be a scalar function of x. The first derivative
(gradient vector) is defined as
∂f(x)/∂x = [g_i] = [∂f(x)/∂x₁  ∂f(x)/∂x₂  ⋯  ∂f(x)/∂x_n]^T
and the second derivative (Hessian matrix) is
∂²f(x)/∂x² = [H_ij] with H_ij = ∂²f(x)/∂x_i∂x_j
Let X = [X_ij] be an m×n matrix and f(X) a scalar function of X. The derivative
of f(X) with respect to X is defined as
∂f(X)/∂X = [∂f(X)/∂X_ij]
Formulas for derivatives (assuming that x and y are vectors and A, B, and X are
real matrices):
∂(y^T A x)/∂x = A^T y
∂(x^T A x)/∂x = (A + A^T) x
∂(trace X)/∂X = I
∂(trace AXB)/∂X = A^T B^T
∂(trace AX^T B)/∂X = BA
∂(trace X^T AX)/∂X = (A + A^T) X
∂(trace AXBX)/∂X = A^T X^T B^T + B^T X^T A^T
∂(log det X)/∂X = (X^T)⁻¹
∂(det X)/∂X = (det X)(X^T)⁻¹
14
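Two of the formulas above can be validated against central finite differences; this sketch (random data, my own construction) checks ∂(x^T A x)/∂x = (A + A^T)x and ∂(trace AXB)/∂X = A^T B^T:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))
x = rng.standard_normal(n)
h = 1e-6

def num_grad(f, x):
    """Central-difference gradient of a scalar function f at the vector x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# d/dx (x^T A x) = (A + A^T) x
g_formula = (A + A.T) @ x
g_numeric = num_grad(lambda v: v @ A @ v, x)

# d/dX trace(A X B) = A^T B^T, checked entry by entry
dX = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = h
        dX[i, j] = (np.trace(A @ (X + E) @ B) - np.trace(A @ (X - E) @ B)) / (2 * h)
```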
Functional Space and Signal
15
Function Spaces
Let V be a vector space over C and ‖·‖ a norm defined on V. Then V is a normed
space.
The space L_p(−∞, ∞) (for 1 ≤ p < ∞) consists of all Lebesgue measurable functions
w(t) defined on the interval (−∞, ∞) such that
‖w‖_p := (∫_{−∞}^{∞} |w(t)|^p dt)^{1/p} < ∞
The space L_∞(−∞, ∞) consists of all Lebesgue measurable functions w(t) with a
bounded
‖w‖_∞ := ess sup_{t∈R} |w(t)|
Let V be a vector space over C. An inner product on V is a complex-valued function
⟨·, ·⟩ : V × V → C
such that for any u, v, w ∈ V and α, β ∈ C:
1. ⟨u, αv + βw⟩ = α⟨u, v⟩ + β⟨u, w⟩
2. ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩
3. ⟨u, u⟩ > 0 if u ≠ 0
A vector space V with an inner product is an inner product space.
A Hilbert space is a complete inner product space with the norm induced by its
inner product.
The space L₂ = L₂(−∞, ∞) is a Hilbert space of matrix-valued functions on R, with
inner product
⟨u, v⟩ := ∫_{−∞}^{∞} trace(u*(t) v(t)) dt
The space L₂₊ = L₂[0, ∞) is the subspace of L₂(−∞, ∞) of functions that are zero for t < 0.
The space L₂₋ = L₂(−∞, 0] is the subspace of L₂(−∞, ∞) of functions that are zero for t > 0.
Let V be a Hilbert space and U ⊂ V a subset. Then the orthogonal complement of
U, denoted by U^⊥, is defined as
U^⊥ = {u ∈ V : ⟨u, v⟩ = 0 for all v ∈ U}
The orthogonal complement is a Hilbert space.
Let S = L₂₊ ⊂ L₂; then S^⊥ = L₂₋.
Let U and W be subspaces of a vector space V. V is said to be the direct sum
of U and W, written V = U ⊕ W, if U ∩ W = {0} and every element v ∈ V can be
expressed as v = u + w with u ∈ U and w ∈ W. If V is an inner product space and U
and W are orthogonal, then V is said to be the orthogonal direct sum of U and W.
The space L₂ is the orthogonal direct sum of L₂₊ and L₂₋.
16
Power and Spectral Signals
Let w(t) be a function of time. The autocorrelation matrix is
R_ww(τ) := lim_{T→∞} (1/2T) ∫_{−T}^{T} w(t + τ) w*(t) dt
if the limit exists and is finite for all τ.
The Fourier transform of the autocorrelation matrix is the spectral density, de-
noted by S_ww(jω):
S_ww(jω) := ∫_{−∞}^{∞} R_ww(τ) e^{−jωτ} dτ
The autocorrelation can be obtained from the spectral density by performing an
inverse Fourier transform:
R_ww(τ) = (1/2π) ∫_{−∞}^{∞} S_ww(jω) e^{jωτ} dω
A signal is a power signal if the autocorrelation matrix R_ww(τ) exists for all τ and
the power spectral density function S_ww(jω) exists.
The power of w(t) is defined as
‖w‖_rms := (lim_{T→∞} (1/2T) ∫_{−T}^{T} ‖w(t)‖² dt)^{1/2} = √(trace[R_ww(0)])
In terms of the spectral density function,
‖w‖²_rms = (1/2π) ∫_{−∞}^{∞} trace[S_ww(jω)] dω
Note that ‖·‖_rms is not a norm, since a finite-duration signal has a zero rms value.
17
Signal Quantization
For a signal w(t) mapping from [0, ∞) to R (or R^m), its measures can be defined as
1. ∞-norm (peak value)
‖w‖_∞ = sup_{t≥0} |w(t)|
or
‖w‖_∞ = sup_{t≥0} max_{1≤i≤m} |w_i(t)| = max_{1≤i≤m} ‖w_i‖_∞
2. 2-norm (total energy)
‖w‖₂ = (∫_0^∞ w²(t) dt)^{1/2}
or
‖w‖₂ = (∫_0^∞ Σ_{i=1}^m w_i²(t) dt)^{1/2} = (Σ_{i=1}^m ‖w_i‖₂²)^{1/2} = (∫_0^∞ w^T(t) w(t) dt)^{1/2}
3. 1-norm (resource consumption)
‖w‖₁ = ∫_0^∞ |w(t)| dt
or
‖w‖₁ = ∫_0^∞ Σ_{i=1}^m |w_i(t)| dt = Σ_{i=1}^m ‖w_i‖₁
4. p-norm
‖w‖_p = (∫_0^∞ |w(t)|^p dt)^{1/p}
5. rms (root mean square) value (average power)
‖w‖_rms = (lim_{T→∞} (1/T) ∫_0^T w²(t) dt)^{1/2}
or
‖w‖_rms = (lim_{T→∞} (1/T) ∫_0^T w^T(t) w(t) dt)^{1/2}
If ‖w‖₂ < ∞, then ‖w‖_rms = 0.
If w is a power signal and ‖w‖_∞ < ∞, then ‖w‖_rms ≤ ‖w‖_∞.
If w is in L₁ and in L_∞, then ‖w‖₂² ≤ ‖w‖₁ ‖w‖_∞.
18
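These signal measures, and the inequality ‖w‖₂² ≤ ‖w‖₁‖w‖_∞, can be checked numerically for w(t) = e^{−2t} (t ≥ 0), whose exact values are ‖w‖₁ = 1/2, ‖w‖₂ = 1/2, ‖w‖_∞ = 1. A sketch (my own example, simple trapezoid quadrature on a truncated grid):

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoid rule, so the sketch does not depend on numpy's version."""
    dx = np.diff(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

t = np.linspace(0.0, 20.0, 200001)   # the tail beyond t = 20 is negligible
w = np.exp(-2.0 * t)

norm1 = trapezoid(np.abs(w), t)
norm2 = np.sqrt(trapezoid(w**2, t))
norminf = np.max(np.abs(w))
```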
Check the following signals for membership in L₁, L₂, L_∞, and the class of power
signals:
w(t) = sin t
w(t) = e^{−2t} for t ≥ 0, and 0 for t < 0
w(t) = 1/(1 + t)
w(t) = 1/√t for 0 < t ≤ 1, and 0 for t > 1 or t < 0
Relationship among the classes (the original slide shows a Venn diagram): L_∞, the
power signals, L₂, and L₁ are overlapping but distinct classes; none contains the
others.
19
Review of Linear System Theory
20
Linear Systems
A finite-dimensional linear time-invariant dynamical system can be described by
the following equations:
ẋ = Ax + Bw,  x(0) = x_o
z = Cx + Dw
where x(t) ∈ R^n is the state, w(t) ∈ R^m is the input, and z(t) ∈ R^p is the output.
The transfer function matrix from w to z is defined through
Z(s) = M(s) W(s)
where Z(s) and W(s) are the Laplace transforms of z(t) and w(t), respectively. It is
known that
M(s) = D + C(sI − A)⁻¹B
For simplicity, the shorthand notation
M(s) = [A B; C D]
is used.
The state response for the given initial condition x_o and input w(t) is
x(t) = e^{At} x_o + ∫_0^t e^{A(t−τ)} B w(τ) dτ
and the output response is
z(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B w(τ) dτ + D w(t)
Assume that the initial condition is zero, D = 0, and the input is an impulse; then
the output admits
m(t) = impulse response of M(s) = C e^{At} B
With respect to a power signal input, the power spectral density of the output is
related to the power spectral density of the input by
S_zz(jω) = M(jω) S_ww(jω) M*(jω)
The system is stable (A is Hurwitz) if all the eigenvalues of the matrix A are in the
open left half plane.
21
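The closed-form state response above can be checked against a brute-force ODE integration; for a unit-step input and invertible A the integral term reduces to A⁻¹(e^{At} − I)B. A sketch with made-up system matrices:

```python
import numpy as np
from scipy.linalg import expm

# x' = Ax + Bw, z = Cx + Dw, with w(t) = 1 (unit step) and x(0) = x_o.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # stable: eigenvalues -1 and -2
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x_o = np.array([[1.0], [0.0]])

t_end, dt = 2.0, 1e-4

# Fine forward-Euler integration of the state equation
x = x_o.copy()
for _ in range(int(t_end / dt)):
    x = x + dt * (A @ x + B * 1.0)

# Closed form: x(t) = e^{At} x_o + int_0^t e^{A(t-tau)} B dtau
#                   = e^{At} x_o + A^{-1} (e^{At} - I) B   (unit step, A invertible)
eAt = expm(A * t_end)
x_exact = eAt @ x_o + np.linalg.inv(A) @ (eAt - np.eye(2)) @ B
z_exact = C @ x_exact + D * 1.0
```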
Controllability and Observability
The pair (A, B) is controllable if and only if
1. For any initial state x_o, any t > 0, and any final state x_f, there exists a piecewise
continuous input w(·) such that x(t) = x_f.
2. The controllability matrix [B AB A²B ⋯ A^{n−1}B] has full row rank, i.e.,
⟨A | img B⟩ := Σ_{i=1}^n img(A^{i−1}B) = R^n.
3. The matrix
W_c(t) = ∫_0^t e^{Aτ} B B^T e^{A^T τ} dτ
is positive definite for any t > 0.
4. PBH test: the matrix [A − λI  B] has full row rank for all λ in C.
5. Let λ and x be any eigenvalue and any corresponding left eigenvector of A, i.e.,
x* A = λ x*; then x* B ≠ 0.
6. The eigenvalues of A + BF can be freely assigned by a suitable choice of F.
The pair (A, B) is stabilizable if and only if
1. The matrix [A − λI  B] has full row rank for all λ in C with Re λ ≥ 0.
2. For all λ and x such that x* A = λ x* and Re λ ≥ 0, we have x* B ≠ 0.
3. There exists a matrix F such that A + BF is Hurwitz.
The pair (C, A) is observable if and only if
1. For any t > 0, the initial state x_o can be determined from the time history of
the input w(t) and the output z(t) on the interval [0, t].
2. The observability matrix [C; CA; CA²; …; CA^{n−1}] has full column rank, i.e.,
∩_{i=1}^n ker(CA^{i−1}) = {0}.
3. The matrix
W_o(t) = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ
is positive definite for any t > 0.
4. The matrix [A − λI; C] has full column rank for all λ in C.
5. The eigenvalues of A + HC can be freely assigned by a suitable choice of H.
6. For all λ and x ≠ 0 such that Ax = λx, we have Cx ≠ 0.
7. The pair (A^T, C^T) is controllable.
The pair (C, A) is detectable if and only if
1. The matrix [A − λI; C] has full column rank for all λ with Re λ ≥ 0.
2. There exists a matrix H such that A + HC is Hurwitz.
3. For all λ and x ≠ 0 such that Ax = λx and Re λ ≥ 0, we have Cx ≠ 0.
Kalman canonical decomposition: through a nonsingular coordinate transforma-
tion T, the realization admits
[TAT⁻¹ TB; CT⁻¹ D] =
[ A_co  0     A_13  0     | B_co ]
[ A_21  A_cō  A_23  A_24  | B_cō ]
[ 0     0     A_c̄o  0     | 0    ]
[ 0     0     A_43  A_c̄ō  | 0    ]
[ C_co  0     C_c̄o  0     | D    ]
(c = controllable, o = observable; a bar denotes the complementary part). In this
case, the transfer function is
M(s) = D + C(sI − A)⁻¹B = D + C_co(sI − A_co)⁻¹B_co
The state space realization (A, B, C, D) is minimal if and only if (A, B) is controllable
and (C, A) is observable.
23
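The rank-based tests above are easy to run numerically; this sketch (a double-integrator example of my own choosing) checks the controllability matrix, the observability matrix, and the PBH test:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])    # double integrator
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])    # position measurement
n = A.shape[0]

# Controllability matrix [B AB ... A^{n-1}B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix [C; CA; ...; CA^{n-1}]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# PBH test: [A - lam I, B] must have full row rank at every eigenvalue of A
pbh_ok = all(
    np.linalg.matrix_rank(np.hstack([A - lam * np.eye(n), B])) == n
    for lam in np.linalg.eigvals(A)
)
```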
State Space Algebra
Let
M₁(s): ẋ₁ = A₁x₁ + B₁w₁, z₁ = C₁x₁ + D₁w₁
M₂(s): ẋ₂ = A₂x₂ + B₂w₂, z₂ = C₂x₂ + D₂w₂
In terms of the compact representation, M₁(s) = [A₁ B₁; C₁ D₁] and M₂(s) = [A₂ B₂; C₂ D₂].
Parallel connection of M₁(s) and M₂(s): both systems receive the same input and
their outputs add (block diagram omitted).
The new state is x = [x₁; x₂], the input is w = w₁ = w₂, and the output is z = z₁ + z₂.
Thus
ẋ = [A₁ 0; 0 A₂] x + [B₁; B₂] w and z = [C₁ C₂] x + (D₁ + D₂) w
or
M₁(s) + M₂(s) = [A₁ 0 B₁; 0 A₂ B₂; C₁ C₂ D₁+D₂]
Series connection of two systems: w₂ → M₂(s) → z₂ = w₁ → M₁(s) → z₁.
The connection gives
M₁(s)M₂(s) = [A₁ B₁; C₁ D₁][A₂ B₂; C₂ D₂]
= [A₁ B₁C₂ B₁D₂; 0 A₂ B₂; C₁ D₁C₂ D₁D₂]
= [A₂ 0 B₂; B₁C₂ A₁ B₁D₂; D₁C₂ C₁ D₁D₂]
Inverse system:
M₁⁻¹(s) = [A₁ − B₁D₁⁻¹C₁  B₁D₁⁻¹; −D₁⁻¹C₁  D₁⁻¹]
provided that D₁ is invertible.
24
It can be verified that M₁(s)M₁⁻¹(s) = I:
w₁ = D₁⁻¹z₁ − D₁⁻¹C₁x₁
ẋ₁ = A₁x₁ + B₁(D₁⁻¹z₁ − D₁⁻¹C₁x₁) = (A₁ − B₁D₁⁻¹C₁)x₁ + B₁D₁⁻¹z₁
Transpose or dual system: M₁^T(s) = [A₁^T C₁^T; B₁^T D₁^T].
Conjugate system: M₁~(s) = M₁^T(−s) = [−A₁^T −C₁^T; B₁^T D₁^T]. Thus M₁~(jω) = M₁*(jω).
Feedback connection of M₁(s) and M₂(s): the connection gives z = z₁ = w₂ and
w₁ = w + z₂. Thus
z = C₁x₁ + D₁(w + z₂)
z₂ = C₂x₂ + D₂z
z = (I − D₁D₂)⁻¹(C₁x₁ + D₁C₂x₂ + D₁w)
z₂ = (I − D₂D₁)⁻¹(D₂C₁x₁ + C₂x₂ + D₂D₁w)
ẋ₁ = A₁x₁ + B₁w + B₁(I − D₂D₁)⁻¹(D₂C₁x₁ + C₂x₂ + D₂D₁w)
ẋ₂ = A₂x₂ + B₂(I − D₁D₂)⁻¹(C₁x₁ + D₁C₂x₂ + D₁w)
(I − M₁(s)M₂(s))⁻¹M₁(s) =
[ A₁ + B₁(I − D₂D₁)⁻¹D₂C₁   B₁(I − D₂D₁)⁻¹C₂          B₁ + B₁(I − D₂D₁)⁻¹D₂D₁ ]
[ B₂(I − D₁D₂)⁻¹C₁          A₂ + B₂(I − D₁D₂)⁻¹D₁C₂   B₂(I − D₁D₂)⁻¹D₁        ]
[ (I − D₁D₂)⁻¹C₁            (I − D₁D₂)⁻¹C₂            (I − D₁D₂)⁻¹D₁          ]
25
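The parallel and series realizations above can be verified by evaluating the transfer functions at a test point; a sketch with made-up first-order subsystems:

```python
import numpy as np

def tf(A, B, C, D, s):
    """Evaluate M(s) = D + C (sI - A)^{-1} B."""
    n = A.shape[0]
    return D + C @ np.linalg.solve(s * np.eye(n) - A, B.astype(complex))

A1, B1 = np.array([[-1.0]]), np.array([[1.0]])
C1, D1 = np.array([[1.0]]), np.array([[0.5]])
A2, B2 = np.array([[-2.0]]), np.array([[1.0]])
C2, D2 = np.array([[2.0]]), np.array([[1.0]])
Z = np.zeros((1, 1))

# Series connection M1(s) M2(s), state [x1; x2]
As = np.block([[A1, B1 @ C2], [Z, A2]])
Bs = np.vstack([B1 @ D2, B2])
Cs = np.hstack([C1, D1 @ C2])
Ds = D1 @ D2

# Parallel connection M1(s) + M2(s)
Ap = np.block([[A1, Z], [Z, A2]])
Bp = np.vstack([B1, B2])
Cp = np.hstack([C1, C2])
Dp = D1 + D2

s = 1.0 + 2.0j   # arbitrary test point away from the poles
```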
System Poles and Zeros
Let M(s) = D + C(sI − A)⁻¹B = [A B; C D]. The eigenvalues of A are the poles of
M(s).
The system matrix of M(s) is defined as
Q(s) = [A − sI  B; C  D]
which is a polynomial matrix.
The normal rank of Q(s) is the maximally possible rank of Q(s) for at least one s ∈ C.
A complex number λ_o ∈ C is called an invariant zero of the system realization if it
satisfies
rank [A − λ_oI  B; C  D] < normal rank [A − sI  B; C  D]
The invariant zeros are not changed by constant state feedback, constant output
injection, or similarity transformation.
Suppose that [A − sI  B; C  D] has full column normal rank. Then λ_o ∈ C is an invari-
ant zero if and only if there exist 0 ≠ x ∈ C^n and w ∈ C^m such that
[A − λ_oI  B; C  D] [x; w] = 0
Moreover, if w = 0, then λ_o is also a nonobservable mode.
When the system is square, i.e., it has an equal number of inputs and outputs, the
invariant zeros can be computed by solving the generalized eigenvalue problem
[A B; C D][x; w] = λ [I 0; 0 0][x; w]
for some generalized eigenvalue λ and generalized eigenvector [x; w].
Suppose that [A − sI  B; C  D] has full row normal rank. Then λ_o ∈ C is an invariant
zero if and only if there exist 0 ≠ y ∈ C^n and v ∈ C^p such that
[y* v*][A − λ_oI  B; C  D] = 0
Moreover, if v = 0, then λ_o is also a noncontrollable mode.
26
The system M(s) has full column normal rank if and only if [A − sI  B; C  D] has
full column normal rank. Note that
[A − sI  B; C  D] = [I 0; C(A − sI)⁻¹ I][A − sI  B; 0  M(s)]
and
normal rank [A − sI  B; C  D] = n + normal rank M(s)
Let M(s) be a p×m transfer matrix and let (A, B, C, D) be a minimal realization. If
λ_o is a zero of M(s) that is distinct from the poles, then there exists an input and
an initial state such that the output of the system z(t) is zero for all t.
Let x_o and w_o be such that
[A − λ_oI  B; C  D][x_o; w_o] = 0
i.e.,
A x_o + B w_o = λ_o x_o and C x_o + D w_o = 0
Consider the input w(t) = w_o e^{λ_o t} and initial state x(0) = x_o.
The output is
z(t) = C e^{At} x_o + ∫_0^t C e^{A(t−τ)} B w(τ) dτ + D w(t)
     = C e^{At} x_o + C e^{At} ∫_0^t e^{(λ_oI−A)τ} (λ_oI − A) x_o dτ + D w_o e^{λ_o t}
     = C e^{At} x_o + C e^{At} [e^{(λ_oI−A)τ}]_{τ=0}^{t} x_o + D w_o e^{λ_o t}
     = C e^{λ_o t} x_o + D w_o e^{λ_o t} = (C x_o + D w_o) e^{λ_o t} = 0
(using B w_o = (λ_oI − A)x_o in the second line).
27
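The generalized-eigenvalue computation of invariant zeros can be sketched with scipy; the system below is my own SISO example, M(s) = (s + 3)/(s² + 3s + 2), which has a single invariant zero at s = −3 (the infinite generalized eigenvalues are discarded):

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # companion form, poles at -1 and -2
B = np.array([[0.0], [1.0]])
C = np.array([[3.0, 1.0]])       # numerator s + 3
D = np.array([[0.0]])

n, m = 2, 1
Q = np.block([[A, B], [C, D]])
P = np.block([[np.eye(n), np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])

# Generalized eigenvalues of the pencil (Q, P); the finite ones are the zeros.
w = eig(Q, P, right=False)
zeros = w[np.isfinite(w)]
```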
Measure of System and Fundamental Equations
28
H₂ Space

Let S be an open set in C and let f(s) be a complex-valued function defined on S.
Then f(s) is said to be analytic at a point z_o in S if it is differentiable at z_o and
also at each point in some neighborhood of z_o.
If f(s) is analytic at z_o, then f has continuous derivatives of all orders at z_o.
A function f(s) is said to be analytic in S if it has a derivative, i.e., is analytic, at
each point of S.
A matrix-valued function is analytic in S if every element of the matrix is analytic
in S.
All real rational stable transfer function matrices are analytic in the right-half
plane.
Maximum modulus theorem: if f(s) is defined and continuous on a closed bounded
set S and analytic on the interior of S, then |f(s)| cannot attain its maximum in
the interior of S unless f(s) is a constant.
|f(s)| can only achieve its maximum on the boundary of S, i.e.,
max_{s∈S} |f(s)| = max_{s∈∂S} |f(s)|
where ∂S denotes the boundary of S.
The space L₂ = L₂(jR) is a Hilbert space of matrix-valued functions on jR and
consists of all complex matrix functions M such that ∫_{−∞}^{∞} trace[M*(jω)M(jω)] dω < ∞.
The inner product for this Hilbert space is defined as
⟨M, N⟩ := (1/2π) ∫_{−∞}^{∞} trace[M*(jω)N(jω)] dω
The induced norm is given by
‖M‖₂ := √⟨M, M⟩
All real rational strictly proper transfer function matrices with no poles on the
imaginary axis form a subspace of L₂(jR) which is denoted by RL₂.
The space H₂ is a subspace of L₂(jR) with matrix functions M(s) analytic in Re(s) >
0. The corresponding norm is defined as
‖M‖₂² := sup_{σ>0} (1/2π) ∫_{−∞}^{∞} trace[M*(σ + jω)M(σ + jω)] dω
It can be shown that
‖M‖₂² = (1/2π) ∫_{−∞}^{∞} trace[M*(jω)M(jω)] dω
29
The real rational subspace of H₂, which consists of all strictly proper and real
rational stable transfer function matrices, is denoted by RH₂.
The space H₂^⊥ is the orthogonal complement of H₂ in L₂.
If M is a strictly proper, stable, real rational transfer function matrix, then M ∈ H₂
and M~ ∈ H₂^⊥.
Parseval's relation: an isometric isomorphism between the L₂ spaces in the time
domain and the L₂ spaces in the frequency domain:
L₂(−∞, ∞) ≅ L₂(jR)
L₂[0, ∞) ≅ H₂
L₂(−∞, 0] ≅ H₂^⊥
If m(t) ∈ L₂(−∞, ∞) and its bilateral Laplace transform is M(s) ∈ L₂(jR), then
‖M‖₂ = ‖m‖₂
Define an orthogonal projection P₊ : L₂(−∞, ∞) → L₂[0, ∞) such that for any func-
tion m(t) ∈ L₂(−∞, ∞),
P₊ m(t) = m(t) for t ≥ 0, and 0 otherwise
On the other hand, the operator P₋ from L₂(−∞, ∞) to L₂(−∞, 0] is defined as
P₋ m(t) = 0 for t > 0, and m(t) for t ≤ 0
Relationships among the function spaces (the original slide shows a diagram): the
Laplace transform and its inverse map L₂(−∞, 0] to H₂^⊥, L₂(−∞, ∞) to L₂(jR), and
L₂[0, ∞) to H₂, while the projections P₊ and P₋ act correspondingly in both domains.
30
H∞ Space

The space L∞(jR) is a Banach space of matrix-valued functions that are essentially
bounded on jR, with norm
‖M‖_∞ := ess sup_{ω∈R} σ_max[M(jω)]
The rational subspace of L∞, denoted by RL∞, consists of all proper and real rational
transfer function matrices with no poles on the imaginary axis.
The space H∞ is a subspace of L∞ with functions that are analytic and bounded
in the open right-half plane. The H∞ norm is defined as
‖M‖_∞ := sup_{Re(s)>0} σ_max[M(s)] = sup_{ω∈R} σ_max[M(jω)]
The real rational subspace of H∞ is denoted by RH∞, which consists of all proper
and real rational stable transfer function matrices.
The space H∞⁻ is a subspace of L∞ with functions that are analytic and bounded
in the open left-half plane. The H∞⁻ norm is defined as
‖M‖_∞ := sup_{Re(s)<0} σ_max[M(s)] = sup_{ω∈R} σ_max[M(jω)]
The real rational subspace of H∞⁻ is denoted by RH∞⁻, which consists of all proper
and real rational antistable transfer function matrices.
Let M(s) ∈ L∞ be a p×m transfer function matrix. For the multiplication operator
Λ_M : L₂^m → L₂^p that is defined as
Λ_M w := Mw
we have
‖Λ_M‖ = ‖M‖_∞
31
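For a real rational M(s), ‖M‖_∞ = sup_ω σ_max[M(jω)] can be estimated by gridding the frequency axis. A sketch for a lightly damped second-order system of my own choosing, whose peak gain is 1/(2ζ√(1−ζ²)) with ζ = 0.05:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])    # wn = 1, zeta = 0.05
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max(w):
    """Largest singular value of M(jw) = D + C (jwI - A)^{-1} B."""
    n = A.shape[0]
    M = D + C @ np.linalg.solve(1j * w * np.eye(n) - A, B.astype(complex))
    return np.linalg.svd(M, compute_uv=False)[0]

ws = np.linspace(0.5, 1.5, 20001)           # grid around the resonance
hinf_est = max(sigma_max(w) for w in ws)

zeta = 0.05
peak_exact = 1.0 / (2.0 * zeta * np.sqrt(1.0 - zeta**2))
```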
Lyapunov Equation
Given A ∈ R^{n×n} and Q = Q^T ∈ R^{n×n}, the equation in X ∈ R^{n×n}
A^T X + XA + Q = 0
is called a Lyapunov equation.
Define the map
Φ : R^{n×n} → R^{n×n}, Φ(X) = A^T X + XA
Then the Lyapunov equation has a solution X if and only if Q ∈ img Φ.
The solution is unique if and only if Φ is injective.
The Lyapunov equation has a unique solution if and only if no two eigenvalues
of A sum to zero.
Assume that A is stable. Then
1. X = ∫_0^∞ e^{A^T t} Q e^{At} dt.
2. X > 0 if Q > 0, and X ≥ 0 if Q ≥ 0.
3. If Q ≥ 0, then (Q, A) is observable if and only if X > 0.
Suppose A, Q, and X satisfy the Lyapunov equation. Then
1. Re λ_i(A) ≤ 0 if X > 0 and Q ≥ 0.
2. A is stable if X > 0 and Q > 0.
3. A is stable if (Q, A) is detectable, Q ≥ 0, and X ≥ 0.
Proof (of 3):
(a) Suppose not; let x ≠ 0 be such that Ax = λx with Re λ ≥ 0.
(b) Form x*(A^T X + XA + Q)x = 0, which reduces to 2 Re(λ) x* Xx + x* Qx = 0.
(c) Thus x* Qx = 0,
(d) which leads to Qx = 0.
(e) This implies [A − λI; Q] x = 0, contradicting the detectability assump-
tion.
32
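The integral representation X = ∫₀^∞ e^{A^T t} Q e^{At} dt can be checked against a direct Lyapunov solve; a sketch with a made-up stable A (note scipy's `solve_continuous_lyapunov(a, q)` solves a x + x aᴴ = q, so Aᵀ and −Q are passed to match the convention used here):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])    # stable
Q = np.eye(2)

# Solve A^T X + X A + Q = 0
X = solve_continuous_lyapunov(A.T, -Q)

# Compare with the integral, truncated and evaluated by the trapezoid rule
ts = np.linspace(0.0, 30.0, 3001)
vals = np.array([expm(A.T * t) @ Q @ expm(A * t) for t in ts])
dt = ts[1] - ts[0]
X_int = np.sum(0.5 * (vals[1:] + vals[:-1]), axis=0) * dt
```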
Controllability and Observability Gramians
Consider a stable transfer function M(s) with z = M(s)w and initial state x_o (block
diagram omitted). Assume the following minimal realization:
M(s) = C(sI − A)⁻¹B
That is,
ẋ = Ax + Bw
z = Cx
Suppose that there is no input from 0 to ∞; then the output energy generated by
the initial state can be computed as follows:
(∫_0^∞ z^T(t) z(t) dt)^{1/2} = (x_o^T [∫_0^∞ e^{A^T t} C^T C e^{At} dt] x_o)^{1/2} = (x_o^T W_o x_o)^{1/2}
where W_o = ∫_0^∞ e^{A^T t} C^T C e^{At} dt is the observability gramian of the system M(s).
Indeed, W_o is the positive definite solution of the Lyapunov equation
A^T W_o + W_o A + C^T C = 0
When A is stable, the system is observable if and only if W_o > 0.
On the other hand, the input from −∞ to 0 that drives the state to x_o satisfies
x_o = ∫_{−∞}^0 e^{−At} B w(t) dt
If we minimize the input energy subject to this reachability condition, the optimal
input can be found to be
w(t) = B^T e^{−A^T t} W_c⁻¹ x_o
where
W_c = ∫_{−∞}^0 e^{−At} B B^T e^{−A^T t} dt = ∫_0^∞ e^{At} B B^T e^{A^T t} dt
33
The matrix W_c is the controllability gramian of the system M(s). It satisfies the
Lyapunov equation
A W_c + W_c A^T + B B^T = 0
When A is stable, the system is controllable if and only if W_c is positive definite.
Note that the input energy needed is
(∫_{−∞}^0 w^T(t) w(t) dt)^{1/2} = (x_o^T W_c⁻¹ x_o)^{1/2}
In summary, the observability gramian W_o determines the total energy in the
system output starting from a given initial state (with no input), and the control-
lability gramian W_c determines which points in the state space can be reached
using an input with total energy one.
Both W_o and W_c depend on the realization. Through a similarity transformation,
the realization (TAT⁻¹, TB, CT⁻¹) gives the gramians TW_cT^T and T^{−T}W_oT⁻¹,
respectively.
The eigenvalues of W_cW_o are invariant under similarity transformation and are
thus a system property.
34
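The gramian Lyapunov equations and the invariance of the eigenvalues of W_cW_o under similarity transformation can be checked numerically (example realization and transformation are my own):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.0],
              [1.0, -3.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

# A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

hsv = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])  # Hankel singular values

# Similarity transformation: the gramians change, eig(Wc Wo) does not.
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
Wc2 = T @ Wc @ T.T
Wo2 = Ti.T @ Wo @ Ti
hsv2 = np.sqrt(np.sort(np.linalg.eigvals(Wc2 @ Wo2).real)[::-1])
```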
Balanced Realization and Hankel Singular Values
J.C. Juang
Let W_c and W_o be the controllability gramian and observability gramian of the
system (A, B, C), respectively, i.e.,

  A W_c + W_c A^T + B B^T = 0

and

  A^T W_o + W_o A + C^T C = 0

The Hankel singular values are defined as the square roots of the eigenvalues of
W_c W_o, which are independent of the particular realization.
Let T be a matrix such that

  T W_c W_o T^{−1} = Σ² = diag(σ_1², σ_2², …, σ_n²)

The Hankel singular values of the system are σ_1 ≥ σ_2 ≥ … ≥ σ_n (in descending
order).
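Numerically, the Hankel singular values follow directly from the two Lyapunov solutions. The sketch below (Python with SciPy assumed; hypothetical matrices) also verifies the realization independence under a similarity transformation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Square roots of eig(Wc Wo), sorted in descending order."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    return np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]

# Hypothetical stable system (illustrative only).
A = np.array([[-1.0, 0.2], [0.0, -3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 1.0]])

hsv = hankel_singular_values(A, B, C)

# Same system in transformed coordinates (T A T^-1, T B, C T^-1).
T = np.array([[2.0, 1.0], [0.0, 1.0]])
Ti = np.linalg.inv(T)
hsv2 = hankel_singular_values(T @ A @ Ti, T @ B, C @ Ti)

assert np.allclose(hsv, hsv2)   # a system property, not a realization property
```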
The matrix T simultaneously diagonalizes the controllability and observability
gramians. Indeed, the new realization (TAT^{−1}, TB, CT^{−1}) admits

  T W_c T^T = T^{−T} W_o T^{−1} = Σ = diag(σ_1, σ_2, …, σ_n)
A realization (A, B, C) is balanced if its controllability and observability gramians
are the same.
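A standard construction of a balancing transformation (a sketch under the usual stability and minimality assumptions; the matrices below are hypothetical) uses a Cholesky factor of W_c and an eigendecomposition:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov

# Hypothetical stable, minimal system.
A = np.array([[-1.0, 0.3], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[0.5, 1.0]])

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

L = cholesky(Wc, lower=True)            # Wc = L L^T
lam, U = np.linalg.eigh(L.T @ Wo @ L)   # lam holds the sigma_i^2
T = np.diag(lam**0.25) @ U.T @ np.linalg.inv(L)
Ti = np.linalg.inv(T)

# In the new coordinates both gramians equal diag(sigma_1, ..., sigma_n).
Sigma_c = T @ Wc @ T.T
Sigma_o = Ti.T @ Wo @ Ti
assert np.allclose(Sigma_c, Sigma_o, atol=1e-10)
assert np.allclose(Sigma_c, np.diag(np.diag(Sigma_c)), atol=1e-10)
```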
The maximal gain from past input to future output is defined as the Hankel norm.
[Figure: state trajectory x(t) reaching x_0 at t = 0, driven by past input]
  ‖M(s)‖_H = sup_{w ∈ L_2(−∞, 0]} ‖z‖_{L_2[0, ∞)} / ‖w‖_{L_2(−∞, 0]}
           = sup_{x_0} (x_0^T W_o x_0)^{1/2} / (x_0^T W_c^{−1} x_0)^{1/2}
           = λ_max^{1/2}(W_c W_o)
           = σ_1
The Hankel norm is thus the maximal Hankel singular value.
35
Quantification of Systems
J.C. Juang
The norm of a system is typically defined as the induced norm between its output
and input.
[Block diagram: input w → M(s) → output z]
Two classes of approaches:
1. Size of the output due to a particular signal or a class of signals
2. Relative size of the output and the input
System 2-norm (SISO case).
Let m(t) be the impulse response of the system M(s). Its 2-norm is defined as

  ‖M‖_2 = (∫_{−∞}^∞ |m(t)|² dt)^{1/2}

The system 2-norm can be interpreted as the 2-norm of the output due to an
impulse input.
By Parseval's theorem,

  ‖M‖_2 = ((1/2π) ∫_{−∞}^∞ |M(jω)|² dω)^{1/2}
Suppose the input signal w and the output signal z are stochastic in nature. Let
S_zz(ω) and S_ww(ω) be the spectral densities of the output and input,
respectively. Then

  S_zz(ω) = |M(jω)|² S_ww(ω)

Note that

  ‖z‖_rms = ((1/2π) ∫_{−∞}^∞ S_zz(ω) dω)^{1/2} = ((1/2π) ∫_{−∞}^∞ |M(jω)|² S_ww(ω) dω)^{1/2}

The system 2-norm can then be interpreted as the rms value of the output subject
to unity-spectral-density white noise.
36
System 2-norm (MIMO case). The system 2-norm is defined as

  ‖M‖_2 = ((1/2π) ∫_{−∞}^∞ trace[M(jω) M*(jω)] dω)^{1/2}
        = (trace ∫_0^∞ m(t) m^T(t) dt)^{1/2}
        = ((1/2π) ∫_{−∞}^∞ Σ_i σ_i²(M(jω)) dω)^{1/2}

where m(t) is the impulse response.
Let e_i be the i-th standard basis vector of R^m. Apply the impulsive input δ(t) e_i
to the system to obtain the impulse response z_i(t) = m(t) e_i. Then

  ‖M‖_2² = Σ_{i=1}^m ‖z_i‖_2²
The system 2-norm can be interpreted as
1. the rms response due to white noise: ‖M‖_2 = ‖z‖_rms when w is unity-spectral-density white noise
2. the 2-norm of the response due to an impulse input: ‖M‖_2 = ‖z‖_2 when w is an impulse
System ∞-norm. The ∞-norm can be regarded as the peak value in the Bode magnitude
(singular value) plot:

  ‖M‖_∞ = sup_{Re s > 0} σ_max(M(s))
        = sup_ω σ_max(M(jω))   (when M(s) is stable)
        = sup_{‖w‖_rms ≠ 0} ‖Mw‖_rms / ‖w‖_rms
        = sup_{‖w‖_2 ≠ 0} ‖Mw‖_2 / ‖w‖_2
        = sup_{‖w‖_2 = 1} ‖Mw‖_2
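The peak-of-the-Bode-plot characterization suggests a simple grid estimate of the ∞-norm. The sketch below (NumPy assumed; the lightly damped system is hypothetical) evaluates σ_max(M(jω)) on a log-spaced frequency grid.

```python
import numpy as np

# Hypothetical lightly damped system M(s) = C (sI - A)^{-1} B + D.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def sigma_max(w):
    # Largest singular value of the frequency response at w.
    M = C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D
    return np.linalg.svd(M, compute_uv=False)[0]

ws = np.logspace(-2, 2, 4000)             # frequency grid
hinf_est = max(sigma_max(w) for w in ws)  # resonance peak near w = 1
```

For this system the peak is near ω = 1 with magnitude around 10; a grid estimate is a lower bound on the true ∞-norm.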
System 1-norm: the peak-to-peak gain

  ‖M‖_1 = sup_{‖w‖_∞ ≠ 0} ‖Mw‖_∞ / ‖w‖_∞
37
State-Space Computation of 2-Norm
J.C. Juang
Consider the state-space realization of a stable transfer function M(s):

  ẋ = Ax + Bw
  z = Cx + Dw

In the 2-norm computation, A is assumed stable and D is assumed to be zero.
Recall that the impulse response of M(s) is

  m(t) = C e^{At} B
The 2-norm, according to the definition, satisfies

  ‖M‖_2² = trace ∫_0^∞ m^T(t) m(t) dt
         = trace [B^T (∫_0^∞ e^{A^T t} C^T C e^{At} dt) B]
         = trace B^T W_o B

Thus, the 2-norm of M(s) can be computed by solving the Lyapunov equation

  A^T W_o + W_o A + C^T C = 0

for W_o and taking the square root of the trace of B^T W_o B:

  ‖M‖_2 = (trace B^T W_o B)^{1/2}
Similarly, let W_c be the controllability gramian,

  A W_c + W_c A^T + B B^T = 0

then

  ‖M‖_2 = (trace C W_c C^T)^{1/2}
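The two gramian-based formulas must agree; a quick numerical cross-check on a hypothetical system (NumPy/SciPy assumed):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system with D = 0.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
Wc = solve_continuous_lyapunov(A, -B @ B.T)

h2_from_Wo = np.sqrt(np.trace(B.T @ Wo @ B))   # sqrt(trace B^T Wo B)
h2_from_Wc = np.sqrt(np.trace(C @ Wc @ C.T))   # sqrt(trace C Wc C^T)
assert np.isclose(h2_from_Wo, h2_from_Wc)
```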
38
H_∞ Norm Computation
J.C. Juang
The H_∞ norm of M(s) can be computed through a search for the maximal singular
value of M(jω) over all ω:

  ‖M(s)‖_∞ = ess sup_ω σ_max(M(jω))
Let M(s) =_s [A, B; C, D]. Then

  ‖M(s)‖_∞ < γ

if and only if σ_max(D) < γ and H has no eigenvalues on the imaginary axis, where

  H = [A, 0; −C^T C, −A^T] + [B; −C^T D] (γ² I − D^T D)^{−1} [D^T C, B^T]
Computation of the H_∞ norm requires iteration on γ:

  ‖M(s)‖_∞ < γ

if and only if

  Φ(s) = γ² I − M∼(s) M(s) > 0 on the imaginary axis

if and only if Φ(jω) is nonsingular for all ω, if and only if Φ^{−1}(s) has no
imaginary-axis pole. But, writing R = γ² I − D^T D, Φ^{−1}(s) has the realization

  Φ^{−1}(s) =_s [H, [B R^{−1}; −C^T D R^{−1}]; [R^{−1} D^T C, R^{−1} B^T], R^{−1}]

i.e., state matrix H, input matrix [B R^{−1}; −C^T D R^{−1}], output matrix
[R^{−1} D^T C, R^{−1} B^T], and feedthrough R^{−1}.
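The γ-iteration can be organized as a bisection on the Hamiltonian eigenvalue test. The sketch below specializes to D = 0, for which H reduces to [A, BB^T/γ²; −C^T C, −A^T]; the system is hypothetical, with a resonance peak near 10.

```python
import numpy as np

# Hypothetical stable system with D = 0.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def gamma_is_upper_bound(gamma, tol=1e-6):
    # ||M||_inf < gamma iff H has no imaginary-axis eigenvalues.
    H = np.block([[A, B @ B.T / gamma**2],
                  [-C.T @ C, -A.T]])
    return np.all(np.abs(np.linalg.eigvals(H).real) > tol)

lo, hi = 1e-3, 1e3      # bracket assumed to contain the norm
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gamma_is_upper_bound(mid):
        hi = mid        # gamma is an upper bound; shrink from above
    else:
        lo = mid
hinf = hi
```

For this system the bisection converges to roughly 10, matching a frequency sweep of σ_max(M(jω)).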
39
Linear Matrix Inequality
J.C. Juang
Linear matrix inequality (LMI):

  F(x) = F_0 + Σ_{i=1}^m x_i F_i < 0

where x ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n},
i = 0, 1, …, m, are given.
The inequality < is to be interpreted as negative definiteness.
The LMI is a convex constraint on x, i.e., the set {x | F(x) < 0} is convex. That
is, if x_1 and x_2 satisfy F(x_1) < 0 and F(x_2) < 0, then F(αx_1 + βx_2) < 0 for
positive scalars α and β such that α + β = 1.
Multiple LMIs can be rewritten as a single LMI by concatenation:

  F_1(x) < 0 and F_2(x) < 0  ⟺  [F_1(x), 0; 0, F_2(x)] < 0
Schur complement technique: assume that Q(x) = Q^T(x), R(x) = R^T(x), and S(x)
depend affinely on x; then

  [Q(x), S(x); S^T(x), R(x)] < 0  ⟺  R(x) < 0 and Q(x) − S(x) R^{−1}(x) S^T(x) < 0

That is, the nonlinear inequality Q(x) − S(x) R^{−1}(x) S^T(x) < 0 can be
represented as an LMI.
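The Schur-complement equivalence is easy to exercise numerically: with R forced negative definite, the block inequality and the reduced inequality must agree on every random sample (hypothetical data below):

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_def(M):
    return np.all(np.linalg.eigvalsh(M) < 0)

m, n = 2, 3
agree = True
for _ in range(200):
    Q = rng.standard_normal((m, m)); Q = (Q + Q.T) / 2
    S = rng.standard_normal((m, n))
    G = rng.standard_normal((n, n))
    R = -(G @ G.T + 0.1 * np.eye(n))      # symmetric, R < 0 by construction
    block = np.block([[Q, S], [S.T, R]])
    # Schur complement: block < 0  <=>  R < 0 and Q - S R^{-1} S^T < 0
    agree &= (neg_def(block) == neg_def(Q - S @ np.linalg.inv(R) @ S.T))
assert agree
```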
Let Z(x) be a matrix that depends affinely on x. The constraint ‖Z(x)‖ < 1 is
equivalent to I > Z(x) Z^T(x) and can be represented as the LMI

  [I, Z(x); Z^T(x), I] > 0
Let c(x) be a vector and P(x) a symmetric matrix, both depending affinely on x.
The constraints P(x) > 0 and c^T(x) P^{−1}(x) c(x) < 1 can be represented as the
LMI

  [P(x), c(x); c^T(x), 1] > 0
The constraint

  trace S^T(x) P^{−1}(x) S(x) < 1,  P(x) > 0

where P(x) = P^T(x) and S(x) depend affinely on x, can be restated as

  trace Q < 1,  S^T(x) P^{−1}(x) S(x) < Q,  P(x) > 0

and hence as

  trace Q < 1,  [Q, S^T(x); S(x), P(x)] > 0
40
Orthogonal complement of a matrix. Let P ∈ R^{n×m} be of rank m < n. The
orthogonal complement of P is a matrix P⊥ ∈ R^{n×(n−m)} such that

  P^T P⊥ = 0

and [P, P⊥] is invertible.
(Finsler's lemma). Let P ∈ R^{n×m} and R = R^T ∈ R^{n×n}, where rank(P) = m < n.
Suppose P⊥ is the orthogonal complement of P. Then

  σ P P^T + R < 0

for some real scalar σ if and only if

  P⊥^T R P⊥ < 0
To see the above, note that

  σ P P^T + R < 0
  ⟺ [P, P⊥]^T (σ P P^T + R) [P, P⊥] < 0
  ⟺ [σ (P^T P)(P^T P) + P^T R P, P^T R P⊥; P⊥^T R P, P⊥^T R P⊥] < 0
  ⟺ P⊥^T R P⊥ < 0 and σ (P^T P)(P^T P) + P^T R P − P^T R P⊥ (P⊥^T R P⊥)^{−1} P⊥^T R P < 0
  ⟺ P⊥^T R P⊥ < 0

where the last equivalence holds because, once P⊥^T R P⊥ < 0, the remaining
inequality can always be satisfied by choosing σ sufficiently negative.
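Finsler's lemma can be exercised numerically: build P⊥ from an SVD, shift R so that P⊥^T R P⊥ < 0, and search for a σ that makes σPP^T + R < 0 (all data below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2

# Hypothetical data: generic full-column-rank P and symmetric R0.
P = rng.standard_normal((n, m))
R0 = rng.standard_normal((n, n))
R0 = (R0 + R0.T) / 2

# Orthogonal complement: columns of U beyond rank(P) span null(P^T).
U, _, _ = np.linalg.svd(P)
Pperp = U[:, m:]
assert np.allclose(P.T @ Pperp, 0, atol=1e-10)

# Shift R0 so that Pperp^T R Pperp < 0 (Pperp has orthonormal columns,
# so subtracting mu*I shifts the projected eigenvalues by -mu).
mu = np.max(np.linalg.eigvalsh(Pperp.T @ R0 @ Pperp)) + 0.5
R = R0 - mu * np.eye(n)
assert np.all(np.linalg.eigvalsh(Pperp.T @ R @ Pperp) < 0)

# Finsler: some real sigma now makes sigma*P*P^T + R < 0.
sigma_found = None
for k in range(1, 9):
    sigma = -10.0 ** k
    if np.all(np.linalg.eigvalsh(sigma * P @ P.T + R) < 0):
        sigma_found = sigma
        break
assert sigma_found is not None
```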
Given P R
nm
(rank m < n), Q R
nl
(rank l < n), and R R
nn
, there exists
K R
ml
such that
R +PKQ
T
+QK
T
P
T
< 0
if and only if
P
T
RP

< 0 and Q
T
RQ

< 0
where P

and Q

be the orthogonal complements of P and Q, respectively.


41
Stability and Norm Computation: LMI Approach
J.C. Juang
Lyapunov Stability. The system

  ẋ = Ax

is stable if and only if all the eigenvalues of A are in the open left half plane,
if and only if there exists an

  X > 0

such that

  AX + XA^T < 0
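In equation (rather than inequality) form, this is the familiar Lyapunov test: pick Q > 0, solve AX + XA^T = −Q, and check X > 0. A sketch with hypothetical matrices, SciPy assumed:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_stable(A):
    # Solve A X + X A^T = -I; stability <=> the solution X is positive definite.
    X = solve_continuous_lyapunov(A, -np.eye(A.shape[0]))
    return np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0)

assert lyapunov_stable(np.array([[-1.0, 2.0], [0.0, -0.5]]))      # eigenvalues -1, -0.5
assert not lyapunov_stable(np.array([[0.1, 1.0], [0.0, -1.0]]))   # eigenvalue 0.1 > 0
```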
H_∞ Norm Computation. The H_∞ norm of the system [A, B; C, D] is less than γ if
and only if γ² I > D^T D and there exists an X > 0 such that

  A^T X + X A + C^T C + (X B + C^T D)(γ² I − D^T D)^{−1}(B^T X + D^T C) = 0

if and only if γ² I > D^T D and there exists an X > 0 such that

  A^T X + X A + C^T C + (X B + C^T D)(γ² I − D^T D)^{−1}(B^T X + D^T C) < 0

if and only if there exists an X > 0 such that

  [A^T X + X A + C^T C, X B + C^T D; B^T X + D^T C, D^T D − γ² I] < 0

if and only if there exists an X > 0 such that

  [A^T X + X A, X B, C^T; B^T X, −γ I, D^T; C, D, −γ I] < 0

if and only if there exists a Y > 0 such that

  [Y A^T + A Y, B, Y C^T; B^T, −γ I, D^T; C Y, D, −γ I] < 0
42
Recommended Matlab Exercises
J.C. Juang
Understand the operations of Matlab, the control toolbox, the μ-synthesis
toolbox, and the LMI toolbox
How to represent system matrices?
How to compute norms and singular values of matrices?
How to compute the norm of a signal?
How to determine the poles and zeros?
How to perform system algebra?
How to solve a Lyapunov equation?
How to compute the norm of a system?
How to solve an LMI?
43