Summer School on Time Delay Equations and Control Theory
Dobbiaco, June 25–29, 2001

Linear Control Theory

Giovanni MARRO, DEIS, University of Bologna, Italy
Domenico PRATTICHIZZO, DII, University of Siena, Italy
References

Wonham, Linear Multivariable Control – A Geometric Approach, 3rd edition, Springer-Verlag, 1985.
Basile and Marro, Controlled and Conditioned Invariants in Linear System Theory, Prentice Hall, 1992.
Trentelman, Stoorvogel and Hautus, Control Theory for Linear Systems, Springer-Verlag, 2001.

Early References

Basile and Marro, Controlled and Conditioned Invariant Subspaces in Linear System Theory, Journal of Optimization Theory and Applications, vol. 3, no. 5, 1969.
Wonham and Morse, Decoupling and Pole Assignment in Linear Multivariable Systems: a Geometric Approach, SIAM Journal on Control, vol. 8, no. 1, 1970.
Introduction to Control Problems
Consider the following figure, which includes a controlled system (plant) Σ and a controller Σ_r, with a feedback part Σ_c and a feedforward part Σ_f.
Fig. 1.1. A general block diagram for regulation.
• r_p previewed reference
• r reference
• y_1 controlled output
• y_2 informative output
• e error variable
• u manipulated input
• d_1 non-measurable disturbance
• d_2 measurable disturbance
1
Fig. 1.2. A reduced block diagram.
In the above figure d := (d_1, d_2), y := (y_1, y_2, d_1).
All the symbols in the figure denote signals, repre-
sentable by real vectors varying in time.
The plant Σ is given and the controller Σ_r is to be designed to (possibly) maintain e(·) = 0.
Both the plant and the controller are assumed to be
linear (zero state and superposition property).
The blocks represent oriented systems (inputs, out-
puts), that are assumed to be causal.
In classical control theory both continuous-time systems and discrete-time systems are considered.
2
An example
Fig. 1.3. The velocity control of a dc motor.
The PI controller yields steady-state control with no error.
This property is robust against parameter variations,
provided asymptotic stability of the loop is achieved.
This is due to the presence of an internal model of
the exosystem that reproduces a constant input sig-
nal (an integrator).
Thus, a step signal r of any value is reproduced with no steady-state error and the disturbance c_r is rejected in steady state. This is called a type 1 controller.
Similarly, a double integrator reproduces with no steady-state error any linear combination of a step and a ramp, and rejects disturbances of the same type. This is a type 2 controller.
3
Fig. 1.4. The simplified block diagram.
Fig. 1.5. The reduced block diagram.
In Fig. 1.5 w accounts for both the reference and
the disturbance. The control purpose is to achieve a
“minimal” error e in the response to w.
If w is assumed to be generated by an exosystem Σ_e, as in the previous example, the internal model ensures zero steady-state error.
This approach can easily be extended to the multi-
variable case with geometric techniques.
Modern approaches consider, besides the internal model, the minimization of a norm (H_2 or H_∞) of the transfer function from w to e to guarantee a satisfactory transient.
4
A more complex example
Fig. 1.6. Rolling mill control.
This example fits the general control scheme given in
Fig. 1.1.
The gage control has an inherent transportation de-
lay. If the aim of the control is to have given amounts
of material (in meters) at a specified thickness, it is
necessary to have a preview of these amounts, that is
taken into account with the delay.
Of course, this preview can be used with negligible error if the cylinder rotation is feedback controlled by measuring the amount of material with a type 2 controller.
Thus, robustness is achieved with feedback and makes
feedforward (preview control) possible.
There are cases in which preaction (action in advance)
on the controlled system significantly improves track-
ing of a reference signal. The block diagram shown
in Fig. 1.1 also accounts for these cases.
5
Mathematical Models
Let us consider the velocity control of a motor shown in Fig. 1.3 and its reduced block diagram (Fig. 1.5).
Mathematical model of Σ:
v_a(t) = R_a i_a(t) + L_a di_a(t)/dt + v_c(t)    (1.1)

c_m(t) = B ω(t) + J dω(t)/dt + c_r(t)    (1.2)
In (1.1) v_a is the applied voltage, R_a and L_a the armature resistance and inductance, i_a and v_c the armature current and counter emf, while in (1.2) c_m is the motor torque, B, J, and ω the viscous friction coefficient, the moment of inertia, and the angular velocity of the shaft, and c_r the externally applied load torque.
Mathematical model of Σ_r:
dz(t)/dt = (1/T) e(t)    (1.3)

v_a(t) = K e(t) + z(t)    (1.4)
where z denotes the output of the integrator in the
PI controller.
6
Their state space representation is
ẋ(t) = A x(t) + B_1 u(t) + B_2 d(t)
y(t) = C x(t) + D_1 u(t) + D_2 d(t)    (1.5)
where for Σ, x := [i_a  ω]^T, u := v_a, d := c_r, y := ω and

A = [ −R_a/L_a  −k_1/L_a ;  k_2/J  −B/J ]      B_1 = [ 1/L_a ; 0 ]      B_2 = [ 0 ; −1/J ]

C = [ 0  1 ]      D_1 = 0      D_2 = 0
while for Σ_r, x_r := z, u_r := e, y_r := v_a and

A_r = 0      B_r = 1/T      C_r = 1      D_r = K
Mathematical model of Σ_e:

dr(t)/dt = 0      dc_r(t)/dt = 0    (1.6)

This corresponds to an autonomous system (without input) having x_e = y := [r  c_r]^T and

A_e = [ 0  0 ;  0  0 ]      C_e = [ 1  0 ;  0  1 ]
7
The overall system (controlled system and controller)
can be represented with a unique mathematical model
of the same type:
x̂̇(t) = Â x̂(t) + B̂_1 u(t) + B̂_2 d(t)
ŷ(t) = Ĉ x̂(t) + D̂_1 u(t) + D̂_2 d(t)    (1.7)
where for x̂ := [i_a  ω  z]^T, u := r, d := c_r, y := ω and

Â = [ −R_a/L_a  −(k_1+K)/L_a  1/L_a ;  k_2/J  −B/J  0 ;  0  −1/T  0 ]

B̂_1 = [ K/L_a ; 0 ; 1/T ]      B̂_2 = [ 0 ; −1/J ; 0 ]      Ĉ = [ 0  1  0 ]      D̂_1 = 0      D̂_2 = 0
The regulator design problem is: determine T and K such that the system (1.7) is internally stable, i.e., the eigenvalues of Â have strictly negative real parts, and this property is maintained in the presence of admissible parameter variations.
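As a quick numerical illustration of this design check, the following Matlab fragment builds Â of (1.7) for purely illustrative motor parameters (the values of R_a, L_a, k_1, k_2, B, J, K, T are assumptions, not data from these notes) and tests internal stability by inspecting its eigenvalues.

% Illustrative internal stability check for (1.7); all numbers are
% hypothetical placeholders, not taken from the notes.
Ra = 1; La = 0.5; k1 = 0.01; k2 = 0.01; Bf = 0.1; J = 0.01;  % motor data (assumed)
K = 5; T = 0.2;                                              % PI gains to be designed
Ahat = [ -Ra/La  -(k1+K)/La  1/La ;
          k2/J   -Bf/J       0    ;
          0      -1/T        0    ];
if all(real(eig(Ahat)) < 0)
    disp('internally stable for this choice of K and T');
else
    disp('unstable: choose different K and T');
end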
8
If only its behavior with respect to step inputs must
be considered, the overall system in Fig. 1.3 can be
represented as the autonomous system
x̂̇(t) = Â x̂(t)
ŷ(t) = Ĉ x̂(t)    (1.8)
where for x̂ := [i_a  ω  z  r  c_r]^T, y := ω and

Â = [ −R_a/L_a  −(k_1+K)/L_a  1/L_a  K/L_a  0 ;
      k_2/J  −B/J  0  0  −1/J ;
      0  −1/T  0  1/T  0 ;
      0  0  0  0  0 ;
      0  0  0  0  0 ]

Ĉ = [ 0  1  0  0  0 ]
The regulator design problem is: determine T and K such that the autonomous system (Â, Ĉ) is externally stable, i.e., lim_{t→∞} y(t) = 0 for any initial state, and this property is maintained in the presence of admissible parameter variations.
9
State Space Models
Continuous-time systems:
ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t)    (1.9)
with the state x ∈ 𝒳 = R^n, the input u ∈ 𝒰 = R^p, the output y ∈ 𝒴 = R^q and A, B, C, D real matrices of suitable dimensions. The system will be referred to as the quadruple (A, B, C, D) or the triple (A, B, C) if D = 0. Most of the theory will be derived referring to triples, since the extension to quadruples is straightforward.
Discrete-time systems:
x(k+1) = A_d x(k) + B_d u(k)
y(k) = C_d x(k) + D_d u(k)    (1.10)
Recall that a continuous-time system is internally asymptotically stable iff all the eigenvalues of A belong to C⁻ (the open left half-plane of the complex plane) and a discrete-time system is internally asymptotically stable iff all the eigenvalues of A_d belong to C_⊙ (the open unit disk of the complex plane).
In the discrete-time case a significant linear model is also the FIR (Finite Impulse Response) system, defined by the finite convolution sum

y(k) = Σ_{l=0}^{N} W(l) u(k−l)    (1.11)

where W(l) (l = 0, ..., N) is a q×p real matrix, referred to as the gain of the FIR system, while N is called the window of the FIR system.
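A direct Matlab implementation of the convolution sum (1.11) is immediate; the sizes and data below are illustrative placeholders, used only to fix the indexing.

% FIR response (1.11): W{l+1} is the q-by-p gain W(l), u(:,k) the input at step k.
q = 2; p = 1; N = 3; K = 10;
W = arrayfun(@(l) rand(q, p), 0:N, 'UniformOutput', false);  % gains W(0),...,W(N)
u = rand(p, K);                                              % input sequence
y = zeros(q, K);
for k = 1:K
    for l = 0:min(N, k-1)               % u(k-l) is defined only for k-l >= 1
        y(:, k) = y(:, k) + W{l+1} * u(:, k-l);
    end
end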
10
Transfer Matrix Models
By taking the Laplace transform of (1.9) or the Z
transform of (1.10) we obtain the transfer matrix rep-
resentations
Y(s) = G(s) U(s)  with  G(s) = C (sI − A)⁻¹ B + D    (1.12)

and

Y(z) = G_d(z) U(z)  with  G_d(z) = C_d (zI − A_d)⁻¹ B_d + D_d    (1.13)

respectively.
The H_2 norm in the continuous-time case is

‖G‖_2 = [ (1/(2π)) tr( ∫_{−∞}^{∞} G(jω) G*(jω) dω ) ]^{1/2}    (1.14)
      = [ tr( ∫_0^{∞} g(t) g^T(t) dt ) ]^{1/2}    (1.15)

where g(t) denotes the impulse response of the system (the inverse Laplace transform of G(s)), and in the discrete-time case it is

‖G_d‖_2 = [ (1/(2π)) tr( ∫_{−π}^{π} G_d(e^{jω}) G_d*(e^{jω}) dω ) ]^{1/2}    (1.16)
        = [ tr( Σ_{k=0}^{∞} g_d(k) g_d^T(k) ) ]^{1/2}    (1.17)

where G_d(e^{jω}) denotes the frequency response of the discrete-time system for unit sampling time and g_d(k) the impulse response of the system (the inverse Z transform of G_d(z)).
11
Geometric Approach (GA)
The geometric approach is a control theory for multivariable linear systems based on:
• linear transformations
• subspaces
(The alternative approach is the transfer function approach.)
The geometric approach consists of
• an algebraic part (theoretical)
• an algorithmic part (computational)
Most of the mathematical support is developed in
coordinate-free form, to take advantage of simpler
and more elegant results, which facilitate insight into
the actual meaning of statements and procedures; the
computational aspects are considered independently
of the theory and handled by means of the standard
methods of matrix algebra, once a suitable coordinate
system is defined.
12
A Few Words on the Algorithmic Part
A subspace 𝒳 is given through a basis matrix of maximum rank X such that 𝒳 = im X.

The operations on subspaces are all performed through an orthonormalization process (subroutine ima.m in Matlab) that computes an orthonormal basis of a set of vectors in R^n by using methods of the Gauss–Jordan or Gram–Schmidt type.
Basic Operations
• sum: 𝒵 = 𝒳 + 𝒴
• linear transformation: 𝒴 = A𝒳
• orthogonal complementation: 𝒴 = 𝒳⊥
• intersection: 𝒵 = 𝒳 ∩ 𝒴
• inverse linear transformation: 𝒳 = A⁻¹𝒴
Computational support with Matlab
Q = ima(A,p) Orthonormalization.
Q = ortco(A) Complementary orthogonalization.
Q = sums(A,B) Sum of subspaces.
Q = ints(A,B) Intersection of subspaces.
Q = invt(A,X) Inverse transform of a subspace.
Q = ker(A) Kernel of a matrix.
In program ima the flag p allows for permutations of
the input column vectors.
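If the toolbox routines are not at hand, the same operations can be sketched with base Matlab functions (orth and null), using the identities listed in the next slide; this is only an illustrative reimplementation of the routines above.

% Base-Matlab sketch of the subspace operations (each subspace is the image
% of the returned orthonormal basis matrix).
ima_   = @(X) orth(X);                          % orthonormal basis of im X
ortco_ = @(X) null(X');                         % orthogonal complement of im X
sums_  = @(X, Y) orth([X, Y]);                  % sum: im X + im Y
ints_  = @(X, Y) null([null(X'), null(Y')]');   % intersection, via (Xperp + Yperp)perp
invt_  = @(A, Y) null(null(Y')' * A);           % inverse image A^{-1}(im Y)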
13
Basic relations
𝒳 ∩ (𝒴 + 𝒵) ⊇ (𝒳 ∩ 𝒴) + (𝒳 ∩ 𝒵)
𝒳 + (𝒴 ∩ 𝒵) ⊆ (𝒳 + 𝒴) ∩ (𝒳 + 𝒵)
(𝒳⊥)⊥ = 𝒳
(𝒳 + 𝒴)⊥ = 𝒳⊥ ∩ 𝒴⊥
(𝒳 ∩ 𝒴)⊥ = 𝒳⊥ + 𝒴⊥
A(𝒳 ∩ 𝒴) ⊆ A𝒳 ∩ A𝒴
A(𝒳 + 𝒴) = A𝒳 + A𝒴
A⁻¹(𝒳 ∩ 𝒴) = A⁻¹𝒳 ∩ A⁻¹𝒴
A⁻¹(𝒳 + 𝒴) ⊇ A⁻¹𝒳 + A⁻¹𝒴
Remarks:
1. The first two relations hold with the equality sign if one of the involved subspaces 𝒳, 𝒴, 𝒵 is contained in any of the others.
2. The following relations are useful for computational purposes:

A𝒳 ⊆ 𝒴  ⇔  A^T 𝒴⊥ ⊆ 𝒳⊥
(A⁻¹𝒴)⊥ = A^T 𝒴⊥

where A^T denotes the transpose of matrix A.
14
Invariant Subspaces
Definition 2.1  Given a linear map A : 𝒳 → 𝒳, a subspace 𝒥 ⊆ 𝒳 is an A-invariant if

A𝒥 ⊆ 𝒥

Property 2.1  Given the subspaces 𝒟, ℰ contained in 𝒳 and such that 𝒟 ⊆ ℰ, and a linear map A : 𝒳 → 𝒳, the set of all the A-invariants 𝒥 satisfying 𝒟 ⊆ 𝒥 ⊆ ℰ is a nondistributive lattice Φ_0 with respect to ⊆, +, ∩.

We denote with max𝒥(A, ℰ) the maximal A-invariant contained in ℰ (the sum of all the A-invariants contained in ℰ) and with min𝒥(A, 𝒟) the minimal A-invariant containing 𝒟 (the intersection of all the A-invariants containing 𝒟): the above lattice is non-empty if and only if 𝒟 ⊆ max𝒥(A, ℰ) or min𝒥(A, 𝒟) ⊆ ℰ.

Fig. 2.1. The lattice Φ_0.
15
The Algorithms
Algorithm 2.1  Computation of min𝒥(A, ℬ)

𝒵_1 = ℬ
𝒵_i = ℬ + A𝒵_{i−1}   (i = 2, 3, ...)
min𝒥(A, ℬ) = ℬ + A min𝒥(A, ℬ)    (2.1)

Algorithm 2.2  Computation of max𝒥(A, 𝒞)

𝒵_1 = 𝒞
𝒵_i = 𝒞 ∩ A⁻¹𝒵_{i−1}   (i = 2, 3, ...)
max𝒥(A, 𝒞) = 𝒞 ∩ A⁻¹ max𝒥(A, 𝒞)    (2.2)

Property 2.2  Dualities

max𝒥(A, 𝒞) = (min𝒥(A^T, 𝒞⊥))⊥
min𝒥(A, ℬ) = (max𝒥(A^T, ℬ⊥))⊥
Computational support with Matlab

Q = mininv(A,B)   Minimal A-invariant containing im B
Q = maxinv(A,C)   Maximal A-invariant contained in im C
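A base-Matlab sketch of Algorithm 2.1 (Algorithm 2.2 then follows from the dualities of Property 2.2); this is an illustrative reimplementation, not the toolbox routine mininv itself.

% Sketch of Algorithm 2.1: minimal A-invariant containing im B.
function Q = mininv_sketch(A, B)
    Q = orth(B);
    while true
        Qnew = orth([B, A*Q]);              % Z_i = B + A*Z_{i-1}
        if size(Qnew, 2) == size(Q, 2)      % dimension stops growing: converged
            break
        end
        Q = Qnew;
    end
end
% By duality (Property 2.2): maxinv(A,C) = null( mininv_sketch(A', null(C'))' ).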
16
Internal and External Stability of an Invariant
The restriction of the map A to the A-invariant subspace 𝒥 is denoted by A|𝒥; 𝒥 is said to be internally stable if A|𝒥 is stable. Given two A-invariants 𝒥_1 and 𝒥_2 such that 𝒥_1 ⊆ 𝒥_2, the map induced by A on the quotient space 𝒥_2/𝒥_1 is denoted by A|𝒥_2/𝒥_1. In particular, an A-invariant 𝒥 is said to be externally stable if A|𝒳/𝒥 is stable.
Algorithm 2.3  Matrices P and Q representing A|𝒥 and A|𝒳/𝒥 up to an isomorphism are derived as follows. Let us consider the similarity transformation T := [J  T_2], with im J = 𝒥 (J is a basis matrix of 𝒥) and T_2 such that T is nonsingular. In the new basis the linear transformation A is expressed by

A′ = T⁻¹ A T = [ A′_11  A′_12 ;  O  A′_22 ]    (2.3)

The requested matrices are defined as P := A′_11, Q := A′_22.
Complementability of an Invariant

An A-invariant 𝒥 ⊆ 𝒳 is said to be complementable if an A-invariant 𝒥_c exists such that 𝒥 ⊕ 𝒥_c = 𝒳; if so, 𝒥_c is called a complement of 𝒥.

Algorithm 2.4  Let us consider again the change of basis introduced in Algorithm 2.3. 𝒥 is complementable if and only if the Sylvester equation

A′_11 X − X A′_22 = −A′_12    (2.4)

admits a solution. If so, a basis matrix of 𝒥_c is given by J_c := J X + T_2.
17
Refer to the autonomous system

ẋ(t) = A x(t),   x(0) = x_0    (2.5)

or

x(k+1) = A_d x(k),   x(0) = x_0    (2.6)

The behavior of the trajectories in the state space with respect to an invariant can be represented as follows.

Fig. 2.2. External and internal stability of an invariant.
Computational support with Matlab

[P,Q] = stabi(A,X)   Matrices for the internal and external stability of the A-invariant im X
18
Controllability and Observability
Consider a triple (A, B, C), i.e., refer to

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)    (2.7)
Let ℬ := im B. The reachability subspace of (A, B), i.e., the set of all the states that can be reached from the origin in any finite time by means of control actions, is ℛ = min𝒥(A, ℬ). If ℛ = 𝒳, the pair (A, B) is said to be completely controllable.

Let 𝒞 := ker C. The unobservability subspace of (A, C), i.e., the set of all the initial states that cannot be recognized from the output function, is the A-invariant 𝒬 := max𝒥(A, 𝒞). If 𝒬 = {0}, (A, C) is said to be completely observable.

Fig. 2.3. The reachability subspace.
19
If ℛ ≠ 𝒳, but ℛ is externally stabilizable, (A, B) is said to be stabilizable.

If 𝒬 ≠ {0}, but 𝒬 is internally stabilizable, (A, C) is said to be detectable.
Pole Assignment

Fig. 2.4. State feedback and output injection.
State feedback

ẋ(t) = (A + BF) x(t) + B v(t)
y(t) = C x(t)    (2.8)

Output injection

ẋ(t) = (A + GC) x(t) + B u(t)
y(t) = C x(t)    (2.9)
The eigenvalues of A + BF are arbitrarily assignable by a suitable choice of F iff the system is completely controllable, and those of A + GC are arbitrarily assignable by a suitable choice of G iff the system is completely observable.
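For completely controllable and observable triples the two assignments can be carried out, for instance, with the Control System Toolbox routine place (matrices below are illustrative; note the sign convention, since place refers to A − BK).

% Sketch: eigenvalue assignment for A+BF and A+GC (assumes place is available).
A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];
K = place(A, B, [-4, -5]);      % eig(A - B*K) = {-4, -5}
F = -K;                          % so that eig(A + B*F) = {-4, -5}
L = place(A', C', [-6, -7]);     % output injection by duality (transposed problem)
G = -L';                         % eig(A + G*C) = {-6, -7}
disp(eig(A + B*F));  disp(eig(A + G*C));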
20
Complete Pole Assignment through an Observer

Fig. 2.5. Dynamic pre-compensator and observer.

Fig. 2.6. Pole assignment through an observer.
The eigenvalues of the overall system are the union
of those of A+BF and those of A+GC, hence com-
pletely assignable if the triple (A, B, C) is completely
controllable and observable.
21
Controlled and Conditioned Invariants
Definition 2.2  Given a linear map A : 𝒳 → 𝒳 and a subspace ℬ ⊆ 𝒳, a subspace 𝒱 ⊆ 𝒳 is an (A, ℬ)-controlled invariant if

A𝒱 ⊆ 𝒱 + ℬ    (2.10)

Let B and V be basis matrices of ℬ and 𝒱 respectively: the following statements are equivalent to (2.10):
- a matrix F exists such that (A + BF) 𝒱 ⊆ 𝒱
- matrices X and U exist such that AV = V X + BU
- 𝒱 is a locus of trajectories of the pair (A, B)

Fig. 2.7. The controlled invariant as a locus of trajectories.
22
The sum of any two controlled invariants is a controlled invariant, while the intersection is not; thus the set of all the controlled invariants contained in a given subspace ℰ ⊆ 𝒳 is a semilattice with respect to ⊆, +, hence admits a supremum, the maximal (A, ℬ)-controlled invariant contained in ℰ, denoted by max𝒱(A, ℬ, ℰ) (or simply 𝒱*_(ℬ,ℰ)). We use the symbol 𝒱* for max𝒱(A, im B, ker C), which is the most important controlled invariant concerning the triple (A, B, C).

Referring to the pair (A, B), we denote with ℛ_𝒱 the subspace reachable from the origin by trajectories constrained to belong to a generic (A, ℬ)-controlled invariant 𝒱. Owing to the first property above, it is derived as ℛ_𝒱 = min𝒥(A + BF, 𝒱 ∩ ℬ) and, clearly being an (A + BF)-invariant, it also is an (A, ℬ)-controlled invariant.

A generic (A, ℬ)-controlled invariant 𝒱 is said to be internally stabilizable or externally stabilizable if at least one matrix F exists such that (A + BF)|𝒱 is stable or at least one matrix F exists such that (A + BF)|𝒳/𝒱 is stable. It is easily proven that the eigenstructure of (A + BF)|𝒱/ℛ_𝒱 is independent of F; it is called the internal unassignable eigenstructure of 𝒱. 𝒱 is both internally and externally stabilizable with the same F if and only if its internal unassignable eigenstructure is stable and the A-invariant 𝒱 + ℛ = 𝒱 + min𝒥(A, ℬ) is externally stable. This latter is ensured by the stabilizability property of the pair (A, B).
23
Definition 2.3  Given a linear map A : 𝒳 → 𝒳 and a subspace 𝒞 ⊆ 𝒳, a subspace 𝒮 ⊆ 𝒳 is an (A, 𝒞)-conditioned invariant if

A(𝒮 ∩ 𝒞) ⊆ 𝒮    (2.11)

Let C be a matrix such that 𝒞 = ker C. The following statement is equivalent to (2.11):
- a matrix G exists such that (A + GC) 𝒮 ⊆ 𝒮

The intersection of any two conditioned invariants is a conditioned invariant while the sum is not; thus the set of all the conditioned invariants containing a given subspace 𝒟 ⊆ 𝒳 is a semilattice with respect to ⊆, ∩, hence admits an infimum, the minimal (A, 𝒞)-conditioned invariant containing 𝒟, denoted by min𝒮(A, 𝒞, 𝒟) (or simply 𝒮*_(𝒞,𝒟)). We use the simple symbol 𝒮* for min𝒮(A, ker C, im B), which is the most important conditioned invariant concerning the triple (A, B, C).

Controlled and conditioned invariants are dual to each other. Controlled invariants are used in control problems, while conditioned invariants are used in observation problems.

The orthogonal complement of an (A, 𝒞)-conditioned invariant is an (A^T, 𝒞⊥)-controlled invariant, hence the orthogonal complement of an (A, 𝒞)-conditioned invariant containing a given subspace 𝒟 is an (A^T, 𝒞⊥)-controlled invariant contained in 𝒟⊥. External and internal stabilizability of conditioned invariants are easily defined by duality.
24
Self-bounded Controlled Invariants
Definition 2.4  Given a linear map A : 𝒳 → 𝒳 and two subspaces ℬ ⊆ 𝒳, ℰ ⊆ 𝒳, a subspace 𝒱 ⊆ 𝒳 is an (A, ℬ)-controlled invariant self-bounded with respect to ℰ if, besides (2.10), the following relations hold

𝒱 ⊆ 𝒱*_(ℬ,ℰ)    (2.12)
𝒱*_(ℬ,ℰ) ∩ ℬ ⊆ 𝒱    (2.13)

The set of all the (A, ℬ)-controlled invariants self-bounded with respect to ℰ is a nondistributive lattice with respect to ⊆, +, ∩, whose supremum is 𝒱*_(ℬ,ℰ) and whose infimum is ℛ_{𝒱*_(ℬ,ℰ)}.

Given subspaces 𝒟, ℰ contained in 𝒳 and such that 𝒟 ⊆ 𝒱*_(ℬ,ℰ), the infimum of the lattice of all the (A, ℬ)-controlled invariants self-bounded with respect to ℰ and containing 𝒟 is the reachable set on 𝒱*_(ℬ,ℰ) with forcing action ℬ + 𝒟, i.e., 𝒱_m := min𝒥(A + BF, 𝒱*_(ℬ,ℰ) ∩ (ℬ + 𝒟)), with F such that (A + BF) 𝒱*_(ℬ,ℰ) ⊆ 𝒱*_(ℬ,ℰ).
25
The infimum of the lattice of all the (A, ℬ)-controlled invariants self-bounded with respect to a given subspace ℰ can be expressed in terms of conditioned invariants as follows.

Property 2.3  Let 𝒟 ⊆ 𝒱*_(ℬ,ℰ). The infimum of the lattice Φ of all the (A, ℬ)-controlled invariants self-bounded with respect to ℰ and containing 𝒟 is expressed by

𝒱_m = 𝒱*_(ℬ,ℰ) ∩ 𝒮*_(ℰ,ℬ+𝒟)    (2.14)

Note, in particular, that ℛ_{𝒱*_(ℬ,ℰ)} = 𝒱*_(ℬ,ℰ) ∩ 𝒮*_(ℰ,ℬ). The dual of Property 2.3 is
Property 2.4  Let 𝒮*_(𝒞,𝒟) ⊆ ℰ. The supremum of the lattice Ψ of all the (A, 𝒞)-conditioned invariants self-hidden with respect to 𝒟 and contained in ℰ is expressed by

𝒮_M = 𝒮*_(𝒞,𝒟) + 𝒱*_(𝒟,𝒞∩ℰ)    (2.15)
26
Fig. 2.8. The lattices Φ and Ψ.
The main theorem and its dual

Theorem 2.1  Let 𝒟 ⊆ 𝒱*_(ℬ,ℰ). There exists at least one internally stabilizable (A, ℬ)-controlled invariant 𝒱 such that 𝒟 ⊆ 𝒱 ⊆ ℰ if and only if 𝒱_m is internally stabilizable.
Theorem 2.2  Let 𝒮*_(𝒞,𝒟) ⊆ ℰ. There exists at least one externally stabilizable (A, 𝒞)-conditioned invariant 𝒮 such that 𝒟 ⊆ 𝒮 ⊆ ℰ if and only if 𝒮_M is externally stabilizable.
27
The Algorithms
Algorithm 2.5  Computation of 𝒮* = min𝒮(A, 𝒞, ℬ)

𝒮_1 = ℬ
𝒮_i = ℬ + A(𝒮_{i−1} ∩ 𝒞)   (i = 2, 3, ...)
𝒮* = ℬ + A(𝒮* ∩ 𝒞)    (2.16)

Algorithm 2.6  Computation of 𝒱* = max𝒱(A, ℬ, 𝒞)

𝒱_1 = 𝒞
𝒱_i = 𝒞 ∩ A⁻¹(𝒱_{i−1} + ℬ)   (i = 2, 3, ...)
𝒱* = 𝒞 ∩ A⁻¹(𝒱* + ℬ)    (2.17)

Property 2.5  Dualities

max𝒱(A, ℬ, 𝒞) = (min𝒮(A^T, ℬ⊥, 𝒞⊥))⊥
min𝒮(A, 𝒞, ℬ) = (max𝒱(A^T, 𝒞⊥, ℬ⊥))⊥
Remark: Refer to the discrete-time triple (A_d, B_d, C_d), i.e., to equations (1.10) with D_d = 0. Algorithm 2.5 with A = A_d, ℬ = im B_d and 𝒞 = ker C_d at the generic i-th step provides the set of all states reachable from the origin with trajectories having all the states but the last one belonging to ker C_d, hence invisible at the output. Thus 𝒮* has a control meaning in the discrete-time dynamics: it is the maximum subspace of the state space reachable from the origin with this type of trajectories in ρ steps, where ρ is the number of iterations required for (2.16) to converge to 𝒮*.
28
Algorithm 2.7  Computation of a matrix F such that (A + BF) 𝒱 ⊆ 𝒱. Let V be a basis matrix of the (A, ℬ)-controlled invariant 𝒱. First, compute

[ X ; U ] = [V  B]⁺ A V

where the symbol ⁺ denotes the pseudoinverse. Then, compute

F := −U (V^T V)⁻¹ V^T
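Algorithm 2.7 is essentially a one-liner around the pseudoinverse; a sketch (V is assumed to be a basis matrix of an (A, ℬ)-controlled invariant):

% Sketch of Algorithm 2.7: F such that (A + B*F)*V stays in im V.
function F = effe_sketch(A, B, V)
    XU = pinv([V, B]) * (A * V);     % stacked solution [X; U] of A*V = V*X + B*U
    U  = XU(size(V, 2) + 1:end, :);
    F  = -U * ((V' * V) \ V');       % F := -U * (V'*V)^{-1} * V'
end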
Algorithm 2.8  Computation of the internal unassignable eigenstructure of an (A, ℬ)-controlled invariant. A matrix P representing the map (A + BF)|𝒱/ℛ_𝒱 up to an isomorphism is derived as follows. Let us consider the similarity transformation T := [T_1  T_2  T_3], with im T_1 = ℛ_𝒱, im [T_1  T_2] = 𝒱 and T_3 such that T is nonsingular. In the new basis the matrix A + BF is expressed by

(A + BF)′ = T⁻¹ (A + BF) T = [ A′_11  A′_12  A′_13 ;  O  A′_22  A′_23 ;  O  O  A′_33 ]

The requested matrix is P := A′_22.
29
Computational support with Matlab

Q = mainco(A,B,X)   Maximal (A, im B)-controlled invariant contained in im X
Q = miinco(A,C,X)   Minimal (A, im C)-conditioned invariant containing im X
F = effe(A,B,X)   State feedback matrix such that (A+BF) im X ⊆ im X
[P,Q] = stabv(A,B,X)   Matrices for the internal and external stability of the (A, im B)-controlled invariant im X
F = effest(A,B,X,ei,ee)   Stabilizing feedback matrix setting the assignable eigenvalues as ei and the assignable external eigenvalues as ee
30
The Geometric Characterization
of Some Properties of Linear Systems
Consider the standard continuous-time system – triple (A, B, C)

ẋ(t) = A x(t) + B u(t)
y(t) = C x(t)    (3.1)

or the standard discrete-time system – triple (A_d, B_d, C_d)

x(k+1) = A_d x(k) + B_d u(k)
y(k) = C_d x(k)    (3.2)

(we consider triples since they provide a better insight and the extension to quadruples is straightforward – obtainable with a suitable state extension)
Systems (3.1) and (3.2) with x(0) = 0 define linear maps T_f : 𝒰_f → 𝒴_f from the space 𝒰_f of the admissible input functions to the functional space 𝒴_f of the zero-state responses. These maps are defined by the convolution integral and the convolution summation

y(t) = C ∫_0^t e^{A(t−τ)} B u(τ) dτ    (3.3)

y(k) = C_d Σ_{h=0}^{k−1} A_d^{k−h−1} B_d u(h)    (3.4)

The admissible input functions are:
- piecewise continuous and bounded functions of time t for (3.3);
- bounded functions of the discrete time k for (3.4).
31
Left and Right Invertibility
Definition 3.1  Assume that B has maximal rank. System (3.1) is said to be invertible (left-invertible) if, given any output function y(t), t ∈ [0, t_1], t_1 > 0, belonging to im T_f, there exists a unique input function u(t), t ∈ [0, t_1), such that (3.3) holds.

Definition 3.2  Assume that B_d has maximal rank. System (3.2) is said to be invertible (left-invertible) if, given any output function y(k), k ∈ [0, k_1], k_1 ≥ n, belonging to im T_f, there exists a unique input function u(k), k ∈ [0, k_1 − 1], such that (3.4) holds.
Definition 3.3  Assume that C has maximal rank. System (3.1) is said to be functionally controllable (right-invertible) if there exists an integer ρ ≥ 1 such that, given any output function y(t), t ∈ [0, t_1], t_1 > 0, with ρ-th derivative piecewise continuous and such that y(0) = 0, ..., y^(ρ)(0) = 0, there exists at least one input function u(t), t ∈ [0, t_1), such that (3.3) holds. The minimum value of ρ satisfying the above statement is called the relative degree of the system.

Definition 3.4  Assume that C_d has maximal rank. System (3.2) is said to be functionally controllable (right-invertible) if there exists an integer ρ ≥ 1 such that, given an output function y(k), k ∈ [0, k_1], k_1 ≥ ρ, such that y(k) = 0, k ∈ [0, ρ − 1], there exists at least one input function u(k), k ∈ [0, k_1 − 1], such that (3.4) holds. The minimum value of ρ satisfying the above statement is called the relative degree of the system.
32
Let 𝒱* := max𝒱(A, im B, ker C) and 𝒮* := min𝒮(A, ker C, im B).

Theorem 3.1  System (3.1) or (3.2) is invertible if and only if

𝒱* ∩ 𝒮* = {0}    (3.5)

Theorem 3.2  System (3.1) or (3.2) is functionally controllable if and only if

𝒱* + 𝒮* = 𝒳    (3.6)

Note the duality: if the system (A, B, C) or (A_d, B_d, C_d) is invertible (functionally controllable), the adjoint system (A^T, C^T, B^T) or (A_d^T, C_d^T, B_d^T) is functionally controllable (invertible).
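The two tests are immediate once 𝒱* and 𝒮* are available; the sketch below assumes the GA toolbox routines quoted in these notes (mainco, miinco, ints, sums) are on the path, with A, B, C given.

% Sketch: geometric invertibility tests of Theorems 3.1-3.2.
Vstar = mainco(A, B, null(C));                  % V* = max V(A, im B, ker C)
Sstar = miinco(A, null(C), B);                  % S* = min S(A, ker C, im B)
left_invertible  = (rank(ints(Vstar, Sstar)) == 0);          % V* intersect S* = {0}
right_invertible = (rank(sums(Vstar, Sstar)) == size(A, 1)); % V* + S* = whole state space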
Relative Degree

Property 3.1  Assume that (3.6) holds and consider the conditioned invariant computational sequence 𝒮_i (i = 1, 2, ...). The relative degree is the least integer ρ such that

𝒱* + 𝒮_ρ = 𝒳

Computational support with Matlab

r = reldeg(A,B,C,[D])   Relative degree of (A, B, C) or (A, B, C, D)
33
Fig. 3.1. Connections for right and left inversion.
In Fig. 3.1 Σ_f denotes a suitable relative-degree filter in the continuous-time case or a relative-degree delay in the discrete-time case. The inverse system Σ_i is to be designed to null the error e.
If the system is nonminimum-phase, i.e., has some unstable zeros, the inverse system is internally unstable, so that the time interval considered for the system inversion must be finite.
34
Invariant Zeros
Roughly speaking, an invariant zero corresponds to
a mode that, if suitably injected at the input of a
dynamic system, can be nulled at the output by a
suitable choice of the initial state.
Definition 3.5  The invariant zeros of (A, B, C) are the internal unassignable eigenvalues of 𝒱*. The invariant zero structure of (A, B, C) is the internal unassignable eigenstructure of 𝒱*.
Recall that ℛ_𝒱* = 𝒱* ∩ 𝒮*. The invariant zeros are the eigenvalues of the map (A + BF)|𝒱*/ℛ_𝒱*, where F denotes any matrix such that (A + BF) 𝒱* ⊆ 𝒱*.

Fig. 3.2. Decomposition of the map (A + BF)|𝒱* (stable and unstable zeros).
35
Property 3.2  Let W be a real m×m matrix having the invariant zero structure of (A, B, C) as eigenstructure. A real p×m matrix L and a real n×m matrix X exist, with (W, X) observable, such that by applying to (A, B, C) the input function

u(t) = L e^{Wt} v_0    (3.7)

where v_0 ∈ R^m denotes an arbitrary column vector, and starting from the initial state x_0 = X v_0, the output y(·) is identically zero, while the state evolution (on ker C) is described by

x(t) = X e^{Wt} v_0    (3.8)

Fig. 3.3. The meaning of Property 3.2.

Remark. In the discrete-time case equations (3.7) and (3.8) are replaced by u(k) = L W^k v_0 and x(k) = X W^k v_0, respectively.

Computational support with Matlab

z = gazero(A,B,C,[D])   Invariant zeros of (A, B, C) or (A, B, C, D)
36
Extension to Quadruples
Extension to quadruples of the above definitions and
properties can be obtained through a simple con-
trivance.
Fig. 3.4. Artifices to reduce a quadruple to a triple.
Refer to the first figure: system Σ_d is modeled by

u̇(t) = v(t)

and the overall system by

x̂̇(t) = Â x̂(t) + B̂ v(t)
y(t) = Ĉ x̂(t)

with

x̂ := [ x ; u ]      Â := [ A  B ;  0  0 ]      B̂ := [ 0 ; I_p ]      Ĉ := [ C  D ]
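In Matlab the reduction is a two-line block construction (A, B, C, D assumed given):

% Sketch: reduction of a quadruple (A,B,C,D) to the triple (Ah,Bh,Ch) by
% adding p integrators (continuous time) or delays (discrete time) at the input.
[n, p] = size(B);
Ah = [A, B; zeros(p, n + p)];
Bh = [zeros(n, p); eye(p)];
Ch = [C, D];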
37
The addition of integrators at inputs or outputs does not affect the system right and left invertibility, while the relative degree of (Â, B̂, Ĉ) must simply be reduced by 1 to be referred to (A, B, C, D).
In the discrete-time case Σ_d is described by

u(k+1) = v(k)

and the overall system by

x̂(k+1) = Â_d x̂(k) + B̂_d v(k)
y(k) = Ĉ_d x̂(k)

with the extended matrices Â_d, B̂_d, Ĉ_d defined like in the continuous-time case in terms of A_d, B_d, C_d, D_d.
This contrivance can also be used in most of the
synthesis problems considered in the sequel.
38
Disturbance Decoupling
The disturbance decoupling problem is one of the ear-
liest (1969) applications of the geometric approach.
Fig. 3.5. Disturbance decoupling with state feedback.
Let us consider the system

ẋ(t) = A x(t) + B u(t) + D d(t)
e(t) = E x(t)    (3.9)

where u denotes the manipulable input, d the disturbance input. Let ℬ := im B, 𝒟 := im D, ℰ := ker E.
The disturbance decoupling problem is: determine, if
possible, a state feedback matrix F such that distur-
bance d has no influence on output e.
The system with state feedback is described by

ẋ(t) = (A + BF) x(t) + D d(t)
e(t) = E x(t)    (3.10)

It behaves as requested if and only if its reachable set by d, i.e., the minimum (A + BF)-invariant containing 𝒟, is contained in ℰ.
39
Let 𝒱*_(ℬ,ℰ) := max𝒱(A, ℬ, ℰ). Since any (A + BF)-invariant is an (A, ℬ)-controlled invariant, the inaccessible disturbance decoupling problem has a solution if and only if

𝒟 ⊆ 𝒱*_(ℬ,ℰ)    (3.11)

Equation (3.11) is a structural condition and does not ensure internal stability. If stability is requested, we have the disturbance decoupling problem with stability. Stability is easily handled by using self-bounded controlled invariants. Assume that (A, B) is stabilizable (i.e., that ℛ = min𝒥(A, ℬ) is externally stable) and let

𝒱_m := 𝒱*_(ℬ,ℰ) ∩ 𝒮*_(ℰ,ℬ+𝒟)    (3.12)

This subspace has already been defined in Property 2.3. The following result, providing both the structural and the stability condition, is a direct consequence of Theorem 2.1.
Corollary 3.1  The disturbance decoupling problem with stability admits a solution if and only if

𝒟 ⊆ 𝒱*_(ℬ,ℰ)
𝒱_m is internally stabilizable    (3.13)

If conditions (3.13) are satisfied, a solution is provided by a state feedback matrix F such that (A + BF) 𝒱_m ⊆ 𝒱_m and σ(A + BF) is stable.
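The structural part of (3.13) is easily checked numerically; a sketch, assuming the GA toolbox routines quoted earlier (mainco, miinco, ints) and given A, B, D, E:

% Sketch: structural condition of (3.13).
Vstar = mainco(A, B, null(E));            % V*_(B,E) = max V(A, im B, ker E)
Sstar = miinco(A, null(E), [B, D]);       % S*_(E, B+D)
Vm    = ints(Vstar, Sstar);               % V_m as in (3.12)
structural_ok = (rank([Vstar, D]) == rank(Vstar));   % is D contained in V*_(B,E)?
% The stability condition is then checked on the internal unassignable
% eigenstructure of V_m (Algorithm 2.8, or the toolbox routine stabv).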
If the state is not accessible, disturbance decoupling
may be achieved through a dynamic unit similar to a
state observer. This is called disturbance decoupling
problem with dynamic measurement feedback, and
will be considered later.
40
Feedforward
Decoupling of Measurable Signals
Consider now the system

ẋ(t) = A x(t) + B u(t) + H h(t)
e(t) = E x(t)    (3.14)

The triple (A, B, E) is assumed to be stable. This is similar to (3.9), but with a different symbol for the non-manipulable input, to denote that it is accessible for measurement. Let ℋ := im H. Signals d_1 and r_p in the general block diagram in Fig. 1.1 are of this type.
Fig. 3.6. Measurable signal decoupling.
The measurable signal decoupling problem is: determine, if possible, a feedforward compensator Σ_c such that the input h has no influence on the output e. Conditions for this problem to be solvable with stability are similar to those of the disturbance decoupling problem, but state feedback is not required (a feedforward solution with a pre-compensator of the type shown in Fig. 2.5 is possible). Define

𝒱_m := 𝒱*_(ℬ,ℰ) ∩ 𝒮*_(ℰ,ℬ+ℋ)    (3.15)
41
The solvability conditions once again are a consequence of Theorem 2.1.

Corollary 3.2  The measurable signal decoupling problem with stability admits a solution if and only if

ℋ ⊆ 𝒱*_(ℬ,ℰ) + ℬ
𝒱_m is internally stabilizable    (3.16)
The feedforward unit Σ_c has state dimension equal to the dimension of 𝒱_m and includes a state feedback matrix F such that (A + BF)|𝒱_m is stable. It is not necessary to reproduce (A + BF)|𝒳/𝒱_m in Σ_c since it is not influenced by the input h. The assumption that Σ is stable is not restrictive. It can be relaxed to Σ being stabilizable and detectable, so that the stabilizing feedback connection shown in Fig. 2.5 can be used. This does not influence conditions (3.16) since input v in Fig. 2.5 clearly overrides the feedback signal through F.
Note that internal stabilizability of 𝒱_m is ensured if the plant is minimum-phase (with all the invariant zeros stable), since the internal unassignable eigenvalues of 𝒱_m are a part of those of 𝒱*_(ℬ,ℰ), which are invariant zeros of the plant.
It is possible to include feedthrough terms in (3.14)
by using the extensions to quadruples previously de-
scribed. In this case addition of a dynamic unit with
relative degree one at the output achieves our aim.
42
The Dual Problem: Unknown-Input Observation
Consider the system

ẋ(t) = A x(t) + D d(t)
y(t) = C x(t)
e(t) = E x(t)    (3.17)

The triple (A, D, C) is assumed to be stable. Output e denotes a linear function of the state to be estimated (possibly the whole state).
Fig. 3.7. Unknown-input observation.
The unknown-input observation problem is: determine, if possible, an observer Σ_o such that the input d has no influence on the estimation error ε = e − ẽ. Conditions for this problem to be solvable with stability are dual to those of the measurable signal decoupling problem. The problem can be solved by duality. Define

𝒮_M = 𝒮*_(𝒞,𝒟) + 𝒱*_(𝒟,𝒞∩ℰ)    (3.18)

like in (2.15). The solvability conditions are a consequence of Theorem 2.2.
Corollary 3.3  The unknown-input observation problem with stability admits a solution if and only if

𝒮*_(𝒞,𝒟) ∩ 𝒞 ⊆ ℰ
𝒮_M is externally stabilizable    (3.19)
43
Decoupling of Previewed Signals (Discrete-Time)
The role of controlled and conditioned invariants is
very clearly pointed out by the previewed signal de-
coupling problem in the discrete-time case. Consider
again signal decoupling, but suppose that there is
some preview (knowledge in advance) of the signal
h to be decoupled. To take into account preview,
replace the block diagram in Fig. 3.6 with that in
Fig. 3.8.
Fig. 3.8. Previewed signal decoupling.
a) relative-degree preview

If a relative-degree preview is available, the structural condition in Corollary 3.2 is relaxed as follows.

Corollary 3.4  The relative-degree previewed signal decoupling problem with stability admits a solution if and only if

ℋ ⊆ 𝒱*_(ℬ,ℰ) + 𝒮*_(ℰ,ℬ)
𝒱_m is internally stabilizable    (3.20)

where 𝒱_m is defined again by (3.15).

Note that the first condition in (3.20) is satisfied if Σ is right-invertible and the second is satisfied if it is minimum-phase.
44
b) large preview

A large preview time enables one to overcome the stability condition, thus making it possible to obtain signal decoupling also in the nonminimum-phase case. “Large” means significantly greater than the time constant of the unstable zero closest to the unit circle.
Property 3.3  The “largely” previewed signal decoupling problem with stability admits a solution if and only if

ℋ ⊆ 𝒱*_(ℬ,ℰ) + 𝒮*_(ℰ,ℬ)    (3.21)
Suppose that an impulse is scheduled at input h at time ρ. It can be decoupled with an input signal u of the type shown in the following figure, with preaction concerning the unstable zeros and postaction the stable zeros.

Fig. 3.9. Input sequence for decoupling an impulse at time ρ: preaction, dead-beat, postaction.

Localization of a previewed generic signal h(·) is achievable through a FIR system having such type of functions as gain.
45
Two different strategies are outlined according to
whether condition 2 in Corollary 3.4 is satisfied or
not. The basic idea is synthesized as follows.
Denote by ρ the least integer such that ℋ ⊆ 𝒱*_(ℬ,ℰ) + 𝒮_ρ. Let us recall that 𝒱_m is a locus of initial states in ℰ corresponding to trajectories controllable indefinitely in ℰ, while 𝒮_ρ is the maximum set of states that can be reached from the origin in ρ steps with all the states in ℰ except the last one. Suppose that an impulse is applied at input h at the time instant ρ, producing an initial state x_h ∈ ℋ, decomposable as x_h = x_{h,s} + x_{h,v}, with x_{h,s} ∈ 𝒮_ρ and x_{h,v} ∈ 𝒱_m. Let us apply the control sequence that drives the state from the origin to −x_{h,s} along a trajectory in 𝒮_ρ, thus nulling the first component. The second component can be maintained on 𝒱_m by a suitable control action in the time interval ρ ≤ k < ∞ while avoiding divergence of the state if all the internal unassignable modes of 𝒱_m are stable or stabilizable. If not, it can be further decomposed as x_{h,v} = x′_{h,v} + x″_{h,v}, with x′_{h,v} belonging to the subspace of the stable or stabilizable internal modes of 𝒱_m and x″_{h,v} to that of the unstable modes. The former component can be maintained on 𝒱_m as before, while the latter can be nulled by reaching −x″_{h,v} with a control action in the time interval −∞ < k ≤ ρ − 1 corresponding to a trajectory in 𝒱_m from the origin.
46
Unknown-input Delayed Observation
Fig. 3.10. Unknown-input delayed observation.
The dual problem is the unknown-input observation
of a linear function of the state with relative degree
delay if Σ is minimum phase or “large” delay if not.
The duals of Corollary 3.4 and Property 3.3 are stated
as follows.
Corollary 3.5  The unknown-input observation problem of a linear function of the state with relative-degree delay and stability admits a solution if and only if

𝒱*_(𝒟,𝒞) ∩ 𝒮*_(𝒞,𝒟) ⊆ ℰ
𝒮_M is externally stabilizable    (3.22)

where 𝒮_M is defined again by (3.18).
Note that the unknown-input observation of any linear
function of the state (possibly the whole state) with
relative degree delay is achievable if Σ is left-invertible
and minimum phase.
Property 3.4  The unknown-input observation problem of a linear function of the state with “large” delay and stability admits a solution if and only if

𝒱*_(𝒟,𝒞) ∩ 𝒮*_(𝒞,𝒟) ⊆ ℰ    (3.23)
47
Feedforward Model Following
The feedforward model following problem reduces to
decoupling of measured signals, as the following figure
shows.
Fig. 3.11. Feedforward model following.
Assume that system Σ is described by the triple (A, B, C) and model Σ_m by the triple (A_m, B_m, C_m). The overall system Σ̂ is described by

Â := [ A  0 ;  0  A_m ]      B̂ := [ B ; 0 ]      Ĥ := [ 0 ; B_m ]      Ê := [ C  −C_m ]    (3.24)
Both system and model are assumed to be stable, square, left and right invertible. The structural condition expressed by the former of (3.16) is satisfied if and only if the relative degree of Σ_m is at least equal to that of Σ.
48
It can be shown that the internal eigenvalues of the 𝒱_m associated with the extended system (3.24) are the union of the invariant zeros of Σ and the eigenvalues of A_m, so that in general model following with stability is not achievable if Σ is nonminimum-phase. If, on the other hand, the model Σ_m consists of q independent single-input single-output systems all having as zeros some invariant zeros of Σ, these are canceled as internal eigenvalues of that 𝒱_m. This makes it possible to achieve both input-output decoupling and internal stability, but restricts the model choice.
Note that the right inversion layout shown in Fig. 3.1
is achievable with a model consisting of q independent
relative-degree filters in the continuous-time case or q
independent relative-degree delays in the discrete-time
case.
The dual problem of model following is model follow-
ing by output feedforward correction, that reduces to
the left inversion layout shown in Fig. 3.1 if a model
consisting of p independent relative-degree filters in
the continuous-time case or p independent relative-
degree delays in the discrete-time case is adopted.
49
Feedback
Disturbance Decoupling by Dynamic Output Feedback
Fig. 4.1. Disturbance decoupling by dynamic output feedback.
Model of Σ:

ẋ(t) = A x(t) + B u(t) + D d(t)
y(t) = C x(t)
e(t) = E x(t)    (4.1)
The inputs u and d are the manipulable input and the
disturbance input, respectively, while outputs y and e
are the measured output and the controlled output,
respectively.
Model of Σ_c:

ż(t) = N z(t) + M y(t)
u(t) = L z(t) + K y(t)    (4.2)
The disturbance decoupling problem by dynamic out-
put feedback is stated as follows: determine, if pos-
sible, a dynamic compensator (N, M, L, K) such that
the disturbance d has no influence on the regulated
output e and the overall system is internally stable.
50
It has been shown that output dynamic feedback of
the type shown in Fig. 4.1 enables stabilization of
the overall system provided that (A, B) is stabilizable
and (A, C) detectable. Since overall system stability
is required, these conditions on (A, B) and (A, C) are
still necessary.
The overall system is described by

x̂̇(t) = Â x̂(t) + D̂ d(t)
e(t) = Ê x̂(t)    (4.3)

with

x̂ := [ x ; z ]      Â := [ A + BKC  BL ;  MC  N ]      D̂ := [ D ; 0 ]      Ê := [ E  0 ]    (4.4)

i.e., it can be described by a unique triple (Â, D̂, Ê).
Fig. 4.2. The overall system.
Output e is decoupled from input d if and only if min𝒥(Â, im D̂) (the reachable subspace of the pair (Â, D̂)) is contained in ker Ê or, equivalently, im D̂ is contained in max𝒥(Â, ker Ê). Furthermore, in order for the stability requirement to be satisfied, Â must be a stable matrix or min𝒥(Â, im D̂) and max𝒥(Â, ker Ê) must be both internally and externally stable.
51
Stated in very simple terms, disturbance decoupling is achieved if and only if the overall system (Â, D̂, Ê) exhibits at least one Â-invariant 𝒱̂ such that

im D̂ ⊆ 𝒱̂ ⊆ ker Ê
𝒱̂ is internally and externally stable    (4.5)
Necessary and sufficient conditions for solvability of our problem are stated in the following theorem.

Theorem 4.1  The dynamic measurement feedback disturbance decoupling problem with stability admits at least one solution if and only if there exist an (A, ℬ)-controlled invariant 𝒱 and an (A, 𝒞)-conditioned invariant 𝒮 such that:

𝒟 ⊆ 𝒮 ⊆ 𝒱 ⊆ ℰ
𝒮 is externally stabilizable
𝒱 is internally stabilizable    (4.6)
A short outline of the “only if” part of the proof. Define the following operations on subspaces of the extended state space of x̂:

projection:  P(𝒱̂) = { x : [ x ; z ] ∈ 𝒱̂ for some z }    (4.7)

intersection:  I(𝒱̂) = { x : [ x ; 0 ] ∈ 𝒱̂ }    (4.8)
52
Clearly, I(𝒱̂) ⊆ P(𝒱̂), and the extended subspaces 𝒟̂ := im D̂, ℰ̂ := ker Ê satisfy 𝒟 = I(𝒟̂) = P(𝒟̂), ℰ = P(ℰ̂) = I(ℰ̂). The “only if” part of the proof of Theorem 4.1 follows from (4.5) and the following lemmas.
Lemma 4.1  Subspace 𝒱̂ is an internally and/or externally stable Â-invariant only if P(𝒱̂) is an internally and/or externally stabilizable (A, ℬ)-controlled invariant.

Lemma 4.2  Subspace 𝒱̂ is an internally and/or externally stable Â-invariant only if I(𝒱̂) is an internally and/or externally stabilizable (A, 𝒞)-conditioned invariant.
The “if” part of the proof is constructive, i.e., if a resolvent pair (𝒮, 𝒱) is given, it directly provides a compensator (N, M, L, K) satisfying all the requirements in the statement of the problem. This consists of a special type of state observer fed by the measured output y plus a special feedback connection from the observer state to the manipulable input u.
53
A more constructive set of necessary and sufficient conditions, based on the dual lattice structures of self-bounded controlled invariants and their duals, providing a convenient set of resolvent pairs, is stated in the following theorem.

Theorem 4.2  Consider the subspaces 𝒱_m and 𝒮_M defined in (2.14) and (2.15). The dynamic measurement feedback disturbance decoupling problem with stability admits at least one solution if and only if

𝒮*_(𝒞,𝒟) ⊆ 𝒱*_(ℬ,ℰ)
𝒮_M is externally stabilizable
𝒱_M := 𝒱_m + 𝒮_M is internally stabilizable    (4.9)
If Theorem 4.2 holds, (𝒮_M, 𝒱_M) is a convenient resolvent pair. Similarly, define 𝒮_m := 𝒱_m ∩ 𝒮_M. It can easily be proven that (𝒮_m, 𝒱_m) is also a convenient resolvent pair.

Note that conditions (4.9) consist of a structural condition ensuring feasibility of disturbance decoupling without internal stability and two stabilizability conditions ensuring internal stability of the overall system.
54
The layout of the possible resolvent pairs in the dual lattice structure is shown in the following figure, which also points out the correspondences between any self-bounded controlled invariant belonging to the first lattice and an element of the second and vice versa. This enables other resolvent pairs satisfying Theorem 4.1 to be derived.

Fig. 4.3. The resolvents with minimum fixed poles.
55
The Autonomous Regulator Problem
Consider the block diagram shown in the following
figure.
Fig. 4.4. The closed-loop control scheme.
The regulator Σ_r achieves:
(i) closed-loop asymptotic stability or, more generally, pole assignability;
(ii) asymptotic (robust) tracking of the reference r and asymptotic (robust) rejection of the disturbance d.

Both the reference and disturbance inputs are steps, ramps, sinusoids, that can be generated by the exosystems Σ_e1 and Σ_e2. The eigenvalues of the exosystems are assumed to belong to the closed right half-plane of the complex plane.

The overall system considered, including the exosystems, is described by a linear homogeneous set of differential equations, whose initial state is the only variable affecting the evolution in time.
56
The plant and the exosystems are modelled as a unique regulated system which is not completely controllable or stabilizable (the exosystem is not controllable). The corresponding equations are

ẋ(t) = A x(t) + B u(t)
e(t) = E x(t)    (4.10)

with

x := [ x_1 ; x_2 ]      A := [ A_1  A_3 ;  0  A_2 ]      B := [ B_1 ; 0 ]      E := [ E_1  E_2 ]
In (4.10) the plant corresponds to the triple (A_1, B_1, E_1). Note that the exosystem state x_2 influences both the plant through matrix A_3 and the error e through matrix E_2. (A_1, B_1) is assumed to be stabilizable and (A, E) detectable.
The regulator is modelled like in the disturbance decoupling problem by measurement feedback, i.e.

ż(t) = N z(t) + M e(t)
u(t) = L z(t) + K e(t)    (4.11)
57
Fig. 4.5. Regulated system and regulator connection.
The overall system is referred to as the autonomous extended system

x̂̇(t) = Â x̂(t)
e(t) = Ê x̂(t)    (4.12)

with

x̂ := [ x_1 ; x_2 ; z ]      Â := [ A_1 + B_1KE_1   A_3 + B_1KE_2   B_1L ;  O  A_2  O ;  ME_1  ME_2  N ]      Ê := [ E_1  E_2  O ]
58
Let x_1 ∈ R^{n_1}, x_2 ∈ R^{n_2}, z ∈ R^m. If the internal model principle is used to design the regulator, the autonomous extended system is characterized by an unobservability subspace containing these modes, which are all not strictly stable by assumption. In geometric terms, an Â-invariant 𝒱̂ ⊆ ker Ê having dimension n_2 exists that is internally not strictly stable.
Since the eigenvalues of Â are clearly those of A_2 plus those of the regulation loop, which are strictly stable, 𝒱̂ is externally strictly stable. Hence Â|𝒱̂ has the eigenstructure of A_2 (n_2 eigenvalues) and Â|𝒳̂/𝒱̂ that of the control loop (n_1 + m eigenvalues).
The existence of this Â-invariant 𝒱̂ ⊆ ker Ê is preserved under parameter changes.
The autonomous regulator problem is stated as follows: derive, if possible, a regulator (N, M, L, K) such that the closed-loop system with the exosystem disconnected is stable and lim_{t→∞} e(t) = 0 for all the initial states of the autonomous extended system.

In geometric terms it is stated as follows: refer to the extended system (Â, Ê) and let ℰ̂ := ker Ê. Given the mathematical model of the plant and the exosystem, determine, if possible, a regulator (N, M, L, K) such that an Â-invariant 𝒱̂ exists satisfying

𝒱̂ ⊆ ℰ̂
σ(Â|𝒳̂/𝒱̂) ⊆ C⁻    (4.13)
59
In the extended state space 𝒳̂ with dimension n_1 + n_2 + m, define the Â-invariant extended plant 𝒫̂ as

𝒫̂ := { x̂ : x_2 = 0 } = im [ I_{n_1}  O ;  O  O ;  O  I_m ]    (4.14)
By a dimensionality argument, the Â-invariant 𝒱̂, besides (4.13), must satisfy

𝒱̂ ⊕ 𝒫̂ = 𝒳̂    (4.15)

The main theorem on asymptotic regulation simply translates the extended state space conditions (4.13) and (4.15) into the plant plus exosystem state space where matrices A, B and E are defined. Define the A-invariant plant 𝒫 through

𝒫 := { x : x_2 = 0 } = im [ I_{n_1} ; O ]    (4.16)

Theorem 4.3  Let ℰ := ker E. The autonomous regulator problem admits a solution if and only if an (A, ℬ)-controlled invariant 𝒱 exists such that

𝒱 ⊆ ℰ
𝒱 ⊕ 𝒫 = 𝒳    (4.17)

The “only if” part of the proof derives from (4.13) and (4.15), while the “if” part provides a quadruple (N, M, L, K) that solves the problem.
60
Unfortunately the necessary and sufficient conditions stated in Theorem 4.3 are nonconstructive. The following theorem provides constructive sufficient and almost necessary* conditions in terms of the invariant zeros of the plant.

Theorem 4.4  Let us define 𝒱* := max𝒱(A, ℬ, ℰ). The autonomous regulator problem admits a solution if

𝒱* + 𝒫 = 𝒳
𝒵(A_1, B_1, E_1) ∩ σ(A_2) = ∅    (4.18)

where 𝒵(A_1, B_1, E_1) denotes the set of invariant zeros of the plant.
Remark: We have again a structural condition and a stability condition in terms of invariant zeros. However, the stability condition is very mild in this case since it is only required that the plant has no invariant zeros equal to eigenvalues of the exosystem. Hence the autonomous regulator problem may also be solvable if the plant is nonminimum-phase. In other words, minimality of phase is only required for perfect tracking, not for asymptotic tracking.

Corollary 4.1 (Uniqueness of the resolvent)  If the plant is invertible and conditions (4.18) are satisfied, a unique (A, ℬ)-controlled invariant 𝒱 satisfying conditions (4.17) exists.
* The conditions become necessary if the boundedness of the control variable u is required. This is possible also when the output y is unbounded if a part of the internal model is contained in the plant.
61
Proof of Theorem 4.4:

Let F be a matrix such that (A + BF) 𝒱* ⊆ 𝒱*. Introduce the similarity transformation T := [T_1  T_2  T_3], with im T_1 = 𝒱* ∩ 𝒫, im [T_1  T_2] = 𝒱* and T_3 such that im [T_1  T_3] = 𝒫.

In the new basis the linear transformation A + BF has the structure

A′ = T⁻¹ (A + BF) T = [ A′_11  A′_12  A′_13 ;  O  A′_22  O ;  O  O  A′_33 ]    (4.19)

Recall that 𝒫 is an A-invariant and note that, owing to the particular structure of B, it is also an (A + BF)-invariant for any F.
By a dimensionality argument the eigenvalues of the exosystem are those of A′_22, while the invariant zeros of (A_1, B_1, E_1) are a subset of σ(A′_11) since ℛ_𝒱* is contained in 𝒱* ∩ 𝒫. All the other elements of σ(A′_11) are arbitrarily assignable with F. Hence, owing to (4.18), the Sylvester equation

A′_11 X − X A′_22 = −A′_12    (4.20)

admits a unique solution. The matrix

V := T_1 X + T_2

is a basis matrix of an (A, ℬ)-controlled invariant 𝒱 satisfying the solvability conditions (4.17).
62
Remarks:
• The proof of Theorem 4.4 provides the computational framework to derive a resolvent when the sufficient conditions stated (which are also necessary if the boundedness of the plant input is required) are satisfied.
• Relations (4.18) are respectively a structural condition and a spectral condition; they are easily checkable by means of the algorithms previously described.
• When a resolvent has been determined by means of the computational procedure described in the proof of Theorem 4.4, it can be used to derive a regulator with the procedure outlined in the “if” part of the proof of Theorem 4.3.
• The order of the obtained regulator is n (that of the plant plus that of the exosystem) with the corresponding 2n_1 + n_2 closed-loop eigenvalues completely assignable under the assumption that (A_1, B_1) is controllable and (A, E) observable.
• The internal model principle is satisfied since from the proof of the “if” part of Theorem 4.3 it follows that the eigenstructure of the regulator system matrix N contains that of A_2.
• It is necessary to repeat an exosystem for every regulated output to achieve independent steady-state regulation (different internal models are obtained in the regulator).
63
Feedback Model Following
The reference block diagram for feedback model following is shown in Fig. 4.6. Like in the feedforward case, both Σ and Σ_m are assumed to be stable and Σ_m to have at least the same relative degree as Σ.
Fig. 4.6. Feedback model following.
Replacing the feedback connection with that shown
in Fig. 4.7 does not affect the structural properties
of the system. However, it may affect stability. The
new block diagram represents a feedforward model
following problem.
Fig. 4.7. A structurally equivalent connection.
64
In fact, note that h is obtained as the difference of r (applied to the input of the model) and y_m (the output of the model). This corresponds to the parallel connection of Σ_m and the opposite of the identity matrix, which is invertible, having zero relative degree. Its inverse is Σ_m with a feedback connection through the identity matrix, as shown in Fig. 4.8.
Fig. 4.8. A structurally equivalent block diagram.
Let the model consist of q independent single-input single-output systems all having as zeros the unstable invariant zeros of Σ. Since the invariant zeros of a system are preserved under any feedback connection, a feedforward model following compensator designed with reference to the block diagram in Fig. 4.8 does not include them as poles.

It is also possible to include multiple internal models in the feedback connection shown in the figure (this is well known in the single input/output case), which are repeated in the compensator, so that both Σ′_m and the compensator may be unstable systems. In fact, zero output in the modified system may be obtained as the difference of diverging signals. However, stability is recovered when going back to the original feedback connection represented in Fig. 4.6.
65
Geometric Approach to LQR Problems
Consider again the disturbance decoupling problem by state feedback, corresponding to the state equations

ẋ(t) = A x(t) + B u(t) + D d(t)
e(t) = E x(t)    (5.1)

in the continuous-time case and to the equations

x(k+1) = A_d x(k) + B_d u(k) + D_d d(k)
e(k) = E_d x(k)    (5.2)

in the discrete-time case. The corresponding block diagram is represented in Fig. 5.1.

Fig. 5.1. Disturbance decoupling by state feedback.
Assume that the necessary and sufficient conditions for its solvability with internal stability

𝒟 ⊆ 𝒱*_(ℬ,ℰ)
𝒱_m is internally stabilizable    (5.3)

are not satisfied. In this case a convenient resort is to minimize the H_2 norm of the matrix transfer function from input d to output e, defined by equation (1.14) or (1.15) in the continuous-time case and equation (1.16) or (1.17) in the discrete-time case.
66
The continuous-time case
Consider the following problem:
Problem 5.1  Referring to system (5.1), determine a state feedback matrix F such that A + BF is stable and the corresponding state trajectory for any initial state x(0) minimizes the performance index

J = ∫_0^∞ e(t)^T e(t) dt = ∫_0^∞ x(t)^T E^T E x(t) dt    (5.4)
This problem is the so-called “cheap version” of the classical Kalman regulator problem or Linear Quadratic Regulator (LQR) problem. In the Kalman problem the performance index is

J = ∫_0^∞ ( x(t)^T Q x(t) + u(t)^T R u(t) ) dt = ∫_0^∞ ( x(t)^T C^T C x(t) + u(t)^T D^T D u(t) ) dt

where the matrices Q and R are symmetric positive semidefinite and positive definite respectively, hence factorizable as shown. It can be proven that the cheap version is the more general, since the input-to-output feedthrough term u(t)^T D^T D u(t) can be accounted for with a suitable state extension.
67
Problem 5.1 is solvable with the geometric tools. According to the classical optimal control approach, consider the Hamiltonian function

H(t) := x(t)^T E^T E x(t) + p(t)^T (A x(t) + B u(t))

and derive the state and costate equations and the stationarity condition as

ẋ(t) = (∂H(t)/∂p(t))^T = A x(t) + B u(t)
ṗ(t) = −(∂H(t)/∂x(t))^T = −2 E^T E x(t) − A^T p(t)
0 = (∂H(t)/∂u(t))^T = B^T p(t)

This overall Hamiltonian system can also be written as

x̂̇(t) = Â x̂(t) + B̂ u(t)
0 = Ê x̂(t)    (5.5)
with

x̂ = [ x ; p ]      Â = [ A  0 ;  −2E^T E  −A^T ]      B̂ = [ B ; 0 ]      Ê = [ 0  B^T ]    (5.6)
Problem 5.1 admits a solution if and only if there exists an internally stable (Â, B̂)-controlled invariant of the overall Hamiltonian system contained in ℰ̂ := ker Ê whose projection on the state space of system (5.1), defined as in (4.7), contains the initial state x(0).
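The construction of the Hamiltonian matrices (5.6) is mechanical; a Matlab sketch (A, B, E assumed given):

% Sketch: extended matrices (5.6) of the Hamiltonian system for cheap LQR.
n  = size(A, 1);
Ah = [A, zeros(n); -2*(E'*E), -A'];
Bh = [B; zeros(n, size(B, 2))];
Eh = [zeros(size(B, 2), n), B'];
% The controlled invariant max V(Ah, im Bh, ker Eh) can then be computed
% with the same geometric algorithms used for ordinary triples.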
68
It can be proven that the internal unassignable eigenvalues of 𝒱̂* := max𝒱(Â, im B̂, ker Ê) are stable–unstable by pairs. Hence a solution of Problem 5.1 is obtained as follows:

1. compute 𝒱̂*;
2. compute a matrix F̂ such that (Â + B̂F̂) 𝒱̂* ⊆ 𝒱̂* and the assignable eigenvalues (those internal to ℛ_𝒱̂*) are stable;
3. compute 𝒱̂_s, the maximum internally stable (Â + B̂F̂)-invariant contained in 𝒱̂*;
4. if x(0) ∈ P(𝒱̂_s) the problem admits a solution F, which is easily computable as a function of 𝒱̂_s and F̂; if not, the problem has no solution.
Refer to Fig. 5.1. The above procedure also provides a state feedback matrix F corresponding to the minimum H_2 norm from d to e. This immediately follows from expression (1.15) of the H_2 norm in terms of the impulse response. In fact, the impulse response corresponds to the set of initial states defined by the column vectors of matrix D. Thus, the problem of minimizing the H_2 norm from d to e has a solution if and only if

𝒟 ⊆ P(𝒱̂_s)

Thus, the minimum-H_2-norm disturbance almost decoupling problem has no solution if the above condition is not satisfied. The discrete-time case is particularly interesting since a solution always exists. The reason for this will be pointed out below.
69
The discrete-time case
The discrete-time cheap LQR problem is stated as
follows.
Problem 5.2  Referring to system (5.2), determine a state feedback matrix F_d such that A_d + B_d F_d is stable and the corresponding state trajectory for any initial state x(0) minimizes the performance index

J = Σ_{k=0}^{∞} e(k)^T e(k) = Σ_{k=0}^{∞} x(k)^T E_d^T E_d x(k)    (5.7)
In this case the Hamiltonian function is

H(k) := x(k)^T E_d^T E_d x(k) + p(k+1)^T (A_d x(k) + B_d u(k))

and the state and costate equations and the stationarity condition are

x(k+1) = (∂H(k)/∂p(k+1))^T = A_d x(k) + B_d u(k)
p(k) = (∂H(k)/∂x(k))^T = 2 E_d^T E_d x(k) + A_d^T p(k+1)
0 = (∂H(k)/∂u(k))^T = B_d^T p(k+1)
70
Like in the continuous-time case, it is convenient to state the overall Hamiltonian system in compact form:

x̂(k+1) = Â_d x̂(k) + B̂_d u(k)
0 = Ê_d x̂(k)    (5.8)

with

x̂ = [ x ; p ]      Â_d = [ A_d  0 ;  −2A_d^{−T} E_d^T E_d   A_d^{−T} ]      B̂_d = [ B_d ; 0 ]      Ê_d = [ −2B_d^T A_d^{−T} E_d^T E_d   B_d^T A_d^{−T} ]    (5.9)
A solution of Problem 5.2 is obtained again with a geometric procedure, but, unlike the continuous-time case, in this case a dead-beat-like motion is also feasible and P(𝒱̂_s) covers the whole state space of system (5.2). Hence both Problem 5.2 and the problem of minimizing the H_2 norm from d to e are always solvable in the discrete-time case.
71
Fig. 5.2. Cheap H_2 optimal control (dead-beat action followed by postaction).
A typical control sequence is shown in Fig. 5.2: as the sampling time approaches zero, the dead-beat segment tends to a distribution, which is not obtainable with state feedback. For this reason solvability of the H_2 optimal decoupling problem is more restricted in the continuous-time case.
If the signal to be optimally decoupled is measurable and the system considered is stable, state feedback can be used in an auxiliary feedforward unit of the type shown in Fig. 3.6, while the dual layout shown in Fig. 3.7 realizes the H_2-optimal observation of a linear function of the state or possibly of the whole state (Kalman filter).
However, if the signal is not measurable and the state is not accessible, the problem of H_2-optimal decoupling with dynamic output feedback can be stated and solvability conditions derived by using geometric techniques again.
72
Conclusions

The three types of input signals:
• disturbance (eliminable only with feedback)
• measurable
• previewed

The seven characterizing properties of systems:
• (internal) stability
• controllability
• observability
• invertibility
• functional controllability
• relative degree
• minimality of phase

In general, the necessary and sufficient conditions for solvability of control problems consist of
• a structural condition
• a stability condition

When a tracking or disturbance rejection problem is not perfectly solvable with internal stability, it is possible to resort to H_2 optimal solutions that can also be obtained through the standard geometric tools and algorithms.
http://www.deis.unibo.it/Staff/FullProf/GiovanniMarro/geometric.htm
73