
Chapter 1

State-space methods
1.1 State-space representations
1.1.1 Example of a state-space model
State-space models arise naturally in the modeling of physical systems. For example, a permanent magnet brush DC motor is described by the equations
$$ L\,\frac{di}{dt} = v - Ri - K\omega, \qquad J\,\frac{d\omega}{dt} = Ki \tag{1.1} $$
where $i(t)$ is the current in the motor (in A), $v(t)$ is the voltage applied to the motor (in V), and $\omega(t)$ is the speed of the motor (in rad/s). For simplicity, it is assumed that there is no load or friction torque. The parameters are $R$, the resistance of the armature winding (in Ω), $L$, the inductance of the armature winding (in H), $K$, the back-emf constant (in V.s), and $J$, the inertia of the motor (in kg.m²). The back-emf constant is also the torque constant (in N.m/A).
The two differential equations may be written as
$$ \frac{dx_1}{dt} = -\frac{R}{L}x_1 - \frac{K}{L}x_2 + \frac{1}{L}u, \qquad \frac{dx_2}{dt} = \frac{K}{J}x_1 \tag{1.2} $$
where $x_1 = i$, $x_2 = \omega$, and $u = v$. In matrix form,
$$ \frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} -R/L & -K/L \\ K/J & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1/L \\ 0 \end{bmatrix} u \tag{1.3} $$
One also uses the notation $\dot{x}$ for $dx/dt$. Typically, the measured outputs of the system are the current $i$ and the speed $\omega$, but in so-called sensorless methods of control, only the current is measured. Then, the output can be written as
$$ y = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{1.4} $$
where $y = i$. Equations (1.3) and (1.4) constitute a state-space model for the motor.
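As a quick illustration, the model can be entered in Matlab; this is a minimal sketch, and the numerical parameter values below are hypothetical, chosen only for illustration:

    R = 1.0; L = 0.05; K = 0.1; J = 1e-3;   % assumed values: Ohm, H, V.s, kg.m^2
    A = [-R/L -K/L; K/J 0];
    B = [1/L; 0];
    C = [1 0];                              % only the current is measured
    D = 0;
    motor = ss(A,B,C,D);                    % state-space object (Control System Toolbox)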
1.1.2 General form of a state-space model
In general, a state-space representation has the form
$$ \dot{x} = Ax + Bu, \qquad y = Cx + Du \tag{1.5} $$
where:
$x$ is a column vector of dimension $n$, called the state vector,
$u$ is a scalar signal, the input of the system,
$y$ is a scalar signal, the output of the system,
$A$ is an $n \times n$ matrix,
$B$ is a column vector of dimension $n$,
$C$ is a row vector of dimension $n$,
$D$ is a scalar.
The dimensions of all the elements and of the products are shown in Fig. 1.1.
Figure 1.1: Matrix products in the state-space model
It is also possible to consider systems with multiple inputs and multiple outputs, in which case $u$ and $y$ become vectors, and $B$, $C$, and $D$ become matrices. Many state-space results extend trivially from single-input single-output to multiple-input multiple-output systems, but some require significant adjustments. Only single-input single-output systems will be considered here.
1.1.3 State-space analysis
Some simple yet general results can be obtained by applying the Laplace transform to the state-space model. The equations for the state-space model are
$$ \dot{x} = Ax + Bu, \qquad y = Cx + Du \tag{1.6} $$
so that, in the Laplace domain,
$$ sX(s) - x(0) = AX(s) + BU(s), \qquad Y(s) = CX(s) + DU(s) \tag{1.7} $$
where $x(0)$ is the initial condition of the state vector ($x(t)$ at $t = 0$), and $X(s)$, $U(s)$, and $Y(s)$ are the Laplace transforms of $x(t)$, $u(t)$, and $y(t)$, respectively. The first (vector) equation of (1.7) gives
$$ sX(s) - AX(s) = (sI - A)X(s) = x(0) + BU(s) \tag{1.8} $$
where $I$ is the identity matrix of dimension $n \times n$. Therefore, the transform of the output is
$$ Y(s) = \underbrace{C(sI-A)^{-1}x(0)}_{\text{Response to initial conditions}} + \underbrace{\left[ C(sI-A)^{-1}B + D \right] U(s)}_{\text{Response to input}} \tag{1.9} $$
The dimensions of the terms in the expression are, in the order in which they appear,
$$ 1 \times 1 = (1 \times n)(n \times n)(n \times 1) + \big( (1 \times n)(n \times n)(n \times 1) + (1 \times 1) \big)(1 \times 1) \tag{1.10} $$
As identified above, the output $Y(s)$ obtained from the state-space model has two terms: a term due to initial conditions, and a term due to the input. The transfer function relates the input of the system to its output and is found to be
$$ H(s) = C(sI-A)^{-1}B + D \tag{1.11} $$
Although this transfer function may be complicated to compute, the poles are determined by the common denominator of the elements of the matrix $(sI-A)^{-1}$, which is $\det(sI-A)$. Therefore, the poles are given by the roots of $\det(sI-A) = 0$, which are the eigenvalues of the matrix $A$. The eigenvalues may be found easily, even for large matrices, using a mathematical package.
Example: for the DC motor model,
$$ A = \begin{bmatrix} -R/L & -K/L \\ K/J & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 1/L \\ 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad D = 0 \tag{1.12} $$
so that the transfer function is
$$ H(s) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s + R/L & K/L \\ -K/J & s \end{bmatrix}^{-1} \begin{bmatrix} 1/L \\ 0 \end{bmatrix}
= \frac{1}{s^2 + (R/L)s + K^2/LJ} \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} s & -K/L \\ K/J & s + R/L \end{bmatrix} \begin{bmatrix} 1/L \\ 0 \end{bmatrix}
= \frac{(1/L)s}{s^2 + (R/L)s + K^2/LJ} \tag{1.13} $$
The poles of the system are the roots of $s^2 + (R/L)s + K^2/LJ = 0$. Note that there is a zero in the transfer function at $s = 0$, because the steady-state current is zero for any constant voltage and speed when there is no friction torque.
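As a sketch, the poles and the zero can be checked numerically in Matlab, using the hypothetical motor parameters and matrices defined earlier:

    poles = eig(A)                         % roots of det(sI - A)
    [num,den] = tfdata(ss(A,B,C,D),'v');   % num = [0 1/L 0], den = [1 R/L K^2/(L*J)]
    roots(num)                             % the zero at s = 0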
The response to the initial conditions is
$$ C(sI-A)^{-1}x(0) = \frac{s\,x_1(0) - (K/L)\,x_2(0)}{s^2 + (R/L)s + K^2/LJ} \tag{1.14} $$
where $x_1(0)$ is the initial current in the motor and $x_2(0)$ is the initial speed of the motor. The denominator is $\det(sI-A) = s^2 + (R/L)s + K^2/LJ$, which is the same as for the transfer function of the system.
1.1.4 State-space realizations
A state-space realization for a transfer function $H(s)$ is a set of $A$, $B$, $C$, $D$ such that $H(s) = C(sI-A)^{-1}B + D$. Before proceeding on this topic, we introduce the following definitions:
a function of $s$ is called a rational function of $s$ if it is the ratio of two polynomials,
a rational function $N(s)/D(s)$ is called proper if $\deg N(s) \leq \deg D(s)$,
a rational function $N(s)/D(s)$ is called strictly proper if $\deg N(s) < \deg D(s)$,
a polynomial is called monic if the coefficient of its highest power is equal to 1.
Note that $(sI-A)^{-1}$ is a matrix of rational functions of $s$. All the denominators in the matrix originate from the monic polynomial $\det(sI-A)$, whose degree is equal to the dimension of the state $x$. Even after possible cancellations between the roots of the numerators and denominators of the matrix, all denominator degrees remain strictly greater than the numerator degrees. As a result, the transfer function of a state-space model must be strictly proper if $D = 0$. For $D \neq 0$, the transfer function must be proper.
An interesting fact is that a state-space model can always be created that realizes a given proper transfer function. The procedure works as follows. Without loss of generality, one may assume that the coefficient of the highest power of $s$ in $D(s)$ is equal to 1. Indeed, if this coefficient were not equal to 1, one could divide both the numerator and the denominator by it, and the leading denominator coefficient would become 1.
Next, define
$$ k_P = \lim_{s \to \infty} \frac{N(s)}{D(s)} \tag{1.15} $$
In other words, $k_P$ is the coefficient of the highest power of $s$ in $N(s)$. Then, the transfer function can be split as
$$ \frac{N(s)}{D(s)} = k_P + \frac{\bar{N}(s)}{D(s)} \tag{1.16} $$
where
$$ \bar{N}(s) = N(s) - k_P\,D(s) \tag{1.17} $$
and $\deg \bar{N}(s) < \deg D(s)$.
Next, label the coefficients of $\bar{N}(s)/D(s)$ with
$$ \frac{\bar{N}(s)}{D(s)} = \frac{a_n s^{n-1} + \cdots + a_2 s + a_1}{s^n + b_n s^{n-1} + \cdots + b_2 s + b_1} \tag{1.18} $$
Then, the matrices of the so-called controllable canonical realization are defined to be
$$ A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & 1 \\ -b_1 & -b_2 & \cdots & & -b_n \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}, \qquad C = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}, \qquad D = k_P \tag{1.19} $$
The state-space equations are identical to
$$ \dot{x}_1 = x_2, \quad \dot{x}_2 = x_3, \quad \ldots, \quad \dot{x}_{n-1} = x_n $$
$$ \dot{x}_n = -b_1 x_1 - \cdots - b_n x_n + u $$
$$ y = a_1 x_1 + \cdots + a_n x_n + k_P u \tag{1.20} $$
so that the state-space realization can be implemented using integrators, multipliers, and
summing junctions, using the diagram of Fig. 1.2.
Figure 1.2: Realization of a linear time-invariant system using integrators
To prove that the state-space system indeed has the required transfer function, apply the Laplace transform to the equations, assuming zero initial conditions, so that
$$ X_2(s) = sX_1(s), \quad \ldots, \quad X_n(s) = s^{n-1}X_1(s) \tag{1.21} $$
and
$$ \left( s^n + b_n s^{n-1} + \cdots + b_2 s + b_1 \right) X_1(s) = U(s) $$
$$ \left( a_1 + a_2 s + \cdots + a_n s^{n-1} \right) X_1(s) + k_P U(s) = Y(s) \tag{1.22} $$
Therefore,
$$ Y(s) = \left( \frac{\bar{N}(s)}{D(s)} + k_P \right) U(s) \tag{1.23} $$
which is the desired result.
Example: the DC motor considered earlier had a transfer function
$$ H(s) = \frac{(1/L)s}{s^2 + (R/L)s + K^2/LJ} \tag{1.24} $$
that may be realized using
$$ A = \begin{bmatrix} 0 & 1 \\ -K^2/LJ & -R/L \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1/L \end{bmatrix}, \quad D = 0 \tag{1.25} $$
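A minimal numerical check that (1.25) realizes (1.24), assuming the motor matrices and hypothetical parameters from the earlier sketch are still in the workspace:

    Ac = [0 1; -K^2/(L*J) -R/L];  Bc = [0; 1];  Cc = [0 1/L];
    [n1,d1] = tfdata(ss(A,B,C,0),'v');     % original motor realization
    [n2,d2] = tfdata(ss(Ac,Bc,Cc,0),'v');  % canonical realization (1.25)
    norm(n1-n2) + norm(d1-d2)              % should be zero (up to rounding)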
Note that the controllable canonical realization is different from the one that gave rise to the transfer function. Indeed, a state-space model is not unique. For a given state-space model, another realization can be obtained by applying what is called a similarity transformation. A new state $z$ is defined through
$$ z = Px \tag{1.26} $$
where $P$ is an invertible matrix. Then
$$ \dot{z} = P\dot{x} = PAx + PBu = PAP^{-1}z + PBu $$
$$ y = Cx + Du = CP^{-1}z + Du \tag{1.27} $$
Therefore, $z$ is the state of a model with matrices
$$ \bar{A} = PAP^{-1}, \quad \bar{B} = PB, \quad \bar{C} = CP^{-1}, \quad \bar{D} = D \tag{1.28} $$
and with an identical transfer function.
In the case of the DC motor, we have that
$$ y = x_1 \quad \text{and} \quad y = \frac{1}{L}z_2 \tag{1.29} $$
Therefore $z_2 = Lx_1$. Further,
$$ \dot{z}_1 = z_2 = Lx_1 = \frac{LJ}{K}\,\dot{x}_2 \tag{1.30} $$
so that $z_1 = (LJ/K)x_2$. The two state-space models are related through the similarity transformation
$$ z = Px = \begin{bmatrix} 0 & LJ/K \\ L & 0 \end{bmatrix} x \tag{1.31} $$
In this case, the transformation is a simple swapping and rescaling of the states. However, the set of all invertible matrices defines an infinite number of equivalent state-space models.
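The transformation (1.31) can be verified numerically; a sketch, with the same assumed parameters and matrices as before:

    P = [0 L*J/K; L 0];
    Abar = P*A/P       % P*A*inv(P): matches the canonical A of (1.25)
    Bbar = P*B         % matches [0; 1]
    Cbar = C/P         % C*inv(P): matches [0 1/L]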
The existence of the canonical realization raises an interesting question. Suppose that, starting from a specific state-space model, the denominator degree of $H(s)$ was lower than the dimension of the state-space. Then, a controllable canonical realization could be found with lower dimension. A state-space realization is called minimal if the dimension of the state is the smallest dimension for which a realization is possible. If there are no pole/zero cancellations between $N(s)$ and $D(s)$, the dimension of a minimal state-space realization is the degree of $D(s)$.
1.1.5 Time-domain solution of the state-space equations
The general solution of the state-space equations can be described analytically by defining the exponential of a matrix $M$ through the power series
$$ e^M = I + M + \frac{M^2}{2!} + \frac{M^3}{3!} + \cdots \tag{1.32} $$
The exponential matrix $e^M$ is a matrix whose dimension is the same as the dimension of $M$. In the context of state-space models, we let $M = At$, so that the series implies that
$$ \frac{d}{dt}e^{At} = Ae^{At} \tag{1.33} $$
with
$$ e^{At} = I \quad \text{for } t = 0 \tag{1.34} $$
In fact, an alternative definition of the matrix $e^{At}$ is that it is the unique solution of the linear differential equation (1.33) with initial condition (1.34). It can be checked (simply by plugging the expressions below into the state-space model) that the solution of the state-space model can be written as
$$ x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau $$
$$ y(t) = Ce^{At}x(0) + \int_0^t Ce^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t) \tag{1.35} $$
Considering the Laplace transform of the exponential matrix, $\mathcal{L}(e^{At})$, one has
$$ s\mathcal{L}(e^{At}) - I = A\,\mathcal{L}(e^{At}) \tag{1.36} $$
Therefore,
$$ \mathcal{L}(e^{At}) = (sI - A)^{-1} \tag{1.37} $$
From these basic properties, one can deduce that:
the transfer function $H(s) = C(sI-A)^{-1}B + D$ is the Laplace transform of the impulse response
$$ h(t) = Ce^{At}B + D\delta(t) \tag{1.38} $$
the elements of the matrix $e^{At}$ are linear combinations of the functions $t^k e^{a_i t}\cos(b_i t)$ and $t^k e^{a_i t}\sin(b_i t)$, where $\lambda_i(A) = a_i + jb_i$ are the eigenvalues of the matrix $A$ and $k$ can take values from 0 to $m-1$, where $m$ is the multiplicity of the $i$-th eigenvalue. This property follows from the fact that $e^{At}$ is the inverse Laplace transform of a matrix whose elements are rational functions of $s$ with denominators equal to $\det(sI-A)$.
the matrix $e^{At}$ is invertible for all $t$. This property follows from $(e^{At})^{-1} = e^{-At} = \mathcal{L}^{-1}((sI+A)^{-1})$, where $\mathcal{L}^{-1}$ denotes the inverse Laplace transform. The inverse transform exists for all rational functions of $s$, so that the inverse of $e^{At}$ must exist.
$(e^{At})^T = e^{A^T t}$.
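These properties are easy to check numerically with Matlab's expm; a sketch using the motor $A$ matrix from the earlier example and an arbitrarily chosen time instant:

    t = 0.02;                       % arbitrary time instant
    E = expm(A*t);                  % matrix exponential
    norm(inv(E) - expm(-A*t))       % (e^{At})^{-1} = e^{-At}: approximately 0
    norm(E.' - expm(A.'*t))         % (e^{At})^T = e^{A^T t}: approximately 0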
1.2 Observability, controllability, and minimality
1.2.1 Observability
By definition, a state-space model is said to be observable if the state $x(0)$ can be determined from measurements of $y(t)$ over some time interval. If the input is zero, the problem consists in determining the state $x(0)$ from measurements of $y(t)$, with
$$ y(t) = Ce^{At}x(0) \tag{1.39} $$
Earlier, we found that this problem could be solved using the least-squares algorithm
$$ x(0) = \left[ \int_0^t e^{A^T\tau} C^T C e^{A\tau}\,d\tau \right]^{-1} \left[ \int_0^t e^{A^T\tau} C^T y(\tau)\,d\tau \right] \tag{1.40} $$
The estimate exists if and only if
$$ W_o(t) = \int_0^t e^{A^T\tau} C^T C e^{A\tau}\,d\tau > 0 \tag{1.41} $$
for some $t > 0$ (i.e., $W_o(t)$ is nonsingular for some $t > 0$). The matrix $W_o(t)$ is known as the observability gramian.
An alternative technique to obtain $x(0)$ consists in reconstructing the derivatives of $y(t)$, so that
$$ y(t) = Ce^{At}x(0), \quad \dot{y}(t) = CAe^{At}x(0), \quad \ldots \tag{1.42} $$
In the case of a single-output system, the initial state can then be computed using
$$ x(0) = \left( e^{At} \right)^{-1} \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}^{-1} \begin{bmatrix} y(t) \\ \dot{y}(t) \\ \vdots \\ y^{(n-1)}(t) \end{bmatrix} \tag{1.43} $$
where $y^{(n-1)}(t)$ refers to the $(n-1)$-th derivative of $y$. The estimate exists if and only if the observability matrix
$$ \mathcal{O} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} \tag{1.44} $$
is nonsingular. It turns out that the rows $CA^n$, $CA^{n+1}$, ... are linearly dependent on the first $n$ rows, so that adding more rows would not help to determine $x(0)$. Further,
$$ W_o(t) > 0 \iff \det(\mathcal{O}) \neq 0 \tag{1.45} $$
so that both conditions are equivalent conditions for observability.
If the control input is not equal to zero, the same techniques can be used to obtain $x(0)$ by replacing $y(t)$ by $y(t) - \int_0^t Ce^{A(t-\tau)}Bu(\tau)\,d\tau - Du(t)$. Further, if the initial state can be determined exactly, the state at any time $t > 0$ can be computed as well, using the general solution of the state-space model.
Example: for the DC motor with current measurement,
$$ \mathcal{O} = \begin{bmatrix} 1 & 0 \\ -R/L & -K/L \end{bmatrix} \tag{1.46} $$
Therefore, the system is observable (unless $K = 0$, which is not realistic). The result may seem counterintuitive since, in steady-state, the current is zero for any velocity. However, it is possible to observe the speed through the first equation of the system and the knowledge of the voltage that is applied.
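A sketch of the corresponding check in Matlab, where obsv builds the observability matrix (1.44) for the motor matrices defined earlier:

    O = obsv(A,C);     % [C; C*A] for this second-order model
    rank(O)            % = 2, so the model is observable (as long as K ~= 0)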
1.2.2 Controllability
By definition, a state-space system is said to be controllable if there exists a control input $u(t)$ that drives the system from an arbitrary state $x(0)$ to $x(t_1) = 0$ at some time $t_1$. This problem can be solved as follows. Let
$$ u(t) = B^T e^{A^T(t_1 - t)}\,v \tag{1.47} $$
where $v$ is a vector to be determined. The state at time $t_1$ is then given by
$$ x(t_1) = e^{At_1}x(0) + \int_0^{t_1} e^{A(t_1-\tau)} B B^T e^{A^T(t_1-\tau)}\,d\tau \; v \tag{1.48} $$
Thus, it is possible to make $x(t_1) = 0$ by choosing
$$ v = -\left[ \int_0^{t_1} e^{A(t_1-\tau)} B B^T e^{A^T(t_1-\tau)}\,d\tau \right]^{-1} e^{At_1} x(0) \tag{1.49} $$
Defining the controllability gramian $W_c(t)$
$$ W_c(t) = \int_0^t e^{A\tau} B B^T e^{A^T\tau}\,d\tau = \int_0^t e^{A(t-\tau)} B B^T e^{A^T(t-\tau)}\,d\tau \tag{1.50} $$
one finds that the solution exists if and only if $W_c(t) > 0$ for some $t > 0$ (i.e., $W_c(t)$ is nonsingular for some $t > 0$). It turns out that this input minimizes $\int_0^{t_1} |u(\tau)|^2\,d\tau$, so that there is some similarity between this approach and the least-squares estimate used for observability.
As for observability, it turns out that the controllability condition is equivalent to a simpler condition, namely the condition that the controllability matrix
$$ \mathcal{C} = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \tag{1.51} $$
is nonsingular.
Example: consider the controllable canonical form (1.19), whose controllability matrix can be computed to be
$$ \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \\ \vdots & \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 & \times \\ 0 & 1 & \cdots & -b_n & \times \\ 1 & -b_n & \cdots & \times & \times \end{bmatrix} \tag{1.52} $$
where "$\times$" denotes elements that are functions of $b_1, b_2, \ldots, b_n$. Because reversing the order of the columns yields a lower triangular matrix with 1's on the diagonal, the determinant is equal to $\pm 1$, and the matrix is nonsingular. Therefore, the controllable canonical form is always controllable, as the name suggests. It can be shown that it is not always observable, but rather observable if and only if there are no pole/zero cancellations between the numerator and denominator polynomials of $\bar{N}(s)/D(s)$.
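A sketch of this check in Matlab for a third-order canonical form with assumed coefficients, where ctrb builds the controllability matrix (1.51):

    b = [2 3 5];                 % assumed [b1 b2 b3], for illustration only
    Acc = [0 1 0; 0 0 1; -b];    % last row is -b1, -b2, -b3
    Bcc = [0; 0; 1];
    rank(ctrb(Acc,Bcc))          % = 3 regardless of the choice of b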
1.2.3 Unobservable and uncontrollable modes
So far, our examples of systems have been observable and controllable. However, examples of unobservable and uncontrollable systems can easily be found by setting up pole/zero cancellations. For example, consider Fig. 1.3, where a pole at $s = -2$ is cancelled by a zero at the same location, so that the transfer function of the system is simply
$$ H(s) = \frac{1}{s+1} \tag{1.53} $$
Using the controllable canonical realization, state-space realizations for the two systems of
Fig. 1.3 are (in the order in which they appear),
$$ \dot{x}_1 = -x_1 + u_1, \qquad y_1 = x_1 + u_1 \tag{1.54} $$
Figure 1.3: Example of pole/zero cancellation: the input $u$ drives $(s+2)/(s+1)$, whose output drives $1/(s+2)$ to produce $y$
and
$$ \dot{x}_2 = -2x_2 + u_2, \qquad y_2 = x_2 \tag{1.55} $$
With $u = u_1$, $u_2 = y_1$, and $y = y_2$, an overall state-space realization is found to be
$$ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 1 & -2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \tag{1.56} $$
One has
$$ \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \tag{1.57} $$
so that the system is not controllable. On the other hand,
$$ \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & -2 \end{bmatrix} \tag{1.58} $$
so that the system is observable. Intuitively, the system is not controllable because the input is blocked by the zero at the same location from reaching the pole at $s = -2$.
Interestingly, if the order of the transfer functions is reversed, the system becomes controllable, but not observable. In that case, the zero in the transfer function prevents the pole at $s = -2$ from being observed at the output.
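A sketch verifying these conclusions for the realization (1.56):

    Ao = [-1 0; 1 -2];  Bo = [1; 1];  Co = [0 1];
    rank(ctrb(Ao,Bo))   % = 1: not controllable
    rank(obsv(Ao,Co))   % = 2: observable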
Other examples of unobservable and uncontrollable systems can be obtained with less
obvious pole/zero cancellations. For example, consider Fig. 1.4.
Using controllable canonical realizations, a state-space realization for Fig. 1.4 is
$$ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ -2 & -3 & 0 \\ 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{1.59} $$
Figure 1.4: Two parallel branches driven by the same input, $1/(s+2)$ in series with $1/(s+1)$ in one branch and $1/(s+1)$ in the other, with the outputs summed
Although there is no immediately visible pole/zero cancellation, the transfer function of the system
$$ H(s) = \frac{s+3}{(s+1)(s+2)} \tag{1.60} $$
is only of order 2, while the state-space has dimension 3. The two poles at $s = -1$ have been merged into a single pole. The controllability matrix
$$ \mathcal{C} = \begin{bmatrix} 0 & 1 & -3 \\ 1 & -3 & 7 \\ 1 & -1 & 1 \end{bmatrix} \tag{1.61} $$
is singular, so that the system is not controllable. An interpretation is that, on Fig. 1.4, the two subsystems with poles at $s = -1$ cannot be controlled separately, because their inputs are the same. In practice, identical components are not likely to occur, but models of systems and control systems are more likely to have this property.
The converse of the loss of controllability, a loss of observability, occurs when the output is a linear combination of the outputs of identical systems. Such an example can be created by swapping the two transfer functions in the top branch of Fig. 1.4. However, the second example is essentially the same as the first. It turns out that the system of Fig. 1.4 is both uncontrollable and unobservable.
Note that because the loss of controllability or observability is associated with a pole/zero cancellation, it can also be associated with specific poles of the system or, more precisely, with specific eigenvalues of the $A$ matrix. This concept is made clearer in the following section.
1.2.4 Popov-Belevitch-Hautus (PBH) test
It turns out that the lack of observability or controllability can be tied to specific eigenvalues of the matrix $A$. Specifically, consider the following fact.
Fact (PBH test): a system is observable if and only if
$$ \begin{bmatrix} C \\ sI - A \end{bmatrix} \ \text{has rank } n \ \text{for all } s = \lambda_i(A) \tag{1.62} $$
where $n = \dim(A)$ and $\lambda_i(A)$ denotes the eigenvalues of the matrix $A$. Similarly, a system is controllable if and only if
$$ \begin{bmatrix} sI - A & B \end{bmatrix} \ \text{has rank } n \ \text{for all } s = \lambda_i(A) \tag{1.63} $$
In other words, the columns of the matrix in (1.62) must be linearly independent, and the rows of the matrix in (1.63) must be linearly independent. Equivalently, one must be able to find $n$ linearly independent rows in the matrix of (1.62), and $n$ linearly independent columns in the matrix of (1.63).
Note that the linear independence is to be evaluated in the complex space. Specifically, one must have that, for all column vectors $v \in \mathbb{C}^n$ with $v \neq 0$,
$$ Cv \neq 0 \quad \text{or} \quad (sI - A)v \neq 0 \tag{1.64} $$
or, for controllability,
$$ v^T B \neq 0 \quad \text{or} \quad v^T(sI - A) \neq 0 \tag{1.65} $$
An eigenvalue of $A$ such that the observability test fails is called an unobservable mode. Uncontrollable modes correspond to eigenvalues where the controllability test fails.
Example: for the system of Fig. 1.3, realized as in (1.56),
$$ \begin{bmatrix} sI - A & B \end{bmatrix} = \begin{bmatrix} s+1 & 0 & 1 \\ -1 & s+2 & 1 \end{bmatrix} \tag{1.66} $$
and the PBH controllability test gives two conditions
$$ s = -1: \ \begin{bmatrix} 0 & 0 & 1 \\ -1 & 1 & 1 \end{bmatrix}, \qquad s = -2: \ \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \tag{1.67} $$
The first matrix has rank 2, but the rows of the second matrix are equal. This allows us to determine not only that the system is not controllable, but also that the pole at $s = -2$ is the uncontrollable mode.
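A sketch of the PBH test in Matlab for the realization (1.56), reusing the matrices Ao and Bo defined in the earlier sketch:

    for s = eig(Ao).'                 % s = -1 and s = -2
        rank([s*eye(2) - Ao, Bo])     % 2 at s = -1; 1 at s = -2 (uncontrollable mode)
    end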
1.2.5 Kalman decomposition theorem and minimal realizations
Kalman's decomposition theorem states that any system is equivalent, through a similarity transformation, to a system of the form
$$ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} A_{11} & 0 & A_{13} & 0 \\ A_{21} & A_{22} & A_{23} & A_{24} \\ 0 & 0 & A_{33} & 0 \\ 0 & 0 & A_{43} & A_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \\ 0 \\ 0 \end{bmatrix} u $$
$$ y = \begin{bmatrix} C_1 & 0 & C_3 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + Du \tag{1.68} $$
where $(A_{11}, B_1)$ and $(A_{22}, B_2)$ are controllable, and $(A_{11}, C_1)$ and $(A_{33}, C_3)$ are observable.
The significance of the theorem can be appreciated by considering Fig. 1.5, which shows the connections between the different subsystems. In particular:
$H_1$ is both controllable and observable,
$H_2$ is controllable but not observable,
$H_3$ is observable but not controllable,
$H_4$ is neither observable nor controllable.
Figure 1.5: Structure of the system in the Kalman decomposition theorem
As can be seen from Fig. 1.5, only the controllable and observable subsystem contributes to the transfer function, which is equal to
$$ H(s) = C_1(sI - A_{11})^{-1}B_1 + D \tag{1.69} $$
One can conclude that a state-space realization $A$, $B$, $C$, $D$ is minimal if and only if it is observable and controllable.
The subject of model reduction considers algorithms that can be applied to the $A$, $B$, and $C$ matrices to directly derive a minimal state-space realization. In practice, reduction may also be achieved by eliminating weakly observable and weakly controllable modes, which correspond to near pole/zero cancellations.
Although it is desirable to reduce the size of state-space models, there are situations in control systems where non-minimal realizations present advantages. As for pole/zero cancellations, non-minimal realizations only pose problems when the unobservable and uncontrollable modes are unstable, or poorly damped.
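A sketch using minreal, which removes the unobservable and uncontrollable parts of a realization, applied to the system (1.59) of Fig. 1.4:

    A4 = [0 1 0; -2 -3 0; 0 0 -1];  B4 = [0; 1; 1];  C4 = [1 0 1];
    sysm = minreal(ss(A4,B4,C4,0));    % reports that one state is removed
    size(sysm.A)                       % 2x2: minimal order equals deg D(s) = 2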
1.3 Pole placement and state observers
1.3.1 State feedback and pole placement
Consider a state-space model with
$$ \dot{x} = Ax + Bu, \qquad u = -Gx \tag{1.70} $$
where $G$ is a row vector (assuming a single-input, single-output system). Note that the state is fed back to the input, hence the name of state feedback. The closed-loop system is described by
$$ \dot{x} = (A - BG)x \tag{1.71} $$
and the closed-loop poles are determined by the eigenvalues of
$$ A_{CL} = A - BG \tag{1.72} $$
It turns out that the eigenvalues of $A_{CL}$ can be placed at arbitrary locations if and only if $(A, B)$ is controllable. Specifically, Ackermann's formula states that, for arbitrary parameters $a_{d,1}, a_{d,2}, \ldots, a_{d,n}$, one has that
$$ \det(sI - A_{CL}) = s^n + a_{d,n}s^{n-1} + \cdots + a_{d,2}s + a_{d,1} \tag{1.73} $$
if
$$ G = \begin{bmatrix} 0 & 0 & \cdots & 0 & 1 \end{bmatrix} \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}^{-1} \left( A^n + a_{d,n}A^{n-1} + \cdots + a_{d,1}I \right) \tag{1.74} $$
Note that the formula makes the need for controllability clear, given the inverse of the controllability matrix.
Example: consider the controllable canonical realization for a second-order system
$$ A = \begin{bmatrix} 0 & 1 \\ -a_1 & -a_2 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{1.75} $$
With a gain matrix $G = \begin{bmatrix} g_1 & g_2 \end{bmatrix}$, the matrix $A_{CL}$ retains the same structure as $A$, with
$$ A_{CL} = \begin{bmatrix} 0 & 1 \\ -a_1 - g_1 & -a_2 - g_2 \end{bmatrix} \tag{1.76} $$
Thus, the poles can be placed at arbitrary locations by using
$$ g_1 = a_{d,1} - a_1, \qquad g_2 = a_{d,2} - a_2 \tag{1.77} $$
Ackermann's formula gives the same result. Indeed,
$$ A^2 + a_{d,2}A + a_{d,1}I = \begin{bmatrix} a_{d,1} - a_1 & a_{d,2} - a_2 \\ -a_1(a_{d,2} - a_2) & a_{d,1} - a_1 - a_2(a_{d,2} - a_2) \end{bmatrix} $$
$$ \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & -a_2 \end{bmatrix}, \qquad \begin{bmatrix} B & AB \end{bmatrix}^{-1} = \begin{bmatrix} a_2 & 1 \\ 1 & 0 \end{bmatrix} \tag{1.78} $$
After multiplication of the terms, one obtains the same expressions for the gains $g_1$ and $g_2$.
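In Matlab, Ackermann's formula is implemented by acker (place is a numerically preferable alternative); a sketch for the motor model of the earlier examples, with arbitrarily chosen closed-loop poles:

    pdes = [-50 -60];        % desired closed-loop poles (assumed values)
    G = acker(A,B,pdes);
    eig(A - B*G)             % should return pdes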
The technique described here is called pole placement. An alternative to pole placement is
linear quadratic regulator (LQR) theory, which does not place the poles at arbitrary locations,
but guarantees stable closed-loop poles with more desirable robustness properties.
1.3.2 State observers
Considering the system
$$ \dot{x} = Ax + Bu, \qquad y = Cx + Du \tag{1.79} $$
a state observer is described by the equations
$$ \dot{\hat{x}} = A\hat{x} + Bu + L(y - \hat{y}), \qquad \hat{y} = C\hat{x} + Du \tag{1.80} $$
where $\hat{x}$ is an estimate of the state $x$, and $L$ is a column vector of parameters to be determined. The error between the state vector and the estimate is defined to be
$$ e = x - \hat{x} \tag{1.81} $$
and satisfies
$$ \dot{e} = (A - LC)e \tag{1.82} $$
It follows that $\hat{x} \to x$ exponentially as $t \to \infty$ for any initial estimate $\hat{x}(0)$ if and only if the eigenvalues of $A - LC$ have negative real parts.
A fundamental fact is that the observer poles (i.e., the eigenvalues of $A - LC$) can be placed at arbitrary locations if and only if $(A, C)$ is observable. This fact can be derived from Ackermann's formula. Indeed, use the formula to place the poles of $A' - B'G$ where $A' = A^T$ and $B' = C^T$. Then, letting $L = G^T$ results in arbitrary placement of the eigenvalues of $(A^T - C^T L^T)$. Since the eigenvalues of $(A - LC)^T$ are the same as the eigenvalues of $A - LC$, the desired result is achieved. The invertibility of the controllability matrix needed in Ackermann's formula becomes the invertibility of the observability matrix for the state observer.
Example: consider a state observer for the estimation of the state of the DC motor through the current
$$ \frac{d}{dt}\begin{bmatrix} \hat{i} \\ \hat{\omega} \end{bmatrix} = \begin{bmatrix} -R/L & -K/L \\ K/J & 0 \end{bmatrix}\begin{bmatrix} \hat{i} \\ \hat{\omega} \end{bmatrix} + \begin{bmatrix} 1/L \\ 0 \end{bmatrix} v + \begin{bmatrix} l_1 \\ l_2 \end{bmatrix}(i - \hat{i}) \tag{1.83} $$
The closed-loop matrix is
$$ A - LC = \begin{bmatrix} -R/L - l_1 & -K/L \\ K/J - l_2 & 0 \end{bmatrix} \tag{1.84} $$
First, note that since $A$ is stable, one can choose $L = 0$, which yields
$$ \hat{\omega} = \frac{K/LJ}{s^2 + (R/L)s + K^2/LJ}\,[v] \tag{1.85} $$
Such an observer is called an open-loop observer. It simply duplicates the dynamics of the open-loop system to obtain an estimate of the speed.
In contrast, a closed-loop observer uses measurements of the current $i$ to obtain a better estimate of the speed. In the Laplace domain,
$$ \begin{bmatrix} \hat{i} \\ \hat{\omega} \end{bmatrix} = \begin{bmatrix} s + R/L + l_1 & K/L \\ -K/J + l_2 & s \end{bmatrix}^{-1} \left( \begin{bmatrix} 1/L \\ 0 \end{bmatrix} v + \begin{bmatrix} l_1 \\ l_2 \end{bmatrix} i \right) \tag{1.86} $$
which gives
$$ \hat{\omega} = \frac{K l_1/J + (s + R/L)l_2}{s^2 + (R/L + l_1)s + (K/L)(K/J - l_2)}\,[i] + \frac{(1/L)(K/J - l_2)}{s^2 + (R/L + l_1)s + (K/L)(K/J - l_2)}\,[v] \tag{1.87} $$
Although Ackermann's formula can be used to place the poles of the observer, the denominators in the above expression show how this can be achieved. Note that the speed estimate is now a blend of the current and voltage signals, appropriately filtered. An advantage of the state-space approach is not only that it gives us an answer on how to blend these signals, but also how to do so in more complicated situations. An alternative to pole placement is the Kalman filter, which does not place poles, but guarantees stability and provides an optimal state estimate under certain assumptions on the noise affecting the system.
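A sketch of the dual design in Matlab for the motor model, with arbitrarily chosen observer poles (the gain is called Lg here to avoid a clash with the inductance L):

    pobs = [-200 -250];               % assumed observer poles
    Lg = acker(A.', C.', pobs).';     % L = G^T, with A' = A^T and B' = C^T
    eig(A - Lg*C)                     % should return pobs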
1.3.3 Combined state feedback and state observer
The estimate of a state observer can be used in a state feedback control law to obtain the combined control law
$$ \dot{\hat{x}} = A\hat{x} + Bu + L(y - \hat{y}), \qquad \hat{y} = C\hat{x} + Du, \qquad u = -G\hat{x} \tag{1.88} $$
The control law is also described by
$$ \dot{\hat{x}} = (A - LC - BG + LDG)\hat{x} + Ly, \qquad u = -G\hat{x} \tag{1.89} $$
Therefore, the transfer function of the compensator $u = C(s)[y]$ is given by
$$ C(s) = -G(sI - A + LC + BG - LDG)^{-1}L \tag{1.90} $$
A remarkable result is that the poles of the closed-loop system are the union of the poles that would be obtained using pure state feedback and the poles of the observer. Thus, by combining a state observer with state feedback, arbitrary locations can be obtained for the closed-loop poles of any system.
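A sketch checking this result for the motor, with the gains G and Lg computed in the sketches above (here D = 0, so the LDG term vanishes):

    Ccomp = ss(A - Lg*C - B*G, Lg, -G, 0);    % compensator (1.90)
    Acl = [A, -B*G; Lg*C, A - Lg*C - B*G];    % plant with observer-based feedback
    eig(Acl)                                  % union of eig(A-B*G) and eig(A-Lg*C)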
1.4 Problems
Problem 3.3: Show that a different state-space realization can be obtained from the controllable canonical realization by noting that $H^T(s) = H(s)$ for a single-input single-output system. Show that the alternate realization is always observable.
Problem 3.4: Show that the state-space realization that is obtained after swapping the two transfer functions in Fig. 1.3 is controllable but not observable. First use the controllability and observability matrices, then the PBH test to show that the unobservable mode is at $s = -2$.
Problem 3.5:
(a) Using a root-locus argument, show that a system
$$ P(s) = \frac{s - 1}{s^2 - 4} \tag{1.91} $$
cannot be stabilized unless the feedback controller has an unstable pole. Where must the pole be located?
(b) In Matlab, set the matrices of a state-space model of $P(s)$ and, using Ackermann's formula, compute the gain vector $G$ that places all the closed-loop poles of a state feedback system at $s = -1$. Verify that the eigenvalues of $A - BG$ are indeed at $s = -1$.
(c) Similarly, compute the gain vector $L$ that places all the poles of an observer at $s = -1$. Verify that the eigenvalues of $A - LC$ are indeed at $s = -1$.
(d) Compute the eigenvalues of the state-space model that represents the control system combining state feedback and the state observer. Compare the results with the conclusion of part (a).
(e) Have Matlab compute the polynomials of the compensator $C(s) = n_C(s)/d_C(s)$.
(f) Letting $P(s) = n_P(s)/d_P(s)$, compute the poles of the closed-loop system.
(g) Have Matlab sketch the root-locus of the system, i.e., the roots of $d_P(s)d_C(s) + k\,n_P(s)n_C(s) = 0$ for $k \geq 0$.
Hint: in Matlab, the following functions may be useful:
eig(a) computes the eigenvalues of a matrix a;
sys=ss(a,b,c,d) creates a Matlab object associated with a state-space model described by a, b, c, d;
[num,den]=tfdata(sys,'v') returns the numerator and denominator polynomials of the transfer function associated with the system sys, where the polynomials are represented as vectors;
conv(nump,numc) computes the product of the polynomials nump and numc;
roots(den) computes the roots of the polynomial den;
rlocus(num,den) plots the root-locus associated with the polynomials num and den.
