email: glp@eas.uccs.edu
Grading:
    90-100   A
    80-89    B
    70-79    C
    60-69    D
    0-59     F
Topics, text readings, and estimated weeks:

1. Fundamentals of feedback control.          Ch. 2       0.5
2. Linear algebra (matrix) review.                        1.5
3. Continuous-time state-space systems.       Ch. 3       2.5
4. Discrete-time state-space systems.         Ch. 4       1.0 ... (Exam I)
5. Observability and controllability.         Ch. 4       2.0
6. Controllers, observers and compensators.   Ch. 6       4.0 ... (Exam II)
7. Linear quadratic regulation.               Chs. 8-9    2.0
8. Review of multivariable control.                       0.5
Work Load: This is an aggressive course requiring weekly homework assignments. Expect to spend six to nine hours
per week outside of class reading the textbook and completing homework assignments. This is in accord with UCCS
policy relating credit hours for a lecture course to student workload. Some students will find that more time is required,
while others will find that less time is required.
Homework will be collected at the beginning of class on the assigned date. Homework
turned in after the class period will be penalized 10%. Homework turned in after the due date will be penalized an additional 25% per day unless previous arrangements have been made with the instructor. Examinations will be based on the
homework problems and the material covered in class. It is to your advantage to understand the fundamental concepts
that are demonstrated in the homework problems. It will be difficult to earn higher than a C without performing well
on the homework assignments.
Part of your engineering education involves learning how to communicate technical information to others. Basic standards of neatness and clarity are essential to this process of communication. Your process of
solving a problem must be presented in a logical sequence. Consider your assignments to represent your performance
as an engineer. Do not submit scrap paper, and do not submit paper containing scratched out notes. Graphs are to be
titled and axes are to be labeled (with correct units). The above standards of clarity and neatness also apply to your work
on exams.
Attendance:
Attendance is your responsibility. Class lectures will cover a significant amount of material. Some
will not be in the text or may be explained differently. It is to your advantage to take notes, ask questions, and to fully
participate in the classroom experience.
Missed Exams: Missed exams will count as ZERO without a physician's documentation of an illness, or other appropriate documentation of an emergency beyond your control and requiring your absence.
Points will be deducted for failure to comply with the following rules:
The Course Reader: These notes have been entered using LyX, and typeset with LaTeX2e on a Pentium-II class
computer running the Linux operating system. All diagrams have been created using either xfig or Matlab.
Some sections of these notes have been adapted from lectures given by Drs. Theresa Meng, Jonathan How and Stephen
Boyd at Stanford University.
Dynamic Response
We wish to control linear time-invariant (LTI) systems.
Their dynamics may be specified via linear, constant-coefficient
ordinary differential equations (LCCODEs).
Examples include:
Mechanical systems: use Newton's laws.
Electrical systems: use Kirchhoff's laws.
Electro-mechanical systems (generator/motor).
Thermodynamic systems.
Fluid-dynamic systems.
Second-order system:
s^2 Y(s) + 2 ζ ω_n s Y(s) + ω_n^2 Y(s) = ω_n^2 U(s)
Y(s) = ω_n^2 / (s^2 + 2 ζ ω_n s + ω_n^2) U(s).
A transfer function may also be specified by its zeros z_i, poles p_i and gain k:
H(s) = k ∏ (s − z_i) / ∏_{i=1}^{p} (s − p_i)
sys=zpk(z,p,k);
% in Matlab
Standard test inputs and their Laplace transforms:
impulse:      u(t) = k δ(t)             . . . U(s) = k
step:         u(t) = k 1(t)             . . . U(s) = k/s
ramp:         u(t) = k t 1(t)           . . . U(s) = k/s^2
exponential:  u(t) = k e^{−σt} 1(t)     . . . U(s) = k/(s + σ)
sinusoid:     u(t) = k sin(ωt) 1(t)     . . . U(s) = kω/(s^2 + ω^2)
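The non-impulsive entries of this transform table can be checked symbolically. A minimal sketch (the notes use Matlab; sympy is used here instead):

```python
import sympy as sp

t, s = sp.symbols("t s", positive=True)
k, sigma, omega = sp.symbols("k sigma omega", positive=True)

def L(f):
    # one-sided Laplace transform of f(t), convergence conditions suppressed
    return sp.laplace_transform(f, t, s, noconds=True)

step = L(k)                       # step: k/s
ramp = L(k * t)                   # ramp: k/s^2
expo = L(k * sp.exp(-sigma * t))  # exponential: k/(s + sigma)
sine = L(k * sp.sin(omega * t))   # sinusoid: k*omega/(s^2 + omega^2)
print(step, ramp, expo, sine)
```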
Block Diagrams
Useful when analyzing systems comprised of a number of sub-units.
[Figures: series, parallel, and feedback interconnections of blocks H_1(s) and H_2(s).]
Series:   Y(s) = H_2(s) H_1(s) U(s).
Parallel: Y(s) = [H_1(s) + H_2(s)] U(s).
Feedback:
Y(s) = H_1(s) / (1 + H_2(s) H_1(s)) R(s).
[Figures: block-diagram algebra: moving pick-off points and summing junctions across a block H(s), inserting H(s) or 1/H(s) as needed; reduction of a feedback loop to unity-feedback form.]
First-order system response. K = dc gain, τ = time constant:
h(t) = (K/τ) e^{−t/τ};   step response y(t) = K (1 − e^{−t/τ}).
[Figures: first-order impulse response h(t) and step response y(t) versus time, with t = τ marked.]
Second-order system step response (0 ≤ ζ < 1):
y(t) = 1 − e^{−σt} ( cos(ω_d t) + (σ/ω_d) sin(ω_d t) ),
where σ = ζ ω_n and ω_d = ω_n √(1 − ζ^2).
[Figures: impulse and step responses of second-order systems plotted versus ω_n t, for damping ratios ζ = 0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0.]
Time-domain specifications
[Figure: step response showing rise time t_r (from 10% to 90%), settling time t_s, peak time t_p, and overshoot M_p.]
[Figure: percent overshoot M_p versus damping ratio ζ.]
Overshoot M_p = maximum PERCENT overshoot. Useful design formulas:
t_r ≈ 1.8/ω_n
t_s ≈ 4.6/σ
M_p ≈ e^{−πζ/√(1−ζ^2)},  0 ≤ ζ < 1.
Inverted, for design:
ω_n ≥ 1.8/t_r
σ ≥ 4.6/t_s
ζ ≥ fn(M_p)   (e.g., ζ = 0.707 gives M_p ≈ 4.3%).
The damping ratio is related to the pole angle θ (measured from the imaginary axis) via θ = sin^{−1}(ζ).
[Figure: unity-feedback loop: r(t) → D(s) → G(s) → y(t).]
Y(s)/R(s) = D(s)G(s) / (1 + D(s)G(s)) = T(s).
Steady-state error constants:
K_p = lim_{s→0} D(s)G(s).
K_v = lim_{s→0} s D(s)G(s).
K_a = lim_{s→0} s^2 D(s)G(s).
Steady-state errors:
System    step input      ramp input    parabola input
Type 0    1/(1 + K_p)     ∞             ∞
Type 1    0               1/K_v         ∞
Type 2    0               0             1/K_a
Some Types of Controllers
Proportional ctrlr: u(t) = K e(t).
Integral ctrlr: u(t) = (K/T_I) ∫_0^t e(τ) dτ.
Lecture notes prepared by Dr. Gregory L. Plett. Copyright © 2001, 2000, Gregory L. Plett.
These have transfer functions D(s) = K and D(s) = K/(T_I s), respectively.
Derivative ctrlr: u(t) = K T_D ė(t);  D(s) = K T_D s.
Combinations:
PI:  D(s) = K (1 + 1/(T_I s));
PD:  D(s) = K (1 + T_D s);
PID: D(s) = K (1 + 1/(T_I s) + T_D s).
Lead:     D(s) = K (Ts + 1)/(αTs + 1),  α < 1   (approx PD).
Lag:      D(s) = K (Ts + 1)/(αTs + 1),  α > 1   (approx PI; often, K = α).
Lead/Lag: D(s) = K (T_1 s + 1)(T_2 s + 1) / ((α_1 T_1 s + 1)(α_2 T_2 s + 1)),  α_1 < 1, α_2 > 1.
Root Locus
A root locus plot shows (parametrically) the possible locations of the
roots of the equation
1 + K b(s)/a(s) = 0.
For a unity-gain feedback system,
T(s) = D(s)G(s) / (1 + D(s)G(s)).
Drawing the root locus allows us to select K for good pole locations.
Intuition into the root locus helps us design D(s) with lead/lag/PI/PID... controllers.
The i, j entry is the value in the ith row and the jth column.
The positive integers i and j are called the (row and column,
respectively) indices.
A13 = 2.4, A31 = 0.2. The row index of the bottom row is 3,
the column index of the first column is 1.
EXAMPLE:
v = [0.5; …; 1] is a 3-vector (or 3×1 matrix); its third component is v_3 = 1.
For example,
Greek letters (α, β, ...) might be used for numbers;
lower-case letters (a, x, y, ...) might be used for vectors;
upper-case letters (A, B, ...) for matrices.
Other notational conventions include matrices given in bold font (H),
or vectors written with arrows above them (a⃗).
But, there are about as many notational conventions as authors!
Be prepared to figure out what things are (i.e., scalars, vectors,
matrices) despite the author's notational scheme (if any exists!).
Zero Matrices
The zero matrix (of size m×n) has all entries equal to zero.
Sometimes written as 0_{m×n}, where the subscript denotes the size.
Often just written as 0, the same symbol used to denote the number 0.
You need to figure out the size of the zero matrix from the context.
Zero matrices of different sizes are different matrices, even though we
use the same symbol (i.e., 0).
In programming, this is called overloading; we say that the symbol 0 is
overloaded because it can mean different things depending on its
context (i.e., the equation it appears in).
When a zero matrix is a (row or column) vector, we call it a zero (row
or column) vector.
Identity Matrices
An identity matrix is another common matrix.
It is always square, i.e., has the same number of rows as columns.
Its diagonal entries, i.e., those with equal row and column indices, are
all equal to 1.
Its off-diagonal entries, i.e., those with unequal row and column
indices, are equal to 0.
Identity matrices are denoted by the letter I. Sometimes a subscript
denotes the size, as in I_3 or maybe I_{2×2}.
More often, size must be determined from context (just like for zero
matrices).
Formally, the identity matrix is defined by
I_{ij} = 1, i = j;  0, i ≠ j.
EXAMPLES:
I_2 = [1 0; 0 1],   I_4 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].
Matrix Operations
Matrices may be combined in various operations to form other
matrices.
Matrix Transpose
If A is an m×n matrix, its transpose, denoted A^T (or sometimes A′),
is the n×m matrix given by
(A^T)_{ij} = A_{ji}.
EXAMPLE:
[1 2 3; 4 5 6; 7 8 9]^T = [1 4 7; 2 5 8; 3 6 9],  and  [1; 2; 3]^T = [1 2 3].
Note that transposition converts row vectors into column vectors, and
vice versa.
Two matrices of the same size can be added together to form another
matrix (of the same size) by adding the corresponding entries.
Matrix addition is denoted by the symbol +. (Thus the symbol + is
overloaded to mean scalar addition when scalars appear on its left- and
right-hand sides, and matrix addition when matrices appear on its
left- and right-hand sides.)
EXAMPLE:
[1 2; 3 4] + [5 6; 7 8] = [6 8; 10 12].
Note that (row or column) vectors of the same size can be added, but
you cannot add together a row vector and a column vector (except if
they are both scalars!).
Matrix subtraction is similar:
EXAMPLE:
[1 2; 3 4] − I = [0 2; 3 3].
Note that this gives an example where we have to figure out what size
the identity matrix is. Since you can only add (or subtract) matrices of
the same size, we conclude that I must refer to a 2×2 identity matrix.
Matrix addition is commutative; i.e., if A and B are matrices of the
same size, then A + B = B + A.
It is also associative; i.e., (A + B) + C = A + (B + C), so we write
both as A + B + C.
We always have A + 0 = 0 + A = A; i.e., adding the zero matrix has
no effect.
Scalar Multiplication
If we multiply a matrix by a scalar, the resulting matrix has every entry
multiplied by the scalar.
EXAMPLE:
(−2) [1 4; 2 5; 3 6] = [−2 −8; −4 −10; −6 −12].
Sometimes you see scalar multiplication with the scalar on the right,
or even scalar division with the scalar shown in the denominator
(which just means scalar multiplication by one over the scalar), as in
[1 4; 2 5; 3 6] · 2 = [2 8; 4 10; 6 12],   [1 3 5; 2 4 6] / 2 = [0.5 1.5 2.5; 1 2 3],
but these look ugly.
You can multiply two matrices A and B provided that their dimensions
are compatible, which means that the number of columns of A equals
the number of rows of B.
EXAMPLE: A_{m×p} B_{p×n} = C_{m×n}. The product is defined by
C_{ij} = Σ_{k=1}^{p} A_{ik} B_{kj} = A_{i1} B_{1j} + ⋯ + A_{ip} B_{pj},
i = 1, ..., m,   j = 1, ..., n.
[Schematic: the (i, j) entry of C = AB is formed by pairing row i of A with column j of B.]
To find the i, jth entry of the product C = AB, you need to know the
ith row of A and the jth column of B.
The summation above can be interpreted as moving left-to-right
along the row i of A while moving top-to-bottom down column j of B.
As you go, keep a running sum of the product of entries: one from A
and one from B.
Now we can explain why I is called the identity matrix: if A is any
m×n matrix, then AI = A and IA = A, i.e., when you multiply a
matrix by an identity matrix, it has no effect. (The identity matrices in
AI = A and IA = A have different sizes; what are they?)
Matrix multiplication is, in general, NOT commutative: BA may
not even make sense (due to dimensions), and even if it does make
sense, it may have a different dimension than AB, so that equality in
AB = BA is meaningless.
EXAMPLE: If A is 2×3 and B is 3×4 then AB makes sense, and is 2×4.
BA does not make sense.
EXAMPLE: Even if both make sense (as when both are square, for
example), AB ≠ BA in general:
[1 2; 3 4][5 6; 7 8] = [19 22; 43 50],   [5 6; 7 8][1 2; 3 4] = [23 34; 31 46].
Inner Product
Another special case is the product of a row vector v with a column
vector w of the same size. Then vw makes sense, and has size 1×1 (i.e., it is a scalar).
Matrix Powers
When a matrix A is square, it makes sense to multiply A by
itself, i.e., to form AA. We call this A^2. Similarly, k copies multiplied
together are A^k.
EXAMPLE: A = [1 1; 2 2],  A^2 = [3 3; 6 6].
Matrices may also be partitioned into blocks, as in [A B; C D], where
the blocks A, B, C and D are themselves matrices with compatible sizes.
Linear Equations
Any set of m linear equations in (scalar) variables x_1, ..., x_n can be
represented by the compact matrix equation Ax = b, where x is a
vector made from the variables, A is an m×n matrix and b is an
m-vector.
EXAMPLE:
1 + x_2 − x_3 = 2x_1,
x_3 = x_2 − 2.
Rewrite the equations with the variables lined up in columns, and the
constants on the right-hand side:
2x_1 − x_2 + x_3 = 1
0x_1 − x_2 + x_3 = −2.
Then A = [2 −1 1; 0 −1 1], x = [x_1; x_2; x_3], b = [1; −2].
From a practical point of view, either you don't have enough equations
or you have the wrong ones. Otherwise, A^{−1} exists, and you can
solve x = A^{−1} b.
Solving Linear Equations in Practice
When we solve linear equations by computer, we don't explicitly form
A^{−1} and multiply x = A^{−1} b, although that would work. Practical
methods solve Ax = b directly.
A may be large, sparse, or poorly conditioned. There exist efficient
methods to handle each case.
In Matlab, x=A\b;
The Determinant Function
Consider the set of equations
a_11 x_1 + a_12 x_2 = b_1
a_21 x_1 + a_22 x_2 = b_2
or
[a_11 a_12; a_21 a_22][x_1; x_2] = [b_1; b_2].
Multiply the first equation by a_22 and the second equation by −a_12.
Add the resulting two equations:
(a_11 a_22 − a_12 a_21) x_1 = a_22 b_1 − a_12 b_2.
Multiply the first equation by −a_21 and the second by a_11. Add the
resulting two equations:
−a_11 a_21 x_1 − a_12 a_21 x_2 = −a_21 b_1
 a_11 a_21 x_1 + a_11 a_22 x_2 =  a_11 b_2
(a_11 a_22 − a_12 a_21) x_2 = a_11 b_2 − a_21 b_1,
where a_11 a_22 − a_12 a_21 = det(A), and so forth. In general,
A^{−1} = adj(A)/det(A),  det(AB) = det(A) det(B) when both are square, and
det(A^T) = det(A).
Null-Space
The nullspace of A ∈ ℝ^{m×n} is defined as
N(A) = { x ∈ ℝ^n : Ax = 0 }.
If y = Ax and z ∈ N(A), then y = A(x + z) as well.
Range-Space
The rangespace of A ∈ ℝ^{m×n} is defined as
R(A) = { Ax : x ∈ ℝ^n }.
Rank
rank(A) = dim(R(A)), and rank(A) + dim(N(A)) = n.
Interpreting y = Ax
Consider the system of linear equations
y_1 = A_11 x_1 + A_12 x_2 + ⋯ + A_1n x_n
y_2 = A_21 x_1 + A_22 x_2 + ⋯ + A_2n x_n
⋮
y_m = A_m1 x_1 + A_m2 x_2 + ⋯ + A_mn x_n
which can be written as y = Ax, where
y = [y_1; …; y_m],  A = [A_11 ⋯ A_1n; ⋮ ⋱ ⋮; A_m1 ⋯ A_mn],  x = [x_1; …; x_n].
Some interpretations of y = Ax:
y = Ax is a linear function mapping x into y.
y_i = Σ_{j=1}^{n} A_{ij} x_j: A_{ij} is the gain factor from the jth input (x_j) to the ith output (y_i).
Thus, the ith row of A concerns the ith output; the jth column of A concerns the jth input.
A_27 = 0 means the 2nd output (y_2) doesn't depend on the 7th input (x_7).
Writing A = [a_1 a_2 ⋯ a_n] in terms of its columns, y = Ax = x_1 a_1 + ⋯ + x_n a_n:
y is a linear combination of the columns of A.
EXAMPLE: with x = (1, 0.5),
y = Ax = (1) a_1 + (0.5) a_2 = (1.5, 1.5).
[Figure: y shown as a linear combination of the columns a_1 and a_2.]
Alternatively, write A in terms of its rows ã_i^T:
A = [ã_1^T; ã_2^T; ⋮],  so that  y = [ã_1^T x; ã_2^T x; ⋮].
Each entry y_i = ã_i^T x is an inner product of x with a row of A; the set
{ x : ã_i^T x = y_i } is a hyperplane in ℝ^n with normal vector ã_i.
[Figures: the lines ⟨a, x⟩ = 1 and ⟨a, x⟩ = 2 with normal vector a; the intersection of the hyperplanes ã_1^T x = y_1 and ã_2^T x = y_2 locating x.]
A third interpretation, when A is diagonalizable:
y = V Λ V^{−1} x
(will show this later on). V is a collection of all the eigenvectors put
into a matrix. V^{−1} decomposes x into the eigenvector coordinates. Λ is a
diagonal matrix multiplying each component of the resulting vector by
the eigenvalue associated with that component, and V puts
everything back together.
If A ∈ ℝ^{n×n} has independent eigenvectors v_1, …, v_n with eigenvalues
λ_1, …, λ_n, then
A [v_1 v_2 … v_n] = [v_1 v_2 … v_n] diag(λ_1, …, λ_n),
i.e., AT = TΛ, so
T^{−1} A T = Λ.
When det(λI − A) has repeated roots, A may not be diagonalizable.
In general there exists an invertible T with
T^{−1} A T = J = diag(J_1, …, J_q),
where each Jordan block
J_i = [λ_i 1 0 ⋯; 0 λ_i ⋱ ⋮; ⋮ ⋱ ⋱ 1; 0 ⋯ 0 λ_i]   is n_i × n_i,
and n = Σ_{i=1}^{q} n_i.
NOTE: dim N(λ_i I − A) = number of Jordan blocks associated with
eigenvalue λ_i.
The sizes of each Jordan block may also be computed, but this is
complicated; leave it to Matlab!
EXAMPLE: Consider a 6×6 matrix A whose characteristic polynomial has
roots {0, 2, 2, 2, 2, 2}. The eigenvalues alone do not fix the Jordan
structure; two possibilities are
J = [0 0 0 0 0 0;
     0 2 1 0 0 0;
     0 0 2 1 0 0;
     0 0 0 2 0 0;
     0 0 0 0 2 1;
     0 0 0 0 0 2]   (blocks of sizes 1, 3, 2),
or
J = [0 0 0 0 0 0;
     0 2 0 0 0 0;
     0 0 2 1 0 0;
     0 0 0 2 1 0;
     0 0 0 0 2 1;
     0 0 0 0 0 2]   (blocks of sizes 1, 1, 4).
Cayley-Hamilton Theorem
The square matrix A satisfies its own characteristic equation. That is,
if
χ(λ) = det(λI − A),
then
χ(A) = 0.
PROOF SKETCH (diagonalizable case): suppose A = V Λ V^{−1}. Then
A^2 = V Λ V^{−1} V Λ V^{−1} = V Λ^2 V^{−1},  and in general  A^k = V Λ^k V^{−1},
so χ(A) = V χ(Λ) V^{−1} = V diag(χ(λ_1), …, χ(λ_n)) V^{−1} = 0.
For a 2×2 Jordan block (taking χ(λ) = λ^3 + a_2 λ^2 + a_1 λ + a_0 as an example),
J_i = [λ_i 1; 0 λ_i],
χ(J_i) = [λ_i^3 3λ_i^2; 0 λ_i^3] + a_2 [λ_i^2 2λ_i; 0 λ_i^2] + a_1 [λ_i 1; 0 λ_i] + a_0 [1 0; 0 1].
The diagonal entries equal χ(λ_i) = 0; the off-diagonal entry is
3λ_i^2 + 2 a_2 λ_i + a_1 = (d/dλ) χ(λ) evaluated at λ_i,
which is also 0, since a repeated eigenvalue is a root of χ'(λ).
This completes the sketch.
SIGNIFICANCE: The Cayley-Hamilton theorem shows us that A^n is a
function of the matrix powers A^{n−1} down to A^0. Therefore, to compute any
polynomial of A it suffices to compute only powers of A up to A^{n−1}
and appropriately weight their sum. A lot of proofs use the
Cayley-Hamilton theorem.
"
#
1 2
EXAMPLE : With A =
we have (s) = s 2 5s 2 so
3 4
but =
(A) = A2 5A 2I
"
#
"
#
"
#
7 10
1 2
1 0
=
5
2
15 22
3 4
0 1
#
"
0 0
=
.
0 0
That is, the Kronecker product is a large matrix containing all possible
products of an entry of A with an entry of B.
VECTORIZED MATRICES: Stacking the entries of X into a vector vec(X), a
linear matrix equation in X such as AX + XB = C can be written as
( [A ⊗ I] + [I ⊗ B] ) vec(X) = vec(C)
(with a suitable vectorization convention).
Classic example: 2nd-order E.O.M.
[Figure: mass m attached to a spring k and damper b, applied force f(t), displacement y(t).]
m ÿ(t) + b ẏ(t) + k y(t) = f(t).
Define the state
x(t) = [y(t); ẏ(t)];
then
ẋ(t) = [ẏ(t); ÿ(t)] = [ẏ(t); −(k/m) y(t) − (b/m) ẏ(t) + (1/m) f(t)].
We can write this in the form ẋ(t) = A x(t) + B f(t), where A and B are
constant matrices:
A = [0 1; −k/m −b/m],  B = [0; 1/m].
With y(t) as the output, y(t) = C x(t) + D f(t), where C = [1 0], D = [0].
Taking Laplace transforms of ẋ(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t):
s X(s) − x(0) = A X(s) + B U(s)
X(s) = (sI − A)^{−1} B U(s) + (sI − A)^{−1} x(0)
Y(s) = [C(sI − A)^{−1} B + D] U(s) + C(sI − A)^{−1} x(0),
where C(sI − A)^{−1} B + D is the transfer function of the system.
So,
Y(s)/U(s) = C(sI − A)^{−1} B + D,
but
(sI − A)^{−1} = adj(sI − A) / det(sI − A),
so the result may also be written
Y(s)/U(s) = C(sI − A)^{−1} B + D = det([sI − A, B; −C, D]) / det(sI − A).
EXAMPLE: For the mass-spring-damper system,
A = [0 1; −k/m −b/m],  B = [0; 1/m],  C = [1 0],  D = 0.
G(s) = C(sI − A)^{−1} B
= [1 0] [s −1; k/m s + b/m]^{−1} [0; 1/m]
= [1 0] [s + b/m 1; −k/m s] [0; 1/m] / (s^2 + (b/m)s + (k/m))
= (1/m) / (s^2 + (b/m)s + (k/m))
= 1 / (m s^2 + b s + k).
This is exactly what we expect from the first example in this section.
EXAMPLE: Using the determinant formula instead:
G(s) = det([s −1 0; k/m s + b/m 1/m; −1 0 0]) / det([s −1; k/m s + b/m])
= (1/m) / (s^2 + (b/m)s + (k/m))
= 1 / (m s^2 + b s + k).
Same result.
We can simulate a state-space system in Simulink using the
State Space block from the Continuous library, or we can make our
own. The following method has advantages because it gives us
explicit access to the state and other internal signals. It is a direct
implementation of the transfer function above, and the initial state
may be set by setting the initial integrator values.
[Figure: Simulink diagram: gain blocks A, B, C, D around an integrator 1/s, with signals u, ẋ, x and y.]
Let x(t) = [ÿ(t); ẏ(t); y(t)]. Then
ẋ(t) = [−a_1 −a_2 −a_3; 1 0 0; 0 1 0] x(t) + [1; 0; 0] u(t)
y(t) = [0 0 1] x(t) + [0] u(t).
Now suppose the equation also involves derivatives of the input. Write
y(t) = b_1 v̈(t) + b_2 v̇(t) + b_3 v(t),
where
v⃛(t) + a_1 v̈(t) + a_2 v̇(t) + a_3 v(t) = u(t).
The representation for this is the same as in Case [1]. Let
x(t) = [v̈(t)  v̇(t)  v(t)]^T.
Then
ẋ(t) = [−a_1 −a_2 −a_3; 1 0 0; 0 1 0] x(t) + [1; 0; 0] u(t)
y(t) = [b_1 b_2 b_3] x(t) + [0] u(t).
If the transfer function is not strictly proper, first divide:
G(s) = (β_1 s^2 + β_2 s + β_3) / (s^3 + a_1 s^2 + a_2 s + a_3) + D,
where the β_i terms are computed via long division. The remainder
D is the feedthrough term.
[Figure: controller-canonical-form block diagram with states x_1c, x_2c, x_3c, output gains b_1, b_2, b_3 and feedback gains a_1, a_2, a_3.]
Or, in observer canonical form,
ẋ(t) = [−a_1 1 0; −a_2 0 1; −a_3 0 0] x(t) + [b_1; b_2; b_3] u(t)
y(t) = [1 0 0] x(t).
[Figure: observer-canonical-form block diagram with states x_1o, x_2o, x_3o.]
Or, in yet another form with states x_1co, x_2co, x_3co:
[Figure: block diagram with integrators 1/s and feedback gains a_1, a_2, a_3.]
x_3 = (1/s)(x_2 − a_1 x_3)
x_2 = (1/s)(x_1 − a_2 x_3)
x_1 = (1/s)(u − a_3 x_3).
Thus,
X_3(s) = U(s) / (s^3 + a_1 s^2 + a_2 s + a_3)
X_2(s) = (s + a_1) U(s) / (s^3 + a_1 s^2 + a_2 s + a_3)
X_1(s) = (s^2 + a_1 s + a_2) U(s) / (s^3 + a_1 s^2 + a_2 s + a_3).
Matching the output y = β_1 x_1 + β_2 x_2 + β_3 x_3 to the numerator
b_1 s^2 + b_2 s + b_3 requires
[1 0 0; a_1 1 0; a_2 a_1 1][β_1; β_2; β_3] = [b_1; b_2; b_3],
so
[β_1; β_2; β_3] = [1 0 0; a_1 1 0; a_2 a_1 1]^{−1} [b_1; b_2; b_3]
= [b_1; b_2 − a_1 b_1; b_3 − a_1 b_2 − a_2 b_1 + a_1^2 b_1].
Or, equivalently,
ẋ(t) = [0 0 −a_3; 1 0 −a_2; 0 1 −a_1] x(t) + [1; 0; 0] u(t)
y(t) = [β_1 β_2 β_3] x(t).
Taking the transpose of the (scalar) transfer function,
G(s) = [C(sI − A)^{−1} B + D]^T = B^T ((sI − A)^{−1})^T C^T + D^T
= B^T (sI − A^T)^{−1} C^T + D^T.
So C ↔ B^T, A ↔ A^T, B ↔ C^T and D ↔ D^T are dual forms.
We have already seen this (!). Controller and observer are dual
forms. Likewise, we can come up with
ẋ(t) = [0 1 0; 0 0 1; −a_3 −a_2 −a_1] x(t) + [β_1; β_2; β_3] u(t)
y(t) = [1 0 0] x(t)
[Figure: observability-canonical-form block diagram with states x_1ob, x_2ob, x_3ob, input gains β_1, β_2, β_3 and feedback gains a_1, a_2, a_3.]
Finally, if the poles p_1, …, p_n are distinct, a partial-fraction expansion
gives the modal (diagonal) form:
X_1(s)/U(s) = r_1/(s − p_1),  …,  X_n(s)/U(s) = r_n/(s − p_n),
so ẋ(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t) with
A = diag(p_1, p_2, …, p_n),  B = [r_1; r_2; ⋮; r_n],  C = [1 1 ⋯ 1],  D = [0].
Transformations
We have seen that state-space representations are not unique.
Selection of the state x is quite arbitrary.
Can we convert from one representation to another and get
equivalent systems?
Analyze the transformation of
ẋ(t) = A x(t) + B u(t)
y(t) = C x(t) + D u(t).
Let x(t) = T z(t), where T is an invertible (similarity) transformation
matrix. Then
ż(t) = T^{−1} ẋ(t) = T^{−1}[A x(t) + B u(t)]
= (T^{−1} A T) z(t) + (T^{−1} B) u(t) = Ā z(t) + B̄ u(t)
y(t) = (C T) z(t) + D u(t) = C̄ z(t) + D̄ u(t).
The transfer function is unchanged:
H_2(s) = C̄ (sI − Ā)^{−1} B̄ + D̄
= C T · T^{−1}(sI − A)^{−1} T · T^{−1} B + D = C(sI − A)^{−1} B + D = H(s).
Consider
H(s) = (b_1 s^2 + b_2 s + b_3) / (s^3 + a_1 s^2 + a_2 s + a_3).
EXAMPLE: The controller canonical form of H(s) = (2s + 3)/(s^2 + 3s + 2) is
ẋ_c(t) = [−3 −2; 1 0] x_c(t) + [1; 0] u(t)
y(t) = [2 3] x_c(t).
Let
T = [2 1; −1 −1].
Then
ẋ(t) = (T^{−1} A T) x(t) + (T^{−1} B) u(t),  y(t) = (C T) x(t).
Plugging in A, B, C and T:
ẋ(t) = [−2 0; 0 −1] x(t) + [1; −1] u(t)
y(t) = [1 −1] x(t),
EXAMPLE:
[Figure: responses of the two realizations to the same initial state.]
The systems have the same transfer function, but different responses
to initial states since the states have different interpretations.
Time (Dynamic) Response
Develop more insight into the system response by looking at the
time-domain solution for x(t).
Scalar case first, then many states and MIMO.
Homogeneous Part (scalar)
ẋ(t) = a x(t), given x(0). Solution: x(t) = e^{at} x(0).
In the matrix case, ẋ(t) = A x(t), given x(0). Taking Laplace transforms,
X(s) = (sI − A)^{−1} x(0), so x(t) = L^{−1}[(sI − A)^{−1}] x(0).
But,
(sI − A)^{−1} = I/s + A/s^2 + A^2/s^3 + ⋯
so,
L^{−1}[(sI − A)^{−1}] = I + At + A^2 t^2/2! + A^3 t^3/3! + ⋯ = e^{At},
the matrix exponential, and
x(t) = e^{At} x(0).
e^{At} is called the transition matrix or state-transition matrix.
Matrix exponential: e^{(A+B)t} = e^{At} e^{Bt} iff AB = BA (i.e., iff A and B
commute). In Matlab, e^{At} is computed numerically by expm.m.
Computation of e^{At}:
EXAMPLE: ẋ = A x, with
A = [0 1; −2 −3].
(sI − A)^{−1} = [s −1; 2 s+3]^{−1} = [s+3 1; −2 s] / ((s + 1)(s + 2))
= [ 2/(s+1) − 1/(s+2),    1/(s+1) − 1/(s+2);
   −2/(s+1) + 2/(s+2),   −1/(s+1) + 2/(s+2) ]
so
e^{At} = [ 2e^{−t} − e^{−2t},     e^{−t} − e^{−2t};
          −2e^{−t} + 2e^{−2t},  −e^{−t} + 2e^{−2t} ] · 1(t).
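The closed-form entries can be checked against a numerical matrix exponential (the notes use Matlab's expm.m; scipy.linalg.expm is the equivalent here):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

def eAt(t):
    # closed-form e^{At} from the inverse Laplace transform above
    e1, e2 = np.exp(-t), np.exp(-2 * t)
    return np.array([[ 2 * e1 - e2,          e1 - e2],
                     [-2 * e1 + 2 * e2,     -e1 + 2 * e2]])

t = 0.7
print(np.max(np.abs(expm(A * t) - eAt(t))))   # agreement to machine precision
```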
Forced solution, scalar case: ẋ(t) = a x(t) + b u(t).
1. Multiply by the integrating factor e^{−at}.
2. e^{−at}[ẋ(t) − a x(t)] = (d/dt)[e^{−at} x(t)] = e^{−at} b u(t).
3. ∫_0^t (d/dτ)[e^{−aτ} x(τ)] dτ = e^{−at} x(t) − x(0) = ∫_0^t e^{−aτ} b u(τ) dτ,
so x(t) = e^{at} x(0) + ∫_0^t e^{a(t−τ)} b u(τ) dτ.
Forced Solution (full solution)
Now, let ẋ(t) = A x(t) + B u(t), with x ∈ ℝ^{n×1} and u ∈ ℝ^{m×1}. By the same steps,
x(t) = e^{At} x(0) + ∫_0^t e^{A(t−τ)} B u(τ) dτ.
We have seen the key role of e^{At} in the solution for x(t). It impacts the
system response, but we need more insight.
Diagonalizable case: T^{−1} A T = Λ, with
T = [v_1 v_2 … v_n]  and  T^{−1} = [w_1^T; w_2^T; ⋮; w_n^T]
(i.e., the w_i^T are the rows of T^{−1}). Then
e^{At} = T e^{Λt} T^{−1}
= [v_1 v_2 … v_n] diag(e^{λ_1 t}, …, e^{λ_n t}) [w_1^T; ⋮; w_n^T]
= Σ_{i=1}^{n} e^{λ_i t} v_i w_i^T.
So for ẋ(t) = A x(t),
x(t) = e^{At} x(0) = T e^{Λt} T^{−1} x(0) = Σ_{i=1}^{n} e^{λ_i t} v_i (w_i^T x(0)):
the response is a sum of modes e^{λ_i t} v_i, each weighted by how much
of x(0) lies along that mode.
Impulse Response
[Figures: impulse response of an example system, and the individual modal components e^{λ_i t} v_i (w_i^T x(0)) plotted versus time.]
[Figure: parallel bank of integrators 1/s realizing the diagonalized system, with states x_1, …, x_n.]
Can write
x(t) = e^{At} x(0) = T e^{Λt} T^{−1} x(0) = Σ_{i=1}^{n} e^{λ_i t} v_i (w_i^T x(0)).
Repeated eigenvalues: in general,
T^{−1} A T = J = diag(J_1, …, J_q),
where
J_i = [λ_i 1 0 ⋯; 0 λ_i ⋱ ⋮; ⋮ ⋱ ⋱ 1; 0 ⋯ 0 λ_i]   (n_i × n_i),
and n = Σ_{i=1}^{q} n_i.
[Figure: chain of integrators 1/s with output gains c_1, c_2, …, c_n realizing a Jordan block.]
For a single k × k block with eigenvalue λ,
(sI − J) = [s−λ −1 0 ⋯; 0 s−λ ⋱ ⋮; ⋮ ⋱ ⋱ −1; 0 ⋯ 0 s−λ].
Its inverse has entries (s − λ)^{−1}, (s − λ)^{−2}, …, (s − λ)^{−k} along
successive diagonals:
(sI − J)^{−1} = (s − λ)^{−1} I + (s − λ)^{−2} F_1 + ⋯ + (s − λ)^{−k} F_{k−1},
so
e^{Jt} = e^{λt} [1 t ⋯ t^{k−1}/(k−1)!; 0 1 ⋯ t^{k−2}/(k−2)!; ⋮ ⋱ ⋮; 0 ⋯ 1]
= e^{λt} (I + t F_1 + ⋯ + (t^{k−1}/(k−1)!) F_{k−1}).
Thus, Jordan blocks yield repeated poles and terms of the form t^p e^{λt}
in e^{At}.
Canonical Forms for MIMO Systems
Consider G(s) = N(s)/d(s), where d(s) = s^n + α_1 s^{n−1} + ⋯ + α_n and the
matrix numerator has coefficient matrices N_1, …, N_n. Then (block controller form)
ẋ(t) = [−α_1 I −α_2 I ⋯ −α_{n−1} I −α_n I;
        I 0 ⋯ 0 0;
        0 I ⋯ 0 0;
        ⋮ ⋱ ⋮;
        0 0 ⋯ I 0] x(t) + [I; 0; ⋮; 0] u(t)
y(t) = [N_1 N_2 ⋯ N_{n−1} N_n] x(t) + G(∞) u(t),
and (block observer form)
ẋ(t) = [−α_1 I I ⋯ 0;
        −α_2 I 0 ⋱ ⋮;
        ⋮ ⋮ ⋱ I;
        −α_n I 0 ⋯ 0] x(t) + [N_1; N_2; ⋮; N_n] u(t)
y(t) = [I 0 ⋯ 0] x(t) + G(∞) u(t).
We generally find that SISO canonical forms are more useful than
MIMO canonical forms.
Zeros of a State-Space System
We have seen that the eigenvalues of A (equivalently, the poles of the
entries of G(s)) are the poles of the system. What about the zeros of
the transfer function?
What is a zero? There are two types of zero in a MIMO system: blocking zeros
and transmission zeros.
Blocking zeros: apply the input U(s) = K/(s − λ), K ≠ 0 a constant vector.
If λ is a blocking zero, e^{λt} will not appear at the output for any K, x(0).
(Not considered a very useful definition for a MIMO zero.)
IDEA: (but not the entire story) Consider a two-input two-output system.
[Figure: 2×2 system with blocks G_11(s), G_12(s), G_21(s), G_22(s) connecting inputs u_1(t), u_2(t) to outputs y_1(t), y_2(t).]
Transmission zeros: z_i is a transmission zero if there exist x_0, u_0 ≠ 0
such that, with x(t) = x_0 e^{z_i t} and u(t) = u_0 e^{z_i t},
y(t) ≡ 0.
329
x(t)
= Ax(t) + Bu(t) z i e zi t x0 = Ax0e zi t + Bu 0e zi t
i x0
h
=0
zi I A B
u 0
i x0
h
= 0.
C D
u 0
zi I A
C
B
D
x0
u 0
= 0.
zi I A B
= 0.
det
C
D
Recall
G(s) =
det
sI A
C
B
D
det(s I A)
zi I A B
< n + min{ p, q}
rank
C
D
330
EXAMPLE:
G(s) = [ s(s+1)/(s^2+1),   (s+1)/(s+2);
         0,                (s+2)(s+1)/(s^2+2s+2) ].
We can find that G(s) has a blocking zero at s = −1 and has
transmission zeros at s = 0, s = −1 and s = −2.
Y_1(s) = s(s+1)/(s^2+1) U_1(s) + (s+1)/(s+2) U_2(s)
Y_2(s) = (s+2)(s+1)/(s^2+2s+2) U_2(s).
Let U(s) = K/(s − λ). Then
Y_1(s) = (s+1)/(s−λ) · [s(s+2) k_1 + (s^2+1) k_2] / ((s^2+1)(s+2))
Y_2(s) = (s+1)/(s−λ) · (s+2) k_2 / (s^2+2s+2).
[Figure: sampled-data control loop: e(t) → A2D → e[k] → D(z) → u[k] → D2A (zoh) → u(t) → G(s) → y(t), with process disturbance w(t) and sensor noise v(t).]
The z-Transform
Just as Laplace transforms are used for continuous-time systems, the
z-transform is used with discrete-time systems.
DEFINITION:
X(z) = Σ_{k=0}^{∞} x[k] z^{−k}.
[Figures: mapping between the s-plane and the z-plane (z = e^{sT}); loci of good damping, good natural frequency ω_n, and good settling time in the z-plane.]
Consider
G(z) = Y(z)/U(z) = (b_1 z^2 + b_2 z + b_3) / (z^3 + a_1 z^2 + a_2 z + a_3).
Introduce V(z) such that V(z)/U(z) contains all of the poles of Y(z)/U(z). Then
Y(z) = (b_1 z^2 + b_2 z + b_3) V(z),
and v[k+3] + a_1 v[k+2] + a_2 v[k+1] + a_3 v[k] = u[k]. Let
x[k] = [v[k+2]; v[k+1]; v[k]];
x[k+1] = [v[k+3]; v[k+2]; v[k+1]]
= [−a_1 −a_2 −a_3; 1 0 0; 0 1 0][v[k+2]; v[k+1]; v[k]] + [1; 0; 0] u[k].
Then
x[k+1] = [−a_1 −a_2 −a_3; 1 0 0; 0 1 0] x[k] + [1; 0; 0] u[k]
y[k] = [b_1 b_2 b_3] x[k] + [0] u[k].
[Figure: discrete controller-canonical-form block diagram with delay blocks z^{−1}, states x_1c, x_2c, x_3c, output gains b_1, b_2, b_3 and feedback gains a_1, a_2, a_3.]
Taking z-transforms,
Y(z) = [C(zI − A)^{−1} B + D] U(z) + C(zI − A)^{−1} z x[0],
where C(zI − A)^{−1} B + D is the transfer function of the system. So,
Y(z)/U(z) = C(zI − A)^{−1} B + D.
Transformation
State-space representations are not unique. Selection of the state x is
quite arbitrary.
Analyze the transformation of
x[k+1] = A x[k] + B u[k]
y[k] = C x[k] + D u[k].
Let x[k] = T w[k], where T is an invertible (similarity) transformation
matrix.
w[k+1] = (T^{−1} A T) w[k] + (T^{−1} B) u[k] = Ā w[k] + B̄ u[k]
y[k] = (C T) w[k] + D u[k] = C̄ w[k] + D̄ u[k].
Same as for continuous-time.
Time (Dynamic) Response
Homogeneous Part
First, consider the scalar case: x[k+1] = a x[k], given x[0]. Then
x[k] = a^k x[0]; in the matrix case, x[k] = A^k x[0].
Forced solution:
x[k] = A^k x[0] + Σ_{j=0}^{k−1} A^{k−1−j} B u[j],
where the sum is a (discrete) convolution.
Converting Plant Dynamics to Discrete Time
[Figure: ZOH feeding a continuous-time plant {A, B, C, D}, input u(t), output y(t).]
With a zero-order hold, u(t) = u(kT) = u[k] for kT ≤ t < (k+1)T. Then
x((k+1)T) = ∫_0^{(k+1)T} e^{A((k+1)T − τ)} B u(τ) dτ + e^{A(k+1)T} x(0).
With malice aforethought, break up the integral into two pieces. The
first piece will become A_d times x(kT). The second part will become
B_d times u(kT):
x((k+1)T) = e^{AT} [ e^{AkT} x(0) + ∫_0^{kT} e^{A(kT − τ)} B u(τ) dτ ]
            + ∫_{kT}^{(k+1)T} e^{A((k+1)T − τ)} B u(τ) dτ
= e^{AT} x(kT) + ( ∫_0^{T} e^{Aτ̄} B dτ̄ ) u[k],
substituting τ̄ = (k+1)T − τ and using u(τ) = u[k] over the interval.
Similarly, y(kT) = C x(kT) + D u(kT). That is, C_d = C; D_d = D.
Calculating A_d, B_d, C_d and D_d
C_d and D_d require no calculation since C_d = C and D_d = D.
A_d = e^{AT}, and
B_d = ∫_0^{T} e^{Aτ} B dτ = ( IT + A T^2/2! + A^2 T^3/3! + ⋯ ) B
= A^{−1}(e^{AT} − I) B = A^{−1}(A_d − I) B,
where the last two expressions require A to be invertible.
Can we determine the initial state x(0) from measurements of u(t) and
y(t)? Differentiating the output,
ẏ(0) = C ẋ(0) + D u̇(0) = C(A x(0) + B u(0)) + D u̇(0).
In general,
[y(0); ẏ(0); ÿ(0)] = [C; CA; CA^2] x(0) + [D 0 0; CB D 0; CAB CB D][u(0); u̇(0); ü(0)],
or,
x(0) = [C; CA; CA^2]^{−1} ( [y(0); ẏ(0); ÿ(0)] − [D 0 0; CB D 0; CAB CB D][u(0); u̇(0); ü(0)] ).
Thus, we require that the observability matrix O(C, A) = [C; CA; CA^2; …]
is nonsingular.
CONCLUSION: If O is nonsingular, then we can determine/estimate the
initial state of the system x(0) using only u(t) and y(t) (and therefore,
we can estimate x(t) for all t ≥ 0).
EXAMPLE:
ẋ(t) = [0 1 0; 0 0 1; −a_3 −a_2 −a_1] x(t) + [β_1; β_2; β_3] u(t)
y(t) = [1 0 0] x(t).
Then
O = [C; CA; CA^2] = [1 0 0; 0 1 0; 0 0 1] = I_n.
[Figure: electrical-circuit example with 1Ω resistors, a 1H inductor and 1F capacitors, states x_1 and x_2, input u, output y; redrawn in simplified equivalent form.]
[Figure: plant {A, B, C, D} driven by u(t), producing y(t); an observer uses u(t) and y(t) to produce the state estimate x̂.]
Controllability: can we move the state wherever we wish using the input?
Consider impulsive inputs u(t) = δ^{(k)}(t) (the kth derivative of the delta
function), so that U(s) = s^k. Then
X(s) = (sI − A)^{−1} B s^k = (1/s)(I − A/s)^{−1} B s^k
= (1/s)( I + A/s + A^2/s^2 + ⋯ ) B s^k      [holds for large s]
= B s^{k−1} + A B s^{k−2} + A^2 B s^{k−3} + ⋯ + A^k B / s + A^{k+1} B / s^2 + ⋯
The first terms are impulsive: they have zero value for t > 0. Thus,
x(0+) = lim_{s→∞} s [ A^k B/s + A^{k+1} B/s^2 + ⋯ ] = A^k B.
So with the input u(t) = Σ_{i=0}^{n−1} g_{i+1} δ^{(i)}(t),
x(0+) = [B AB ⋯ A^{n−1} B] [g_1; g_2; ⋮; g_n],
where C = [B AB ⋯ A^{n−1} B] is called the controllability matrix.
CONCLUSION: If C is nonsingular, we can choose g = C^{−1} x_d to move the
state (impulsively) to any desired x(0+) = x_d.
EXAMPLE:
ẋ(t) = [0 0 −a_3; 1 0 −a_2; 0 1 −a_1] x(t) + [1; 0; 0] u(t)
y(t) = [β_1 β_2 β_3] x(t).
Then
C = [B AB A^2 B] = [1 0 0; 0 1 0; 0 0 1] = I_n.
Later, we'll see that smooth inputs can effect the state transfer (not
instantaneously, though!).
DUALITY: {A, B, C, D} controllable ⟺ {A^T, C^T, B^T, D^T} observable.
EXAMPLE:
[Figure: circuit example with two 1F capacitors, states x_1 and x_2, input u.]
Consider a system in modal (diagonal) form:
ẋ(t) = diag(λ_1, λ_2, …, λ_n) x(t) + [β_1; β_2; ⋮; β_n] u(t)
y(t) = [γ_1 γ_2 ⋯ γ_n] x(t) + [0] u(t).
[Figure: parallel first-order blocks with states x_1(t), …, x_n(t) between u(t) and y(t).]
When controllable? When observable?
O = [C; CA; ⋮; CA^{n−1}]
= [γ_1 γ_2 ⋯ γ_n;
   γ_1 λ_1 γ_2 λ_2 ⋯ γ_n λ_n;
   ⋮;
   γ_1 λ_1^{n−1} γ_2 λ_2^{n−1} ⋯ γ_n λ_n^{n−1}]
= [1 1 ⋯ 1; λ_1 λ_2 ⋯ λ_n; ⋮; λ_1^{n−1} λ_2^{n−1} ⋯ λ_n^{n−1}] diag(γ_1, …, γ_n),
where the first factor is a Vandermonde matrix. Singular?
det{O} = (γ_1 ⋯ γ_n) det{Vandermonde} = (γ_1 ⋯ γ_n) ∏_{i<j} (λ_j − λ_i).
CONCLUSION: Observable ⟺ λ_i ≠ λ_j for i ≠ j, and γ_i ≠ 0 for i = 1, …, n.
[Figures: parallel connections illustrating observability: two identical branches 1/(s+1) summed at the output (not observable); branches 1/(s+1) and 1/(s+2) (observable).]
What about controllability? Use duality, exchanging the roles of the β_i and γ_i.
CONCLUSION: Controllable ⟺ λ_i ≠ λ_j for i ≠ j, and β_i ≠ 0 for i = 1, …, n.
[Figures: the dual examples: identical branches 1/(s+1) fed from the input (not controllable); branches 1/(s+1) and 1/(s+2) (controllable).]
In discrete time,
x[n] = A^n x[0] + [B AB A^2 B ⋯ A^{n−1} B] [u[n−1]; ⋮; u[0]],
where the matrix multiplying the input sequence is the controllability
matrix C. Which leads to
[u[n−1]; ⋮; u[0]] = C^{−1} ( x[n] − A^n x[0] ).
Discrete-Time Reachability
In the literature, there are three different controllability definitions:
1. Transfer any state to any other state.
2. Transfer any state to the zero state, called controllability to the
origin.
3. Transfer the zero state to any state, called controllability from the
origin, or reachability.
In continuous time, because e At is nonsingular, the three definitions
are equivalent.
In discrete time, if A is nonsingular, the three definitions are also
equivalent.
However, if A is singular, (1) and (3) are equivalent but not (2) and (3).
EXAMPLE: Consider x[k+1] = A x[k] + B u[k] with the nilpotent matrix
A = [0 1 0; 0 0 1; 0 0 0]  and  B = [0; 0; 0].
Its controllability matrix has rank 0 and the equation is not controllable
in sense (1) or (3).
However, A^k = 0 for k ≥ 3 so x[3] = A^3 x[0] = 0 for any initial state
x[0] and any input u[k].
Thus, the system is controllable to the origin but not controllable from
the origin or reachable.
Definition (1) encompasses the other two definitions, so is used as
our definition of controllable.
Discrete-Time Observability
Can we reconstruct the state x[0] from the output y[k]?
y[k] = C x[k] + D u[k]
y[0] = C x[0] + D u[0]
y[1] = C[A x[0] + B u[0]] + D u[1]
y[2] = C[A^2 x[0] + A B u[0] + B u[1]] + D u[2]
⋮
y[n−1] = C[A^{n−1} x[0] + A^{n−2} B u[0] + ⋯ + B u[n−2]] + D u[n−1].
In matrix form,
[y[0]; y[1]; ⋮; y[n−1]] = [C; CA; ⋮; CA^{n−1}] x[0]
+ [D 0 ⋯ 0; CB D ⋯ 0; ⋮ ⋱ ⋮; CA^{n−2}B ⋯ CB D] [u[0]; u[1]; ⋮; u[n−1]],
i.e., y = O x[0] + T u. So,
x[0] = O^{−1} ( [y[0]; ⋮; y[n−1]] − T [u[0]; ⋮; u[n−1]] ).
If O is full-rank or nonsingular, x[0] may be reconstructed from the
y[k], u[k]. We say that {C, A} form an observable pair.
Do more measurements y[n], y[n+1], … help in reconstructing
x[0]? No! (Cayley-Hamilton theorem). So, if the original state is not
observable with n measurements, then it will not be observable with
more than n measurements either.
Since we know u[k] and the dynamics of the system, if the system is
observable we can determine the entire state sequence x[k], k ≥ 0,
once we determine x[0]:
x[n] = A^n x[0] + Σ_{i=0}^{n−1} A^{n−1−i} B u[i]
= A^n O^{−1} ( [y[0]; ⋮; y[n−1]] − T [u[0]; ⋮; u[n−1]] ) + Σ_{i=0}^{n−1} A^{n−1−i} B u[i].
Physical intuition can be better than computing the controllability matrix C. Other tools can help. . .
SIGNIFICANCE: Consider
x(t_1) = e^{At_1} x(0) + ∫_0^{t_1} e^{A(t_1 − τ)} B u(τ) dτ,
and define the controllability Gramian
W_c(t_1) = ∫_0^{t_1} e^{Aτ} B B^T e^{A^T τ} dτ.
We claim that for any x(0) = x_0 and any x(t_1) = x_1 the input
u(t) = −B^T e^{A^T (t_1 − t)} W_c^{−1}(t_1) ( e^{At_1} x_0 − x_1 )
effects the transfer.
"
0.5 0.25
1
1
515
two seconds.
Z
Wc (2) =
"
"
0.2162 0.3167
0.3167 0.4908
and
h
0.5
u(t) = 0.5 1
"
#"
#
0.5
1
0.5 1
"
0.5
#!
0.5(2t)
0
e(2t)
Wc (2)1
"
= 58.82e0.5t + 27.96et .
e
0
0
e2
#"
10
1
If A is stable, W_c^{−1} > 0, which implies we can't get anywhere for free.
Similarly, the initial state may be computed as
  x(0) = Wo^{-1}(t1) int_0^{t1} e^{A^T t} C^T y~(t) dt,
where
  Wo(t1) = int_0^{t1} e^{A^T tau} C^T C e^{A tau} d tau
and
  y~(t) = y(t) - C int_0^t e^{A(t - tau)} B u(tau) d tau - D u(t).
If A is stable, then Wo^{-1} > 0 and we can't estimate the initial state perfectly, even with an infinite number of measurements u(t) and y(t) for t >= 0 (since memory of x(0) fades).
If A is not stable, then Wo^{-1} can have a nonzero nullspace, Wo^{-1} x(0) = 0, which means that the estimate covariance goes to zero as t -> infinity.
Wo may be a better indicator of observability than O, and similarly the discrete-time Gramian
  Wdc = sum_{m=0}^{n-1} A^m B B^T (A^T)^m
may be a better indicator of controllability than C.
ASIDE: Define
  Ac = (A + I)^{-1}(A - I)
Lecture notes prepared by Dr. Gregory L. Plett. Copyright (c) 2001, 2000, Gregory L. Plett.
and
  Bc Bc^T = 2(A + I)^{-1} B B^T (A^T + I)^{-1},
which is nonsingular when the system is controllable. In particular,
  Wdo = sum_{m=0}^{infinity} (A^T)^m C^T C A^m.
Given
  xdot(t) = A x(t) + B u(t)
  y(t) = C x(t) + D u(t),
a transformation x = T x_co puts the system in controllability canonical form:
  xdot_co(t) = [ 0  0  ...  0  -a_n     ]            [ 1 ]
               [ 1  0  ...  0  -a_{n-1} ] x_co(t)  + [ 0 ] u(t)
               [       ...              ]            [...]
               [ 0  0  ...  1  -a_1     ]            [ 0 ]
  y(t) = [ beta_1  beta_2  ...  beta_n ] x_co(t) + D u(t),
where
  T^{-1} A T = A_co,  C T = C_co,  T^{-1} B = B_co,  D = D_co.
Write T = [t_1, ..., t_n].
Then
  B = T B_co = T [ 1; 0; ...; 0 ] = t_1.
Also, A T = T A_co, or
  A [t_1, ..., t_n] = [t_1, ..., t_n] [ 0  0  ...  -a_n     ]
                                      [ 1  0  ...  -a_{n-1} ]
                                      [       ...           ]
                                      [ 0  ...  1  -a_1     ],
so, column by column,
  A t_1 = t_2 = A B,  A t_2 = t_3 = A^2 B, ...,  A t_{n-1} = t_n = A^{n-1} B,
  A t_n = -sum_{k=1}^{n} a_k t_{n-k+1}.
Therefore,
  T = [ B  A B  ...  A^{n-1} B ] = C.
CONCLUSION: If x_old = T x_new, then T = C_old C_new^{-1}. That is, to convert between any two realizations, T is a combination of the controllability matrices of the two realizations.
EXTENSION I:
  C_new = [ B_new  A_new B_new  ...  A_new^{n-1} B_new ]
        = [ T^{-1} B_old  T^{-1} A_old T T^{-1} B_old  ...  T^{-1} A_old^{n-1} B_old ]
        = T^{-1} C_old,
or T = C_old C_new^{-1}.
EXTENSION II: By duality, O_new = O_old T, so T = O_old^{-1} O_new.
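A quick numerical check of T = C_old C_new^{-1}; the realizations here are arbitrary illustrations, not taken from the notes:

```python
import numpy as np

A_old = np.array([[0.0, 1.0], [-2.0, -3.0]])
B_old = np.array([[0.0], [1.0]])

# Create a "new" realization from a known similarity transform
rng = np.random.default_rng(1)
T_true = rng.standard_normal((2, 2)) + 3 * np.eye(2)   # well-conditioned
A_new = np.linalg.inv(T_true) @ A_old @ T_true
B_new = np.linalg.inv(T_true) @ B_old

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Recover the transform from the two controllability matrices
T = ctrb(A_old, B_old) @ np.linalg.inv(ctrb(A_new, B_new))
print(np.allclose(T, T_true))   # True
```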
Canonical Decompositions
What happens if {A, B} not controllable or if {A, C} not observable?
Is part of the system controllable?
Is part of the system observable?
Given a system with {A, B} not controllable, xdot(t) = A x(t) + B u(t), y(t) = C x(t), let's try to transform it to controllability canonical form.
Let t_1 = B, t_2 = A B, ..., t_r = A^{r-1} B, and suppose t_1, t_2, ..., t_r are independent but
  t_{r+1} = A^r B = alpha_r t_1 + ... + alpha_1 t_r
for some constants alpha_1, ..., alpha_r.
Then rank(C) = r, since the vectors A^r B, A^{r+1} B, ..., A^{n-1} B can all be expressed as linear combinations of t_1, t_2, ..., t_r.
Let s_{r+1}, ..., s_n be your favorite vectors for which
  T = [ t_1  ...  t_r  s_{r+1}  ...  s_n ]
is nonsingular.
Consider
  G(s) = (s - 1)/(s^2 - 1) = (s - 1)/((s + 1)(s - 1)) = 1/(s + 1).
In observer-canonical form,
  xdot(t) = [ 0  1 ]        [  1 ]
            [ 1  0 ] x(t) + [ -1 ] u(t)
  y(t) = [ 1  0 ] x(t).
So,
  t_1 = [ 1; -1 ]  and  t_2 = A t_1 = [ -1; 1 ] = -t_1,
so the realization is not controllable.
PROOF:
...so we have
  A z = lambda z,  where  z = T [ 0; v_2 ].
DUAL:
INTERPRETATION:
Or, y(t) = sum_{i=1}^{n} (...).
[Block diagram: u(t) drives four subsystems -- controllable and observable; controllable but not observable; observable but not controllable; neither observable nor controllable -- whose outputs sum to y(t).]
STABILIZABLE :
DETECTABLE :
EXAMPLE:
  A = [ 0  1 ]       [  1 ]
      [ 1  0 ],  B = [ -1 ],  C = [ 1  0 ],  D = [0].
  C adj(sI - A) B / det(sI - A) = (s - 1)/(s^2 - 1) = 1/(s + 1).
  [ C; C A ] = [ 1  0; 0  1 ]. Observable.
  C = [ B  A B ] = [ 1  -1; -1  1 ]. Not controllable.
3. Controller realization of (s + 10)/(s^2 + 11s + 10):
  A = [ -11  -10 ]       [ 1 ]
      [   1    0 ],  B = [ 0 ],  C = [ 1  10 ],  D = [0].
  C adj(sI - A) B / det(sI - A) = (s + 10)/(s^2 + 11s + 10) = 1/(s + 1).
  C = [ B  A B ] = [ 1  -11; 0  1 ]. Controllable.
  [ C; C A ] = [ 1  10; -1  -10 ]. Not observable.
TREND:
III:
IV:
DEFINITION:
PROOF: I <=> II
I => II:
If there are common roots in C adj(sI - A)B and det(sI - A), they will cancel out of the transfer function. But the eigenvalues of A satisfy det(sI - A) = 0, so the poles of G(s) will not contain all eigenvalues of A.
II => I: The eigenvalues of A satisfy det(sI - A) = 0. The poles of G(s), from above, satisfy det(sI - A) = 0 unless canceled. The only way to cancel a pole is to have a common root in C adj(sI - A)B.
PROOF: I <=> IV
I => IV: Cancel to get
  G(s) = C adj(sI - A)B / det(sI - A) + D = b_r(s)/a_r(s),
where b_r(s) and a_r(s) are coprime ("r" means reduced).
IV => I:
G(s) has k poles unless some cancel with C adj(s I A)B. Therefore,
if k > n, C adj(s I A)B and det(s I A) are not coprime.
PROOF: III <=> IV (controllable and observable iff minimal).
III => IV: Uncontrollable or unobservable => not minimal.
[Block diagram: controllable part {A_c, B_c, C_c} driven by u(t); uncontrollable part {A_cbar, C_cbar} coupled through A_12; outputs sum to y(t).]
  A = [ A_c  A_12   ]       [ B_c ]
      [ 0    A_cbar ],  B = [  0  ],  C = [ C_c  C_cbar ],  D = [0].
Same transfer function using A_c, B_c, C_c as using A, B, C. Therefore uncontrollable and/or unobservable means not minimal.
IV => III: Suppose a realization with n states and one with r < n states have the same transfer function:
  C(sI - A)^{-1} B + D = Cbar(sI - Abar)^{-1} Bbar + Dbar.
Expanding both sides in powers of 1/s,
  C B/s + C A B/s^2 + C A^2 B/s^3 + ... = Cbar Bbar/s + Cbar Abar Bbar/s^2 + Cbar Abar^2 Bbar/s^3 + ...
Consider
  O C = [ C; C A; ...; C A^{n-1} ] [ B  A B  ...  A^{n-1} B ]
      = [ C B          C A B    ...  C A^{n-1} B  ]
        [ C A B        C A^2 B  ...  C A^n B      ]
        [                 ...                     ]
        [ C A^{n-1} B  ...           C A^{2n-2} B ]
      = Obar Cbar,
since the two realizations have equal Markov parameters, where Obar has r columns and Cbar has r rows. Hence rank(O C) <= r < n.
Therefore det(O) det(C) = 0, so the system is either unobservable, or uncontrollable, or both.
The four equivalences have now been proven.
FACT: All minimal realizations of G(s) are related by a unique change of coordinates T. Can you prove this?
(mostly blank)
(mostly blank)
  xdot(t) = A x(t) + B u(t)
  y(t) = C x(t) + D u(t).
Let u(t) = r(t) - K x(t):
[Block diagram: r(t) -> sum -> plant {A, B, C, D} -> y(t), with state feedback -K x(t).]
  xdot(t) = A x(t) + B(r(t) - K x(t))
          = (A - B K) x(t) + B r(t)
  y(t) = C x(t) + D u(t).
OBJECTIVE: Choose K to place the closed-loop poles, e.g., at the roots of s^2 + 11s + 30 = 0 (s = -5, -6).
  xdot(t) = [ 1  1 ]        [ 1 ]
            [ 1  2 ] x(t) + [ 0 ] u(t).
  det(sI - A) = (s - 1)(s - 2) - 1 = s^2 - 3s + 1.
(Note: the original system is unstable.)
Let
  A_CL = A - B K = [ 1 - k1  1 - k2 ]
                   [ 1       2      ],
so
  det(sI - A_CL) = s^2 + (k1 - 3)s + (1 - 2 k1 + k2).
So,
  k1 - 3 = 11, or k1 = 14;
  1 - 2 k1 + k2 = 30, or k2 = 57.
  K = [ 14  57 ].
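The hand calculation can be checked against a standard pole-placement routine (for a single-input system the gain achieving given poles is unique):

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])

# Place closed-loop poles at the roots of s^2 + 11s + 30
K = place_poles(A, B, [-5.0, -6.0]).gain_matrix
print(K)                                 # ~[[14, 57]]
print(np.linalg.eigvals(A - B @ K))      # ~[-5, -6]
```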
So, with the n parameters in K, can we always relocate all n eig(A_CL)?
For most physical systems, a qualified yes.
Mathematically, an EMPHATIC NO!
It boils down to whether or not the system is controllable; that is, whether every internal system mode can be excited by inputs, either directly or indirectly.
EXAMPLE:
  xdot(t) = [ 1  1 ]        [ 1 ]
            [ 0  2 ] x(t) + [ 0 ] u(t),   u(t) = -K x(t).
  A_CL = A - B K = [ 1 - k1  1 - k2 ]
                   [ 0       2      ].
The eigenvalue at s = 2 is unaffected by any choice of K: that mode is uncontrollable.
FACT:
COROLLARY:
PROOF OF FACT:
  T^{-1} A T = A_c = [ -a1  -a2  ...  -an ]
                     [  1    0   ...   0  ]
                     [       ...          ]
                     [  0   ...   1    0  ]
and
  T^{-1} B = B_c = [ 1; 0; ...; 0 ].
Then
  B_c K_c = [ k1  k2  ...  kn ]
            [ 0   0   ...  0  ]
            [       ...       ]
            [ 0   0   ...  0  ],
so
  A_CL = A_c - B_c K_c = [ -a1-k1  -a2-k2  ...  -an-kn ]
                         [  1       0      ...   0     ]
                         [          ...                ]
                         [  0      ...      1    0     ].
[Block diagram: controller-canonical realization with states x_1c, x_2c, x_3c, coefficients a_1, a_2, a_3 and b_2, b_3, and feedback gains k_1, k_2, k_3.]
  det(sI - A_c + B_c K_c) = det(sI - T A_c T^{-1} + T B_c K_c T^{-1}) = det(sI - A + B K_c T^{-1}),
so the gain in the original coordinates is K = K_c T^{-1}.
The controllability matrix of the controller-canonical form is upper-triangular "1-Toeplitz":
  C_c = [ 1  -a1  a1^2 - a2  ...  ]
        [ 0   1   -a1        ...  ]
        [          ...       -a1  ]
        [ 0  ...   0          1   ],
with inverse
  C_c^{-1} = W = [ 1  a1  ...  a_{n-1} ]
                 [ 0  1   ...  a_{n-2} ]
                 [        ...          ]
                 [ 0  ...  0    1      ].
So, since T = C C_c^{-1} = C W,
  K = K_c T^{-1} = [ (alpha1 - a1)  (alpha2 - a2)  ...  (alphan - an) ] (C W)^{-1}.
Ackermann's Formula
We need to know the a_i to use the Bass-Gura formula. Ackermann's method may require less work.
Consider (a system already in controller canonical form, for now):
  chi(A_c) = A_c^n + a1 A_c^{n-1} + ... + an I = 0
by Cayley-Hamilton. Also,
  chi_d(A_c) = A_c^n + alpha1 A_c^{n-1} + ... + alphan I
             = (alpha1 - a1) A_c^{n-1} + ... + (alphan - an) I,
so that
  [ 0  ...  0  1 ] chi_d(A_c) = [ (alpha1 - a1)  (alpha2 - a2)  ...  (alphan - an) ] = K_c.
Therefore,
  K = K_c T^{-1} = [ 0  ...  0  1 ] chi_d(T^{-1} A T) T^{-1}
                 = [ 0  ...  0  1 ] T^{-1} chi_d(A)
                 = [ 0  ...  0  1 ] C_c C^{-1} chi_d(A)
                 = [ 0  ...  0  1 ] C^{-1} chi_d(A).
In Matlab, place.m implements pole placement, and polyvalm.m may be used to compute chi_d(A).
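A minimal sketch of Ackermann's formula, checked on the earlier example (where K = [14 57]):

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula: K = [0 ... 0 1] C^{-1} chi_d(A)."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    chi = np.poly(poles)                 # desired characteristic polynomial
    phi = np.zeros_like(A)               # evaluate chi_d(A) by Horner's rule
    for c in chi:
        phi = phi @ A + c * np.eye(n)
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.solve(ctrb, phi)

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
K = acker(A, B, [-5.0, -6.0])
print(K)   # ~[[14, 57]]
```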
[Block diagram: plant integrator loop (xdot -> 1/s -> x) with matrices A, B, C, D and state feedback u = -K x.]
Some Comments
The eigenvalues associated with uncontrollable modes are fixed (don't change) under state feedback, but those associated with controllable modes can be arbitrarily assigned.
FACT:
FACT:
FACT:
FACT:
Reference Input
So far, we have looked at how to pick K to get the homogeneous dynamics that we want:
  eig(A_CL) -- fast/slow/real poles ...
How does this improve our ability to track a reference?
EXAMPLE:
  A = [ 1  1 ]       [ 1 ]
      [ 1  2 ],  B = [ 0 ],  K = [ 14  57 ].
Let C = [ 1  0 ]. Then
  Y(s)/R(s) = C(sI - A + B K)^{-1} B
            = [ 1  0 ] [ s + 13  56    ]^{-1} [ 1 ]
                       [ -1      s - 2 ]      [ 0 ]
            = [ 1  0 ] (1/(s^2 + 11s - 26 + 56)) [ s - 2  -56    ] [ 1 ]
                                                 [ 1      s + 13 ] [ 0 ]
            = (s - 2)/(s^2 + 11s + 30).
The DC gain is -2/30, not 1!
[Step-response plot: the closed-loop output settles near -2/30 ~ -0.067, not 1.]
OBSERVATION: In steady state,
  u_ss = N_u r_ss   (1 x 1)
  x_ss = N_x r_ss   (n x 1),
where
  xdot(t) = 0 = A x_ss + B u_ss
  y(t) = r_ss = C x_ss + D u_ss,
or
  [ A  B ] [ N_x ]   [ 0 ]
  [ C  D ] [ N_u ] = [ I ].
In steady state we want
  (u(t) - u_ss) = -K(x(t) - x_ss),
which is achieved by the control signal
  u(t) = N_u r(t) - K(x(t) - N_x r(t))
       = -K x(t) + (N_u + K N_x) r(t)
       = -K x(t) + Nbar r(t).
Nbar is computed without knowing r(t). It works for any r(t).
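A sketch of the N_x, N_u computation for the running example; the final check confirms that the closed-loop DC gain becomes exactly 1:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[14.0, 57.0]])

# Solve [A B; C D][Nx; Nu] = [0; 0; 1]
M = np.block([[A, B], [C, D]])
sol = np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))
Nx, Nu = sol[:2], sol[2]
Nbar = Nu + (K @ Nx).item()

# DC gain of y = C(sI - A + BK)^{-1} B Nbar r, evaluated at s = 0
dc = (C @ np.linalg.solve(-(A - B @ K), B) * Nbar).item()
print(Nbar, dc)   # dc = 1.0
```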
For our example, N_x = [ 1; -1/2 ] and N_u = -1/2, so Nbar = N_u + K N_x = -15.
New equations:
  xdot(t) = A x(t) + B(N_u + K N_x) r(t) - B K x(t)
          = (A - B K) x(t) + B Nbar r(t)
  y(t) = C x(t).
Therefore,
  Y(s)/R(s)|_new = Nbar * Y(s)/R(s)|_old = (-15s + 30)/(s^2 + 11s + 30).
[Block diagrams: (top) feedforward N_u plus state feedback K around the plant {A, B, C, D}; (bottom) equivalent structure feeding forward the state reference N_x r(t).]
[Step-response plot: with Nbar, the output now settles at 1.]
Pole Placement
Classical question: Where do we place the closed-loop poles?
Dominant second-order behavior, just as before.
Assume dominant behavior given by the roots of
  s^2 + 2 zeta omega_n s + omega_n^2 = 0,  i.e.,  s = -zeta omega_n +/- j omega_n sqrt(1 - zeta^2).
Put the other poles so that the time response is much faster than this dominant behavior, and place them so that they are sufficiently damped:
  real part <= -4 zeta omega_n.
[Plots: step responses for prototype (Bessel/ITAE-style) pole locations versus time.]
PROCEDURE :
[Table: prototype pole locations for orders 1-6 -- normalized locations (e.g., 1.000; 0.707 +/- 0.707j; ...) and the corresponding locations scaled for a one-second settling time (e.g., 4.620; 4.660 +/- 4.660j; ...).]
[Bode magnitude plot versus omega (rads/sec.).]
EXAMPLE:
  G(s) = 1/(s(s + 1)(s + 4)) = 1/(s^3 + 5s^2 + 4s),
  A = [ -5  -4  0 ]       [ 1 ]
      [  1   0  0 ],  B = [ 0 ],  C = [ 0  0  1 ].
      [  0   1  0 ]       [ 0 ]
  chi_d(s) = s^3 + 6.473 s^2 + 17.456 s + 18.827.
  K = [ 1.473  13.456  18.827 ].
[Step-response plot: poles are in place, but the output settles near 1/18.827 ~ 0.053.]
Poles are now in the right place, but poor steady-state error to a step input. Use the reference-tracking method from before:
  [ -5  -4  0  1 ]           [ 0 ]
  [  1   0  0  0 ] [ N_x ]   [ 0 ]
  [  0   1  0  0 ] [ N_u ] = [ 0 ],
  [  0   0  1  0 ]           [ 1 ]
or
  N_x = [ 0; 0; 1 ],  N_u = 0,
or
  Nbar = N_u + K N_x = 18.827.
This fixes our step response, but we cannot use a similar strategy to improve ramp responses, etc. Recall from ECE4510 that we need integrators in the open-loop system to increase the system type.
Integral Control for Continuous-Time Systems
In many practical designs, integral control is needed to counteract
disturbances, plant variations, or other noises in the system.
Up until now, we have not seen a design that has integral action. In fact, state-space designs will NOT produce integral action unless we take special steps to include it!
How do we introduce integral control? We augment our system with one or more integrators:
[Block diagram: integrator K_I/s acting on r(t) - y(t), augmenting the state-feedback loop around the plant {A, B, C, D}; augmented state x_I(t).]
  [ xdot_I(t) ]   [ 0  -C ] [ x_I(t) ]   [ 0 ]        [ I ]
  [ xdot(t)   ] = [ 0   A ] [ x(t)   ] + [ B ] u(t) + [ 0 ] r(t)
  u(t) = -[ K_I  K ] [ x_I(t); x(t) ] + K N_x r(t).
For the example,
  [ xdot_I(t) ]   [ 0   0   0  -1 ] [ x_I(t) ]   [ 0 ]        [ 1 ]
  [ xdot(t)   ] = [ 0  -5  -4   0 ] [ x(t)   ] + [ 1 ] u(t) + [ 0 ] r(t)
                  [ 0   1   0   0 ]              [ 0 ]        [ 0 ]
                  [ 0   0   1   0 ]              [ 0 ]        [ 0 ]
  y(t) = [ 0  0  0  1 ] [ x_I(t); x(t) ].
[Step-response plot: with integral action, the output settles at 1.]
Consider
  x[k+1] = [ 1  0 ]        [ 1 ]
           [ 2  2 ] x[k] + [ 0 ] u[k].
...
  (A - B K)^2 = [ 0  0 ]
                [ 0  0 ],
as claimed.
[Block diagram: discrete-time integral control with integrator K_I z/(z - 1) augmenting the state-feedback loop around the plant {A, B, C, D}.]
  [ x_I[k+1] ]   [ 1  -C ] [ x_I[k] ]   [ 0 ]        [ I ]
  [ x[k+1]   ] = [ 0   A ] [ x[k]   ] + [ B ] u[k] + [ 0 ] r[k]
  u[k] = -[ K_I  K ] [ x_I[k]; x[k] ] + K N_x r[k].
PROCEDURE :
2. Find the nth-order poles from the table of constant settling time,
and divide pole locations by ts .
3. Convert s-plane locations to z-plane locations using z = e sT .
IDEA: Open-loop estimator.
[Block diagram: the plant {A, B} (with disturbance w(t)) and a model copy {A, B} are driven by the same u(t); the model produces xhat(t) and yhat(t).]
We want xtilde(t) = x(t) - xhat(t) -> 0.
For our estimator,
  xtildedot(t) = xdot(t) - xhatdot(t)
               = A x(t) + B u(t) - A xhat(t) - B u(t)
               = A xtilde(t).
So,
  xtilde(t) = e^{At} xtilde(0).
Hence, xhat(t) -> x(t) if A is stable!
(This is not too impressive, though, since xhat(t) -> x(t) only because both xhat(t) and x(t) go to zero.)
Closed-Loop Estimator
[Block diagram: as above, but the model copy is corrected by L(y(t) - yhat(t)).]
Let's look at the error:
  xtildedot(t) = xdot(t) - xhatdot(t)
               = A xtilde(t) - L(C x(t) - C xhat(t))
               = (A - L C) xtilde(t),
or, xhat(t) -> x(t) if A - L C is stable, for any value of xhat(0).
u
1
xhat
yhat
C
K
A
K
D
626
In observer canonical form,
  T^{-1} A T = A_o = [ -a1  1  0  ...  0 ]
                     [ -a2  0  1  ...  0 ]
                     [          ...   1  ]
                     [ -an  0  0  ...  0 ]
and C T = C_o = [ 1  0  ...  0 ], so
  L_o C_o = [ l1  0  ...  0 ]
            [ l2  0  ...  0 ]
            [      ...      ]
            [ ln  0  ...  0 ].
Useful because the characteristic equation is obvious:
  A_o - L_o C_o = [ -(a1 + l1)  1  ...  0 ]
                  [ -(a2 + l2)  0  ...  0 ]
                  [             ...    1  ]
                  [ -(an + ln)  0  ...  0 ].
The observability matrix of the observer form is lower-triangular "1-Toeplitz," with
  O_o^{-1} = [ 1        0    ...  0 ]
             [ a1       1    ...  0 ]
             [     ...            ]
             [ a_{n-1}  ...  a1  1 ].
So, dual to Bass-Gura,
  L = O^{-1} O_o [ (alpha1 - a1); (alpha2 - a2); ...; (alphan - an) ]
    = O^{-1} [ 1        0    ...  0 ]^{-1} [ (alpha1 - a1) ]
             [ a1       1    ...  0 ]      [ (alpha2 - a2) ]
             [     ...            ]      [      ...       ]
             [ a_{n-1}  ...  a1  1 ]      [ (alphan - an) ].
By duality (A <-> A^T, B <-> C^T),
  L = [ [ 0  ...  0  1 ] C(A^T, C^T)^{-1} chi_d(A^T) ]^T
    = chi_d(A) O(C, A)^{-1} [ 0; ...; 0; 1 ]
    = chi_d(A) [ C; C A; ...; C A^{n-1} ]^{-1} [ 0; ...; 0; 1 ].
In Matlab,
  L=acker(A',C',poles)';
  L=place(A',C',poles)';
[Block diagram: prediction estimator -- the plant {A, B} (with disturbance w(t)) and a model copy corrected by L_p(y[k] - yhat[k]) producing xhat_p[k].]
EXAMPLE: Let G(s) = 1/s^2 and measure y[k]. Let
  x[k] = [ y[k]; ydot[k] ],
then
  A = [ 1  T ]
      [ 0  1 ],  C = [ 1  0 ].
We desire xhat[k] -> x[k], with
  det(zI - A + L_p C) = z^2 + z(l1 - 2) + l2 T - l1 + 1.
Matching z^2 - 1.6z + 0.68:
  l1 - 2 = -1.6
  l2 T - l1 + 1 = 0.68,
or
  L_p = [ 0.4; 0.08/T ].
The estimator is
  xhat_p[k+1] = A xhat_p[k] + B u[k] + L_p (y[k] - C xhat_p[k])
              = (A - L_p C) xhat_p[k] + B u[k] + L_p y[k],
or
  [ yhat_p[k+1]    ]   [ 0.6      T ] [ yhat_p[k]    ]   [ T^2/2 ]        [ 0.4    ]
  [ ydothat_p[k+1] ] = [ -0.08/T  1 ] [ ydothat_p[k] ] + [ T     ] u[k] + [ 0.08/T ] y[k].
In general, we can arbitrarily select the prediction estimator poles iff {C, A} is observable.
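The estimator gain can be verified numerically -- the eigenvalues of A - L_p C should be the roots of z^2 - 1.6z + 0.68:

```python
import numpy as np

T = 1.4
A = np.array([[1.0, T], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Lp = np.array([[0.4], [0.08 / T]])

eigs = np.linalg.eigvals(A - Lp @ C)
print(np.real(np.poly(eigs)))            # ~[1, -1.6, 0.68]
print(np.roots([1, -1.6, 0.68]))         # 0.8 +/- 0.2j
```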
[Block diagram: plant xdot(t) = A x(t) + B u(t) with disturbance w(t); estimator xhatdot(t) = A xhat(t) + B u(t) + L(y(t) - yhat(t)); feedback u(t) = -K xhat(t). Equivalently, a compensator D(s) in a loop with G(s), driven by r(t).]
so that we have
  xdot(t) = A x(t) - B K xhat(t)
  xhatdot(t) = (A - B K) xhat(t) + L(C x(t) + D u(t) - C xhat(t) - D u(t))
             = (A - B K - L C) xhat(t) + L C x(t),
or
  [ xdot(t)    ]   [ A    -B K          ] [ x(t)    ]
  [ xhatdot(t) ] = [ L C  A - B K - L C ] [ xhat(t) ].
Let xtilde(t) = x(t) - xhat(t):
  xtildedot(t) = A x(t) - B K xhat(t) - L C x(t) - (A - B K - L C) xhat(t)
               = (A - L C) xtilde(t)
  xdot(t) = A x(t) - B K (x(t) - xtilde(t)),
or,
  [ xdot(t)      ]   [ A - B K  B K     ] [ x(t)      ]         [ I   0 ]
  [ xtildedot(t) ] = [ 0        A - L C ] [ xtilde(t) ],   T =  [ I  -I ].
The closed-loop poles are the eigenvalues of A - B K together with those of A - L C (separation).
so that we have
  D(s) = U(s)/Y(s) = -K(sI - A + B K + L C - L D K)^{-1} L.
[Block diagram: discrete compensator -- y[k] enters through L_p into the state matrix A - B K - L_p C + L_p D K; the state xhat_p[k] drives u[k] = -K xhat_p[k].]
Note that the control signal u[k] only contains information about y[k-1], y[k-2], ..., not about y[k].
So, our compensator is not taking advantage of the most current measurement. (More on this later.)
In Matlab, inside the control loop:
  u=-K*xhatp;
EXAMPLE: G(s) = 1/s^2; T = 1.4 seconds. System description:
  A = [ 1  1.4 ]       [ 0.98 ]
      [ 0  1   ],  B = [ 1.4  ],  C = [ 1  0 ],  D = [0].
Control design: find K such that det(zI - A + B K) = z^2 - 1.6z + 0.7.
This leads to K = [ 0.05  0.25 ].
Estimator design: det(zI - A + L_p C) = z^2 - 1.28z + 0.45. This leads to
  L_p = [ 0.72; 0.12 ].
So, our compensator is
  xhat_p[k+1] = [ 0.23   1.16 ] xhat_p[k] + [ 0.72 ] y[k]
                [ -0.19  0.65 ]             [ 0.12 ]
  u[k] = -[ 0.05  0.25 ] xhat_p[k],
or
  D(z) = -0.066 (z - 0.87)/((z - 0.44)^2 + 0.423^2)
       = -0.066 (z - 0.87)/(z^2 - 0.88z + 0.374).
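The compensator numbers above can be reproduced from the design data (B = [T^2/2; T] for the sampled double integrator):

```python
import numpy as np

T = 1.4
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[T**2 / 2], [T]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.05, 0.25]])
Lp = np.array([[0.72], [0.12]])

# Control and estimator characteristic polynomials
print(np.real(np.poly(np.linalg.eigvals(A - B @ K))))    # ~[1, -1.6, 0.7]
print(np.real(np.poly(np.linalg.eigvals(A - Lp @ C))))   # ~[1, -1.28, 0.45]

# Compensator state matrix
M = A - B @ K - Lp @ C
print(M)   # ~[[0.23, 1.16], [-0.19, 0.65]]
```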
Let's see the root locus of D(z)G(z). (Note: since D(z) has a negative sign, we must use the 0-degree locus.)
[Root-locus plot of D(z)G(z).]
[Plots: simulated y[k] and u[k]; estimator error dynamics; x and xhat_p versus time.]
[Timeline: between samples, predict xhat_p[k+1] from xhat_c[k] and u[k]; at sample k+1, tune up the estimate from y[k+1] to get xhat_c[k+1].]
Implementation:
Time update: predict the new state from the old state estimate and the system dynamics:
  xhat_p[k] = A xhat_c[k-1] + B u[k-1].
Measurement update: measure the output and use it to update/correct the estimate:
  xhat_c[k] = xhat_p[k] + L_c (y[k] - C xhat_p[k]).
Note: this works well for multi-rate systems and for systems where samples are sometimes missed.
Questions: How does xhat_c[k] relate to xhat_p[k]? How are L_c and L_p related?
What is the xhat_c[k] recursion relation?
  xhat_c[k+1] = xhat_p[k+1] + L_c (y[k+1] - C xhat_p[k+1])      [parenthesized term = error]
              = (I - L_c C) xhat_p[k+1] + L_c y[k+1]
              = (I - L_c C)(A xhat_c[k] + B u[k]) + L_c y[k+1].
So, the current-estimate error obeys
  xtilde_c[k+1] = x[k+1] - xhat_p[k+1] - L_c C x[k+1] + L_c C xhat_p[k+1]
                = (I - L_c C)(x[k+1] - xhat_p[k+1])
                = (I - L_c C)(A x[k] + B u[k] - A xhat_c[k] - B u[k])
                = (A - L_c C A) xtilde_c[k].      <- note the new "A"!
Also,
  xhat_p[k+1] = A xhat_p[k] + B u[k] + A L_c (y[k] - C xhat_p[k]),
so
  xtilde_p[k+1] = (A - A L_c C) xtilde_p[k].
So, in summary,
  xtilde_p = x - xhat_p,   xtilde_p[k+1] = (A - A L_c C) xtilde_p[k]
  xtilde_c = x - xhat_c,   xtilde_c[k+1] = (A - L_c C A) xtilde_c[k].
These estimate errors have the same poles. They represent the dynamics of the block diagrams:
[Block diagrams: prediction estimator (gain L_p) and current estimator (gain L_c), each driven by u[k] and y[k].]
Design of L_c:
1. Relate coefficients of det(zI - A + L_c C A) = chi_ob,des(z).
2. Use Ackermann's formula. For the prediction estimator ({C, A}),
  L_p = chi_ob,des(A) [ C; C A; ...; C A^{n-1} ]^{-1} [ 0; ...; 0; 1 ];
replace C -> C A to find L_c ({C A, A}):
  L_c = chi_ob,des(A) [ C A; C A^2; ...; C A^n ]^{-1} [ 0; ...; 0; 1 ].
In Matlab,
  Lc=acker(A',(C*A)',poles)';
  Lc=place(A',(C*A)',poles)';
3. Find L_p and then L_c = A^{-1} L_p.
Compensator Design using the Current Estimator
Plant equations:
  x[k+1] = A x[k] + B u[k]
  y[k] = C x[k].
Estimator equations:
  xhat_p[k+1] = A xhat_c[k] + B u[k]
  xhat_c[k] = xhat_p[k] + L_c (y[k] - C xhat_p[k]).
Control:
  u[k] = -K xhat_c[k].
Therefore, we have
  x[k+1] = A x[k] - B K xhat_c[k]
  xhat_c[k+1] = (I - L_c C) A xhat_c[k] + (I - L_c C) B u[k] + L_c y[k+1]
              = (I - L_c C)(A - B K) xhat_c[k] + L_c y[k+1].
In terms of xtilde[k+1] = x[k+1] - xhat_c[k+1],
  [ x[k+1]      ]   [ A - B K  B K         ] [ x[k]      ]
  [ xtilde[k+1] ] = [ 0        A - L_c C A ] [ xtilde[k] ].
Therefore
  u[k] = -K xhat_c[k]
also works!
so
  D(z) = U(z)/Y(z) = -K(zI - (I - L_c C)(A - B K))^{-1} L_c z,
or
  D(z) = -K(zI - A + B K + L_c C A - L_c C B K)^{-1} L_c z.
Block diagram:
[Block diagram: y[k] -> L_c -> loop through z^{-1} with feedback (I - L_c C)(A - B K) -> gain -K -> u[k].]
Alternatively,
  u[k] = -K(I - L_c C) xhat_p[k] - K L_c y[k]
  xhat_p[k+1] = Abar xhat_p[k] + Bbar y[k],
where
  Abar = (A - B K)(I - L_c C)
  Bbar = (A - B K) L_c
  Cbar = -K(I - L_c C)
  Dbar = -K L_c,
or
  D(z) = -K L_c - K(I - L_c C)(zI - (A - B K)(I - L_c C))^{-1}(A - B K) L_c,
which you can verify is equivalent to the previous expression.
New block diagram:
[Block diagram: y[k] feeds (A - B K)L_c into the z^{-1} loop with (A - B K)(I - L_c C); the state passes through -K(I - L_c C) and is summed with the direct path -K L_c y[k] to give u[k].]
IMPLEMENTATION METHOD 1: (not good)
  xhatp=xhatpnew
  A2D(y)
  xhatc=xhatp+Lc*(y-C*xhatp)
  u=-K*xhatc
  D2A(u)
  xhatpnew=A*xhatc+B*u
IMPLEMENTATION METHOD 2: (good)
  xhatp=xhatpnew
  upartial=-K*(I-Lc*C)*xhatp
  A2D(y)
  u=upartial-K*Lc*y
  D2A(u)
  xhatpnew=A*xhatp+B*u+A*Lc*(y-C*xhatp)
(Method 2 minimizes the computational delay between reading y and writing u.)
EXAMPLE: G(s) = 1/s^2; T = 1.4 seconds. System description:
  A = [ 1  1.4 ]       [ 0.98 ]
      [ 0  1   ],  B = [ 1.4  ],  C = [ 1  0 ].
Pick closed-loop poles as we did for the prediction estimator:
  chi_des(z) = z^2 - 1.6z + 0.7
  chi_ob,des(z) = z^2 - 1.28z + 0.45.
Control design: K = [ 0.05  0.25 ].
Estimator design:
  L_c = A^{-1} L_p = A^{-1} [ 0.72; 0.12 ] = [ 0.55; 0.12 ].
  D(z) = -0.06 z(z - 0.85)/((z - 0.47)^2 + 0.31^2)
       = -0.06 z(z - 0.85)/(z^2 - 0.94z + 0.316).
[Root-locus plot of D(z)G(z), and simulation plots: error dynamics, x and xhat_p versus time.]
Reduced-Order Estimator
Why construct the entire state vector when you are directly measuring a state? If there is little noise in your sensor, you get a great estimate by just letting
  xhat_1 = y    (C = [ 1  0  ...  0 ]).
If there is noise in the measurement,
  y[k] = C x[k] + v,   v = noise.
Partition the state so that y = C x = x_a:
  [ x_a[k+1] ]   [ A_aa  A_ab ] [ x_a[k] ]   [ B_a ]
  [ x_b[k+1] ] = [ A_ba  A_bb ] [ x_b[k] ] + [ B_b ] u[k]
  y[k] = [ I  0 ] [ x_a[k]; x_b[k] ].
Then make the substitutions
  A -> A_bb;   B u[k] -> A_ba x_a[k] + B_b u[k] = B_r u_r[k];   C -> A_ab;
  y[k] -> z[k] = x_a[k+1] - A_aa x_a[k] - B_a u[k].
Reduced-order estimator:
  xhat_b[k+1] = A_bb xhat_b[k] + B_r u_r[k] + L_r (z[k] - A_ab xhat_b[k])
              = A_bb xhat_b[k] + A_ba x_a[k] + B_b u[k]
                + L_r (x_a[k+1] - A_aa x_a[k] - B_a u[k] - A_ab xhat_b[k]).
By Ackermann,
  L_r = chi_r,des(A_bb) [ A_ab; A_ab A_bb; ...; A_ab A_bb^{n-2} ]^{-1} [ 0; ...; 0; 1 ].
In Matlab,
  Lr=acker(Abb',Aab',poles)';
  Lr=place(Abb',Aab',poles)';
Equivalently, design a current estimator with a compensator at the origin with
  L_c = [ I; L_r ],   C = [ I  0 ].
(homework problem?)
Reduced-Order Compensator
Control law: u[k] = -K_a x_a[k] - K_b xhat_b[k].
Plant: x[k+1] = A x[k] + B u[k], y[k] = C x[k].
Estimator: xhat_b[k+1] = A_bb xhat_b[k] + A_ba x_a[k] + B_b u[k] + L_r( ... ), as above,
using which
  [ x[k+1]      ]   [ A - B K_a C                                -B K_b                    ] [ x[k]      ]
  [ xhat_b[k+1] ] = [ L_r C A + A_ba C - B_b K_a C - L_r A_aa C   A_bb - B_b K_b - L_r A_ab ] [ xhat_b[k] ].
What is D(z)?
  xhat_b[k+1] = (A_bb - B_b K_b + L_r B_a K_b - L_r A_ab) xhat_b[k]
                + (A_ba + L_r B_a K_a - L_r A_aa - B_b K_a) y[k]
                + L_r y[k+1]
              = Abar xhat_b[k] + Bbar y[k] + L_r y[k+1],
and
  D(z) = U(z)/Y(z) = -K_a - K_b (zI - Abar)^{-1} (Bbar + z L_r).
EXAMPLE: G(s) = 1/s^2; T = 1.4 seconds. System description:
  A = [ 1  1.4 ]       [ 0.98 ]
      [ 0  1   ],  B = [ 1.4  ],  C = [ 1  0 ],
where
  A = [ A_aa  A_ab ]
      [ A_ba  A_bb ],
so A_aa = 1, A_ab = T = 1.4, A_ba = 0, A_bb = 1; B_a = T^2/2 = 0.98, B_b = T = 1.4.
Reduced-order estimator:
  chi_r(z) = det(zI - A_bb + L_r A_ab) = z - 1 + L_r T.
Placing the estimator pole at the origin implies
  L_r = 1/T = 0.714.
This implies
  vhat[k+1] = vhat[k] + T u[k] + (1/T)( y[k+1] - y[k] - (T^2/2) u[k] - T vhat[k] )
            = ( y[k+1] - y[k] )/1.4 + 0.7 u[k].
Control:
  u[k] = -(1/20) y[k] - (1/4) vhat[k],
so
  vhat[k+1] = -(T/8) vhat[k] + (1/T)( y[k+1] - y[k] ) - (T/40) y[k]
  u[k] = -(1/4) vhat[k] - (1/20) y[k].
Taking the transfer function:
  D(z) = U(z)/Y(z) = -(1/20)(1 + 5/T)(z - z_0)/(z + T/8),
or, with T = 1.4,
  D(z) = -0.229 (z - 0.781)/(z + 0.175).
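The D(z) coefficients follow by eliminating vhat[k] from the two recursions above; the arithmetic can be spot-checked directly:

```python
# Eliminating vhat from
#   vhat[k+1] = -(T/8) vhat[k] + (y[k+1]-y[k])/T - (T/40) y[k]
#   u[k]      = -(1/4) vhat[k] - (1/20) y[k]
# gives D(z) = U(z)/Y(z) = -g (z - z0)/(z - p):
T = 1.4
g = 0.05 + 0.25 / T                                   # numerator gain
z0 = (0.25 / T + 0.25 * T / 40 - 0.05 * T / 8) / g    # compensator zero
p = -T / 8                                            # compensator pole
print(round(g, 3), round(z0, 3), round(p, 3))         # 0.229 0.781 -0.175
```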
Root locus and transient response:
[Root-locus plot and transient-response plot: x and xhat versus time.]
  chi_cl = chi_des * chi_ob,des.
[Plots: slower response versus smaller intermediate states and smaller control effort.]
If v[k] = 0 then (...) is small.
If A - L_p C is slow:
1. Place poles two to six times faster than controller poles and in
well-damped locations. This will limit the estimator influence on
output response.
2. If the sensor noise is too big, the estimator poles can be placed as
much as two times slower than controller poles.
Notes about (2):
Controller may need to be redesigned since the estimator will
strongly influence transient behavior.
These slow estimator dynamics will not be excited by our reference command if the reference signal is properly included!
MIMO Control Design
So far, we have only discussed control design for SISO systems.
FACT:
FACT:
A matrix is cyclic if it has one and only one Jordan block associated
with each distinct eigenvalue. Note, this does not imply that all
eigenvalues are distinct. (Although, if all eigenvalues are distinct, the
matrix is cyclic).
If the n-dimensional p-input pair {A, B} is controllable and if A is cyclic, then for almost any p x 1 vector v, the single-input pair {A, Bv} is controllable.
Controllability is invariant under any equivalence transformation; thus, we may assume A to be in Jordan form. For example,
  A = [ 2 1 0 0 0 ]       [ 0 1 ]
      [ 0 2 1 0 0 ]       [ 0 0 ]
      [ 0 0 2 0 0 ],  B = [ 1 2 ];   B v = B [ v1 ]
      [ 0 0 0 1 1 ]       [ 4 3 ]            [ v2 ].
      [ 0 0 0 0 1 ]       [ 1 0 ]
Design method:
1. Randomly choose a vector v. Test for controllability of {A, Bv}.
Repeat until controllable.
2. The multi-input system {A, B, C, D} has been reduced to a single-input system by stating that u(t) = v u'(t). The new system is {A, Bv, C, D} with input u'(t) and output y(t). Use single-input design methods such as Bass-Gura or Ackermann to find k to place the poles of the single-input system. Then, the overall state feedback is u(t) = -v k x(t), or K = v k.
What if A is not cyclic? It can be shown that if {A, B} is controllable, then for almost any p x n real constant matrix K the matrix (A - B K) has only distinct eigenvalues, and is therefore cyclic. Randomly choose K matrices until (A - B K) is cyclic. Then, design for this system.
Use small random numbers for low control effort.
DESIGN METHOD SUMMARY:
...or
  A - B K = T F T^{-1}.
(mostly blank)
(mostly blank)
(mostly blank)
Consider
  xdot(t) = x(t) - u(t).
[Plot: amplitude of the response versus time.]
As k -> infinity (i.e., large), u(t) looks more and more like delta(t), the input we found earlier which (instantaneously) moves x(t) to 0! To see this, note that for k large,
  int u(t) dt = (-k)/(-k - 1) ~ 1.
Now consider the discrete-time cost
  J = sum_{k=0}^{infinity} ( x^T[k] x[k] + rho u^2[k] ).
We will find the K such that u[k] = -K x[k] minimizes this cost.
We make rho large if we don't want large inputs (high cost of control);
we make rho small if we want fast response and don't mind large inputs (cheap control).
EXAMPLE: For the scalar system x[k+1] = x[k] + u[k],
  J = sum_{k=0}^{infinity} ( x[k]^2 + rho u[k]^2 )
    = x[0]^2 (1 + rho K^2)/(1 - (1 - K)^2),   0 < K < 2, x[0] != 0;
    = infinity, otherwise.
Define
  p = (1 + rho K^2)/(K(2 - K)).
Setting dp/dK = 0 gives
  rho K^2 + K - 1 = 0.
So,
  K_opt = (-1 + sqrt(1 + 4 rho))/(2 rho)
        = (sqrt(1 + 4 rho) - 1)(sqrt(1 + 4 rho) + 1)/(2 rho (sqrt(1 + 4 rho) + 1))
        = 2/(1 + sqrt(1 + 4 rho)),
and
  lim_{rho -> 0} K_opt = 1,
which is deadbeat control: the closed-loop eigenvalue is at 0.
For high-cost (expensive) control, let rho -> infinity; then K_opt -> 1/sqrt(rho) -> 0, and the closed-loop eigenvalue 1 - K_opt approaches 1, which is < 1 (just barely stable).
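The closed-form K_opt can be sanity-checked against a brute-force minimization of the cost expression:

```python
import numpy as np

# Scalar example x[k+1] = x[k] + u[k], J = sum x^2 + rho u^2, u = -K x.
rho = 1.0
p = lambda K: (1 + rho * K**2) / (K * (2 - K))   # cost per unit x[0]^2

Ks = np.linspace(0.01, 1.99, 100000)
K_num = Ks[np.argmin(p(Ks))]
K_opt = (-1 + np.sqrt(1 + 4 * rho)) / (2 * rho)
print(K_num, K_opt)    # both ~0.618
```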
Dynamic Programming: Bellmans Principle of Optimality
We will want to minimize a more general cost function
J minimization is a topic in optimization theory.
We will use a tool called dynamic programming.
Consider the task of finding the lowest-cost route from point x_o to x_f, where there are many possible ways to get there.
[Diagram: candidate routes with segment costs J12, J15, J23, J24, J36, J46, J47, J38, J58, J68, J78.]
Then...
Quadratic Forms
In the cost function, the state-weighting terms are quadratic forms x^T[m] Q x[m].
EXAMPLE:
  J = sum_{m=0}^{infinity} ( x^T[m] [ 10  0   ] x[m] + u[m]^2 ).
                                    [ 0   0.1 ]
Vector Derivatives
In the following discussion we will often need to take derivatives of vector/matrix quantities.
This small dictionary should help: a(x) and b(x) are m x 1 vector functions of the vector x, y is some other vector, and A is some matrix.
1. d/dx ( a^T(x) b(x) ) = (d a(x)/dx)^T b(x) + (d b(x)/dx)^T a(x)
2. d/dx ( x^T y ) = y
3. d/dx ( x^T x ) = 2x
4. d/dx ( x^T A y ) = A y
5. d/dx ( y^T A x ) = A^T y
6. d/dx ( x^T A x ) = (A + A^T) x
7. d/dx ( a^T(x) Q a(x) ) = 2 (d a(x)/dx)^T Q a(x), where Q is a symmetric matrix.
This brings us back to our problem...
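Identity 6 (the one used most below) is easy to spot-check numerically with a central finite difference on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

f = lambda x: x @ A @ x          # quadratic form x^T A x
eps = 1e-6
grad_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
grad_an = (A + A.T) @ x          # identity 6
print(np.max(np.abs(grad_fd - grad_an)))   # tiny
```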
The Discrete-Time Linear Quadratic Regulator Problem
Most generally, the discrete-time LQR problem is posed as minimizing
  J_{i,N} = x^T[N] P x[N] + sum_{k=i}^{N-1} ( x^T[k] Q x[k] + u^T[k] R u[k] ).
To find the optimum u[k], we start at the last step and work backwards.
  J_{N-1,N} = x^T[N] P x[N] + x^T[N-1] Q x[N-1] + u^T[N-1] R u[N-1].
We express x[N] as a function of x[N-1] and u[N-1] via the system dynamics:
  J_{N-1,N} = (A x[N-1] + B u[N-1])^T P (A x[N-1] + B u[N-1])
              + x^T[N-1] Q x[N-1] + u^T[N-1] R u[N-1].
Minimizing with respect to u[N-1] (using the vector-derivative identities),
  u*[N-1] = -( R + B^T P B )^{-1} B^T P A x[N-1].
The exciting point is that the optimal u[N-1], with no constraints on its functional form, turns out to be a linear state feedback! To ease notation, define
  K_{N-1} = ( R + B^T P B )^{-1} B^T P A
78
such that
u [N 1] = K N 1 x[N 1].
= x T [N 1] (A B K N 1)T P(A B K N 1) + Q
+K NT 1 R K N 1 x[N 1].
Simplify notation once again by defining
PN 1 = (A B K N 1)T P(A B K N 1) + Q + K NT 1 R K N 1,
so that
Now, we take another step backwards and compute the cost J N 2,N
JN 2,N = JN 2,N 1 + JN 1,N .
Therefore, the optimal policy (via dynamic programming) is
JN 2,N = JN 2,N 1 + JN 1,N .
79
  K_{N-2} = ( R + B^T P_{N-1} B )^{-1} B^T P_{N-1} A.
In general,
  u*[k] = -K_k x[k],
where
  K_k = ( R + B^T P_{k+1} B )^{-1} B^T P_{k+1} A
and
  P_k = (A - B K_k)^T P_{k+1} (A - B K_k) + Q + K_k^T R K_k.
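The backward recursion is a few lines of code. Run on the scalar example x[k+1] = x[k] + u[k] with Q = 1, R = rho = 1, the gain converges to the K_opt = 2/(1 + sqrt(5)) ~ 0.618 found earlier:

```python
import numpy as np

A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])

P = np.zeros((1, 1))          # terminal cost P_N = 0
for _ in range(200):          # iterate backward to steady state
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = (A - B @ K).T @ P @ (A - B @ K) + Q + K.T @ R @ K
print(K, P)    # K ~ 0.618, P ~ 1.618
```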
EXAMPLE: R = [2].
[Plots: feedback gains k1, k2 and Riccati entries P11, P12 = P21, P22 versus time sample k, converging backward in time to steady-state values.]
In steady state,
  K_ss = ( R + B^T P_ss B )^{-1} B^T P_ss A
  P_ss = A^T P_ss A - A^T P_ss B ( R + B^T P_ss B )^{-1} B^T P_ss A + Q.
EXAMPLE:
712
Consider G(z) =
1
so
z1
1
1+
=0
(z 1)(z 1 1)
2 + 1 z z 1 =
z =1+
1
1
+
.
4 2
The locus of optimal pole locations for all form a Reciprocal Root
Locus.
Reciprocal Root Locus in Matlab (SISO)
We want to plot the root locus
  1 + (1/rho) G^T(z^{-1}) G(z) = 0.
[Block diagram: cascade realizing G^T(z^{-1}) G(z) from {A, B, C, D} and {A^T, B^T, C^T, D^T} with delays z^{-1}.]
In Matlab,
  [A,B,C,D]=ssdata(sys);
  ...
  rrlsys=ss(bigA,bigB,bigC,bigD,-1);
  rlocus(rrlsys);
EXAMPLE: Let
  G(z) = (z + 0.25)(z^2 + z + 0.5)/((z - 0.2)(z^2 - 2z + 2)).
[Reciprocal root-locus plot.]
OBSERVATIONS :
RESULTS :
  Pdot(t) = P(t) B R^{-1} B^T P(t) - Q - P(t) A - A^T P(t),
with the boundary condition P(t_f) = P_{t_f}. The differential equation runs backwards in time to find P(t).
If t_f -> infinity, P(t) -> P_ss. Then,
subject to xdot(t) = A x(t) + B u(t). Split the cost at t_o + Dt:
  V(x(t_o), t_o) = min_{u(t)} { int_{t_o}^{t_o + Dt} ( x^T(t) Q x(t) + u^T(t) R u(t) ) dt
                               + V(x(t_o + Dt), t_o + Dt) }.
The minimum cost is the cost to go from x(t_o) to x(t_o + Dt) plus the optimal cost to go from x(t_o + Dt) to x(t_f). The latter part includes the terminal cost.
717
V (xo , to ) = min
u(t)
Z
to +t
to
x T (t)Qx(t) + u T (t)Ru(t) dt
V (x, t)
t
+V (xo, to ) +
t
x o ,to
|
{z
}
Not functions of u(t)
V (x, t)
+
x
x o ,to
[x(to + t) x(to)]
V (x, t)
= V (xo, to ) +
t
t xo ,to
(Z
to +t
T
T
+ min
x (t)Qx(t) + u (t)Ru(t) dt
u(t)
{z
}
| to
[x T (to )Qx(to )+u T (to )Ru(to )]t
V (x, t)
+
x
|[x(to + t)
{z x(to)]} .
x o ,to
[ Ax(to )+Bu(to )]t
718
Canceling V(x_o, t_o) and dividing by Dt,
  0 = (dV(x, t)/dt)|_{x_o, t_o}
      + min_{u(t)} { [ x^T(t_o) Q x(t_o) + u^T(t_o) R u(t_o) ]
                     + (dV(x, t)/dx)|_{x_o, t_o} [ A x(t_o) + B u(t_o) ] },
where the minimized term is the Hamiltonian. Setting its derivative with respect to u(t_o) to zero,
  0 = d/du(t_o) { [ x^T(t_o) Q x(t_o) + u^T(t_o) R u(t_o) ]
                  + (dV(x, t)/dx)|_{x_o, t_o} [ A x(t_o) + B u(t_o) ] }
    = 2 R u(t_o) + B^T ( dV(x, t)/dx )^T|_{x_o, t_o}.
So,
  u*(t_o) = -(1/2) R^{-1} B^T ( dV(x, t)/dx )^T|_{x_o, t_o}.
PROPERTY I:
Let x(t) be the state that corresponds to an input u(t) and an initial condition z. Then,
  x(t) = e^{At} z + int_0^t e^{A(t - tau)} B u(tau) d tau.
Thus, scaling z and u(t) by alpha scales x(t) by alpha, and
  J(alpha z, alpha u, t_o) = alpha^2 J(z, u, t_o)   and   V(alpha z, t_o) = alpha^2 V(z, t_o).
PROPERTY II: Let
  xdot(t) = A x(t) + B u(t),            x_o = z;
  xbardot(t) = A xbar(t) + B ubar(t),   xbar_o = zbar.
Then (x +/- xbar)dot = A(x +/- xbar) + B(u +/- ubar), with initial conditions z +/- zbar, and
  J(z + zbar, u + ubar, t_o) + J(z - zbar, u - ubar, t_o)
    = (x + xbar)^T(t_f) P_{t_f} (x + xbar)(t_f) + (x - xbar)^T(t_f) P_{t_f} (x - xbar)(t_f)
      + int_{t_o}^{t_f} [ (x + xbar)^T Q (x + xbar) + (u + ubar)^T R (u + ubar)
                          + (x - xbar)^T Q (x - xbar) + (u - ubar)^T R (u - ubar) ] dt.
Therefore, by the parallelogram law,
  J(z + zbar, u + ubar, t_o) + J(z - zbar, u - ubar, t_o) = 2 J(z, u, t_o) + 2 J(zbar, ubar, t_o).
PROPERTY III: Minimizing,
  min_{u, ubar} { J(z + zbar, u + ubar, t_o) + J(z - zbar, u - ubar, t_o) } = 2 V(z, t_o) + 2 V(zbar, t_o)
(the RHS), but
  LHS >= V(z + zbar, t_o) + V(z - zbar, t_o).
Conclude
  V(z + zbar, t_o) + V(z - zbar, t_o) <= 2 V(z, t_o) + 2 V(zbar, t_o).
PROPERTY IV: Substitute as directed: z -> (z + zbar)/2, zbar -> (z - zbar)/2:
  2 V( (z + zbar)/2, t_o ) + 2 V( (z - zbar)/2, t_o ) <= V(z, t_o) + V(zbar, t_o).
PROPERTY V: Combining Properties III and IV (with the quadratic scaling of Property I),
  V(z + zbar, t_o) + V(z - zbar, t_o) = 2 V(z, t_o) + 2 V(zbar, t_o).
Take partial derivatives of the equation with respect to z:
  grad V(z + zbar, t_o) + grad V(z - zbar, t_o) = 2 grad V(z, t_o).
Take partial derivatives of the equation with respect to zbar:
  grad V(z + zbar, t_o) - grad V(z - zbar, t_o) = 2 grad V(zbar, t_o).
Add the two results and divide by two to get
  grad V(z + zbar, t_o) = grad V(z, t_o) + grad V(zbar, t_o).
To show linearity of the gradient, the last step we must perform is to show
  grad V(alpha z, t_o) = alpha grad V(z, t_o).
PROPERTY VI:
  grad V(alpha z, t_o) = d V(alpha z, t_o)/d(alpha z)
                       = (1/alpha) d( alpha^2 V(z, t_o) )/dz
                       = alpha grad V(z, t_o).
So, the gradient is linear. This means that grad V(z, t_o), which is a vector, is linear in z and hence has a matrix representation
  grad V(z, t_o) = M(t_o) z,   where M(t_o) is n x n.
PROPERTY VII: Let f(lambda) = V(lambda z, t_o). Then,
  V(z, t_o) = V(0, t_o) + int_0^1 ( d V(lambda z, t_o)/d lambda ) d lambda
            = V(0, t_o) + int_0^1 grad V(lambda z, t_o)^T z d lambda
            = int_0^1 lambda z^T M^T(t_o) z d lambda
            = z^T M^T(t_o) z (lambda^2/2)|_0^1
            = z^T M^T(t_o) z / 2.
PROPERTY VIII:
Therefore, P(t_o) = ( M(t_o) + M^T(t_o) )/4. Also, P(t_o) >= 0 since J(z, u, t_o) >= 0 for all u, z. Thus
  V(z, t_o) = min_u J(z, u, t_o) = z^T P(t_o) z >= 0 for all z.
Because V(x, t) = x^T P(t) x, (dV(x, t)/dx)|_{x_o, t_o} = 2 x^T(t_o) P^T(t_o). We can now state
  u*(t) = -R^{-1} B^T P(t) x(t) = -K(t) x(t).
This expression is valid for all t_o. Also note that we can write
  2 x^T(t_o) P(t_o) A x(t_o) = x^T(t_o) P(t_o) A x(t_o) + x^T(t_o) A^T P(t_o) x(t_o),
so, substituting into the HJB equation,
  Pdot(t) = P(t) B R^{-1} B^T P(t) - P(t) A - A^T P(t) - Q,
to be integrated backward in time.
The problem we discover is that Matlab's integration routines (e.g., ode45.m) will only work on vector differential equations, not matrix differential equations such as this.
The Kronecker product comes to the rescue once again, along with the matrix stacking operator. We can write the above matrix differential equation as a vector differential equation:
  Pdot_st = -[ (A^T kron I) + (I kron A^T) ] P_st - Q_st + ( P^T kron P )( B R^{-1} B^T )_st.
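The backward integration can be sketched in Python with the same stacking idea (flatten P into a vector for the ODE solver, reshape inside the derivative function). The plant matrices below are an assumption consistent with the example that follows -- stated poles at 0 and 1, state cost Q = C'C with C = [0 1]:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Assumed matrices for the example (poles of A at 1 and 0), R = 1
A = np.array([[1.0, 0.0], [2.0, 0.0]])
B = np.array([[1.0], [0.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
S = B @ B.T                      # B R^{-1} B^T with R = 1

def pdot(t, pst):
    P = pst.reshape(2, 2)        # un-stack
    Pd = P @ S @ P - P @ A - A.T @ P - Q
    return Pd.flatten()          # re-stack

# Integrate backward from P(5) = diag(2, 2) to t = 0
sol = solve_ivp(pdot, [5.0, 0.0], (2 * np.eye(2)).flatten(),
                rtol=1e-10, atol=1e-12)
P0 = sol.y[:, -1].reshape(2, 2)

Pss = solve_continuous_are(A, B, Q, np.array([[1.0]]))
print(P0)
print(Pss)   # P0 ~ Pss ~ [[1+sqrt(5), 1], [1, sqrt(5)/2]]
```

By t = 0 the backward solution has essentially converged to the steady-state (algebraic Riccati) solution.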
EXAMPLE :
Solve the differential matrix Riccati equation that results in the control
signal that minimizes the cost function
J = xᵀ(5) [2 0; 0 2] x(5) + ∫₀⁵ [yᵀ(t)y(t) + uᵀ(t)u(t)] dt.
First, note that the open-loop system is unstable, with poles at 0 and
1. It is controllable and observable.
The cost function is written in terms of y(t) but not x(t). However,
since there is no feedthrough term, we can also write it as
J = xᵀ(5) [2 0; 0 2] x(5) + ∫₀⁵ [xᵀ(t)CᵀC x(t) + uᵀ(t)u(t)] dt.
This is a common trick.
[Simulink diagram omitted: P_st is integrated backward in time using MATLAB
Function blocks (kron(A,eye(size(A)))+kron(eye(size(A)),A))*u and
kron(unst(u),unst(u))*st(B*inv(R)*B'), with final time 5 and a Clock block.
The plot of P11, P22, and P12 = P21 versus time (sec) is produced by
plot(tvec.signals.values, pvec.signals.values).]
Setting Ṗ(t) = 0 gives the steady-state solution. With P = [p11 p12; p12 p22],
0 = AᵀP + PA − PBR⁻¹BᵀP + CᵀC
  = [p11 + 2p12  p12 + 2p22; 0  0] + [p11 + 2p12  0; p12 + 2p22  0]
    − [p11²  p11 p12; p11 p12  p12²] + [0  0; 0  1].
This matrix equality represents a set of three simultaneous equations
(because P is symmetric). They are:
2p11 − p11² + 4p12 = 0
p12 + 2p22 − p11 p12 = 0
1 − p12² = 0.
For this feedback, the closed-loop poles are at s = (−√5 ± j√3)/2 (stable).
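The hand solution can be cross-checked numerically. The realization below (A = [1 0; 2 0], B = [1; 0], C = [0 1]) is an assumption chosen to be consistent with the matrix entries written out above; the notes' own realization appears earlier.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[1.0, 0.0], [2.0, 0.0]])   # poles at 0 and 1 (assumed realization)
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])
R = np.array([[1.0]])

P = solve_continuous_are(A, B, C.T @ C, R)   # steady-state Riccati solution
K = np.linalg.inv(R) @ B.T @ P               # K = R^{-1} B' P
poles = np.linalg.eigvals(A - B @ K)         # closed-loop poles
```

This reproduces p11 = 1 + √5, p12 = 1, p22 = √5/2, and closed-loop poles at (−√5 ± j√3)/2.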
AᵀP + PA − (1/ρ)PBBᵀP + CᵀC = 0,
or
CᵀC = P(sI − A) + (−sI − Aᵀ)P + (1/ρ)PBBᵀP,
where C ∈ ℝ¹ˣⁿ.
Pre-multiply by Bᵀ(−sI − Aᵀ)⁻¹ and post-multiply by (sI − A)⁻¹B:
Gᵀ(−s)G(s) = Bᵀ(−sI − Aᵀ)⁻¹PB + BᵀP(sI − A)⁻¹B
             + (1/ρ)Bᵀ(−sI − Aᵀ)⁻¹PB BᵀP(sI − A)⁻¹B.
Recognizing Kᵀ = (1/ρ)PB and K = (1/ρ)BᵀP, divide both sides by ρ:
(1/ρ)Gᵀ(−s)G(s) = Bᵀ(−sI − Aᵀ)⁻¹Kᵀ + K(sI − A)⁻¹B
                  + Bᵀ(−sI − Aᵀ)⁻¹Kᵀ K(sI − A)⁻¹B.
Add 1 to both sides and collect terms:
(1 + K(−sI − A)⁻¹B)(1 + K(sI − A)⁻¹B) = 1 + (1/ρ)Gᵀ(−s)G(s).
det [A B; C D] = det(A) det(D − CA⁻¹B).
We will not prove this fact here, but the result may be found in many
linear algebra books.
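A quick numeric sanity check of the block-determinant (Schur complement) fact; the random matrices are arbitrary, with a diagonal shift to keep A invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # diagonal shift keeps A invertible
B = rng.normal(size=(3, 2))
C = rng.normal(size=(2, 3))
D = rng.normal(size=(2, 2))

M = np.block([[A, B], [C, D]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(A) * np.linalg.det(D - C @ np.linalg.inv(A) @ B)
```

The identity holds for any partition in which the upper-left block is invertible.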
FACT: 1 + K(sI − A)⁻¹B = det(sI − A + BK)/det(sI − A) = χ_cl(s)/χ_ol(s).
PROOF : Consider the block matrix
M₁ = [sI − A  B; −K  1].
By the block-determinant fact,
det(M₁) = det(sI − A) det(1 + K(sI − A)⁻¹B)
        = det(sI − A)(1 + K(sI − A)⁻¹B).
Now post-multiply M₁ by a matrix with unit determinant:
M₂ = [sI − A  B; −K  1][I  0; K  1] = [sI − A + BK  B; 0  1],
so det(M₂) = det(sI − A + BK). Because the second factor has unit
determinant, det(M₂) = det(M₁), and the FACT follows.
Applying the FACT to both factors of
Δ(s) = (1 + K(−sI − A)⁻¹B)(1 + K(sI − A)⁻¹B) = 1 + (1/ρ)Gᵀ(−s)G(s)
gives Δ(s) = χ_cl(−s)χ_cl(s)/[χ_ol(−s)χ_ol(s)].
Therefore Δ(s) = 0 requires χ_cl(s) = 0 or χ_cl(−s) = 0. LQR requires
that χ_cl(s) be Hurwitz (stable), so we have the conclusion:
Closed-loop poles are the LHP zeros of 1 + (1/ρ)Gᵀ(−s)G(s).
and
Gᵀ(−s) = Bᵀ(−sI − Aᵀ)⁻¹Cᵀ + Dᵀ.
[Block diagram omitted: realization of Gᵀ(−s)G(s) as a cascade of the plant
and its adjoint system.]
The cascade has the state-space realization
[ẋ(t); ξ̇(t)] = [A  0; −CᵀC  −Aᵀ][x(t); ξ(t)] + [B; −CᵀD]u(t)
y(t) = [DᵀC  Bᵀ][x(t); ξ(t)] + DᵀD u(t).
function srl(sys)
[A,B,C,D]=ssdata(sys);
bigA=[A zeros(size(A)); -C'*C -A'];
bigB=[B; -C'*D];
bigC=[D'*C B'];
bigD=D'*D;
srlsys=ss(bigA,bigB,bigC,bigD);
rlocus(srlsys);
EXAMPLE :
Let
G(s) = 1/((s − 1.5)(s² + 2s + 2)).
[Symmetric root locus plot omitted (real axis −3 to 3).]
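For a SISO G(s) = n(s)/d(s), the symmetric-root-locus closed-loop poles can also be found directly as the LHP roots of d(s)d(−s) + (1/ρ)n(s)n(−s). A Python sketch for the example plant (ρ = 1 is an assumed weighting), cross-checked against the C.A.R.E. with Q = CᵀC in an assumed controller canonical realization:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = np.array([1.0])
d = np.polymul([1.0, -1.5], [1.0, 2.0, 2.0])   # s^3 + 0.5 s^2 - s - 3
rho = 1.0

def neg_s(p):
    # coefficients of p(-s): negate every odd power of s
    m = len(p) - 1
    return np.array([c * (-1.0) ** (m - i) for i, c in enumerate(p)])

srl = np.polyadd(np.polymul(d, neg_s(d)), (1.0 / rho) * np.polymul(n, neg_s(n)))
clp = np.roots(srl)
clp = clp[clp.real < 0]                        # keep the stable (LHP) roots

# LQR cross-check: controller canonical form of G(s), Q = C'C, R = rho
A = np.array([[-0.5, 1.0, 3.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[0.0, 0.0, 1.0]])
P = solve_continuous_are(A, B, C.T @ C, np.array([[rho]]))
lqr_poles = np.linalg.eigvals(A - B @ (B.T @ P / rho))

def csort(v):
    # deterministic complex ordering for comparison (imag, then real)
    return v[np.lexsort((v.real, v.imag))]
```

The two pole sets agree, which is exactly the Chang-Letov statement above.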
EXAMPLE :
[Step-response plots omitted (amplitude versus time, 0 to 1.6 sec).]
[First-order system response plots omitted: impulse response h(t) decays to
1/e of its initial value at t = τ; step response y(t) = K(1 − e^{−t/τ}),
where K = dc gain.]
[Second-order response plots omitted: impulse and step responses versus ωₙt
for damping ratios ζ = 0 to 1 in steps of 0.1; θ = sin⁻¹(ζ).]
Time-domain specifications determine where poles SHOULD be
placed in the s-plane (step response).
[Figures omitted: rise time t_r (10% to 90%), settling time t_s, peak time
t_p, and overshoot M_p marked on a step response; plot of M_p (%) versus
damping ratio ζ.]
Linear Functions
Suppose that f is a function that takes as input n-vectors and returns
m-vectors.
We say that f is linear iff
Scaling: for any n-vector x and scalar α, f(αx) = α f(x);
Superposition: for any n-vectors x and y, f(x + y) = f(x) + f(y).
Null-Space
The nullspace of A ∈ ℝᵐˣⁿ is defined as
𝒩(A) = {x ∈ ℝⁿ : Ax = 0}.
Range-Space
The rangespace of A ∈ ℝᵐˣⁿ is defined as
ℛ(A) = {Ax : x ∈ ℝⁿ}.
Rank
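Rank and nullspace are computed reliably from the SVD: the rank is the number of nonzero singular values, and the trailing right singular vectors span 𝒩(A). A small Python sketch (the Matlab equivalents are rank and null):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # rank = number of singular values above tol;
    # rows of Vt beyond the rank span the nullspace
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return rank, Vt[rank:].T          # columns form a basis of N(A)

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row = 2x first row, so rank 1
rank, N = null_space_basis(A)
```

Here dim 𝒩(A) = n − rank(A) = 2, and A times each basis vector is zero.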
Write
y = [y₁; y₂; …; y_m],  x = [x₁; x₂; …; xₙ],
A = [A₁₁ A₁₂ ⋯ A₁ₙ; A₂₁ A₂₂ ⋯ A₂ₙ; ⋮ ⋱ ⋮; A_m1 A_m2 ⋯ A_mn].
Some interpretations of y = Ax: the matrix A maps x into y. Writing
A = [a₁ a₂ ⋯ aₙ] in terms of its columns, y = x₁a₁ + x₂a₂ + ⋯ + xₙaₙ is a
linear combination of the columns of A.
T⁻¹AT = J = [J₁ 0; ⋱; 0 J_q],
where
Jᵢ = [λᵢ 1 0; 0 λᵢ ⋱; ⋱ 1; 0 λᵢ] ∈ ℝ^(nᵢ×nᵢ),  n = Σᵢ₌₁ to q of nᵢ.
dim 𝒩(λᵢI − A) = number of Jordan blocks associated with
eigenvalue λᵢ.
The sizes of each Jordan block may also be computed, but this is
complicated, i.e., leave it to Matlab!
Cayley-Hamilton Theorem
The square matrix A satisfies its own characteristic equation. That is,
if
χ(λ) = det(λI − A) = 0
then
χ(A) = 0.
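A quick numeric illustration of the theorem (Python stand-in for Matlab; the 2×2 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
coeffs = np.poly(A)        # characteristic polynomial: lambda^2 + 3 lambda + 2
npow = len(coeffs) - 1
# evaluate chi(A) = A^2 + 3A + 2I by substituting the matrix for lambda
chiA = sum(c * np.linalg.matrix_power(A, npow - i) for i, c in enumerate(coeffs))
# chiA -> the zero matrix
```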
SIGNIFICANCE :
DEFINITION :
So,
Y(s)/U(s) = C(sI − A)⁻¹B + D,
but
(sI − A)⁻¹ = adj(sI − A)/det(sI − A).
[State-space simulation diagram omitted.]
G(s) = (b₁s² + b₂s + b₃)/(s³ + a₁s² + a₂s + a₃)
Controller Canonical Form
ẋ(t) = [−a₁ −a₂ −a₃; 1 0 0; 0 1 0] x(t) + [1; 0; 0] u(t)
y(t) = [b₁ b₂ b₃] x(t).
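The canonical form can be verified numerically by comparing C(sI − A)⁻¹B against the polynomial ratio at a test frequency. The coefficients below are hypothetical:

```python
import numpy as np

# hypothetical G(s) = (b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)
a1, a2, a3 = 6.0, 11.0, 6.0
b1, b2, b3 = 1.0, 5.0, 6.0

A = np.array([[-a1, -a2, -a3],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])       # controller canonical form
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[b1, b2, b3]])

s = 1.0j                              # evaluate both forms at s = j1
Gss = (C @ np.linalg.inv(s * np.eye(3) - A) @ B)[0, 0]
Gpoly = np.polyval([b1, b2, b3], s) / np.polyval([1.0, a1, a2, a3], s)
```

The two evaluations agree, confirming the realization.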
[Signal-flow diagram of controller canonical form omitted (states x₁c, x₂c, x₃c).]
Observer Canonical Form
ẋ(t) = [−a₁ 1 0; −a₂ 0 1; −a₃ 0 0] x(t) + [b₁; b₂; b₃] u(t)
y(t) = [1 0 0] x(t).
[Signal-flow diagram of observer canonical form omitted (states x₁o, x₂o, x₃o).]
Then,
[1 0 0; a₁ 1 0; a₂ a₁ 1][β₁; β₂; β₃] = [b₁; b₂; b₃].
Controllability Canonical Form
ẋ(t) = [0 0 −a₃; 1 0 −a₂; 0 1 −a₁] x(t) + [1; 0; 0] u(t)
y(t) = [β₁ β₂ β₃] x(t).
[Signal-flow diagram omitted (states x₁co, x₂co, x₃co).]
Observability Canonical Form
ẋ(t) = [0 1 0; 0 0 1; −a₃ −a₂ −a₁] x(t) + [β₁; β₂; β₃] u(t)
y(t) = [1 0 0] x(t).
[Signal-flow diagram omitted (states x₁ob, x₂ob, x₃ob).]
G(s) = r₁/(s − p₁) + r₂/(s − p₂) + ⋯ + rₙ/(s − pₙ).
Then,
ẋ(t) = [p₁ 0; p₂; ⋱; 0 pₙ] x(t) + [r₁; r₂; ⋮; rₙ] u(t)
y(t) = [1 1 ⋯ 1] x(t).
Transformations
Let x(t) = T z(t), where T is an invertible (similarity) transformation
matrix.
ż(t) = (T⁻¹AT) z(t) + (T⁻¹B) u(t) = Ā z(t) + B̄ u(t)
y(t) = (CT) z(t) + D u(t) = C̄ z(t) + D̄ u(t).
x(t) = e^{At} x(0) + ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ.
If A is diagonalizable, A = VΛV⁻¹, then
e^{Λt} = diag(e^{λ₁t}, e^{λ₂t}, …, e^{λₙt})
and e^{At} = V e^{Λt} V⁻¹. If A is not diagonalizable,
T⁻¹AT = J = [J₁ 0; ⋱; 0 J_q],
where
Jᵢ = [λᵢ 1 0; 0 λᵢ ⋱; ⋱ 1; 0 λᵢ] ∈ ℝ^(nᵢ×nᵢ),  n = Σᵢ₌₁ to q of nᵢ.
[Signal-flow diagram omitted (chain of 1/s blocks with output gains c₁, c₂, …, cₙ).]
For a k × k Jordan block J with eigenvalue λ,
(sI − J)⁻¹ = [(s−λ)⁻¹ (s−λ)⁻² ⋯ (s−λ)⁻ᵏ; 0 (s−λ)⁻¹ ⋯ (s−λ)⁻ᵏ⁺¹; ⋮ ⋱ ⋮; 0 ⋯ (s−λ)⁻¹]
           = (s−λ)⁻¹ I + (s−λ)⁻² F₁ + ⋯ + (s−λ)⁻ᵏ F_{k−1},
so
e^{Jt} = e^{λt} [1 t ⋯ t^{k−1}/(k−1)!; 0 1 ⋯ t^{k−2}/(k−2)!; ⋮ ⋱ ⋮; 0 ⋯ 1]
       = e^{λt} (I + t F₁ + ⋯ + t^{k−1}/(k−1)! F_{k−1}).
Thus, Jordan blocks yield repeated poles and terms of the form tᵖe^{λt}
in e^{At}.
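The closed-form exponential of a Jordan block can be checked against a general-purpose matrix exponential. A Python sketch for a 3 × 3 block (λ = −1 and t = 0.5 are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

lam, t = -1.0, 0.5
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])        # single 3x3 Jordan block

# e^{Jt} = e^{lam t} (I + t F1 + t^2/2! F2) -- upper triangular Toeplitz
closed_form = np.exp(lam * t) * np.array([[1.0, t, t**2 / 2.0],
                                          [0.0, 1.0, t],
                                          [0.0, 0.0, 1.0]])
numeric = expm(J * t)
```

The agreement shows where the t^p e^{λt} terms in e^{At} come from.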
Zeros of a State-Space System
Blocking Zero
Consider the transfer-function matrix G(s).
A blocking zero is a value s₀ for which G(s₀) is identically zero.
Put in u(t) = u₀e^{s₀t} and you get zero output (except for output due to
initial conditions).
Not considered a very useful definition of MIMO zero.
Transmission Zero
Put in u(t) = u₀e^{zᵢt} and you get zero output at the frequency e^{zᵢt}.
State space: Have input and state contributions (consider first the
SISO case)
u(t) = u₀e^{zᵢt},  x(t) = x₀e^{zᵢt}  …  y(t) = 0.
ẋ(t) = Ax(t) + Bu(t)  ⇒  zᵢe^{zᵢt}x₀ = Ax₀e^{zᵢt} + Bu₀e^{zᵢt},
and y(t) = 0 requires Cx₀e^{zᵢt} + Du₀e^{zᵢt} = 0. Stacking these,
[zᵢI − A  −B; C  D][x₀; u₀] = 0.
Zero at frequency zᵢ if
rank [zᵢI − A  −B; C  D] < n + min{p, q}.
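The stacked condition is a generalized eigenvalue problem: [A B; C D][x₀; u₀] = zᵢ [I 0; 0 0][x₀; u₀], so transmission zeros are the finite generalized eigenvalues of that pencil. A Python sketch for a hypothetical SISO system with G(s) = (s+1)/((s+1)(s+2)), whose transmission zero is at s = −1:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

M = np.block([[A, B], [C, D]])
N = np.block([[np.eye(2), np.zeros((2, 1))],
              [np.zeros((1, 2)), np.zeros((1, 1))]])

w = eig(M, N, right=False)           # generalized eigenvalues of the pencil
zeros = w[np.isfinite(w)]            # infinite eigenvalues carry no zero
```

The single finite eigenvalue sits at −1, matching the zero of the transfer function before pole-zero cancellation.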
[Block diagram omitted: sampled-data control loop. The error e(t) is sampled
(A2D) to give e[k]; the discrete compensator D(z) produces u[k], which is
converted back (D2A with zoh) to u(t); the plant G(s), driven by u(t) and
disturbance w(t), produces y(t), measured with sensor noise v(t).]
[Figures omitted: mapping from the s-plane to the z-plane under z = e^{sT}.]
[Figures omitted: regions of the z-plane with good damping, good natural
frequency ωₙ, and good settling time.]
Y(z) = [C(zI − A)⁻¹B + D] U(z) + C(zI − A)⁻¹ z x[0],
where the first term is the transfer function of the system and the second
is the response to the initial condition x[0].
So,
Y(z)/U(z) = C(zI − A)⁻¹B + D,
and in the time domain
x[k] = Aᵏ x[0] + Σ_{j=0}^{k−1} A^{k−1−j} B u[j],
where the sum is a convolution.
[Block diagram omitted: u(t) → ZOH → (A, B, C, D) → y(t).]
Similarly, the output equation is unchanged by sampling.
That is, C_d = C; D_d = D.
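The discretized state matrices come from A_d = e^{AT} and B_d = ∫₀ᵀ e^{Aτ}dτ B; both fall out of one block-matrix exponential. A Python sketch (the double-integrator plant and T = 0.1 are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator (assumed)
B = np.array([[0.0], [1.0]])
T = 0.1

# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
n, m = A.shape[0], B.shape[1]
Mblk = np.zeros((n + m, n + m))
Mblk[:n, :n] = A
Mblk[:n, n:] = B
E = expm(Mblk * T)
Ad, Bd = E[:n, :n], E[:n, n:]
# for the double integrator: Ad = [[1, T], [0, 1]], Bd = [[T^2/2], [T]]
```

C_d = C and D_d = D carry over unchanged, as stated above.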
OBSERVABILITY AND CONTROLLABILITY
Continuous-Time Observability: Where am I?
Define
𝒪(C, A) = [C; CA; ⋮; CA^{n−1}]
and
𝒞(A, B) = [B  AB  ⋯  A^{n−1}B].
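The rank tests are a few lines in any numeric language. A Python sketch (the Matlab equivalents are obsv, ctrb, and rank; the example plant is a double integrator with a position measurement, an assumption for illustration):

```python
import numpy as np

def ctrb(A, B):
    # [B, AB, ..., A^{n-1}B]
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(C, A):
    # [C; CA; ...; CA^{n-1}]
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
rc = np.linalg.matrix_rank(ctrb(A, B))   # full rank -> controllable
ro = np.linalg.matrix_rank(obsv(C, A))   # full rank -> observable
```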
The input
u(t) = −Bᵀe^{Aᵀ(t₁−t)} W_c⁻¹(t₁) [e^{At₁}x₀ − x₁]
drives the state from x(0) = x₀ to x(t₁) = x₁.
The state may be recovered from the output if the observability Gramian
W_o(t) = ∫₀ᵗ e^{Aᵀτ} CᵀC e^{Aτ} dτ,
where ỹ(t) = y(t) − C(…),
is nonsingular.
In particular, the discrete-time system is controllable iff
W_dc = Σ_{m=0}^{n−1} Aᵐ B Bᵀ (Aᵀ)ᵐ
is nonsingular, and observable iff
W_do = Σ_{m=0}^{n−1} (Aᵀ)ᵐ CᵀC Aᵐ
is nonsingular.
For the system
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t),
the matrix T = [B  AB  ⋯  A^{n−1}B] = 𝒞(A, B)
transforms the system into controllability form iff the original system is
controllable.
EXTENSION I :
EXTENSION II :
T = 𝒞_old 𝒞_new⁻¹.
In controller canonical form, the feedback gain K ∈ ℝ¹ˣⁿ is
K = [(α₁ − a₁)  (α₂ − a₂)  ⋯  (αₙ − aₙ)],
and the Toeplitz matrix
[1 a₁ ⋯ aₙ₋₁; 0 1 ⋯ aₙ₋₂; ⋮ ⋱ ⋮; 0 ⋯ 0 1]
appears in the transformation back to the original coordinates.
Ackermann design:
K = [0 ⋯ 0 1] 𝒞⁻¹(A, B) χ_d(A).
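Ackermann's formula is short enough to code directly. A Python sketch (Matlab's acker plays the same role; the second-order plant and desired poles −2 ± j are illustrative assumptions):

```python
import numpy as np

def acker(A, B, poles):
    # K = [0 ... 0 1] C(A,B)^{-1} chi_d(A)
    n = A.shape[0]
    Cm = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    chi = np.real(np.poly(poles))          # desired characteristic polynomial
    chidA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(chi))
    e = np.zeros((1, n))
    e[0, -1] = 1.0                         # last row of C(A,B)^{-1} selected
    return e @ np.linalg.inv(Cm) @ chidA

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-2.0 + 1.0j, -2.0 - 1.0j])
clp = np.linalg.eigvals(A - B @ K)         # closed-loop poles land as requested
```

Ackermann's formula is numerically fragile for large n, which is why place is usually preferred in practice.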
[Block diagram omitted: state feedback u(t) = r(t) − Kx(t) around the plant
(A, B, C, D).]
Reference Input
A constant output y_ss requires constant state x_ss and
constant input u_ss. We can change the tracking problem to a
regulation problem around u(t) = u_ss and x(t) = x_ss:
(u(t) − u_ss) = −K(x(t) − x_ss).
OBSERVATION :
u_ss = N_u r_ss  (N_u ∈ ℝ¹ˣ¹),
x_ss = N_x r_ss  (N_x ∈ ℝⁿˣ¹).
[Step-response plots omitted (amplitude versus time).]
PROCEDURE :
[Tables omitted: normalized pole locations for prototype step responses,
orders 1 through 6, including a table of constant-settling-time pole
locations used in step 2 of the procedure.]
2. Find the nth-order poles from the table of constant settling time,
and divide pole locations by ts .
3. Use acker/place to locate poles. Simulate and check control effort.
Integral Control for Continuous-Time Systems
In many practical designs, integral control is needed to counteract
disturbances, plant variations, or other noises in the system.
[Block diagram omitted: integral control, with an integrator K_I/s acting on
the tracking error in parallel with the state-feedback path −K(x(t) − N_x r(t)).]
Augment the state with the integral state x_I(t), where ẋ_I(t) = y(t) − r(t):
[ẋ_I(t); ẋ(t)] = [0  C; 0  A][x_I(t); x(t)] + [0; B]u(t) − [I; 0]r(t)
u(t) = −[K_I  K][x_I(t); x(t)] + K N_x r(t).
Integral Control for Discrete-Time Systems
[Block diagram omitted: discrete integral control structure.]
[x_I[k+1]; x[k+1]] = [1  C; 0  A][x_I[k]; x[k]] + [0; B]u[k] − [I; 0]r[k]
u[k] = −[K_I  K][x_I[k]; x[k]] + K N_x r[k].
[Block diagram omitted: plant (A, B) driven by u(t) and disturbance w(t)
produces x(t) and y(t); an observer copy (A, B) produces x̂(t) and ŷ(t).]
Let's look at the error x̃(t) = x(t) − x̂(t):
x̃̇(t) = Ax̃(t) − L(Cx(t) − Cx̂(t))
      = (A − LC)x̃(t),
so x̂(t) → x(t) if A − LC is stable, for any value of x̂(0).
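The error dynamics are easy to see numerically: propagate x̃ through e^{(A−LC)t} and watch it decay regardless of the initial error. A Python sketch (the plant, gain L, and initial error are illustrative assumptions; L places both observer poles at −2):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[4.0], [4.0]])      # eig(A - LC) = {-2, -2}
Aerr = A - L @ C

t = 6.0
err0 = np.array([1.0, -1.0])      # arbitrary initial estimation error
err_t = expm(Aerr * t) @ err0     # error after t seconds: essentially zero
```

Faster observer poles shrink the error faster, at the cost of amplifying measurement noise.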
[Block diagram omitted: observer structure with correction gain L.]
In observer canonical form, the observer gain is
L = [(α₁ − a₁); (α₂ − a₂); ⋮; (αₙ − aₙ)];
in general coordinates, the lower-triangular Toeplitz matrix
[1 0 ⋯ 0; a₁ 1 ⋱ ⋮; ⋮ ⋱ ⋱ 0; aₙ₋₁ ⋯ a₁ 1]
and the observability matrix map this gain back.
Ackermann method:
L = χ_d(A) 𝒪⁻¹(C, A) [0; ⋮; 0; 1].
[Block diagram omitted: discrete prediction observer with gain L_p.]
Now that we have a structure to estimate the state x(t), let's feed
back x̂(t) to control the plant. That is,
u(t) = r(t) − K x̂(t),
which gives the closed-loop dynamics
[ẋ(t); x̃̇(t)] = [A − BK  BK; 0  A − LC][x(t); x̃(t)].
The Compensator: Continuous-Time
D(s) = U(s)/Y(s) = −K(sI − A + BK + LC − LDK)⁻¹L.
The Compensator: Discrete-Time, Prediction-Estimator Based
D(z) = U(z)/Y(z) = −K(zI − A + BK + L_pC − L_pDK)⁻¹L_p.
Time update: Predict new state from old state estimate and system
dynamics:
x̂_p[k] = A x̂_c[k−1] + B u[k−1].
Measurement update: Measure the output and use that to
update/correct the estimate:
x̂_c[k] = x̂_p[k] + L_c (y[k] − C x̂_p[k]).
The estimation error then obeys
x̃_c[k+1] = (A − A L_c C) x̃_c[k].
Partition the state so that the measured part x_a appears first:
[x_a[k+1]; x_b[k+1]] = [A_aa  A_ab; A_ba  A_bb][x_a[k]; x_b[k]] + [B_a; B_b]u[k]
y[k] = [I  0][x_a[k]; x_b[k]] = x_a[k].
Reduced-order estimator:
x̂_b[k+1] = A_bb x̂_b[k] + A_ba x_a[k] + B_b u[k] + ⋯
[Block diagram omitted: two-degree-of-freedom structure with gains K₁ and K₂.]
or
A − BK = T F T⁻¹.
Tradeoff: slower response versus smaller intermediate states and smaller
control effort.
J = Σ_{k=i}^{N−1} (xᵀ[k]Q x[k] + uᵀ[k]R u[k]).
RESULTS :
K_k = (R + BᵀP_{k+1}B)⁻¹ BᵀP_{k+1}A
P_k = (A − BK_k)ᵀ P_{k+1} (A − BK_k) + Q + K_kᵀR K_k
K_ss = (R + BᵀP_ss B)⁻¹ BᵀP_ss A.
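The backward recursion above converges (for a stabilizable, detectable problem) to the steady-state solution of the discrete algebraic Riccati equation. A Python sketch (the discretized double-integrator plant and weights are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # ZOH-discretized double integrator
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))                     # start from P_N = 0, iterate backward
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # K_k
    P = (A - B @ K).T @ P @ (A - B @ K) + Q + K.T @ R @ K  # P_k (Joseph form)

Pss = solve_discrete_are(A, B, Q, R)     # direct steady-state solution
```

The iterated P matches P_ss, and the final K is the steady-state gain K_ss.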
J = ∫_{t₀}^{t_f} (xᵀ(t)Q x(t) + uᵀ(t)R u(t)) dt.
Ṗ(t) = P(t)BR⁻¹BᵀP(t) − Q − P(t)A − AᵀP(t),
with the boundary condition that P(t_f) = P_{t_f}. The differential
equation runs backwards in time to find P(t).
If t_f → ∞, P(t) → P_ss as t → 0. Then,
There are many ways to solve the C.A.R.E., but when Q has the
form CᵀC, and the system is SISO, a variant of the Chang-Letov
method may be used:
The optimal eigenvalues are the roots of the equation
1 + (1/ρ)Gᵀ(−s)G(s) = 0.