State feedback control law: u(t) = −K x(t)
[Block diagram: the loop u(s) → B → (1/s) → x(s), with A feeding x(s) back to the integrator input, C producing y(s), and the static gain K closing the loop as u(s) = −K x(s).]
Salient points:
1. There is no reference input (we will add one later). We are driving the state to zero.
2. We can measure all of the states, which is often not realistic. We will remove this
assumption later by designing an estimator.
3. The feedback gain, K, is static. This controller has no states.
Roy Smith: ECE 147b 11: 2
State Feedback
State Feedback
State-space models allow us to use a very powerful design method: state feedback.
To begin: assume that we can measure all of the state vector.
We will address the case where this cannot be done later.
State feedback control law: u(t) = −K x(t) or, equivalently, u(s) = −K x(s)
[Block diagram: the loop u(s) → B → (1/s) → x(s), with A feeding x(s) back to the integrator input, C producing y(s), and the static gain K closing the loop as u(s) = −K x(s).]
Roy Smith: ECE 147b 11: 1
State feedback
Structure of the state feedback control gains:
Examine the feedback gain matrix in more detail, ...
u(k) = −K x(k)

     = − [ K_1  K_2  ⋯  K_n ] [ x_1(k) ]
                              [ x_2(k) ]
                              [   ⋮    ]
                              [ x_n(k) ]
Note that K is a static n_u × n_x matrix.
We have at least as many control gains in K as states in the system.
We can often use state feedback to place the closed-loop poles anywhere we want.
This may take a lot of gain, in other words, large values of K_i.
Roy Smith: ECE 147b 11: 4
State feedback
Discrete-time state feedback
Given a discrete-time state-space system,
x(k + 1) = Ax(k) + Bu(k)
Recall that the poles of the system are the eigenvalues of A.
State feedback law: u(k) = −K x(k).
This gives,
x(k + 1) = A x(k) − BK x(k)
         = (A − BK) x(k)    ←− closed-loop dynamics
So state feedback changes the pole locations (which is what we expect feedback to do).
The closed-loop poles are given by the eigenvalues of A − BK.
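A quick numerical check of this (the matrices below are made-up values for illustration, not from the notes):

```python
import numpy as np

# Hypothetical unstable plant x(k+1) = A x(k) + B u(k)
A = np.array([[1.1, 0.3],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[0.4, 0.5]])   # a gain chosen for illustration

print(np.linalg.eigvals(A))           # open-loop poles: 1.1 (unstable), 0.9
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles: 0.8, 0.7
```

The feedback moves the unstable open-loop pole at 1.1 inside the unit circle.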
Roy Smith: ECE 147b 11: 3
State feedback
Controllable canonical form
One particular state-space representation (controllable canonical form) makes this
particularly easy.
For example,
P(z) = (b_1 z^2 + b_2 z + b_3) / (z^3 + a_1 z^2 + a_2 z + a_3).
The controllable canonical form for this is:
A = [ −a_1  −a_2  −a_3 ] ,    B = [ 1 ] ,    C = [ b_1  b_2  b_3 ].
    [   1     0     0  ]          [ 0 ]
    [   0     1     0  ]          [ 0 ]
Now the closed-loop dynamics, A − BK, are given by,

A − BK = [ −a_1  −a_2  −a_3 ]   [ 1 ]
         [   1     0     0  ] − [ 0 ] [ K_1  K_2  K_3 ] ,
         [   0     1     0  ]   [ 0 ]

       = [ −a_1−K_1  −a_2−K_2  −a_3−K_3 ]
         [     1         0         0    ]    ←− easy to see the characteristic polynomial
         [     0         1         0    ]
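This structure is easy to verify numerically (the a_i and K_i values below are invented):

```python
import numpy as np

a = [1.2, -0.5, 0.3]    # hypothetical a_1, a_2, a_3
K = [0.7, 0.1, -0.2]    # hypothetical K_1, K_2, K_3

# A - BK in controllable canonical form
top = [-(ai + Ki) for ai, Ki in zip(a, K)]
ABK = np.array([top,
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])

# characteristic polynomial: z^3 + (a_1+K_1) z^2 + (a_2+K_2) z + (a_3+K_3)
print(np.poly(ABK))   # [1.0, 1.9, -0.4, 0.1]
```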
Roy Smith: ECE 147b 11: 6
State feedback
Calculating K
Start by specifying the desired closed-loop poles. Say they have positions in the z-plane:
z = β_1, β_2, ⋯, β_n.
Note that root-locus arguments, or a knowledge of “nice” pole positions, can often be used to
come up with suitable closed-loop pole locations.
The closed-loop pole locations can also be specified as the roots of the polynomial,
(z − β_1)(z − β_2) ⋯ (z − β_n) = 0.
Now, under state feedback, the closed-loop poles are the roots of,
det(zI − (A − BK)) = det(zI − A + BK) = 0.
One approach:
Match coefficients in these two expressions and solve for K_1, K_2, ⋯, K_n.
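For a single-input system the coefficients of det(zI − A + BK) are affine in the entries of K, so coefficient matching reduces to a linear solve. A sketch with invented numbers:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])     # hypothetical 2-state plant
B = np.array([[0.0],
              [1.0]])
betas = [0.5, -0.2]             # desired closed-loop poles
gamma = np.poly(betas)          # desired coefficients [1, gamma_1, gamma_2]

# Probe the affine map K -> coefficients of det(zI - A + BK)
c0 = np.poly(A)                 # coefficients with K = 0
cols = []
for i in range(2):
    Ei = np.zeros((1, 2))
    Ei[0, i] = 1.0
    cols.append(np.poly(A - B @ Ei) - c0)
M = np.column_stack(cols)       # linear part of the map
k, *_ = np.linalg.lstsq(M, gamma - c0, rcond=None)
K = k.reshape(1, 2)

print(np.linalg.eigvals(A - B @ K))   # lands on betas
```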
Roy Smith: ECE 147b 11: 5
State feedback
General case: Ackermann’s Formula
Given the state equation (A, B), and desired closed-loop pole positions (specified by
coefficients γ_1, ⋯, γ_n), Ackermann’s formula calculates K.
First form the “controllability matrix”,
C = [ B  AB  A^2 B  ⋯  A^(n−1) B ]    ←− n × n matrix when n_u = 1
If the desired closed-loop pole positions are given by,
z^n + γ_1 z^(n−1) + γ_2 z^(n−2) + ⋯ + γ_n = 0,
then define:
γ_c(A) := A^n + γ_1 A^(n−1) + ⋯ + γ_n I    ←− this is an n × n matrix.
Ackermann’s formula:  K = [ 0  ⋯  0  1 ] C^(-1) γ_c(A)
In Matlab, K can be calculated by the command: place.
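A direct implementation of the formula is short (a sketch with an invented example system, not a robust routine; numpy assumed):

```python
import numpy as np

def ackermann(A, B, poles):
    """Gain K placing the eigenvalues of A - B K at `poles` (single-input)."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^{n-1} B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    gamma = np.poly(poles)   # [1, gamma_1, ..., gamma_n]
    # gamma_c(A) = A^n + gamma_1 A^{n-1} + ... + gamma_n I
    gc = sum(g * np.linalg.matrix_power(A, n - i) for i, g in enumerate(gamma))
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(C) @ gc

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])   # hypothetical plant
B = np.array([[0.0],
              [1.0]])
K = ackermann(A, B, [0.2, 0.3])
print(np.linalg.eigvals(A - B @ K))   # 0.2 and 0.3
```

In Python, scipy.signal.place_poles offers a numerically robust alternative to this textbook formula.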
Roy Smith: ECE 147b 11: 8
State feedback
Controllable canonical form
The closed-loop matrix is still in controllable canonical form,

A − BK = [ −a_1−K_1  −a_2−K_2  −a_3−K_3 ]
         [     1         0         0    ]
         [     0         1         0    ]
and so,
det(zI − A + BK) = z^3 + (a_1 + K_1) z^2 + (a_2 + K_2) z + (a_3 + K_3).
Equating coefficients with the desired closed-loop poles is easy:
Say, the desired closed-loop poles are given by,
det(zI − A + BK) = (z − β_1)(z − β_2)(z − β_3)
                 = z^3 + γ_1 z^2 + γ_2 z + γ_3 = 0.
Then,
K_1 = γ_1 − a_1 ,    K_2 = γ_2 − a_2 ,    K_3 = γ_3 − a_3.
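In code, the canonical-form gain is just a coefficient difference (coefficient values invented for illustration):

```python
import numpy as np

a = np.array([-1.7, 0.8, -0.1])    # hypothetical open-loop a_1, a_2, a_3
betas = [0.5, 0.4, 0.3]            # desired closed-loop poles
gamma = np.poly(betas)[1:]         # desired gamma_1, gamma_2, gamma_3

# controllable canonical form
A = np.vstack([-a, np.eye(3)[:2]])
B = np.array([[1.0], [0.0], [0.0]])

K = (gamma - a).reshape(1, 3)      # K_i = gamma_i - a_i
print(np.linalg.eigvals(A - B @ K))   # 0.5, 0.4, 0.3
```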
Roy Smith: ECE 147b 11: 7
Controllability
Controllability under similarity transforms
What is the effect of a similarity transform, x(k) = V ξ(k)?
This gives,
ξ(k + 1) = V^(-1) A V ξ(k) + V^(-1) B u(k),
and the controllability matrix is now,
C_ξ = [ (V^(-1) B)  (V^(-1) A V V^(-1) B)  (V^(-1) A V V^(-1) A V V^(-1) B)  ⋯ ]
    = V^(-1) [ B  AB  A^2 B  ⋯  A^(n−1) B ]
    = V^(-1) C.
Because V^(-1) is invertible, rank(C_ξ) = rank(C).
In other words, controllability is not affected by a similarity transform. C_ξ ≠ C, but if one is
invertible, so is the other.
We can transform (via similarity) the system into controllable canonical form iff the system is
controllable.
Note that controllability applies to the state of a system. It doesn’t make sense in terms of
transfer functions (which can always be expressed as controllable canonical forms).
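A numerical sanity check of this invariance (the system and transform are invented):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

A = np.array([[1.1, 0.3],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
V = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # an invertible transform x = V xi
Vi = np.linalg.inv(V)

C = ctrb(A, B)
C_xi = ctrb(Vi @ A @ V, Vi @ B)

print(np.allclose(C_xi, Vi @ C))   # C_xi = V^{-1} C
print(np.linalg.matrix_rank(C) == np.linalg.matrix_rank(C_xi))
```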
Roy Smith: ECE 147b 11: 10
Controllability
Controllability
Note that Ackermann’s formula,
K = [ 0  ⋯  0  1 ] C^(-1) γ_c(A),
will not work if C^(-1) does not exist.
Controllability matrix:
C = [ B  AB  A^2 B  ⋯  A^(n−1) B ].
A system is called “controllable” if and only if C is invertible (equivalently, rank(C) = n).
Formal definition:
A system is controllable (or state controllable) if and only if for any given initial state,
x(t_0) = x_0, and any given final state, x_1, specified at an arbitrary future time, t_1 > t_0, there
exists a control input u(t) that will drive the system from x(t_0) = x_0 to x(t_1) = x_1.
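The rank condition is easy to automate (example matrices invented):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

A = np.array([[0.5, 1.0],
              [0.0, 0.5]])
B_good = np.array([[0.0], [1.0]])   # input drives x_2, which couples into x_1
B_bad = np.array([[1.0], [0.0]])    # x_2 is never excited by the input

print(np.linalg.matrix_rank(ctrb(A, B_good)))   # 2: controllable
print(np.linalg.matrix_rank(ctrb(A, B_bad)))    # 1: not controllable
```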
Roy Smith: ECE 147b 11: 9
Controllability
Another example: Flexible space structures
Large space telescopes:
• Flexible modes must be suppressed
• Limited number of piezoelectric actuators
• Many locations for placing actuators
• Which locations give the most controllable structure?
Roy Smith: ECE 147b 11: 12
State feedback
Controllability as a design tool
Consider a two-input, two-output system,

[ x_1(k+1) ]   [ 1  T ] [ x_1(k) ]   [ T^2/2   0 ] [ u_1(k) ]
[ x_2(k+1) ] = [ 0  1 ] [ x_2(k) ] + [   0     T ] [ u_2(k) ]

[ y_1(k) ]   [ 1  0 ] [ x_1(k) ]
[ y_2(k) ] = [ 0  1 ] [ x_2(k) ] ,
where T is the sample period.
This system is unstable (where are the poles?)
Suppose that implementing u_1(k) costs $50 and implementing u_2(k) costs $100. What is the
minimum cost to stabilize the system: $50, $100 or $150?
Controllability can answer questions like this and is useful when actually building systems to
be controlled.
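One way to probe the question (T chosen arbitrarily) is to check the controllability rank with each input used alone:

```python
import numpy as np

T = 0.1
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[T**2 / 2, 0.0],
              [0.0, T]])

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

rank_u1 = np.linalg.matrix_rank(ctrb(A, B[:, [0]]))   # u_1 alone
rank_u2 = np.linalg.matrix_rank(ctrb(A, B[:, [1]]))   # u_2 alone
print(rank_u1, rank_u2)
```

With this B, u_1 alone never excites x_2, so only u_2 alone gives full rank, suggesting the minimum stabilization cost is $100.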
Roy Smith: ECE 147b 11: 11
State feedback
Example continued
The closed-loop state-space system with u(k) = −K x(k) is,
x(k + 1) = (A − BK) x(k)
         = ( e^T − (1 − e^T) K ) x(k).
And to get the only pole at zero means that A − BK = 0.
e^T − (1 − e^T) K = 0, which implies that K = e^T / (1 − e^T).
Controllability?
C = [ B ] = 1 − e^T ≠ 0, so it is controllable.
Other pole positions
For β = 0.5, K = (e^T − 0.5) / (1 − e^T), and for β = 0.75, K = (e^T − 0.75) / (1 − e^T).
Simulate these choices, with intersample behavior.
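A quick check of these gains (T chosen arbitrarily): the closed-loop pole e^T − (1 − e^T)K should land on β in each case.

```python
import math

T = 0.1                       # sample period (arbitrary choice)
a = math.exp(T)               # open-loop pole e^T
b = 1 - math.exp(T)           # input gain 1 - e^T

poles = {}
for beta in (0.0, 0.5, 0.75):
    K = (a - beta) / b        # gain placing the pole at beta
    poles[beta] = a - b * K   # closed-loop pole
print(poles)
```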
Roy Smith: ECE 147b 11: 14
State feedback
A simple pole placement example
P(s) = −1 / (s − 1)    ←− an unstable system
The ZOH equivalent of this system is,
P(z) = (1 − e^T) / (z − e^T)    ←− pole at z = e^T (outside unit circle)
One particular state-space representation is,
x(k + 1) = e^T x(k) + (1 − e^T) u(k)
y(k) = 1 x(k)
In this case we are measuring the entire state (1 pole = 1 state), and so state feedback is the
same as output feedback.
Pole placement: Let’s try β = 0 (deadbeat control).
The desired closed-loop characteristic polynomial is simply γ_c(z) = z = 0.
Roy Smith: ECE 147b 11: 13
State feedback
State feedback: picking good closedloop pole locations
Choosing the closed-loop pole locations can be difficult.
You know the general good locations (in the z-plane and s-plane). But, ...
1. Look at the transient responses of the system and identify the various pole locations.
Change the ones that you don’t like.
2. The further that you move the poles, the greater the gain that you will require. This can
lead to cost and/or saturation problems. (I.e., move them as little as possible.)
3. Take an “optimal control” course to find out the best trade-off between fast response and
input signal size. This can be viewed as finding K which minimizes the following “cost”:
J = Σ_{k=0}^{∞} ( x(k)^T Q x(k) + u(k)^T R u(k) ),
where Q and R are positive definite weighting matrices for the state error and the control
action respectively. This is known as a Linear Quadratic Regulator (LQR) problem.
ECE 230a and ECE 147c cover this material.
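The LQR gain can be sketched by iterating the discrete Riccati recursion to a fixed point (the system and weights below are invented; in practice Matlab's dlqr or scipy's discrete Riccati solver does this directly):

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # hypothetical discrete-time plant
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                     # state weighting
R = np.array([[1.0]])             # control weighting

P = Q.copy()
for _ in range(1000):             # value iteration on the Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

print(np.linalg.eigvals(A - B @ K))   # stable closed loop
```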
Roy Smith: ECE 147b 11: 16
State feedback
Pole position versus actuator effort:
[Two plots vs. sample index k = 0, ⋯, 10: the state x(k) (top) and the input u(k) (bottom) for
K = e^T/(1 − e^T) (β = 0), K = (e^T − 0.5)/(1 − e^T) (β = 0.5), and K = (e^T − 0.75)/(1 − e^T) (β = 0.75).]
“Nice” intersample behavior is due to the simplicity of the system (1 state).
Roy Smith: ECE 147b 11: 15