
State feedback

State-space models allow us to use a very powerful design method: state feedback.

To begin: assume that we can measure all of the state vector. We will address the case
where this cannot be done later.

State feedback control law:  u(t) = −K x(t),  or equivalently,  u(s) = −K x(s).

[Block diagram: the plant P(s) contains B, an integrator 1/s (taking sx(s) to x(s)), A in
an internal feedback path, and C producing the output y(s); the state x(s) is fed back
through the static gain K to form u(s) = −K x(s).]

Salient points:

1. There is no reference input (we will add one later). We are driving the state to zero.

2. We can measure all of the states, which is often not realistic. We will remove this
   assumption later by designing an estimator.

3. The feedback gain, K, is static. This controller has no states.

Roy Smith: ECE 147b 11: 2
Discrete-time state feedback

Given a discrete-time state-space system,

    x(k + 1) = A x(k) + B u(k)

Recall that the poles of the system are the eigenvalues of A.

State feedback law:  u(k) = −K x(k).

This gives,

    x(k + 1) = A x(k) − BK x(k)
             = (A − BK) x(k)    ←− closed-loop dynamics

So state feedback changes the pole locations (which is what we expect feedback to do).

The closed-loop poles are given by the eigenvalues of A − BK.

Structure of the state feedback control gains

Examine the feedback gain matrix in more detail, ...

                                     [ x1(k) ]
                                     [ x2(k) ]
    u(k) = −K x(k) = − [ K1 K2 · · · Kn ] [   :   ]
                                     [ xn(k) ]

Note that K is a static nu × nx matrix. We have at least as many control gains in K as
states in the system.

We can often use state feedback to place the closed-loop poles anywhere we want. This may
take a lot of gain; in other words, large values of Ki.
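The claim above, that the closed-loop poles are the eigenvalues of A − BK, is easy to check numerically. A minimal sketch in Python/NumPy; the particular A, B, and K values are illustrative, not from the notes:

```python
import numpy as np

# An illustrative 2-state discrete-time system (values chosen for the example).
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])
B = np.array([[1.0],
              [0.5]])

# A static feedback gain, u(k) = -K x(k).
K = np.array([[0.3, 0.1]])

# Open-loop poles: eigenvalues of A (one is outside the unit circle).
print("open-loop poles: ", np.sort(np.linalg.eigvals(A)))

# Closed-loop poles: eigenvalues of A - B K.
closed_loop = np.linalg.eigvals(A - B @ K)
print("closed-loop pole magnitudes:", np.abs(closed_loop))
```

With this gain the closed-loop pole magnitudes drop below one, so the feedback stabilizes the otherwise unstable system.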
Calculating K

Start by specifying the desired closed-loop poles. Say they have positions in the z-plane:

    z = β1, β2, · · · , βn.

Note that root-locus arguments, or a knowledge of “nice” pole positions, can often be used
to come up with suitable closed-loop pole locations.

The closed-loop pole locations can also be specified as the roots of the polynomial,

    (z − β1)(z − β2) · · · (z − βn) = 0.

Now, under state feedback, the closed-loop poles are the roots of,

    det(zI − (A − BK)) = det(zI − A + BK) = 0.

One approach: match coefficients in these two expressions and solve for K1, K2, · · · , Kn.

Controllable canonical form

One particular state-space representation (controllable canonical form) makes this
particularly easy. For example,

            b1 z^2 + b2 z + b3
    P(z) = ------------------------ .
           z^3 + a1 z^2 + a2 z + a3

The controllable canonical form for this is:

        [ −a1  −a2  −a3 ]        [ 1 ]
    A = [  1    0    0  ] ,  B = [ 0 ] ,  C = [ b1  b2  b3 ] .
        [  0    1    0  ]        [ 0 ]

Now the closed-loop dynamics, A − BK, are given by,

             [ −a1  −a2  −a3 ]   [ 1 ]
    A − BK = [  1    0    0  ] − [ 0 ] [ K1  K2  K3 ]
             [  0    1    0  ]   [ 0 ]

             [ −a1 − K1   −a2 − K2   −a3 − K3 ]
           = [    1           0          0    ]   ←− easy to see the characteristic polynomial
             [    0           1          0    ]
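The coefficient-matching approach can be sketched directly in NumPy. The plant coefficients below are illustrative; the point is that, in controllable canonical form, each gain Ki falls out of one coefficient comparison:

```python
import numpy as np

# Plant denominator z^3 + a1 z^2 + a2 z + a3 (illustrative coefficients).
a = np.array([0.5, -0.2, 0.1])   # a1, a2, a3

# Desired closed-loop polynomial z^3 + g1 z^2 + g2 z + g3,
# built here from the desired poles beta = 0.2, 0.3, 0.4.
beta = [0.2, 0.3, 0.4]
g = np.poly(beta)[1:]            # g1, g2, g3

# Coefficient matching in controllable canonical form: Ki = gi - ai.
K = g - a

# Build A, B in controllable canonical form and verify the pole placement.
A = np.vstack([-a, np.eye(2, 3)])        # rows: [-a1 -a2 -a3], [1 0 0], [0 1 0]
B = np.array([[1.0], [0.0], [0.0]])
cl_poles = np.linalg.eigvals(A - B @ K.reshape(1, -1))
print(np.sort(cl_poles))                 # should recover beta
```

The eigenvalues of A − BK come back at the chosen β positions, confirming the Ki = γi − ai rule.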
Controllable canonical form (continued)

The closed-loop matrix is still in controllable canonical form,

             [ −a1 − K1   −a2 − K2   −a3 − K3 ]
    A − BK = [    1           0          0    ]
             [    0           1          0    ]

and so,

    det(zI − A + BK) = z^3 + (a1 + K1) z^2 + (a2 + K2) z + (a3 + K3).

Equating coefficients with the desired closed-loop poles is easy. Say the desired
closed-loop poles are given by,

    det(zI − A + BK) = (z − β1)(z − β2)(z − β3)
                     = z^3 + γ1 z^2 + γ2 z + γ3 = 0.

Then,

    K1 = γ1 − a1,   K2 = γ2 − a2,   K3 = γ3 − a3.

General case: Ackermann’s Formula

Given the state equation (A, B), and desired closed-loop pole positions (specified by
coefficients γ1, · · · , γn), Ackermann’s formula calculates K.

First form the “controllability matrix”,

    C = [ B  AB  A^2 B  · · ·  A^(n−1) B ]    (an n × n matrix when nu = 1).

If the desired closed-loop pole positions are given by,

    z^n + γ1 z^(n−1) + γ2 z^(n−2) + · · · + γn = 0,

then define:

    γc(A) := A^n + γ1 A^(n−1) + · · · + γn I    ←− this is an n × n matrix.

Ackermann’s formula:

    K = [ 0  · · ·  0  1 ] C^(−1) γc(A).

In Matlab, K can be calculated by the command: place.
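Ackermann's formula translates almost directly into code. A sketch in pure NumPy for the single-input case; the example system (a discretized double integrator) and pole choices are illustrative:

```python
import numpy as np

def acker(A, B, desired_poles):
    """Ackermann's formula: K = [0 ... 0 1] C^{-1} gamma_c(A), single input."""
    n = A.shape[0]
    # Controllability matrix C = [B, AB, ..., A^{n-1} B].
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # gamma_c(A) = A^n + g1 A^{n-1} + ... + gn I, from the desired polynomial.
    gamma = np.poly(desired_poles)                 # [1, g1, ..., gn]
    gc = sum(gamma[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
    e = np.zeros((1, n))
    e[0, -1] = 1.0                                 # the row [0 ... 0 1]
    return e @ np.linalg.inv(C) @ gc

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])    # illustrative discretized double integrator
B = np.array([[0.005],
              [0.1]])
K = acker(A, B, [0.5, 0.6])
print(np.sort(np.linalg.eigvals(A - B @ K)))   # should be near 0.5 and 0.6
```

As the notes say, this only works when C is invertible; `np.linalg.inv(C)` raises an error otherwise, which is exactly the uncontrollable case discussed next.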
Controllability

Note that Ackermann’s formula,

    K = [ 0  · · ·  0  1 ] C^(−1) γc(A),

will not work if C^(−1) does not exist.

Controllability matrix:

    C = [ B  AB  A^2 B  · · ·  A^(n−1) B ].

A system is called “controllable” if and only if C is invertible (equivalently,
rank(C) = n).

Formal definition:

A system is controllable (or state controllable) if and only if for any given initial
state, x(t0) = x0, and any given final state, x1, specified at an arbitrary future time,
t1 > t0, there exists a control input u(t) that will drive the system from x(t0) = x0 to
x(t1) = x1.

Controllability under similarity transforms

What is the effect of a similarity transform, x(k) = V ξ(k)? This gives,

    ξ(k + 1) = V^(−1) A V ξ(k) + V^(−1) B u(k),

and the controllability matrix is now,

    Cξ = [ (V^(−1) B)  (V^(−1) A V V^(−1) B)  (V^(−1) A V V^(−1) A V V^(−1) B)  · · · ]
       = V^(−1) [ B  AB  A^2 B  · · ·  A^(n−1) B ]
       = V^(−1) C.

Because V^(−1) is invertible, rank(Cξ) = rank(C).

In other words, controllability is not affected by a similarity transform. Cξ ≠ C in
general, but if one is invertible, so is the other.

We can transform (via similarity) the system into controllable canonical form iff the
system is controllable.

Note that controllability applies to the state of a system. It doesn’t make sense in terms
of transfer functions (which can always be expressed as controllable canonical forms).
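Both facts above, the rank test and its invariance under similarity transforms, can be verified in a few lines. The system and transform V below are illustrative:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^{n-1} B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # illustrative system
B = np.array([[0.0],
              [0.1]])

C = ctrb(A, B)
print(np.linalg.matrix_rank(C))                 # rank n = 2: controllable

# Similarity transform x = V xi: the transformed controllability matrix is
# V^{-1} C, so its rank (and hence controllability) is unchanged.
V = np.array([[2.0, 1.0],
              [0.0, 1.0]])
Cxi = ctrb(np.linalg.inv(V) @ A @ V, np.linalg.inv(V) @ B)
print(np.allclose(Cxi, np.linalg.inv(V) @ C))   # True: Cxi = V^{-1} C
print(np.linalg.matrix_rank(Cxi))               # still rank 2
```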
Controllability as a design tool

Consider a two-input, two-output system,

    [ x1(k + 1) ]   [ 1  T ] [ x1(k) ]   [ T^2/2   0 ] [ u1(k) ]
    [ x2(k + 1) ] = [ 0  1 ] [ x2(k) ] + [   0     T ] [ u2(k) ]

    [ y1(k) ]   [ 1  0 ] [ x1(k) ]
    [ y2(k) ] = [ 0  1 ] [ x2(k) ] ,

where T is the sample period.

This system is unstable (where are the poles?)

Suppose that implementing u1(k) costs $50 and implementing u2(k) costs $100. What is the
minimum cost to stabilize the system: $50, $100 or $150?

Controllability can answer questions like this and is useful when actually building
systems to be controlled.

Another example: Flexible space structures

Large space telescopes:

• Flexible modes must be suppressed
• Limited number of piezoelectric actuators
• Many locations for placing actuators
• Which locations give the most controllable structure?
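The actuator-cost question can be settled by checking controllability with each column of B on its own, i.e. with only one actuator installed. A quick check (T = 0.1 is an arbitrary sample period chosen for the computation):

```python
import numpy as np

T = 0.1   # arbitrary sample period for this check
A = np.array([[1.0, T],
              [0.0, 1.0]])
B = np.array([[T**2 / 2, 0.0],
              [0.0,      T  ]])

def ctrb_rank(A, Bcol):
    """Rank of [Bcol, A Bcol] for this 2-state system."""
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, i) @ Bcol for i in range(n)])
    return np.linalg.matrix_rank(C)

# u1 alone (the $50 actuator) enters only the x1 equation; x2 evolves as
# x2(k+1) = x2(k) regardless, so its pole at z = 1 cannot be moved.
print(ctrb_rank(A, B[:, [0]]))   # 1: not controllable with u1 alone

# u2 alone (the $100 actuator) reaches x1 through the integrator chain.
print(ctrb_rank(A, B[:, [1]]))   # 2: controllable with u2 alone
```

Under these matrices, u2 by itself gives full rank while u1 by itself does not, which suggests the minimum stabilization cost is $100.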
A simple pole placement example

            −1
    P(s) = -----    ←− an unstable system
           s − 1

The ZOH equivalent of this system is,

           1 − e^T
    P(z) = --------    ←− pole at z = e^T (outside the unit circle)
           z − e^T

One particular state-space representation is,

    x(k + 1) = e^T x(k) + (1 − e^T) u(k)
        y(k) = 1 x(k)

In this case we are measuring the entire state (1 pole = 1 state), and so state feedback
is the same as output feedback.

Pole placement: let’s try β = 0 (deadbeat control). The desired closed-loop characteristic
polynomial is simply γc(z) = z = 0.

Example continued

The closed-loop state-space system with u(k) = −K x(k) is,

    x(k + 1) = (A − BK) x(k)
             = (e^T − (1 − e^T) K) x(k).

And to get the only pole at zero means that A − BK = 0:

    e^T − (1 − e^T) K = 0,  which implies that  K = e^T / (1 − e^T).

Controllability?

    C = [ B ] = 1 − e^T ≠ 0,  so it is controllable.

Other pole positions:

    for β = 0.5,   K = (e^T − 0.5) / (1 − e^T),
    for β = 0.75,  K = (e^T − 0.75) / (1 − e^T).

Simulate these choices, with intersample behavior.
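The suggested simulation (at the sample instants only) takes a few lines. T = 0.5 is an arbitrary sample period for the sketch:

```python
import math

T = 0.5                    # illustrative sample period
a = math.exp(T)            # open-loop pole e^T > 1 (unstable)
b = 1 - math.exp(T)        # input coefficient 1 - e^T

def simulate(K, x0=1.0, steps=5):
    """Closed-loop x(k+1) = (a - b*K) x(k), i.e. u(k) = -K x(k)."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x = (a - b * K) * x
        traj.append(x)
    return traj

# Deadbeat (beta = 0): K = e^T / (1 - e^T) places the pole at z = 0,
# so the state reaches zero (to rounding error) in one step.
K_deadbeat = a / b
print(simulate(K_deadbeat))

# beta = 0.5: a smaller gain, with geometric decay x(k) ~ 0.5^k instead.
K_half = (a - 0.5) / b
print(simulate(K_half))
```

This only shows the state at the sample instants; the intersample behavior mentioned in the notes would need the continuous-time plant between samples.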
Pole position versus actuator effort:

[Figure: simulated responses over sample indices 0–10. Top plot: state x(k); bottom plot:
input u(k). Three gains are compared:

    K = e^T / (1 − e^T),           β = 0,
    K = (e^T − 0.5) / (1 − e^T),   β = 0.5,
    K = (e^T − 0.75) / (1 − e^T),  β = 0.75.]

“Nice” intersample behavior is due to the simplicity of the system (1 state).

State feedback: picking good closed-loop pole locations

Choosing the closed-loop pole locations can be difficult. You know the general good
locations (in the z-plane and s-plane). But, ...

1. Look at the transient responses of the system and identify the various pole locations.
   Change the ones that you don’t like.

2. The further that you move the poles the greater the gain that you will require. This
   can lead to cost and/or saturation problems (i.e., move them as little as possible).

3. Take an “optimal control” course to find out the best tradeoff between fast response
   and input signal size. This can be viewed as finding K which minimizes the following
   “cost”:

        ∞
        Σ   x(k)^T Q x(k) + u(k)^T R u(k),
       k=0

   where Q and R are positive definite weighting matrices for the state error and the
   control action respectively. This is known as a Linear Quadratic Regulator (LQR)
   problem. ECE 230a and ECE 147c cover this material.