5 Design of Digital Control Using the State-Space Method
As we saw in Chapter 4, the advantages of the state-space formulation
are especially apparent when designing controllers for multiple-input,
multiple-output (MIMO) systems, that is, those with more than one
control input and/or sensed output.
5.1 Control-Law Design
The state-space description of a discrete system is given
by Equation (5.1).

x(k + 1) = Gx(k) + Hu(k)
y(k) = Cx(k) + Du(k)    (5.1)
One of the attractive features of state-space design methods is that the
procedure consists of two independent steps. The first step assumes that
we have all the state elements at our disposal for feedback purposes. The
assumption that all states are available merely allows us to proceed with
the first design step, namely, the control law. The remaining step is to
design an "estimator" (or "observer"), which estimates the entire state
vector, given measurements of the portion of the state provided by
Equation (5.1). The final control algorithm will consist of a combination
of the control law and the estimator with the control law calculations
based on the estimated states rather than on the actual states.
As for the continuous case, the control law is simply the feedback of a
linear combination of all the state elements, that is
u = −Kx = −[K1 K2 · · ·] [x1 x2 · · ·]^T    (5.2)
Note that this structure does not allow for a reference input to the
system. The control law, Equation (5.2), assumes that r=0 and is,
therefore, usually referred to as a regulator.
Figure 5.1: Closed-loop control law design

Substituting Equation (5.2) into Equation (5.1), we have
x(k + 1) = Gx(k) − HKx(k)    (5.3)
Therefore the z-transform of Eq. (5.3) is
(zI − G + HK)X(z) = 0
and the characteristic equation of the system with the hypothetical
control law is
|zI − G + HK| = 0 (5.4)
Pole Placement: The approach we wish to take at this point is pole
placement; that is, having picked a control law with enough parameters
to influence all the closed-loop poles, we will arbitrarily select the desired
pole locations of the closed-loop system and see if the approach will work.
The control-law design, then, consists of finding the elements of K so that
the roots of Eq. (5.4), that is, the poles of the closed-loop system, are in
the desired locations. Unlike classical design, where we iterated on
parameters in the compensator hoping to find acceptable closed-loop root
locations, the full-state-feedback, pole-placement approach guarantees
success and allows us to pick any pole locations arbitrarily, provided that n
poles are specified for an nth-order system.
Given desired pole locations, say
zi = β1, β2, β3, . . . , βn,
the desired control-characteristic equation is
αc(z) = (z − β1)(z − β2) · · · (z − βn) = 0 (5.5)
Equations (5.4) and (5.5) are both the characteristic equation of the
controlled system; therefore, they must be identical, term by term. Thus
we see that the required elements of K are obtained by matching the
coefficients of each power of z in Eq. (5.4) and Eq. (5.5), and there will
be n equations for an nth-order system.
Example 1: Design a control law for the satellite attitude-control system
described by Eq. (5.6).
" # "
x(k + 1) = 1 T T 2/2
x(k) # u(k)
+
0 1
(5.6)
T
Pick the z-plane roots of the closed-loop characteristic equation so that the
equivalent s-plane roots have a damping ratio of ζ = 0.5 and real part of s =
−1.8 rad/sec (i.e., s = −1.8 ± j3.12 rad/sec). Use a sample period of T = 0.1
sec.
Solution: Using zero-pole matching, z = e^{sT}, with a sample period of
T = 0.1 sec, we find that s = −1.8 ± j3.12 rad/sec translates to
z = 0.8 ± j0.25. The desired characteristic equation is then given by

z^2 − 1.6z + 0.70 = 0    (5.7)
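The zero-pole mapping step is easy to check numerically; the following sketch (not part of the original text) computes z = e^{sT} and the resulting polynomial coefficients:

```python
import numpy as np

# s-plane specification: zeta = 0.5, real part -1.8 rad/sec
s = -1.8 + 3.12j          # one of the complex-conjugate pair
T = 0.1                   # sample period, sec

# Zero-pole matching: z = e^{sT}
z = np.exp(s * T)
print(z)                  # roughly 0.8 + j0.25

# Desired characteristic polynomial (z - z1)(z - conj(z1))
coeffs = np.poly([z, np.conj(z)]).real
print(coeffs)             # roughly [1, -1.6, 0.70], i.e. z^2 - 1.6z + 0.70
```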
and the evaluation of Eq. (5.4) for any control law K leads to

|zI − G + HK| = 0

| z[1 0; 0 1] − [1 T; 0 1] + [T^2/2; T][K1 K2] | = 0

or

z^2 + (TK2 + (T^2/2)K1 − 2)z + (T^2/2)K1 − TK2 + 1 = 0    (5.8)
Equating coefficients in Eqs. (5.7) and (5.8) with like powers of z, we
obtain two simultaneous equations in the two unknown elements of K:

TK2 + (T^2/2)K1 − 2 = −1.6
(T^2/2)K1 − TK2 + 1 = 0.70,

which are easily solved for the coefficients and evaluated for T = 0.1 sec:

K1 = 0.10/T^2 = 10,    K2 = 0.35/T = 3.5
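As a quick check of these gains, one can form G − HK numerically and confirm that its characteristic polynomial matches Eq. (5.7); this is a verification sketch, not part of the original example:

```python
import numpy as np

T = 0.1
G = np.array([[1.0, T], [0.0, 1.0]])   # plant matrix from Eq. (5.6)
H = np.array([[T**2 / 2], [T]])
K = np.array([[10.0, 3.5]])            # gains found above

# Closed-loop characteristic polynomial of G - HK
coeffs = np.poly(G - H @ K)
print(coeffs)                          # [1, -1.6, 0.7] -> z^2 - 1.6z + 0.70
```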
The calculation of the gains using the method illustrated in the previous
example becomes rather tedious when the order of the system is greater
than 2. Therefore, other approaches have been developed to provide
convenient computer-based solutions to this problem.
The algebra for finding the specific value of K is especially simple if the
system matrices happen to be in controllable canonical form. Let's
assume a discrete transfer function as given in Eq. (5.9).
Y(z)/U(z) = (b0 z^3 + b1 z^2 + b2 z + b3) / (z^3 + a1 z^2 + a2 z + a3)    (5.9)
The controllable canonical form for Eq. (5.9), assuming b0 = 0, will be

x(k + 1) = [0 1 0; 0 0 1; −a3 −a2 −a1] x(k) + [0; 0; 1] u(k)    (5.10)

y(k) = [b3 b2 b1] x(k)    (5.11)
Note from Eq. (5.9) that the characteristic polynomial of this system is
a(z) = z^3 + a1z^2 + a2z + a3. The key idea here is that the elements of
the last row of Gc are exactly the negatives of the coefficients of the
characteristic polynomial of the system. If we now form the closed-loop
system matrix Gc − HcK, we find

Gc − HcK = [0 1 0; 0 0 1; −a3 − K1  −a2 − K2  −a1 − K3]    (5.12)
By inspection, we find that the characteristic equation of Eq. (5.12) is

z^3 + (a1 + K3)z^2 + (a2 + K2)z + (a3 + K1) = 0

Thus, if the desired pole locations result in the characteristic equation
z^3 + α1z^2 + α2z + α3 = 0, then the necessary values for the control
gains are

K1 = α3 − a3,    K2 = α2 − a2,    K3 = α1 − a1    (5.13)
Conceptually, then, we have the canonical-form design method: Given an
arbitrary discrete transfer function and a desired characteristic equation
α(z) = 0, we convert to controllable form (Gc, Hc) and solve for the gain by
Eq. (5.13).
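The canonical-form method can be sketched in a few lines of code. The plant coefficients and desired poles below are hypothetical, chosen only to illustrate the idea; the gains are obtained by matching the closed-loop last row of Eq. (5.12) against the desired polynomial:

```python
import numpy as np

def canonical_gain(a, alpha):
    """Gains K = [K1, K2, K3] that move the last row of Gc - Hc*K from
    [-a3, -a2, -a1] to [-alpha3, -alpha2, -alpha1], placing the poles."""
    a1, a2, a3 = a
    al1, al2, al3 = alpha
    return np.array([al3 - a3, al2 - a2, al1 - a1])

# Hypothetical 3rd-order plant: a(z) = z^3 - 1.5z^2 + 0.7z - 0.1
a = (-1.5, 0.7, -0.1)
# Desired poles at z = 0.2, 0.3, 0.4
alpha = np.poly([0.2, 0.3, 0.4])[1:]   # [-0.9, 0.26, -0.024]

K = canonical_gain(a, alpha)

# Verify: the closed-loop companion matrix has the desired polynomial
Gc_HcK = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [-a[2] - K[0], -a[1] - K[1], -a[0] - K[2]]])
print(np.poly(Gc_HcK))                 # roughly [1, -0.9, 0.26, -0.024]
```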
5.2 Estimator Design
The control law design in the last section assumed that all state elements
were available for feedback. Typically, not all elements are measured;
therefore, the missing portion of the state needs to be reconstructed for
use in the control law. We will first discuss methods to obtain an estimate
of the entire state given a measurement of one of the state elements. This
will provide the missing elements as well as provide a smoothed value of the
measurement, which is often contaminated with random errors or "noise".
There are two basic kinds of estimates of the state x(k): we call it the
current estimate, x̂(k), if it is based on measurements y(k) up to and
including the kth instant; and we call it the predictor estimate, x̄(k), if it
is based on measurements up to y(k − 1). The idea eventually will be to let
u = −Kx̂(k) or u = −Kx̄(k), replacing the true state used in Eq. (5.2) by
its estimate.
5.2.1 Prediction Estimators
One method of estimating the state vector which might come to mind is
to construct a model of the plant dynamics,
x̄(k + 1) = Gx̄(k) + Hu(k)    (5.14)

We know G, H, and u(k), and hence this estimator should work if we can
obtain the correct x(0) and set x̄(0) equal to it. Figure (5.2) depicts this
"open-loop" estimator. If we define the error in the estimate as

x̃ = x − x̄,    (5.15)

and substitute Eqs. (5.1) and (5.14) into Eq. (5.15), we find that the
dynamics of the resulting system are described by the estimator-error
equation

x̃(k + 1) = Gx̃(k)    (5.16)
Thus, if the initial value of x̄ is off, the dynamics of the estimate error
are those of the uncompensated plant, G. For a marginally stable or
unstable plant, the error will never decrease from its initial value. For an
asymptotically stable plant, an initial error will decrease only because the
plant state and the estimate will both approach zero. Basically, the
estimator is running open loop and is not utilizing any continuing
measurements of the system's behavior, so we would expect it to diverge
from the truth. However, if we feed back the difference between the
measured output and the estimated output and constantly correct the
model with this error signal, the divergence should be minimized.
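For the double-integrator plant of Example 1, which is marginally stable, the open-loop error dynamics of Eq. (5.16) are easy to demonstrate numerically; in this sketch (illustration only) an initial velocity-estimate error of 1 never decays and drives a linearly growing position-estimate error:

```python
import numpy as np

T = 0.1
G = np.array([[1.0, T], [0.0, 1.0]])  # double-integrator plant of Example 1

# Open-loop estimator error dynamics, Eq. (5.16): x~(k+1) = G x~(k)
err = np.array([0.0, 1.0])            # zero position error, velocity error 1
for _ in range(50):
    err = G @ err

# The velocity error never decays; the position error grows linearly
print(err)                            # roughly [5.0, 1.0] after 50 steps
```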
Figure 5.2: Open-loop estimator
The idea is to construct a feedback system around the open-loop
estimator with the estimated output error as the feedback. This scheme
is shown in Figure (5.3); the equation for it is

x̄(k + 1) = Gx̄(k) + Hu(k) + Lp[y(k) − Cx̄(k)],    (5.17)

where Lp is the feedback gain matrix. We call this a prediction estimator
because a measurement at time k results in an estimate of the state
vector that is valid at time k + 1; that is, the estimate has been predicted
one cycle into the future. A difference equation describing the behavior of
the estimation errors is obtained by subtracting Eq. (5.17) from Eq. (5.1).
The result is

x̃(k + 1) = [G − LpC]x̃(k),    (5.18)
Figure 5.3: Closed-loop estimator
This is a homogeneous equation, but the dynamics are given by [G −
LpC]; and if this system matrix represents an asymptotically stable
system, x̃ will converge to zero for any value of x̃(0). In other words,
x̄(k) will converge toward x(k) regardless of the value of x̄(0) and
could do so faster than the normal (open-loop) motion of x(k) if the
estimator gain, Lp, were large enough so that the roots of [G − LpC] are
sufficiently fast. In an actual implementation, x̄(k) will not equal x(k)
because the model is not perfect, there are unmodelled disturbances, and
the sensor has some errors and added noise. However, typically the
sensed quantity and Lp can be chosen so that the system is stable and the
error is acceptably small.
To find the value of Lp, we take the same approach that we did when
designing the control law. First, specify the desired estimator pole
locations in the z-plane to obtain the desired estimator characteristic
equation,
(z − β1)(z − β2) · · · (z − βn) = 0    (5.19)
where the β’s are the desired estimator pole locations and represent how
fast the estimator state vector converges towards the plant state vector.
Then form the characteristic equation from the estimator-error equation
(5.18),
|zI − G + LpC| = 0    (5.20)
Equations (5.19) and (5.20) must be identical. Therefore, the coefficients
of each power of z must be the same, and, just as in the control case, we
obtain n equations in the n unknown elements of Lp for an nth-order system.
Example 2: Construct an estimator for the same case as in Example 1,
where the measurement is the position state element, x1, so that C = [1
0]. Pick the desired poles of the estimator to be at z = 0.4 ± j0.4.
Solution: The desired characteristic equation is then (approximately)

z^2 − 0.8z + 0.32 = 0,    (5.21)
and the evaluation of Eq. (5.20) for any estimator gain Lp leads to

|zI − G + LpC| = 0

| z[1 0; 0 1] − [1 T; 0 1] + [Lp1; Lp2][1 0] | = 0

or

z^2 + (Lp1 − 2)z + TLp2 + 1 − Lp1 = 0    (5.22)
Equating coefficients in Eqs. (5.21) and (5.22) with like powers of z, we
obtain two simultaneous equations in the two unknown elements of Lp:

Lp1 − 2 = −0.8
TLp2 + 1 − Lp1 = 0.32,

which are easily solved for the coefficients and evaluated for T = 0.1 sec:

Lp1 = 1.2,    Lp2 = 0.52/T = 5.2    (5.23)
Thus the estimator algorithm would be Eq. (5.17) with Lp given by Eq.
(5.23), and the equations to be coded in the computer are

x̄1(k + 1) = x̄1(k) + 0.1x̄2(k) + 0.005u(k) + 1.2[y(k) − x̄1(k)]
x̄2(k + 1) = x̄2(k) + 0.1u(k) + 5.2[y(k) − x̄1(k)]
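The two update equations above can be exercised directly. The sketch below (initial conditions chosen for illustration) drives the prediction estimator of Eq. (5.17) with the output of the true plant and shows the estimate error converging to zero:

```python
import numpy as np

T = 0.1
G = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([T**2 / 2, T])
C = np.array([1.0, 0.0])
Lp = np.array([1.2, 5.2])      # gains from Eq. (5.23)

x = np.array([0.0, 0.0])       # true state
xbar = np.array([0.0, -1.0])   # estimate: velocity off by 1 rad/sec
u = 0.0                        # regulator at rest; no input for this test

for k in range(100):
    y = C @ x                  # measurement of the position state
    # Prediction estimator, Eq. (5.17)
    xbar = G @ xbar + H * u + Lp * (y - C @ xbar)
    x = G @ x + H * u          # true plant, Eq. (5.1)

print(x - xbar)                # estimate error -> essentially zero
```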
Figure 5.4 shows the time history of the estimator error from Eq. (5.18)
for the gains in Eq. (5.23) and with an initial error of 0 for the position (x1)
estimate and 1 rad/sec for the velocity (x2) estimate.
The transient settling in x̃2 could be sped up by the higher values of the
gain, Lp, that would result from selecting faster estimator poles, but this
would occur at the expense of a larger response of both x̃1 and x̃2 to
measurement noise.
It is important to note that an initial estimator transient or, equivalently,
the occurrence of an unmodelled input to the plant, can be a rare event.
If the problem is one of regulation, the initial transient might be
unimportant compared to the long-term performance of the estimator in
the presence of noisy measurements. In the regulator case with very
small plant disturbances, very slow poles and their low estimator gains
would give smaller estimate errors.
Figure 5.4: Time history of the prediction estimator error
5.2.2 Current Estimators
As was already noted, the previous form of the estimator equation, (5.17),
arrives at the state vector estimate x̄ after receiving measurements up
through y(k − 1). This means that the current value of control does not
depend on the most current value of the observation and thus might not
be as accurate as it could be. For high-order systems controlled with a
slow computer or any time the sample periods are comparable to the
computation time, this delay between the observation instant and the
validity time of the control output can be a blessing because it allows
time for the computer to complete the calculations. In many systems,
however, the computation time required to evaluate Eq. (5.17) is quite
short compared to the sample period, and the delay of almost a cycle
between the measurement and the proper time to apply the resulting
control calculation represents an unnecessary waste. Therefore, it is
useful to construct an alternative estimator formulation that provides a
current estimate, x̂(k), based on the current measurement, y(k).
Modifying Eq. (5.17) to yield this feature, we obtain
x̂(k) = x̄(k) + Lc[y(k) − Cx̄(k)],    (5.24)

where x̄(k) is the predicted estimate based on a model prediction from
the previous time estimate, that is,

x̄(k) = Gx̂(k − 1) + Hu(k − 1).    (5.25)
Control from this estimator cannot be implemented exactly because it is
impossible to sample, perform calculations, and output with absolutely no
time elapsed. However, the calculation of u(k) based on Eq. (5.24) can be
arranged to minimize computational delays by performing, before the
sample instant, all calculations that do not directly depend on the y(k)
measurement.
To help understand the difference between the prediction and current
forms of the estimator, that is, Eqs. (5.17) and (5.24), it is useful to
substitute Eq. (5.24) into Eq. (5.25). This results in

x̄(k + 1) = Gx̄(k) + Hu(k) + GLc[y(k) − Cx̄(k)]    (5.26)

Furthermore, the estimation-error equation for x̄(k), obtained by
subtracting Eq. (5.26) from Eq. (5.1), is

x̃(k + 1) = [G − GLcC]x̃(k)    (5.27)
By comparing Eqs. (5.26) with (5.17) and (5.27) with (5.18), we can
conclude that x̄ in the current estimator equation, (5.24), is the same
quantity as x̄ in the predictor estimator equation, (5.17), and that the
estimator gain matrices are related by

Lp = GLc    (5.28)
The relationship between the two estimates is further illuminated by
writing Eqs. (5.24) and (5.25) as a block diagram, as in Figure 5.5. It
shows that x̂ and x̄ represent different outputs of the same estimator
system. We can also determine the estimator-error equation for x̂ by
subtracting Eq. (5.24) from Eq. (5.1). The result is

x̃(k + 1) = [G − LcCG]x̃(k)    (5.29)
The two error equations, (5.27) and (5.29), can be shown to have the
same roots, as should be the case because they simply represent the
dynamics of different outputs of the same system. Therefore, we could use
either form as the basis for computing the estimator gain, Lc. Using Eq.
(5.29), we note that it is similar to Eq. (5.18) except that CG appears
instead of C.
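Both facts, the gain relation (5.28) and the equality of the roots of (5.27) and (5.29), can be checked numerically for the Example 2 system; this is a verification sketch using the gains computed earlier:

```python
import numpy as np

T = 0.1
G = np.array([[1.0, T], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Lp = np.array([[1.2], [5.2]])   # prediction-estimator gains, Eq. (5.23)

# Eq. (5.28): Lp = G Lc, so Lc = G^{-1} Lp
Lc = np.linalg.solve(G, Lp)

# Error dynamics of the two estimates, Eqs. (5.27) and (5.29)
Abar = G - G @ Lc @ C           # prediction-estimate error dynamics
Ahat = G - Lc @ C @ G           # current-estimate error dynamics

print(np.sort_complex(np.linalg.eigvals(Abar)))
print(np.sort_complex(np.linalg.eigvals(Ahat)))  # same roots: 0.4 +/- j0.4
```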
Figure 5.5: Estimator block diagram