
602 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. AC-30, NO. 6, JUNE 1985
Then by using (5) and (9) the equality (8) yields

Σ_{i=0}^{n} Σ_{j=0}^{n} a_{ij} φ(i+k, j+t)
  = Σ_{i=0}^{n} Σ_{j=0}^{n} a_{ij} [φ(i+1+k, j+1+t) − A₁φ(i+1+k, j+t) − A₂φ(i+k, j+1+t) − A₀φ(i+k, j+t)].   (10)

Then, for k > 0 or t > 0, we ought to consider the following cases:
i) for i + k > 0 and j + t > 0 the right side of (10) is equal to zero; it follows from Theorem 1;
ii) for i + k + 1 < 0 or j + t + 1 < 0 the right side of (10) is equal to zero according to the second property of stm(2-DGM);
iii) for i + k + 1 = 0 and j + t > 0 the right side of the above equation and Theorem 1 yield

φ(0, j + 1 + t) − A₁φ(0, j + t) = A₁φ(0, j + t) − A₁φ(0, j + t) = 0;

and
iv) for i + k > 0 and j + t + 1 = 0 the right side of (10) is equal to zero analogously to iii).
Thus, we get

Σ_{i=0}^{n} Σ_{j=0}^{n} a_{ij} φ(i + k, j + t) = 0,  for k > 0 or t > 0 or k = t = 0

which is equivalent to (6). For completing the proof one should note that for k = 1 and t = −n the above equality may be rewritten as a single sum over i = 0, 1, …, n. Next use k = 1 and t = −n + 1, etc. ∎
Remarks:
1) Based on the proof one can note, for k > 0 and 0 ≤ t < n, that

Σ_{i=0}^{n} Σ_{j=0}^{n} a_{ij} φ(i + k, j + t) = 0.

The similar formula for 0 < k < n and t > 0 is obvious.
2) For k = t = 0 Theorem 3 may be written as a generalization of the well-known Cayley-Hamilton theorem: an stm(2-DGM) fulfills ce(2-DGM), i.e.,

Σ_{i=0}^{n} Σ_{j=0}^{n} a_{ij} φ(i, j) = 0.

However, taking into account the presented definitions and proof of the theorem we can state a more general result.
Theorem 4: A state-transition matrix of a linear time-invariant digital system always satisfies the characteristic equation of the system. □
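For an ordinary 1-D system x_{k+1} = Ax_k, Theorem 4 specializes to the classical Cayley-Hamilton theorem, which is easy to verify numerically; a minimal sketch (NumPy assumed, the example matrix is an arbitrary choice):

```python
import numpy as np

# Arbitrary example matrix (hypothetical 1-D system matrix).
A = np.array([[0.0, 1.0],
              [-0.5, 1.2]])

# Coefficients of det(zI - A), highest power first: [1, c1, c0].
coeffs = np.poly(A)

# Cayley-Hamilton: substituting A into its characteristic polynomial gives 0.
n = A.shape[0]
residual = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))

assert np.allclose(residual, np.zeros((n, n)))
```

The same substitution, with a double-indexed polynomial and φ(i, j) in place of the matrix powers, is what the 2-D statement above asserts.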
V. CONCLUDING REMARKS
The general state-space model of a 2-D linear digital system is considered in this note. However, the results may be easily applied to any linear digital system, e.g., N-D. Moreover, we must point out that the concepts of state-transition matrix, etc., apply to continuous systems, too.
By using the proposed definition of the state-transition matrix of a causal system it is easy to find a recursive formula for state-transition matrix calculation, as shown in Theorem 1. Then, having a state-transition matrix, one can simply get a general response formula for a system with any boundary conditions, analogously to Theorem 2.
It is rather evident, e.g., [4], [5], that the state-transition matrix is essential for structural properties of a system, for instance, stability, controllability, and observability. Then Theorem 3 and the characteristic function of the model definition can be useful when the state-transition matrices are calculated.
Finally, it should be noted that the presented theorems are valid for matrices Ai and Bi, i = 0, 1, 2, over any field, not only the real field as was assumed in Section II.
ACKNOWLEDGMENT
The author wishes to express his grateful thanks to Prof. T. Kaczorek
from the Technical University of Warsaw for stimulating discussions on
two-dimensional systems theory.
REFERENCES
[1] S. Attasi, "Systèmes linéaires homogènes à deux indices," Rapport Laboria, vol. 31, Sept. 1973.
[2] E. Fornasini and G. Marchesini, "State-space realization theory of two-dimensional filters," IEEE Trans. Automat. Contr., vol. AC-21, no. 4, pp. 484-492, Aug. 1976.
[3] E. Fornasini and G. Marchesini, "Doubly-indexed dynamical systems: State-space models and structural properties," Math. Syst. Theory, vol. 12, pp. 59-72, 1978.
[4] E. Fornasini and G. Marchesini, "A critical review of recent results on 2-D systems theory," in Proc. 8th IFAC Congress (Preprints), 1981, vol. II, pp. 147-153.
[5] S.-Y. Kung, B. C. Levy, M. Morf, and T. Kailath, "New results in 2-D systems theory, Part II: 2-D state-space models, realization and the notions of controllability, observability and minimality," Proc. IEEE, vol. 65, pp. 945-961, June 1977.
[6] W. Marszalek, "Two dimensional state-space discrete models for hyperbolic partial differential equations," Appl. Math. Model., vol. 8, pp. 11-14, Feb. 1984.
[7] R. R. Roesser, "A discrete state-space model for linear image processing," IEEE Trans. Automat. Contr., vol. AC-20, pp. 1-10, Feb. 1975.
[8] S. G. Tzafestas and T. G. Pimenides, "Exact model-matching control of three-dimensional systems using state and output feedback," Int. J. Syst. Sci., vol. 13, pp. 1171-1187, 1982.
Asymptotic Recovery for Discrete-Time Systems
J. M. MACIEJOWSKI
Abstract-An asymptotic recovery design procedure is proposed for square, discrete-time, linear, time-invariant multivariable systems, which allows a state-feedback design to be approximately recovered by a dynamic output feedback scheme. Both the case of negligible processing time (compared to the sampling interval) and of significant processing time are discussed. In the former case, it is possible to obtain perfect
Manuscript received June 23, 1983; revised October 8, 1984 and October 26, 1984. Paper recommended by Past Associate Editor, D. P. Looze. This work was supported by the Science and Engineering Research Council.
The author is with the Department of Engineering, Cambridge University, Cambridge, England.
0018-9286/85/0600-0602$01.00 © 1985 IEEE
recovery if the plant is minimum-phase and has the smallest possible number of zeros at infinity. In other cases good recovery is frequently possible. New conditions are found which ensure that the return-ratio being recovered exhibits good robustness properties.
I. INTRODUCTION
Asymptotic recovery techniques have been developed by Kwakernaak
[1] and by Doyle and Stein [2], [3] for continuous-time, minimum-phase
systems. These are design techniques which allow the excellent robust-
ness and sensitivity properties of optimal state feedback schemes to be
almost recovered by output feedback schemes. Although this was the
original motivation for the development of these techniques, a wider and
more important aspect of them is that they simplify the use of the LQG
methodology, allowing practical feedback designs to be attained with a
reasonable amount of effort.
Whereas output feedback design via LQG methods usually requires the
specification of two pairs of matrices, namely a pair of cost-weighting
matrices and a pair of noise covariance matrices, the asymptotic recovery
approach requires only one of these pairs to be designed, with the other
pair being assigned values according to an automatic procedure. Further-
more, if the overall design specification is formulated in terms of gain/
frequency characteristics, the designer can obtain considerable guidance
on how to adjust the values of the pair of matrices to be designed, in such
a way as to approach the specification more closely. This results in a
tremendous reduction in the complexity of the design process.
Consequently, it is of great importance to obtain an analogous
procedure for discrete-time systems, in spite of the fact that discrete-time
state feedback schemes do not possess all theattractive features which are
present in the continuous-time case-for example, stability margins and
sensitivity properties are not guaranteed.
We shall consider two cases. In the first case the processing time
required to compute each control signal is negligible when compared to
the interval between observations of plant variables. In this case, the
control signal u_k can be allowed to depend on the output observations up to y_k. In the second case, the processing time is comparable to the observation interval, and u_k can be allowed to depend only on observations up to y_{k-1}, with a consequent impairment of performance.
We shall find that, unlike the continuous-time case, it is possible to
actually obtain perfect recovery for discrete-time systems, under certain
conditions.
II. DESIGN PROCEDURE
We assume that the plant to be controlled is modeled as

x_{k+1} = Ax_k + Bu_k;   y_k = Cx_k   (1)

where u and y are m-dimensional input and output vectors, x is the n-dimensional state vector (n ≥ m), and A, B, C are constant matrices. As
usual, we assume this model is stabilizable and detectable.
Our procedure is the following. First, fictitious process and measurement noise covariance matrices, W and V, are used to obtain a steady-state Kalman filter. As shown in [4], this takes the following form:

x̂_{k+1/k} = A x̂_{k/k-1} + B u_k − K_p (ŷ_{k/k-1} − y_k)   (2)
ŷ_{k/k-1} = C x̂_{k/k-1}   (3)
x̂_{k/k} = x̂_{k/k-1} − K_f (ŷ_{k/k-1} − y_k)   (4)

where

K_p = A K_f   (5)
K_f = P C' (C P C' + V)^{-1}   (6)

and P is the positive semidefinite solution of the Riccati equation

P = A P A' − A P C' (C P C' + V)^{-1} C P A' + W.   (7)
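As a concrete sketch, the steady-state gains (5), (6) can be obtained by solving the Riccati equation (7) with SciPy's discrete-time ARE solver applied to the dual problem (the plant matrices and covariances below are illustrative assumptions, not taken from the note):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative plant and fictitious noise covariances (assumed values).
A = np.array([[0.95, 0.10],
              [0.00, 0.80]])
C = np.array([[1.0, 0.0]])
W = 0.1 * np.eye(2)    # fictitious process noise covariance
V = np.array([[0.5]])  # fictitious measurement noise covariance

# Equation (7) is the dual of the standard control DARE, so pass (A', C', W, V).
P = solve_discrete_are(A.T, C.T, W, V)

# Gains (6) and (5).
Kf = P @ C.T @ np.linalg.inv(C @ P @ C.T + V)
Kp = A @ Kf

# Sanity check: P satisfies (7).
residual = A @ P @ A.T - A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V) @ C @ P @ A.T + W - P
assert np.allclose(residual, 0)
```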
Fig. 1. The structure of the discrete-time observer.
A block-diagram representation of these equations is shown in Fig. 1, in a form which emphasizes that K_p is a feedback matrix, while K_f is a feedforward matrix.
The dynamics of the plant are augmented if necessary, and the W and V matrices are adjusted until the frequency response characteristics of the Kalman filter are those which the designer would like to obtain at the output of the compensated plant. By frequency response characteristics we mean the behavior of indicators such as the characteristic loci or the singular values of the filter's open-loop return ratio

Φ(z) = C(zI − A)^{-1} K_p   (8)

and/or its closed-loop transfer function

Φ(z)[I + Φ(z)]^{-1}.   (9)
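To make this loop-shaping step concrete, one can evaluate the singular values of (8) on a frequency grid and re-tune W and V until the curves match the desired shape; a sketch (illustrative plant and covariances assumed):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative plant and covariances (assumed values).
A = np.array([[0.95, 0.10], [0.00, 0.80]])
C = np.array([[1.0, 0.0]])
W, V = 0.1 * np.eye(2), np.array([[0.5]])

P = solve_discrete_are(A.T, C.T, W, V)  # dual Riccati equation (7)
Kp = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V)

def return_ratio_svals(theta):
    """Singular values of Phi(z) = C (zI - A)^{-1} Kp at z = e^{j*theta}."""
    z = np.exp(1j * theta)
    Phi = C @ np.linalg.solve(z * np.eye(2) - A, Kp)
    return np.linalg.svd(Phi, compute_uv=False)

# Grid over (0, pi]; these are the curves the designer inspects and reshapes.
thetas = np.linspace(0.01, np.pi, 100)
svals = np.array([return_ratio_svals(th) for th in thetas])
```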
Next a cheap optimal state feedback controller is synthesized for the augmented plant, with state weighting matrix Q = C'C and control weighting matrix R = 0. This requires one to solve equations dual to (5), (6), (7) to obtain the state feedback matrix K_c (see [5] for details), which is used to generate the control according to

u_k = −K_c x̂_{k/k}.   (10a)

If it is impractical to use y_k for the estimation of x_k, we replace (10a) by

u_k = −K_c x̂_{k/k-1}.   (10b)

Shaked has recently found closed-form expressions for K_c for the cheap control problem [6]. Note that in the discrete-time case it is quite possible to set R = 0, whereas in the continuous-time case this would lead to the use of infinite gains.
Finally, a feedback compensator is synthesized as the series connection of the Kalman filter and the optimal state-feedback controller in the usual way.

From (2)-(4) and (10a) it follows that the resulting (filtering) compensator is defined by

ξ_{k+1} = (A − BK_c)(I − K_f C)ξ_k + (A − BK_c)K_f e_k   (11)
u_k = K_c(I − K_f C)ξ_k + K_c K_f e_k   (12)

where ξ_k = −x̂_{k/k-1} and e_k = r_k − y_k is the error between a reference signal r_k and the output y_k. If (2)-(4) and (10b) are used, the (predicting) compensator is defined by

ξ_{k+1} = (A − BK_c − K_p C)ξ_k + K_p e_k   (13)
u_k = K_c ξ_k.   (14)
Let H_f(z) be the transfer function of the compensator defined by (11), (12), and let H_p(z) be the transfer function of the compensator defined by (13), (14). Also let

G(z) = C(zI − A)^{-1} B   (15)

be the plant transfer function. We shall show that, if G(z) is minimum-phase and det(CB) ≠ 0, then

G(z)H_f(z) = Φ(z).   (16)
If one or both of these conditions fail to hold, (16) often holds
approximately over a useful frequency range.
III. USE OF H_f(z) WITH det(CB) ≠ 0 AND G(z) MINIMUM-PHASE
First, consider the case in which processing time is negligible, so that the filtering version of the compensator can be used. Suppose that det(CB) ≠ 0 and that G(z) has no finite zeros outside the unit circle (we shall call such G(z) minimum-phase, even though this term should really be reserved for discrete-time systems with neither finite nor infinite zeros outside the unit circle). Let S(A, B, C) denote the plant model (1), and let K_c be the state feedback matrix obtained as the optimal solution to the problem

minimize J = Σ_{k=0}^{∞} y_k' y_k.   (15)

Lemma (Shaked [6]): If det(CB) ≠ 0, then

K_c = (CB)^{-1} CA.   (16)
(This is the simplest case of Shaked's much more general results. It is easily verified that C'C is the solution of the Riccati equation for this problem, from which (16) follows.)
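That parenthetical claim is easy to check numerically: with Q = C'C and R = 0, S = C'C satisfies the control Riccati equation and the associated gain is (CB)^{-1}CA. A sketch (the matrices are an arbitrary example, chosen only so that CB is nonsingular):

```python
import numpy as np

# Arbitrary example with det(CB) != 0 (hypothetical plant).
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.3, 1.0],
              [0.2, 0.0, 0.1]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[0.0, 0.0, 1.0]])

S = C.T @ C  # candidate solution of the control Riccati equation (Q = C'C, R = 0)

# S = A'SA - A'SB (B'SB)^{-1} B'SA + C'C holds exactly:
rhs = A.T @ S @ A - A.T @ S @ B @ np.linalg.inv(B.T @ S @ B) @ B.T @ S @ A + C.T @ C
assert np.allclose(S, rhs)

# The associated state-feedback gain equals Shaked's closed form (16).
Kc = np.linalg.inv(B.T @ S @ B) @ B.T @ S @ A
assert np.allclose(Kc, np.linalg.inv(C @ B) @ C @ A)
```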
It is easy to show that

det(CB) ≠ 0 iff ℝⁿ = range(B) ⊕ ker(C)   (17)

so that

Π = B(CB)^{-1} C   (18)

is the projector onto range(B) along ker(C). Therefore,

range(A − BK_c) = range((I − Π)A) ⊆ ker(C).   (19)

Now H_f(z) can be manipulated into the form

H_f(z) = zK_c[zI − (I − K_f C)(A − BK_c)]^{-1} K_f   (20)
       = zK_c(zI − A + BK_c)^{-1} K_f   (21)

in view of (19).
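The projector algebra behind (17)-(19) can be sanity-checked directly; a sketch (arbitrary hypothetical matrices with CB nonsingular):

```python
import numpy as np

# Arbitrary example (hypothetical plant) with CB nonsingular.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.3, 1.0],
              [0.2, 0.0, 0.1]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[0.0, 0.0, 1.0]])

Pi = B @ np.linalg.inv(C @ B) @ C   # projector (18)
Kc = np.linalg.inv(C @ B) @ C @ A   # cheap-control gain (16)

assert np.allclose(Pi @ Pi, Pi)              # idempotent
assert np.allclose(Pi @ B, B)                # acts as identity on range(B)
assert np.allclose(C @ (np.eye(3) - Pi), 0)  # I - Pi maps into ker(C)
assert np.allclose(C @ (A - B @ Kc), 0)      # range(A - B Kc) in ker(C), eq. (19)
```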
Theorem: If G(z) has no (finite) zeros in {z : |z| > 1} and det(CB) ≠ 0, then

Δ(z) = G(z)H_f(z) − Φ(z) = 0.   (22)

Proof:

Δ(z) = C(zI − A)^{-1}[zBK_c(zI − A + BK_c)^{-1}K_f − K_p]   (23)
     = C(zI − A)^{-1}{zΠA[zI − (I − Π)A]^{-1} − A}K_f   (24)
     = C(zI − A)^{-1}{zΠ[zI − A(I − Π)]^{-1} − I}AK_f   (25)
     = C(zI − A)^{-1}(A − zI)(I − Π)[zI − A(I − Π)]^{-1}AK_f   (26)
     = −C(I − Π)[zI − A(I − Π)]^{-1}AK_f   (27)
     = 0, since C(I − Π) = 0.
This shows that in this case we obtain perfect recovery. Note that, as in the continuous-time case, the recovery does not depend on any properties of K_f: any (stable) observer with the given structure can be recovered, providing that K_p = AK_f.
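A numerical spot-check of the theorem (an arbitrary hypothetical example with det(CB) ≠ 0; K_f is chosen arbitrarily, since the identity does not depend on it): G(z)H_f(z) and Φ(z) agree at sample points on the unit circle.

```python
import numpy as np

# Hypothetical example with det(CB) != 0.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.3, 1.0],
              [0.2, 0.0, 0.1]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[0.0, 0.0, 1.0]])
Kf = np.array([[0.1], [0.2], [0.3]])   # arbitrary filter gain; Kp = A Kf
Kc = np.linalg.inv(C @ B) @ C @ A      # Shaked's cheap-control gain (16)

I = np.eye(3)
def G(z):   return C @ np.linalg.solve(z * I - A, B)
def Hf(z):  return z * (Kc @ np.linalg.solve(z * I - A + B @ Kc, Kf))  # eq. (21)
def Phi(z): return C @ np.linalg.solve(z * I - A, A @ Kf)

# Delta(z) = G(z)Hf(z) - Phi(z) vanishes identically, per (22)-(27).
for theta in np.linspace(0.1, 3.0, 7):
    z = np.exp(1j * theta)
    assert np.allclose(G(z) @ Hf(z), Phi(z))
```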
IV. USE OF H_f(z) WITH G(z) NONMINIMUM-PHASE

It is known [5] that (with Q = C'C and R = 0) the eigenvalues of A − BK_c are located at:
i) those zeros of G(z) which lie in {z : |z| < 1};
ii) the reciprocals of those zeros of G(z) which lie in {z : |z| > 1};
iii) and the remainder at the origin.
It is also known [7] that the condition det(CB) ≠ 0 ensures that G(z) has the maximum possible number (n − m) of finite zeros and the minimum possible number (m) of infinite zeros. The perfect recovery obtained in Section III is only possible because the nonzero poles of H_f(z) cancel the n − m finite zeros of G(z), and the m origin poles of H_f(z) cancel the m origin zeros introduced by the factor z in (21).
The mechanism by which recovery is achieved is thus essentially the same as in the continuous-time case: the compensator cancels the plant zeros and possibly some of the stable poles, and inserts the observer's zeros. Clearly, this will fail if the plant has zeros outside the unit circle, since the compensator H_f(z) guarantees internal stability. This is potentially a more serious limitation for discrete-time than for continuous-time systems, since the standard sampling process is known to introduce zeros, some of which usually lie outside the unit circle.
However, in the following paragraphs we shall show that H_f(z) always cancels those zeros of G(z) which lie inside the unit circle. The importance of this is that the zeros introduced by sampling usually lie near the negative real axis [8], and thus any zeros which remain uncanceled will usually lie near the negative real axis (unless the original continuous-time plant has zeros in the right half-plane [8]). This raises the possibility that G(z)H_f(z) differs significantly from Φ(z) only at high frequencies (0.5 < ωT < π, say, where T is the sampling interval), and that recovery is in effect achieved over the closed-loop bandwidth. The author has observed that this possibility is frequently realized.
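The sampling-zero phenomenon itself is easy to reproduce; a sketch discretizing an illustrative triple integrator 1/s³ under zero-order hold, for which the sampling zeros are known to be the roots of z² + 4z + 1, i.e., −2 ± √3: both on the negative real axis, one outside the unit circle.

```python
import numpy as np
from scipy.linalg import eig
from scipy.signal import cont2discrete

# Triple integrator 1/s^3 in state-space form (illustrative example).
Ac = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
Bc = np.array([[0.], [0.], [1.]])
Cc = np.array([[1., 0., 0.]])
Dc = np.array([[0.]])

Ad, Bd, Cd, Dd, _ = cont2discrete((Ac, Bc, Cc, Dc), dt=0.1, method='zoh')

# Transmission zeros = finite generalized eigenvalues of the system pencil
# [[zI - Ad, -Bd], [Cd, Dd]].
F = np.block([[Ad, Bd], [-Cd, -Dd]])
E = np.zeros((4, 4))
E[:3, :3] = np.eye(3)
w = eig(F, E, right=False)
zeros = np.real(w[np.abs(w) < 1e6])  # discard the eigenvalues at infinity

assert np.allclose(sorted(zeros), [-2 - np.sqrt(3), -2 + np.sqrt(3)])
```

Note that the sampling zeros are independent of the sampling interval here; only the gain of the sampled plant depends on dt.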
The following lemma is due to Shaked, but we provide a much simpler proof.

Lemma (Shaked [6]): Let z_i be a zero of G(z) which is also an eigenvalue of A − BK_c, let z_i ≠ 0, and let w_i be the corresponding eigenvector. Then

w_i ∈ ker(C).   (28)

Proof: Since z_i is a zero of G(z), there exists a matrix K and a vector v_i such that

(A − BK)v_i = z_i v_i   (29)

and

v_i ∈ ker(C)   (30)

(Kailath [9]). Since C is real, v̄_i ∈ ker(C) also. Suppose that Cw_i ≠ 0, and choose x_0 = v_i + v̄_i. Then the cost incurred by using the feedback matrix K is

J_K = Σ_{k=0}^{∞} [z_i^k Cv_i + z̄_i^k Cv̄_i]' [z_i^k Cv_i + z̄_i^k Cv̄_i] = 0   (31)

whereas the cost incurred by using K_c is

J_{K_c} = Σ_{k=0}^{∞} Σ_j z_j^{2k} (x_0, η_j)² (Cw_j)'(Cw_j) > 0   (32)

(where {η_j} is a reciprocal basis corresponding to {w_j}). This contradicts the optimality of K_c. In (32) it is assumed that the set of eigenvectors {w_j} spans the state space. If this is not true then a suitably modified version of (32) leads to the same conclusion.
Corollaries:
i) (I − K_f C)(A − BK_c)w_i = z_i w_i   (33)
ii) (A − BK_c − K_p C)w_i = z_i w_i.   (34)

Corollary i) shows that to each zero of G(z) which lies in {z : |z| < 1} there corresponds an eigenvalue of the compensator's state transition matrix which can potentially cancel that zero. Furthermore, that eigenvalue is also an eigenvalue of (A − BK_c). But it is well known [5] that the closed-loop eigenvalues of the compensated system are the union of the eigenvalues of (A − BK_c) and those of (A − K_p C). It follows that the potential cancellation does in fact take place, since the eigenvalue in question is not shifted when the feedback loop is closed.
V. USE OF H_f(z) WITH det(CB) = 0

If det(CB) = 0, then G(z) has fewer than n − m finite zeros [7]. Even if G(z) is minimum-phase, there are now not enough finite zeros to cancel all the poles introduced by H_f(z), and perfect recovery is again impossible.
We remark that this is not a serious limitation for sampled-data
systems. The sampling process almost always yields a nonsingular
product CB, unless some input or output channel contains a delay whose
duration exceeds the sampling interval.
It is interesting to consider why the restriction det(CB) ≠ 0 does not appear in the continuous-time case. In that case, as the weighting on the control (R) is reduced, some elements of K_c become arbitrarily large, so it is possible for infinite zeros to be canceled (in the limit) by infinite compensator poles [2]. In the discrete-time case all the compensator poles remain bounded as R → 0.
VI. THE USE OF H_p(z)

If the time required to compute the control signal is not negligible, then the controller defined by (13), (14) must be used. This has transfer function

H_p(z) = K_c(zI − A + BK_c + K_p C)^{-1} K_p.   (35)
If G(z) has no finite zeros in {z : |z| > 1} then, by Corollary ii) and the argument used in Section IV, all of its finite zeros are canceled by compensator poles. However, the compensator will introduce a further set of poles (m in number if det(CB) ≠ 0, more than m otherwise) which will not be canceled, and which are not (in general) poles of Φ(z). Again, perfect recovery cannot be obtained in this case.
VII. IS Φ(z) WORTH RECOVERING?

Since Φ(z) does not, in general, possess the excellent performance and robustness properties which are obtained with continuous-time Kalman filters, the designer has to work harder to obtain a useful Φ(z). It is therefore helpful to know the circumstances under which good properties can be guaranteed. Safonov [10] showed that the continuous-time stability margins are obtained if

σ̄(CPC') ≪ σ̲(V)   (36)

where σ̄ (σ̲) denotes the maximum (minimum) singular value of a matrix. He also remarked that this condition is certainly achieved in the limit as the sampling interval is reduced to zero. Here we use (36) to obtain a rather more useful indication of how to choose the sampling interval.
Theorem: Suppose that, for some z_0 = e^{jθ_0}:
i) σ̄[Φ(z_0)] < ε;
ii) σ̄[C(z_0 I − A)^{-1} W^{1/2}] < ε;
iii) σ̲(V) = σ̄(V);
iv) σ̄(CPC') < 2σ̄(V).
Then ε ≪ 1 implies that (36) holds.

Remarks: Condition i) will usually hold over some high-frequency interval (θ_0 ∈ [θ_1, π]), with ε small if the sampling interval is small enough. Both i) and ii) can be expected to hold if either of them holds. Condition iii) need hold only approximately for the theorem to be useful.
Proof: Let

F(z) = I + Φ(z).   (37)

Then, for any X,

σ̄[X − F(z_0)XF(z_0^{-1})'] = σ̄[Φ(z_0)X + XΦ(z_0^{-1})' + Φ(z_0)XΦ(z_0^{-1})']   (38)
≤ σ̄(X)(2ε + ε²).   (39)

Now Arcasoy [11] has shown that

F(z)(CPC' + V)F(z^{-1})' = V + L(z)L(z^{-1})'   (40)

where

L(z) = C(zI − A)^{-1} W^{1/2}.   (41)
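The return-difference identity (40) can be verified numerically as an identity of rational matrices (transpose, not conjugate-transpose); a sketch with an illustrative system and covariances (assumed values):

```python
import numpy as np
from scipy.linalg import solve_discrete_are, sqrtm

# Illustrative system and covariances (assumed).
A = np.array([[0.95, 0.10], [0.00, 0.80]])
C = np.array([[1.0, 0.0]])
W, V = 0.1 * np.eye(2), np.array([[0.5]])

P = solve_discrete_are(A.T, C.T, W, V)  # Riccati equation (7)
Kp = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + V)
Whalf = np.real(sqrtm(W))
I2 = np.eye(2)

def F(z):  # return difference I + Phi(z), eq. (37)
    return np.eye(1) + C @ np.linalg.solve(z * I2 - A, Kp)

def L(z):  # eq. (41)
    return C @ np.linalg.solve(z * I2 - A, Whalf)

# Check (40) at a few points z away from the poles of either side.
X = C @ P @ C.T + V
for z in [1.7, -2.3, 0.4 + 1.1j]:
    assert np.allclose(F(z) @ X @ F(1 / z).T, V + L(z) @ L(1 / z).T)
```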
From (39), (40), i), and ii):

σ̄[CPC' − L(z_0)L(z_0^{-1})'] ≤ σ̄(CPC' + V)(2ε + ε²)   (42)

and, hence,

σ̄(CPC') ≤ σ̄(CPC' + V)(2ε + ε²) + ε²   (43)
≤ 3σ̄(V)(2ε + ε²) + ε², by iv).   (44)

Thus, ε ≪ 1 implies

σ̄(CPC') ≪ σ̄(CPC' + V)   (45)
⇒ σ̄(CPC') ≪ σ̄(V)   (46)
⇒ σ̄(CPC') ≪ σ̲(V), by iii)   (47)

which is (36). ∎
It should be noted that the theorem guarantees only stability margins, not performance. For example, the conditions can be met by setting W = 0, which will give Φ(z) = 0. It can be expected that if one tries to force the conditions to be met, for example by inserting zeros at z = −1, then the resulting Φ(z) will offer poor performance.
VIII. DISCUSSION

In many, and perhaps most, applications, the conditions under which exact recovery is obtained will not be met. But the author has observed that a useful degree of recovery is obtained very frequently, even if the plant is nonminimum-phase (before sampling), and even if one is forced to use the predicting version of the observer. Furthermore, this occurs even if the bandwidth is a significant fraction of the Nyquist frequency (as high as 1/3 in some cases).
When designing continuous-time control systems, the complete duality between state feedback and state reconstruction allows one the option of recovering either the observer return-ratio, as in this paper, or the state feedback return-ratio K_c(sI − A)^{-1}B, by following the dual procedure [2]. But with discrete-time systems, the duality is not so complete. The standard state-feedback scheme is the dual of the predicting observer, but not of the filtering observer. Consequently, exact recovery of the state-feedback return-ratio cannot be obtained. However, since use of the predicting observer frequently yields useful results, one expects that the dual procedure will also be useful, even if not provably so.
The development both in this note and in [1]-[3] assumes that the plant
model is strictly proper. This is a significant restriction, since discrete-
time models frequently have a direct feedthrough from inputs to outputs.
Incidentally, if it were not for this restriction, it would be possible to
obtain the discrete-time results from the continuous-time results simply by
using a standard bilinear transformation between the s and z domains.
ACKNOWLEDGMENT
P. F. Westaway worked out many examples which showed that asymptotic recovery is useful with discrete-time systems.
REFERENCES
[1] H. Kwakernaak, "Optimal low sensitivity linear feedback systems," Automatica, vol. 5, pp. 279-286, 1969.
[2] J. C. Doyle and G. Stein, "Robustness with observers," IEEE Trans. Automat. Contr., vol. AC-24, pp. 607-611, 1979.
[3] J. C. Doyle and G. Stein, "Multivariable feedback design: Concepts for a classical/modern synthesis," IEEE Trans. Automat. Contr., vol. AC-26, pp. 4-16, 1981.
[4] A. Gelb, Ed., Applied Optimal Estimation. Cambridge, MA: M.I.T. Press, 1974.
[5] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems. New York: Wiley, 1972.
[6] U. Shaked, "Explicit solution to the singular discrete-time stationary linear filtering problem," Tel-Aviv University, Tel-Aviv, Israel, Tech. Rep., 1983.
[7] A. G. J. MacFarlane, Ed., Complex Variable Methods for Linear Multivariable Feedback Systems. London: Taylor and Francis, 1980.
[8] K. J. Åström, P. Hagander, and J. Sternby, "Zeros of sampled systems," in Proc. Contr. Decision Conf., 1980, pp. 1077-1081.
[9] T. Kailath, Linear Systems. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[10] M. G. Safonov, Stability and Robustness of Multivariable Feedback Systems. Cambridge, MA: M.I.T. Press, 1980.
[11] C. C. Arcasoy, "Return-difference-matrix properties for optimal stationary discrete Kalman filter," Proc. Inst. Elec. Eng., vol. 118, pp. 1831-1834, 1971.
