
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 41, NO. 5, MAY 1996

New Square-Root Smoothing Algorithms

PooGyeon Park and Thomas Kailath

Abstract: This paper presents new square-root smoothing algorithms for the three best-known smoothing formulas: 1) the Rauch-Tung-Striebel (RTS) formulas, 2) the Desai-Weinert-Yusypchuk (DWY) formulas, also called backward RTS formulas, and 3) the Mayne-Fraser (MF) formulas, also called two-filter formulas. The main feature of the new algorithms is that they replace all the matrix inversion and backsubstitution steps common in earlier algorithms with unitary rotations; this feature enables more efficient systolic array and parallel implementations and leads to algorithms with better numerical stability and conditioning properties.

I. INTRODUCTION

Square-root (or factorized, as they are sometimes called) algorithms for state-space estimation have been found to have several advantages over the conventional equation-based algorithms in terms of numerical stability, conditioning, and amenability to parallel and systolic implementation. While such algorithms for prediction and filtering have by now been studied quite extensively (see, e.g., [1]-[8]), the picture is not quite as complete for smoothing.
In the literature, there are two classes of square-root smoothing algorithms, both based on using quantities propagated by the square-root information filter (SRIF) algorithm presented by Dyer and McReynolds in 1969 [4]. In 1971, Kaminski [9] proposed the square-root information smoother (SRIS), of which Bierman in 1983 [10] gave a so-called UD (free of arithmetic square-root) version. The SRIF and SRIS propagate the square-root of the inverse of the filtering and smoothing error covariances, respectively, hence the name "information" form. In 1974, Bierman [11] proposed propagating the smoothing error covariance itself, using certain outputs from the SRIF to provide the coefficients of certain smoothing error covariance recursions. He called this the DMCS (Dyer-McReynolds Covariance Smoothing)-SRIF algorithm. A UD version of the DMCS-SRIF was given by Watanabe and Tzafestas [12]; see also McReynolds [13]. Watanabe [14] also gave a square-root form of the Desai-Weinert-Yusypchuk (DWY) smoothing formulas [15], while Dobbins [16] derived a square-root version of the Mayne-Fraser (MF) (or two-filter) formulas.
These square-root algorithms have various advantages and disadvantages. However, all of them require certain matrix inversion and/or backsubstitution steps and, thus, none of them is particularly well-suited for parallel implementation. Recently, we have presented in [17] a new square-root smoothing algorithm for the Bryson-Frazier (BF) formulas [18] (1963) that employs unitary rotations instead of matrix inversion and backsubstitution steps, thus simultaneously improving numerical stability and conditioning and also making parallel and systolic implementation easier; see, e.g., the discussion of these issues in [19] and [20].

There are essentially three more best-known smoothing formulas: those of Rauch-Tung-Striebel (RTS) [21] (1965), DWY [15] (1983), and Mayne [22] (1966) and Fraser [23] (1967).
Manuscript received May 20, 1994; revised March 17, 1994 and November 3, 1995. This work was supported in part by the Advanced Research Projects Agency of the Department of Defense and was monitored by the Air Force Office of Scientific Research under Grant F49620-93-1-0085.
The authors are with the Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford, CA 94305 USA.
Publisher Item Identifier S 0018-9286(96)02824-3.


In this paper, we shall provide their square-root versions. An interesting conclusion from our results is that the apparently most computationally intensive traditional algorithm, the two-filter solution of Mayne (1966) and Fraser (1967), has the conceptually least complex square-root form among the four square-root smoothing algorithms.

State-Space Model: x_{i+1} = F_i x_i + G_i u_i and y_i = H_i x_i + v_i for i >= 0, where {x_0, u_i, v_i} are zero-mean white Gaussian random variables uncorrelated with each other and E(u_i u_i^*) = Q_i > 0, E(v_i v_i^*) = R_i > 0. We define

    x̂_{i|j} ≜ the linear least-squares estimate of x_i given {y_0, ..., y_j}
    P_{i|j} ≜ E[x_i - x̂_{i|j}][x_i - x̂_{i|j}]^*

the error covariance of the estimate x̂_{i|j}.

When j = i-1, x̂_{i|i-1} is called the one-step predicted estimate, while when j = i, x̂_{i|i} is called the filtered estimate. When j > i, we have smoothed estimates. A corresponding terminology will be used for the error-covariance matrices. For compactness we shall write x̂_{i|i-1} ≜ x̂_i and P_{i|i-1} ≜ P_i unless amplification is necessary for emphasis or comparison.

One-Step Predicted Estimates: These obey

    x̂_{i+1} = F_{p,i} x̂_i + K_{p,i} y_i,    x̂_0 = 0
    P_{i+1} = F_i P_i F_i^* + G_i Q_i G_i^* - K_{p,i} R_{e,i} K_{p,i}^*,    P_0 = Π_0

where

    F_{p,i} ≜ F_i - K_{p,i} H_i
    K_{p,i} ≜ F_i K̄_i
    K̄_i ≜ P_i H_i^* R_{e,i}^{-1}
    R_{e,i} ≜ R_i + H_i P_i H_i^*

and e_i ≜ y_i - H_i x̂_i is the innovation process, with covariance R_{e,i}.
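For orientation, here is a minimal NumPy sketch of these conventional one-step predicted-estimate (Kalman) recursions; the function name, data layout, and use of explicit inverses are our own choices for illustration, and this non-square-root form is exactly what the algorithms below are designed to avoid.

```python
import numpy as np

def predicted_estimates(F, G, H, Q, R, Pi0, ys):
    """Conventional one-step predicted estimates x_{i+1}, P_{i+1}.

    F, G, H, Q, R: lists of model matrices for i = 0..N; ys: measurements
    y_0..y_N; Pi0: initial covariance Pi_0.  Returns predicted estimates,
    their covariances, the innovations e_i, and their covariances R_{e,i}.
    """
    n = Pi0.shape[0]
    x, P = np.zeros(n), Pi0.copy()
    xs, Ps, es, Res = [x], [P], [], []
    for Fi, Gi, Hi, Qi, Ri, yi in zip(F, G, H, Q, R, ys):
        Re = Ri + Hi @ P @ Hi.T                    # R_{e,i} = R_i + H_i P_i H_i^*
        Kp = Fi @ P @ Hi.T @ np.linalg.inv(Re)     # K_{p,i} = F_i P_i H_i^* R_{e,i}^{-1}
        e = yi - Hi @ x                            # innovation e_i
        x = Fi @ x + Kp @ e                        # = F_{p,i} x_i + K_{p,i} y_i
        P = Fi @ P @ Fi.T + Gi @ Qi @ Gi.T - Kp @ Re @ Kp.T
        xs.append(x); Ps.append(P); es.append(e); Res.append(Re)
    return xs, Ps, es, Res
```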
Over the years, several square-root algorithms for this formula have been obtained [24], [25]. In [26], we showed that various previous results on covariance-form and information-form square-root algorithms could be usefully written in "combined" forms. For ease of understanding, we refer the reader to [26].

The fixed-interval smoothing problem is to find {x̂_{i|N}}_{i=0:N} given the data {y_i}_{i=0:N}. In the following sections, we shall provide new square-root versions of the RTS, DWY, and MF smoothing formulas, which will be obtained by appropriate modification of the arrays in [26, Algorithm III.5].
For convenience, we shall use the following notations:

    Q̄_i ≜ Q_i - Q_i G_i^* P_{i+1}^{-1} G_i Q_i
    K̄_{b,i} ≜ P_{i+1}^{-1} G_i Q_i Q̄_i^{-*/2}
    F_{s,i} ≜ P_i F_{p,i}^* P_{i+1}^{-1} = F_i^{-1}(I - G_i Q_i G_i^* P_{i+1}^{-1})
            = F_i^{-1}(I - G_i Q̄_i^{1/2} K̄_{b,i}^*).
II. RTS SMOOTHING FORMULAS (FORWARDS)

The RTS smoothing formulas introduced by Rauch, Tung, and Striebel [21] in 1965 generate x̂_{i|N} from a backward recursion using the x̂_i. There are two cases.
Case 1 (singular F_i):

    x̂_{i|N} = x̂_i + P_i H_i^* R_{e,i}^{-1} e_i + F_{s,i}(x̂_{i+1|N} - x̂_{i+1})    (1)
    P_{i|N} = P_i - P_i H_i^* R_{e,i}^{-1} H_i P_i - F_{s,i}(P_{i+1} - P_{i+1|N}) F_{s,i}^*.    (2)

Case 2 (nonsingular F_i):

    x̂_{i|N} = F_{s,i} x̂_{i+1|N} + F_i^{-1} G_i Q_i G_i^* P_{i+1}^{-1} x̂_{i+1}
    P_{i|N} = F_{s,i} P_{i+1|N} F_{s,i}^* + F_i^{-1} G_i Q̄_i G_i^* F_i^{-*}.    (3)

Here, x̂_{N+1|N} = x̂_{N+1} and P_{N+1|N} = P_{N+1}.
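As a reference point, the following NumPy sketch runs the Case 1 backward recursions (1)-(2) directly from the outputs of the predictor sketch above. It is a transcription of the conventional formulas (all names are ours; the explicit inverses are kept for clarity), not of the square-root algorithms derived next.

```python
import numpy as np

def rts_smoother_case1(F, H, xs, Ps, es, Res, N):
    """Conventional RTS smoother, Case 1: backward recursions (1)-(2).

    xs, Ps, es, Res come from the forward predictor sketch; F_{s,i}
    is formed explicitly, which the square-root versions avoid.
    """
    x_s, P_s = xs[N + 1], Ps[N + 1]    # x_{N+1|N} = x_{N+1}, P_{N+1|N} = P_{N+1}
    out = []
    for i in range(N, -1, -1):
        P, Re = Ps[i], Res[i]
        Kp = F[i] @ P @ H[i].T @ np.linalg.inv(Re)
        Fp = F[i] - Kp @ H[i]
        Fs = P @ Fp.T @ np.linalg.inv(Ps[i + 1])    # F_{s,i} = P_i F_{p,i}^* P_{i+1}^{-1}
        PH = P @ H[i].T @ np.linalg.inv(Re)         # P_i H_i^* R_{e,i}^{-1}
        x_s = xs[i] + PH @ es[i] + Fs @ (x_s - xs[i + 1])            # (1)
        P_s = P - PH @ H[i] @ P - Fs @ (Ps[i + 1] - P_s) @ Fs.T      # (2)
        out.append((x_s, P_s))
    return out[::-1]
```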
(P?’)


However, if the F_i are nonsingular, we can significantly reduce the amount of memory and computation, as we did for the BF smoothing formulas. Thus, for (3), we shall use F_{s,i} as formed in [17, Proposition III.3] and then explain how to reduce the number of operations required to compute x̂_{i|N}. From the entries of the time-update post-arrays in [17, Proposition III.3], we can find the terms (K̄_{b,i}^* x̂_{i+1}) and (Q̄_i^{1/2}). This suggests that instead of computing x̂_i itself, we use the entries of the post-array to yield F_i^{-1} G_i Q_i G_i^* P_{i+1}^{-1} x̂_{i+1}.
One more attractive feature is our ability to use the square-root algorithm to update the smoothing error covariances as well as the smoothed estimates. Consider the following array:

    [ F_{s,i} P_{i+1|N}^{1/2}           F_i^{-1} G_i Q̄_i^{1/2} ]        [ X    0   ]
    [ (P_{i+1|N}^{-1/2} x̂_{i+1|N})^*    (K̄_{b,i}^* x̂_{i+1})^*  ] Θ_i =  [ a^*  (*) ]    (6)

where

    (1,1) ≜ F_i^{-1}(I - G_i{(Q̄_i^{1/2})(K̄_{b,i}^*)})(P_{i+1|N}^{1/2}) = F_{s,i} P_{i+1|N}^{1/2}

is formed from the saved entries. From inner- and cross-products of the array rows, we obtain

    P_{i|N} = F_{s,i} P_{i+1|N} F_{s,i}^* + F_i^{-1} G_i Q̄_i G_i^* F_i^{-*} = X X^*    (7)
    x̂_{i|N} = (F_{s,i} P_{i+1|N}^{1/2})(P_{i+1|N}^{-1/2} x̂_{i+1|N}) + (F_i^{-1} G_i (Q̄_i^{1/2}))(K̄_{b,i}^* x̂_{i+1}) = X a.    (8)

Therefore, we can identify that

    X = (P_{i|N}^{1/2})  and  a = (P_{i|N}^{-1/2} x̂_{i|N}).
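The update (6)-(8) is thus one triangularization per step. Here is a NumPy sketch under our conventions (P = P^{1/2} P^{*/2} with real matrices, so ^* becomes transpose); the lower_triangularize helper and all other names are our own, and the same helper is reused in the later sketches:

```python
import numpy as np

def lower_triangularize(A):
    """Return (L, Theta) with A @ Theta = L lower triangular.

    Implemented via QR of A^T: A^T = Q R  =>  A = R^T Q^T, so L = R^T
    and Theta = Q.  (QR sign conventions may flip column signs of L;
    the products L L^T and the row identities are unaffected.)
    """
    Q, R = np.linalg.qr(A.T, mode="complete")
    return R.T, Q

def sqrt_rts_case2_step(Fs_P, FGQ, xn_next, Kx):
    """One backward step of the array update (6).

    Fs_P    = F_{s,i} P_{i+1|N}^{1/2}     (formed from saved entries)
    FGQ     = F_i^{-1} G_i Qbar_i^{1/2}
    xn_next = P_{i+1|N}^{-1/2} x_{i+1|N}  (passed as a 1 x n row)
    Kx      = Kbar_{b,i}^* x_{i+1}        (passed as a 1 x p row)
    Returns P_{i|N}^{1/2} and a = P_{i|N}^{-1/2} x_{i|N}.
    """
    n = Fs_P.shape[0]
    pre = np.block([[Fs_P, FGQ], [xn_next, Kx]])
    post, _ = lower_triangularize(pre)
    P_half = post[:n, :n]    # X = P_{i|N}^{1/2}, by (7)
    a = post[n, :n]          # a = P_{i|N}^{-1/2} x_{i|N}, by (8)
    return P_half, a
```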


In summary, we have the following algorithm.

Algorithm II.2 (Square-Root RTS, Case 2): Assume that the F_i are nonsingular.
Step 1: (Same as Step 1 in [17, Proposition III.3].) Save (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}).
Step 2: With (P_{N+1|N}^{-1/2} x̂_{N+1|N}) = (P_{N+1}^{-1/2} x̂_{N+1}), propagate (P_{i|N}^{-1/2} x̂_{i|N}) using the backward recursions (6). Calculate the smoothed estimates and their error covariances using (7) and (8).

This algorithm has several other advantages over the previous algorithms. The first one is that the P_{i|N} is guaranteed to be positive definite. The second is that this algorithm requires relatively little memory, on the order of (N+1)np, (N+1)p², and (N+1)p, corresponding to (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}). Finally, there is less computation because we do not need to form x̂_i itself (by multiplying (P_i^{1/2}) and (P_i^{-1/2} x̂_i)).
Remark 1: Bierman derived a square-root version of the RTS smoothing formulas using the so-called UD covariance matrix factorization in [10]. The main goal was to express F_{s,i} as a product of elementary rank-1 type matrices. However, this procedure still required matrix inversion and backsubstitution steps.
Remark 2: McReynolds [13] modified Bierman's UD version of the RTS smoothing formulas by using a square-root array for constructing F_{s,i}, in which Θ is any unitary operator that zeros out the (1,2) entry of the pre-array. This procedure requires inversion of (P_{i+1}^{1/2}).

Remark 3: Watanabe and Tzafestas [12] suggested a square-root version of the RTS smoothing formulas using a rank-2 UD covariance matrix factorization. Without direct construction of F_{s,i}, they propagated the required smoothing quantities by using a rank-two UD information matrix factorization for the forward information filtering. Even though their method was not the most compact (i.e., see the modified version of Algorithm II.2 in the following remark), they successfully provided an inversion-free algorithm, except in a step connecting the forward information filtering to the backward-pass smoothing, which required the inverse of (P_{N+1|N}).

Remark 4: If we allow this inversion step in the procedures, we can obtain a more compact form of the first step of Algorithm II.2 by using the SRIF blocks in the combined measurement- and time-update square-root filtering arrays in [26, Section IV].
Step 1: Set P_0^{-1/2} x̂_0 = 0 and P_0^{1/2} = Π_0^{1/2}. Then propagate (P_i^{-1/2} x̂_i) and (P_i^{1/2}) using the measurement- and time-update equations: a) a measurement-update array in which Θ_{i,1} is any unitary rotation that zeros out the (1,1) entry of the pre-array, and b) a time-update array in which Θ_{i,2} is any unitary rotation that zeros out the (3,1) entry of the pre-array, with post-array entries corresponding to (K̄_{b,i}), (Q̄_i^{1/2}), and (K̄_{b,i}^* x̂_{i+1}). To find the initial value of (P_{N+1|N}^{-1/2} x̂_{N+1|N}) for Step 2, we have to invert (P_{N+1|N}^{1/2}).

This modified algorithm in Remark 4 uses the SRIF form for both measurement- and time-update steps, while [17, Proposition III.3] and Algorithm II.2 use the SRIF form and the square-root covariance filtering (SRCF) form for the measurement- and time-update steps, respectively. Therefore, the size of the arrays for measurement updates in this algorithm is smaller than that in [17, Proposition III.3] and Algorithm II.2. This modified RTS algorithm, then, needs less
computation than [17, Proposition III.3] and Algorithm II.2. However, this algorithm requires inversion of (P_{N+1|N}).

Remark 5: The modified SRCF algorithm (Algorithm III.2 in [26]) was also found by Gaston and Irwin in 1989 [30].
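The "unitary rotation that zeros out a designated entry" used throughout these remarks can be realized by an elementary Givens rotation. A generic NumPy sketch (ours, not code from any of the cited papers):

```python
import numpy as np

def givens_zeroing(A, row, col, pivot_col):
    """Return A @ Theta where Theta is a Givens rotation acting on
    columns (pivot_col, col), chosen so (A @ Theta)[row, col] == 0."""
    a, b = A[row, pivot_col], A[row, col]
    r = np.hypot(a, b)
    c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
    Theta = np.eye(A.shape[1])
    Theta[pivot_col, pivot_col] = c
    Theta[col, col] = c
    Theta[pivot_col, col] = -s    # (A @ Theta)[row, col] = -a*s + b*c = 0
    Theta[col, pivot_col] = s     # (A @ Theta)[row, pivot_col] = r
    return A @ Theta
```

Sweeping such rotations across the columns of a pre-array reproduces the effect of the operators Θ above; in practice a Householder-based QR factorization achieves the same triangularization in fewer operations.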
III. DWY (OR BACKWARDS RTS) SMOOTHING FORMULAS
The smoothing formulas of Desai, Weinert, and Yusypchuk [15] separate out the dependence on Π_0 by using (Fisher-type) backward Kalman filtering formulas (with infinite "initial" covariance). The equations are

    x̂_{i+1|N} = (I + G_i Q_i G_i^* L_{i+1}^b)^{-1}(F_i x̂_{i|N} + G_i Q_i G_i^* z_{i+1}^b)    (9)
    P_{i+1|N} = (I + G_i Q_i G_i^* L_{i+1}^b)^{-1}
                × (F_i P_{i|N} F_i^* + G_i Q_i (Q_i^{-1} + G_i^* L_{i+1}^b G_i) Q_i G_i^*)
                × (I + L_{i+1}^b G_i Q_i G_i^*)^{-1}

with initial conditions

    x̂_{0|N} = (I + Π_0 L_0^b)^{-1} Π_0 z_0^b
    P_{0|N} = (I + Π_0 L_0^b)^{-1} Π_0

where, for L_{N+1}^b = 0 and z_{N+1}^b = 0,

    z_i^b = F_i^*(I + L_{i+1}^b G_i Q_i G_i^*)^{-1} z_{i+1}^b + H_i^* R_i^{-1} y_i    (10)
    L_i^b = F_i^*(I + L_{i+1}^b G_i Q_i G_i^*)^{-1} L_{i+1}^b F_i + H_i^* R_i^{-1} H_i.
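For orientation, a direct NumPy transcription of the backward information recursions (10) followed by the forward pass (9); this is a sketch of the conventional DWY formulas with our own names, and the explicit inversions shown are exactly what the square-root version below removes.

```python
import numpy as np

def dwy_smoother(F, G, H, Q, R, Pi0, ys, N):
    """Conventional DWY smoother: backward pass (10), then forward pass (9)."""
    n = Pi0.shape[0]
    inv = np.linalg.inv
    # Backward (Fisher-type) information recursions (10).
    L, z = np.zeros((n, n)), np.zeros(n)        # L^b_{N+1} = 0, z^b_{N+1} = 0
    Ls, zs = [None] * (N + 2), [None] * (N + 2)
    Ls[N + 1], zs[N + 1] = L, z
    for i in range(N, -1, -1):
        M = inv(np.eye(n) + L @ G[i] @ Q[i] @ G[i].T)
        z = F[i].T @ M @ z + H[i].T @ inv(R[i]) @ ys[i]
        L = F[i].T @ M @ L @ F[i] + H[i].T @ inv(R[i]) @ H[i]
        Ls[i], zs[i] = L, z
    # Initial condition, then forward pass (9) for the smoothed estimates.
    x = inv(np.eye(n) + Pi0 @ Ls[0]) @ Pi0 @ zs[0]        # x_{0|N}
    xs = [x]
    for i in range(N):
        GQG = G[i] @ Q[i] @ G[i].T
        x = inv(np.eye(n) + GQG @ Ls[i + 1]) @ (F[i] @ x + GQG @ zs[i + 1])
        xs.append(x)
    return xs
```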
A square-root algorithm for the DWY smoothing formulas is essentially based on the SRIF formulas. For (10), consider the following array:

    [ Q_i^{-*/2}   G_i^* L_{i+1}^{b/2}             0                 ]            [ X     0     0   ]
    [ 0            F_i^* L_{i+1}^{b/2}             H_i^* R_i^{-*/2}  ] Θ_{i+1} =  [ Y     Z     0   ]    (11)
    [ 0            (L_{i+1}^{-b/2} z_{i+1}^b)^*    (R_i^{-1/2} y_i)^* ]           [ α^*   β^*   (*) ]

where Θ_{i+1} is any unitary operator that lower-triangularizes the pre-array. From appropriate inner- or cross-products of the array rows, we find

    X X^* = Q_i^{-1} + G_i^* L_{i+1}^b G_i ≜ (R_{b,i}^{1/2})(R_{b,i}^{*/2})
    Y X^* = F_i^* L_{i+1}^b G_i ≜ K̄_{b,p,i} X^*
    Z Z^* = F_i^* L_{i+1}^b F_i + H_i^* R_i^{-1} H_i - Y Y^* ≜ (L_i^{b/2})(L_i^{b*/2})
    X α = G_i^* z_{i+1}^b ≜ (R_{b,i}^{1/2})(R_{b,i}^{-1/2} G_i^* z_{i+1}^b)
    Z β = F_i^* z_{i+1}^b + H_i^* R_i^{-1} y_i - Y α ≜ (L_i^{b/2})(L_i^{-b/2} z_i^b).
We see that the array (11) propagates z_i^b and L_i^b. Next, since (9) can be written as

    x̂_{i+1|N} = (I - G_i R_{b,i}^{-1} G_i^* L_{i+1}^b) F_i x̂_{i|N} + G_i R_{b,i}^{-1} G_i^* z_{i+1}^b
              = (F_i - G_i R_{b,i}^{-*/2} K̄_{b,p,i}^*) x̂_{i|N} + G_i R_{b,i}^{-*/2}(R_{b,i}^{-1/2} G_i^* z_{i+1}^b)

we will be done if we can construct the component (R_{b,i}^{-*/2}) in the array. For this, we augment the array as follows:

    [ Q_i^{-*/2}   G_i^* L_{i+1}^{b/2}             0                 ]            [ X                 0     0   ]
    [ 0            F_i^* L_{i+1}^{b/2}             H_i^* R_i^{-*/2}  ] Θ_{i+1} =  [ Y                 Z     0   ]    (12)
    [ 0            (L_{i+1}^{-b/2} z_{i+1}^b)^*    (R_i^{-1/2} y_i)^* ]           [ α^*               β^*   (*) ]
    [ Q_i^{1/2}    0                               0                 ]            [ (R_{b,i}^{-*/2})  (*)   (*) ]

where we can easily identify the new entry as (R_{b,i}^{-*/2}). Moreover, as in Algorithm II.2, the error covariance P_{i|N} can also be found in square-root form. Therefore, without further explanation, we shall now present the corresponding square-root version.

Algorithm III.1 (Square-Root DWY):
Step 1: With L_{N+1}^{b/2} = 0 and z_{N+1}^b = 0, propagate z_i^b via the backward recursions (12). Save the variables ((R_{b,i}^{-*/2})), ((R_{b,i}^{-1/2} G_i^* z_{i+1}^b)), and (K̄_{b,p,i}) for Steps 2 and 3.
Step 2: Using (Π_0^{1/2}), (L_0^{b/2}), and (L_0^{-b/2} z_0^b), construct (P_{0|N}^{-1/2} x̂_{0|N}) and (P_{0|N}^{1/2}) by a rotation Θ_0, any unitary operator that zeros out the (1,2) entry of the pre-array. As a result of this rotation we obtain (P_{0|N}^{-1/2} x̂_{0|N}); therefore, x̂_{0|N} = (P_{0|N}^{1/2})(P_{0|N}^{-1/2} x̂_{0|N}).
Step 3: Using (P_{0|N}^{-1/2} x̂_{0|N}) and (P_{0|N}^{1/2}), propagate (P_{i|N}^{-1/2} x̂_{i|N}) and (P_{i|N}^{1/2}) forward, where Θ_{i+1} is any unitary operator that zeros out the (1,2) entry of the pre-array. The smoothed estimates and the error covariances are

    x̂_{i|N} = (P_{i|N}^{1/2})(P_{i|N}^{-1/2} x̂_{i|N})  and  P_{i|N} = (P_{i|N}^{1/2})(P_{i|N}^{*/2}).

This square-root DWY algorithm behaves like the forward square-root RTS algorithm (Algorithm II.2) in terms of flop count, array size, and storage. But there is no constraint on the F_i, whereas in Algorithm II.2 the F_i must be nonsingular. A possible disadvantage of this algorithm, however, is that we cannot begin processing the measurements before the last y_N is available; this means extra delay in the smoother. On the other hand, the BF and RTS smoothing formulas can be carried out in parallel with collecting measurements because they are based on forward Kalman filtering.

Remark 6: Watanabe derived a square-root version of the DWY smoothing formulas using the UD information matrix factorization in [14]. He successfully provided an inversion-free algorithm except in a step connecting backward Kalman filtering to forward-pass smoothing; this inversion step corresponds to Step 2 in Algorithm III.1. In fact, Watanabe's algorithm can be considered as a special implementation of Algorithm III.1 (except for Step 2) designed to eliminate arithmetic square-root operations (see [27] for a comprehensive discussion of how to avoid arithmetic square-roots and division operations in the triangularization procedure).
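A sketch of the Step 1 backward array propagation, reusing the lower_triangularize helper from the earlier sketch. The block layout follows array (12) as reconstructed above (real matrices; all variable names are ours, and the model square-root factors are assumed precomputed offline):

```python
import numpy as np
# assumes lower_triangularize(A) from the earlier sketch is in scope

def sqrt_dwy_backward_step(Lb_half, w, Fi, Gi, Hi, Qi_half, Qi_nsr, Ri_nsr, yi):
    """One backward step of array (12) for the square-root DWY pass.

    Lb_half = L_{i+1}^{b/2}
    w       = L_{i+1}^{-b/2} z_{i+1}^b   (propagated in place of z^b itself)
    Qi_half = Q_i^{1/2}; Qi_nsr = Q_i^{-*/2}; Ri_nsr = R_i^{-*/2}
    Returns L_i^{b/2}, L_i^{-b/2} z_i^b, and the Step 1 saves
    (R_{b,i}^{-*/2}), (R_{b,i}^{-1/2} G_i^* z_{i+1}^b), (Kbar_{b,p,i}).
    """
    p, n, m = Qi_nsr.shape[0], Fi.shape[0], Ri_nsr.shape[0]
    pre = np.block([
        [Qi_nsr,           Gi.T @ Lb_half,   np.zeros((p, m))],
        [np.zeros((n, p)), Fi.T @ Lb_half,   Hi.T @ Ri_nsr],
        [np.zeros((1, p)), w[None, :],       (yi @ Ri_nsr)[None, :]],
        [Qi_half,          np.zeros((p, n)), np.zeros((p, m))],
    ])
    post, _ = lower_triangularize(pre)
    Y     = post[p:p + n, :p]         # Kbar_{b,p,i}
    Z     = post[p:p + n, p:p + n]    # L_i^{b/2}
    alpha = post[p + n, :p]           # R_{b,i}^{-1/2} G_i^* z_{i+1}^b
    beta  = post[p + n, p:p + n]      # L_i^{-b/2} z_i^b  (w for the next step)
    W     = post[p + n + 1:, :p]      # R_{b,i}^{-*/2}
    return Z, beta, W, alpha, Y
```

With this step, the backward pass stores only triangular factors and normalized vectors; Step 2's rotation and the forward Step 3 recursion reuse the same triangularization primitive.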

IV. MAYNE-FRASER (OR TWO-FILTER) SMOOTHING FORMULAS

By introducing a method of combining the outputs of forwards Kalman filtering and (Fisher-type) backwards Kalman filtering,


Mayne [22] and Fraser [23] suggested the so-called "two-filter" smoothing formulas

    x̂_{i|N} = P_{i|N}(P_i^{-1} x̂_i + z_i^b),    P_{i|N}^{-1} = P_i^{-1} + L_i^b.    (13)

Due to their nature, the MF smoothing formulas perform well in terms of speed in real-time processing¹ and flexibility in handling variation of Π_0; nevertheless, the MF smoothing formulas are considered to have a great computational disadvantage because of the several matrix inversion and backsubstitution steps.

¹"Real-time processing" for fixed-interval smoothing is a series of actions or operations estimating each input frame from each output frame, while one collects outputs frame by frame. This concept differs from that of real-time processing for filtering or fixed-lag smoothing, in which one executes the algorithms while collecting the outputs symbol by symbol.
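In conventional form, (13) combines the forward predictor and the backward information pass directly. A brief NumPy sketch (names ours), whose explicit inverses motivate the square-root construction below:

```python
import numpy as np

def mf_combine(x_pred, P_pred, zb, Lb):
    """Two-filter combination (13) at a single index i.

    x_pred, P_pred : forward predicted estimate x_i and covariance P_i
    zb, Lb         : backward information quantities z_i^b and L_i^b
    """
    inv = np.linalg.inv
    P_s = inv(inv(P_pred) + Lb)               # P_{i|N}^{-1} = P_i^{-1} + L_i^b
    x_s = P_s @ (inv(P_pred) @ x_pred + zb)   # x_{i|N} = P_{i|N}(P_i^{-1} x_i + z_i^b)
    return x_s, P_s
```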
We shall now show how to overcome this handicap by introducing an appropriate square-root algorithm. We have already shown how to construct {(P_i^{1/2}), (P_i^{-1/2} x̂_i)} and {(L_i^{b/2}), (L_i^{-b/2} z_i^b)} in [26, Proposition III.5] and Algorithm III.1, respectively. Since we cannot find a direct connection between the above quantities and the components in (13), we need to construct certain intermediate square-root arrays. By judicious use of the above quantities, we shall now introduce the following array, where {X_i, Y_i, α_i, β_i} are to be determined:

    [ P_i^{*/2} L_i^{b/2}      I                   ]        [ X_i    0   ]
    [ 0                        P_i^{1/2}           ]        [ Y_i    (*) ]
    [ (L_i^{-b/2} z_i^b)^*     0                   ] Θ_i =  [ β_i^*  (*) ]    (14)
    [ 0                        (P_i^{-1/2} x̂_i)^*  ]        [ α_i^*  (*) ]

where Θ_i is any unitary operator that (block) lower-triangularizes the pre-array. Applying inner- and cross-products of the array rows yields

    X_i X_i^* = P_i^{*/2} L_i^b P_i^{1/2} + I = P_i^{*/2} P_{i|N}^{-1} P_i^{1/2} = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{-1/2} P_i^{1/2})
    Y_i X_i^* = P_i^{1/2} = (P_{i|N}^{1/2})(P_{i|N}^{-1/2} P_i^{1/2})
    X_i α_i = (P_i^{-1/2} x̂_i) = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{*/2} P_i^{-1} x̂_i)
    X_i β_i = (P_i^{*/2} L_i^{b/2})(L_i^{-b/2} z_i^b) = (P_i^{*/2} P_{i|N}^{-*/2})(P_{i|N}^{*/2} z_i^b).

Therefore, we can identify

    X_i = (P_i^{*/2} P_{i|N}^{-*/2}),    Y_i = (P_{i|N}^{1/2})
    α_i = (P_{i|N}^{*/2} P_i^{-1} x̂_i),  β_i = (P_{i|N}^{*/2} z_i^b)

and thus verify that

    x̂_{i|N} = P_{i|N}(P_i^{-1} x̂_i + z_i^b) = Y_i(α_i + β_i)    (15)
    P_{i|N} = Y_i Y_i^*.    (16)

This algorithm is summarized in the following.

Algorithm IV.1 (Square-Root MF, or Two-Filter):
Step 1 (Forward Estimate): With (P_0^{1/2}) = Π_0^{1/2} and (P_0^{-1/2} x̂_0) = 0, propagate and save (P_i^{1/2}) and (P_i^{-1/2} x̂_i) using the following forward recursions:

    [ R_i^{1/2}              H_i P_i^{1/2}        0             ]        [ R_{e,i}^{1/2}            0                           0   ]
    [ 0                      F_i P_i^{1/2}        G_i Q_i^{1/2} ] Θ_i =  [ K_{p,i} R_{e,i}^{1/2}    P_{i+1}^{1/2}               0   ]
    [ -(R_i^{-1/2} y_i)^*    (P_i^{-1/2} x̂_i)^*   0             ]        [ -(R_{e,i}^{-1/2} e_i)^*  (P_{i+1}^{-1/2} x̂_{i+1})^*  (*) ]

where Θ_i is any unitary operator that lower-triangularizes the pre-array.
Step 2 (Backward Estimate): With L_{N+1}^{b/2} = 0 and L_{N+1}^{-b/2} z_{N+1}^b = 0, propagate and save (L_i^{b/2}) and (L_i^{-b/2} z_i^b) using the backward recursions (11).
Step 3 (Smoothed Estimate): Using the quantities in Steps 1 and 2, construct {Y_i, α_i, β_i} in (14) and compute the smoothed estimates and error covariances via (15) and (16).
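A sketch of Step 3, again via the lower_triangularize helper; the block layout follows the reconstruction of (14) above, and all names are ours:

```python
import numpy as np
# assumes lower_triangularize(A) from the earlier sketch is in scope

def sqrt_mf_combine(P_half, xn, Lb_half, w):
    """Square-root two-filter combination: one evaluation of array (14).

    P_half = P_i^{1/2};  xn = P_i^{-1/2} x_i     (saved by Step 1)
    Lb_half = L_i^{b/2}; w = L_i^{-b/2} z_i^b    (saved by Step 2)
    Returns x_{i|N} and the factor Y_i = P_{i|N}^{1/2}.
    """
    n = P_half.shape[0]
    pre = np.block([
        [P_half.T @ Lb_half, np.eye(n)],
        [np.zeros((n, n)),   P_half],
        [w[None, :],         np.zeros((1, n))],
        [np.zeros((1, n)),   xn[None, :]],
    ])
    post, _ = lower_triangularize(pre)
    Y     = post[n:2 * n, :n]        # Y_i = P_{i|N}^{1/2}
    beta  = post[2 * n, :n]          # beta_i  = P_{i|N}^{*/2} z_i^b
    alpha = post[2 * n + 1, :n]      # alpha_i = P_{i|N}^{*/2} P_i^{-1} x_i
    x_s = Y @ (alpha + beta)         # (15)
    return x_s, Y                    # P_{i|N} = Y @ Y.T by (16)
```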
In batch processing, where all the measurements are in hand before running the smoothing formulas, the MF smoothing formulas are the fastest of all the smoothing formulas. The calculations require memory on the order of (N+1)n and (N+1)n² for saving either {(P_i^{1/2}), (P_i^{-1/2} x̂_i)} or {(L_i^{b/2}), (L_i^{-b/2} z_i^b)}. Therefore, if we have to calculate error covariances, Algorithm IV.1 demands less memory than [17, Proposition III.1] and Algorithm II.1 but more memory than Algorithm III.1. There are certain cases where we do not need to compute error covariances (see, e.g., adaptive filtering [28], [29] in communications). Even in such cases, however, the MF smoothing formulas still need to compute the error covariances to obtain the smoothed estimates, which is not the case for [17, Proposition III.3] and Algorithm II.2, corresponding to the BF and RTS smoothing formulas. Therefore, Algorithm IV.1 requires more memory than [17, Proposition III.3] and Algorithm II.2.

Remark 7: The MF (or two-filter) smoothing formulas that Dobbins used in [16] employed a somewhat different set of recursions. In his square-root version, the first inversion steps appeared when the combining matrix was constructed; others arose in the procedures used to compute x̂_{i|N} from x̂_i and z_i^b, in which Θ is any unitary operator that (block) lower-triangularizes the pre-array.

V. CONCLUDING REMARKS

Table I compares the various square-root smoothing algorithms in terms of "running memory size," which means the amount of storage available for temporary quantities that we require for the propagation of x̂_{i|N} and P_{i|N}.

If the F_i are singular or if we need to obtain error covariances, we cannot use either [17, Proposition III.3] or Algorithm RTS (Case 2). In this case, the square-root DWY algorithms require the smallest amount of memory. If we do not have to construct error covariances and the F_i are nonsingular, the square-root BF algorithm in [17], the RTS algorithm, and the DWY algorithm require almost the same small amount of memory.

The square-root algorithms for the BF and the RTS smoothing formulas, both of which run backward recursions for the smoothed estimates by using the outputs of a forwards Kalman filter, have an advantage over the others in terms of the amount of computation needed for real-time processing. However, when the F_i are singular, they require more memory, O(½Nn²), than does the square-root DWY algorithm, O(Nnp). Therefore, when the matrices F_i are nonsingular and the smoothed estimates must be formed as soon as possible after all outputs are collected, the BF and RTS algorithms are recommended (in fact, the latter is more efficient than the former because the smoothed estimates are directly constructed using the outputs of the forwards Kalman filter).


TABLE I
COMPARISONS AMONG VARIOUS SR SMOOTHING ALGORITHMS

    Square-Root Algorithm   F_i           P_{i|N}          Running Memory
    BF (Case 1)             -             Available        O(Nn²)
    BF (Case 2)             ∃F_i^{-1}     Not available    O(Nnp)
    RTS (Case 1)            -             Available        O(½Nn²)
    RTS (Case 2)            ∃F_i^{-1}     Available*       O(Nnp)
    DWY                     -             Available*       O(Nnp)
    MF                      -             Available*       O(Nn²)

    *Guaranteed to be positive definite.
The square-root DWY algorithm, which runs a forward recursion for the smoothed estimates by using the outputs of (Fisher-type) backwards Kalman filtering, is very flexible in handling change of Π_0, has no constraint on the F_i, and requires a relatively small amount of storage. When the F_i are singular or only a small amount of memory is available, this algorithm is recommended. However, there is a time delay because the backward Kalman filtering cannot start before the last measurement is available.

The MF smoothing formulas simultaneously run forwards Kalman filtering and Fisher-type backwards Kalman filtering algorithms. This feature enables the MF smoothing formulas to pick up the main merits of both the above two methods: fast computation in both real-time and batch processing, and flexibility in handling change of Π_0. The major drawback, however, is that they require a somewhat larger amount of memory, O(Nn²).

In conclusion, we note that the main features of our new square-root algorithms are that they use square-root arrays formed from the state estimates and their error covariances and that they avoid matrix inversion and backsubstitution steps in forming the state estimates. These features provide many advantages over conventional algorithms with respect to systolic array and parallel implementations as well as with respect to numerical stability and conditioning.
REFERENCES

[1] J. E. Potter and R. G. Stern, "Statistical filtering of space navigation measurements," in Proc. 1963 AIAA Guidance Contr. Conf., p. 5.
[2] S. F. Schmidt, "Computational techniques in Kalman filtering," in Theory Appl. Kalman Filtering, C. T. Leondes, Ed., NATO Advisory Group for Aerospace Research and Development, AGARDograph 139, Feb. 1970.
[3] G. H. Golub, "Numerical methods for solving linear least squares problems," Numer. Math., vol. 7, pp. 206-216, 1965.
[4] P. Dyer and S. McReynolds, "Extension of square-root filtering to include process noise," J. Optim. Theory Appl., vol. 3, pp. 444-459, 1969.
[5] P. G. Kaminski, A. E. Bryson, and S. F. Schmidt, "Discrete square-root filtering: A survey of current techniques," IEEE Trans. Automat. Contr., vol. AC-16, pp. 727-736, 1971.
[6] M. Morf and T. Kailath, "Square root algorithms for least squares estimation," IEEE Trans. Automat. Contr., vol. AC-20, pp. 487-497, Aug. 1975.
[7] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[8] T. Kailath, Lectures on Wiener and Kalman Filtering. NY: Springer-Verlag, 1981.
[9] P. G. Kaminski, "Square-root filtering and smoothing for discrete processes," Ph.D. dissertation, Dept. of Aero. and Astr., Stanford Univ., Stanford, CA, 1971.
[10] G. J. Bierman, "A new computationally efficient fixed-interval, discrete-time smoother," Automatica, vol. 19, p. 503, 1983.
[11] G. J. Bierman, "Sequential square-root filtering and smoothing of discrete linear systems," Automatica, vol. 10, pp. 147-158, 1974.
[12] K. Watanabe and S. G. Tzafestas, "New computationally efficient formula for backwards-pass fixed-interval smoother and its UD factorization algorithm," IEE Proc. D, vol. 136, no. 2, pp. 73-78, 1989.
[13] S. R. McReynolds, "Covariance factorization algorithms for fixed-interval smoothing of linear discrete dynamic systems," IEEE Trans. Automat. Contr., vol. 35, pp. 1181-1183, Oct. 1990.
[14] K. Watanabe, "A new forward-pass fixed-interval smoother using the U-D information matrix factorization," Automatica, vol. 22, pp. 465-475, 1986.
[15] U. Desai, H. Weinert, and G. Yusypchuk, "Discrete-time complementary models and smoothing algorithms," IEEE Trans. Automat. Contr., vol. AC-28, pp. 536-539, Apr. 1983.
[16] J. R. Dobbins, "Covariance factorization techniques for least squares estimation," Ph.D. dissertation, Dept. of Elec. Eng., Stanford Univ., Stanford, CA, Jan. 1979.
[17] P. Park and T. Kailath, "Square-root Bryson-Frazier smoothing algorithms," IEEE Trans. Automat. Contr., vol. 40, pp. 761-766, Apr. 1995.
[18] A. E. Bryson and M. Frazier, "Smoothing for linear and nonlinear dynamic systems," Aero. Syst. Div., Wright-Patterson Air Force Base, OH, Tech. Rep. TDR 63-119, pp. 353-364, Feb. 1963.
[19] S. Y. Kung, VLSI Array Processors. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[20] N. Petkov, Systolic Parallel Processing. NY: North-Holland, 1993.
[21] H. E. Rauch, F. Tung, and C. T. Striebel, "Maximum likelihood estimates of linear dynamic systems," AIAA J., vol. 3, pp. 1445-1450, Aug. 1965.
[22] D. Q. Mayne, "A solution of the smoothing problem for linear dynamic systems," Automatica, vol. 4, pp. 73-92, 1966.
[23] D. C. Fraser, "A new technique for the optimal smoothing of data," Ph.D. dissertation, Massachusetts Inst. of Tech., Cambridge, MA, 1967.
[24] F. M. F. Gaston and G. W. Irwin, "Systolic Kalman filtering: An overview," IEE Proc. D, vol. 137, no. 4, pp. 235-244, 1990.
[25] M. Moonen, "Implementing the square-root information Kalman filter on a Jacobi-type systolic array," J. VLSI Signal Processing, vol. 8, pp. 283-292, Dec. 1994.
[26] P. Park and T. Kailath, "New square-root algorithms for Kalman filtering," IEEE Trans. Automat. Contr., vol. 40, pp. 895-899, May 1995.
[27] S. F. Hsieh, K. J. R. Liu, and K. Yao, "A unified square-root-free approach for QRD-based recursive least squares estimation," IEEE Trans. Signal Processing, vol. 41, pp. 1405-1409, Mar. 1993.
[28] S. Haykin, Adaptive Filter Theory, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[29] K. Åström, G. Goodwin, and P. Kumar, Adaptive Control, Filtering, and Signal Processing. NY: Springer-Verlag, 1995.
[30] F. M. F. Gaston and G. W. Irwin, "VLSI architectures for square root covariance Kalman filtering," in Proc. SPIE Int. Soc. Opt. Eng., Aug. 1989, pp. 44-55.
