Authorized licensed use limited to: University of Gavle. Downloaded on July 01,2021 at 13:19:31 UTC from IEEE Xplore. Restrictions apply.
728 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 41, NO. 5, MAY 1996
shall provide their square-root versions. An interesting conclusion from our results is that the apparently most computationally intensive traditional algorithm, the two-filter solution of Mayne (1966) and Fraser (1967), has the conceptually least complex square-root form.

In Case 2, the square-root version of the RTS smoothing formulas is not very different from that for the BF smoothing formulas in [17, Proposition III.1]. Multiplying the left side of (1) by $P_t^{-*/2}$ yields the following backward recursions for $P_t^{-*/2}\hat{x}_{t|N}$.
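In all of the square-root algorithms discussed here, the unitary operator that zeros out a designated block of a pre-array is never formed from an explicit formula; numerically it can be obtained from a QR factorization of the transposed pre-array row. A minimal numpy sketch (the block sizes and matrix names are illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2

# Hypothetical pre-array row [A B]; A, B are illustrative placeholders.
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
pre_row = np.hstack([A, B])            # the (1,1) and (1,2) blocks

# Theta is any unitary that zeros the (1,2) block when applied on the right.
# The full Q factor of a QR factorization of the transposed row does the job:
Q, _ = np.linalg.qr(pre_row.T, mode="complete")   # Q is (n+m) x (n+m), unitary
post_row = pre_row @ Q

assert np.allclose(post_row[:, n:], 0)            # (1,2) block is zeroed
# Rotations preserve row outer products, which is what the
# inner- and cross-product identities of the arrays rely on:
assert np.allclose(pre_row @ pre_row.T, post_row @ post_row.T)
```

Because only products of the pre-array with itself matter, any unitary completion returned by the QR routine is acceptable, which is exactly why the text says "any unitary operator."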
However, if the $F_t$ are nonsingular, we can significantly reduce the amount of memory and computation as we did for the BF smoothing formulas. Thus for (3), we shall use $F_{s,t}$ as formed in [17, Proposition III.3] and then explain how to reduce the number of operations required.

where $\Theta$ is any unitary operator that zeros out the (1, 2) entry of the pre-array. This procedure requires inversion of $(P_{t|N}^{*/2})$.

Remark 3: Watanabe and Tzafestas [12] suggested a square-root version of the RTS smoothing formulas using a rank-2 UD covariance factorization. Their approach was based on the following equations:
computation than [17, Proposition III.3] and Algorithm II.2. However, this algorithm requires inversion of $P_{t|N}^{-*/2}$.

Remark 5: The modified SRCF algorithm (Algorithm III.2 in [26]) was also found by Gaston and Irwin in 1989 [30].

III. DWY (OR BACKWARDS RTS) SMOOTHING FORMULAS

The smoothing formulas of Desai, Weinert, and Yusypchuk [15] separate out the dependence on $\Pi_0$ by using (Fisher-type) backward Kalman filtering formulas (with infinite "initial" covariance). The equations are

$$\hat{x}_{t+1|N} = (I + G_t Q_t G_t^* L_{t+1}^b)^{-1}(F_t \hat{x}_{t|N} + G_t Q_t G_t^* z_{t+1}^b) \tag{9}$$

$$P_{t+1|N} = (I + G_t Q_t G_t^* L_{t+1}^b)^{-1}\big(F_t P_{t|N} F_t^* + G_t Q_t (Q_t^{-1} + G_t^* L_{t+1}^b G_t) Q_t G_t^*\big)(I + L_{t+1}^b G_t Q_t G_t^*)^{-1}$$

with initial conditions

$$\hat{x}_{0|N} = (I + \Pi_0 L_0^b)^{-1}\Pi_0 z_0^b, \qquad P_{0|N} = (I + \Pi_0 L_0^b)^{-1}\Pi_0$$

where, for $L_{N+1}^b = 0$ and $z_{N+1}^b = 0$,

$$z_t^b = F_t^*(I + L_{t+1}^b G_t Q_t G_t^*)^{-1} z_{t+1}^b + H_t^* R_t^{-1} y_t \tag{10}$$

$$L_t^b = F_t^*(I + L_{t+1}^b G_t Q_t G_t^*)^{-1} L_{t+1}^b F_t + H_t^* R_t^{-1} H_t.$$

A square-root algorithm for the DWY smoothing formulas is essentially based on the SRIF formulas. For (10), consider the following arrays:

$$[\text{pre-array}]\,\Theta_t^b = [\text{post-array}] \tag{11}$$

where $\Theta_t^b$ is any unitary operator that zeros out the (1, 2) entry of the pre-array. The smoothed estimates and the error covariances are

$$[\text{equation (12)}]$$

where we can easily identify that $X = R_{b,t}^{*/2}$. Moreover, as in Algorithm II.2, the error covariance $P_{t|N}$ can also be found in square-root form. Therefore, without further explanation, we shall now present the corresponding square-root version.

Algorithm III.1 (Square-Root DWY):

Step 1: With $L_{N+1}^{b/2} = 0$ and $z_{N+1}^b = 0$, propagate $z_t^b$ via the backward recursions (12). Save the variables $(R_{b,t}^{*/2})$, … for Step 3.

Step 2: Using $(\Pi_0^{1/2})$, $(L_0^{b/2})$, and $(L_0^{-b/2} z_0^b)$, construct $(P_{0|N}^{-1/2}\hat{x}_{0|N})$ and $(P_{0|N}^{1/2})$, where $\Theta_0$ is any unitary operator that zeros out the (1, 2) entry of the pre-array. As a result of this rotation we obtain $(P_{0|N}^{1/2})$ and $(P_{0|N}^{-1/2}\hat{x}_{0|N})$; therefore, $\hat{x}_{0|N} = (P_{0|N}^{1/2})(P_{0|N}^{-1/2}\hat{x}_{0|N})$.

Step 3: Using $(P_{0|N}^{-1/2}\hat{x}_{0|N})$ and $(P_{0|N}^{1/2})$, propagate $(P_{t|N}^{-1/2}\hat{x}_{t|N})$ and $(P_{t|N}^{1/2})$.
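Given $L_0^b$ and $z_0^b$ from a backward pass, the initial conditions $\hat{x}_{0|N} = (I + \Pi_0 L_0^b)^{-1}\Pi_0 z_0^b$ and $P_{0|N} = (I + \Pi_0 L_0^b)^{-1}\Pi_0$ can be evaluated with a linear solve rather than an explicit matrix inverse. A small sketch with illustrative numbers (not taken from the paper):

```python
import numpy as np

# Illustrative values for Pi_0, L_0^b, z_0^b.
Pi0 = np.array([[1.0]])
L0 = np.array([[0.9]])
z0 = np.array([1.3])

n = Pi0.shape[0]
A = np.eye(n) + Pi0 @ L0                 # I + Pi_0 L_0^b
x0_N = np.linalg.solve(A, Pi0 @ z0)      # x_hat_{0|N} = (I + Pi_0 L_0^b)^{-1} Pi_0 z_0^b
P0_N = np.linalg.solve(A, Pi0)           # P_{0|N}     = (I + Pi_0 L_0^b)^{-1} Pi_0
```

Using `solve` instead of `inv` mirrors, in a small way, the paper's broader point that explicit inversions should be avoided for numerical reliability.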
Mayne [22] and Fraser [23] suggested the so-called "two-filter" smoothing formulas

$$\hat{x}_{t|N} = P_{t|N}(P_t^{-1}\hat{x}_t + z_t^b), \qquad P_{t|N}^{-1} = P_t^{-1} + L_t^b. \tag{13}$$

Due to their nature, the MF smoothing formulas perform well in terms of speed in real-time processing and flexibility in handling variation of $\Pi_0$; nevertheless, the MF smoothing formulas are considered to have a great computational disadvantage because of the several matrix inversion and backsubstitution steps. We shall now show how to overcome this handicap by introducing an appropriate square-root algorithm.

We have already shown how to construct either $\{(P_t^{*/2}), (P_t^{-1/2}\hat{x}_t)\}$ or $\{(L_t^{b/2}), (L_t^{-b/2} z_t^b)\}$ in [26, Proposition III.5] and Algorithm III.1, respectively. Since we cannot find a direct connection between the above quantities and the components in (13), we need to construct certain intermediate square-root arrays. By judicious use of the above quantities, we shall now introduce the following array, where $\{X_t, Y_t, \alpha_t, \beta_t\}$ are to be determined:

$$[\text{array (14)}]$$

where $\Theta_t$ is any unitary operator that (block) lower-triangularizes the pre-array. Applying inner- and cross-products of the array rows yields

$$X_t X_t^* = P_t^{*/2} L_t^b P_t^{1/2} + I = P_t^{*/2} P_{t|N}^{-1} P_t^{1/2} = (P_t^{*/2} P_{t|N}^{-*/2})(P_{t|N}^{-1/2} P_t^{1/2})$$

$$Y_t X_t^* = P_t^{1/2} = (P_{t|N}^{1/2})(P_{t|N}^{-1/2} P_t^{1/2})$$

$$X_t \alpha_t = (P_t^{-1/2}\hat{x}_t) = (P_t^{*/2} P_{t|N}^{-*/2})(P_{t|N}^{*/2} P_t^{-1}\hat{x}_t)$$

$$X_t \beta_t = (P_t^{*/2} L_t^{b/2})(L_t^{-b/2} z_t^b) = (P_t^{*/2} P_{t|N}^{-*/2})(P_{t|N}^{*/2} z_t^b).$$

Algorithm IV.1 (Square-Root MF):

Step 1-Forward Estimate: … where $\Theta_t^f$ is any unitary operator that lower-triangularizes the pre-array.

Step 2-Backward Estimate: With $L_{N+1}^{b/2} = 0$ and $L_{N+1}^{-b/2} z_{N+1}^b = 0$, propagate and save $L_t^{b/2}$ and $L_t^{-b/2} z_t^b$ using the backward recursions (11).

Step 3-Smoothed Estimate: Using the quantities in Steps 1 and 2, construct $\{Y_t, \alpha_t, \beta_t\}$ in (14) and compute the smoothed estimates and error covariances via (15) and (16).

In batch processing, where all the measurements are in hand before running the smoothing formulas, the MF smoothing formulas are the fastest of all the smoothing formulas. The calculations require memory on the order of $(N+1)n + (N+1)n^2$ for saving either $\{(P_t^{*/2}), (P_t^{-1/2}\hat{x}_t)\}$ or $\{(L_t^{b/2}), (L_t^{-b/2} z_t^b)\}$. Therefore, if we have to calculate error covariances, Algorithm IV.1 demands less memory than [17, Proposition III.1] and Algorithm II.1 but more memory than Algorithm III.1. There are certain cases where we do not need to compute error covariances (see, e.g., adaptive filtering [28], [29] in communications). Even in such cases, however, the MF smoothing formulas still need to compute the error covariances to obtain the smoothed estimates, which is not the case for [17, Proposition III.3] and Algorithm II.2, corresponding to the RTS and DWY smoothing formulas. Therefore, Algorithm IV.1 requires more memory than [17, Proposition III.3] and Algorithm II.2.

Remark 7: The MF (or two-filter) smoothing formulas that Dobbins used in [16] employed the recursions … In his square-root version, the first inversion steps appeared when $\hat{x}_t$ was constructed; others arose in the following procedures used to compute $\hat{x}_{t|N}$ from $\hat{x}_t$ and $z_t^b$.
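The combination step (13) is easy to state in code, and doing so makes plain where the inversions that the square-root algorithm avoids actually occur. A minimal sketch for real-valued matrices (the function name is mine); the final assertion checks the identity $P_t^{*/2} L_t^b P_t^{1/2} + I = P_t^{*/2} P_{t|N}^{-1} P_t^{1/2}$ that underlies the array rows above:

```python
import numpy as np

def mf_combine(x_f, P_f, z_b, L_b):
    """Two-filter (Mayne-Fraser) combination, eq. (13):
    P_{t|N}^{-1} = P_t^{-1} + L_t^b,  x_{t|N} = P_{t|N} (P_t^{-1} x_t + z_t^b)."""
    P_inv = np.linalg.inv(P_f)
    P_s = np.linalg.inv(P_inv + L_b)     # smoothed error covariance P_{t|N}
    x_s = P_s @ (P_inv @ x_f + z_b)      # smoothed estimate x_{t|N}
    return x_s, P_s

x_f = np.array([1.0]); P_f = np.array([[2.0]])   # forward filtered quantities
z_b = np.array([0.5]); L_b = np.array([[0.5]])   # backward information quantities
x_s, P_s = mf_combine(x_f, P_f, z_b, L_b)

# Identity behind the array rows: P^{T/2} L^b P^{1/2} + I = P^{T/2} P_{t|N}^{-1} P^{1/2}
S = np.linalg.cholesky(P_f)                       # P_f = S S^T
assert np.allclose(S.T @ L_b @ S + np.eye(1), S.T @ np.linalg.inv(P_s) @ S)
```

In the square-root version both `inv` calls disappear: the triangularization delivers $Y_t = P_{t|N}^{1/2}$ directly, and the smoothed quantities follow from $\{Y_t, \alpha_t, \beta_t\}$ as in Step 3.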