

On the Identification of Variances and Adaptive Kalman Filtering

RAMAN K. MEHRA

Abstract: A Kalman filter requires an exact knowledge of the process noise covariance matrix Q and the measurement noise covariance matrix R. Here we consider the case in which the true values of Q and R are unknown. The system is assumed to be constant, and the random inputs are stationary. First, a correlation test is given which checks whether a particular Kalman filter is working optimally or not. If the filter is suboptimal, a technique is given to obtain asymptotically normal, unbiased, and consistent estimates of Q and R. This technique works only for the case in which the form of Q is known and the number of unknown elements in Q is less than n x r, where n is the dimension of the state vector and r is the dimension of the measurement vector. For other cases, the optimal steady-state gain K_op is obtained directly by an iterative procedure without identifying Q. As a corollary, it is shown that the steady-state optimal Kalman filter gain K_op depends only on n x r linear functionals of Q. The results are first derived for discrete systems. They are then extended to continuous systems. A numerical example is given to show the usefulness of the approach.
I. INTRODUCTION

THE OPTIMUM filtering results of Kalman and Bucy [1], [2] for linear dynamic systems require an exact knowledge of the process noise covariance matrix Q and the measurement noise covariance matrix R. In a number of practical situations, Q and R are either unknown or are known only approximately. Heffes [3] and Nishimura [4] have considered the effect of errors in Q and R on the performance of the optimal filter. Several other investigators [5]-[9] have proposed on-line schemes to identify Q and R. Most of these schemes do well in identifying R but run into difficulties in identifying Q. Moreover, their extension to continuous cases is not clear.

A different approach has been taken in this paper. It is assumed that the system under consideration is time invariant, completely controllable, and observable [2]. Both the system and the filter (optimal or suboptimal) are assumed to have reached steady-state conditions. First, a correlation test is performed on the filter to check whether it is working optimally or not. The test is based on the innovation property of an optimal filter [10]. If the filter is suboptimal, the autocorrelation function of the innovation process is used to obtain asymptotically unbiased and consistent estimates of Q and R. The method has the limitation that the number of unknown elements in Q must be less than n x r, where n is the dimension of the state vector and r is the dimension of the measurement vector. It is shown that in spite of this limitation, the optimal steady-state filter gain can be obtained by an iterative procedure. As a corollary, it is shown that the Kalman filter gain depends only on n x r linear relationships between the elements of Q.

A numerical example is included to illustrate the application of the results derived in the paper. The extension of the results to the continuous case is straightforward and is given in the last section.

(Manuscript received August 2, 1968; revised May 16, 1969. This paper was presented at the 1969 Joint Automatic Control Conference, Boulder, Colo. The author was with the Analytic Sciences Corporation, Reading, Mass. 01867. He is now with Systems Control, Inc., Palo Alto, Calif. 94306.)
II. STATEMENT OF THE PROBLEM

System

Consider a multivariable linear discrete system

x_{i+1} = \Phi x_i + \Gamma u_i    (1)

z_i = H x_i + v_i    (2)

where x_i is the n x 1 state vector, \Phi is the n x n nonsingular transition matrix, \Gamma is the n x q constant input matrix, z_i is the r x 1 measurement vector, and H is the r x n constant output matrix.

The sequences u_i (q x 1) and v_i (r x 1) are uncorrelated Gaussian white noise sequences with means and covariances as follows:

E{u_i} = 0, \quad E{u_i u_j^T} = Q \delta_{ij}
E{v_i} = 0, \quad E{v_i v_j^T} = R \delta_{ij}
E{u_i v_j^T} = 0 \quad for all i, j

where E{.} denotes the expectation and \delta_{ij} denotes the Kronecker delta function.

Q and R are bounded positive definite matrices (Q > 0, R > 0). The initial state x_0 is normally distributed with zero mean and covariance P_0.

The system is assumed to be completely observable and controllable, i.e.,

rank [H^T, (H\Phi)^T, \ldots, (H\Phi^{n-1})^T] = n
rank [\Gamma, \Phi\Gamma, \ldots, \Phi^{n-1}\Gamma] = n.
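For concreteness, the following sketch simulates a sample path of the model (1)-(2). It is not part of the original paper; the function name and interface are our own, and a zero initial state is assumed for simplicity.

```python
import numpy as np

def simulate(Phi, Gamma, H, Q, R, N, seed=None):
    """Generate measurements z_0, ..., z_{N-1} from the model (1)-(2)."""
    rng = np.random.default_rng(seed)
    n, q, r = Phi.shape[0], Gamma.shape[1], H.shape[0]
    x = np.zeros(n)                                    # zero-mean initial state
    zs = np.empty((N, r))
    for i in range(N):
        zs[i] = H @ x + rng.multivariate_normal(np.zeros(r), R)        # (2)
        x = Phi @ x + Gamma @ rng.multivariate_normal(np.zeros(q), Q)  # (1)
    return zs
```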

Filter

Let Q_0 and R_0 be the initial estimates of Q and R (Q_0 > 0, R_0 > 0). Using these estimates, let the steady-state Kalman filter gain be K_0 (an n x r matrix):¹

K_0 = M_0 H^T (H M_0 H^T + R_0)^{-1}    (3)

M_0 = \Phi [M_0 - M_0 H^T (H M_0 H^T + R_0)^{-1} H M_0] \Phi^T + \Gamma Q_0 \Gamma^T.    (4)

M_0 may be recognized as the steady-state solution to the covariance equations of Kalman [1].

The filtering equations are

\hat{x}_{i+1/i} = \Phi \hat{x}_{i/i}    (5)

\hat{x}_{i/i} = \hat{x}_{i/i-1} + K_0 (z_i - H \hat{x}_{i/i-1})    (6)

where \hat{x}_{i+1/i} is the estimate of x_{i+1} based on all the measurements up to i, i.e., {z_0, ..., z_i}.

In an optimal Kalman filter (i.e., when Q_0 = Q and R_0 = R), M_0 is the covariance of the error in estimating the state. But in a suboptimal case, the covariance of the error (M_1) is given by the following equation [3]:

M_1 = \Phi [M_1 - K_0 H M_1 - M_1 H^T K_0^T + K_0 (H M_1 H^T + R) K_0^T] \Phi^T + \Gamma Q \Gamma^T    (7)

where M_1 = E{(x_i - \hat{x}_{i/i-1})(x_i - \hat{x}_{i/i-1})^T}.

¹ The conditions of complete controllability and observability together with the positive definiteness of Q_0 and R_0 ensure the asymptotic global stability of the Kalman filter. See Deyst and Price [11].
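The steady-state gain (3) and covariance (4) can be computed by iterating (4) to a fixed point. The sketch below is ours, not the paper's; under the stated controllability and observability conditions any positive semidefinite starting value works.

```python
import numpy as np

def steady_state_gain(Phi, Gamma, H, Q0, R0, tol=1e-10, max_iter=100000):
    """Iterate the covariance recursion (4) to its fixed point M0, then form K0 by (3)."""
    M = Gamma @ Q0 @ Gamma.T
    for _ in range(max_iter):
        S = H @ M @ H.T + R0                  # innovation covariance under (Q0, R0)
        M_new = Phi @ (M - M @ H.T @ np.linalg.solve(S, H @ M)) @ Phi.T \
                + Gamma @ Q0 @ Gamma.T
        done = np.max(np.abs(M_new - M)) < tol
        M = M_new
        if done:
            break
    K0 = M @ H.T @ np.linalg.inv(H @ M @ H.T + R0)   # (3)
    return K0, M
```

The filter (5)-(6) then runs with this constant gain, which is what makes the steady-state correlation analysis of the following sections possible.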
Problem

The true values of Q and R are unknown. It is required to

1) check whether the Kalman filter constructed using some estimates of Q and R is close to optimal or not (hypothesis testing),
2) obtain unbiased and consistent estimates of Q and R (statistical estimation), and
3) adapt the Kalman filter at regular intervals using all the previous information (adaptive filtering).

To solve these problems, we make use of the innovation property of an optimal filter [10].
III. THE INNOVATION PROPERTY OF AN OPTIMAL FILTER²

Statement

For an optimal filter, the sequence \nu_i = (z_i - H \hat{x}_{i/i-1}), known as the innovation sequence, is a Gaussian white noise sequence.

Proof: A direct proof is obtained using the orthogonality principle of linear estimation [10].³ Let e_i = x_i - \hat{x}_{i/i-1} denote the error in estimating the state. Then

\nu_i = H e_i + v_i

E{\nu_i \nu_j^T} = E{(H e_i + v_i)(H e_j + v_j)^T}.    (8)

For i > j, v_i is independent of e_j and v_j:

E{\nu_i \nu_j^T} = E{H e_i (H e_j + v_j)^T} = E{H e_i (z_j - H \hat{x}_{j/j-1})^T}.

The orthogonality principle states that e_i is orthogonal to {z_k, k < i}. Since \hat{x}_{j/j-1} depends only on {z_k, k < j}, we conclude that

E{\nu_i \nu_j^T} = 0, \quad for i > j.

Similarly, E{\nu_i \nu_j^T} = 0 for i < j. For i = j, E{\nu_i \nu_i^T} = H M H^T + R. Further, since \nu_i is a linear sum of Gaussian random variables, it is also Gaussian. Hence \nu_i is a Gaussian white noise sequence.

Heuristically, the innovation \nu_i represents the new information brought by z_i. Kailath [10] shows that \nu_i and z_i contain the same statistical information and are equivalent as far as linear operations are concerned. Schweppe [12] shows that \nu_i can be obtained from z_i by a Gram-Schmidt orthogonalization (or a whitening) procedure.

In this paper, we use the innovation sequence to check the optimality of a Kalman filter and to estimate Q and R. With this in mind, we investigate the effect of suboptimality on the innovation sequence.

² For a detailed discussion, see Kailath [10].
³ An alternate proof will be given in Section IV.

IV. INNOVATION SEQUENCE FOR A SUBOPTIMAL FILTER

Let K denote the steady-state filter gain. We will show that under steady state, the innovation sequence \nu_i is a stationary Gaussian sequence:

\nu_i = z_i - H \hat{x}_{i/i-1} = H e_i + v_i

E{\nu_i \nu_{i-k}^T} = H E{e_i e_{i-k}^T} H^T + H E{e_i v_{i-k}^T}, \quad for k > 0.

A recursive relationship can be obtained for e_i by using (1), (2), (5), and (6):

e_i = \Phi (I - KH) e_{i-1} - \Phi K v_{i-1} + \Gamma u_{i-1}.    (9)

Carrying (9) k steps back,

e_i = [\Phi(I - KH)]^k e_{i-k} - \sum_{j=1}^{k} [\Phi(I - KH)]^{j-1} \Phi K v_{i-j} + \sum_{j=1}^{k} [\Phi(I - KH)]^{j-1} \Gamma u_{i-j}.    (10)

Postmultiplying (10) by e_{i-k}^T and taking expectations,

E{e_i e_{i-k}^T} = [\Phi(I - KH)]^k M

where M is the steady-state error covariance matrix. An expression for M is obtained directly from (9) or from (7):

M = \Phi(I - KH) M (I - KH)^T \Phi^T + \Phi K R K^T \Phi^T + \Gamma Q \Gamma^T.    (11)

Postmultiplying (10) by v_{i-k}^T and taking expectations,

E{e_i v_{i-k}^T} = -[\Phi(I - KH)]^{k-1} \Phi K R.

Therefore,

E{\nu_i \nu_{i-k}^T} = H [\Phi(I - KH)]^{k-1} \Phi [M H^T - K (H M H^T + R)], \quad k > 0.

When k = 0, E{\nu_i \nu_i^T} = H M H^T + R.

It is seen that the autocorrelation function of \nu_i does not depend on i. Therefore, \nu_i is a stationary Gaussian random sequence (Gaussian because of linearity) and we can define

C_k = E{\nu_i \nu_{i-k}^T}.

Then

C_k = H M H^T + R, \quad k = 0    (12)

C_k = H [\Phi(I - KH)]^{k-1} \Phi [M H^T - K C_0], \quad k > 0.    (13)

Furthermore, C_{-k} = C_k^T.

[Fig. 1. Normalized autocorrelation function of the innovation process. (a) Suboptimal filter. (b) Optimal filter. (Arrows indicate points for which the 95 percent confidence limits do not enclose zero.)]

Notice that the optimal choice of K, viz. K = M H^T (H M H^T + R)^{-1}, makes C_k vanish for all k != 0 (the innovation property).
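Equations (11)-(13) can be evaluated numerically to inspect how fast the innovation autocorrelation of a suboptimal filter decays. The following sketch is ours; it uses SciPy's discrete Lyapunov solver for (11).

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.linalg import solve_discrete_lyapunov

def innovation_autocovariance(Phi, Gamma, H, K, Q, R, max_lag):
    """Theoretical C_0, ..., C_max_lag from (11)-(13) for a given gain K."""
    n = Phi.shape[0]
    F = Phi @ (np.eye(n) - K @ H)                       # Phi(I - KH)
    W = Phi @ K @ R @ K.T @ Phi.T + Gamma @ Q @ Gamma.T
    M = solve_discrete_lyapunov(F, W)                   # steady-state M of (11)
    C0 = H @ M @ H.T + R                                # (12)
    C = [C0]
    for k in range(1, max_lag + 1):
        C.append(H @ matrix_power(F, k - 1) @ Phi @ (M @ H.T - K @ C0))  # (13)
    return C
```

With the optimal gain, every C_k for k > 0 returned by this routine is zero up to rounding, which is the innovation property just noted.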
V. A TEST OF OPTIMALITY FOR A KALMAN FILTER

From the discussion of the preceding two sections, it is clear that a necessary and sufficient condition for the optimality of a Kalman filter is that the innovation sequence \nu_i be white. This condition can be tested statistically by a number of different methods [13], [16]-[19]. Here we consider a particular method given in Jenkins and Watts [13].

In this method, we obtain an estimate of C_k, denoted as \hat{C}_k, by using the ergodic property of a stationary random sequence

\hat{C}_k = (1/N) \sum_{i=k}^{N} \nu_i \nu_{i-k}^T    (14)

where N is the number of sample points.

The estimates \hat{C}_k are biased for finite sample sizes:

E{\hat{C}_k} = (1 - k/N) C_k.    (15)

In case an unbiased estimate is desired, we divide by (N - k) instead of N in (14). However, it is shown in [13] that the estimate of (14) is preferable since it gives less mean-square error than the corresponding unbiased estimate.

An expression for the covariance of \hat{C}_k can be derived by straightforward manipulation, but the general results are rather involved. We quote here approximate results for large N given in Bartlett [14]:

cov([\hat{C}_k]_{ij}, [\hat{C}_l]_{pq}) \approx (1/N) \sum_{t=-\infty}^{\infty} ([C_t]_{ip} [C_{t+l-k}]_{jq} + [C_{t+l}]_{iq} [C_{t-k}]_{jp})    (16)

where [\hat{C}_k]_{ij} denotes the element in the ith row and the jth column of the matrix \hat{C}_k, and cov(a, b) denotes the covariance of a and b, viz.

cov(a, b) \triangleq E{[a - E(a)][b - E(b)]}.

It is seen from (13) that C_k -> 0 for large k. It can be shown⁴ that the infinite series in (16) has a finite sum so that the covariance of \hat{C}_k is proportional to 1/N. Thus, the estimates are asymptotically unbiased and consistent. Moreover, since all the eigenvalues of \Phi(I - KH) lie inside the unit circle, \nu_i belongs to the class of linear processes [14] for which Parzen [15] has shown that the \hat{C}_k are asymptotically normal.

For the white noise case, (16) is greatly simplified by putting C_k = 0 for all k != 0:

cov([\hat{C}_k]_{ij}, [\hat{C}_l]_{pq}) = 0, \quad k != l
= (1/N) [C_0]_{ip} [C_0]_{jq}, \quad k = l > 0
= (1/N) ([C_0]_{ip} [C_0]_{jq} + [C_0]_{iq} [C_0]_{jp}), \quad k = l = 0.    (17)

Estimates of the normalized autocorrelation coefficients \rho_k are obtained by dividing the elements of \hat{C}_k by the appropriate elements of \hat{C}_0, e.g.,

[\hat{\rho}_k]_{ij} = [\hat{C}_k]_{ij} / ([\hat{C}_0]_{ii} [\hat{C}_0]_{jj})^{1/2}.    (18)

Of particular interest here are the diagonal elements of \hat{\rho}_k for the case of white noise. Using (17), we can show that

var([\hat{\rho}_k]_{ii}) = 1/N + O(1/N^2).    (19)

Further, [\hat{\rho}_k]_{ii}, like [\hat{C}_k]_{ii}, are asymptotically normal [15]. Therefore, the 95 percent confidence limits for [\hat{\rho}_k]_{ii}, k > 0, are +-(1.96/N^{1/2}), or equivalently the 95 percent confidence limits for [\hat{C}_k]_{ii} are +-(1.96/N^{1/2}) [\hat{C}_0]_{ii}.

Test

Look at a set of values for [\hat{\rho}_k]_{ij}, k > 0, and check the number of times they lie outside the band +-(1.96/N^{1/2}). If this number is less than 5 percent of the total, the sequence \nu_i is white. (Examples of a nonwhite and a white sequence are shown in Fig. 1. See the example in Section IX.)

⁴ The proof is essentially similar to the one for proving the stability of a Kalman filter [11].

This test is based on the assumption of large N. If N is small, other tests proposed by Anderson [17] and Hannan [16], etc., may be used. Jenkins and Watts [13] also give a frequency domain test which is useful if there are slow periodic components in the time series.
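A sketch of the whiteness test of this section (ours, not the paper's): compute the sample autocovariances (14), normalize the diagonal elements by (18), and count how many lie outside the 95 percent band.

```python
import numpy as np

def sample_autocovariances(nu, max_lag):
    """Biased estimates (14): C_hat[k] = (1/N) sum_{i=k}^{N-1} nu_i nu_{i-k}^T."""
    N, r = nu.shape
    C = np.zeros((max_lag + 1, r, r))
    for k in range(max_lag + 1):
        for i in range(k, N):
            C[k] += np.outer(nu[i], nu[i - k])
    return C / N

def whiteness_test(nu, max_lag=40):
    """Fraction of diagonal [rho_k]_ii, k > 0, outside the band implied by (19)."""
    N, r = nu.shape
    C = sample_autocovariances(nu, max_lag)
    d = np.sqrt(np.diag(C[0]))
    band = 1.96 / np.sqrt(N)
    outside = 0
    for k in range(1, max_lag + 1):
        rho = C[k] / np.outer(d, d)                    # (18)
        outside += int(np.sum(np.abs(np.diag(rho)) > band))
    total = max_lag * r
    return outside, total, (outside / total) < 0.05    # white if under 5 percent
```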
VI. ESTIMATION OF Q AND R

If the test of Section V reveals that the filter is suboptimal, the next step will be to obtain better estimates of Q and R. This can be done using the \hat{C}_k computed earlier. The method proceeds in three steps.

1) Obtain an estimate of M H^T using (13). Rewriting (13) explicitly,

C_1 = H \Phi M H^T - H \Phi K C_0
C_2 = H \Phi^2 M H^T - H \Phi^2 K C_0 - H \Phi K C_1
\vdots
C_n = H \Phi^n M H^T - H \Phi^n K C_0 - \cdots - H \Phi K C_{n-1}.

Therefore

M H^T = B^{+} \begin{bmatrix} C_1 + H \Phi K C_0 \\ C_2 + H \Phi K C_1 + H \Phi^2 K C_0 \\ \vdots \\ C_n + H \Phi K C_{n-1} + \cdots + H \Phi^n K C_0 \end{bmatrix}    (20)

where B^{+} is the pseudo-inverse of the matrix [1]

B = \begin{bmatrix} H \Phi \\ H \Phi^2 \\ \vdots \\ H \Phi^n \end{bmatrix}.

Notice that B is the product of the observability matrix and the nonsingular transition matrix \Phi. Therefore

rank(B) = n \quad and \quad B^{+} = (B^T B)^{-1} B^T.

Denoting by \widehat{MH^T} the estimate⁵ of M H^T and using (20), we can write

\widehat{MH^T} = B^{+} \begin{bmatrix} \hat{C}_1 + H \Phi K \hat{C}_0 \\ \vdots \\ \hat{C}_n + H \Phi K \hat{C}_{n-1} + \cdots + H \Phi^n K \hat{C}_0 \end{bmatrix}.    (21)

An alternate form for \widehat{MH^T} can be obtained directly from (13):

\widehat{MH^T} = A^{+} \begin{bmatrix} \hat{C}_1 \\ \hat{C}_2 \\ \vdots \\ \hat{C}_n \end{bmatrix} + K \hat{C}_0    (22)

where

A = \begin{bmatrix} H \Phi \\ H \Phi(I - KH) \Phi \\ \vdots \\ H [\Phi(I - KH)]^{n-1} \Phi \end{bmatrix}.

In numerical computation, it has been found preferable to use (22) since the matrix A is better conditioned than the matrix B. (This is an experimental observation.)

2) Obtain an estimate of R using (12):

\hat{R} = \hat{C}_0 - H (\widehat{MH^T}).    (23)

⁵ A caret over a product such as M H^T always implies that the product is estimated as a single symbol.
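Steps 1) and 2) amount to one stacked least-squares solve. A sketch, with helper names of our choosing, assuming the \hat{C}_k have already been computed by (14):

```python
import numpy as np

def estimate_MHT_and_R(Phi, H, K, C_hat):
    """Estimate M H^T by (22) and R by (23); C_hat is the list [C_0, ..., C_n]."""
    n = Phi.shape[0]
    F = Phi @ (np.eye(n) - K @ H)
    blocks, G = [], Phi.copy()
    for _ in range(n):
        blocks.append(H @ G)           # rows of A: H [Phi(I - KH)]^k Phi
        G = F @ G
    A = np.vstack(blocks)
    c = np.vstack(C_hat[1:n + 1])      # stacked [C_1; ...; C_n]
    MHT = np.linalg.pinv(A) @ c + K @ C_hat[0]   # (22)
    R = C_hat[0] - H @ MHT                       # (23)
    return MHT, R
```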

3) Obtain an estimate of Q using (11).

This step gets complicated due to the fact that only the estimate of M H^T, instead of M, is available. Consequently, only n x r linear relationships between the unknown elements of Q are available. If the number of unknowns in Q is n x r or less, a solution can be obtained. But if the number of unknowns in Q is greater than n x r, a unique solution cannot be obtained. However, it will be shown in the next section that a unique solution for the optimal gain K_op can still be obtained.

Restricting ourselves to the case in which the number of unknowns in Q is n x r or less, we can solve for the unknown elements of Q by rewriting (11) as follows:

M = \Phi M \Phi^T + \Omega + \Gamma Q \Gamma^T    (24)

where

\Omega = \Phi [-K H M - M H^T K^T + K C_0 K^T] \Phi^T.

Substituting back for M on the right-hand side of (24),

M = \Phi^2 M (\Phi^2)^T + \Phi \Omega \Phi^T + \Omega + \Phi \Gamma Q \Gamma^T \Phi^T + \Gamma Q \Gamma^T.    (25)

Repeating the same procedure n times and separating the terms involving Q on the left-hand side of the equation, we obtain

\sum_{j=0}^{k-1} \Phi^j \Gamma Q \Gamma^T (\Phi^j)^T = M - \Phi^k M (\Phi^k)^T - \sum_{j=0}^{k-1} \Phi^j \Omega (\Phi^j)^T, \quad k = 1, \ldots, n.    (26)

Premultiplying both sides of (26) by H and postmultiplying by (\Phi^{-k})^T H^T, we obtain

\sum_{j=0}^{k-1} H \Phi^j \Gamma Q \Gamma^T (\Phi^{j-k})^T H^T = H M (\Phi^{-k})^T H^T - H \Phi^k M H^T - \sum_{j=0}^{k-1} H \Phi^j \Omega (\Phi^{j-k})^T H^T, \quad k = 1, \ldots, n.    (27)

The right-hand side of (27) is completely determined from M H^T and C_0; note that H M = (M H^T)^T and that \Omega involves M only through M H^T and its transpose. Substituting their estimated values, we obtain

\sum_{j=0}^{k-1} H \Phi^j \Gamma \hat{Q} \Gamma^T (\Phi^{j-k})^T H^T = (\widehat{MH^T})^T (\Phi^{-k})^T H^T - H \Phi^k \widehat{MH^T} - \sum_{j=0}^{k-1} H \Phi^j \hat{\Omega} (\Phi^{j-k})^T H^T, \quad k = 1, \ldots, n    (28)

where

\hat{\Omega} = \Phi [-K (\widehat{MH^T})^T - \widehat{MH^T} K^T + K \hat{C}_0 K^T] \Phi^T.    (29)

The set of equations (28) is not linearly independent. In any particular case, one has to choose a linearly independent subset of these equations. The procedure will be illustrated by an example in Section IX.

The preceding identification scheme is shown schematically in Fig. 2.
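For a Q known to be diagonal, as in the example of Section IX, each side of (28) is linear in the unknown diagonal elements, and a least-squares solution over a chosen set of lags is straightforward. The sketch below is ours; it builds the coefficient of each unknown q_l by substituting a unit matrix E_l for Q on the left-hand side of (28).

```python
import numpy as np
from numpy.linalg import matrix_power, inv

def estimate_diagonal_Q(Phi, Gamma, H, K, MHT, C0, n_lags):
    """Least-squares solution of (28) for Q = diag(q_1, ..., q_m), m <= n*r."""
    m = Gamma.shape[1]
    Omega = Phi @ (-K @ MHT.T - MHT @ K.T + K @ C0 @ K.T) @ Phi.T   # (29)
    Pinv = inv(Phi)
    rows, rhs = [], []
    for k in range(1, n_lags + 1):
        # Right-hand side of (28), determined by MHT and C0 alone.
        S = MHT.T @ matrix_power(Pinv, k).T @ H.T - H @ matrix_power(Phi, k) @ MHT
        for j in range(k):
            S -= H @ matrix_power(Phi, j) @ Omega @ matrix_power(Pinv, k - j).T @ H.T
        # Left-hand side coefficients of each q_l.
        coeffs = np.zeros((S.size, m))
        for l in range(m):
            E = np.zeros((m, m))
            E[l, l] = 1.0
            T = np.zeros_like(S)
            for j in range(k):
                T += H @ matrix_power(Phi, j) @ Gamma @ E @ Gamma.T \
                     @ matrix_power(Pinv, k - j).T @ H.T
            coeffs[:, l] = T.ravel()
        rows.append(coeffs)
        rhs.append(S.ravel())
    q, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return np.diag(q)
```

Since the equations (28) are not all independent, in practice one selects informative rows (the diagonal ones in the example of Section IX) rather than using every entry.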

[Fig. 2. Identification scheme. (Block diagram labels: Physical System; Kalman Filter.)]

VII. DIRECT ESTIMATION OF THE OPTIMAL GAIN

If the number of unknowns in Q is more than n x r, or the structure of Q is unknown, the method of the previous section for estimating Q does not work. However, it is still possible to estimate the optimal gain K_op by an iterative procedure.

Following the notation of Section II, let K_0 denote the initial gain of the Kalman filter. Let M_1 be the error covariance matrix corresponding to K_0. Then M_1 satisfies the following equation [cf. (7)]:

M_1 = \Phi [M_1 - K_0 H M_1 - M_1 H^T K_0^T + K_0 (H M_1 H^T + R) K_0^T] \Phi^T + \Gamma Q \Gamma^T.    (30)

Define

K_1 \triangleq M_1 H^T (H M_1 H^T + R)^{-1}.

Let the error covariance matrix corresponding to the gain K_1 be called M_2. Then

M_2 = \Phi [M_2 - K_1 H M_2 - M_2 H^T K_1^T + K_1 (H M_2 H^T + R) K_1^T] \Phi^T + \Gamma Q \Gamma^T.    (31)

Subtracting (30) from (31) and simplifying,

(M_2 - M_1) = \Phi (I - K_1 H)(M_2 - M_1)(I - K_1 H)^T \Phi^T - \Phi (K_1 - K_0)(H M_1 H^T + R)(K_1 - K_0)^T \Phi^T.    (32)

The solution to (32) can be written as an infinite sum. Then, using the observability and controllability conditions, it can be shown that⁶

M_2 - M_1 < 0 \quad or \quad M_2 < M_1.

Similarly, define K_2 = M_2 H^T (H M_2 H^T + R)^{-1} and M_3 as the corresponding error covariance matrix. Then, by a similar argument,

M_3 < M_2 < M_1.

The above sequence of monotonically decreasing matrices must converge since it is bounded from below (M > 0). Hence, the sequence K_0, K_1, K_2, ... must converge to K_op.

Based on the preceding property of K, we now construct the following scheme for estimating K_op.

1) Obtain an estimate of K_1, denoted as \hat{K}_1, from (22):

\hat{K}_1 = (\widehat{M_1 H^T}) \hat{C}_0^{-1}.    (33)

Also, obtain estimates of M_1 H^T and R from (22) and (23).

2) Define \delta M_1 = M_2 - M_1. Obtain \delta \hat{M}_1, an estimate of \delta M_1, using (32):

\delta \hat{M}_1 = \Phi (I - \hat{K}_1 H) \delta \hat{M}_1 (I - \hat{K}_1 H)^T \Phi^T - \Phi (\hat{K}_1 - K_0) \hat{C}_0 (\hat{K}_1 - K_0)^T \Phi^T.    (34)

\delta \hat{M}_1 can be calculated recursively in the same manner as M_0 is calculated for a Kalman filter. For convergence, it is sufficient that \Phi(I - \hat{K}_1 H) be stable, i.e., that all its eigenvalues lie inside the unit circle.

3) Obtain \widehat{M_2 H^T} and \hat{K}_2 as follows:

\widehat{M_2 H^T} = \widehat{M_1 H^T} + \delta \hat{M}_1 H^T    (35)

\hat{K}_2 = \widehat{M_2 H^T} (H \widehat{M_2 H^T} + \hat{R})^{-1}.    (36)

4) Repeat steps 2) and 3) until ||\delta \hat{M}_i|| or ||\hat{K}_i - \hat{K}_{i-1}|| become small compared to ||\hat{M}_i|| or ||\hat{K}_i||, where ||.|| denotes a suitable matrix norm. An alternative way to get \hat{K}_2 would be to filter the data z_i again using \hat{K}_1 and then use (33).

⁶ The proof is similar to the one by Kalman [1] for showing the positive definiteness of M in (11).
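A compact sketch of the four-step scheme (ours, with our own helper names). At each pass it uses H (M_i H^T) + R for the innovation covariance in (34), which at the first pass equals \hat{C}_0 by (12) and (23).

```python
import numpy as np

def iterate_optimal_gain(Phi, H, K0, M1HT, R, tol=1e-8, max_iter=100):
    """Steps (33)-(36): iterate from K0 toward K_op without knowing Q."""
    n = Phi.shape[0]
    MHT, K_prev = M1HT, K0
    S = H @ MHT + R                                  # estimate of C_0, cf. (12)
    K = MHT @ np.linalg.inv(S)                       # (33)
    for _ in range(max_iter):
        F = Phi @ (np.eye(n) - K @ H)
        W = -Phi @ (K - K_prev) @ S @ (K - K_prev).T @ Phi.T
        dM = np.zeros((n, n))                        # solve (34) by iteration
        for _ in range(100000):
            dM_new = F @ dM @ F.T + W
            done = np.max(np.abs(dM_new - dM)) < 1e-12
            dM = dM_new
            if done:
                break
        MHT = MHT + dM @ H.T                         # (35)
        S = H @ MHT + R
        K_prev, K = K, MHT @ np.linalg.inv(S)        # (36)
        if np.max(np.abs(K - K_prev)) < tol:         # step 4) stopping rule
            break
    return K
```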

This procedure for obtaining K_op reveals an interesting relationship between K_op and Q. It is seen that the equation for (M_2 - M_1) does not involve Q. We need Q only to calculate M_1 H^T. This leads us to the following corollary.

Corollary: It is sufficient to know n x r linear functions of Q in order to obtain the optimal gain of a Kalman filter.

Proof: Consider (30), which can be written as

M_1 = \Phi (I - K_0 H) M_1 (I - K_0 H)^T \Phi^T + \Phi K_0 R K_0^T \Phi^T + \Gamma Q \Gamma^T.

Writing the solution as an infinite series,

M_1 H^T = \sum_{j=0}^{\infty} [\Phi(I - K_0 H)]^j (\Phi K_0 R K_0^T \Phi^T + \Gamma Q \Gamma^T) [(I - K_0 H)^T \Phi^T]^j H^T.

Thus M_1 H^T depends on n x r linear functions of Q; viz.,

\sum_{j=0}^{\infty} [\Phi(I - K_0 H)]^j \Gamma Q \Gamma^T [(I - K_0 H)^T \Phi^T]^j H^T.

If these linear functions are given, we do not need to know Q itself to obtain M_1 H^T. Furthermore, since the equation for (M_2 - M_1) does not involve Q explicitly, the optimal gain K_op can be obtained by knowing the preceding n x r linear functions of Q only.

Notice that a complete knowledge of Q is required to obtain the covariance matrix M of a Kalman filter. If one is interested only in K_op, the preceding corollary shows that a complete knowledge of Q is not essential. Since our iterative scheme tries to identify K_op by whitening the residuals \nu_i, it fails to identify the complete Q matrix if the unknowns in Q are more than n x r.

VIII. STATISTICAL PROPERTIES OF THE ESTIMATES

It was shown in Section V that the estimates \hat{C}_k are asymptotically normal, unbiased, and consistent. Since \widehat{MH^T}, \hat{R}, and \hat{Q} are linearly related to \hat{C}_k, it is easy to show that they are also asymptotically normal, unbiased, and consistent.

The general expressions for the mean and covariance of the estimates are rather involved. We, therefore, specialize to the case of a scalar measurement, using (22), (23), and (28).

For N >> n, the bias in \hat{C}_k is negligible. The covariance of \widehat{MH^T} for large N follows from (22):

cov(\widehat{MH^T}) \approx K var(\hat{C}_0) K^T + A^{+} cov(\hat{c}) (A^{+})^T + K cov(\hat{C}_0, \hat{c}) (A^{+})^T + A^{+} cov(\hat{c}, \hat{C}_0) K^T

where \hat{c} denotes the stacked vector [\hat{C}_1; ...; \hat{C}_n] of (22). Expressions for the covariances of \hat{C}_0 and \hat{c} can be obtained from (16). It can be seen that cov(\widehat{MH^T}) decreases as 1/N for large sample sizes. Similarly, from (23),

var(\hat{R}) = var(\hat{C}_0) + H cov(\widehat{MH^T}) H^T - cov(\hat{C}_0, \widehat{MH^T}) H^T - H cov(\widehat{MH^T}, \hat{C}_0).    (43)

The expressions for E[\hat{Q}] and var([\hat{Q}]_{ij}) can be obtained similarly.
The usefulness of the preceding expressions is limited by the fact that they depend on the actual values of Q and R, which are unknown. If the values of Q and R are known to lie within a certain range, one might use these expressions to plot curves of var(\hat{R}) and var([\hat{Q}]_{ij}) versus N for different values of Q and R. The dependence on Q and R may be removed by considering the covariance of \hat{K}.

It can be shown [14], [15] that the \hat{\rho}_k are asymptotically normal with mean \rho_k and covariance

cov(\hat{\rho}_k, \hat{\rho}_l) \approx (1/N) \sum_{j=-\infty}^{\infty} \rho_j \rho_{j+l-k}.

A satisfactory estimate of cov(\hat{\rho}_k, \hat{\rho}_l) is provided by

(1/N) \sum_{j=-(N-1)}^{N-1} \hat{\rho}_j \hat{\rho}_{j+l-k}

which can be used to calculate cov(\hat{K}). For the special case of an optimal filter, the result reduces to

cov(\hat{K}) \approx (1/N) A^{+} (A^{+})^T = (1/N) (A^T A)^{-1}.    (44)



Equation (44) gives us a simple expression for the minimum variance in estimating K. It can be used in deciding upon the minimum sample size N.

We now consider the asymptotic convergence (N large) of the iterative scheme of Section VII. Equation (34) shows that E[\delta \hat{M}_1] depends on the second and higher order moments of \hat{K}_1, which for a normal process are finite and tend to zero asymptotically. Therefore, asymptotically E[\delta \hat{M}_1] = \delta M_1. Similarly, the covariance of \delta \hat{M}_1 asymptotically tends to zero. Thus, \delta \hat{M}_1 tends to \delta M_1 with probability one. Extending the same argument, \hat{K}_2 -> K_2, \hat{K}_3 -> K_3, \ldots, \hat{K}_n -> K_op with probability one.
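As one illustration (ours, not the paper's) of using (44) for the scalar-measurement case: pick the smallest N whose predicted standard deviation for every element of \hat{K} falls below a target.

```python
import numpy as np

def minimum_sample_size(A, target_std):
    """Smallest N with sqrt(max diag of (A^T A)^{-1} / N) below target_std, cf. (44)."""
    d = np.diag(np.linalg.inv(A.T @ A))      # per-element variance at N = 1
    return int(np.ceil(np.max(d) / target_std ** 2))
```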
IX. A NUMERICAL EXAMPLE FROM INERTIAL NAVIGATION

The results of Sections V and VI are applied to a damped Schuler loop forced by an exponentially correlated stationary random input. Two measurements are made on the system, both of which are corrupted by exponentially correlated as well as white noise type errors. The state of the system is augmented to include all the correlated random inputs so that the augmented state vector x is 5 x 1, the random input vector u is 3 x 1, and the measurement noise vector v is 2 x 1. The system is discretized using a time step of 0.1, and the resultant system matrices are

\Phi = \begin{bmatrix} 0.75 & -1.74 & -0.3 & 0 & -0.15 \\ 0.09 & 0.91 & -0.0015 & 0 & -0.008 \\ 0 & 0 & 0.95 & 0 & 0 \\ 0 & 0 & 0 & 0.55 & 0 \\ 0 & 0 & 0 & 0 & 0.905 \end{bmatrix}

\Gamma = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 24.64 & 0 & 0 \\ 0 & 0.835 & 0 \\ 0 & 0 & 1.83 \end{bmatrix}, \quad H = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \end{bmatrix}

Q = \begin{bmatrix} q_1 & 0 & 0 \\ 0 & q_2 & 0 \\ 0 & 0 & q_3 \end{bmatrix}, \quad R = \begin{bmatrix} r_1 & 0 \\ 0 & r_2 \end{bmatrix}.

The actual values of q_1, q_2, q_3, r_1, and r_2 are unity, but they are assumed unknown. It is required to identify these values using the measurements {z_i, i = 1, ..., N}.

The starting values of Q and R are taken as

Q_0 = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.25 & 0 \\ 0 & 0 & 0.75 \end{bmatrix}, \quad R_0 = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.6 \end{bmatrix}.

Using these values, the innovation sequence \nu_i = (z_i - H \hat{x}_{i/i-1}) is generated from (3) to (6). The estimates \hat{C}_0, \hat{C}_1, \ldots, \hat{C}_k of the autocorrelation are calculated using (14). For a typical sample of 950 points, Fig. 1(a) shows a plot of the first diagonal element of \hat{\rho}_k for k = 0, 1, ..., 40. The 95 percent confidence limits are +-0.0636, and four points lie outside this band (i.e., 10 percent of the total). Therefore, we reject the hypothesis that \nu_i is white. The same conclusion is reached by looking at the second diagonal element of \hat{\rho}_k.

We now proceed to the identification of Q and R. Since the number of unknowns in Q is less than n x r = 10, we can identify Q completely. The set of equations (28) gives us a large number of linear equations for \hat{q}_1, \hat{q}_2, and \hat{q}_3. However, the most important of these occur along the diagonal for k = 1 and k = 5. The diagonal elements of the k = 1 equation are used to calculate \hat{q}_3 and \hat{q}_2. The first diagonal element of the k = 5 equation is then used to calculate \hat{q}_1.

It is possible to use a few other equations and to make a least-squares fit for \hat{q}_1, \hat{q}_2, and \hat{q}_3. This, however, does not alter the results significantly in the present example.

The results obtained by using the identification scheme repeatedly on the same batch of data are shown in Table I. It is seen that most of the identification is done during the first iteration. Further iterations do not increase the likelihood function⁷ much, even though the changes in Q and R are significant. A check case using the true values of Q and R is also shown in Table I. It is seen that the value of the likelihood function in the check case is very close to that in the first iteration. This indicates that the estimates obtained are quite close to the maximum likelihood estimates. It was further noticed that even if different starting values are used for Q and R, the identification scheme converges to the same values.
⁷ The likelihood function L(Q, R) has been given by Schweppe [12]:

L(Q, R) = -(1/N) \sum_{i=1}^{N} \nu_i^T (H M H^T + R)^{-1} \nu_i - \ln |H M H^T + R|.

TABLE I
ESTIMATES OF Q AND R BASED ON A SET OF 950 POINTS

| Number of iterations | \hat{q}_1 | \hat{q}_2 | \hat{q}_3 | \hat{r}_1 | \hat{r}_2 | Likelihood function L(\hat{Q}, \hat{R}) | Outside 95% limits, 1st meas. (percent) | Outside 95% limits, 2nd meas. (percent) | Estimate of actual mean-square error* | Calculated mean-square error† |
| 0 | 0.5 | 0.25 | 0.75 | 0.4 | 0.6 | -5.17 | 10 | 10 | 2915 | 902 |
| 1 | 0.73 | 1.31 | 0.776 | 1.444 | 0.867 | -4.678 | 2.5 | 5 | 2390 | 2755 |
| 2 | 0.87 | | | | | | | | 2725 | 2720 |
| 3 | 0.91 | 1.40 | 0.776 | 1.565 | 0.765 | -4.672 | 2.5 | 5 | 2714 | 2814 |
| 4 | 0.92 | 1.41 | 0.77 | 1.573 | 0.7646 | -4.671 | 2.5 | 5 | 2712 | |
| Check case | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -4.669 | 2.5 | 5 | 2720 | 2900 |

* The estimate of the mean-square error is (1/N) \sum_{i=1}^{N} (x_i - \hat{x}_{i/i-1})^T (x_i - \hat{x}_{i/i-1}), where x_i is obtained by actual simulation.
† The calculated mean-square error is tr(\hat{M}_1), where \hat{M}_1 is obtained from the variance equation using \hat{Q} and \hat{R} [cf. (4)].

We now check the optimality of the filter after identification. Fig. 1(b) shows a plot of the first diagonal element of \hat{\rho}_k for k = 0, 1, ..., 40. It is seen that only one point lies outside the band of 95 percent confidence limits (2.5 percent of the total). This supports the hypothesis that \nu_i is white.

The asymptotic convergence of Q and R towards their actual values is shown in Fig. 3. The estimates of Q and R are updated after every batch of N points (N = 950). In the absence of any knowledge about the variances of the estimates, a simple averaging of all the previous values is performed. This is equivalent to the following stochastic approximation scheme [5]:

\hat{Q}_{k+1} = \hat{Q}_k + [1/(k+1)] (\hat{Q}_{k+1,b} - \hat{Q}_k)    (45)

\hat{R}_{k+1} = \hat{R}_k + [1/(k+1)] (\hat{R}_{k+1,b} - \hat{R}_k)    (46)

where k denotes the batch number, \hat{Q}_k the estimate of Q after k batches, \hat{Q}_{k+1,b} the estimate of Q based on the (k+1)th batch, and \hat{Q}_{k+1} the estimate of Q after (k+1) batches.

[Fig. 3. On-line identification of Q and R. (Plots of the estimates versus batch number.)]
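In code form, the batch updates (45) and (46) are one line each (a trivial sketch, ours):

```python
def batch_update(Q_prev, R_prev, Q_batch, R_batch, k):
    """Stochastic approximation (45)-(46): running average over batches."""
    Q_next = Q_prev + (Q_batch - Q_prev) / (k + 1)
    R_next = R_prev + (R_batch - R_prev) / (k + 1)
    return Q_next, R_next
```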
X. CONTINUOUS SYSTEM

The results of the previous sections can be extended to continuous systems. We simply state the results below.⁸

System

\dot{x} = F x + G u    (47)

z = H x + v.    (48)

Filter

\dot{\hat{x}} = F \hat{x} + K_0 (z - H \hat{x})    (49)

where

K_0 = P_0 H^T R_0^{-1}    (50)

and

F P_0 + P_0 F^T + G Q_0 G^T - P_0 H^T R_0^{-1} H P_0 = 0.    (51)

The error covariance P_1 is given as

(F - K_0 H) P_1 + P_1 (F - K_0 H)^T + G Q G^T + K_0 R K_0^T = 0.    (52)

Innovation Process

\nu = z - H \hat{x} = H e + v    (53)

where e = (x - \hat{x}). For an optimal filter, \nu is white with the same covariance as v [10]. For a suboptimal filter,

\dot{e} = (F - K_0 H) e + G u - K_0 v    (54)

⁸ These results have not been applied to a practical problem so far.

and the autocorrelation function C(\tau) of \nu is given as

C(\tau) = E{\nu(t) \nu^T(t - \tau)}
= H E{e(t) e^T(t - \tau)} H^T + H E{e(t) v^T(t - \tau)} + R \delta(\tau), \quad \tau \geq 0
= H e^{\bar{F}\tau} [P_1 H^T - K_0 R] + R \delta(\tau), \quad \bar{F} = F - K_0 H.    (55)

Let S(\omega) denote the Fourier transform of C(\tau):

S(\omega) = H (i\omega I - \bar{F})^{-1} (P_1 H^T - K_0 R) + (H P_1 - R K_0^T)(-i\omega I - \bar{F}^T)^{-1} H^T + R.    (56)

Test of Optimality and the Estimation of Q and R

We may use either the estimates of C(\tau) or of S(\omega) to test the optimality of the Kalman filter and to identify Q and R. These estimates are obtained by using methods given in [13].

P_1 H^T and R may be obtained from the set of equations (55) or (56) by using methods very similar to the discrete case. If the number of unknowns in Q is n x r or less, Q can be obtained using (52). We obtain expressions for

\sum_{j=0}^{k-1} (-1)^j H F^j G Q G^T (F^T)^{k-j} H^T, \quad k = 1, 2, \ldots

[the set of equations analogous to (28)].
If the number of unknowns in Q is more than n x r, K_op is obtained directly without identifying Q. The procedure is as follows. Define

K_1 = P_1 H^T R^{-1}.    (57)

Let P_2 be the error covariance corresponding to K_1. Then it can be shown that

(F - K_1 H)(P_2 - P_1) + (P_2 - P_1)(F - K_1 H)^T - (K_1 - K_0) R (K_1 - K_0)^T = 0.    (58)

Therefore

P_2 < P_1.

Similarly, define K_2 = P_2 H^T R^{-1} and let P_3 be the error covariance for K_2. Then

P_3 < P_2 < P_1.

In this way, P is decreased at each step and the sequence K_0, K_1, K_2, ... converges to K_op.

Equation (58) is now used to obtain \hat{K}_op, an estimate of K_op. After obtaining \hat{K}_1 = \hat{P}_1 H^T \hat{R}^{-1}, we substitute it in (58) to get an estimate of \delta P_1 = P_2 - P_1:

(F - \hat{K}_1 H) \delta \hat{P}_1 + \delta \hat{P}_1 (F - \hat{K}_1 H)^T - (\hat{K}_1 - K_0) \hat{R} (\hat{K}_1 - K_0)^T = 0.    (59)

Then

\hat{P}_2 H^T = \hat{P}_1 H^T + \delta \hat{P}_1 H^T    (60)

and

\hat{K}_2 = \hat{P}_2 H^T \hat{R}^{-1}    (61)

and so on, until the relative changes in \hat{K} become small. We omit the proof of the asymptotic convergence of these estimates since it is essentially similar to the discrete case. All the estimates obtained are asymptotically unbiased and consistent.
Equation (58) is now used to obtain kOp, an estimate ei error in state estimation.
of K,. After obtaining = FlBT&l, me substitute it
Scalars
in (58) to get an estimate of 8P1 = P2 - P1: n,q,r Dimension variables
( F - &H) 8@1+ (F - k1H) N sample size
[Ck]ij element in the ith row and the jth column of
- (KI - KO)&(&- K o )=~ 0. (59) the matrix Ck
Then i d (. r.) Kroneckerdeltaand the delta function
6 ..
F,BT = fill?' + 8$1HT (60) L(&,R) likelihood function.

Operations
E{.}    expected value operator
cov(., .)    covariance operator
var(.)    variance operator
(.)^T    transpose of a matrix
(.)^{+}    pseudo-inverse of a matrix
(.)^{-1}    inverse of a matrix
||.||    norm of a matrix

ACKNOWLEDGMENT

The author wishes to thank all his colleagues at The Analytic Sciences Corporation for their help during the course of this research, and C. L. Bradley in particular for many stimulating discussions and ideas.
REFERENCES

[1] R. E. Kalman, "New methods and results in linear prediction and filtering theory," Proc. Symp. on Engineering Applications of Random Function Theory and Probability. New York: Wiley, 1961.
[2] R. E. Kalman and R. S. Bucy, "New results in linear filtering and prediction theory," Trans. ASME, J. Basic Engrg., ser. D, vol. 83, pp. 95-108, March 1961.
[3] H. Heffes, "The effects of erroneous models on the Kalman filter response," IEEE Trans. Automatic Control (Short Papers), vol. AC-11, pp. 541-543, July 1966.
[4] T. Nishimura, "Error bounds of continuous Kalman filters and the application to orbit determination problems," IEEE Trans. Automatic Control, vol. AC-12, pp. 268-275, June 1967.
[5] R. C. K. Lee, "Optimal estimation, identification and control," Massachusetts Institute of Technology, Cambridge, Mass., Mono. 28, 1961.
[6] D. T. Magill, "Optimal adaptive estimation of sampled stochastic processes," Stanford Electronics Labs., Stanford, Calif., Tech. Rept. SEL-63-143 (TR 6307-3), December 1963.
[7] J. S. Shellenberger, "A multivariance learning technique for improved dynamic system performance," Proc. NEC, vol. 23, pp. 146-151.
[8] G. L. Smith, "Sequential estimation of observation error variances in a trajectory estimation problem," AIAA J., vol. 5, pp. 1964-1970, November 1967.
[9] R. L. Kashyap, "Maximum likelihood identification of stochastic linear systems," Purdue University, Lafayette, Ind., Tech. Rept. TR-EE 68-28, August 1968.
[10] T. Kailath, "An innovations approach to least-squares estimation, pt. I: linear filtering in additive white noise," IEEE Trans. Automatic Control, vol. AC-13, pp. 646-655, December 1968.
[11] J. J. Deyst, Jr., and C. F. Price, "Conditions for asymptotic stability of the discrete minimum variance linear estimator," IEEE Trans. Automatic Control (Short Papers), vol. AC-13, pp. 702-705, December 1968.
[12] F. C. Schweppe, "Evaluation of likelihood functions for Gaussian signals," IEEE Trans. Information Theory, vol. IT-11, pp. 61-70, January 1965.
[13] G. M. Jenkins and D. G. Watts, Spectral Analysis and Its Applications. San Francisco: Holden-Day, 1968.
[14] M. S. Bartlett, An Introduction to Stochastic Processes. London: Cambridge University Press, 1962.
[15] E. Parzen, "An approach to time series analysis," Ann. Math. Statist., vol. 32, no. 4, December 1961.
[16] E. J. Hannan, Time Series Analysis. New York: Wiley, 1960.
[17] R. L. Anderson, "Distribution of the serial correlation coefficient," Ann. Math. Statist., vol. 13, 1942.
[18] G. S. Watson and J. Durbin, "Exact tests of serial correlation using noncircular statistics," Ann. Math. Statist., vol. 22, pp. 446-451, 1951.
[19] C. W. J. Granger, "A quick test for serial correlation suitable for use with non-stationary time series," Am. Statist. Assoc. J., pp. 728-730, September 1963.

Raman K. Mehra (S'64-M'68) was born in Lahore, Pakistan, on February 10, 1943. He received the B.S. degree in electrical engineering from Punjab Engineering College, Chandigarh, India, in 1964, and the S.M. and Ph.D. degrees in engineering from Harvard University, Cambridge, Mass., in 1965 and 1968, respectively.

He worked at Bell Telephone Laboratories, Inc., Andover, Mass., during the summer of 1965. From 1966 to 1967, he was a Research Assistant at Harvard University. From 1967 to 1969, he was employed by the Analytic Sciences Corp., Reading, Mass., where he applied modern estimation and control theory to problems in inertial navigation. He is presently with Systems Control, Inc., Palo Alto, Calif. His interests lie in the areas of trajectory optimization; linear, nonlinear, and adaptive filtering; smoothing and system identification; stochastic control; and pattern recognition.
