
Discrete Kalman Filter Tutorial

Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu

1 Introduction
Consider the following stochastic dynamic model and the sequence of noisy observations zk :

xk = f(xk−1 , uk−1 , wk−1 , k) (1)


zk = h(xk , uk , vk , k) (2)

Also let x0 be the random initial condition of the system and

Zk = {zi |1 ≤ i ≤ k} (3)

be the set of the first k observations. Finding xak , the estimate or analysis of the state xk , given Zk
and the initial conditions, is called the filtering problem. When the dynamic model for the process,
f(·), and for the measurements, h(·), are linear, and the random components x0 , wk , vk are
uncorrelated Gaussian random vectors, the solution is given by the classical Kalman filter equations [7].

The Kalman filter is named after Rudolph E. Kalman, who in 1960 published his famous paper
describing a recursive solution to the discrete-data linear filtering problem (Kalman 1960) [11]. It is
the optimal estimator for a large class of problems, finding the most probable state as an unbiased
linear minimum variance estimate of a system based on discrete observations of the system and a
model which describes the evolution of the system [5].

2 Dynamic process
A stochastic time-variant linear system is described by the difference equation and the observation
model:

xk = Ak−1 xk−1 + Bk−1 uk−1 + wk−1 (4)


zk = Hk xk + vk (5)

where the control input uk is a known, nonrandom vector. The initial state x0 is a random vector
with known mean µ0 = E[x0 ] and covariance P0 = E[(x0 − µ0 )(x0 − µ0 )T ].

In the following we assume that the random vector wk captures uncertainties in the model and vk
denotes the measurement noise. Both are temporally uncorrelated (white noise), zero-mean random
sequences with known covariances and both of them are uncorrelated with the initial state x0 .

E[wk ] = 0    E[wk wTk ] = Qk    E[wk wTj ] = 0 for k ≠ j    E[wk xT0 ] = 0 for all k        (6)
E[vk ] = 0    E[vk vTk ] = Rk    E[vk vTj ] = 0 for k ≠ j    E[vk xT0 ] = 0 for all k        (7)

Also the two random vectors wk and vk are uncorrelated:

E[wk vTj ] = 0 for all k and j (8)

The assumptions of unbiasedness and uncorrelatedness are not critical: extensions of the Kalman
Filter can be derived for situations in which they do not hold.

Dimension and description of variables:

xk    n × 1  −  State vector
uk    l × 1  −  Input/control vector
wk    n × 1  −  Process noise vector
zk    m × 1  −  Observation vector
vk    m × 1  −  Measurement noise vector
Ak    n × n  −  State transition matrix
Bk    n × l  −  Input/control matrix
Hk    m × n  −  Observation matrix
Qk    n × n  −  Process noise covariance matrix
Rk    m × m  −  Measurement noise covariance matrix
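
To make these dimensions concrete, the following sketch instantiates (4)-(5) for a simple one-dimensional
constant-velocity tracking model (n = 2, l = 1, m = 1). The model and all numerical values are
illustrative assumptions, not part of the tutorial:

```python
import numpy as np

# Illustrative instantiation of the model (4)-(5): a 1-D constant-velocity
# tracker with n = 2 states (position, velocity), l = 1 control input
# (commanded acceleration) and m = 1 observation (noisy position).
# All numerical values below are assumptions chosen only to fix the shapes.
dt = 0.1                                    # sampling interval
A = np.array([[1.0, dt],
              [0.0, 1.0]])                  # Ak: state transition, n x n
B = np.array([[0.5 * dt**2],
              [dt]])                        # Bk: control matrix, n x l
H = np.array([[1.0, 0.0]])                  # Hk: observation matrix, m x n
Q = 0.01 * np.eye(2)                        # Qk: process noise covariance, n x n
R = np.array([[0.25]])                      # Rk: measurement noise covariance, m x m

mu0 = np.zeros(2)                           # mean of the initial state x0
P0 = np.eye(2)                              # covariance of the initial state

# One draw of the stochastic model: xk = A x_{k-1} + B u_{k-1} + w_{k-1},
# zk = H xk + vk, with w and v sampled from N(0, Q) and N(0, R).
rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu0, P0)
u = np.array([0.2])                         # known, nonrandom control input
w = rng.multivariate_normal(np.zeros(2), Q)
x_next = A @ x + B @ u + w
z = H @ x_next + rng.multivariate_normal(np.zeros(1), R)
```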

3 KF derivation
The optimal (minimum variance unbiased) estimate is the conditional mean and is computed in two
steps: the forecast step using the model difference equations and the data assimilation step. Hence
the Kalman Filter has a "predictor-corrector" structure.

Model Forecast Step


Initially, the only available information is the mean, µ0 , and the covariance, P0 , of the initial
state, so the initial optimal estimate xa0 and its error covariance are:

xa0 = µ0 = E[x0 ] (9)


P0 = E[(x0 − xa0 )(x0 − xa0 )T ] (10)

Assume now that at time k − 1 we have an optimal estimate xak−1 ≡ E[xk−1 |Zk−1 ] with covariance
Pk−1 ≡ E[(xk−1 − xak−1 )(xk−1 − xak−1 )T ]. The predictable part of xk is given by:

xfk ≡ E[xk |Zk−1 ] (11)


= E[Ak−1 xk−1 + Bk−1 uk−1 + wk−1 ]
= Ak−1 xak−1 + Bk−1 uk−1

The forecast error is:

efk ≡ xk − xfk (12)


= Ak−1 (xk−1 − xak−1 ) + wk−1
= Ak−1 ek−1 + wk−1

The forecast error covariance is given by:

Pfk ≡ E[efk (efk )T ]                                                    (13)
    = E[(Ak−1 ek−1 + wk−1 )(Ak−1 ek−1 + wk−1 )T ]
    = Ak−1 E[ek−1 (ek−1 )T ]ATk−1 + Qk−1
    = Ak−1 Pk−1 ATk−1 + Qk−1                                             (14)

Data Assimilation Step


At time k we have two pieces of information: the forecast value xfk with the covariance Pfk and the
measurement zk with the covariance Rk . We know that:

xak ≡ E[xk |Zk ] (15)


= E[xk |Zk−1 ] + E[xk |zk ]

Assume that the last term is a linear operation on the innovation zk − Hk xfk [10] (see also [9]:
Projection theorem p. 408 and Kalman innovations p. 443). The innovation represents the new
information contained in the observation zk .

E[xk |zk ] = Kk (zk − Hk xfk ) (16)

Therefore:

xak = xfk + Kk (zk − Hk xfk )


= (I − Kk Hk )xfk + Kk zk

So, the easiest way to combine the two pieces of information is to assume that the unbiased estimate
xak is a linear combination of both the forecast and the measurement:

xak = Lk xfk + Kk zk    where Lk = I − Kk Hk                             (17)

That is, the optimal estimate at time k equals the best prediction plus a correction term given by an
optimal weighting matrix, Kk , times the innovation, as in (17) [8].

Substitute (4), (5) and (11) into (16):

xak = Ak−1 xak−1 + Bk−1 uk−1 + Kk (Hk xk + vk − Hk (Ak−1 xak−1 + Bk−1 uk−1 )) (18)
= Ak−1 xak−1 + Bk−1 uk−1 + Kk (Hk Ak−1 (xk−1 − xak−1 ) + Hk wk−1 + vk )

Figure 1: Sequential assimilation

The error in the estimate xak is:

ek ≡ xk − xak (19)
= Ak−1 ek−1 − Kk Hk Ak−1 ek−1 + (I − Kk Hk )wk−1 − Kk vk
= (I − Kk Hk )(Ak−1 ek−1 + wk−1 ) − Kk vk

Then, the posterior covariance of the new estimate is:

Pk ≡ E[ek eTk ]                                                          (20)
   = E[(Lk (Ak−1 ek−1 + wk−1 ) − Kk vk )(Lk (Ak−1 ek−1 + wk−1 ) − Kk vk )T ]
   = Lk E[(Ak−1 ek−1 + wk−1 )(Ak−1 ek−1 + wk−1 )T ]LTk + Kk E[vk vTk ]KTk
   = Lk (Ak−1 Pk−1 ATk−1 + Qk−1 )LTk + Kk Rk KTk
   = (I − Kk Hk )Pfk (I − Kk Hk )T + Kk Rk KTk
   = Pfk − Kk Hk Pfk − Pfk HTk KTk + Kk Dk KTk                            (21)

where

Dk = Hk Pfk HTk + Rk (22)

The posterior covariance formula above holds for any gain Kk . The cross terms cancel because wk−1
and vk are uncorrelated with each other and with ek−1 , which is a function of xk−1 and of earlier noise.
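
Because the expanded form in (20)-(21) is valid for an arbitrary gain, the line
(I − Kk Hk )Pfk (I − Kk Hk )T + Kk Rk KTk (the so-called Joseph form) is often preferred in practice,
since it preserves the symmetry and positive semi-definiteness of Pk in floating-point arithmetic.
A minimal sketch with placeholder matrices, comparing it to the short form (I − Kk Hk )Pfk derived
below, at the optimal gain:

```python
import numpy as np

# Sketch: the update Pk = (I - Kk Hk) Pfk (I - Kk Hk)^T + Kk Rk Kk^T from the
# chain (20)-(21) is valid for ANY gain Kk ("Joseph form") and is numerically
# better behaved than the short form (I - Kk Hk) Pfk, which holds only at the
# optimal gain. Matrices below are placeholders, not taken from the tutorial.
def posterior_covariance_joseph(Pf, H, R, K):
    I = np.eye(Pf.shape[0])
    L = I - K @ H
    return L @ Pf @ L.T + K @ R @ K.T

Pf = np.array([[2.0, 0.5],
               [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)      # optimal gain, eq. (27)
P_joseph = posterior_covariance_joseph(Pf, H, R, K)
P_short = (np.eye(2) - K @ H) @ Pf                  # simplified form, eq. (28)
assert np.allclose(P_joseph, P_short)               # agree at the optimal gain
```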

Our goal is to minimize the error in the estimate eki for every state component i = 1, . . . , n. The
problem is constructed as a mean squared error minimizer. The cost functional to be minimized is
given by [1]:

J = E[ Σi=1..n e²ki ]                                                    (23)

This is the sum of error variances for each state variable. Therefore the cost functional can be expressed
as the trace of the error covariance:
J = tr(Pk ) (24)
Since tr(Pk ) is a function of Kk and Kk is the only unknown, we minimize tr(Pk ) with respect to Kk :

∂tr(Pk )/∂Kk = 0                                                          (25)

The partial derivative of the trace is easily obtained using matrix calculus rules [4]:

∂tr(Pfk − Kk Hk Pfk − Pfk HTk KTk + Kk Dk KTk )/∂Kk = 0                    (26)
−(Hk Pfk )T − Pfk HTk + 2Kk Dk = 0

Thus, the Kalman gain is given by:

Kk = Pfk HTk D−1k                                                          (27)
   = Pfk HTk (Hk Pfk HTk + Rk )−1

Substituting this back into (21):

Pk = Pfk − Kk Hk Pfk − Pfk HTk (D−1k )T Hk (Pfk )T + Pfk HTk D−1k Dk (D−1k )T Hk (Pfk )T      (28)
   = (I − Kk Hk )Pfk
Note that the covariance Pk does not directly depend on the observations, zk , or on the input vector.
This property makes it possible to compute and analyse the covariance matrix (and the gain sequence)
in the absence of any observations.
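
As a concrete illustration, the gain and covariance recursions can be run offline, before any data are
collected. The sketch below assumes a small time-invariant model with placeholder matrices; it is not
tied to any particular application:

```python
import numpy as np

# Sketch: because Pfk, Kk and Pk never reference zk or uk, the whole gain and
# covariance sequence can be propagated offline, before any measurements
# arrive. The time-invariant model below is an illustrative placeholder.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

P = np.eye(2)                                        # P0
gains = []
for _ in range(50):
    Pf = A @ P @ A.T + Q                             # forecast covariance (14)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain (27)
    P = (np.eye(2) - K @ H) @ Pf                     # posterior covariance (28)
    gains.append(K)

# For a time-invariant model satisfying the usual observability/controllability
# conditions the gain settles to a steady-state value; here we just print it.
print(gains[-1])
```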

4 Summary of Kalman Filter


Model and Observation:
xk = Ak−1 xk−1 + Bk−1 uk−1 + wk−1
zk = Hk xk + vk
Initialization:
xa0 = µ0 with error covariance P0
Model Forecast Step/Predictor:
xfk = Ak−1 xak−1 + Bk−1 uk−1
Pfk = Ak−1 Pk−1 ATk−1 + Qk−1
Data Assimilation Step/Corrector:
xak = xfk + Kk (zk − Hk xfk )
Kk = Pfk HTk (Hk Pfk HTk + Rk )−1
Pk = (I − Kk Hk )Pfk
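
The summary above translates directly into code. The sketch below is a minimal NumPy transcription
of one predict/update cycle; the function name and the choice of passing time-invariant matrices as
arguments are illustrative, not prescribed by the tutorial:

```python
import numpy as np

# A minimal sketch of one predict/update cycle, transcribing the summary
# above. Matrix arguments are assumed to be NumPy arrays with the dimensions
# listed in Section 2; the function name is an illustrative choice.
def kf_step(xa, P, u, z, A, B, H, Q, R):
    # Model forecast step / predictor
    xf = A @ xa + B @ u                              # x^f_k
    Pf = A @ P @ A.T + Q                             # P^f_k
    # Data assimilation step / corrector
    S = H @ Pf @ H.T + R                             # innovation covariance Dk
    K = Pf @ H.T @ np.linalg.inv(S)                  # Kalman gain Kk
    xa_new = xf + K @ (z - H @ xf)                   # x^a_k
    P_new = (np.eye(len(xa)) - K @ H) @ Pf           # Pk
    return xa_new, P_new

# Usage: starting from xa, P = mu0, P0, call
#     xa, P = kf_step(xa, P, u_k, z_k, A, B, H, Q, R)
# once per measurement z_k, in time order.
```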

Figure 2: The block diagram for the Kalman Filter (blocks: Dynamics and Observation Model, Kalman Filter; Innovation)

5 KF original derivation
The following derivation follows Kalman's original line of reasoning [10]; the notation has been
changed for consistency with the rest of the tutorial. The optimal estimate for the system (4)-(5) is
derived using orthogonal projections on a vector space of random variables.

Orthogonal Projection
Let the vector space Zk be the set of all linear combinations of the random variables (observations)
z1 , . . . , zk . Zk is a finite-dimensional subspace of the space of all possible observations.

Zk ≡ { z | z = Σi=1..k αi zi }                                            (29)

Two vectors u, v ∈ Zk are orthogonal if their correlation is zero. Any vector x can be uniquely
decomposed into two parts: x̄ ∈ Zk and x̃ ⊥ Zk .

x = x̄ + x̃                                                               (30)
Theorem [10]: Let {xk }, {zk } be random processes with zero mean. Assume that either (1) the
random processes are Gaussian, or (2) the optimal estimate is restricted to be a linear function of the
observed random variables and the loss function L(ek ) is quadratic in ek = xk − x̂k , where L(·) is a
positive non-decreasing function of the error. Then

x̂k = optimal estimate of xk given {zk }                                  (31)
    = orthogonal projection x̄k of xk on Zk
Derivation
Assume Zk−1 is known and zk is measured. Let z̃k be the component of zk orthogonal to Zk−1 . The
component z̃k generates a linear manifold Z̃k .

Zk = Zk−1 ∪ Z̃k                                                           (32)

Every vector in Z̃k is orthogonal to every vector in Zk−1 .

Assume that the optimal estimate xak−1 is known; then:

xak ≡ E[xk |Zk ]                                                          (33)
    = E[xk |Zk−1 ] + E[xk |Z̃k ]
    = xfk + E[xk |Z̃k ]

where the forecast value xfk of xk can be obtained as in (11) and the forecast covariance matrix is
given by (13):

xfk = Ak−1 xak−1 + Bk−1 uk−1                                              (34)

Pfk = Ak−1 Pk−1 ATk−1 + Qk−1                                              (35)

Assume that the last term in (33) is a linear operation on the random variable z̃k (called the innovation):

E[xk |Z̃k ] = Kk z̃k                                                       (36)

where

z̃k = zk − z̄k                                                             (37)

and z̄k is the orthogonal projection of zk on Zk−1 . So:

z̄k = E[zk |Zk−1 ]                                                         (38)
    = E[Hk xk + vk |Zk−1 ]
    = Hk xfk

Hence, (33) becomes:

xak = xfk + Kk (zk − Hk xfk )                                             (39)
    = (I − Kk Hk )xfk + Kk zk

The estimate error:

ek ≡ xk − xak (40)
= Ak−1 ek−1 − Kk Hk Ak−1 ek−1 + (I − Kk Hk )wk−1 − Kk vk
= (I − Kk Hk )(Ak−1 ek−1 + wk−1 ) − Kk vk

Therefore the error covariance matrix, derived as in (20), is:

Pk ≡ E[ek eTk ] (41)


= Pfk − Kk Hk Pfk − Pfk HTk KTk + Kk Dk KTk

where

Dk = Hk Pfk HTk + Rk (42)

We have to find an explicit formula for Kk by noting that the residual xk − E[xk |Z̃k ] is orthogonal to
Z̃k , and therefore it is orthogonal to z̃k . It results that:

0 = E[(xk − E[xk |Z̃k ])z̃Tk ]                                              (43)
  = E[(xk − Kk z̃k )z̃Tk ]
  = E[xk z̃Tk ] − Kk E[z̃k z̃Tk ]
We know that xk = x̄k + x̃k , where x̄k ∈ Zk−1 ; therefore x̄k ⊥ Z̃k and so x̄k ⊥ z̃k .

E[xk z̃Tk ] = E[(x̄k + x̃k )z̃Tk ]                                            (44)
           = E[x̃k z̃Tk ]
           = E[(xk − x̄k )z̃Tk ]
           = E[(Ak−1 xk−1 + Bk−1 uk−1 + wk−1 − E[xk |Zk−1 ])z̃Tk ]
           = E[(Ak−1 xk−1 + Bk−1 uk−1 + wk−1 − Ak−1 xak−1 − Bk−1 uk−1 )z̃Tk ]
           = E[(Ak−1 ek−1 + wk−1 )z̃Tk ]
           = Ak−1 E[ek−1 (zk − Hk xfk )T ] + E[wk−1 (zk − Hk xfk )T ]
We can obtain an expression for the innovation as a function of the estimation error:

zk − Hk xfk = Hk xk + vk − Hk xfk                                         (45)
            = Hk (xk − xfk ) + vk
            = Hk (Ak−1 xk−1 + Bk−1 uk−1 + wk−1 − Ak−1 xak−1 − Bk−1 uk−1 ) + vk
            = Hk Ak−1 ek−1 + Hk wk−1 + vk

Substituting this into (44):

E[xk z̃Tk ] = Ak−1 E[ek−1 (Hk Ak−1 ek−1 + Hk wk−1 + vk )T ]                 (46)
             + E[wk−1 (Hk Ak−1 ek−1 + Hk wk−1 + vk )T ]
           = Ak−1 Pk−1 ATk−1 HTk + Qk−1 HTk
           = Pfk HTk
The last term from (43) is:

E[z̃k z̃Tk ] = E[(Hk Ak−1 ek−1 + Hk wk−1 + vk )(Hk Ak−1 ek−1 + Hk wk−1 + vk )T ]      (47)
           = Hk Ak−1 Pk−1 ATk−1 HTk + Hk Qk−1 HTk + Rk
           = Hk Pfk HTk + Rk

With (46) and (47), (43) becomes:

0 = E[xk z̃Tk ] − Kk E[z̃k z̃Tk ]                                            (48)
  = Pfk HTk − Kk (Hk Pfk HTk + Rk )

It results that the gain matrix Kk is:

Kk = Pfk HTk (Hk Pfk HTk + Rk )−1                                          (49)

The Kalman Filter equations are given by (34), (35), (33), (49) and (41). Note that in the original
Kalman paper the gain derived there, ∆k , is given by ∆k = Ak−1 Kk .

6 Information form
In the information filter (inverse-covariance filter) the estimated state vector and the covariance
matrix are replaced by the information state yk and the information matrix Yk , respectively:

yak ≡ Yk xak                                                              (50)

Yk ≡ P−1k                                                                 (51)

The forecast estimate and the forecast covariance matrix take the same information form:

yfk ≡ Yfk xfk                                                             (52)

Yfk ≡ (Pfk )−1                                                            (53)

With these changes we can rewrite the Kalman filter equations in information form [6]. The data
assimilation equations become:

xak = xfk + Kk (zk − Hk xfk )                                             (54)

Pk yak = Pfk yfk + Kk (zk − Hk Pfk yfk )
       = (I − Kk Hk )Pfk yfk + Kk zk
       = Pk yfk + Kk zk
   yak = yfk + P−1k Kk zk
       = yfk + (Pfk )−1 (I − Kk Hk )−1 Kk zk
       = yfk + [K−1k (I − Kk Hk )Pfk ]−1 zk
       = yfk + (K−1k Pfk − Hk Pfk )−1 zk
       = yfk + [(Hk Pfk HTk + Rk )(HTk )−1 (Pfk )−1 Pfk − Hk Pfk ]−1 zk
   yak = yfk + HTk R−1k zk                                                (55)

The derivation of the information matrix follows immediately from the posterior covariance matrix
formula:

P−1k = (Pfk )−1 (I − Kk Hk )−1                                            (56)
     = (Pfk )−1 [Kk (K−1k − Hk )]−1
     = (Pfk )−1 [Kk ((Hk Pfk HTk + Rk )(HTk )−1 (Pfk )−1 − Hk )]−1
     = (Pfk )−1 [Kk Rk (HTk )−1 (Pfk )−1 ]−1
     = HTk R−1k K−1k
     = HTk R−1k (Hk Pfk HTk + Rk )(HTk )−1 (Pfk )−1
     = HTk R−1k Hk + HTk R−1k Rk (HTk )−1 (Pfk )−1
  Yk = Yfk + HTk R−1k Hk                                                  (57)
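
As a quick sanity check, the information update (57) can be verified numerically against the
covariance form of Section 3: inverting the posterior covariance (28) should equal Yfk + HTk R−1k Hk .
A small sketch with placeholder matrices:

```python
import numpy as np

# Sketch: numerical check of (57) against the covariance form of Section 3,
# i.e. that inv(Pk) equals inv(Pfk) + H^T inv(R) H. The matrices below are
# placeholders chosen only for the check.
Pf = np.array([[2.0, 0.5],
               [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)       # gain (27)
P = (np.eye(2) - K @ H) @ Pf                         # posterior covariance (28)
Y = np.linalg.inv(Pf) + H.T @ np.linalg.inv(R) @ H   # information update (57)
assert np.allclose(np.linalg.inv(P), Y)
```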

9
Provided that Ak−1 is nonsingular, the equations of the model forecast become:

xfk = Ak−1 xak−1 + Bk−1 uk−1                                              (58)

yfk = (Pfk )−1 Ak−1 Pk−1 yak−1 + Yfk Bk−1 uk−1
    = [Ak−1 Pk−1 ATk−1 + Qk−1 ]−1 Ak−1 Pk−1 yak−1 + Yfk Bk−1 uk−1
    = [P−1k−1 A−1k−1 (Ak−1 Pk−1 ATk−1 + Qk−1 )]−1 yak−1 + Yfk Bk−1 uk−1
    = (ATk−1 + Yk−1 A−1k−1 Qk−1 )−1 yak−1 + Yfk Bk−1 uk−1
    = (I + (A−1k−1 )T Yk−1 A−1k−1 Qk−1 )−1 (ATk−1 )−1 yak−1 + Yfk Bk−1 uk−1
yfk = (I + Mk−1 Qk−1 )−1 (ATk−1 )−1 yak−1 + Yfk Bk−1 uk−1                   (59)

where Mk = (A−1k )T Yk A−1k . Using the forecast covariance matrix recurrence formula we can derive
its counterpart information matrix:

Yfk = (Ak−1 Pk−1 ATk−1 + Qk−1 )−1                                          (60)
    = (I + Mk−1 Qk−1 )−1 Mk−1

The summary of the information form of the Kalman filter:

Initialization (given µ0 and P0 ):
Y0 = P−10
ya0 = Y0 µ0

Model Forecast Step/Predictor:
Mk−1 = (A−1k−1 )T Yk−1 A−1k−1
Yfk = (I + Mk−1 Qk−1 )−1 Mk−1
yfk = (I + Mk−1 Qk−1 )−1 (ATk−1 )−1 yak−1 + Yfk Bk−1 uk−1

Data Assimilation Step/Corrector:
yak = yfk + HTk R−1k zk
Yk = Yfk + HTk R−1k Hk
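
A minimal sketch of one cycle of this information form, assuming Ak−1 is nonsingular and using
illustrative names; the state estimate can be recovered afterwards as xak = Y−1k yak :

```python
import numpy as np

# A minimal sketch of one cycle of the information filter summarized above,
# assuming A is nonsingular. Names are illustrative; the state estimate can
# be recovered as xa = inv(Y) @ ya.
def info_filter_step(ya, Y, u, z, A, B, H, Q, R):
    n = Y.shape[0]
    Ainv = np.linalg.inv(A)
    M = Ainv.T @ Y @ Ainv                            # M_{k-1}
    # Model forecast step / predictor
    F = np.linalg.inv(np.eye(n) + M @ Q)             # (I + M Q)^{-1}
    Yf = F @ M                                       # Yfk, eq. (60)
    yf = F @ Ainv.T @ ya + Yf @ B @ u                # yfk, eq. (59)
    # Data assimilation step / corrector
    Rinv = np.linalg.inv(R)
    ya_new = yf + H.T @ Rinv @ z                     # yak, eq. (55)
    Y_new = Yf + H.T @ Rinv @ H                      # Yk, eq. (57)
    return ya_new, Y_new
```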

7 Innovation approach
This method solves the estimation problem more easily by using the innovation process, which is the
observed process converted into a white-noise process. The innovation represents the new information
carried by the observation zk , given all the past observations and the old information deduced
therefrom. It is defined as:

z̃k = zk − E[zk |Zk−1 ]                                                    (61)

Several properties of the innovation [3]:

1. The innovation z̃k , associated with the current observation, is uncorrelated with the past obser-
   vations: E[z̃k zTj ] = 0 for j = 1, 2, . . . , k − 1.

2. The innovations are orthogonal to each other: E[z̃i z̃Tj ] = 0 for i ≠ j.

3. There is a one-to-one correspondence between the innovation and the associated observation.

4. It has zero mean.


We know that xak ≡ E[xk |Zk ]. Since the innovation sequence {z̃k } "contains all the information in
the observation sequence" {zk } [2], the estimate can be assumed to be a linear combination of all the
innovations up to k [9]:

xak = Σi=1..k Ii z̃i                                                       (62)

where Ii is an n × m matrix to be determined. We know by the Projection Theorem that the estimation
error ek is uncorrelated with the innovation sequence. Therefore, for all i up to k:

0 = E[ek z̃Ti ]                                                            (63)
  = E[(xk − xak )z̃Ti ]

E[xk z̃Ti ] = E[xak z̃Ti ]
           = Σl=1..k Il E[z̃l z̃Ti ]
           = Ii E[z̃i z̃Ti ]

All the terms with l ≠ i vanish since the innovations are temporally uncorrelated. Here E[z̃i z̃Ti ] is the
innovation covariance Cov(z̃i ). It results that:

Ii = E[xk z̃Ti ]Cov−1 (z̃i )                                                (64)

Substituting this into (62):

xak = Σi=1..k E[xk z̃Ti ]Cov−1 (z̃i )z̃i                                     (65)
    = Σi=1..k−1 E[xk z̃Ti ]Cov−1 (z̃i )z̃i + E[xk z̃Tk ]Cov−1 (z̃k )z̃k
    = Σi=1..k−1 E[(Ak−1 xk−1 + Bk−1 uk−1 + wk−1 )z̃Ti ]Cov−1 (z̃i )z̃i + E[xk z̃Tk ]Cov−1 (z̃k )z̃k
    = Ak−1 Σi=1..k−1 E[xk−1 z̃Ti ]Cov−1 (z̃i )z̃i + E[xk z̃Tk ]Cov−1 (z̃k )z̃k
    = Ak−1 xak−1 + E[xk z̃Tk ]Cov−1 (z̃k )z̃k
    = Ak−1 xak−1 + Kk z̃k

where Kk = E[xk z̃Tk ]Cov−1 (z̃k ). In the fourth line the input term and wk−1 drop out because the
innovations have zero mean and wk−1 is uncorrelated with the past innovations z̃1 , . . . , z̃k−1 ; the
remaining sum equals xak−1 by (65) applied at time k − 1.

The error in the estimate is:

ek ≡ xk − xak                                                             (66)
   = Ak−1 ek−1 + wk−1 − Kk z̃k

Then, the posterior covariance of the new estimate is:

Pk ≡ E[ek eTk ]                                                           (67)
   = Ak−1 E[ek−1 eTk−1 ]ATk−1 − Ak−1 E[ek−1 z̃Tk ]KTk + E[wk−1 wTk−1 ] − E[wk−1 z̃Tk ]KTk
     − Kk E[z̃k eTk−1 ]ATk−1 − Kk E[z̃k wTk−1 ] + Kk E[z̃k z̃Tk ]KTk
   = Ak−1 Pk−1 ATk−1 − Ak−1 Pk−1 ATk−1 HTk KTk + Qk−1 − Qk−1 HTk KTk
     − Kk Hk Ak−1 Pk−1 ATk−1 − Kk Hk Qk−1 + Kk Cov(z̃k )KTk

Denoting Pfk = Ak−1 Pk−1 ATk−1 + Qk−1 and substituting this back into (67) yields:

Pk = Pfk − Kk Hk Pfk − Pfk HTk KTk + Kk Cov(z̃k )KTk                        (68)
   = Pfk − Kk Hk Pfk − Pfk HTk KTk + E[xk z̃Tk ]KTk

But E[xk z̃Tk ] = Pfk HTk (see (46)) and Cov(z̃k ) = Hk Pfk HTk + Rk (see (47)). Then:

Pk = (I − Kk Hk )Pfk                                                       (69)
Kk = Pfk HTk (Hk Pfk HTk + Rk )−1                                           (70)

The equations (61), (62), (69) and (70) define the Kalman Filter algorithm.
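
The innovation properties listed above can be checked empirically: running the filter on data
simulated from the same model should produce innovations that are approximately zero-mean and
serially uncorrelated. The sketch below uses an illustrative time-invariant model with no control
input:

```python
import numpy as np

# Sketch: empirical check of the innovation properties on a simulated run.
# With a correctly specified model the innovations are approximately zero-mean
# and serially uncorrelated. The time-invariant model (no control input) is an
# illustrative placeholder.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x = rng.multivariate_normal(np.zeros(2), np.eye(2))   # true initial state
xa, P = np.zeros(2), np.eye(2)                        # filter initialization
innovations = []
for _ in range(500):
    # Simulate the truth and the measurement
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x + rng.multivariate_normal(np.zeros(1), R)
    # Filter predict/update, recording the innovation zk - H xfk
    xf, Pf = A @ xa, A @ P @ A.T + Q
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    innovations.append((z - H @ xf).item())
    xa, P = xf + K @ (z - H @ xf), (np.eye(2) - K @ H) @ Pf

e = np.array(innovations)
print("sample mean:", e.mean())                                  # near zero
print("lag-1 correlation:", np.corrcoef(e[:-1], e[1:])[0, 1])    # near zero
```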

8 Properties of Kalman Filter


Stability - Jazwinski's Theorem
Asymptotic stability of the KF means that its solution gradually becomes insensitive to its initial
conditions, provided that the norms of the noise covariance matrices Qk , Rk are bounded.
If the system (4)-(5), with x0 , wk , vk independent, is uniformly completely observable and uniformly
completely controllable, and if P0 ≥ 0, then the discrete-time KF is uniformly asymptotically stable.

Filter Divergence
This phenomenon occurs when the filter appears to behave well, showing a low error variance, while
the estimate is in fact far from the truth. It is caused by errors in the system modeling: the model
error is larger than assumed, the system model has the wrong form, or the system is unstable or has
bias errors.

9 Remarks
1. The filter produces the error covariance matrix Pk , which is an important measure of the accuracy
   of the estimate.

2. The filter is optimal for Gaussian sequences only.

3. While the measurement noise covariance Rk can usually be determined, the process noise
   covariance matrix Qk has to be chosen to match the dynamics, and the process being estimated
   cannot be observed directly. Therefore Qk has to be tuned to obtain good filter performance.

10 Conclusion
While most classical filters are formulated in the frequency domain, the Kalman Filter is a purely
time-domain filter.

The main issue remains how the uncertainties are represented.

References
[1] Michael Athans. The Control Handbook, chapter Kalman Filtering, pages 589–594. CRC Press,
1996.

[2] A. V. Balakrishnan. Kalman Filtering Theory. Optimization Software, Inc., 1984.

[3] Mourad Barkat. Signal Detection and Estimation. Artech House Inc, 2005.

[4] Jon Dattorro. Convex Optimization & Euclidean Distance Geometry, chapter Matrix Calculus.
Meboo Publishing USA, 2006.

[5] Henk Eskes. Data Assimilation: The Kalman Filter.

[6] Mohinder S. Grewal and Angus P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB,
2nd edition. John Wiley & Sons, 2001.

[7] John M. Lewis and S. Lakshmivarahan. Dynamic Data Assimilation: A Least Squares Approach.
2006.

[8] Peter S. Mayback. Introduction to Random Signals and Applied Kalman Filtering. Academic
Press, 1979.

[9] Athanasios Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, Inc.,
2nd edition, 1965.

[10] R. E. Kalman. A New Approach to Linear Filtering and Prediction Problems. Trans. ASME, 1960.

[11] Greg Welch and Gary Bishop. An Introduction to the Kalman Filter. SIGGRAPH, ACM, 2001.
