
Kalman Filter and Parameter Identification

Florian Herzog

May 2010
Continuous-time Kalman Filter

In this chapter, we shall use stochastic processes with independent increments w_1(.) and
w_2(.) at the input and the output, respectively, of a dynamic system. We shall switch
back and forth between the mathematically precise description of these (normalized)
Brownian motions by their increments and the sloppy description by the corresponding
white noises v̇(.) and ṙ(.), respectively, which is preferred in engineering circles:

Q^{1/2}(t) dw_1(t) = [v̇(t) − u(t)] dt    (1)
R^{1/2}(t) dw_2(t) = [ṙ(t) − r(t)] dt ,    (2)

where Q^{1/2}(t) and R^{1/2}(t) are the volatility parameters.


We assume a linear dynamical system, where the inputs u(t) and the measurements
y(t) are disturbed by noise.



Continuous-time Kalman Filter

Consider the following linear time-varying dynamic system of order n which is driven
by the m-vector-valued white noise v̇(.). Its initial state x(t0) is a random vector ξ
and its p-vector-valued output y(.) is corrupted by the additive white noise ṙ(.):

System description in the mathematically precise form:

dx(t) = A(t)x(t) dt + B(t) dv(t)
      = [A(t)x(t) + B(t)u(t)] dt + B(t)Q^{1/2}(t) dw_1(t)    (3)
x(t_0) = ξ    (4)
y(t) dt = C(t)x(t) dt + dr(t)
        = [C(t)x(t) + r(t)] dt + R^{1/2}(t) dw_2(t) ,    (5)



Continuous-time Kalman Filter

System description in the engineer’s form:

ẋ(t) = A(t)x(t) + B(t)v̇(t) (7)


x(t0) = ξ (8)
y(t) = C(t)x(t) + ṙ(t) , (9)
where
ξ : N (x0, Σ0) (10)
v̇(t) : N (u(t), Q(t)) (11)
ṙ(t) : N (r(t), R(t)) . (12)

Note that in the last two lines, we have deleted the factor 1/dt which ought to multiply
Q(t) and R(t).
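
A minimal Euler-Maruyama simulation of the system (3)-(5) might look as follows; this is only an illustrative sketch, and the matrices, noise intensities, and input below are arbitrary placeholder values:

```python
import numpy as np

# Euler-Maruyama simulation of the linear system (3)-(5):
#   dx = (A x + B u) dt + B Q^{1/2} dw1,   y dt = (C x + r) dt + R^{1/2} dw2
# All numerical values are illustrative placeholders.
rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # dynamics matrix A(t), here constant
B = np.array([[0.0], [1.0]])               # input matrix B(t)
C = np.array([[1.0, 0.0]])                 # output matrix C(t)
Q = np.array([[0.1]])                      # input noise intensity
R = np.array([[0.05]])                     # measurement noise intensity
r_mean = np.zeros(1)                       # mean r(t) of the measurement noise

dt, N = 1e-3, 5000
x = np.array([1.0, 0.0])                   # initial state xi (here deterministic)
u = lambda t: np.array([np.sin(t)])        # known input u(t)

xs, ys = [], []
for k in range(N):
    t = k * dt
    dw1 = rng.standard_normal(1) * np.sqrt(dt)
    dw2 = rng.standard_normal(1) * np.sqrt(dt)
    x = x + (A @ x + B @ u(t)) * dt + B @ np.sqrt(Q) @ dw1
    # "averaged" measurement over [t, t+dt]: y = C x + r + R^{1/2} dw2/dt
    y = C @ x + r_mean + np.sqrt(R) @ dw2 / dt
    xs.append(x.copy()); ys.append(y.copy())
```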



Continuous-time Kalman Filter

The filtering problem is stated as follows: Find the optimal filter with the state vector
x̂ which is optimal in the following sense:
• The state estimation is bias-free: E{x(t) − x̂(t)} ≡ 0.
• The error of the state estimate has infimal covariance matrix: Σ_opt(t) ≤ Σ_subopt(t).



Continuous-time Kalman Filter

With the above-mentioned assumptions that ξ, v̇, and ṙ are mutually independent and
Gaussian, the optimal filter is the following linear dynamic system:

dx̂(t)/dt = A(t)x̂(t) + B(t)u(t) + H(t)[y(t) − r(t) − C(t)x̂(t)]
          = [A(t) − H(t)C(t)] x̂(t) + B(t)u(t) + H(t)[y(t) − r(t)]    (13)
x̂(t_0) = x_0    (14)

with

H(t) = Σ(t)C^T(t)R^{-1}(t) ,    (15)

where Σ(t) is the covariance matrix of the error x̂(t) − x(t) of the state estimate x̂(t).



Continuous-time Kalman Filter

Σ(t) is the covariance matrix of the error x̂(t) − x(t) of the state estimate x̂(t),
satisfying the following matrix Riccati differential equation:

Σ̇(t) = A(t)Σ(t) + Σ(t)A^T(t) − Σ(t)C^T(t)R^{-1}(t)C(t)Σ(t) + B(t)Q(t)B^T(t)    (16)
Σ(t_0) = Σ_0 .    (17)

The estimation error e(t) = x(t) − x̂(t) satisfies the differential equation

ė = ẋ − dx̂/dt
  = Ax + Bu + B(v̇ − u) − [A − HC]x̂ − Bu − HCx − H(ṙ − r)
  = [A − HC]e + B(v̇ − u) − H(ṙ − r) .    (18)
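
A minimal numerical sketch of the filter (13)-(17), assuming a time-invariant system and plain forward-Euler integration of the Riccati equation (16); all numerical values are placeholders:

```python
import numpy as np

def continuous_kf_step(x_hat, Sigma, y, u, A, B, C, Q, R, r_mean, dt):
    """One Euler step of the continuous-time Kalman filter (13)-(17)."""
    H = Sigma @ C.T @ np.linalg.inv(R)                         # gain (15)
    x_dot = A @ x_hat + B @ u + H @ (y - r_mean - C @ x_hat)   # estimate dynamics (13)
    Sigma_dot = (A @ Sigma + Sigma @ A.T
                 - Sigma @ C.T @ np.linalg.inv(R) @ C @ Sigma
                 + B @ Q @ B.T)                                # Riccati equation (16)
    return x_hat + dt * x_dot, Sigma + dt * Sigma_dot

# illustrative use with placeholder matrices
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.array([[0.1]]), np.array([[0.05]])
x_hat, Sigma = np.zeros(2), np.eye(2)                          # initial conditions (14), (17)
y, u, r_mean, dt = np.array([0.3]), np.array([0.0]), np.zeros(1), 1e-3
x_hat, Sigma = continuous_kf_step(x_hat, Sigma, y, u, A, B, C, Q, R, r_mean, dt)
```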



Continuous-time Kalman Filter

The covariance matrix Σ(t) of the state estimation error e(t) is governed by the matrix
differential equation

Σ̇ = [A − HC]Σ + Σ[A − HC]^T + BQB^T + HRH^T    (19)
   = AΣ + ΣA^T + BQB^T − ΣC^T R^{-1}CΣ + [H − ΣC^T R^{-1}] R [H − ΣC^T R^{-1}]^T
   ≥ AΣ + ΣA^T + BQB^T − ΣC^T R^{-1}CΣ .    (20)

Obviously, Σ̇(t) is infimized at all times t for

H(t) = Σ(t)C^T(t)R^{-1}(t) .    (21)



Continuous-time Kalman Filter

Given (19) for the covariance matrix Σ(t):

Σ̇ = [A − HC]Σ + Σ[A − HC]^T + BQB^T + HRH^T .

Its first differential with respect to H, at some value of H, with increment dH can be
formulated as follows:

dΣ̇(H, dH) = (∂Σ̇(H)/∂H) dH
           = − dH CΣ − ΣC^T dH^T + dH RH^T + HR dH^T
           = U{[HR − ΣC^T] dH^T} ,    (22)

where ^T is the matrix transposing operator and U is the operator which adds the
transposed matrix to its matrix argument.



Continuous-time Kalman Filter

Consider the following stochastic system of first order with the random initial condition
ξ : N (x0, Σ0) and the uncorrelated white noises v̇ : N (0, Q) and ṙ : N (0, R):

ẋ(t) = ax(t) + b[u(t) + v̇(t)]


x(0) = ξ
y(t) = x(t) + ṙ(t) .

Find the time-invariant Kalman filter! The resulting time-invariant Kalman filter is
described by the following equations:

dx̂(t)/dt = − √(a^2 + b^2 Q/R) x̂(t) + b u(t) + ( a + √(a^2 + b^2 Q/R) ) y(t)
x̂(0) = x_0 .
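
The closed-form gain can be checked numerically by solving the corresponding filter algebraic Riccati equation, e.g. with scipy; a small sketch with arbitrary parameter values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

a, b, Q, R = -0.5, 2.0, 0.3, 0.1             # illustrative scalar parameters

# Filter ARE: A Sigma + Sigma A^T - Sigma C^T R^-1 C Sigma + B Q B^T = 0,
# written as the "control" ARE with A -> A^T and B -> C^T.
A = np.array([[a]]); B = np.array([[b]]); C = np.array([[1.0]])
Sigma = solve_continuous_are(A.T, C.T, B * Q * B.T, np.array([[R]]))
H_numeric = Sigma[0, 0] / R                  # H = Sigma C^T R^-1

H_closed_form = a + np.sqrt(a**2 + b**2 * Q / R)
print(H_numeric, H_closed_form)              # the two gains should agree
```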



Continuous/discrete-time Kalman Filter

The continuous-time measurement (9) which is corrupted by the white noise error ṙ(t)
with infinite covariance,

y(t) = C(t)x(t) + ṙ(t) ,

must be replaced by an “averaged” discrete-time measurement

y_k = C_k x(t_k) + r̃_k    (23)
r̃_k : N(r_k, R_k)    (24)
Cov(r̃_i, r̃_j) = R_i δ_ij .    (25)



Continuous/discrete-time Kalman Filter

At any sampling time t_k, we denote the state estimate by x̂(t_k|t_{k−1}) and the error
covariance matrix by Σ(t_k|t_{k−1}) before the new measurement y_k has been processed,
and by x̂(t_k|t_k) and Σ(t_k|t_k) after the new measurement y_k has been processed.
As a consequence, the continuous-time/discrete-time Kalman filter alternately performs
update steps at the sampling times t_k, processing the latest measurement information
y_k, and open-loop extrapolation steps between the times t_k and t_{k+1}, with no
measurement information available:



Continuous/discrete-time Kalman Filter

The Continuous-Time/Discrete-Time Kalman Filter


Update at time t_k:

x̂(t_k|t_k) = x̂(t_k|t_{k−1})
             + Σ(t_k|t_{k−1}) C_k^T [C_k Σ(t_k|t_{k−1}) C_k^T + R_k]^{-1}
             × [y_k − r_k − C_k x̂(t_k|t_{k−1})]    (26)
Σ(t_k|t_k) = Σ(t_k|t_{k−1})
             − Σ(t_k|t_{k−1}) C_k^T [C_k Σ(t_k|t_{k−1}) C_k^T + R_k]^{-1} C_k Σ(t_k|t_{k−1})    (27)

or, equivalently:

Σ^{-1}(t_k|t_k) = Σ^{-1}(t_k|t_{k−1}) + C_k^T R_k^{-1} C_k .    (28)
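
A sketch of the update step (26)-(28) with generic numpy arrays, including a check that the covariance form (27) and the information form (28) agree; the matrices below are placeholders:

```python
import numpy as np

def kf_update(x_pred, Sigma_pred, y, C, R, r_mean):
    """Measurement update (26)-(27) at a sampling time t_k."""
    S = C @ Sigma_pred @ C.T + R                        # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)             # Kalman gain
    x_upd = x_pred + K @ (y - r_mean - C @ x_pred)      # state update (26)
    Sigma_upd = Sigma_pred - K @ C @ Sigma_pred         # covariance update (27)
    return x_upd, Sigma_upd

# placeholder data
Sigma_pred = np.array([[2.0, 0.3], [0.3, 1.0]])
C = np.array([[1.0, 0.0]]); R = np.array([[0.5]]); r_mean = np.zeros(1)
x_pred, y = np.array([0.0, 0.0]), np.array([1.2])

x_upd, Sigma_upd = kf_update(x_pred, Sigma_pred, y, C, R, r_mean)

# information form (28): Sigma_upd^{-1} = Sigma_pred^{-1} + C^T R^{-1} C
Sigma_info = np.linalg.inv(np.linalg.inv(Sigma_pred) + C.T @ np.linalg.inv(R) @ C)
assert np.allclose(Sigma_upd, Sigma_info)
```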



Continuous/discrete-time Kalman Filter

Continuous-time extrapolation from t_k to t_{k+1} with the initial conditions x̂(t_k|t_k)
and Σ(t_k|t_k):

dx̂(t|t_k)/dt = A(t)x̂(t|t_k) + B(t)u(t)    (29)
Σ̇(t|t_k) = A(t)Σ(t|t_k) + Σ(t|t_k)A^T(t) + B(t)Q(t)B^T(t) .    (30)

Assuming that the Kalman filter starts with an update at the initial time t_0, this Kalman
filter is initialized at t_0 as follows:

x̂(t_0|t_{−1}) = x_0    (31)
Σ(t_0|t_{−1}) = Σ_0 .    (32)



Discrete-time Kalman Filter

We consider the following discrete-time stochastic system:

x_{k+1} = F_k x_k + G_k ṽ_k    (33)
x_0 = ξ    (34)
y_k = C_k x_k + r̃_k    (35)
ξ : N(x_0, Σ_0)    (36)
ṽ_k : N(u_k, Q_k)    (37)
Cov(ṽ_i, ṽ_j) = Q_i δ_ij    (38)
r̃_k : N(r_k, R_k)    (39)
Cov(r̃_i, r̃_j) = R_i δ_ij    (40)
ξ, ṽ, r̃ : mutually independent.    (41)



Discrete-time Kalman Filter

We have the following (approximate) “zero-order-hold-equivalence” correspondences to
the continuous-time stochastic system:

F_k = Φ(t_{k+1}, t_k)    transition matrix    (42)
G_k = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, t) B(t) dt    zero-order-hold equivalence    (43)
C_k = C(t_k)    (44)
Q_k = Q(t_k) / (t_{k+1} − t_k)    (45)
R_k = R(t_k) / Δt_k ,    Δt_k = averaging time at time t_k    (46)
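
For a time-invariant A and B, the pair (42)-(43) can be evaluated with the standard augmented-matrix-exponential construction; a sketch with placeholder matrices (scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, dt):
    """F = Phi(t_{k+1}, t_k) = e^{A dt},  G = int_0^dt e^{A s} B ds,
    computed via expm of an augmented matrix (time-invariant A and B)."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * dt)
    return E[:n, :n], E[:n, n:]             # F_k, G_k

A = np.array([[0.0, 1.0], [-2.0, -0.5]])    # placeholder continuous-time system
B = np.array([[0.0], [1.0]])
dt = 0.1
F, G = zoh_discretize(A, B, dt)

Q_c, R_c = np.array([[0.1]]), np.array([[0.05]])
Q_k = Q_c / dt                              # correspondence (45)
R_k = R_c / dt                              # correspondence (46), averaging time dt
```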



Discrete-time Kalman Filter

The Discrete-Time Kalman Filter


Update at time t_k:

x̂_{k|k} = x̂_{k|k−1} + Σ_{k|k−1} C_k^T [C_k Σ_{k|k−1} C_k^T + R_k]^{-1} [y_k − r_k − C_k x̂_{k|k−1}]    (47)
Σ_{k|k} = Σ_{k|k−1} − Σ_{k|k−1} C_k^T [C_k Σ_{k|k−1} C_k^T + R_k]^{-1} C_k Σ_{k|k−1}    (48)

or, equivalently:

Σ_{k|k}^{-1} = Σ_{k|k−1}^{-1} + C_k^T R_k^{-1} C_k    (49)



Discrete-time Kalman Filter

Extrapolation from t_k to t_{k+1}:

x̂_{k+1|k} = F_k x̂_{k|k} + G_k u_k    (50)
Σ_{k+1|k} = F_k Σ_{k|k} F_k^T + G_k Q_k G_k^T    (51)

Initialization at time t_0:

x̂_{0|−1} = x_0    (52)
Σ_{0|−1} = Σ_0 .    (53)
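
Combining the update (47)-(48) with the extrapolation (50)-(51) and the initialization (52)-(53), a minimal discrete-time Kalman filter might be sketched as follows (time-invariant matrices assumed for brevity; all values are placeholders):

```python
import numpy as np

def discrete_kalman_filter(ys, F, G, C, Q, R, u, r_mean, x0, Sigma0):
    """Runs (47)-(53) over a list of measurements ys; time-invariant matrices
    are assumed for brevity. Returns the filtered estimates x_{k|k}."""
    x_pred, Sigma_pred = x0, Sigma0                     # initialization (52), (53)
    estimates = []
    for y in ys:
        # update (47)-(48)
        S = C @ Sigma_pred @ C.T + R
        K = Sigma_pred @ C.T @ np.linalg.inv(S)
        x_filt = x_pred + K @ (y - r_mean - C @ x_pred)
        Sigma_filt = Sigma_pred - K @ C @ Sigma_pred
        estimates.append(x_filt)
        # extrapolation (50)-(51)
        x_pred = F @ x_filt + G @ u
        Sigma_pred = F @ Sigma_filt @ F.T + G @ Q @ G.T
    return estimates

# illustrative call with placeholder matrices and data
F = np.array([[1.0, 0.1], [0.0, 0.95]])
G = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.array([[0.1]]), np.array([[0.2]])
ys = [np.array([v]) for v in (0.1, 0.2, 0.15)]
est = discrete_kalman_filter(ys, F, G, C, Q, R,
                             u=np.zeros(1), r_mean=np.zeros(1),
                             x0=np.zeros(2), Sigma0=np.eye(2))
```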



Extended Kalman Filter

The linearized problem may not be a good approximation, so we stick to the following
philosophy:
• Where it does not hurt: Analyze the nonlinear system.
• Where it hurts: use linearization.
We consider the following nonlinear stochastic system:

ẋ(t) = f (x(t), u(t), t) + B(t)v̇(t) (54)


x(t0) = ξ (55)
y(t) = g(x(t), t) + ṙ(t) (56)
ξ : N (x0, Σ0) (57)
v̇ : N (v(t), Q(t)) (58)
ṙ : N (r(t), R(t)) . (59)



Extended Kalman Filter

The extended Kalman filter


Dynamics of the state estimation:

dx̂(t)/dt = f(x̂(t), u(t), t) + B(t)v(t) + H(t)[y(t) − r(t) − g(x̂(t), t)]    (60)
x̂(t_0) = x_0    (61)

with H(t) = Σ(t)C^T(t)R^{-1}(t). The “error covariance” matrix Σ(t) must be
calculated in real-time using the following matrix Riccati differential equation:

Σ̇(t) = A(t)Σ(t) + Σ(t)A^T(t) − Σ(t)C^T(t)R^{-1}(t)C(t)Σ(t) + B(t)Q(t)B^T(t)    (62)
Σ(t_0) = Σ_0 .    (63)



Extended Kalman Filter

The matrices A(t) and C(t) correspond to the dynamics matrix and the output
matrix, respectively, of the linearization of the nonlinear system around the estimated
trajectory:

A(t) = ∂f(x̂(t), u(t), t) / ∂x    (64)
C(t) = ∂g(x̂(t), t) / ∂x .    (65)
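
If analytic Jacobians are inconvenient, (64)-(65) can be approximated by finite differences; a small sketch, where the step size and the placeholder model f are arbitrary choices:

```python
import numpy as np

def jacobian_fd(fun, x, eps=1e-6):
    """Forward-difference approximation of d fun / dx at x, for fun: R^n -> R^m."""
    fx = np.asarray(fun(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        J[:, i] = (np.asarray(fun(x_pert)) - fx) / eps
    return J

# example: A(t) = df/dx evaluated at the current estimate x_hat (placeholder model)
f = lambda x: np.array([x[1], -np.sin(x[0]) - 0.1 * x[1]])   # pendulum-like f(x)
x_hat = np.array([0.3, 0.0])
A_lin = jacobian_fd(f, x_hat)
```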



Extended Kalman Filter

Because the dynamics of the “error covariance matrix” Σ(t) correspond to the linearized
system, we face the following problems:
• This filter is not optimal.
• The state estimate will be biased.
• The reference point for the linearization is questionable.
• The matrix Σ(t) is only an approximation of the state error covariance matrix.
• The state vectors x(t) and x̂(t) are not Gaussian, even if ξ, v̇, and ṙ are.



Extended Kalman Filter

Again, the continuous-time measurement corrupted with the white noise ṙ with infinite
covariance

y(t) = g(x(t), t) + ṙ(t)    (66)

must be replaced by an “averaged” discrete-time measurement

y_k = g(x(t_k), t_k) + r̃_k    (67)
r̃_k : N(r_k, R_k)    (68)
Cov(r̃_i, r̃_j) = R_i δ_ij .    (69)



Continuous-time/discrete-time extended Kalman filter

Update at time t_k:

x̂(t_k|t_k) = x̂(t_k|t_{k−1})
             + Σ(t_k|t_{k−1}) C_k^T [C_k Σ(t_k|t_{k−1}) C_k^T + R_k]^{-1}
             × [y_k − r_k − g(x̂(t_k|t_{k−1}), t_k)]    (70)
Σ(t_k|t_k) = Σ(t_k|t_{k−1})
             − Σ(t_k|t_{k−1}) C_k^T [C_k Σ(t_k|t_{k−1}) C_k^T + R_k]^{-1} C_k Σ(t_k|t_{k−1})    (71), (72)

Extrapolation between t_k and t_{k+1}:

dx̂(t|t_k)/dt = f(x̂(t|t_k), u(t), t) + B(t)v(t)    (73)
Σ̇(t|t_k) = A(t)Σ(t|t_k) + Σ(t|t_k)A^T(t) + B(t)Q(t)B^T(t)    (74)



Continuous-time/discrete-time extended Kalman filter

Again, the matrices A(t) and Ck are the dynamics matrix and the output matrix,
respectively, of the nonlinear system linearized around the estimated trajectory:

A(t) = ∂f(x̂(t), u(t), t) / ∂x    (75)
C_k = ∂g(x̂(t_k|t_{k−1}), t_k) / ∂x    (76)

Initialization at t_0:

x̂(t_0|t_{−1}) = x_0    (77)
Σ(t_0|t_{−1}) = Σ_0 .    (78)
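
A condensed sketch of the continuous-time/discrete-time EKF (70)-(78) for a generic model (f, g) with a user-supplied Jacobian; Euler integration of (73)-(74) between samples, the pendulum-like toy model, and all noise levels are placeholder choices:

```python
import numpy as np

def ekf_update(x_pred, Sigma_pred, y, g, C_k, R, r_mean):
    """Discrete-time EKF measurement update, cf. (70)-(72)."""
    S = C_k @ Sigma_pred @ C_k.T + R
    K = Sigma_pred @ C_k.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (y - r_mean - g(x_pred))
    Sigma_upd = Sigma_pred - K @ C_k @ Sigma_pred
    return x_upd, Sigma_upd

def ekf_propagate(x_hat, Sigma, f, A_of, B, Q, u, v_mean, dt, n_steps):
    """Euler integration of the extrapolation equations (73)-(74)."""
    for _ in range(n_steps):
        A = A_of(x_hat)                                  # Jacobian df/dx at x_hat, cf. (75)
        x_hat = x_hat + dt * (f(x_hat, u) + B @ v_mean)
        Sigma = Sigma + dt * (A @ Sigma + Sigma @ A.T + B @ Q @ B.T)
    return x_hat, Sigma

# toy pendulum-like model with scalar output and placeholder noise levels
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A_of = lambda x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])
g = lambda x: np.array([x[0]])
C_k = np.array([[1.0, 0.0]])                             # dg/dx, cf. (76)
B = np.array([[0.0], [1.0]])
Q, R = np.array([[0.05]]), np.array([[0.1]])

x_hat, Sigma = np.array([0.1, 0.0]), np.eye(2)           # initialization (77), (78)
x_hat, Sigma = ekf_propagate(x_hat, Sigma, f, A_of, B, Q,
                             u=np.zeros(1), v_mean=np.zeros(1), dt=0.01, n_steps=10)
x_hat, Sigma = ekf_update(x_hat, Sigma, np.array([0.12]), g, C_k, R, np.zeros(1))
```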



Kalman Filter and Parameter Identification

The system and the measurement equations in discrete time are given by

x_{k+1} = F_k x_k + G_k u_k + [G_k Q_k^{1/2}] w_k^{(1)}    (79)
y_k = C_k x_k + r_k + [R_k^{1/2}] w_k^{(2)}    (80)

The update equations of the Kalman Filter are

x̂_{k|k} = x̂_{k|k−1} + Σ_{k|k−1} C_k^T [C_k Σ_{k|k−1} C_k^T + R_k]^{-1} v_k    (81)
v_k = [y_k − r_k − C_k x̂_{k|k−1}]    (82)
Σ_{k|k} = Σ_{k|k−1} − Σ_{k|k−1} C_k^T [C_k Σ_{k|k−1} C_k^T + R_k]^{-1} C_k Σ_{k|k−1}    (83)

where v_k denotes the prediction error.



Kalman Filter and Parameter Identification

The prediction equations are given by

x̂_{k|k−1} = F_{k−1} x̂_{k−1|k−1} + G_{k−1} u_{k−1}    (84)
Σ_{k|k−1} = F_{k−1} Σ_{k−1|k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T    (85)

The algorithm is carried out in two distinct parts:

• Prediction Step
The first step consists of forming an optimal predictor of the next observation, given
all the information available up to time t_{k−1}. This results in a so-called a priori
estimate for time t_k.
• Updating Step
The a priori estimate is updated with the new information arriving at time t_k, which is
combined with the already available information from time t_{k−1}. The result of this
step is the filtered estimate or the a posteriori estimate.



Kalman Filter and Parameter Identification

The prediction error is defined by

v_k = y_k − y_{k|k−1} = y_k − E[y_k|F_{k−1}] = y_k − r_k − C_k x̂_{k|k−1}

and its covariance matrix can be calculated by

K_{k|k−1} = Cov(v_k|F_{k−1})
          = E[v_k v_k^T | F_{k−1}]
          = C_k E[(x_k − x̂_{k|k−1})(x_k − x̂_{k|k−1})^T | F_{k−1}] C_k^T + E[R_k^{1/2} w_k^{(2)} (R_k^{1/2} w_k^{(2)})^T | F_{k−1}]
          = C_k Σ_{k|k−1} C_k^T + R_k

This result is important for forming the log-likelihood estimator.



Kalman Filter and Parameter Identification

We start by examining the appropriate log-likelihood function within our framework of the
Kalman Filter. Let ψ ∈ R^d be the vector of unknown parameters, which belongs to the
admissible parameter space Ψ. The various system matrices of the state space models,
as well as the variances of the stochastic processes given in the previous section, depend
on ψ. The likelihood function of the state space model is given by the joint density of
the observational data y = (y_N, y_{N−1}, . . . , y_1)

l(y, ψ) = p(y_N, y_{N−1}, . . . , y_1; ψ)    (86)

which reflects how likely it would have been to have observed the data if ψ were the
true values of the parameters.



Kalman Filter and Parameter Identification

Using the definition of conditional probability and employing Bayes's theorem recursively,
we now write the joint density as a product of conditional densities

l(y, ψ) = p(y_N|y_{N−1}, . . . , y_1; ψ) · . . . · p(y_k|y_{k−1}, . . . , y_1; ψ) · . . . · p(y_1; ψ)    (87)

where we approximate the initial density function p(y_1; ψ) by p(y_1|y_0; ψ). Further-
more, since we deal with a Markovian system, future values y_l with l > k depend on
(y_k, y_{k−1}, . . . , y_1) only through the current value y_k. The expression in (87) can
therefore be rewritten to depend only on the last observed values, and reduces to

l(y, ψ) = p(y_N|y_{N−1}; ψ) · . . . · p(y_k|y_{k−1}; ψ) · . . . · p(y_1; ψ)    (88)



Kalman Filter and Parameter Identification

For the purpose of estimating the parameter vector ψ we express the likelihood function
in terms of the prediction error. Since the variance of the prediction error vk is the
same as the conditional variance of yk , i.e.

Cov[vk |Fk−1] = Cov[yk |Fk−1] (89)

we are able to state the density function. The density function p(y_k|y_{k−1}; ψ) is a
Gaussian distribution with conditional mean

E[y_k|F_{k−1}] = r_k + C_k x̂_{k|k−1}    (90)

and conditional covariance matrix

Cov[y_k|F_{k−1}] = K_{k|k−1} .    (91)



Kalman Filter and Parameter Identification

The density function of an n-dimensional normal distribution can be written in matrix
notation as

(2π)^{−n/2} |Σ|^{−1/2} exp( −(1/2)(x − µ)^T Σ^{-1} (x − µ) )

where µ denotes the mean value and Σ the covariance matrix. In our case x − µ is replaced
by y_k − r_k − C_k x̂_{k|k−1} = v_k and the covariance matrix is K_{k|k−1}. The probability
density function thus takes the form

p(y_k|y_{k−1}; ψ) = (2π)^{−p/2} |K_{k|k−1}|^{−1/2} exp( −(1/2) v_k^T K_{k|k−1}^{-1} v_k ) ,    (92)

where v_k ∈ R^p.



Kalman Filter and Parameter Identification

Taking the logarithm of (92) yields

ln p(y_k|y_{k−1}; ψ) = −(p/2) ln(2π) − (1/2) ln|K_{k|k−1}| − (1/2) v_k^T K_{k|k−1}^{-1} v_k    (93)

which gives us the whole log-likelihood function as the sum

L(y, ψ) = −(1/2) ∑_{k=1}^{N} [ p ln(2π) + ln|K_{k|k−1}| + v_k^T K_{k|k−1}^{-1} v_k ]    (94)

In order to estimate the unknown parameters from the log-likelihood function in (94)
we employ an appropriate optimization routine which maximizes L(y, ψ) with respect
to ψ .
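
Since each Kalman filter step already produces v_k and K_{k|k−1}, the log-likelihood (94) can be accumulated inside the recursion. A sketch for a scalar placeholder model whose matrices are built from ψ = (a, q, r); the model choice and the zero input are assumptions made only for illustration:

```python
import numpy as np

def negative_log_likelihood(psi, ys, x0, Sigma0):
    """-L(y, psi) from (94) for a scalar placeholder state-space model.
    psi = (a, q, r) parametrizes F, Q, R; no input and no measurement offset."""
    a, q, r = psi
    F, C = np.array([[a]]), np.array([[1.0]])
    Q, R = np.array([[q]]), np.array([[r]])
    x_pred, Sigma_pred = x0, Sigma0
    nll = 0.0
    for y in ys:
        v = y - C @ x_pred                              # prediction error v_k
        K_cov = C @ Sigma_pred @ C.T + R                # K_{k|k-1}
        nll += 0.5 * (len(y) * np.log(2 * np.pi)
                      + np.log(np.linalg.det(K_cov))
                      + v @ np.linalg.solve(K_cov, v))
        gain = Sigma_pred @ C.T @ np.linalg.inv(K_cov)  # update (81)-(83)
        x_filt = x_pred + gain @ v
        Sigma_filt = Sigma_pred - gain @ C @ Sigma_pred
        x_pred = F @ x_filt                             # prediction, cf. (84)-(85) with G = I, u = 0
        Sigma_pred = F @ Sigma_filt @ F.T + Q
    return nll
```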



Kalman Filter and Parameter Identification

Given the simplicity of the innovation form of the likelihood function, we can find
the values for the conditional expectation of the prediction error and the conditional
covariance matrix for every given parameter vector ψ using the Kalman Filter algorithm.
Hence, we are able to calculate the value of the conditional log-likelihood function
numerically. The parameters are estimated based on the optimization

ψ̂_{ML} = arg max_{ψ∈Ψ} L(y, ψ)    (95)

where Ψ denotes the parameter space. The optimization is either an unconstrained
optimization problem if ψ ∈ R^d or a constrained optimization if the parameter space
Ψ ⊂ R^d.
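
With such a likelihood routine, (95) reduces to a standard numerical optimization. A sketch using scipy.optimize.minimize on the negative log-likelihood; the simulated data, the box constraints standing in for Ψ, and the negative_log_likelihood function from the sketch after (94) are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# simulated data from the placeholder scalar model (a = 0.8, q = 0.2, r = 0.1)
rng = np.random.default_rng(1)
a_true, q_true, r_true = 0.8, 0.2, 0.1
x, ys = 0.0, []
for _ in range(500):
    x = a_true * x + np.sqrt(q_true) * rng.standard_normal()
    ys.append(np.array([x + np.sqrt(r_true) * rng.standard_normal()]))

# psi_ML = arg max L(y, psi)  <=>  arg min of the negative log-likelihood, cf. (95)
res = minimize(
    negative_log_likelihood,                  # sketch defined after (94)
    x0=np.array([0.5, 0.5, 0.5]),             # initial guess for psi = (a, q, r)
    args=(ys, np.zeros(1), np.eye(1)),
    bounds=[(-0.99, 0.99), (1e-6, None), (1e-6, None)],  # a stand-in for Psi
    method="L-BFGS-B",
)
psi_ml = res.x
```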


