Lecture 8
The Kalman filter
Linear system driven by stochastic process
and so

Σx(t+1) = E[(A(xt − x̄t) + B(ut − ūt))(A(xt − x̄t) + B(ut − ūt))^T]
        = AΣx(t)A^T + BΣu(t)B^T + AΣxu(t)B^T + BΣux(t)A^T

where Σxu(t) = Σux(t)^T = E(xt − x̄t)(ut − ūt)^T is the cross-covariance of xt and ut

if x and u are uncorrelated (Σxu = 0) and the covariances converge, the steady-state covariance satisfies the Lyapunov equation

Σx = AΣxA^T + BΣuB^T
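the covariance recursion above is easy to run numerically; a minimal sketch (with illustrative A, B, Σu, not values from the lecture, and x, u uncorrelated):

```python
import numpy as np

# Sketch: propagate the state covariance of x(t+1) = A x(t) + B u(t),
# with x and u uncorrelated (Sigma_xu = 0). A, B, Sigma_u are
# illustrative values, not from the lecture.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
Sigma_u = np.array([[1.0]])

Sigma_x = np.zeros((2, 2))          # Sigma_x(0) = 0
for _ in range(500):
    # Sigma_x(t+1) = A Sigma_x(t) A^T + B Sigma_u B^T
    Sigma_x = A @ Sigma_x @ A.T + B @ Sigma_u @ B.T

# at steady state, Sigma_x satisfies the Lyapunov equation
# Sigma_x = A Sigma_x A^T + B Sigma_u B^T
residual = np.linalg.norm(Sigma_x - (A @ Sigma_x @ A.T + B @ Sigma_u @ B.T))
print(residual)
```

since A here is stable (eigenvalues 0.9 and 0.8), the iteration converges and the residual is essentially zero.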
[plots: Σ11(t) versus t, and two sample traces of (xt)1 versus t, for t = 0, …, 100]
we now consider the system xt+1 = Axt + wt, yt = Cxt + vt

[block diagram: wt drives the state xt through the delay z⁻¹; C maps xt to the output, and measurement noise vt is added to give yt]
• x0, w0, w1, . . ., v0, v1, . . . are jointly Gaussian and independent
• wt are IID with E wt = 0, E wtwt^T = W
• vt are IID with E vt = 0, E vtvt^T = V
• E x0 = x̄0, E(x0 − x̄0)(x0 − x̄0)^T = Σ0
(it’s not hard to extend to the case where wt, vt are not zero mean)
since Xt = (x0, . . . , xt) and Yt = (y0, . . . , yt) are linear functions of x0, Wt = (w0, . . . , wt−1), and Vt = (v0, . . . , vt), we conclude they are all jointly Gaussian (i.e., the process x, w, v, y is Gaussian)
the covariance of xt satisfies the recursion

Σx(t + 1) = AΣx(t)A^T + W

if A is stable, Σx(t) converges to the steady-state covariance Σx, which satisfies the Lyapunov equation

Σx = AΣxA^T + W
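the steady-state Lyapunov equation can also be solved directly rather than by iteration; a sketch using SciPy, with an illustrative stable A and W = I (not values from the lecture):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Sketch: steady-state covariance of x(t+1) = A x(t) + w(t), E w w^T = W.
# A (stable) and W are illustrative values, not from the lecture.
A = np.array([[0.6, 0.8], [-0.4, 0.5]])
W = np.eye(2)

# solve the discrete Lyapunov equation  Sigma = A Sigma A^T + W
Sigma = solve_discrete_lyapunov(A, W)

# check: iterating Sigma(t+1) = A Sigma(t) A^T + W converges to the same value
S = np.zeros((2, 2))
for _ in range(200):
    S = A @ S @ A.T + W
print(np.linalg.norm(S - Sigma))
```

both routes agree; the direct solve is preferable when A is large or slowly mixing.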
• finding x̂t|t, i.e., estimating the current state, based on the current and
past observed outputs
• finding x̂t+1|t, i.e., predicting the next state, based on the current and
past observed outputs
since xt, Yt are jointly Gaussian, we can use the standard formula to find
x̂t|t (and similarly for x̂t+1|t)
the Kalman filter is a clever method for computing x̂t|t and x̂t+1|t
recursively
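for reference, the standard formula mentioned above is the conditioning rule for jointly Gaussian vectors (applied here with z = xt and y = Yt):

```latex
% if (z, y) are jointly Gaussian with means \bar z, \bar y and covariance
% blocks \Sigma_z, \Sigma_{zy}, \Sigma_y, then z \mid y is Gaussian with
\begin{aligned}
\mathbf{E}(z \mid y) &= \bar z + \Sigma_{zy}\Sigma_y^{-1}(y - \bar y), \\
\mathbf{cov}(z \mid y) &= \Sigma_z - \Sigma_{zy}\Sigma_y^{-1}\Sigma_{yz}.
\end{aligned}
```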
so xt|Yt−1 and yt|Yt−1 are jointly Gaussian with mean and covariance

[ x̂t|t−1  ]    [ Σt|t−1      Σt|t−1C^T      ]
[ Cx̂t|t−1 ],   [ CΣt|t−1    CΣt|t−1C^T + V ]

hence (xt|Yt−1)|(yt|Yt−1), i.e., xt|Yt, is Gaussian with mean and covariance

x̂t|t = x̂t|t−1 + Σt|t−1C^T(CΣt|t−1C^T + V)^{-1}(yt − Cx̂t|t−1)

Σt|t = Σt|t−1 − Σt|t−1C^T(CΣt|t−1C^T + V)^{-1}CΣt|t−1
this is called the measurement update since it gives our updated estimate
of xt based on the measurement yt becoming available
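the measurement update above can be sketched as a small function; C and V below are illustrative values, not from the lecture:

```python
import numpy as np

# Sketch of the measurement update: given the prediction xhat (= x̂_{t|t-1})
# and covariance P (= Σ_{t|t-1}), incorporate the measurement y.
def measurement_update(xhat, P, y, C, V):
    S = C @ P @ C.T + V                    # innovation covariance CΣC^T + V
    K = P @ C.T @ np.linalg.inv(S)         # gain Σ C^T (CΣC^T + V)^{-1}
    xhat_new = xhat + K @ (y - C @ xhat)   # x̂_{t|t}
    P_new = P - K @ C @ P                  # Σ_{t|t}
    return xhat_new, P_new

# illustrative 2-state, scalar-output case
C = np.array([[1.0, 0.0]])
V = np.array([[0.1]])
x, P = measurement_update(np.zeros(2), np.eye(2), np.array([1.0]), C, V)
```

note that the covariance update does not depend on the measured value yt, only on the measurement model, so Σt|t can be computed ahead of time.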
now condition xt+1 = Axt + wt on Yt; since wt is independent of Yt and has zero mean, xt+1|Yt is Gaussian with mean and covariance

x̂t+1|t = Ax̂t|t

Σt+1|t = AΣt|tA^T + W

this is called the time update, since it propagates the estimate and covariance from t to t + 1
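a time-update step (x̂t+1|t = Ax̂t|t, Σt+1|t = AΣt|tA^T + W) is one line of linear algebra; a sketch with illustrative A and W, not values from the lecture:

```python
import numpy as np

# Sketch of the time update: propagate x̂_{t|t}, Σ_{t|t} through the
# dynamics x_{t+1} = A x_t + w_t, E w w^T = W.
def time_update(xhat, P, A, W):
    return A @ xhat, A @ P @ A.T + W   # x̂_{t+1|t}, Σ_{t+1|t}

# illustrative values (a double integrator with small process noise)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
W = 0.01 * np.eye(2)
x, P = time_update(np.zeros(2), np.eye(2), A, W)
```

alternating measurement and time updates gives the full Kalman filter recursion.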
in LQR, the Riccati recursion for Pt runs backward in time; in the Kalman filter, the Riccati recursion for Σt|t−1 runs forward in time
combining the measurement and time updates, we can express the KF as the one-step recursion

x̂t+1|t = Ax̂t|t−1 + Lt(yt − Cx̂t|t−1)

where

Lt = AΣt|t−1C^T(CΣt|t−1C^T + V)^{-1}

is the observer gain, and et = yt − Cx̂t|t−1 = yt − ŷt|t−1 is the residual (innovation)

[block diagram: the system (wt, vt, z⁻¹, C, yt) in parallel with the estimator, which feeds et through the gain Lt into a z⁻¹/C/A loop producing x̂t|t−1 and ŷt|t−1]
if (C, A) is observable and (A, W) is controllable, Σt|t−1 converges to a steady-state value Σ̂, which satisfies the ARE

Σ̂ = AΣ̂A^T + W − AΣ̂C^T(CΣ̂C^T + V)^{-1}CΣ̂A^T

and the corresponding steady-state observer gain is L = AΣ̂C^T(CΣ̂C^T + V)^{-1}

Example: system is

xt+1 = Axt + wt, yt = Cxt + vt

with xt ∈ R6, yt ∈ R
eigenvalues of A:
[plots: E yt² (top) and E et² (bottom) versus t, for t = 0, …, 200]
prediction error variance converges to steady-state value 18.7
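the lecture's A, C, W, V (and hence the 18.7 figure) are not reproduced here, but the steady-state covariance Σ̂ can be computed by solving the ARE directly, using duality with the LQR ARE; a sketch with assumed matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Sketch: solve the filter ARE via duality with the LQR ARE,
# i.e. solve_discrete_are(A^T, C^T, W, V) returns Σ̂ satisfying
# Σ̂ = AΣ̂A^T + W − AΣ̂C^T(CΣ̂C^T + V)^{-1}CΣ̂A^T.
# A, C, W, V are illustrative, not the lecture's 6-state example.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
W = 0.01 * np.eye(2)
V = np.array([[0.1]])

Sigma_hat = solve_discrete_are(A.T, C.T, W, V)

# verify the ARE residual
rhs = (A @ Sigma_hat @ A.T + W
       - A @ Sigma_hat @ C.T @ np.linalg.inv(C @ Sigma_hat @ C.T + V)
       @ C @ Sigma_hat @ A.T)
print(np.linalg.norm(Sigma_hat - rhs))
```

the steady-state prediction error variance of the output is then C Σ̂ C^T + V, the quantity plotted above for the lecture's system.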
top plot shows yt; bottom plot shows et (on different vertical scale)
[plots: sample trace of yt (top) and et (bottom) versus t, for t = 0, …, 200]