Kalman Filter
Derivation
Overview
1. State Extrapolation
$\hat{x}^-_{k+1} = \Phi_{k+1} \hat{x}_k$
2. Covariance Extrapolation
$P^-_{k+1} = \Phi_{k+1} P_k \Phi^T_{k+1} + Q_k$
3. Kalman Gain
$K_{k+1} = P^-_{k+1} H^T_{k+1} \left[ H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right]^{-1}$
4. State Update
$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\left( z_{k+1} - H_{k+1} \hat{x}^-_{k+1} \right)$
5. Covariance Update
$P_{k+1} = P^-_{k+1} - K_{k+1} H_{k+1} P^-_{k+1}$
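As a sketch, the update equations above can be exercised for one predict/update cycle in NumPy. The matrices $\Phi$, $H$, $Q$, $R$ and the measurement $z$ below are illustrative assumptions, not values from the text:

```python
import numpy as np

# One predict/update cycle of the five equations above.
# Phi, H, Q, R, z are illustrative assumptions, not values from the text.
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])        # measurement matrix
Q = 0.01 * np.eye(2)              # process noise covariance
R = np.array([[0.25]])            # measurement noise covariance

x_hat = np.zeros(2)               # current estimate x_hat_k
P = np.eye(2)                     # current covariance P_k
z = np.array([1.2])               # new measurement z_{k+1}

# 1. State extrapolation
x_minus = Phi @ x_hat
# 2. Covariance extrapolation
P_minus = Phi @ P @ Phi.T + Q
# 3. Kalman gain
K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
# 4. State update
x_hat = x_minus + K @ (z - H @ x_minus)
# 5. Covariance update
P = P_minus - K @ H @ P_minus
```

Note that the update shrinks the covariance: the trace of $P_{k+1}$ is smaller than that of $P^-_{k+1}$.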
Notation: $x$ is an $n \times 1$ column vector and $A$ is an $m \times n$ matrix,
$x = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}, \qquad A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$
(1) $\dfrac{\partial (x^T y)}{\partial x} = \dfrac{\partial (y^T x)}{\partial x} = y$
(2) $\dfrac{\partial (x^T N x)}{\partial x} = 2Nx$ (where $N$ is symmetric)
(3) $\dfrac{\partial}{\partial x}(Ax+b)^T M (Ax+b) = 2A^T M A x + 2A^T M b = 2A^T M (Ax+b)$ (where $M$ is symmetric)
(4) $\dfrac{\partial}{\partial A}\,\mathrm{trace}(AC) = C^T$ — note: for $AC$ to be square, $\dim A = \dim C^T$.
(5) $\dfrac{\partial}{\partial A}\,\mathrm{trace}(ABA^T) = 2AB$ (where $B$ is symmetric)
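These identities can be spot-checked numerically with central finite differences; this sketch verifies (2) and (5) on random data (dimensions and seed are arbitrary choices):

```python
import numpy as np

# Finite-difference spot check of identities (2) and (5).
# Dimensions and random seed are arbitrary choices.
rng = np.random.default_rng(0)

def num_grad(f, X, eps=1e-6):
    """Central-difference gradient of scalar f with respect to array X."""
    G = np.zeros_like(X)
    it = np.nditer(X, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        Xp = X.copy(); Xp[i] += eps
        Xm = X.copy(); Xm[i] -= eps
        G[i] = (f(Xp) - f(Xm)) / (2 * eps)
    return G

# (2): d(x^T N x)/dx = 2 N x, with N symmetric
x = rng.standard_normal(4)
N = rng.standard_normal((4, 4))
N = N + N.T
g2 = num_grad(lambda v: v @ N @ v, x)

# (5): d trace(A B A^T)/dA = 2 A B, with B symmetric
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 4))
B = B + B.T
g5 = num_grad(lambda M: np.trace(M @ B @ M.T), A)
```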
(6) Matrix Inversion Lemma (MIL):
$[P^{-1} + H^T R^{-1} H]^{-1} = P - P H^T [H P H^T + R]^{-1} H P$
(7) Gradient Expression (GE):
$P H^T [H P H^T + R]^{-1} = [P^{-1} + H^T R^{-1} H]^{-1} H^T R^{-1}$
(8) Equivalently (for $W$ symmetric, here with $W = R$):
$[P^{-1} + H^T R^{-1} H]^{-1} = P - [P^{-1} + H^T R^{-1} H]^{-1} H^T R^{-1} H P$
Assumptions
Model

Process: $x_{k+1} = \Phi_{k+1} x_k + w_k$
Measurement: $z_k = H_k x_k + v_k$

Assumptions:
$E[x_0] = \bar{x}_0$
$E[w_k] = 0 \;\; \forall k$
$E[v_k] = 0 \;\; \forall k$
$\mathrm{cov}\{w_k, w_j\} = Q_k \delta_{kj}$
$\mathrm{cov}\{v_k, v_j\} = R_k \delta_{kj}$
$\mathrm{cov}\{x_0, x_0\} = P_0$
$\mathrm{cov}\{w_k, v_j\} = 0 \;\; \forall k, j$
$\mathrm{cov}\{x_0, w_k\} = 0 \;\; \forall k$
$\mathrm{cov}\{x_0, v_j\} = 0 \;\; \forall j$
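A minimal simulation of this model, assuming time-invariant $\Phi$, $H$, $Q$, $R$ for simplicity (the model allows them to vary with $k$; all numeric values are illustrative):

```python
import numpy as np

# Simulate the process and measurement model above. Phi, H, Q, R are
# assumed time-invariant here for simplicity; values are illustrative.
rng = np.random.default_rng(1)
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[0.1]])

x0_bar = np.zeros(2)
P0 = np.eye(2)

x = rng.multivariate_normal(x0_bar, P0)           # x_0 with mean x0_bar, cov P0
xs, zs = [], []
for _ in range(200):
    v = rng.multivariate_normal(np.zeros(1), R)   # v_k ~ (0, R)
    zs.append(H @ x + v)                          # z_k = H x_k + v_k
    xs.append(x)
    w = rng.multivariate_normal(np.zeros(2), Q)   # w_k ~ (0, Q)
    x = Phi @ x + w                               # x_{k+1} = Phi x_k + w_k
xs = np.array(xs)
zs = np.array(zs)
```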
Goal

Find a recursive, unbiased, linear estimator $\hat{x}_{k+1}$ of the state that minimizes the covariance of the estimation error.
Derivation Steps
Step 1

Postulate a recursive linear estimator of the form
$\hat{x}_{k+1} = K'_{k+1} \hat{x}_k + K_{k+1} z_{k+1}$
and require it to be unbiased. Substituting the process and measurement models into the estimation error $\hat{x}_{k+1} - x_{k+1}$ gives an expression in $K'_{k+1}$, $K_{k+1}$, and $x_k$.
The final step is to take the expectation of this expression and set it equal to zero. For the right-hand side to equal zero for arbitrary $E[x_k]$, the following must be true:
$E[\hat{x}_{k+1} - x_{k+1}] = \left[ K_{k+1} H_{k+1} \Phi_{k+1} - \Phi_{k+1} + K'_{k+1} \right] E[x_k] = 0$
which implies
$K_{k+1} H_{k+1} \Phi_{k+1} - \Phi_{k+1} + K'_{k+1} = 0$
or
$K'_{k+1} = (I - K_{k+1} H_{k+1}) \Phi_{k+1}$
or equivalently
$\hat{x}_{k+1} = \underbrace{\Phi_{k+1} \hat{x}_k}_{\text{extrapolated state}} + K_{k+1} \underbrace{\left( z_{k+1} - H_{k+1} \Phi_{k+1} \hat{x}_k \right)}_{\text{residual of measurement and prediction of measurement}}$
It remains to find the value of $K_{k+1}$ which minimizes the covariance of the estimation error.
$P_{k+1}$ is an $n \times n$ covariance matrix.
Step 2
A. Find the covariance $P^-_{k+1}$ of the extrapolated estimation error $\tilde{x}^-_{k+1} = \hat{x}^-_{k+1} - x_{k+1}$. The error $\tilde{x}_k = \hat{x}_k - x_k$ has covariance $P_k$. Since
$\tilde{x}^-_{k+1} = \Phi_{k+1} \hat{x}_k - \Phi_{k+1} x_k - w_k = \Phi_{k+1} \tilde{x}_k - w_k$
and $w_k$ is uncorrelated with $\tilde{x}_k$, the cross terms vanish and
$P^-_{k+1} = E\left[ \tilde{x}^-_{k+1} (\tilde{x}^-_{k+1})^T \right] = \Phi_{k+1} E[\tilde{x}_k \tilde{x}_k^T] \Phi^T_{k+1} + E[w_k w_k^T] = \Phi_{k+1} P_k \Phi^T_{k+1} + Q_k$
Thus, equation 2 is
$P^-_{k+1} = \Phi_{k+1} P_k \Phi^T_{k+1} + Q_k$
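Equation 2 can be checked by Monte Carlo: draw uncorrelated samples of $\tilde{x}_k$ and $w_k$ and compare the empirical covariance of $\Phi_{k+1}\tilde{x}_k - w_k$ against $\Phi_{k+1} P_k \Phi^T_{k+1} + Q_k$ (all numeric values are illustrative):

```python
import numpy as np

# Monte Carlo check of equation 2: if x_tilde_k ~ (0, P_k) and
# w_k ~ (0, Q_k) are uncorrelated, then Phi x_tilde_k - w_k has
# covariance Phi P_k Phi^T + Q_k. Values are illustrative.
rng = np.random.default_rng(2)
Phi = np.array([[1.0, 0.5],
                [0.0, 1.0]])
P_k = np.array([[1.0, 0.2],
                [0.2, 0.5]])
Q_k = 0.1 * np.eye(2)

n = 200_000
xt = rng.multivariate_normal(np.zeros(2), P_k, size=n)
w = rng.multivariate_normal(np.zeros(2), Q_k, size=n)
xt_minus = xt @ Phi.T - w              # Phi x_tilde_k - w_k, per sample

P_minus_mc = np.cov(xt_minus.T)        # empirical covariance
P_minus = Phi @ P_k @ Phi.T + Q_k      # equation 2
```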
B. Find the covariance $P_{k+1}$ (the covariance of the final estimation error, equation 5). It will be a function of $K_{k+1}$ and $P^-_{k+1}$. Substituting the measurement model into the state update,
$\hat{x}_{k+1} - x_{k+1} = [I - K_{k+1} H_{k+1}] \left( \hat{x}^-_{k+1} - x_{k+1} \right) + K_{k+1} v_{k+1} = [I - K_{k+1} H_{k+1}] \tilde{x}^-_{k+1} + K_{k+1} v_{k+1}$
Thus
$\tilde{x}_{k+1} = \hat{x}_{k+1} - x_{k+1} = [I - K_{k+1} H_{k+1}] \tilde{x}^-_{k+1} + K_{k+1} v_{k+1}$
Taking the expectation of the outer product of both sides provides an expression for $P_{k+1}$:
$P_{k+1} = \mathrm{cov}\{\tilde{x}_{k+1}\} = E[\tilde{x}_{k+1} \tilde{x}^T_{k+1}] = [I - K_{k+1} H_{k+1}] P^-_{k+1} [I - K_{k+1} H_{k+1}]^T + K_{k+1} R_{k+1} K^T_{k+1}$
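Once the optimal gain is substituted, this symmetric (Joseph) form collapses to $(I - K_{k+1}H_{k+1})P^-_{k+1}$. A quick numerical sanity check, with illustrative values:

```python
import numpy as np

# For the optimal gain, the Joseph form above collapses to
# (I - K H) P_minus. All numeric values are illustrative.
P_minus = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
I = np.eye(2)

K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)   # optimal gain
joseph = (I - K @ H) @ P_minus @ (I - K @ H).T + K @ R @ K.T
short = (I - K @ H) @ P_minus
```

The Joseph form holds for any gain, while the short form is valid only at the optimum; for a suboptimal or rounded gain the two differ.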
Expanding $P_{k+1}$ and using the symmetry of $P^-_{k+1}$, note that
$\mathrm{Tr}\left( P^-_{k+1} H^T_{k+1} K^T_{k+1} \right) = \mathrm{Tr}\left( K_{k+1} H_{k+1} P^-_{k+1} \right)$
Applying identities (4) and (5),
$\frac{\partial\,\mathrm{Tr}(ABA^T)}{\partial A} = 2AB$ (where $B$ is symmetric), $\qquad \frac{\partial\,\mathrm{Tr}(AC)}{\partial A} = C^T$
we obtain the partial of $\mathrm{Tr}(P_{k+1})$ with respect to $K_{k+1}$:
$\frac{\partial\,\mathrm{Tr}(P_{k+1})}{\partial K_{k+1}} = -2 P^-_{k+1} H^T_{k+1} + 2 K_{k+1} \left( H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right)$
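The analytic gradient can be verified against a central finite difference of $\mathrm{Tr}(P_{k+1})$ taken at an arbitrary (suboptimal) gain; all numeric values are illustrative:

```python
import numpy as np

# Central finite-difference check of
#   dTr(P_{k+1})/dK = -2 P_minus H^T + 2 K (H P_minus H^T + R)
# where P_{k+1} is the Joseph-form covariance. Values illustrative.
P_minus = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
I = np.eye(2)

def trace_P(K):
    P = (I - K @ H) @ P_minus @ (I - K @ H).T + K @ R @ K.T
    return np.trace(P)

K = np.array([[0.4],
              [0.1]])                 # arbitrary (suboptimal) gain
analytic = -2 * P_minus @ H.T + 2 * K @ (H @ P_minus @ H.T + R)

eps = 1e-6
numeric = np.zeros_like(K)
for i in range(K.shape[0]):
    for j in range(K.shape[1]):
        Kp = K.copy(); Kp[i, j] += eps
        Km = K.copy(); Km[i, j] -= eps
        numeric[i, j] = (trace_P(Kp) - trace_P(Km)) / (2 * eps)
```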
Setting the partial derivative to zero and solving for the gain gives
$K_{k+1} = P^-_{k+1} H^T_{k+1} \left[ H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right]^{-1}$
Summary
$\hat{x}_{k+1} = \hat{x}^-_{k+1} + K_{k+1}\left( z_{k+1} - H_{k+1} \hat{x}^-_{k+1} \right)$
where
$\hat{x}^-_{k+1} = \Phi_{k+1} \hat{x}_k$
$P^-_{k+1} = \Phi_{k+1} P_k \Phi^T_{k+1} + Q_k$
The standard Kalman filter algorithm computes the gain $K_{k+1}$ first, then computes the updated covariance $P_{k+1}$ as a function of the gain:
$K_{k+1} = P^-_{k+1} H^T_{k+1} \left[ H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right]^{-1}$
$P_{k+1} = [I - K_{k+1} H_{k+1}] P^-_{k+1}$
Usually $\dim z < \dim x$ (the measurement vector is smaller than the state vector), so this formulation is desirable: the only matrix inversion required is of size $\dim z \times \dim z$.
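Putting the pieces together, a hedged end-to-end sketch of this gain-first algorithm on a simulated two-state system (all numeric values are illustrative assumptions):

```python
import numpy as np

# End-to-end sketch of the gain-first formulation on a simulated
# two-state system. All numeric values are illustrative assumptions.
rng = np.random.default_rng(3)
Phi = np.array([[1.0, 0.1],
                [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
I = np.eye(2)

x = np.array([0.0, 1.0])          # true state
x_hat = np.zeros(2)               # initial estimate
P = np.eye(2)                     # initial covariance

for _ in range(300):
    # simulate truth and measurement
    x = Phi @ x + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x + rng.multivariate_normal(np.zeros(1), R)
    # extrapolation
    x_minus = Phi @ x_hat
    P_minus = Phi @ P @ Phi.T + Q
    # gain first, then covariance as a function of the gain
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R)
    x_hat = x_minus + K @ (z - H @ x_minus)
    P = (I - K @ H) @ P_minus
```

Note that only the $1 \times 1$ innovation covariance $H P^- H^T + R$ is inverted each step, matching the $\dim z < \dim x$ argument above.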
An alternative formulation computes the updated covariance first, then the gain:
$P_{k+1} = \left[ (P^-_{k+1})^{-1} + H^T_{k+1} R^{-1}_{k+1} H_{k+1} \right]^{-1}$
$K_{k+1} = P_{k+1} H^T_{k+1} R^{-1}_{k+1}$
The two formulations are equivalent. Starting from the covariance update and applying the Matrix Inversion Lemma (MIL),
$P_{k+1} = (I - K_{k+1} H_{k+1}) P^-_{k+1} = P^-_{k+1} - P^-_{k+1} H^T_{k+1} \left[ H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right]^{-1} H_{k+1} P^-_{k+1} = \left[ (P^-_{k+1})^{-1} + H^T_{k+1} R^{-1}_{k+1} H_{k+1} \right]^{-1}$
and by the Gradient Expression (GE),
$K_{k+1} = P^-_{k+1} H^T_{k+1} \left[ H_{k+1} P^-_{k+1} H^T_{k+1} + R_{k+1} \right]^{-1} = \left[ (P^-_{k+1})^{-1} + H^T_{k+1} R^{-1}_{k+1} H_{k+1} \right]^{-1} H^T_{k+1} R^{-1}_{k+1} = P_{k+1} H^T_{k+1} R^{-1}_{k+1}$
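The equivalence can be confirmed numerically: both formulations should produce the same gain and covariance whenever $P^-_{k+1}$ and $R_{k+1}$ are invertible (values illustrative):

```python
import numpy as np

# Both formulations should give the same covariance and gain.
# Values are illustrative; requires P_minus and R invertible.
P_minus = np.array([[2.0, 0.3],
                    [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
I = np.eye(2)
inv = np.linalg.inv

# gain-first formulation
K1 = P_minus @ H.T @ inv(H @ P_minus @ H.T + R)
P1 = (I - K1 @ H) @ P_minus

# covariance-first (information) formulation
P2 = inv(inv(P_minus) + H.T @ inv(R) @ H)
K2 = P2 @ H.T @ inv(R)
```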