ACM 116: The Kalman filter

• Example

• General Setup

• Derivation

• Numerical examples
– Estimating the voltage
– 1D tracking
– 2D tracking

Example: Navigation Problem


• Truck on “frictionless” straight rails

• Initial position X0 = 0

• Movement is buffeted by random accelerations

• We measure the position every ∆t seconds

• State variables (Xk, Vk): position and velocity at time k∆t

Xk = Xk−1 + Vk−1 ∆t + ak−1 ∆t²/2

Vk = Vk−1 + ak−1 ∆t

where ak−1 is a random acceleration

• Observations
Yk = Xk + Zk
where Zk is a noise term.

The goal is to estimate the position and velocity at all times.
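As a quick illustration (a sketch, not part of the original slides), the truck model can be simulated directly; the time step, noise levels, and horizon below are illustrative assumptions:

% Sketch: simulate the buffeted truck (illustrative parameters)
dt = 1;                            % sampling interval (Delta t)
sigma_a = 0.2;                     % std of the random accelerations (assumed)
sigma_z = 0.5;                     % std of the measurement noise (assumed)
N = 100;
X = zeros(1,N); V = zeros(1,N);    % X(1) = 0 encodes X0 = 0
for k = 2:N
    a = sigma_a*randn;                      % random acceleration a_{k-1}
    X(k) = X(k-1) + V(k-1)*dt + a*dt^2/2;   % position update
    V(k) = V(k-1) + a*dt;                   % velocity update
end
Y = X + sigma_z*randn(1,N);        % noisy position measurements Y_k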



General Setup
Estimation of a stochastic dynamic system

• Dynamics
Xk = Fk−1 Xk−1 + Bk−1 uk−1 + Wk−1
– Xk : state of the system at time k
– uk−1: control input
– Wk−1 : noise

• Observations
Yk = Hk Xk + Zk
– Yk is observed
– Zk is noise

• The noise realizations are all independent

• Goal: predict state Xk from past data Y0 , Y1 , . . . , Yk−1 .



Derivation
We derive the filter in the simpler model where the dynamics are of the form

X_k = a_{k−1} X_{k−1} + W_{k−1}

and the observations

Y_k = X_k + Z_k

The objective is to find, for each time k, the minimum-MSE filter based on Y_0, Y_1, . . . , Y_{k−1}:

X̂_k = Σ_{j=1}^{k} h_j^{(k−1)} Y_{k−j}

To find the filter, we apply the orthogonality principle:

E[(X_k − Σ_{j=1}^{k} h_j^{(k−1)} Y_{k−j}) Y_ℓ] = 0,   ℓ = 0, 1, . . . , k − 1.

Recursion
The beautiful thing about the Kalman filter is that the optimal filter predicting X_{k+1} can almost be deduced from the one predicting X_k:

h_{j+1}^{(k)} = (a_k − h_1^{(k)}) h_j^{(k−1)},   j = 1, . . . , k.

Given the filter h^{(k−1)}, we only need to find h_1^{(k)} to obtain the filter at the next time step.

How to find h_1^{(k)}?
Observe that the next prediction is equal to

X̂_{k+1} = h_1^{(k)} Y_k + Σ_{j=1}^{k} (a_k − h_1^{(k)}) h_j^{(k−1)} Y_{k−j}
         = a_k X̂_k + h_1^{(k)} (Y_k − X̂_k)

Interpretation

X̂_{k+1} = a_k X̂_k + h_1^{(k)} I_k

• a_k X̂_k is the prediction based on the estimate at time k

• h_1^{(k)} I_k is a corrective term, available since we now see Y_k
  – h_1^{(k)} is called the gain
  – I_k = Y_k − X̂_k is called the innovation

Error of Prediction
To find h_1^{(k)}, we look at the error of prediction

ε_k = X_k − X̂_k

We have the recursion

ε_{k+1} = (a_k − h_1^{(k)}) ε_k + W_k − h_1^{(k)} Z_k

• ε_0 = Z_0

• E(ε_k) = 0

• E(ε_{k+1}²) = (a_k − h_1^{(k)})² E(ε_k²) + E(W_k²) + (h_1^{(k)})² E(Z_k²)

To minimize the MSE E(ε_{k+1}²), we adjust h_1^{(k)} so that

∂E(ε_{k+1}²)/∂h_1^{(k)} = 0 = −2(a_k − h_1^{(k)}) E(ε_k²) + 2 h_1^{(k)} E(Z_k²)

which gives

h_1^{(k)} = a_k E(ε_k²) / (E(ε_k²) + E(Z_k²))

Note that this yields the recurrence relation

E(ε_{k+1}²) = a_k (a_k − h_1^{(k)}) E(ε_k²) + E(W_k²)

The Kalman Filter Algorithm


• Initialization: X̂_0 = 0, E(ε_0²) = E(Z_0²)

• Loop: for k = 0, 1, . . .

  h_1^{(k)} = a_k E(ε_k²) / (E(ε_k²) + E(Z_k²))

  X̂_{k+1} = a_k X̂_k + h_1^{(k)} (Y_k − X̂_k)

  E(ε_{k+1}²) = a_k (a_k − h_1^{(k)}) E(ε_k²) + E(W_k²)
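In MATLAB, this loop might be transcribed as follows (a minimal sketch; the function name is mine, and MATLAB index k holds time k−1, so Y(k), a(k), varW(k), varZ(k) stand for Y_{k−1}, a_{k−1}, E(W²_{k−1}), E(Z²_{k−1})):

function [Xhat, P] = scalar_kalman(Y, a, varW, varZ)
% Scalar Kalman recursion: Xhat(k+1) holds the estimate of X_k,
% P(k+1) holds the error variance E(eps_k^2)
n = length(Y);
Xhat = zeros(1, n+1); P = zeros(1, n+1);
Xhat(1) = 0; P(1) = varZ(1);                         % initialization
for k = 1:n
    h1 = a(k)*P(k) / (P(k) + varZ(k));               % gain h_1
    Xhat(k+1) = a(k)*Xhat(k) + h1*(Y(k) - Xhat(k));  % state update
    P(k+1) = a(k)*(a(k) - h1)*P(k) + varW(k);        % error variance update
end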

Benefits
• Requires no knowledge of the distributions of Wk and Zk (only their variances)

• Easy implementation

• Many applications
– Inertial guidance system
– Autopilot
– Satellite navigation system
– Many others

General Formulation

Xk = Fk−1 Xk−1 + Wk−1


Yk = Hk Xk + Zk

The covariance of Wk is Qk and that of Zk is Rk .


Two variables:

• X̂k|k : estimate of the state at time k based upon Y0 , . . . , Yk

• Ek|k : error covariance matrix, Ek|k = Cov(Xk − X̂k|k)



Prediction

X̂k+1|k = Fk X̂k|k
Ek+1|k = Fk Ek|k Fk^T + Qk

Update

Ik+1 = Yk+1 − Hk+1 X̂k+1|k                     Innovation
Sk+1 = Hk+1 Ek+1|k Hk+1^T + Rk+1               Innovation covariance
Kk+1 = Ek+1|k Hk+1^T Sk+1^{−1}                 Kalman gain
X̂k+1|k+1 = X̂k+1|k + Kk+1 Ik+1                 Updated state estimate
Ek+1|k+1 = (Id − Kk+1 Hk+1) Ek+1|k             Updated error covariance
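One cycle of these two stages might be coded as follows (a sketch with my own variable names; xhat and E enter as X̂k|k and Ek|k, and y is the new observation Yk+1):

% Sketch: one predict/update cycle of the matrix Kalman filter
xpred = F*xhat;                    % predicted state X(k+1|k)
Epred = F*E*F' + Q;                % predicted error covariance E(k+1|k)
innov = y - H*xpred;               % innovation
S = H*Epred*H' + R;                % innovation covariance
K = (Epred*H')/S;                  % Kalman gain (right division avoids inv(S))
xhat = xpred + K*innov;            % updated state estimate X(k+1|k+1)
E = (eye(size(E)) - K*H)*Epred;    % updated error covariance E(k+1|k+1)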

Estimating Constant Voltage


We wish to estimate a voltage which is almost constant, apart from small random fluctuations. Our measuring device is imperfect (e.g. because of poor A/D conversion). The process is governed by

Xk = X0 + Wk ,   k = 1, 2, . . .

with X0 = 0.5 V, and the measurements are

Zk = Xk + Vk ,   k = 1, 2, . . .

where Wk , Vk are uncorrelated Gaussian white noise processes, with

R := Var(Vk) = 0.01,   Var(Wk) = 10⁻⁵.
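The experiment shown on the next three slides can be reproduced along these lines (a sketch; Rest denotes the measurement variance assumed by the filter, which need not equal the true R):

% Sketch: constant-voltage experiment (scalar filter with a_k = 1)
N = 50; X0 = 0.5; Q = 1e-5; R = 0.01;
Rest = 0.01;                        % try 1e-4 (optimistic) or 1 (pessimistic)
X = X0 + sqrt(Q)*randn(1,N);        % process X_k = X_0 + W_k
Z = X + sqrt(R)*randn(1,N);         % measurements Z_k = X_k + V_k
xhat = 0; P = Rest;                 % initialization
est = zeros(1,N);
for k = 1:N
    h1 = P/(P + Rest);                  % gain (a_k = 1)
    xhat = xhat + h1*(Z(k) - xhat);     % state update
    P = (1 - h1)*P + Q;                 % error variance update
    est(k) = xhat;
end
plot(1:N, est, 1:N, X, 1:N, Z, '.')
xlabel('Iteration'), ylabel('Voltage')
legend('estimate', 'exact process', 'measurements')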

Accurate knowledge of measurement variance, Rest = R = 0.01

[Figure: Kalman estimate, exact process, and measurements; voltage vs. iteration over 50 steps.]

Optimistic estimate of measurement variance, Rest = 0.0001

[Figure: Kalman estimate, exact process, and measurements; voltage vs. iteration over 50 steps.]

Pessimistic estimate of measurement variance, Rest = 1

[Figure: Kalman estimate, exact process, and measurements; voltage vs. iteration over 50 steps.]

1D Tracking
Estimation of the position of a vehicle.
Let X be the state variable (position and speed), and let A be the transition matrix

A = [ 1  ∆t
      0   1 ]

The process is governed by

Xn+1 = A Xn + Wn

where Wn is a zero-mean Gaussian white noise process. The observation is

Yn = C Xn + Zn

where the matrix C only picks up the position and Zn is another zero-mean Gaussian white noise process, independent of Wn.
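A compact sketch of this setup (the time step, noise covariances, and initial conditions are illustrative assumptions):

% Sketch: 1D tracking with the matrix predict/update equations
dt = 0.5;
A = [1 dt; 0 1];                   % transition matrix
C = [1 0];                         % observe the position only
Q = 0.01*eye(2); R = 1;            % process / measurement noise (assumed)
N = 50;
x = [0; 1];                        % true state: position 0, speed 1
xhat = [0; 0]; E = eye(2);         % filter state and error covariance
for n = 1:N
    x = A*x + 0.1*randn(2,1);      % simulate X_{n+1} = A X_n + W_n
    y = C*x + sqrt(R)*randn;       % observation Y_n
    xpred = A*xhat; Epred = A*E*A' + Q;
    K = (Epred*C')/(C*Epred*C' + R);
    xhat = xpred + K*(y - C*xpred);
    E = (eye(2) - K*C)*Epred;
end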
[Figure: Estimation of a moving vehicle in 1-D: exact, measured, and estimated position vs. time.]

2D Example
General setup

X(t + 1) = F X(t) + W(t),   W ∼ N(0, Q)

Y(t) = H X(t) + V(t),   V ∼ N(0, R)

Moving particle at constant velocity, subject to random perturbations of its trajectory. The new position (x1, x2) is the old position plus the velocity (dx1, dx2) plus noise w.

[ x1(t)  ]   [ 1 0 1 0 ] [ x1(t−1)  ]   [ w1(t−1)  ]
[ x2(t)  ] = [ 0 1 0 1 ] [ x2(t−1)  ] + [ w2(t−1)  ]
[ dx1(t) ]   [ 0 0 1 0 ] [ dx1(t−1) ]   [ dw1(t−1) ]
[ dx2(t) ]   [ 0 0 0 1 ] [ dx2(t−1) ]   [ dw2(t−1) ]

Observations
We only observe the position of the particle.

[ y1(t) ]   [ 1 0 0 0 ] [ x1(t)  ]   [ v1(t) ]
[ y2(t) ] = [ 0 1 0 0 ] [ x2(t)  ] + [ v2(t) ]
                        [ dx1(t) ]
                        [ dx2(t) ]

Source: http://www.cs.ubc.ca/~murphyk/Software/Kalman/kalman.html

Implementation
% Make a point move in the 2D plane
% State = (x y xdot ydot). We only observe (x y).

% This code was used to generate Figure 17.9 of
% "Artificial Intelligence: a Modern Approach",
% Russell and Norvig, 2nd edition, Prentice Hall, in preparation.

% X(t+1) = F X(t) + noise(Q)
% Y(t)   = H X(t) + noise(R)

ss = 4; % state size
os = 2; % observation size
F = [1 0 1 0; 0 1 0 1; 0 0 1 0; 0 0 0 1];
H = [1 0 0 0; 0 1 0 0];
Q = 1*eye(ss);
R = 10*eye(os);
initx = [10 10 1 0]';
initV = 10*eye(ss);

seed = 8;
rand('state', seed);
randn('state', seed);
T = 50;
[x,y] = sample_lds(F, H, Q, R, initx, T);

Apply Kalman Filter


[xfilt, Vfilt] = kalman_filter(y, F, H, Q, R, initx, initV);
dfilt = x([1 2],:) - xfilt([1 2],:);
mse_filt = sqrt(sum(sum(dfilt.^2)))

figure;
plot(x(1,:), x(2,:), 'ks-');
hold on
plot(y(1,:), y(2,:), 'g*');
plot(xfilt(1,:), xfilt(2,:), 'rx:');
hold off
legend('true', 'observed', 'filtered', 0)
xlabel('X1'), ylabel('X2')

[Figure: true, observed, and filtered trajectories in the (X1, X2) plane.]

[Figure: true, observed, and filtered trajectories in the (X1, X2) plane, for a second simulation.]

Apply Kalman Smoother


[xsmooth, Vsmooth] = kalman_smoother(y, F, H, Q, R, initx, initV);
dsmooth = x([1 2],:) - xsmooth([1 2],:);
mse_smooth = sqrt(sum(sum(dsmooth.^2)))

figure;
hold on
plot(x(1,:), x(2,:), 'ks-');
plot(y(1,:), y(2,:), 'g*');
plot(xsmooth(1,:), xsmooth(2,:), 'rx:');
hold off
legend('true', 'observed', 'smoothed', 0)
xlabel('X1'), ylabel('X2')

[Figure: true, observed, and smoothed trajectories in the (X1, X2) plane.]

[Figure: true, observed, and smoothed trajectories in the (X1, X2) plane, for the second simulation.]
