
13.4 Scalar Kalman Filter


Data Model

To derive the Kalman filter we need the data model:

    s[n] = a s[n−1] + u[n]    <-- State Equation
    x[n] = s[n] + w[n]        <-- Observation Equation

Assumptions
1. u[n] is zero-mean, white, Gaussian: E{u²[n]} = σ_u²
2. w[n] is zero-mean, white, Gaussian: E{w²[n]} = σ_n²  (σ_n² can vary with time)
3. The initial state is s[−1] ~ N(μ_s, σ_s²)
4. u[n], w[n], and s[−1] are all independent of each other

To simplify the derivation: let μ_s = 0 (we'll account for this later)


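As a concrete illustration, here is a minimal Python simulation of this data model (the values a = 0.95, σ_u² = 0.1, σ_n² = 1, μ_s = 0, σ_s² = 1 are arbitrary choices for illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not from the slides)
a, var_u, var_n = 0.95, 0.1, 1.0    # state coeff., driving-noise var., measurement-noise var.
mu_s, var_s = 0.0, 1.0              # mean and variance of the initial state s[-1]
N = 200

s = np.empty(N)                     # state sequence
x = np.empty(N)                     # observation sequence

s_prev = rng.normal(mu_s, np.sqrt(var_s))   # draw s[-1] ~ N(mu_s, var_s)
for n in range(N):
    s[n] = a * s_prev + rng.normal(0.0, np.sqrt(var_u))   # state equation
    x[n] = s[n] + rng.normal(0.0, np.sqrt(var_n))         # observation equation
    s_prev = s[n]
```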

Goal and Two Properties

Goal: Recursively compute ŝ[n|n] = E{ s[n] | x[0], x[1], …, x[n] }

Notation:
  X[n] is the set of all observations
  x[n] is a single observation

    X[n] = [ x[0], x[1], …, x[n] ]ᵀ

Two Properties We Need

1. For the jointly Gaussian case, the MMSE estimator of a zero-mean θ
   based on two uncorrelated data vectors x₁ & x₂ is (see p. 350 of text):

       θ̂ = E{ θ | x₁, x₂ } = E{ θ | x₁ } + E{ θ | x₂ }

2. If θ = θ₁ + θ₂, then the MMSE estimator is

       θ̂ = E{ θ | x } = E{ θ₁ + θ₂ | x } = E{ θ₁ | x } + E{ θ₂ | x }

   (a result of the linearity of the E{·} operator)

Derivation of Scalar Kalman Filter

Recall from Section 12.6 the innovation:

    x̃[n] = x[n] − x̂[n|n−1]

where x̂[n|n−1] is the MMSE estimate of x[n] given X[n−1] (prediction!!).

By the MMSE Orthogonality Principle:

    E{ x̃[n] X[n−1] } = 0

x̃[n] is the part of x[n] that is uncorrelated with the previous data.

Now note: X[n] is equivalent to { X[n−1], x̃[n] }

Why? Because we can get X[n] from it as follows:

    { X[n−1], x̃[n] }  →  { X[n−1], x[n] }  =  X[n]

    using  x[n] = x̃[n] + Σ_{k=0}^{n−1} a_k x[k],  where the sum is the
    linear prediction x̂[n|n−1].

What have we done so far?

Have shown that X[n] ⇔ { X[n−1], x̃[n] }, with x̃[n] uncorrelated with X[n−1].

Have split the current data set into 2 parts:
1. Old data
2. Uncorrelated part of new data (just the new facts)

Because of this:

    ŝ[n|n] = E{ s[n] | X[n] } = E{ s[n] | X[n−1], x̃[n] }

So what??!! We'll now exploit Property #1!!

    ŝ[n|n] = E{ s[n] | X[n−1] } + E{ s[n] | x̃[n] }

The first term is ŝ[n|n−1], the prediction of s[n] based on past data.
The second term is the update based on the innovation part of the new data.
Now we need to look more closely at each of these!

Look at the Prediction Term:  ŝ[n|n−1]

Use the Dynamical Model: it is the key to prediction because it tells us
how the state should progress from instant to instant.

    ŝ[n|n−1] = E{ s[n] | X[n−1] } = E{ a s[n−1] + u[n] | X[n−1] }

Now use Property #2:

    ŝ[n|n−1] = a E{ s[n−1] | X[n−1] } + E{ u[n] | X[n−1] }

The first term is ŝ[n−1|n−1] by definition. The second term equals
E{u[n]} = 0 by the independence of u[n] and X[n−1] (see bottom of p. 433
in the textbook). So:

    ŝ[n|n−1] = a ŝ[n−1|n−1]

The Dynamical Model provides the update from estimate to prediction!!

Look at the Update Term:  E{ s[n] | x̃[n] }

Use the form for the Gaussian MMSE estimate:

    E{ s[n] | x̃[n] } = ( E{ s[n] x̃[n] } / E{ x̃²[n] } ) x̃[n]

Call the ratio k[n], the gain, and recall x̃[n] = x[n] − x̂[n|n−1]. So:

    E{ s[n] | x̃[n] } = k[n] ( x[n] − x̂[n|n−1] )

By Prop. #2:  x̂[n|n−1] = ŝ[n|n−1] + ŵ[n|n−1], and ŵ[n|n−1] = 0 because
w[n] is indep. of {x[0], …, x[n−1]}. The prediction shows up again!!!

Put these results together:

    ŝ[n|n] = ŝ[n|n−1] + k[n] [ x[n] − ŝ[n|n−1] ]

    with  ŝ[n|n−1] = a ŝ[n−1|n−1]

This is the Kalman Filter. How do we get the gain?

Look at the Gain Term:

Need two properties:

A.  E{ s[n] ( x[n] − ŝ[n|n−1] ) } = E{ ( s[n] − ŝ[n|n−1] ) ( x[n] − ŝ[n|n−1] ) }

    Aside: <x, y> = <x + z, y> for any z ⊥ y. Here ŝ[n|n−1] is a linear
    combination of past data and is thus ⊥ the innovation
    x̃[n] = x[n] − x̂[n|n−1] = x[n] − ŝ[n|n−1].

B.  E{ w[n] ( s[n] − ŝ[n|n−1] ) } = 0

    Proof: w[n] is the measurement noise and by assumption is indep. of the
    dynamical driving noise u[n] and of s[−1]. In other words, w[n] is indep.
    of everything dynamical, so E{ w[n] s[n] } = 0. Also, ŝ[n|n−1] is based
    on past data, which involve {w[0], …, w[n−1]}, and since the measurement
    noise has indep. samples we get ŝ[n|n−1] ⊥ w[n].

So we start with the gain as defined above and plug in for the innovation:

    k[n] = E{ s[n] x̃[n] } / E{ x̃²[n] }
         = E{ s[n] [ x[n] − ŝ[n|n−1] ] } / E{ [ x[n] − ŝ[n|n−1] ]² }        (!)

Use Prop. A in the numerator and x[n] = s[n] + w[n] in the denominator:

         = E{ [ s[n] − ŝ[n|n−1] ] [ x[n] − ŝ[n|n−1] ] }
           / E{ [ s[n] − ŝ[n|n−1] + w[n] ]² }                               (!!)

Use x[n] = s[n] + w[n] in the numerator and expand the denominator:

    numerator   = E{ [ s[n] − ŝ[n|n−1] ]² } + E{ [ s[n] − ŝ[n|n−1] ] w[n] }
                = M[n|n−1] + 0                       (= 0 by Prop. B)

    denominator = E{ [ s[n] − ŝ[n|n−1] ]² } + σ_n² + 2 E{ [ s[n] − ŝ[n|n−1] ] w[n] }
                = M[n|n−1] + σ_n² + 0                (= 0 by Prop. B)

where M[n|n−1] is the MSE when s[n] is estimated by 1-step prediction.

This gives a form for the gain:

    k[n] = M[n|n−1] / ( σ_n² + M[n|n−1] )

This balances the quality of the measured data against the predicted state.

In the Kalman filter the prediction acts like the prior information about
the state at time n before we observe the data at time n.
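As a quick numeric illustration of this balance (σ_n² = 1 and the M values are arbitrary choices, not from the slides):

```python
var_n = 1.0                                    # measurement-noise variance (illustrative)
for M_pred in (0.01, 0.1, 1.0, 10.0, 100.0):   # candidate prediction MSEs M[n|n-1]
    k = M_pred / (var_n + M_pred)              # gain k[n]
    print(f"M[n|n-1] = {M_pred:6.2f}  ->  k[n] = {k:.3f}")
# Small M (good prediction) -> k near 0: lean on the prediction.
# Large M (poor prediction) -> k near 1: lean on the new measurement.
```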

Look at the Prediction MSE Term:

But now we need to know how to find M[n|n−1]!!!

    M[n|n−1] = E{ [ s[n] − ŝ[n|n−1] ]² }
             = E{ [ a s[n−1] + u[n] − a ŝ[n−1|n−1] ]² }     (use dynamical model
                                                             & form for prediction)
             = E{ [ a ( s[n−1] − ŝ[n−1|n−1] ) + u[n] ]² }

    The cross-terms are 0, and E{ [ s[n−1] − ŝ[n−1|n−1] ]² } = M[n−1|n−1]
    is the estimation error at the previous time, so:

    M[n|n−1] = a² M[n−1|n−1] + σ_u²

Why are the cross-terms zero? Two parts:
1. s[n−1] depends on {u[0], …, u[n−1], s[−1]}, which are indep. of u[n]
2. ŝ[n−1|n−1] depends on {s[0]+w[0], …, s[n−1]+w[n−1]}, which are indep. of u[n]

Look at a Recursion for the MSE Term M[n|n]

By definition:

    M[n|n] = E{ [ s[n] − ŝ[n|n] ]² } = E{ [ A − B ]² }

    with Term A = s[n] − ŝ[n|n−1]  and  Term B = k[n] ( x[n] − ŝ[n|n−1] )

Now we'll get three terms: E{A²}, −2E{AB}, and E{B²}:

    E{A²} = M[n|n−1]                                    by definition

    −2 E{AB} = −2 k[n] E{ [ s[n] − ŝ[n|n−1] ] [ x[n] − ŝ[n|n−1] ] }
             = −2 k[n] M[n|n−1]                from (!!): this is the num. of k[n]

    E{B²} = k²[n] E{ [ x[n] − ŝ[n|n−1] ]² }    from (!): this is the den. of k[n]
          = k[n] × ( k[n] × [Den. of k[n]] )
          = k[n] × [Num. of k[n]] = k[n] M[n|n−1]

    Recall:  k[n] = M[n|n−1] / ( σ_n² + M[n|n−1] )

So this gives:

    M[n|n] = M[n|n−1] − 2 k[n] M[n|n−1] + k[n] M[n|n−1]

    M[n|n] = (1 − k[n]) M[n|n−1]

Putting all of these results together gives some very simple equations to
iterate, called the Kalman Filter. Note that k[n], M[n|n−1], and M[n|n]
never touch the observed data, so they can be computed offline, as in the
sketch below.
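Since the gain/MSE recursion runs by itself, we can iterate it alone. A minimal sketch (parameter values are illustrative) showing that it settles to steady-state values:

```python
a, var_u, var_n = 0.95, 0.1, 1.0    # illustrative model parameters
M = 1.0                             # M[-1|-1] = sigma_s^2 (illustrative)

for n in range(15):
    M_pred = a**2 * M + var_u            # prediction MSE  M[n|n-1]
    k = M_pred / (var_n + M_pred)        # Kalman gain     k[n]
    M = (1.0 - k) * M_pred               # estimation MSE  M[n|n]
    print(f"n={n:2d}  M[n|n-1]={M_pred:.4f}  k[n]={k:.4f}  M[n|n]={M:.4f}")
# k[n] and M[n|n] converge to constants after a few steps:
# the steady-state Kalman filter.
```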
We just derived the form for Scalar State & Scalar Observation.
On the next three charts we give the Kalman Filter equations for:
1. Scalar State & Scalar Observation
2. Vector State & Scalar Observation
3. Vector State & Vector Observation

Kalman Filter: Scalar State & Scalar Observation

State Model:        s[n] = a s[n−1] + u[n]     u[n] WGN; WSS; ~ N(0, σ_u²)
Observation Model:  x[n] = s[n] + w[n]         w[n] WGN; ~ N(0, σ_n²)  (σ_n² can vary with n)

Must Know: μ_s, σ_s², a, σ_u², σ_n²

Initialization:     ŝ[−1|−1] = E{s[−1]} = μ_s
                    M[−1|−1] = E{ ( s[−1] − ŝ[−1|−1] )² } = σ_s²

Prediction:         ŝ[n|n−1] = a ŝ[n−1|n−1]

Pred. MSE:          M[n|n−1] = a² M[n−1|n−1] + σ_u²

Kalman Gain:        K[n] = M[n|n−1] / ( σ_n² + M[n|n−1] )

Update:             ŝ[n|n] = ŝ[n|n−1] + K[n] ( x[n] − ŝ[n|n−1] )

Est. MSE:           M[n|n] = ( 1 − K[n] ) M[n|n−1]
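These equations map directly to a few lines of Python. A minimal sketch of the scalar filter run on data simulated from the same model (the function name and parameter values are our own, for illustration):

```python
import numpy as np

def scalar_kalman(x, a, var_u, var_n, mu_s, var_s):
    """Scalar-state / scalar-observation Kalman filter.

    Returns the estimates s_hat[n|n] and the MSEs M[n|n]."""
    s_hat, M = mu_s, var_s                      # initialization at n = -1
    s_est, M_est = np.empty(len(x)), np.empty(len(x))
    for n, xn in enumerate(x):
        s_pred = a * s_hat                      # prediction
        M_pred = a**2 * M + var_u               # prediction MSE
        K = M_pred / (var_n + M_pred)           # Kalman gain
        s_hat = s_pred + K * (xn - s_pred)      # update via the innovation
        M = (1.0 - K) * M_pred                  # estimation MSE
        s_est[n], M_est[n] = s_hat, M
    return s_est, M_est

# Run it on data simulated from the same model (illustrative parameters):
rng = np.random.default_rng(1)
a, var_u, var_n, N = 0.95, 0.1, 1.0, 500
s = np.empty(N); x = np.empty(N)
s_prev = rng.normal(0.0, 1.0)
for n in range(N):
    s[n] = a * s_prev + rng.normal(0.0, np.sqrt(var_u)); s_prev = s[n]
    x[n] = s[n] + rng.normal(0.0, np.sqrt(var_n))
s_est, M_est = scalar_kalman(x, a, var_u, var_n, mu_s=0.0, var_s=1.0)
print(np.mean((s - s_est)**2), M_est[-1])   # empirical MSE should be close to M[n|n]
```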

Kalman Filter: Vector State & Scalar Observation

State Model:        s[n] = A s[n−1] + B u[n]     s: p×1;  A: p×p;  B: p×r;  u ~ N(0, Q), r×1
Observation Model:  x[n] = hᵀ[n] s[n] + w[n]     h[n]: p×1;  w[n] WGN; ~ N(0, σ_n²)

Must Know: μ_s, C_s, A, B, h, Q, σ_n²

Initialization:     ŝ[−1|−1] = E{s[−1]} = μ_s
                    M[−1|−1] = E{ ( s[−1] − ŝ[−1|−1] ) ( s[−1] − ŝ[−1|−1] )ᵀ } = C_s

Prediction:         ŝ[n|n−1] = A ŝ[n−1|n−1]

Pred. MSE (p×p):    M[n|n−1] = A M[n−1|n−1] Aᵀ + B Q Bᵀ

Kalman Gain (p×1):  K[n] = M[n|n−1] h[n] / ( σ_n² + hᵀ[n] M[n|n−1] h[n] )
                    (the denominator is 1×1)

Update:             ŝ[n|n] = ŝ[n|n−1] + K[n] ( x[n] − hᵀ[n] ŝ[n|n−1] )
                    where hᵀ[n] ŝ[n|n−1] = x̂[n|n−1], so the term in
                    parentheses is x̃[n], the innovation

Est. MSE (p×p):     M[n|n] = ( I − K[n] hᵀ[n] ) M[n|n−1]
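Note that with a scalar observation the gain needs only a scalar division, no matrix inversion. A minimal sketch of one prediction/update iteration under this model (the function name and 1-D array convention are our own):

```python
import numpy as np

def kf_step_scalar_obs(s_hat, M, xn, hn, A, B, Q, var_n):
    """One iteration: vector state (1-D array, length p), scalar observation xn.

    s_hat = s[n-1|n-1], M = M[n-1|n-1]; returns s[n|n], M[n|n]."""
    s_pred = A @ s_hat                                   # prediction (p,)
    M_pred = A @ M @ A.T + B @ Q @ B.T                   # prediction MSE (p, p)
    K = (M_pred @ hn) / (var_n + hn @ M_pred @ hn)       # gain (p,): scalar division only
    s_hat = s_pred + K * (xn - hn @ s_pred)              # update via the innovation
    M = (np.eye(len(s_hat)) - np.outer(K, hn)) @ M_pred  # estimation MSE (p, p)
    return s_hat, M
```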

Kalman Filter: Vector State & Vector Observation

State Model:        s[n] = A s[n−1] + B u[n]     s: p×1;  A: p×p;  B: p×r;  u ~ N(0, Q), r×1
Observation Model:  x[n] = H[n] s[n] + w[n]      x: M×1;  H[n]: M×p;  w[n] ~ N(0, C[n]), M×1

Must Know: μ_s, C_s, A, B, H[n], Q, C[n]

Initialization:     ŝ[−1|−1] = E{s[−1]} = μ_s
                    M[−1|−1] = E{ ( s[−1] − ŝ[−1|−1] ) ( s[−1] − ŝ[−1|−1] )ᵀ } = C_s

Prediction:         ŝ[n|n−1] = A ŝ[n−1|n−1]

Pred. MSE (p×p):    M[n|n−1] = A M[n−1|n−1] Aᵀ + B Q Bᵀ

Kalman Gain (p×M):  K[n] = M[n|n−1] Hᵀ[n] ( C[n] + H[n] M[n|n−1] Hᵀ[n] )⁻¹
                    (the inverted term is M×M)

Update:             ŝ[n|n] = ŝ[n|n−1] + K[n] ( x[n] − H[n] ŝ[n|n−1] )
                    where H[n] ŝ[n|n−1] = x̂[n|n−1], so the term in
                    parentheses is x̃[n], the innovation

Est. MSE (p×p):     M[n|n] = ( I − K[n] H[n] ) M[n|n−1]
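A minimal numpy sketch of this general filter (names are our own; H and C are held fixed for brevity, though the slides allow them to vary with n, and np.linalg.solve is used rather than forming the M×M inverse explicitly):

```python
import numpy as np

def kalman_filter(xs, A, B, Q, H, C, mu_s, C_s):
    """Vector-state / vector-observation Kalman filter.

    xs: iterable of observations (each a length-M 1-D array).
    H and C are fixed here; the slides allow H[n], C[n]."""
    p = A.shape[0]
    s_hat, M = mu_s.astype(float), C_s.astype(float)   # s[-1|-1], M[-1|-1]
    estimates = []
    for x in xs:
        s_pred = A @ s_hat                             # prediction
        M_pred = A @ M @ A.T + B @ Q @ B.T             # prediction MSE (p, p)
        S = C + H @ M_pred @ H.T                       # innovation covariance (M, M)
        K = np.linalg.solve(S.T, (M_pred @ H.T).T).T   # gain: K = M_pred H^T S^{-1}
        s_hat = s_pred + K @ (x - H @ s_pred)          # update via the innovation
        M = (np.eye(p) - K @ H) @ M_pred               # estimation MSE (p, p)
        estimates.append(s_hat)
    return estimates
```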

Kalman Filter Block Diagram

[Block diagram: the observation x[n] enters a summing junction, where the
predicted observation x̂[n|n−1] is subtracted to form the innovation x̃[n].
The innovation is scaled by the gain K[n] (its output plays the role of the
estimated driving noise B û[n]) and added to the predicted state ŝ[n|n−1]
to give the estimated state ŝ[n|n]. A feedback branch A z⁻¹, the embedded
dynamical model, maps ŝ[n|n] to the predicted state ŝ[n|n−1]; H[n], the
embedded observation model, maps ŝ[n|n−1] to the predicted observation
x̂[n|n−1].]

Looks a lot like Sequential LS/MMSE except it has the Embedded Dynamical
Model!!!

Overview of MMSE Estimation

General MMSE (squared-error cost function):

    θ̂ = E{ θ | x }

Assume jointly Gaussian (optimal), or force a linear estimator for any PDF
with known 2nd moments (LMMSE):

    θ̂ = E{θ} + C_θx C_xx⁻¹ ( x − E{x} )

Bayesian Linear Model (optimal if Gaussian; LMMSE Linear Model otherwise):

    θ̂ = μ_θ + C_θ Hᵀ ( H C_θ Hᵀ + C_w )⁻¹ ( x − H μ_θ )

Sequential filter, no dynamics (optimal if Gaussian; linear seq. filter
otherwise):

    θ̂_n = θ̂_{n−1} + k_n ( x[n] − hᵀ_n θ̂_{n−1} )

Kalman filter, with dynamics (optimal if Gaussian; linear Kalman filter
otherwise):

    ŝ[n|n] = ŝ[n|n−1] + K[n] ( x[n] − H[n] A ŝ[n−1|n−1] )
