EECE 522 Notes - 30 Ch. 13b
Assumptions
1. u[n] is zero-mean Gaussian white noise, E{u²[n]} = σ_u²
2. w[n] is zero-mean Gaussian white noise, E{w²[n]} = σ_n²   (σ_n² can vary with time)
3. The initial state is s[-1] ~ N(μ_s, σ_s²)
4. u[n], w[n], and s[-1] are all independent of each other
The Innovation Decomposition

Let x̂[n|n-1] denote the MMSE estimate of x[n] given the past data X[n-1] = {x[0], ..., x[n-1]} (a prediction!!), and define the innovation

    x̃[n] = x[n] - x̂[n|n-1]

Then E{x̃[n] X[n-1]} = 0: x̃[n] is the part of x[n] that is uncorrelated with the previous data.

Now note: X[n] is equivalent to {X[n-1], x̃[n]}, because

    x[n] = x̃[n] + Σ_{k=0}^{n-1} a_k x[k],   where the sum is x̂[n|n-1]

Because of this,

    ŝ[n|n] = E{s[n] | X[n-1], x̃[n]} = ŝ[n|n-1] + E{s[n] | x̃[n]}

where ŝ[n|n-1] is the prediction of s[n] based on the past data, and E{s[n] | x̃[n]} is the update based on the innovation part of the new data. Now we need to look more closely at each of these!
The Prediction ŝ[n|n-1]

By definition,

    ŝ[n|n-1] = E{s[n] | X[n-1]} = E{a s[n-1] + u[n] | X[n-1]} = a ŝ[n-1|n-1] + E{u[n]}

where E{u[n] | X[n-1]} = E{u[n]} = 0 by the independence of u[n] and X[n-1] (see the bottom of p. 433 in the textbook). So

    ŝ[n|n-1] = a ŝ[n-1|n-1]

The dynamical model provides the update from estimate to prediction!!
The Update Term E{s[n] | x̃[n]}

Since s[n] and x̃[n] are jointly Gaussian, this conditional mean is linear in the innovation:

    E{s[n] | x̃[n]} = k[n] x̃[n]

So E{s[n] | x̃[n]} = k[n](x[n] - x̂[n|n-1]). By Prop. #2, x̂[n|n-1] = ŝ[n|n-1] + ŵ[n|n-1] (the prediction shows up again!!!), and ŵ[n|n-1] = 0 (proved below). Combining with the decomposition above,

    ŝ[n|n] = ŝ[n|n-1] + k[n](x[n] - ŝ[n|n-1])

This is the Kalman Filter!
Note that x[n] - ŝ[n|n-1] = x[n] - x̂[n|n-1] = x̃[n], the innovation.

Proof that ŵ[n|n-1] = 0:
w[n] is the measurement noise and by assumption is indep. of the dynamical driving noise u[n] and of s[-1]. In other words: w[n] is indep. of everything dynamical, so E{w[n]s[n]} = 0. Also, ŝ[n|n-1] is based on past data, which include {w[0], ..., w[n-1]}, and since the measurement noise has independent samples we get ŝ[n|n-1] ⊥ w[n]. Hence ŵ[n|n-1] = E{w[n] | X[n-1]} = 0.
The Kalman Gain k[n]

    k[n] = E{s[n] x̃[n]} / E{x̃²[n]} = E{s[n](x[n] - ŝ[n|n-1])} / E{[x[n] - ŝ[n|n-1]]²}

Use x[n] = s[n] + w[n] in the numerator: E{s[n]w[n]} = 0 and, by orthogonality, E{ŝ[n|n-1](s[n] - ŝ[n|n-1])} = 0, so the numerator equals E{[s[n] - ŝ[n|n-1]]²} = M[n|n-1], the prediction MSE. Expanding the denominator as E{[(s[n] - ŝ[n|n-1]) + w[n]]²} gives σ_n² + M[n|n-1]. So

    k[n] = M[n|n-1] / (σ_n² + M[n|n-1])

This balances the quality of the measured data against the predicted state.
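This balancing act can be checked numerically. A minimal sketch (the function name `kalman_gain` and the test values are ours):

```python
def kalman_gain(M_pred, var_n):
    """Scalar Kalman gain: k[n] = M[n|n-1] / (sigma_n^2 + M[n|n-1])."""
    return M_pred / (var_n + M_pred)

# Accurate measurements (small sigma_n^2): k[n] -> 1, the update trusts the data.
print(kalman_gain(M_pred=1.0, var_n=1e-6))
# Poor measurements (large sigma_n^2): k[n] -> 0, the update trusts the prediction.
print(kalman_gain(M_pred=1.0, var_n=1e6))
```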
The Prediction MSE M[n|n-1]

    M[n|n-1] = E{[s[n] - ŝ[n|n-1]]²}
             = E{[a s[n-1] + u[n] - a ŝ[n-1|n-1]]²}      (use the dynamical model & exploit the form of the prediction)
             = E{[a(s[n-1] - ŝ[n-1|n-1]) + u[n]]²}

The cross-terms are zero, so

    M[n|n-1] = a² M[n-1|n-1] + σ_u²

Why are the cross-terms zero? Two parts:
1. s[n-1] depends on {u[0], ..., u[n-1], s[-1]}, which are indep. of u[n]
2. ŝ[n-1|n-1] depends on {s[0]+w[0], ..., s[n-1]+w[n-1]}, which are indep. of u[n]
The Estimation MSE M[n|n]

Write s[n] - ŝ[n|n] = (s[n] - ŝ[n|n-1]) - k[n]x̃[n] = A - k[n]x̃[n] and expand M[n|n] = E{[A - k[n]x̃[n]]²}:

    E{A²} = M[n|n-1], by definition
    Cross term: 2k[n] E{A x̃[n]} = 2k[n] M[n|n-1]   (since x̃[n] = A + w[n] and E{A w[n]} = 0)
    E{k²[n] x̃²[n]} = k[n]·[Num. of k[n]] = k[n] M[n|n-1]

Recall: k[n] = M[n|n-1] / (σ_n² + M[n|n-1]), so k[n] E{x̃²[n]} = M[n|n-1].

So this gives

    M[n|n] = M[n|n-1] - 2k[n]M[n|n-1] + k[n]M[n|n-1]
    M[n|n] = (1 - k[n]) M[n|n-1]
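The MSE and gain recursions involve no data, so they can be iterated on their own; a minimal sketch with made-up parameter values (a, σ_u², σ_n², and the initial MSE are ours):

```python
# Iterate the MSE/gain recursions; M[n|n] settles to a steady-state value.
a, var_u, var_n = 0.9, 1.0, 2.0   # made-up model parameters
M = 1.0                            # M[-1|-1] = sigma_s^2 (assumed)
for n in range(100):
    M_pred = a**2 * M + var_u          # M[n|n-1] = a^2 M[n-1|n-1] + sigma_u^2
    k = M_pred / (var_n + M_pred)      # k[n] = M[n|n-1] / (sigma_n^2 + M[n|n-1])
    M = (1 - k) * M_pred               # M[n|n] = (1 - k[n]) M[n|n-1]
print(M)   # steady-state estimation MSE
```

The printed value is the fixed point of the three-step map, and it is strictly less than σ_n²: filtering always beats using a raw measurement alone.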
Putting all of these results together gives some very simple equations to iterate, called the Kalman Filter. We just derived the form for Scalar State & Scalar Observation. On the next three charts we give the Kalman Filter equations for:
    1. Scalar State & Scalar Observation
    2. Vector State & Scalar Observation
    3. Vector State & Vector Observation
Kalman Filter: Scalar State & Scalar Observation

State Model:        s[n] = a s[n-1] + u[n];   u[n] WGN, WSS, ~ N(0, σ_u²)
Observation Model:  x[n] = s[n] + w[n];   w[n] WGN, ~ N(0, σ_n²)   (σ_n² can vary with n)
Must Know:          μ_s, σ_s², a, σ_u², σ_n²

Initialization:  ŝ[-1|-1] = E{s[-1]} = μ_s,   M[-1|-1] = σ_s²
Prediction:      ŝ[n|n-1] = a ŝ[n-1|n-1]
Pred. MSE:       M[n|n-1] = a² M[n-1|n-1] + σ_u²
Kalman Gain:     K[n] = M[n|n-1] / (σ_n² + M[n|n-1])
Update:          ŝ[n|n] = ŝ[n|n-1] + K[n](x[n] - ŝ[n|n-1])
Est. MSE:        M[n|n] = (1 - K[n]) M[n|n-1]
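A minimal Python sketch iterating these five equations on simulated data (the parameter values and variable names are ours, chosen for illustration):

```python
import random
random.seed(0)

# Made-up model parameters
a, var_u, var_n = 0.95, 0.5, 1.0
mu_s, var_s = 0.0, 1.0
N = 200

# Simulate the state s[n] = a s[n-1] + u[n] and data x[n] = s[n] + w[n]
s_prev = random.gauss(mu_s, var_s ** 0.5)        # s[-1] ~ N(mu_s, sigma_s^2)
s, x = [], []
for n in range(N):
    s_prev = a * s_prev + random.gauss(0.0, var_u ** 0.5)
    s.append(s_prev)
    x.append(s_prev + random.gauss(0.0, var_n ** 0.5))

# Kalman filter
s_hat, M = mu_s, var_s                           # Initialization
est = []
for n in range(N):
    s_pred = a * s_hat                           # Prediction
    M_pred = a ** 2 * M + var_u                  # Pred. MSE
    K = M_pred / (var_n + M_pred)                # Kalman Gain
    s_hat = s_pred + K * (x[n] - s_pred)         # Update
    M = (1 - K) * M_pred                         # Est. MSE
    est.append(s_hat)

# Filtering should beat using the raw measurements as estimates
mse_raw = sum((xi - si) ** 2 for xi, si in zip(x, s)) / N
mse_kf = sum((ei - si) ** 2 for ei, si in zip(est, s)) / N
print(mse_kf, "<", mse_raw)
```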
Kalman Filter: Vector State & Scalar Observation

State Model:        s[n] = A s[n-1] + B u[n];   u[n] WGN, ~ N(0, Q)
Observation Model:  x[n] = hᵀ[n] s[n] + w[n];   w[n] WGN, ~ N(0, σ_n²);   h[n] is p×1
Must Know:          μ_s, C_s, A, B, h, Q, σ_n²

Initialization:  ŝ[-1|-1] = E{s[-1]} = μ_s,   M[-1|-1] = C_s
Prediction:      ŝ[n|n-1] = A ŝ[n-1|n-1]
Pred. MSE:       M[n|n-1] = A M[n-1|n-1] Aᵀ + B Q Bᵀ
Kalman Gain:     K[n] = M[n|n-1] h[n] / (σ_n² + hᵀ[n] M[n|n-1] h[n])
Update:          ŝ[n|n] = ŝ[n|n-1] + K[n](x[n] - hᵀ[n] ŝ[n|n-1])
Est. MSE:        M[n|n] = (I - K[n]hᵀ[n]) M[n|n-1]
Kalman Filter: Vector State & Vector Observation

State Model:        s[n] = A s[n-1] + B u[n];   u[n] WGN, ~ N(0, Q)
Observation Model:  x[n] = H[n] s[n] + w[n];   w[n] WGN, ~ N(0, C[n])
Must Know:          μ_s, C_s, A, B, H, Q, {C[n]}

Initialization:  ŝ[-1|-1] = E{s[-1]} = μ_s,   M[-1|-1] = C_s
Prediction:      ŝ[n|n-1] = A ŝ[n-1|n-1]
Pred. MSE:       M[n|n-1] = A M[n-1|n-1] Aᵀ + B Q Bᵀ
Kalman Gain:     K[n] = M[n|n-1] Hᵀ[n] (C[n] + H[n] M[n|n-1] Hᵀ[n])⁻¹
Update:          ŝ[n|n] = ŝ[n|n-1] + K[n](x[n] - H[n] ŝ[n|n-1])
Est. MSE:        M[n|n] = (I - K[n]H[n]) M[n|n-1]
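A minimal NumPy sketch of the vector/vector recursion, using a made-up 2-state constant-velocity model (all parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up constant-velocity model: state = [position, velocity]
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.eye(2)
Q = 0.01 * np.eye(2)          # driving-noise covariance
H = np.array([[1.0, 0.0]])    # observe position only
C = np.array([[0.5]])         # measurement-noise covariance (constant here)
I = np.eye(2)

s_hat = np.zeros(2)           # s[-1|-1] = mu_s (assumed zero)
M = np.eye(2)                 # M[-1|-1] = C_s (assumed identity)

s_true = np.array([0.0, 0.5])
for n in range(100):
    # Simulate one step of the state and one observation
    s_true = A @ s_true + B @ rng.multivariate_normal(np.zeros(2), Q)
    x = H @ s_true + rng.multivariate_normal(np.zeros(1), C)

    s_pred = A @ s_hat                                        # Prediction
    M_pred = A @ M @ A.T + B @ Q @ B.T                        # Pred. MSE
    K = M_pred @ H.T @ np.linalg.inv(C + H @ M_pred @ H.T)    # Kalman Gain
    s_hat = s_pred + K @ (x - H @ s_pred)                     # Update
    M = (I - K @ H) @ M_pred                                  # Est. MSE

print(M[0, 0])   # position-component estimation MSE after 100 steps
```

Note that the velocity is never observed directly; the filter infers it through the embedded dynamical model A.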
Kalman Filter Block Diagram

[Block diagram: the observation x[n] enters a summing node whose other (subtracted) input is the predicted observation x̂[n|n-1]; the output is the innovation x̃[n]. The innovation is scaled by the Kalman gain K[n] and added to the predicted state ŝ[n|n-1] to form the estimated state ŝ[n|n]. The estimated state passes through the block A z⁻¹ (the embedded dynamical model) to produce the predicted state ŝ[n|n-1], which passes through H[n] (the embedded observation model) to produce the predicted observation.]
The Big Picture

Gen. MMSE (any PDF):
    θ̂ = E{θ | x}
        ↓  force linear (or: any PDF, known 2nd moments)
LMMSE:
    θ̂ = E{θ} + C_θx C_xx⁻¹ (x - E{x})
        ↓  linear model
LMMSE, Linear Model:
    θ̂ = μ_θ + C_θ Hᵀ (H C_θ Hᵀ + C_w)⁻¹ (x - H μ_θ)
        ↓  sequential processing
Linear Seq. Filter (No Dynamics):
    θ̂_n = θ̂_{n-1} + k_n (x[n] - hₙᵀ θ̂_{n-1})
        ↓  add dynamics
Kalman Filter (w/ Dynamics): the Optimal Kalman Filter (Gaussian case) and the Linear Kalman Filter coincide.
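The bottom step of the chart can be checked directly: with a = 1 and σ_u² = 0 (no dynamics) the scalar Kalman filter collapses to the sequential LMMSE filter, and with a vague prior it reproduces the running sample mean. A minimal sketch (the measurement values are ours):

```python
# Scalar Kalman filter with a = 1, var_u = 0: estimating a constant.
x = [2.1, 1.9, 2.3, 2.0, 1.7, 2.2]   # made-up measurements of a constant level
var_n = 1.0
s_hat, M = 0.0, 1e12                  # nearly non-informative prior

for xn in x:
    M_pred = M                        # a = 1, var_u = 0: prediction changes nothing
    k = M_pred / (var_n + M_pred)
    s_hat = s_hat + k * (xn - s_hat)
    M = (1 - k) * M_pred

print(s_hat)                          # ~ sample mean of x
print(sum(x) / len(x))
```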