Univariate:
$p(x) \sim N(\mu, \sigma^2)$:
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\; e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

Multivariate:
$p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\; e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$
Properties of Gaussians

$$X \sim N(\mu, \sigma^2),\quad Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\; a^2\sigma^2)$$


$$X_1 \sim N(\mu_1, \sigma_1^2),\; X_2 \sim N(\mu_2, \sigma_2^2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2,\;\; \frac{1}{\sigma_1^{-2}+\sigma_2^{-2}}\right)$$
We stay in the Gaussian world as long as we
start with Gaussians and perform only linear
transformations.
Multivariate Gaussians

$$X \sim N(\boldsymbol{\mu}, \Sigma),\quad Y = AX + B \;\Rightarrow\; Y \sim N(A\boldsymbol{\mu} + B,\; A\Sigma A^T)$$


$$X_1 \sim N(\mu_1, \Sigma_1),\; X_2 \sim N(\mu_2, \Sigma_2) \;\Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\Sigma_2(\Sigma_1+\Sigma_2)^{-1}\mu_1 + \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\mu_2,\;\; \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\Sigma_2\right)$$
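These closure properties are easy to check numerically. A minimal sketch in Python for the 1D case (function names and the example values are illustrative, not from the slides):

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Mean and variance of the (renormalized) product of two 1D Gaussians."""
    mean = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

def linear_transform(mu, var, a, b):
    """If X ~ N(mu, var) and Y = aX + b, then Y ~ N(a*mu + b, a^2 * var)."""
    return a * mu + b, a * a * var

# Fusing two estimates of the same quantity pulls the mean toward the
# more confident one and always reduces the variance.
m, v = gaussian_product(0.0, 4.0, 10.0, 1.0)
print(m, v)  # mean closer to 10 (the lower-variance estimate), variance 0.8
```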
Discrete Kalman Filter

Estimates the state $x$ of a discrete-time controlled process that is governed by the linear stochastic difference equation
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
with a measurement
$$z_t = C_t x_t + \delta_t$$
Components of a Kalman Filter

$A_t$: Matrix $(n \times n)$ that describes how the state evolves from $t-1$ to $t$ without controls or noise.
$B_t$: Matrix $(n \times l)$ that describes how the control $u_t$ changes the state from $t-1$ to $t$.
$C_t$: Matrix $(k \times n)$ that describes how to map the state $x_t$ to an observation $z_t$.
$\varepsilon_t, \delta_t$: Random variables representing the process and measurement noise, assumed to be independent and normally distributed with covariance $R_t$ and $Q_t$, respectively.
Kalman Filter Updates in 1D

Correction:
$$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t(z_t - \bar{\mu}_t),\qquad \sigma_t^2 = (1 - K_t)\,\bar{\sigma}_t^2,\qquad \text{with } K_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$$
$$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t),\qquad \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t,\qquad \text{with } K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$
Kalman Filter Updates in 1D

Prediction:
$$\overline{bel}(x_t):\quad \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t,\qquad \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2$$
$$\overline{bel}(x_t):\quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t,\qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$$
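The 1D update equations translate directly into code. A minimal sketch (the parameter values in the example are illustrative):

```python
def kf_predict_1d(mu, var, a, b, u, var_act):
    """Prediction: mu_bar = a*mu + b*u, var_bar = a^2 * var + var_act."""
    return a * mu + b * u, a * a * var + var_act

def kf_correct_1d(mu_bar, var_bar, z, var_obs):
    """Correction with Kalman gain K = var_bar / (var_bar + var_obs)."""
    K = var_bar / (var_bar + var_obs)
    mu = mu_bar + K * (z - mu_bar)
    var = (1.0 - K) * var_bar
    return mu, var

# One predict/correct cycle for a 1D state with a = 1, b = 1.
mu, var = 0.0, 1.0
mu, var = kf_predict_1d(mu, var, a=1.0, b=1.0, u=2.0, var_act=0.5)  # (2.0, 1.5)
mu, var = kf_correct_1d(mu, var, z=2.6, var_obs=1.5)                # K = 0.5
print(mu, var)  # mu ~ 2.3, var = 0.75
```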
Kalman Filter Updates
Linear Gaussian Systems: Initialization

Initial belief is normally distributed:
$$bel(x_0) = N(x_0;\; \mu_0, \Sigma_0)$$
Linear Gaussian Systems: Dynamics

Dynamics are a linear function of state and control plus additive noise:
$$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t$$
$$p(x_t \mid u_t, x_{t-1}) = N(x_t;\; A_t x_{t-1} + B_t u_t,\; R_t)$$
$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\; bel(x_{t-1})\; dx_{t-1}$$
where the first factor is $\sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and the second is $\sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$.
Linear Gaussian Systems: Dynamics

$$\overline{bel}(x_t) = \eta \int \exp\!\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$$

Carrying out the integration yields
$$\overline{bel}(x_t):\quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t,\qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$$
Linear Gaussian Systems: Observations

Observations are a linear function of the state plus additive noise:
$$z_t = C_t x_t + \delta_t$$
$$p(z_t \mid x_t) = N(z_t;\; C_t x_t,\; Q_t)$$
$$bel(x_t) = \eta\; p(z_t \mid x_t)\; \overline{bel}(x_t)$$
where the first factor is $\sim N(z_t;\, C_t x_t, Q_t)$ and the second is $\sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$.
Linear Gaussian Systems: Observations

$$bel(x_t) = \eta \exp\!\left\{-\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_t - \bar{\mu}_t)^T \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t)\right\}$$

which yields
$$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t),\qquad \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t,\qquad \text{with } K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$
Kalman Filter Algorithm

1. Algorithm Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3.   $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4.   $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Correction:
6.   $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
7.   $\mu_t = \bar{\mu}_t + K_t(z_t - C_t \bar{\mu}_t)$
8.   $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$
9. Return $\mu_t, \Sigma_t$
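The algorithm maps line-for-line onto a small NumPy implementation. A sketch; the constant-velocity model used to exercise it is an assumed example, not from the slides:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One predict/correct iteration of the Kalman filter algorithm."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Assumed example: constant-velocity model, position-only measurements.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])  # state: (position, velocity)
B = np.zeros((2, 1))                   # no control input in this example
C = np.array([[1.0, 0.0]])             # we observe position only
R = 0.01 * np.eye(2)                   # process noise covariance
Q = np.array([[0.25]])                 # measurement noise covariance
mu, Sigma = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9]:
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]), A, B, C, R, Q)
```

After a few measurements the estimate tracks both the position and the (unobserved) velocity, and the position variance drops well below the prior.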
The Prediction-Correction Cycle

Prediction:
$$\overline{bel}(x_t):\quad \bar{\mu}_t = a_t \mu_{t-1} + b_t u_t,\qquad \bar{\sigma}_t^2 = a_t^2 \sigma_{t-1}^2 + \sigma_{act,t}^2$$
$$\overline{bel}(x_t):\quad \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t,\qquad \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$$

Correction:
$$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t(z_t - \bar{\mu}_t),\qquad \sigma_t^2 = (1 - K_t)\,\bar{\sigma}_t^2,\qquad K_t = \frac{\bar{\sigma}_t^2}{\bar{\sigma}_t^2 + \sigma_{obs,t}^2}$$
$$bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t(z_t - C_t\bar{\mu}_t),\qquad \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t,\qquad K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$$
Kalman Filter Summary

Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$
Optimal for linear Gaussian systems!
Most robotics systems are nonlinear!
Nonlinear Dynamic Systems

Most realistic robotic problems involve nonlinear functions:
$$x_t = g(u_t, x_{t-1})$$
$$z_t = h(x_t)$$
Linearity Assumption Revisited
Nonlinear Function
EKF Linearization (1)
EKF Linearization (2)
EKF Linearization (3)
EKF Linearization: First Order Taylor Series Expansion

Prediction:
$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$$

Correction:
$$h(x_t) \approx h(\bar{\mu}_t) + \frac{\partial h(\bar{\mu}_t)}{\partial x_t}\,(x_t - \bar{\mu}_t) = h(\bar{\mu}_t) + H_t\,(x_t - \bar{\mu}_t)$$
EKF Algorithm

1. Extended_Kalman_filter($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t$):
2. Prediction:
3.   $\bar{\mu}_t = g(u_t, \mu_{t-1})$   (KF: $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$)
4.   $\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + R_t$   (KF: $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$)
5. Correction:
6.   $K_t = \bar{\Sigma}_t H_t^T (H_t \bar{\Sigma}_t H_t^T + Q_t)^{-1}$   (KF: $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$)
7.   $\mu_t = \bar{\mu}_t + K_t(z_t - h(\bar{\mu}_t))$   (KF: $\mu_t = \bar{\mu}_t + K_t(z_t - C_t \bar{\mu}_t)$)
8.   $\Sigma_t = (I - K_t H_t)\,\bar{\Sigma}_t$   (KF: $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$)
9. Return $\mu_t, \Sigma_t$

with $G_t = \dfrac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}$ and $H_t = \dfrac{\partial h(\bar{\mu}_t)}{\partial x_t}$
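The EKF algorithm can be sketched in the same style. Here the Jacobians $G_t$ and $H_t$ are approximated by finite differences rather than derived analytically, which is an implementation shortcut on our part, not what the slides prescribe; the toy models in the example are also assumptions:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x (stands in for analytic G_t / H_t)."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
    return J

def ekf_step(mu, Sigma, u, z, g, h, R, Q):
    """One EKF iteration: linearize g at mu and h at the predicted mean."""
    mu_bar = g(u, mu)
    G = numerical_jacobian(lambda x: g(u, x), mu)        # G_t
    Sigma_bar = G @ Sigma @ G.T + R
    H = numerical_jacobian(h, mu_bar)                    # H_t
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(mu.size) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Assumed toy example: linear motion, nonlinear (squared) measurement.
g = lambda u, x: x + u
h = lambda x: np.array([x[0] ** 2])
mu_t, Sigma_t = ekf_step(np.array([1.0, 0.0]), np.eye(2), np.array([0.1, 0.0]),
                         np.array([1.25]), g, h, 0.01 * np.eye(2), np.array([[0.1]]))
```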
Localization

Given:
- Map of the environment.
- Sequence of sensor measurements.

Wanted:
- Estimate of the robot's position.

Problem classes:
- Position tracking
- Global localization
- Kidnapped robot problem (recovery)

"Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]
Landmark-based Localization
1. EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Prediction:

Jacobian of $g$ w.r.t. location:
$$G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}} = \begin{pmatrix} \frac{\partial x'}{\partial \mu_{t-1,x}} & \frac{\partial x'}{\partial \mu_{t-1,y}} & \frac{\partial x'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial y'}{\partial \mu_{t-1,x}} & \frac{\partial y'}{\partial \mu_{t-1,y}} & \frac{\partial y'}{\partial \mu_{t-1,\theta}} \\ \frac{\partial \theta'}{\partial \mu_{t-1,x}} & \frac{\partial \theta'}{\partial \mu_{t-1,y}} & \frac{\partial \theta'}{\partial \mu_{t-1,\theta}} \end{pmatrix}$$

Jacobian of $g$ w.r.t. control:
$$V_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial u_t} = \begin{pmatrix} \frac{\partial x'}{\partial v_t} & \frac{\partial x'}{\partial \omega_t} \\ \frac{\partial y'}{\partial v_t} & \frac{\partial y'}{\partial \omega_t} \\ \frac{\partial \theta'}{\partial v_t} & \frac{\partial \theta'}{\partial \omega_t} \end{pmatrix}$$

Motion noise:
$$M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}$$

Predicted mean:
$$\bar{\mu}_t = g(u_t, \mu_{t-1})$$

Predicted covariance:
$$\bar{\Sigma}_t = G_t \Sigma_{t-1} G_t^T + V_t M_t V_t^T$$
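The slide leaves the motion model $g$ itself unspecified. Assuming the standard velocity motion model (an assumption on our part), $g$ and its pose Jacobian $G_t$ can be written as:

```python
import math

def motion_prediction(x, y, theta, v, w, dt):
    """Velocity motion model (assumed) and its Jacobian G_t w.r.t. the pose.
    Requires w != 0; the straight-line case needs a separate limit."""
    r = v / w  # turning radius
    x_new = x - r * math.sin(theta) + r * math.sin(theta + w * dt)
    y_new = y + r * math.cos(theta) - r * math.cos(theta + w * dt)
    theta_new = theta + w * dt
    # G_t: partial derivatives of (x', y', theta') w.r.t. (x, y, theta)
    G = [[1.0, 0.0, -r * math.cos(theta) + r * math.cos(theta + w * dt)],
         [0.0, 1.0, -r * math.sin(theta) + r * math.sin(theta + w * dt)],
         [0.0, 0.0, 1.0]]
    return (x_new, y_new, theta_new), G

# Quarter turn at unit speed: the robot ends up on the circle of radius 2/pi.
pose, G = motion_prediction(0.0, 0.0, 0.0, v=1.0, w=math.pi / 2, dt=1.0)
```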
1. EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Correction:

Predicted measurement mean:
$$\hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \mathrm{atan2}(m_y - \bar{\mu}_{t,y},\; m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix}$$

Jacobian of $h$ w.r.t. location:
$$H_t = \frac{\partial h(\bar{\mu}_t, m)}{\partial x_t} = \begin{pmatrix} \frac{\partial r_t}{\partial \bar{\mu}_{t,x}} & \frac{\partial r_t}{\partial \bar{\mu}_{t,y}} & \frac{\partial r_t}{\partial \bar{\mu}_{t,\theta}} \\ \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,x}} & \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,y}} & \frac{\partial \varphi_t}{\partial \bar{\mu}_{t,\theta}} \end{pmatrix}$$

$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}$$

Predicted measurement covariance:
$$S_t = H_t \bar{\Sigma}_t H_t^T + Q_t$$

Kalman gain:
$$K_t = \bar{\Sigma}_t H_t^T S_t^{-1}$$

Updated mean:
$$\mu_t = \bar{\mu}_t + K_t(z_t - \hat{z}_t)$$

Updated covariance:
$$\Sigma_t = (I - K_t H_t)\,\bar{\Sigma}_t$$
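For a range-bearing sensor, the predicted measurement $\hat{z}_t$ and the Jacobian $H_t$ have closed forms. A sketch (the pose and landmark values in the example are illustrative):

```python
import math

def range_bearing(mu, landmark):
    """Predicted measurement (range, bearing) of a landmark from pose mu = (x, y, theta),
    together with the Jacobian H_t w.r.t. the pose."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx * dx + dy * dy
    r = math.sqrt(q)
    phi = math.atan2(dy, dx) - mu[2]
    # One row per measurement component: d(range)/d(pose), d(bearing)/d(pose)
    H = [[-dx / r, -dy / r,  0.0],
         [ dy / q, -dx / q, -1.0]]
    return (r, phi), H

(z_r, z_phi), H = range_bearing((0.0, 0.0, 0.0), (3.0, 4.0))
print(z_r)  # 5.0
```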
EKF Prediction Step
EKF Observation Prediction Step
EKF Correction Step
Estimation Sequence (1)
Estimation Sequence (2)
Comparison to Ground Truth
EKF Summary

Highly efficient: polynomial in measurement dimensionality $k$ and state dimensionality $n$: $O(k^{2.376} + n^2)$
Not optimal!
Can diverge if nonlinearities are large!
Works surprisingly well even when all assumptions are violated!
Linearization via Unscented Transform (EKF vs. UKF)
UKF Sigma-Point Estimate (2) (EKF vs. UKF)
UKF Sigma-Point Estimate (3) (EKF vs. UKF)
Unscented Transform

Sigma points:
$$\chi^0 = \mu$$
$$\chi^i = \mu + \left(\sqrt{(n+\lambda)\Sigma}\right)_i \quad\text{for } i = 1,\dots,n$$
$$\chi^i = \mu - \left(\sqrt{(n+\lambda)\Sigma}\right)_{i-n} \quad\text{for } i = n+1,\dots,2n$$

Weights:
$$w_m^0 = \frac{\lambda}{n+\lambda},\qquad w_c^0 = w_m^0 + (1 - \alpha^2 + \beta)$$
$$w_m^i = w_c^i = \frac{1}{2(n+\lambda)} \quad\text{for } i = 1,\dots,2n$$

Pass sigma points through the nonlinear function:
$$\mathcal{Y}^i = g(\chi^i)$$

Recover mean and covariance:
$$\mu' = \sum_{i=0}^{2n} w_m^i\, \mathcal{Y}^i$$
$$\Sigma' = \sum_{i=0}^{2n} w_c^i\, (\mathcal{Y}^i - \mu')(\mathcal{Y}^i - \mu')^T$$
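The transform can be sketched compactly. This version uses a Cholesky factor as the matrix square root and the defaults $\alpha{=}1$, $\beta{=}2$, $\kappa{=}0$ (those parameter choices are ours, not the slide's); for a linear $g$ the transform is exact:

```python
import numpy as np

def unscented_transform(mu, Sigma, g, alpha=1.0, beta=2.0, kappa=0.0):
    """Sigma points, weights, and the transformed mean/covariance."""
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * Sigma)   # matrix square root
    # 2n+1 sigma points: the mean, then mu +/- each column of the root
    X = np.vstack([mu, mu + root.T, mu - root.T])
    w_m = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = w_m[0] + (1.0 - alpha ** 2 + beta)
    Y = np.array([g(x) for x in X])                # pass through nonlinear g
    mu_out = w_m @ Y
    diff = Y - mu_out
    Sigma_out = (w_c[:, None] * diff).T @ diff
    return mu_out, Sigma_out
```

Passing the identity function through the transform returns the original mean and covariance, which is a handy sanity check.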
UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Prediction:

Motion noise:
$$M_t = \begin{pmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{pmatrix}$$

Measurement noise:
$$Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}$$

Augmented state mean:
$$\mu_{t-1}^a = \left(\mu_{t-1}^T \;\; (0\;\,0) \;\; (0\;\,0)\right)^T$$

Augmented covariance:
$$\Sigma_{t-1}^a = \begin{pmatrix} \Sigma_{t-1} & 0 & 0 \\ 0 & M_t & 0 \\ 0 & 0 & Q_t \end{pmatrix}$$

Sigma points:
$$\chi_{t-1}^a = \left(\mu_{t-1}^a \quad \mu_{t-1}^a + \gamma\sqrt{\Sigma_{t-1}^a} \quad \mu_{t-1}^a - \gamma\sqrt{\Sigma_{t-1}^a}\right)$$

Prediction of sigma points:
$$\bar{\chi}_t^x = g(u_t + \chi_t^u,\; \chi_{t-1}^x)$$

Predicted mean:
$$\bar{\mu}_t = \sum_{i=0}^{2L} w_m^i\, \bar{\chi}_{i,t}^x$$

Predicted covariance:
$$\bar{\Sigma}_t = \sum_{i=0}^{2L} w_c^i\, (\bar{\chi}_{i,t}^x - \bar{\mu}_t)(\bar{\chi}_{i,t}^x - \bar{\mu}_t)^T$$
UKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$):

Correction:

Measurement sigma points:
$$\bar{Z}_t = h(\bar{\chi}_t^x) + \chi_t^z$$

Predicted measurement mean:
$$\hat{z}_t = \sum_{i=0}^{2L} w_m^i\, \bar{Z}_{i,t}$$

Predicted measurement covariance:
$$S_t = \sum_{i=0}^{2L} w_c^i\, (\bar{Z}_{i,t} - \hat{z}_t)(\bar{Z}_{i,t} - \hat{z}_t)^T$$

Cross-covariance:
$$\Sigma_t^{x,z} = \sum_{i=0}^{2L} w_c^i\, (\bar{\chi}_{i,t}^x - \bar{\mu}_t)(\bar{Z}_{i,t} - \hat{z}_t)^T$$

Kalman gain:
$$K_t = \Sigma_t^{x,z}\, S_t^{-1}$$

Updated mean:
$$\mu_t = \bar{\mu}_t + K_t(z_t - \hat{z}_t)$$

Updated covariance:
$$\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T$$
1. EKF_localization($\mu_{t-1}, \Sigma_{t-1}, u_t, z_t, m$) — the EKF correction step, repeated for comparison with the UKF:

$$\hat{z}_t = \begin{pmatrix} \sqrt{(m_x - \bar{\mu}_{t,x})^2 + (m_y - \bar{\mu}_{t,y})^2} \\ \mathrm{atan2}(m_y - \bar{\mu}_{t,y},\; m_x - \bar{\mu}_{t,x}) - \bar{\mu}_{t,\theta} \end{pmatrix},\qquad H_t = \frac{\partial h(\bar{\mu}_t, m)}{\partial x_t},\qquad Q_t = \begin{pmatrix} \sigma_r^2 & 0 \\ 0 & \sigma_\varphi^2 \end{pmatrix}$$
$$S_t = H_t \bar{\Sigma}_t H_t^T + Q_t,\qquad K_t = \bar{\Sigma}_t H_t^T S_t^{-1},\qquad \mu_t = \bar{\mu}_t + K_t(z_t - \hat{z}_t),\qquad \Sigma_t = (I - K_t H_t)\,\bar{\Sigma}_t$$
UKF Prediction Step
UKF Observation Prediction Step
UKF Correction Step
EKF Correction Step
Estimation Sequence (EKF, PF, UKF)
Estimation Sequence (EKF, UKF)
Prediction Quality (EKF, UKF)
UKF Summary

Highly efficient: same complexity as the EKF, with a constant factor slower in typical practical applications
Better linearization than the EKF: accurate in the first two terms of the Taylor expansion (EKF only first term)
Derivative-free: no Jacobians needed
Still not optimal!
Kalman Filter-based System

[Arras et al. 98]:
- Laser range-finder and vision
- High precision (<1 cm accuracy)

[Courtesy of Kai Arras]
Multi-Hypothesis Tracking
Localization With MHT

- Belief is represented by multiple hypotheses
- Each hypothesis is tracked by a Kalman filter
- Additional problems:
  - Data association: Which observation corresponds to which hypothesis?
  - Hypothesis management: When to add / delete hypotheses?
- Huge body of literature on target tracking, motion correspondence etc.
MHT: Implemented System (1)

Hypotheses are extracted from LRF scans:
$$H_i = \{\hat{x}_i,\, \Sigma_i,\, P(H_i)\}$$
Each candidate feature: $C_j = \{z_j, R_j\}$

Each hypothesis has a probability of being the correct one, computed using Bayes' rule:
$$P(H_i \mid s) = \frac{P(s \mid H_i)\, P(H_i)}{P(s)}$$

Hypotheses with low probability are deleted.
New candidates are extracted from LRF scans.

[Jensfelt et al. 00]
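The Bayes-rule update and the pruning of low-probability hypotheses can be sketched as follows (the pruning threshold is a hypothetical tuning parameter, not from the slides):

```python
def update_hypothesis_probs(priors, likelihoods):
    """Bayes rule: P(H_i | s) is proportional to P(s | H_i) * P(H_i)."""
    posteriors = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(posteriors)  # plays the role of P(s)
    return [p / total for p in posteriors]

def prune(hypotheses, probs, threshold=0.05):
    """Delete hypotheses whose posterior fell below a (hypothetical) threshold,
    then renormalize the survivors."""
    kept = [(h, p) for h, p in zip(hypotheses, probs) if p >= threshold]
    hyps, kept_probs = zip(*kept)
    s = sum(kept_probs)
    return list(hyps), [p / s for p in kept_probs]

probs = update_hypothesis_probs([0.5, 0.3, 0.2], [0.9, 0.05, 0.05])
# after one strongly discriminating measurement, the first hypothesis dominates
```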
MHT: Implemented System (2)

Courtesy of P. Jensfelt and S. Kristensen

MHT: Implemented System (3)

Example run: map and trajectory; number of hypotheses vs. time; $P(H_{best})$.