Probabilistic Robotics
Advanced Mobile Robotics

Dr. Jizhong Xiao
Department of Electrical Engineering
City College of New York
jxiao@ccny.cuny.edu
Robot Navigation

Fundamental problems in providing a mobile robot with autonomous capabilities:
- Where am I going? (mission planning)
- What's the best way there? (path planning)
- Where have I been? How can a robot create an environmental map with imperfect sensors? (mapping)
- Where am I? How can a robot tell where it is on a map? (localization)
- What if you're lost and don't have a map? (robot SLAM)
Representation of the Environment

Environment representation:
- Continuous metric: x, y, θ
- Discrete metric: metric grid
- Discrete topological: topological graph
Localization: Where Am I?

- Odometry, dead reckoning
- Localization based on external sensors, beacons, or landmarks
- Probabilistic map-based localization

[Block diagram: encoder data drives a Prediction of Position step (e.g., odometry), yielding a predicted position; a Perception step supplies an observation (raw sensor data or extracted features); Matching the observation against a map database yields matched observations, which feed the Position Update (estimation).]
Localization Methods

- Mathematical background: Bayes filters
- Markov localization
  - Central idea: represent the robot's belief by a probability distribution over possible positions, and use Bayes rule and convolution to update the belief whenever the robot senses or moves
  - Markov assumption: past and future data are independent if one knows the current state
- Kalman filtering
  - Central idea: pose the localization problem as a sensor fusion problem
  - Assumption: Gaussian distribution function
- Particle filtering
  - Central idea: sample-based, nonparametric filter
  - Monte Carlo method
- SLAM (simultaneous localization and mapping)
- Multi-robot localization
Markov Localization

Applying probability theory to robot localization:

- Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.
- This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).
- During each update, the probability for each state (element) of the entire space is updated.
Markov Localization Example

Assume the robot position is one-dimensional.

- The robot is placed somewhere in the environment, but it is not told its location.
- The robot queries its sensors and finds out it is next to a door.
Markov Localization Example

- The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother.
- The robot queries its sensors and again finds itself next to a door.
Probabilistic Robotics

Falls in between model-based and behavior-based techniques:
- There are models and sensor measurements, but they are assumed to be incomplete and insufficient for control.
- Statistics provides the mathematical glue to integrate models and sensor measurements.

Basic mathematics:
- Probabilities
- Bayes rule
- Bayes filters
The next slides are provided by the authors of the book "Probabilistic Robotics"; you can download them from the website:
http://www.probabilistic-robotics.org/
Probabilistic Robotics

Mathematical background:
- Probabilities
- Bayes rule
- Bayes filters
Probabilistic Robotics

Key idea: explicit representation of uncertainty using the calculus of probability theory.

- Perception = state estimation
- Action = utility optimization
Axioms of Probability Theory

Pr(A) denotes the probability that proposition A is true.

1. 0 ≤ Pr(A) ≤ 1
2. Pr(True) = 1, Pr(False) = 0
3. Pr(A ∨ B) = Pr(A) + Pr(B) - Pr(A ∧ B)
A Closer Look at Axiom 3

[Venn diagram: sets A and B inside the universe True, with overlap A ∧ B, illustrating Pr(A ∨ B) = Pr(A) + Pr(B) - Pr(A ∧ B).]
Using the Axioms

Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) - Pr(A ∧ ¬A)
Pr(True)   = Pr(A) + Pr(¬A) - Pr(False)
1          = Pr(A) + Pr(¬A) - 0
Pr(¬A)     = 1 - Pr(A)
Discrete Random Variables

- X denotes a random variable.
- X can take on a countable number of values in {x_1, x_2, ..., x_n}.
- P(X = x_i), or P(x_i), is the probability that the random variable X takes on value x_i.
- P(·) is called the probability mass function.

E.g. P(Room) = <0.7, 0.2, 0.08, 0.02>
Continuous Random Variables

- X takes on values in the continuum.
- p(X = x), or p(x), is a probability density function:

Pr(x ∈ (a, b)) = ∫_a^b p(x) dx

[Figure: a probability density curve p(x) plotted over x.]
Joint and Conditional Probability

- P(X = x and Y = y) = P(x, y)
- If X and Y are independent, then P(x, y) = P(x) P(y)
- P(x | y) is the probability of x given y:
  P(x | y) = P(x, y) / P(y)
  P(x, y) = P(x | y) P(y)
- If X and Y are independent, then P(x | y) = P(x)
Law of Total Probability, Marginals

Discrete case:
Σ_x P(x) = 1
P(x) = Σ_y P(x, y)
P(x) = Σ_y P(x | y) P(y)

Continuous case:
∫ p(x) dx = 1
p(x) = ∫ p(x, y) dy
p(x) = ∫ p(x | y) p(y) dy
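As a quick numerical check of the discrete rules above (a minimal sketch; the joint distribution values are made up for illustration):

```python
# Marginalization over a made-up joint distribution P(x, y) of two
# binary variables: P(x) = sum_y P(x, y).
P_joint = {
    ("x0", "y0"): 0.3, ("x0", "y1"): 0.2,
    ("x1", "y0"): 0.1, ("x1", "y1"): 0.4,
}

P_x = {}
for (x, y), p in P_joint.items():
    P_x[x] = P_x.get(x, 0.0) + p

print(P_x)  # marginals over x; they sum to 1
```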
Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒ P(x | y) = P(y | x) P(x) / P(y) = likelihood × prior / evidence

If y is a new sensor reading:
- p(x): prior probability distribution
- p(x | y): posterior probability distribution
- p(y | x): generative model, characteristics of the sensor
- p(y): does not depend on x (normalizer)
Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = 1 / P(y) = 1 / Σ_x P(y | x) P(x)

Algorithm:
  ∀x: aux_{x|y} = P(y | x) P(x)
  η = 1 / Σ_x aux_{x|y}
  ∀x: P(x | y) = η aux_{x|y}
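The normalization algorithm can be sketched in a few lines of Python (the likelihood and prior values below are made up for a three-state example):

```python
# Normalization: P(x | y) = eta * P(y | x) * P(x), eta = 1 / sum_x aux.
P_x = [0.5, 0.3, 0.2]          # prior P(x), illustrative values
P_y_given_x = [0.8, 0.1, 0.1]  # likelihood P(y | x), illustrative values

aux = [l * p for l, p in zip(P_y_given_x, P_x)]  # aux_{x|y}
eta = 1.0 / sum(aux)                             # normalizer
P_x_given_y = [eta * a for a in aux]             # posterior, sums to 1

print(P_x_given_y)
```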
Bayes Rule with Background Knowledge

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Conditioning

Total probability:

P(x) = ∫ P(x, z) dz
P(x) = ∫ P(x | z) P(z) dz
P(x | y) = ∫ P(x | y, z) P(z) dz
Conditional Independence

P(x, y | z) = P(x | z) P(y | z)

equivalent to

P(x | z) = P(x | z, y)
and
P(y | z) = P(y | z, x)
Simple Example of State Estimation

- Suppose a robot obtains measurement z.
- What is P(open | z)?
Causal vs. Diagnostic Reasoning

- P(open | z) is diagnostic.
- P(z | open) is causal.
- Often causal knowledge is easier to obtain.
- Bayes rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)

(Count frequencies!)
Example

P(z | open) = 0.6    P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]
            = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5)
            = 2/3 ≈ 0.67

z raises the probability that the door is open.
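This arithmetic is easy to reproduce in Python, a small sketch using only the numbers given on this slide:

```python
# Diagnostic reasoning via Bayes rule with the slide's numbers:
# P(z | open) = 0.6, P(z | not open) = 0.3, P(open) = P(not open) = 0.5.
p_z_open, p_z_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

p_z = p_z_open * p_open + p_z_not_open * p_not_open  # evidence P(z)
p_open_given_z = p_z_open * p_open / p_z             # posterior P(open | z)

print(p_open_given_z)  # 2/3
```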
Combining Evidence

- Suppose our robot obtains another observation z_2.
- How can we integrate this new information?
- More generally, how can we estimate P(x | z_1, ..., z_n)?
Recursive Bayesian Updating

P(x | z_1, ..., z_n) = P(z_n | x, z_1, ..., z_{n-1}) P(x | z_1, ..., z_{n-1}) / P(z_n | z_1, ..., z_{n-1})

Markov assumption: z_n is independent of z_1, ..., z_{n-1} if we know x.

P(x | z_1, ..., z_n) = P(z_n | x) P(x | z_1, ..., z_{n-1}) / P(z_n | z_1, ..., z_{n-1})
                     = η P(z_n | x) P(x | z_1, ..., z_{n-1})
                     = η_{1...n} [ Π_{i=1...n} P(z_i | x) ] P(x)
Example: Second Measurement

P(z_2 | open) = 0.5    P(z_2 | ¬open) = 0.6
P(open | z_1) = 2/3

P(open | z_2, z_1) = P(z_2 | open) P(open | z_1) / [P(z_2 | open) P(open | z_1) + P(z_2 | ¬open) P(¬open | z_1)]
                   = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3)
                   = 5/8 = 0.625

z_2 lowers the probability that the door is open.
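The second update can be checked the same way (a sketch reusing the slide's numbers, with P(open | z_1) = 2/3 carried over from the previous result):

```python
# Recursive Bayesian updating: fold in z2 given the posterior after z1.
# P(z2 | open) = 0.5, P(z2 | not open) = 0.6, P(open | z1) = 2/3.
p_open_z1 = 2.0 / 3.0
p_not_open_z1 = 1.0 - p_open_z1
p_z2_open, p_z2_not_open = 0.5, 0.6

num = p_z2_open * p_open_z1                 # numerator of Bayes rule
den = num + p_z2_not_open * p_not_open_z1   # evidence P(z2 | z1)
p_open_z1_z2 = num / den

print(p_open_z1_z2)  # 5/8 = 0.625
```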
Actions

Often the world is dynamic, since
- actions carried out by the robot,
- actions carried out by other agents,
- or simply the passing of time
change the world.

How can we incorporate such actions?
Typical Actions

- The robot turns its wheels to move.
- The robot uses its manipulator to grasp an object.
- Plants grow over time.

Actions are never carried out with absolute certainty. In contrast to measurements, actions generally increase the uncertainty.
Modeling Actions

To incorporate the outcome of an action u into the current belief, we use the conditional pdf

P(x' | u, x)

This term specifies the probability that executing u changes the state from x to x'.
Integrating the Outcome of Actions

Continuous case:
P(x | u) = ∫ P(x | u, x') P(x') dx'

Discrete case:
P(x | u) = Σ_{x'} P(x | u, x') P(x')
Example: Closing the door
State Transitions

P(x | u, x') for u = "close door":

[State diagram: from open, the action leads to closed with probability 0.9 and remains open with probability 0.1; from closed, the state remains closed with probability 1.]

If the door is open, the action "close door" succeeds in 90% of all cases.

Prior belief: P(open) = 5/8, P(closed) = 3/8
Example: The Resulting Belief

P(closed | u) = Σ_{x'} P(closed | u, x') P(x')
              = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
              = (9/10)(5/8) + (1)(3/8) = 15/16

P(open | u) = Σ_{x'} P(open | u, x') P(x')
            = P(open | u, open) P(open) + P(open | u, closed) P(closed)
            = (1/10)(5/8) + (0)(3/8) = 1/16
            = 1 - P(closed | u)
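The prediction step can be checked numerically as well (a sketch using the transition probabilities and prior from the two slides above):

```python
# Action update for u = "close door":
# P(closed | u, open) = 0.9, P(open | u, open) = 0.1,
# P(closed | u, closed) = 1.0, P(open | u, closed) = 0.0,
# prior P(open) = 5/8, P(closed) = 3/8.
p_open, p_closed = 5.0 / 8.0, 3.0 / 8.0

p_closed_u = 0.9 * p_open + 1.0 * p_closed  # -> 15/16
p_open_u = 0.1 * p_open + 0.0 * p_closed    # -> 1/16

print(p_closed_u, p_open_u)
```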
Bayes Filters: Framework

Given:
- Stream of observations z and action data u:
  d_t = {u_1, z_1, ..., u_t, z_t}
- Sensor model P(z | x)
- Action model P(x' | u, x)
- Prior probability of the system state P(x)

Wanted:
- Estimate of the state X of a dynamical system
- The posterior of the state is also called the belief:

Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
Markov Assumption

State transition probability:
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

Measurement probability:
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)

Markov assumption: past and future data are independent if one knows the current state.

Underlying assumptions:
- Static world, independent noise
- Perfect model, no approximation errors
Bayes Filters

z = observation, u = action, x = state

Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t)
  (Bayes)       = η P(z_t | x_t, u_1, z_1, ..., u_t) P(x_t | u_1, z_1, ..., u_t)
  (Markov)      = η P(z_t | x_t) P(x_t | u_1, z_1, ..., u_t)
  (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, ..., u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, ..., z_{t-1}) dx_{t-1}
                = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Bayes Filter Algorithm

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

1.  Algorithm Bayes_filter( Bel(x), d ):
2.    η = 0
3.    If d is a perceptual data item z then
4.      For all x do
5.        Bel'(x) = P(z | x) Bel(x)
6.        η = η + Bel'(x)
7.      For all x do
8.        Bel'(x) = η⁻¹ Bel'(x)
9.    Else if d is an action data item u then
10.     For all x do
11.       Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
12.   Return Bel'(x)
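For a finite state space, the algorithm above fits in a few lines of Python. This sketch runs the deck's door example end to end (measurements z_1 and z_2, then the close-door action), with the sensor and transition numbers taken from the earlier slides:

```python
# Discrete Bayes filter over the states ("open", "closed").
STATES = ("open", "closed")

def measurement_update(bel, p_z_given_x):
    # Bel'(x) = eta * P(z | x) * Bel(x), steps 4-8 of the algorithm.
    unnorm = {x: p_z_given_x[x] * bel[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

def action_update(bel, p_x_given_u_xp):
    # Bel'(x) = sum_{x'} P(x | u, x') * Bel(x'), steps 10-11.
    return {x: sum(p_x_given_u_xp[(x, xp)] * bel[xp] for xp in STATES)
            for x in STATES}

bel = {"open": 0.5, "closed": 0.5}                           # uniform prior
bel = measurement_update(bel, {"open": 0.6, "closed": 0.3})  # z1: open -> 2/3
bel = measurement_update(bel, {"open": 0.5, "closed": 0.6})  # z2: open -> 5/8
close_door = {("closed", "open"): 0.9, ("closed", "closed"): 1.0,
              ("open", "open"): 0.1, ("open", "closed"): 0.0}
bel = action_update(bel, close_door)                         # open -> 1/16

print(bel)
```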
Bayes Filters are Familiar!

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)
Summary

- Bayes rule allows us to compute probabilities that are hard to assess otherwise.
- Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
- Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
Thank You

Homework 3: Exercises 1, 2, and 3 on pp. 36-37 of the textbook. Due date: March 5.
Next class: Feb 21, 2012
Homework 2: Due date: Feb 27