Mobile Robotics
Probabilistic Robotics
Wolfram Burgard
Probabilistic Robotics
Key idea: explicitly represent uncertainty using the calculus of probability theory.
Axioms of Probability Theory
P(A) denotes the probability that proposition A is true.
Axiom 1: 0 ≤ P(A) ≤ 1
Axiom 2: P(True) = 1, P(False) = 0
Axiom 3: P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
A Closer Look at Axiom 3
P(A ∨ B) = P(A) + P(B) − P(A ∧ B)
(Venn diagram: within the universe True, the regions A and B overlap in A ∧ B, which would otherwise be counted twice.)
Using the Axioms
P(A ∨ ¬A) = P(A) + P(¬A) − P(A ∧ ¬A)
P(True) = P(A) + P(¬A) − P(False)
1 = P(A) + P(¬A) − 0
P(¬A) = 1 − P(A)
Discrete Random Variables
X denotes a random variable that can take on a countable number of values x1, x2, ..., xn.
P(X = xi), or P(xi) for short, is the probability that X takes on value xi.
E.g., a fair coin flip: P(X = head) = P(X = tail) = 0.5.
Continuous Random Variables
X takes on values in a continuum and is described by a probability density function (pdf) p(x).
E.g., P(x ∈ (a, b)) = ∫_a^b p(x) dx.
“Probability Sums up to One”
Discrete case: Σx P(x) = 1
Continuous case: ∫ p(x) dx = 1
Joint and Conditional Probability
P(X=x and Y=y) = P(x, y)
If X and Y are independent, then P(x, y) = P(x) P(y).
P(x | y) is the probability of x given y: P(x | y) = P(x, y) / P(y), i.e. P(x, y) = P(x | y) P(y).
If X and Y are independent, then P(x | y) = P(x).
Law of Total Probability
Discrete case: P(x) = Σy P(x | y) P(y)
Continuous case: p(x) = ∫ p(x | y) p(y) dy
Marginalization
Discrete case: P(x) = Σy P(x, y)
Continuous case: p(x) = ∫ p(x, y) dy
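The marginalization rule can be sketched in a few lines of code. The joint table below is a made-up example for illustration only:

```python
# Marginalization: recover P(x) by summing the joint P(x, y) over y.
# The joint distribution below is an illustrative example, not from the slides.
joint = {
    ("sunny", "warm"): 0.4,
    ("sunny", "cold"): 0.1,
    ("rainy", "warm"): 0.2,
    ("rainy", "cold"): 0.3,
}

def marginal_x(joint):
    """P(x) = sum over y of P(x, y)."""
    p_x = {}
    for (x, y), p in joint.items():
        p_x[x] = p_x.get(x, 0.0) + p
    return p_x

print(marginal_x(joint))
```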
Bayes Formula
P(x | y) = P(y | x) P(x) / P(y) = likelihood ⋅ prior / evidence
Normalization
P(x | y) = η P(y | x) P(x), with η = 1 / P(y) = 1 / Σx P(y | x) P(x)
Algorithm:
∀x: aux(x) = P(y | x) P(x)
η = 1 / Σx aux(x)
∀x: P(x | y) = η aux(x)
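A minimal sketch of this normalization trick, using the door-sensor numbers that appear in the example later in these slides:

```python
# Normalization: compute P(x|y) = eta * P(y|x) * P(x) without
# evaluating the evidence P(y) directly.
def posterior(likelihood, prior):
    """For each x: aux(x) = P(y|x) P(x); eta = 1/sum aux; P(x|y) = eta*aux(x)."""
    aux = {x: likelihood[x] * prior[x] for x in prior}
    eta = 1.0 / sum(aux.values())
    return {x: eta * a for x, a in aux.items()}

likelihood = {"open": 0.6, "closed": 0.3}   # P(z|x), from the door example
prior = {"open": 0.5, "closed": 0.5}        # P(x)
print(posterior(likelihood, prior))         # open: 2/3, closed: 1/3
```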
Bayes Rule with Background Knowledge
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Conditional Independence
P(x, y | z) = P(x | z) P(y | z)
Equivalent to P(x | z) = P(x | z, y) and P(y | z) = P(y | z, x).
Conditional independence does not necessarily imply P(x, y) = P(x) P(y) (independence / marginal independence), and vice versa.
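A numeric sketch of the caveat above, on a made-up distribution: x and y are independent given z, yet not marginally independent.

```python
from itertools import product

# Made-up distribution: given z, x and y are i.i.d. Bernoulli with a
# parameter that depends on z. Conditionally independent by construction.
p_z = {0: 0.5, 1: 0.5}
p_given = {0: 0.2, 1: 0.8}  # P(x=1|z) = P(y=1|z)

def bern(p, v):
    """P(value v) under Bernoulli(p)."""
    return p if v == 1 else 1 - p

joint = {(x, y, z): p_z[z] * bern(p_given[z], x) * bern(p_given[z], y)
         for x, y, z in product((0, 1), repeat=3)}

# P(x, y | z) = P(x | z) P(y | z) holds for every assignment:
for x, y, z in product((0, 1), repeat=3):
    p_xy_given_z = joint[(x, y, z)] / p_z[z]
    assert abs(p_xy_given_z - bern(p_given[z], x) * bern(p_given[z], y)) < 1e-12

# ...but marginally P(x, y) != P(x) P(y):
p_xy = {(x, y): sum(joint[(x, y, z)] for z in (0, 1))
        for x, y in product((0, 1), repeat=2)}
p_x = {x: sum(p_xy[(x, y)] for y in (0, 1)) for x in (0, 1)}
print(p_xy[(1, 1)], p_x[1] * p_x[1])   # 0.34 vs 0.25: not equal
```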
Simple Example of State Estimation
Suppose a robot obtains a measurement z from a sensor at a door. What is P(open | z)?
Causal vs. Diagnostic Reasoning
P(open | z) is diagnostic.
P(z | open) is causal.
In some situations, causal knowledge is easier to obtain, e.g. by counting frequencies.
Bayes rule allows us to use causal knowledge:
P(open | z) = P(z | open) P(open) / P(z)
Example
P(z | open) = 0.6    P(z | ¬open) = 0.3
P(open) = P(¬open) = 0.5
P(open | z) = P(z | open) P(open) / (P(z | open) P(open) + P(z | ¬open) P(¬open))
P(open | z) = 0.6 ⋅ 0.5 / (0.6 ⋅ 0.5 + 0.3 ⋅ 0.5) = 0.3 / (0.3 + 0.15) = 0.67
The measurement z raises the probability that the door is open.
Combining Evidence
Suppose our robot obtains another observation z2. How can we integrate this new information?
More generally, how can we estimate P(x | z1, ..., zn)?
Recursive Bayesian Updating
Markov assumption: zn is independent of z1, ..., zn−1 given x.
P(x | z1, ..., zn) = η P(zn | x) P(x | z1, ..., zn−1)
This allows the posterior to be updated incrementally as each observation arrives.
Example: Second Measurement
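The recursive update can be sketched as follows. The sensor model for z1 is taken from the example above; the values for P(z2 | ⋅) are illustrative assumptions, since the second sensor model is not given here:

```python
# Recursive Bayesian updating: fold in one measurement at a time,
# Bel(x) <- eta * P(z|x) * Bel(x).
def update(bel, likelihood):
    unnorm = {x: likelihood[x] * bel[x] for x in bel}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

bel = {"open": 0.5, "closed": 0.5}     # prior P(x)
p_z1 = {"open": 0.6, "closed": 0.3}    # P(z1|x), from the example above
bel = update(bel, p_z1)                # open: 2/3
p_z2 = {"open": 0.5, "closed": 0.6}    # P(z2|x), assumed values
bel = update(bel, p_z2)
print(bel["open"])
```

Each call reuses the previous posterior as the new prior, which is exactly what makes the update recursive.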
Typical Actions
The robot turns its wheels to move.
The robot uses its manipulator to grasp an object.
Plants grow over time …
Actions are never carried out with absolute certainty.
Modeling Actions
To incorporate the outcome of an action u into the current “belief”, we use the conditional pdf
P(x | u, x’)
which specifies the probability that executing u in state x’ leads to state x.
Example: Closing the door
State Transitions
P(x | u, x’) for u = “close door”:
P(closed | u, open) = 0.9    P(open | u, open) = 0.1
P(closed | u, closed) = 1    P(open | u, closed) = 0
If the door is open, the action “close door” succeeds in 90% of all cases.
Integrating the Outcome of Actions
Continuous case: P(x | u) = ∫ P(x | u, x’) P(x’) dx’
Discrete case: P(x | u) = Σx’ P(x | u, x’) P(x’)
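A sketch of this prediction step for the door domain. The transition probabilities are the ones from the “close door” slide; the prior over the previous state is an assumption:

```python
# Prediction step: P(x|u) = sum over x' of P(x|u,x') P(x').
p_trans = {  # P(x_new | u="close door", x_old), from the transition slide
    ("closed", "open"): 0.9,
    ("open", "open"): 0.1,
    ("closed", "closed"): 1.0,
    ("open", "closed"): 0.0,
}
prior = {"open": 0.5, "closed": 0.5}  # P(x'), assumed

def predict(p_trans, prior):
    p = {}
    for (x_new, x_old), pt in p_trans.items():
        p[x_new] = p.get(x_new, 0.0) + pt * prior[x_old]
    return p

print(predict(p_trans, prior))  # closed ≈ 0.95, open ≈ 0.05
```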
Bayes Filters: Framework
Given:
Stream of observations z and action data u: dt = {u1, z1, ..., ut, zt}
Sensor model P(z | x)
Action model P(x | u, x’)
Prior probability of the system state P(x)
Wanted:
Estimate of the state x of a dynamical system, i.e. the posterior (“belief”)
Bel(xt) = P(xt | u1, z1, ..., ut, zt)
Markov Assumption
Measurements and transitions depend only on the current state: zt is independent of the past given xt, and xt depends only on xt−1 and ut.
Underlying Assumptions
Static world
Independent noise
Perfect model, no approximation errors
Bayes Filters
z = observation, u = action, x = state
Bel(xt) = P(xt | u1, z1, ..., ut, zt)
(Bayes)       = η P(zt | xt, u1, z1, ..., ut) P(xt | u1, z1, ..., ut)
(Markov)      = η P(zt | xt) P(xt | u1, z1, ..., ut)
(Total prob.) = η P(zt | xt) ∫ P(xt | u1, z1, ..., ut, xt−1) P(xt−1 | u1, z1, ..., ut) dxt−1
(Markov)      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, ..., ut) dxt−1
(Markov)      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, ..., zt−1) dxt−1
              = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
Bayes Filter Algorithm
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
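One full predict-correct cycle of the discrete Bayes filter can be sketched as follows, for the door domain. The transition table matches the “close door” slide; the sensor table extends the sensor model of the earlier example with assumed values for the complementary reading:

```python
# Discrete Bayes filter: one cycle of
#   Bel(x_t) = eta P(z_t|x_t) sum over x' of P(x_t|u_t,x') Bel(x').
STATES = ("open", "closed")

def bayes_filter(bel, u, z, p_trans, p_sensor):
    # Predict: incorporate the action u.
    pred = {x: sum(p_trans[(x, xp, u)] * bel[xp] for xp in STATES)
            for x in STATES}
    # Correct: incorporate the measurement z, then normalize.
    unnorm = {x: p_sensor[(z, x)] * pred[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# P(x_new | x_old, u), from the "close door" transition slide.
p_trans = {("closed", "open", "close"): 0.9, ("open", "open", "close"): 0.1,
           ("closed", "closed", "close"): 1.0, ("open", "closed", "close"): 0.0}
# P(z | x); "sense_open" rows from the example, "sense_closed" rows assumed
# as the complements.
p_sensor = {("sense_open", "open"): 0.6, ("sense_open", "closed"): 0.3,
            ("sense_closed", "open"): 0.4, ("sense_closed", "closed"): 0.7}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "close", "sense_closed", p_trans, p_sensor)
print(bel)
```

After closing the door and sensing “closed”, nearly all belief mass ends up on the closed state.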
Bayes Filters are Familiar!
Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
Kalman filters
Particle filters
Hidden Markov models
Dynamic Bayesian networks
Partially Observable Markov Decision Processes (POMDPs)
Probabilistic Localization
Summary
Bayes rule allows us to compute probabilities that are hard to assess otherwise.
Under the Markov assumption, recursive Bayesian updating can be used to efficiently combine evidence.
Bayes filters are a probabilistic tool for estimating the state of dynamic systems.