
Autonomous Mobile Robots

[Figure: general control scheme for mobile robot localization, with Perception, Localization ("Position", Global Map, Environment Model, Local Map), Cognition (Path) and Motion Control acting in the Real World / Environment]

Probabilistic Map Based Localization

Zürich Autonomous Systems Lab

Lecture 9 – Localization: Probabilistic Approach, Markov

2 Probabilistic Reasoning (e.g. Bayesian)


 Reasoning in the presence of uncertainties and incomplete information
 Combining preliminary information and models with learning from
experimental data

Picture Courtesy of
Bessiere, INRIA Grenoble, France

3 Outline of today’s lecture

 Definition and illustration of probabilistic map based localization

 Two solutions
 Markov localization (section 5.6.7, pages 307-322 of the textbook)
 Kalman filter localization (section 5.6.8, pages 322-342)


4 The problem
 Consider a mobile robot moving in a known environment.

 As it starts to move, say from a precisely known location, it might keep
track of its location using odometry.

 However, after a certain movement the robot will become very uncertain about
its position
=> update using an observation of its environment

 An observation also leads to an estimate of the robot's position, which can then
be fused with the odometric estimate to get the best possible update of
the robot's actual position.


5 Illustration of probabilistic map based localization:
Improving belief state by moving

The robot is placed somewhere in the environment, but it is not told its location.

The robot queries its sensors and finds it is next to a pillar.

The robot moves one meter forward. To account for inherent noise in robot motion, the new belief is smoother.

The robot queries its sensors and again it finds itself next to a pillar.

Finally, it updates its belief by combining this information with its previous belief.

6 Why a probabilistic approach for mobile robot localization?

 Because the data coming from the robot sensors are affected by
measurement errors, we can only compute the probability that the robot is
in a given configuration.
 The key idea in probabilistic robotics is to represent uncertainty using
probability theory: instead of giving a single best estimate of the current
robot configuration, probabilistic robotics represents the robot configuration
as a probability distribution over all possible robot poses. This
probability is called belief.


7 Applying Bayes theory to robot localization

 p(A): Probability that A is true.
  e.g. p(r_t = l): probability that the robot r is at position l at time t

 We wish to compute the probability of each individual robot position given
actions and sensor measurements.

 p(A | B): Conditional probability of A given that we know B.
  e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t

 Product rule:  p(A \wedge B) = p(A \mid B)\, p(B) = p(B \mid A)\, p(A)

 Bayes rule:  p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}


8 Applying Bayes theory to robot localization

 Bayes rule:  p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}

 Map from a belief state and a sensor input to a refined belief state (perception):

   p(l \mid i) = \frac{p(i \mid l)\, p(l)}{p(i)},  with  p(i) = \sum_{l'} p(i \mid l')\, p(l')   (Theorem of Total Probability)

 p(l): belief state before the perceptual update process
 p(i | l): probability of getting measurement i when being at position l -> map
• consult the robot's map, identify the probability of a certain sensor reading for each
possible position in the map
 p(i): normalization factor, so that the sum of p(l | i) over all positions l equals 1.


9 Applying Bayes theory to robot localization

 Bayes rule:  p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}

 Map from a belief state and an action to a new belief state (prediction / action):

   p(l_t \mid o_t) = \int p(l_t \mid l'_{t-1}, o_t)\, p(l'_{t-1})\, dl'_{t-1}

 Summing over all possible ways in which the robot may have reached l_t.
 with l_t and l'_{t-1}: positions, o_t: odometric measurement

 Markov assumption: the update only depends on the previous state and the most
recent action and perception (see the recursion written out below).
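
To connect this notation with the bel(·) notation used from slide 25 onward, the two mappings can be written together as the standard recursion below (x_t: robot pose, u_t: odometric input, i.e. o_t above; z_t: sensor measurement, i.e. i_t above). This restatement is added here for reference and is not on the original slide.

```latex
% Prediction (action update): Theorem of Total Probability
\overline{bel}(x_t) \;=\; \int p(x_t \mid u_t,\, x_{t-1})\; bel(x_{t-1})\; dx_{t-1}

% Correction (perception update): Bayes rule with normalization factor \eta
bel(x_t) \;=\; \eta\; p(z_t \mid x_t)\; \overline{bel}(x_t)
```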


10 Example probability distributions for representing robot configurations

 In the following we consider a robot which is constrained to move along a
straight rail. The problem is one-dimensional.
 The robot configuration is the position x along the rail.

[Figure: robot on a straight rail; position x measured from the origin 0]


11 Example 1: Uniform distribution


 This is used when the robot does not know where it is. It means that the
information about the robot configuration is null.

[Figure: uniform distribution, p(x) = 1/N for x in [0, N] and 0 elsewhere]

Remember that in order to be a probability distribution a function p(x) must
always satisfy the following constraint:

   \int p(x)\, dx = 1


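As a quick check (added here, not on the slide): the height 1/N is exactly what makes the uniform density above satisfy this constraint on the interval [0, N]:

```latex
\int_{0}^{N} \frac{1}{N}\, dx \;=\; \frac{1}{N}\,\bigl[\,x\,\bigr]_{0}^{N} \;=\; \frac{N}{N} \;=\; 1
```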

12 Example 2: Multimodal distribution


 The robot is in the region around “a” or “b”.


13 Example 3: Dirac distribution


 The robot has a probability of 1.0 (i.e. 100%) of being at position "a".

   p(x) = \delta(x - a)

 Note: the Dirac distribution is usually represented with an arrow.
 The Dirac function \delta(x) is defined as:

   \delta(x) = \begin{cases} \infty & \text{if } x = 0 \\ 0 & \text{otherwise} \end{cases}
   \qquad \text{with} \qquad \int \delta(x)\, dx = 1



14 Example 4: Gaussian distribution


 The Gaussian distribution has the shape of a bell and is defined by the
following formula:

   p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

where μ is the mean value and σ is the standard deviation.
 The Gaussian distribution is also called normal distribution and is usually
abbreviated with N(μ, σ).
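
A small numerical sanity check of the formula above (an illustrative sketch, not part of the lecture): evaluate the density in pure Python and verify that it integrates to approximately 1, as every probability distribution must.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """p(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)^2 / (2 * sigma^2))"""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Riemann-sum approximation of the integral over [-6*sigma, +6*sigma]
dx = 0.001
total = sum(gaussian_pdf(k * dx) * dx for k in range(-6000, 6000))
print(round(total, 4))   # -> 1.0, as required for a probability distribution
```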



19 Description of the probabilistic localization problem


 Consider a mobile robot moving in a known environment.
 As it starts to move, say from a precisely known location, it can keep track of its motion using
odometry.
 Due to odometry uncertainty, after some movement the robot will become very uncertain
about its position.
 To keep position uncertainty from growing unbounded, the robot must localize itself in relation
to its environment map. To localize, the robot might use its on-board exteroceptive sensors
(e.g. ultrasonic, laser, vision sensors) to make observations of its environment
 The robot updates its position based on the observation. Its uncertainty shrinks.

[Figure: observation expressed in the sensor reference frame]


20 Action and perception updates


 In robot localization, we distinguish two update steps:

1. Action update:
• the robot moves and estimates its position through its proprioceptive sensors.
During this step, the robot's uncertainty grows.

[Figure: robot belief before the observation]

2. Perception update:
• the robot makes an observation using its exteroceptive sensors and corrects its
position by appropriately combining its belief before the observation with the probability
of making exactly that observation. During this step, the robot's uncertainty shrinks.

[Figure: robot belief before the observation, the probability of making this observation,
and the updated belief (fusion)]


21 Localization: Probabilistic Position Estimation (Kalman Filter)


 Continuous, recursive and very compact

[Figure: Kalman filter localization loop with Prediction (action) and Perception (measurement) steps]


22 The solution to the probabilistic localization problem

Solving the probabilistic localization problem consists of separately solving
the Action/Prediction and Perception/Measurement updates.


23 The solution to the probabilistic localization problem


A probabilistic approach to the mobile robot localization problem is a method
able to compute the probability distribution of the robot configuration during
each Action and Perception step.

The ingredients are:

1. The initial probability distribution p(x)_{t=0}
2. The statistical error model of the proprioceptive sensors (e.g. wheel encoders)
3. The statistical error model of the exteroceptive sensors (e.g. laser, sonar, camera)
4. The map of the environment
   (If the map is not known a priori, the robot needs to build a map of the
   environment and then localize in it. This is called SLAM, Simultaneous
   Localization And Mapping.)


24 The solution to the probabilistic localization problem


How do we solve the Action and Perception updates?

 Action update uses the Theorem of Total probability (or convolution)

 Perception update uses the Bayes rule


25 Action update
 In this phase the robot estimates its current position based on the
knowledge of the previous position and the odometric input.

 Using the Theorem of Total Probability, we compute the robot's belief after
the motion as

   \overline{bel}(x_t) = \int p(x_t \mid u_t,\, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}

which can be written as a convolution (see the derivation below).

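Why the integral above is a convolution (a one-line derivation added for clarity; it assumes the motion model depends only on the displacement x_t - x_{t-1}, here written p_{u_t}(·)):

```latex
\overline{bel}(x_t)
  \;=\; \int p(x_t \mid u_t,\, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}
  \;=\; \int p_{u_t}(x_t - x_{t-1})\, bel(x_{t-1})\, dx_{t-1}
  \;=\; \bigl(p_{u_t} * bel\bigr)(x_t)
```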

26 Perception update
 In this phase the robot corrects its previous position (i.e. its former belief)
by appropriately combining it with the information from its exteroceptive
sensors:

   p(x_t \mid z_t) = \frac{p(z_t \mid x_t)\, p(x_t)}{p(z_t)}

   bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)

where η is a normalization factor which makes the probability integrate to 1.
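For a discrete set of positions, the normalization factor can be written out explicitly (a small addition for concreteness, not on the slide):

```latex
\eta \;=\; \frac{1}{p(z_t)} \;=\; \frac{1}{\sum_{x} p(z_t \mid x)\, \overline{bel}(x)}
```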


27 Perception update
 Explanation of  bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)

 p(z_t \mid x_t): the probability of observing z_t given that the robot is at position x_t
 \overline{bel}(x_t): the belief before the observation, which plays the role of the prior p(x_t) in the Bayes rule

   p(x_t \mid z_t) = \frac{p(z_t \mid x_t)\, p(x_t)}{p(z_t)}

[Figure: plots of the belief before the observation, the sensor likelihood p(z_t | x_t), and the resulting belief]

28 Illustration of probabilistic map based localization

[Figure: step-by-step illustration of the belief along the position axis x]

 Initial probability distribution  p(x)_{t=0}

 Perception update:  bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t),  using the sensor model p(z_t | x_t)

 Action update  (convolution of the old belief with the motion model)

 Perception update:  bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t),  using the new sensor model p(z_t | x_t)


32 Markov versus Kalman localization


Two approaches exist to represent the probability distribution and to compute the convolution
and Bayes rule during the Action and Perception phases:

Markov
• The configuration space is divided into many cells. The configuration space of a robot
moving on a plane is three-dimensional (x, y, θ). Each cell contains the probability of the
robot being in that cell.
• The probability distribution of the sensor model is also discrete.
• During Action and Perception, all the cells are updated.

Kalman
• The probability distribution of both the robot configuration and the sensor model is
assumed to be continuous and Gaussian!
• Since a Gaussian distribution is fully described by its mean value μ and variance σ²
(covariance Σ in the multivariate case), we only need to update μ and σ². Therefore the
computational cost is very low!



Autonomous Mobile Robots

[Figure: general control scheme for mobile robot localization, with Perception, Localization, Cognition and Motion Control acting in the Real World / Environment]

Probabilistic Map Based Localization: Markov Localization
Section 5.6.7, pages 307-322 in the textbook

Zürich Autonomous Systems Lab


34 Markov localization
 Markov localization uses a grid space representation of the robot configuration.

 For the sake of simplicity, let us consider a robot moving in a one-dimensional
environment (e.g. moving on a rail). Therefore the robot configuration
space can be represented by the single variable x.
 Let us discretize the configuration space into 10 cells.


35 Markov localization
 Let us discretize the configuration space into 10 cells.

 Suppose that the robot's initial belief is a uniform distribution from 0 to 3.
Observe that all the elements are normalized so that their sum is 1.


36 Markov localization
 Initial belief distribution

 Action phase:
Let us assume that the robot moves forward with the following statistical model.

 This means that there is a 50% probability that the robot moved 2 cells forward and a 50%
probability that it moved 3 cells forward.
 Considering what the probability was before moving, what will the probability be
after the motion?


37 Markov localization: Action update


 The solution is given by the convolution of the two distributions

[Figure: belief before the motion * motion model = belief after the motion]


38 Markov localization: Action update - Explanation


 Here, we would like to explain where this formula actually comes from. In order to compute
the probability of position x_1 in the new belief state, one must sum over all the possible ways
in which the robot may reach x_1 according to the potential positions expressed in the former
belief state x_0 and the potential input expressed by u_1. Observe that because the former belief
is limited to positions 0 to 3 and the motion to 2 or 3 cells forward, the robot can only reach the
states between 2 and 6, therefore:

 You can now verify that these equations compute nothing but the
convolution of the Action update (a numeric check is sketched below):
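
The numeric check promised above, in Python. The values are the ones from the previous slides: a 10-cell grid (slide 34), a uniform initial belief over cells 0-3 (slide 35), and a motion of 2 or 3 cells forward with probability 0.5 each (slide 36).

```python
# Action update for the example: sum over all ways in which the robot can reach x1.
belief = [0.25, 0.25, 0.25, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # former belief p(x0)
motion_model = {2: 0.5, 3: 0.5}                                    # p(u1): 2 or 3 cells forward

new_belief = [0.0] * len(belief)
for x1 in range(len(belief)):                 # position after the motion
    for d, p_move in motion_model.items():    # all ways of reaching x1
        x0 = x1 - d                           # required former position
        if 0 <= x0 < len(belief):
            new_belief[x1] += p_move * belief[x0]

print(new_belief)
# -> [0.0, 0.0, 0.125, 0.25, 0.25, 0.25, 0.125, 0.0, 0.0, 0.0]
#    only cells 2..6 are reachable, as stated above, and the values sum to 1
```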


39 Markov localization: Perception update


 Let us now assume that the robot uses its onboard range finder and measures the distance
from the origin. Assume that the statistical error model of the sensor is the following:

[Figure: sensor model p(z_t | x_t)]

This plot tells us that the distance of the robot from the origin can equally be 5 or 6 units.

 What will the final robot belief be after this measurement? The answer is again given by the
Bayes rule (a numeric continuation of the example is sketched below):

   bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)
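Continuing the numeric sketch with the perception update described above. The sensor model gives equal likelihood 0.5 to distances 5 and 6; assigning zero likelihood everywhere else is an assumption made here for illustration, since only the plot is shown on the slide.

```python
# Perception update: multiply the predicted belief by the sensor likelihood and normalize.
belief = [0.0, 0.0, 0.125, 0.25, 0.25, 0.25, 0.125, 0.0, 0.0, 0.0]   # after the action update
likelihood = [0.0] * 10
likelihood[5] = likelihood[6] = 0.5                                   # p(z | x): 5 or 6 units

posterior = [l * b for l, b in zip(likelihood, belief)]
eta = 1.0 / sum(posterior)                    # normalization factor eta = 1 / p(z)
belief = [eta * p for p in posterior]

print([round(p, 3) for p in belief])
# -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.667, 0.333, 0.0, 0.0, 0.0]
```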

40 Markov Localization: Case Study 1 – Grid Map (1)


 Example 2: Museum
 Laser scan 1
(Courtesy of W. Burgard)


41 Markov Localization: Case Study 1 – Grid Map (2)


 Example 2: Museum
 Laser scan 2
(Courtesy of W. Burgard)


42 Markov Localization: Case Study 1 – Grid Map (3)


 Example 2: Museum
 Laser scan 3
(Courtesy of W. Burgard)


43 Markov Localization: Case Study 1 – Grid Map (4)


 Example 2: Museum
 Laser scan 13
(Courtesy of W. Burgard)


44 Markov Localization: Case Study 1 – Grid Map (5)


 Example 2: Museum
 Laser scan 21
(Courtesy of W. Burgard)


45 Drawbacks of Markov localization


 In the more general planar motion case the grid map is a three-dimensional array where each
cell contains the probability of the robot being in that cell. In this case, the cell size must be
chosen carefully.
 During each prediction and measurement step, all the cells are updated. If the number of
cells in the map is too large, the computation can become too heavy for real-time operation.
 The convolution in a 3D space is clearly the computationally most expensive step.
 As an example, consider a 30 x 30 m environment and a cell size of 0.1 m x 0.1 m x 1 deg. In
this case, the number of cells which need to be updated at each step would be
300 x 300 x 360 = 32.4 million cells!
 Fine fixed-decomposition grids result in a huge state space:
• Very significant processing power needed
• Large memory requirement


46 Drawbacks of Markov localization


 Reducing complexity
 Various approaches have been proposed for reducing complexity:
• One possible solution would be to increase the cell size at the expense of
localization accuracy.
• Another solution is to use an adaptive cell decomposition instead of a fixed
cell decomposition.

 Randomized Sampling / Particle Filter
 The main goal is to reduce the number of states that are updated in each step.
 The belief state is approximated by representing only a 'representative' subset of
all states (possible locations).
 E.g. update only 10% of all possible locations.
 The sampling process is typically weighted, e.g. put more samples around the
local peaks in the probability density function (see the sketch below).
 However, you have to ensure that some less likely locations are still tracked,
otherwise the robot might get lost.
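
A minimal sketch of the weighted-sampling idea from the bullets above: pick which cells to update by sampling them in proportion to the current belief, while reserving a few uniformly drawn cells so that less likely locations are still tracked. The function name, sample counts and mixing ratio are illustrative assumptions, not the particle filter algorithm itself.

```python
import random

def choose_cells_to_update(belief, n_weighted=80, n_uniform=20):
    """Select a 'representative' subset of grid cells to update.

    Cells are drawn with probability proportional to the current belief, so more
    samples land around the local peaks; a few uniformly random cells are added
    so that unlikely locations do not get lost entirely."""
    cells = range(len(belief))
    weighted = random.choices(cells, weights=belief, k=n_weighted)
    uniform = random.choices(cells, k=n_uniform)
    return set(weighted) | set(uniform)

# Example: a 1000-cell belief with a single peak around cell 500
belief = [1.0] * 1000
for c in range(480, 520):
    belief[c] = 50.0
subset = choose_cells_to_update(belief)
print(len(subset), "of", len(belief), "cells selected for update")
```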

47 Markov Localization: Case Study 2 - Topological Map (1)


 The Dervish Robot
 Topological Localization with Sonar


48 Markov Localization: Case Study 2 - Topological Map (2)


 Topological map of office-type environment


49 Markov Localization: Case Study 2 - Topological Map (3)


 Update of belief state for position n given the percept-pair i:

   p(n \mid i) = p(i \mid n)\, p(n)   (up to normalization)

 p(n|i): new likelihood for being in position n
 p(n): current belief state
 p(i|n): probability of seeing i in n (see table)

 No action update!
 However, the robot is moving and therefore we can apply a combination of
action and perception update

 t-i is used instead of t-1 because the topological distance between n' and n
depends strongly on the specific topological map.


50 Markov Localization: Case Study 2 - Topological Map (4)


 The calculation

is realized by multiplying the probability of generating perceptual event i at
position n by the probability of having failed to generate perceptual events
at all nodes between n' and n.


51 Markov Localization: Case Study 2 - Topological Map (5)


 Example calculation
 Assume that the robot has two nonzero belief states
• p(1-2) = 1.0 ; p(2-3) = 0.2 *
and that it is facing east with certainty.
 Perceptual event: open hallway on its left and open door on its right.
 State 2-3 will potentially progress to states 3, 3-4 and 4.
 States 3 and 3-4 can be eliminated because the likelihood of detecting an open door is zero.
 The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the
likelihood of detecting nothing at node 3, and (b) the likelihood of detecting a hallway on the
left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting
nothing at node 3-4 is 1.0.)
 (a) occurs only if Dervish fails to detect the door on its left at node 3 (either closed or open),
[0.6 x 0.4 + (1 - 0.6) x 0.05], and correctly detects nothing on its right, 0.7.
 (b) occurs if Dervish correctly identifies the open hallway on its left at node 4, 0.90, and
mistakes the right hallway for an open door, 0.10.
 This leads to:
• 0.2 x [0.6 x 0.4 + 0.4 x 0.05] x 0.7 x [0.9 x 0.1]  =>  p(4) = 0.003.
• A similar calculation for the progression from 1-2  =>  p(2) = 0.3.

* Note that the probabilities do not sum up to one.
For simplicity, normalization was avoided in this example.
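
A quick check of the arithmetic above, using only the certainty factors quoted in the bullets:

```python
# Likelihood of reaching state 4 (unnormalized, as in the slide).
p_23 = 0.2                                    # initial likelihood of state 2-3
a = (0.6 * 0.4 + (1 - 0.6) * 0.05) * 0.7      # (a): door on the left at node 3 missed, nothing detected on the right
b = 0.9 * 0.1                                 # (b): hallway on the left, right hallway mistaken for a door
p_4 = p_23 * a * b
print(round(p_4, 3))                          # -> 0.003
```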