
Autonomous Mobile Robots

[Figure: general control scheme for mobile robot navigation: Perception builds an environment model (local map), Localization matches it against the global map to estimate the "Position", Cognition plans a path, and Motion Control executes it in the real world]
Localization and
Mapping

Zürich Autonomous Systems Lab


5 - Localization and Mapping
2 Localization and Mapping

• Noise and aliasing; odometric position estimation

• To localize or not to localize

• Belief representation

• Map representation

• Probabilistic map-based localization

• Other examples of localization systems

• Autonomous map building (SLAM)

© R. Siegwart, ETH Zurich - ASL


3 Localization, Where am I?

• Odometry, Dead Reckoning


• Localization based on external sensors,
beacons or landmarks
• Probabilistic Map Based Localization
4 Challenges of Localization
• Knowing the absolute position (e.g. from GPS) is not sufficient

• Localization at human scale, in relation to the environment

• Planning in the cognition step requires more than just the position as input

• Perception and motion play an important role

• Sensor noise
• Sensor aliasing
• Effector noise
• Odometric position estimation

5 Sensor Noise

• Sensor noise is mainly influenced by the environment
  e.g. surface, illumination …

• or by the measurement principle itself
  e.g. interference between ultrasonic sensors

• Sensor noise drastically reduces the useful information in sensor readings.
  The solution is:
  • to take multiple readings into account
  • to employ temporal and/or multi-sensor fusion
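The fusion idea can be made concrete: combining independent readings by inverse-variance weighting always yields a smaller variance than any single reading (a minimal sketch; the function name and the variance values are illustrative, not from the slides):

```python
def fuse_variances(variances):
    """Variance of the inverse-variance-weighted fusion of
    independent measurements with the given variances."""
    return 1.0 / sum(1.0 / v for v in variances)

# Fusing two range readings with variances 0.04 and 0.09 (m^2)
# gives a fused variance smaller than either individual one:
fused = fuse_variances([0.04, 0.09])
```

Averaging N equally noisy readings is the special case where all variances are equal, giving a fused variance of sigma^2 / N.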

6 Sensor Aliasing

• In robots, non-uniqueness of sensor readings is the norm

• Even with multiple sensors, there is a many-to-one mapping from environmental states to the robot's perceptual inputs

• Therefore the amount of information perceived by the sensors is generally insufficient to identify the robot's position from a single reading
• The robot's localization is usually based on a series of readings
• Sufficient information is recovered by the robot over time

7 Effector Noise: Odometry, Deduced Reckoning

• Odometry and dead reckoning:
  Position update is based on proprioceptive sensors
  • Odometry: wheel sensors only
  • Dead reckoning: also heading sensors

• The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to estimate the position.
• Pros: straightforward, easy
• Cons: errors are integrated -> unbounded

• Using additional heading sensors (e.g. a gyroscope) might help to reduce the accumulated errors, but the main problems remain the same.

8 Odometry: Error sources

deterministic (systematic) vs. non-deterministic (non-systematic)

• Deterministic errors can be eliminated by proper calibration of the system.

• Non-deterministic errors have to be described by error models and will always lead to an uncertain position estimate.

• Major error sources:
  • Limited resolution during integration (time increments, measurement resolution)
  • Misalignment of the wheels (deterministic)
  • Unequal wheel diameters (deterministic)
  • Variation in the contact point of the wheel
  • Unequal floor contact (slipping, non-planar surface …)

9 Odometry: Classification of Integration Errors

• Range error: integrated path length (distance) of the robot's movement
  • sum of the wheel movements

• Turn error: similar to range error, but for turns
  • difference of the wheel motions

• Drift error: a difference in the error of the two wheels leads to an error in the robot's angular orientation

• Over long periods of time, turn and drift errors far outweigh range errors!
• Consider moving forward on a straight line along the x axis. The error in the y-position introduced by a move of d meters will have a component of d·sin Δθ, which can be quite large as the angular error Δθ grows.

10 Odometry: The Differential Drive Robot (1)

p = [x, y, θ]ᵀ        p′ = p + [Δx, Δy, Δθ]ᵀ
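A minimal sketch of this pose update for a differential-drive robot; the wheel increments Δs_r, Δs_l and the wheelbase b follow the kinematics on the next slides, and the function name is mine:

```python
import math

def odometry_update(x, y, theta, ds_r, ds_l, b):
    """Integrate one encoder step of a differential-drive robot.
    ds_r, ds_l: wheel path increments; b: distance between the wheels."""
    ds = (ds_r + ds_l) / 2.0          # path increment of the robot center
    dtheta = (ds_r - ds_l) / b        # change of heading
    # Evaluate the heading at the middle of the step (midpoint rule)
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta

# Driving straight (equal wheel increments) only changes x here:
pose = odometry_update(0.0, 0.0, 0.0, 0.1, 0.1, 0.5)
```

Calling this once per encoder sample integrates the motion; as the slides stress, the errors of each step are integrated as well.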

11 Odometry: The Differential Drive Robot (2)
• Kinematics

12 Odometry: The Differential Drive Robot (3)
• Error model

13 Odometry: Growth of Pose uncertainty for Straight Line Movement
• Note: errors perpendicular to the direction of movement grow much faster!

14 Odometry: Growth of Pose uncertainty for Movement on a Circle (1st hour, 7.4.2008)

• Note: the error ellipse does not remain perpendicular to the direction of movement!

15 Odometry: example of non-Gaussian error model

• Note: errors are not shaped like ellipses!

Courtesy AI Lab, Stanford

[Fox, Thrun, Burgard, Dellaert, 2000]


16 Odometry: Calibration of Errors I (Borenstein [5])
• The unidirectional square path experiment

[Figure 1 from Borenstein]

17 Odometry: Calibration of Errors II (Borenstein [5])
• The bi-directional square path experiment

[Figures 2 and 3 from Borenstein]

18 Odometry: Calibration of Errors III (Borenstein [5])
• The deterministic and non-deterministic errors

19 To localize or not?

• How to navigate between A and B:
  • navigation without hitting obstacles
  • detection of the goal location
• Possible by always following the left wall
• However, how to detect that the goal is reached?

20 Behavior Based Navigation

21 Model Based Navigation

22 Belief Representation
• a) Continuous map with a single hypothesis

• b) Continuous map with multiple hypotheses

• c) Discretized map with a probability distribution

• d) Discretized topological map with a probability distribution

23 Belief Representation: Characteristics

• Continuous
  • Precision bound by sensor data
  • Typically single-hypothesis pose estimate
  • Lost when diverging (for single hypothesis)
  • Compact representation and typically reasonable in processing power

• Discrete
  • Precision bound by the resolution of the discretisation
  • Typically multiple-hypothesis pose estimate
  • Never lost (when it diverges it converges to another cell)
  • Large memory and processing power needed (not the case for topological maps)

24 Bayesian Approach: A taxonomy of probabilistic models

Courtesy of Julien Diard

[Figure: taxonomy of probabilistic models, from more general to more specific: Bayesian Programs ⊃ Bayesian Networks ⊃ DBNs ⊃ Bayesian Filters (S: state, O: observation, A: action), which specialize into Markov Chains, Markov Localization, MDPs, POMDPs, MCML, Particle Filters, discrete/semi-continuous/continuous HMMs, and Kalman Filters]
25 Single-hypothesis Belief – Continuous Line-Map

26 Single-hypothesis Belief – Grid and Topological Map

27 Grid-based Representation - Multi Hypothesis
• Grid size around 20 cm².

Courtesy of W. Burgard

28 Map Representation 2nd hour, 7.4.2008

1. Map precision vs. application

2. Features precision vs. map precision

3. Precision vs. computational complexity

• Continuous Representation

• Decomposition (Discretisation)

29 Representation of the Environment

• Environment Representation
  • Continuous metric → x, y, θ
  • Discrete metric → metric grid
  • Discrete topological → topological grid

• Environment Modeling
  • Raw sensor data, e.g. laser range data, grayscale images
    • large volume of data, low distinctiveness on the level of individual values
    • makes use of all acquired information
  • Low-level features, e.g. lines or other geometric features
    • medium volume of data, average distinctiveness
    • filters out the useful information, still ambiguities
  • High-level features, e.g. doors, a car, the Eiffel tower
    • low volume of data, high distinctiveness
    • filters out the useful information, few/no ambiguities, possibly not enough information

30 Map Representation: Continuous Line-Based
a) Architecture map
b) Representation with set of infinite lines

31 Map Representation: Decomposition (1)
• Exact cell decomposition: polygons

32 Map Representation: Decomposition (2)
• Fixed cell decomposition
• Narrow passages disappear

33 Map Representation: Decomposition (3)
• Adaptive cell decomposition

34 Map Representation: Decomposition (4)
• Fixed cell decomposition: example with very small cells

Courtesy of S. Thrun
35 Map Representation: Decomposition (5)
• Topological Decomposition

36 Map Representation: Decomposition (6)
• Topological Decomposition

  node (location)
  edge (connectivity)

37 Map Representation: Decomposition (7)
• Topological Decomposition

38 State-of-the-Art: Current Challenges in Map Representation
• The real world is dynamic

• Perception is still a major challenge
  • Error-prone
  • Extraction of useful information is difficult

• Traversal of open space

• How to build up the topology (boundaries of nodes)

• Sensor fusion

• 2D … 3D

39 Sneak Preview: Particle Filter Localization

Courtesy AI Lab, Stanford

• “Alternative” discretisation of space

• Monte Carlo method: “simulate” random errors, one for each sample of the robot state
• Propagate samples (particles), add and prune as you go along
• Example movies from the Stanford campus

[Fox, Thrun, Burgard, Dellaert, 2000]


40 Localization and Map Building
• Noise and aliasing; odometric position estimation
• To localize or not to localize
• Belief representation
• Map representation
• Probabilistic map-based localization
• Other examples of localization systems
• Autonomous map building

[Figure: general control scheme for mobile robot navigation: Perception builds an environment model (local map), Localization matches it against the global map to estimate the "Position", Cognition plans a path, and Motion Control executes it in the real world]
41 Localization, Where am I?

[Figure: localization schematic: the encoder-based position prediction (e.g. odometry) and the map database yield a predicted position and predicted observations, which are matched against raw sensor data or extracted features from perception; matched observations update the position estimate]

• Odometry, dead reckoning
• Localization based on external sensors, beacons or landmarks
• Probabilistic map-based localization
42 Probabilistic, Map-Based Localization (1)

• Consider a mobile robot moving in a known environment.

• As it starts to move, say from a precisely known location, it can keep track of its location using odometry.

• However, after a certain amount of movement the robot will become very uncertain about its position.
  → update using an observation of its environment.

• The observation also leads to an estimate of the robot's position, which can then be fused with the odometric estimate to get the best possible update of the robot's actual position.

43 Probabilistic, Map-Based Localization (2)
• Action update
  • action model ACT: s′_t = ACT(o_t, s_{t−1})
    with o_t: encoder measurement, s_{t−1}: prior belief state
  • increases uncertainty
• Perception update
  • perception model SEE: s_t = SEE(i_t, s′_t)
    with i_t: exteroceptive sensor inputs, s′_t: updated belief state
  • decreases uncertainty

44 Improving belief state by moving 1st hour, 21.4.2008

45 Probabilistic, Map-Based Localization (3)

• Given
  • the position estimate p̂(k|k)
  • its covariance Σ_p(k|k) for time k,
  • the current control input u(k),
  • the current set of observations Z(k+1), and
  • the map M(k)

• Compute the
  • new (a posteriori) position estimate p̂(k+1|k+1) and
  • its covariance Σ_p(k+1|k+1)

• Such a procedure usually involves five steps:

46 The Five Steps for Map-Based Localization

[Figure: block diagram of the five steps: the encoder-based position prediction (odometry) and the map database yield predicted features and observations, which are matched against the observations from the on-board sensors (raw sensor data or extracted features); matched predictions feed the position estimation (fusion)]

1. Prediction based on the previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on the prediction and the map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
47 Markov ⇔ Kalman Filter Localization

• Markov localization
  • localization starting from any unknown position
  • recovers from ambiguous situations
  • However, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid). The required memory and computing power can thus become very large if a fine grid is used.

• Kalman filter localization
  • tracks the robot and is inherently very precise and efficient
  • However, if the uncertainty of the robot becomes too large (e.g. after a collision with an object) the Kalman filter will fail and the position is definitively lost.

48 Markov Localization (1)

• Markov localization uses an explicit, discrete representation for the probability of all positions in the state space.

• This is usually done by representing the environment by a grid or a topological graph with a finite number of possible states (positions).

• During each update, the probability of each state (element) of the entire space is updated.

49 Markov Localization (2): Applying Bayes theory to robot localization
• P(A): probability that A is true.
  • e.g. p(r_t = l): probability that the robot r is at position l at time t

• We wish to compute the probability of each individual robot position given actions and sensor measurements.

• P(A|B): conditional probability of A given that we know B.
  • e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t.

• Product rule: p(A ∧ B) = p(A|B) p(B) = p(B|A) p(A)

• Bayes rule: p(A|B) = p(B|A) p(A) / p(B)

50 Markov Localization (3): Applying Bayes theory to robot localization

• Bayes rule, mapping from a belief state and a sensor input to a refined belief state (SEE):

  p(l | i) = p(i | l) p(l) / p(i)

  • p(l): belief state before the perceptual update process
  • p(i | l): probability of getting measurement i when being at position l -> MAP
    • consult the robot's map, identify the probability of a certain sensor reading for each possible position in the map
  • p(i): normalization factor so that the sum over all positions l equals 1.
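Over a discrete set of candidate positions, this SEE update is one multiplication per cell plus normalization (a minimal sketch; the prior and likelihood values are illustrative):

```python
def see_update(belief, likelihood):
    """Bayes perception update: p(l|i) = p(i|l) p(l) / p(i)."""
    posterior = [p_i_l * p_l for p_i_l, p_l in zip(likelihood, belief)]
    p_i = sum(posterior)              # normalization factor p(i)
    return [p / p_i for p in posterior]

belief = [0.25, 0.25, 0.25, 0.25]    # uniform prior over 4 positions
likelihood = [0.1, 0.6, 0.2, 0.1]    # p(i | l), read from the map
posterior = see_update(belief, likelihood)
# The posterior sums to one and peaks where the likelihood peaks.
```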

51 Markov Localization (4): Applying Bayes theory to robot localization
• Bayes rule, mapping from a belief state and an action to a new belief state (ACT):

  p(l_t | o_t) = ∫ p(l_t | l′_{t−1}, o_t) p(l′_{t−1}) dl′_{t−1}

  • Summing over all possible ways in which the robot may have reached l.
  • with l_t and l′_{t−1}: positions, o_t: odometric measurement

• Markov assumption: the update only depends on the previous state and the most recent actions and perception.
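On a 1D grid the ACT update becomes a discrete sum over all predecessor cells, i.e. a convolution of the belief with the motion model (a sketch; the slip probabilities in the motion kernel are illustrative):

```python
def act_update(belief, motion_model):
    """p(l_t) = sum over l' of p(l_t | l', o_t) p(l').
    motion_model maps a displacement (in cells) to its probability."""
    n = len(belief)
    new_belief = [0.0] * n
    for l in range(n):
        for delta, prob in motion_model.items():
            l_prev = l - delta
            if 0 <= l_prev < n:
                new_belief[l] += prob * belief[l_prev]
    return new_belief

# Robot commanded one cell to the right, with some slip:
motion_model = {0: 0.1, 1: 0.8, 2: 0.1}
belief = [0.0, 1.0, 0.0, 0.0, 0.0]
new_belief = act_update(belief, motion_model)  # mass spreads around cell 2
```

Note that the update only spreads the probability mass; it is the SEE update that concentrates it again.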

52 Markov Localization: Case Study 1 - Topological Map (1)
• The Dervish robot
• Topological localization with sonar

53 Markov Localization: Case Study 1 - Topological Map (2)
• Topological map of an office-type environment

54 Markov Localization: Case Study 1 - Topological Map (3)
• Update of the belief state for position n given the percept-pair i:

  p(n | i) ∝ p(i | n) p(n)

  • p(n | i): new likelihood for being in position n
  • p(n): current belief state
  • p(i | n): probability of seeing i in n (see table)
• No action update!
• However, the robot is moving, and therefore we can apply a combination of action and perception update

• t−i is used instead of t−1 because the topological distance between n′ and n depends strongly on the specific topological map

55 Markov Localization: Case Study 1 - Topological Map (4)
• The calculation is done by multiplying the probability of generating perceptual event i at position n by the probability of having failed to generate perceptual events at all nodes between n′ and n.

56 Markov Localization: Case Study 1 - Topological Map (5)
• Example calculation
  • Assume that the robot has two nonzero belief states
    • p(1-2) = 1.0; p(2-3) = 0.2 *
    and that it is facing east with certainty.
  • Perceptual event: open hallway on its left and open door on its right.
  • State 2-3 will potentially progress to 3 and 3-4, and onward to 4.
  • States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero.
  • The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of detecting nothing at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.)
  • (a) occurs only if Dervish fails to detect the door on its left at node 3 (whether closed or open), [0.6 ⋅ 0.4 + (1−0.6) ⋅ 0.05], and correctly detects nothing on its right, 0.7.
  • (b) occurs if Dervish correctly identifies the open hallway on its left at node 4, 0.90, and mistakes the right hallway for an open door, 0.10.
  • This leads to:
    • 0.2 ⋅ [0.6 ⋅ 0.4 + 0.4 ⋅ 0.05] ⋅ 0.7 ⋅ [0.9 ⋅ 0.1] → p(4) ≈ 0.003.
    • A similar calculation for the progress from 1-2 gives p(2) = 0.3.

* Note that the probabilities do not sum up to one. For simplicity, normalization was avoided in this example.
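The arithmetic of this example can be checked directly (numbers taken from the calculation above):

```python
# (a) fail to detect the door on the left at node 3 (closed or open),
# then correctly detect nothing on the right:
a = (0.6 * 0.4 + (1 - 0.6) * 0.05) * 0.7
# (b) hallway correctly detected on the left at node 4, and the right
# hallway mistaken for an open door:
b = 0.9 * 0.1
p4 = 0.2 * a * b          # initial belief p(2-3) = 0.2
# p4 rounds to 0.003, as stated on the slide.
```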

57 Markov Localization: Case Study 1 - Topological Map (6)
2nd hour, 21.4.2008
• Topological map of an office-type environment

58 Markov Localization: Case Study 2 – Grid Map (3)
• The 1D case
  1. Start
     • No knowledge at start; thus we have a uniform probability distribution.
  2. Robot perceives the first pillar
     • Seeing only one pillar, the probability of being at pillar 1, 2 or 3 is equal.
  3. Robot moves
     • The action model allows estimating the new probability distribution based on the previous one and the motion.
  4. Robot perceives the second pillar
     • Based on all prior knowledge, the probability of being at pillar 2 becomes dominant

59 Markov Localization: Case Study 2 – Grid Map (1)
• Fine fixed decomposition grid (x, y, θ): 15 cm × 15 cm × 1°
• Action and perception update

• Action update:
  • sum over previous possible positions and the motion model
  • discrete version of eq. (5.22)

• Perception update:
  • given perception i, what is the probability of being at location l?

Courtesy of W. Burgard

60 Markov Localization: Case Study 2 – Grid Map (2)

• The critical challenge is the calculation of p(i | l)
  • The number of possible sensor readings and geometric contexts is extremely large
  • p(i | l) is computed using a model of the robot's sensor behavior, its position l, and the local environment metric map around l.
  • Assumptions
    • Measurement error can be described by a distribution with a mean
    • Non-zero chance for any measurement

Courtesy of W. Burgard

61 Markov Localization: Case Study 2 – Grid Map (3)

• How to calculate p(i | l): examples
  • Depends largely on the sensor and the environment model
  • E.g.: counting the number of sensor readings that lie in an occupied cell for a given pose (x, y, θ).
  • E.g.: learning p(i | l) from experiments -> what is the probability of seeing a door if the robot stands in front of a door at a certain angle?

62 Markov Localization: Case Study 2 – Grid Map (4)
• Example 1: Office Building

Courtesy of W. Burgard

[Figure: belief states at positions 3, 4 and 5]
63 Markov Localization: Case Study 2 – Grid Map (5)
• Example 2: Museum, laser scan 1
  Courtesy of W. Burgard

64 Markov Localization: Case Study 2 – Grid Map (6)
• Example 2: Museum, laser scan 2
  Courtesy of W. Burgard

65 Markov Localization: Case Study 2 – Grid Map (7)
• Example 2: Museum, laser scan 3
  Courtesy of W. Burgard

66 Markov Localization: Case Study 2 – Grid Map (8)
• Example 2: Museum, laser scan 13
  Courtesy of W. Burgard

67 Markov Localization: Case Study 2 – Grid Map (9)
• Example 2: Museum, laser scan 21
  Courtesy of W. Burgard

68 Markov Localization: Case Study 2 – Grid Map (10)
• Fine fixed decomposition grids result in a huge state space
  • Very high processing power needed
  • Large memory requirement
• Reducing complexity
  • Various approaches have been proposed for reducing complexity
  • The main goal is to reduce the number of states that are updated in each step
• Randomized sampling / particle filter
  • Approximate the belief state by representing only a 'representative' subset of all states (possible locations)
  • E.g. update only 10% of all possible locations
  • The sampling process is typically weighted, e.g. put more samples around the local peaks of the probability density function
  • However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost
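The weighted-sampling idea can be sketched as the core loop of a particle filter in one dimension (a minimal sketch with a made-up Gaussian-shaped measurement likelihood; all names and numbers are mine, not from the slides):

```python
import math
import random

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

def particle_filter_step(particles, move, measure_likelihood, noise=0.05):
    # 1. Propagate every particle through the (noisy) motion model
    predicted = [p + move + random.gauss(0.0, noise) for p in particles]
    # 2. Weight each particle by the likelihood of the current measurement
    weights = [measure_likelihood(p) for p in predicted]
    # 3. Resample: more samples around the peaks, a few elsewhere
    return resample(predicted, weights)

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
# Measurement says: we are near x = 3 (Gaussian-shaped likelihood)
like = lambda p: math.exp(-(p - 3.0) ** 2)
particles = particle_filter_step(particles, move=0.0, measure_likelihood=like)
# After one step the particle cloud concentrates around x = 3.
```

The motion noise added in step 1 is what keeps some diversity in the set, so that less likely locations are still represented.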

69 Kalman Filter Localization

70 Introduction to Kalman Filter (1)
• Two measurements: x₁ with variance σ₁², x₂ with variance σ₂²

• Weighted least squares: minimize S = Σᵢ wᵢ (x̂ − xᵢ)²

• Finding the minimum error: set ∂S/∂x̂ = 0

• After some calculation and rearrangement: x̂ = x₁ + [σ₁²/(σ₁² + σ₂²)] (x₂ − x₁)
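The result of this derivation can be written out directly (a minimal numeric sketch; the measurement values are illustrative):

```python
def fuse(x1, var1, x2, var2):
    """Optimally fuse two scalar measurements of the same quantity."""
    k = var1 / (var1 + var2)          # weight of the second measurement
    x = x1 + k * (x2 - x1)            # fused estimate
    var = var1 - k * var1             # = (1/var1 + 1/var2)^-1
    return x, var

x, var = fuse(1.0, 0.04, 1.2, 0.09)
# The fused estimate lies between the measurements, closer to the
# more certain one, and its variance is smaller than either input.
```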

71 Introduction to Kalman Filter (2)
• In Kalman filter notation

72 Introduction to Kalman Filter (3)
1st hour, 28.4.2008
• Dynamic prediction (robot moving)
  u = velocity, w = noise

• Motion

• Combining fusion and dynamic prediction

73 The Five Steps for Map-Based Localization

[Figure: block diagram of the five steps: the encoder-based position prediction (odometry) and the map database yield predicted features and observations, which are matched against the observations from the on-board sensors (raw sensor data or extracted features); matched predictions feed the position estimation (fusion)]

1. Prediction based on the previous estimate and odometry
2. Observation with on-board sensors
3. Measurement prediction based on the prediction and the map
4. Matching of observation and map
5. Estimation -> position update (a posteriori position)
74 Kalman Filter for Mobile Robot Localization: Robot Position Prediction
• In a first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k):

  p̂(k+1|k) = f(p̂(k|k), u(k))        (f: odometry function)

  Σ_p(k+1|k) = ∇_p f · Σ_p(k|k) · ∇_p fᵀ + ∇_u f · Σ_u(k) · ∇_u fᵀ

75 Kalman Filter for Mobile Robot Localization: Robot Position Prediction: Example

Odometry:

p̂(k+1|k) = p̂(k|k) + u(k)
         = [x̂(k), ŷ(k), θ̂(k)]ᵀ + [ ((Δs_r + Δs_l)/2) · cos(θ + (Δs_r − Δs_l)/(2b)),
                                    ((Δs_r + Δs_l)/2) · sin(θ + (Δs_r − Δs_l)/(2b)),
                                    (Δs_r − Δs_l)/b ]ᵀ

Σ_p(k+1|k) = ∇_p f · Σ_p(k|k) · ∇_p fᵀ + ∇_u f · Σ_u(k) · ∇_u fᵀ

Σ_u(k) = [ k_r Δs_r   0 ;
           0   k_l Δs_l ]
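The covariance propagation above can be checked numerically: build the Jacobians ∇_p f and ∇_u f of the odometry function by finite differences and apply the formula (a sketch; the error-growth constants kr, kl and all numeric values are illustrative):

```python
import math

def f(x, y, theta, dsr, dsl, b):
    """Differential-drive odometry function."""
    ds, dth = (dsr + dsl) / 2.0, (dsr - dsl) / b
    return (x + ds * math.cos(theta + dth / 2.0),
            y + ds * math.sin(theta + dth / 2.0),
            theta + dth)

def jacobian(func, args, idx, eps=1e-6):
    """Finite-difference Jacobian of func w.r.t. the arguments in idx."""
    f0 = func(*args)
    cols = []
    for i in idx:
        a = list(args)
        a[i] += eps
        cols.append([(fi - f0i) / eps for fi, f0i in zip(func(*a), f0)])
    # transpose: rows = output components, columns = perturbed inputs
    return [list(row) for row in zip(*cols)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

T = lambda M: [list(r) for r in zip(*M)]

args = (0.0, 0.0, 0.0, 0.10, 0.08, 0.5)     # pose, wheel increments, b
Fp = jacobian(f, args, idx=(0, 1, 2))       # d f / d p  (3x3)
Fu = jacobian(f, args, idx=(3, 4))          # d f / d u  (3x2)

kr = kl = 0.001                             # error-growth coefficients
Su = [[kr * 0.10, 0.0], [0.0, kl * 0.08]]   # Sigma_u(k)
Sp = [[0.0] * 3 for _ in range(3)]          # start from a certain pose

# Sigma_p(k+1|k) = Fp Sp Fp^T + Fu Su Fu^T
Sp_new = matmul(matmul(Fp, Sp), T(Fp))
FuSu = matmul(matmul(Fu, Su), T(Fu))
Sp_new = [[Sp_new[i][j] + FuSu[i][j] for j in range(3)] for i in range(3)]
```

Starting from a perfectly known pose, the resulting covariance comes entirely from the wheel-increment noise; the heading variance grows as (kr·Δs_r + kl·Δs_l)/b².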

76 Kalman Filter for Mobile Robot Localization: Observation

• The second step is to obtain the observation Z(k+1) (measurements) from the robot's sensors at the new location at time k+1.
• The observation usually consists of a set of n₀ single observations z_j(k+1) extracted from the different sensor signals. It can represent raw data scans as well as features like lines, doors or any kind of landmarks.
• The parameters of the targets are usually observed in the sensor frame {S}.
  • Therefore the observations have to be transformed to the world frame {W}, or
  • the measurement predictions have to be transformed to the sensor frame {S}.
  • This transformation is specified in the function h_i (see later).

77 Kalman Filter for Mobile Robot Localization: Observation: Example

Raw data of the laser scanner → extracted lines → extracted lines in model space

line j, with parameters α_j and r_j:

z_j(k+1) = [α_j, r_j]ᵀ        (in the sensor/robot frame R)

Σ_R,j(k+1) = [ σ_αα  σ_αr ;
               σ_rα  σ_rr ]_j
78 Kalman Filter for Mobile Robot Localization: Measurement Prediction

• In the next step we use the predicted robot position p̂(k+1|k) and the map M(k) to generate multiple predicted observations z_t.
• They have to be transformed into the sensor frame:

  ẑ_i(k+1) = h_i(z_t, p̂(k+1|k))

• We can now define the measurement prediction as the set containing all n_i predicted observations:

  Ẑ(k+1) = { ẑ_i(k+1) | 1 ≤ i ≤ n_i }

• The function h_i is mainly the coordinate transformation between the world frame and the sensor frame

79 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
• For the prediction, only the walls that are in the field of view of the robot are selected.
• This is done by linking the individual lines to the nodes of the path

80 Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example
• The generated measurement predictions have to be transformed to the robot frame {R}

• According to the figure on the previous slide, the transformation is given by h_i, and its Jacobian by ∇h_i

81 Kalman Filter for Mobile Robot Localization: Matching

• Assignment of observations z_j(k+1) (gained by the sensors) to the targets z_t (stored in the map)
• For each measurement prediction for which a corresponding observation is found, we calculate the innovation:

  v_ij(k+1) = z_j(k+1) − ẑ_i(k+1)

  and its innovation covariance, found by applying the error propagation law:

  Σ_IN,ij(k+1) = Σ_R,j(k+1) + ∇h_i · Σ_p(k+1|k) · ∇h_iᵀ

• The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance:

  v_ijᵀ(k+1) · Σ_IN,ij⁻¹(k+1) · v_ij(k+1) ≤ g²

82 Kalman Filter for Mobile Robot Localization: Matching: Example

83 Kalman Filter for Mobile Robot Localization: Matching: Example
• To find correspondences (pairs) of predicted and observed features we use the Mahalanobis distance

  v_ijᵀ(k+1) · Σ_IN,ij⁻¹(k+1) · v_ij(k+1) ≤ g²

  with v_ij(k+1) = z_j(k+1) − ẑ_i(k+1)
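For 2D line features (α, r), the matching test reduces to a 2×2 computation (a sketch; the gate value g, the innovation and the covariance numbers are illustrative):

```python
def mahalanobis_sq(v, S):
    """v^T S^-1 v for a 2D innovation v and 2x2 covariance S."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]
    w = [Sinv[0][0] * v[0] + Sinv[0][1] * v[1],
         Sinv[1][0] * v[0] + Sinv[1][1] * v[1]]
    return v[0] * w[0] + v[1] * w[1]

g = 2.0                                    # validation-gate threshold
v = [0.02, 0.05]                           # innovation (alpha, r)
S = [[0.01, 0.0], [0.0, 0.04]]             # innovation covariance
matched = mahalanobis_sq(v, S) <= g ** 2   # accept the pairing?
```

Only observation/prediction pairs that pass this gate are passed on to the estimation step.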

84 Kalman Filter for Mobile Robot Localization: Estimation: Applying the Kalman Filter
• Kalman filter gain:

  K(k+1) = Σ_p(k+1|k) · ∇hᵀ · Σ_IN⁻¹(k+1)

• Update of the robot's position estimate:

  p̂(k+1|k+1) = p̂(k+1|k) + K(k+1) · v(k+1)

• The associated variance:

  Σ_p(k+1|k+1) = Σ_p(k+1|k) − K(k+1) · Σ_IN(k+1) · Kᵀ(k+1)
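In the scalar case (h = identity) the three equations above collapse to a few lines (a minimal sketch; the numbers are illustrative):

```python
def kalman_update(x_pred, var_pred, z, var_meas):
    """Scalar estimation step: gain, state and variance update."""
    s = var_pred + var_meas             # innovation covariance Sigma_IN
    k = var_pred / s                    # Kalman gain K
    x = x_pred + k * (z - x_pred)       # position update with innovation v
    var = var_pred - k * s * k          # = var_pred - K * Sigma_IN * K^T
    return x, var

x, var = kalman_update(x_pred=2.0, var_pred=0.25, z=2.3, var_meas=0.25)
# With equal variances the gain is 0.5: the estimate lands halfway
# between prediction and measurement, and the variance is halved.
```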

85 Kalman Filter for Mobile Robot Localization: Estimation: 1D Case

• For the one-dimensional case it can be shown that this estimation corresponds to the one-dimensional Kalman filter presented earlier.

86 Kalman Filter for Mobile Robot Localization: Estimation: Example

• Kalman filter estimation of the new robot position p̂(k+1|k+1):
  • By fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green) we get the updated estimate of the robot position (red)

87 Autonomous Indoor Navigation (Pygmalion EPFL)
• very robust on-the-fly localization
• one of the first systems with probabilistic sensor fusion
• 47 steps, 78 meter path length, realistic office environment
• conducted 16 times, over 1 km overall distance
• partially difficult surfaces (laser), partially few vertical edges (vision)

92 Other Localization Methods (not probabilistic): Positioning Beacon Systems: Triangulation

93 Other Localization Methods (not probabilistic): Positioning Beacon Systems: Triangulation

97 Autonomous Map Building (SLAM)

Starting from an arbitrary initial point, a mobile robot should be able to autonomously explore the environment with its on-board sensors, gain knowledge about it, interpret the scene, build an appropriate map and localize itself relative to this map.

SLAM:
the Simultaneous Localization and Mapping problem

99 Map Building: The Problems

1. Map maintaining: keeping track of changes in the environment
   • e.g. a disappearing cupboard
   • e.g. a measure of belief for each environment feature

2. Representation and reduction of uncertainty
   • position of robot -> position of wall
   • position of wall -> position of robot
   • probability densities for feature positions
   • additional exploration strategies

100 General Map Building Schematics

101 Map Representation
ƒ M is a set n of probabilistic feature locations
ƒ Each feature is represented by the covariance matrix Σt and an associated
credibility factor ct

ƒ ct is between 0 and 1 and quantifies the belief in the existence of the


feature in the environment

ƒ a and b define the learning and forgetting rate and ns and nu are the
number of matched and unobserved predictions up to time k, respectively.
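A minimal sketch of this update, assuming the exponential credibility rule from Siegwart & Nourbakhsh's textbook formulation (the function name and the clamping to [0, 1] are illustrative choices):

```python
import math

def credibility(n_s, n_u, a, b):
    """Belief in the existence of a feature: matched predictions n_s
    raise it at learning rate a, unobserved predictions n_u lower it
    at forgetting rate b. Result clamped to [0, 1]."""
    c = 1.0 - math.exp(-(n_s / a - n_u / b))
    return max(0.0, min(1.0, c))
```

A feature matched repeatedly approaches credibility 1; one whose predicted observations keep failing decays toward 0 and can eventually be pruned from the map.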

102 Autonomous Map Building: Stochastic Map Technique
ƒ Stacked system state vector:

ƒ State covariance matrix:
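As a sketch (names are illustrative), the stacked state concatenates the robot pose with all landmark positions, and the single covariance matrix carries all cross-correlations:

```python
import numpy as np

def stacked_state(robot_pose, landmarks):
    """Build the stacked system state [x_r; m_1; ...; m_n] for a 3-DoF
    robot pose and n 2D landmarks. The (3+2n)x(3+2n) covariance is NOT
    block-diagonal in general: off-diagonal blocks hold robot-landmark
    and landmark-landmark cross-covariances maintained by the filter."""
    x = np.concatenate([np.asarray(robot_pose, float)]
                       + [np.asarray(m, float) for m in landmarks])
    P = np.zeros((x.size, x.size))  # cross-terms get filled in by the EKF
    return x, P
```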

103 Particle Filter SLAM 1
ƒ Overview
ƒ Simultaneous localization and mapping (SLAM) is a technique used by robots and
autonomous vehicles to build up a map within an unknown environment while at the
same time keeping track of their current position. This is not as straightforward as it might
sound due to inherent uncertainties in discerning the robot's relative movement from its
various sensors. If at the next iteration of map building the measured distance and
direction traveled has a slight inaccuracy, then any features being added to the map will
contain corresponding errors. If unchecked, these positional errors build cumulatively,
grossly distorting the map and therefore the robot's ability to know its precise location.
There are various techniques to compensate for this such as recognizing features that it
has come across previously and re-skewing recent parts of the map to make sure the two
instances of that feature become one. Some of the statistical techniques used in SLAM
include Kalman filters, particle filters and scan matching of range data.
ƒ Sensors characteristics
ƒ A sensor is characterized principally by:
1. Noise
2. Dimensionality of input
a. Planar laser range finder (2D points)
b. 3D laser range finder (3D point cloud)
c. Camera features, …
3. Frame of reference
a. Laser/camera in the robot frame
b. GPS in an earth coordinate frame
c. Accelerometers/gyros in an inertial coordinate frame

104 Particle Filter SLAM 2
ƒ SLAM problem
ƒ The SLAM problem is addressed probabilistically; SLAM is usually explained by a conditional probability.

ƒ An online SLAM algorithm factorizes this probability to estimate the robot state at the current time t.
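In its standard form (e.g., as presented in probabilistic robotics texts), the full SLAM posterior and the online SLAM posterior obtained by marginalizing out past poses are:

```latex
% Full SLAM: entire trajectory x_{1:t} and map m
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})

% Online SLAM: only the current pose x_t and the map,
% obtained by integrating out the past poses
p(x_t, m \mid z_{1:t}, u_{1:t})
  = \int \cdots \int p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \, dx_1 \cdots dx_{t-1}
```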

105 Particle Filter SLAM 3
ƒ FastSLAM approach
ƒ It solves the SLAM problem using particle filters. Particle filters are
mathematical models that represent a probability distribution as a set of
discrete particles occupying the state space.

ƒ Particle filter update

ƒ In the update step, a new particle distribution is generated given the motion model and the controls applied:
a) For each particle:
   1. Compare the particle's prediction of the measurements with the actual measurements
   2. Particles whose predictions match the measurements are given a high weight
b) Filter resampling:
   • Assign each particle a weight depending on how well its estimate of the state agrees with the measurements, then randomly draw particles from the previous distribution based on these weights, creating the new distribution.
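The weighting step (a) and resampling step (b) can be sketched for a scalar state as follows; the Gaussian likelihood and the measurement function `h` are illustrative assumptions, not the only possible choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_and_resample(particles, h, z, meas_std):
    """(a) Weight each particle by how well its predicted measurement
    h(particle) matches the actual measurement z (Gaussian likelihood);
    (b) resample particles with probability proportional to weight."""
    pred = np.array([h(p) for p in particles])
    w = np.exp(-0.5 * ((pred - z) / meas_std) ** 2)
    w /= w.sum()                      # normalize weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Particles whose predictions disagree strongly with z receive near-zero weight and are very unlikely to survive the resampling.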
106 Particle Filter SLAM 4
ƒ FastSLAM approach (continuation)
ƒ Formulation
ƒ Particle filter SLAM (or FastSLAM) decouples the map of features from the pose:
• 1. Each particle represents a robot pose
• 2. Feature measurements are correlated through the robot pose
ƒ If the robot pose were known, all of the features would be uncorrelated, so
FastSLAM treats each pose particle as if it were the true pose and processes all
of the feature measurements independently.

107 Particle Filter SLAM 5
ƒ FastSLAM approach (continuation)

ƒ It is possible to factorize the previous formula (Murphy, 1999) as:
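In its standard form, this Rao-Blackwellized factorization separates the trajectory posterior from M independent per-landmark estimators:

```latex
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
  = p(x_{1:t} \mid z_{1:t}, u_{1:t})
    \prod_{i=1}^{M} p(m_i \mid x_{1:t}, z_{1:t})
```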

108 Particle Filter SLAM 6
ƒ FastSLAM approach (continuation)
ƒ Particle set
• The robot posterior is solved using Rao-Blackwellized particle filtering with
landmarks. Each landmark estimate is represented by a 2x2 EKF (Extended
Kalman Filter). Each particle is "independent" of the others (due to the
factorization) and maintains its own estimate of the M landmark positions.
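A sketch of the resulting particle data structure (names are illustrative): each particle carries its own pose hypothesis and its own map of small, independent per-landmark EKFs:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Particle:
    """One FastSLAM particle: a robot pose hypothesis plus its own map.
    Each landmark is an independent little EKF with a 2D mean and a 2x2
    covariance; independence across landmarks holds because the pose is
    treated as known within the particle (Rao-Blackwellization)."""
    pose: np.ndarray
    landmarks: dict = field(default_factory=dict)  # id -> (mean, cov)

    def add_landmark(self, lid, mean, cov):
        self.landmarks[lid] = (np.asarray(mean, float),
                               np.asarray(cov, float))
```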

110 Autonomous Map Building: Example of Feature Based Mapping (ETH)

111 Probabilistic Feature Based 3D Maps

Processing pipeline (the slide shows a photo of the scene and a raw 3D scan of the same scene):

1. raw data: take a raw 3D scan of the scene
2. decompose space into grid cells and fill the cells with data
3. find a plane for every cell using RANSAC (one plane per grid cell)
4. fuse similar neighboring planes together
5. result: segmented planar segments, the final segmentation
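The per-cell RANSAC plane fit can be sketched as follows; the iteration count and inlier tolerance are illustrative values, not those used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_plane(points, n_iters=100, tol=0.02):
    """Fit a plane (unit normal n, offset d with n.x = d) to a cell's 3D
    points by RANSAC: sample 3 points, build the plane through them,
    count inliers within tol, and keep the best hypothesis."""
    best = (None, None, -1)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:      # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = n @ p0
        inliers = np.abs(points @ n - d) < tol
        if inliers.sum() > best[2]:
            best = (n, d, inliers.sum())
    return best
```

Neighboring cells whose planes have similar normals and offsets can then be fused into larger planar segments.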
112 Probabilistic Feature Based 3D SLAM

(figures: close-up of a reconstructed hallway; close-up of reconstructed bookshelves)

The same experiment as before, but this time planar segments are visualized and integrated into the estimation process.
113 Cyclic Environments
Courtesy of Sebastian Thrun
ƒ Small local errors accumulate to arbitrarily large global errors!
ƒ This is usually irrelevant for navigation
ƒ However, when closing loops, the global error does matter

114 Dynamic Environments
ƒ Dynamic changes require continuous mapping

ƒ If extraction of high-level features were possible, mapping in dynamic environments would become significantly more straightforward.
ƒ e.g. distinguishing between a human and a wall
ƒ Environment modeling is a key factor for robustness

115 Map Building: Exploration and Graph Construction

1. Exploration
   (figure legend: explore, on stack, already examined)

2. Graph Construction
   Where to put the nodes?
   ƒ Topology-based: at distinctive locations
      - provides the correct topology
      - must recognize already visited locations
      - backtracking for unexplored openings
   ƒ Metric-based: where features disappear or become visible

Autonomous Mobile Robots

Real Time 3D SLAM with a Single Camera

This presentation is based on the following papers:

• Andrew J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003
• Nicholas Molton, Andrew J. Davison and Ian Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004

118 Outline

ƒ Visual SLAM versus Structure From Motion


ƒ Extended Kalman Filter for First-Order Uncertainty Propagation
ƒ Camera Motion Model
ƒ Matching of Existing Features
ƒ Initialization of new features in the map
ƒ Improving feature matching

119 Structure From Motion (SFM)
Structure from Motion:

ƒ Take some images of the object to reconstruct

ƒ Features (points, lines, …) are extracted from all frames and matched among them

ƒ All images are processed simultaneously

ƒ Both camera motion and 3D structure can be recovered by optimally fusing the overall information, up to a scale factor

Robust solution, but far from being real-time!

120 Visual SLAM

(From Davison’03)

Can be Real-Time !
121 SLAM versus SFM
Repeatable Localization

ƒ Ability to estimate the location of the camera after 10 minutes of motion with the same
accuracy as was possible after 10 seconds.
ƒ Features must be stable, long-term landmarks, not transient ones (as in SFM)

122 State of the System

The State vector and the first-order uncertainty are:

Where:

In the SLAM approach the covariance matrix is not diagonal; that is, the
uncertainty of any feature affects the position estimates of all other features
and of the camera

123 Extended Kalman Filter (EKF)
The state vector X and the Covariance matrix P are updated during camera
motion using an EKF:

ƒ Prediction: a motion model is used to predict where the camera will be in the next time
step (P increases)

ƒ Observation: a new feature is measured (P decreases)
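A minimal sketch of these two steps, using the generic EKF equations with already-linearized models F and H (variable names are illustrative):

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Prediction: propagate the state through the (linearized) motion
    model F; adding the process noise Q grows the uncertainty P."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Observation: fuse measurement z via the Kalman gain K; the
    uncertainty P shrinks."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```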

124 A Motion Model for Smoothly Moving Camera

ƒ Attempts to predict where the camera will be in the next time step

ƒ If the camera is attached to a person, the person's unknown intentions can be statistically modeled

ƒ Most Structure from Motion approaches did not use any motion model!

125 Constant Velocity Model

The unknown intentions, and so unknown accelerations, are taken into account by
modeling the acceleration as a process of zero mean and Gaussian distribution:

By setting the covariance matrix of n to small or large values, we define the smoothness
or rapidity of the motion we expect. In practice, standard deviations of 4 m/s (linear
velocity) and 6 rad/s (angular velocity) were used.
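A 1-D sketch of this model (position-velocity state with white-noise acceleration of standard deviation sigma_a; the discretization is the usual constant-velocity one, and the variable names are mine):

```python
import numpy as np

def cv_predict(p, v, P, dt, sigma_a):
    """Constant-velocity prediction: the unknown acceleration is modeled
    as zero-mean Gaussian noise with std sigma_a. A larger sigma_a
    expects more rapid motion; a smaller one expects smoother motion."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    # process noise from integrating white acceleration noise over dt
    G = np.array([[0.5 * dt**2], [dt]])
    Q = (sigma_a**2) * (G @ G.T)
    x = F @ np.array([p, v])
    P = F @ P @ F.T + Q
    return x[0], x[1], P
```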

126 Selection of Features already in the Map

ƒ By predicting the next camera pose, we can predict where each feature is likely to appear:

127 Selection of Features already in the Map

• At each frame, the features found at the previous step are searched for in the elliptic region
where they are expected to be according to the motion model (Normalized Sum of Squared
Differences is used for matching)

• Large ellipses mean that the feature is difficult to predict, thus the feature inside will
provide more information for camera motion estimation

• Once the features are matched, the entire state of the system is updated according to the EKF
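The matching step can be sketched like this; a rectangular search box stands in for the elliptic region, and the small normalization constant is an illustrative guard:

```python
import numpy as np

def nssd(template, patch):
    """Normalized sum of squared differences between a stored template
    and a candidate patch (lower = better match). Both are normalized to
    zero mean and unit variance to discount lighting changes."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    return np.sum((t - p) ** 2)

def best_match(template, image, search_box):
    """Exhaustive NSSD search restricted to the region predicted by the
    motion model (here a simple (r0, r1, c0, c1) box)."""
    r0, r1, c0, c1 = search_box
    h, w = template.shape
    best, best_rc = np.inf, None
    for r in range(r0, r1 - h + 1):
        for c in range(c0, c1 - w + 1):
            s = nssd(template, image[r:r+h, c:c+w])
            if s < best:
                best, best_rc = s, (r, c)
    return best_rc, best
```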
128 Initialization of New Features in the Database

ƒ The Shi-Tomasi feature is first initialized as a 3D line

ƒ Along this line, 100 possible feature positions are set uniformly in the range 0.5-5 m

ƒ At each time step, each hypothesis is tested by projecting it into the image

ƒ Each matching produces a likelihood for each hypothesis, and their probabilities are recomputed
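The hypothesis bookkeeping can be sketched as a 1-D discrete Bayes filter over depth; the likelihood model here is illustrative:

```python
import numpy as np

# 100 depth hypotheses spread uniformly along the feature's 3D ray
depths = np.linspace(0.5, 5.0, 100)
probs = np.full(100, 1.0 / 100)            # uniform prior over depth

def update_hypotheses(probs, likelihoods):
    """After projecting every hypothesis into the image and matching,
    multiply each prior probability by its match likelihood and
    re-normalize (a discrete Bayes update)."""
    post = probs * likelihoods
    return post / post.sum()
```

After a few frames the distribution peaks around one depth, at which point the feature can be converted into a fully initialized 3D point.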

129 Map Management

ƒ The number of features that should be constantly visible in the image varies (in practice between 6 and 10) according to:
• Localization accuracy
• Computing power available

ƒ When a new feature needs to be added, a detected feature is added only if it is not expected to disappear from the next image

130 Improved Feature Matching
ƒ Up to now, tracked features were treated as 2D templates in image space

ƒ Long-term tracking can be improved by approximating the feature as a locally


planar region on 3D world surfaces

131 Locally Planar 3D Patch Features

132 References
ƒ A. J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003

ƒ N. Molton, A. J. Davison and I. Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004

ƒ A. J. Davison, Y. Gonzalez Cid, N. Kita, Real-Time 3D SLAM with wide-angle vision, IFAC Symposium on Intelligent
Autonomous Vehicles, 2004

ƒ J. Shi and C. Tomasi, Good features to track, CVPR, 1994.
