
[Diagram: localization and mapping in the robot control loop - "Position", Localization, Cognition, Global Map, Local Map, Environment]

5 - Localization and Mapping

Localization and Mapping

Belief representation

Map representation

Localization, Where am I?

• Localization based on external sensors, beacons or landmarks

• Probabilistic Map Based Localization

© R. Siegwart, ETH Zurich - ASL

Challenges of Localization

Knowing the absolute position (e.g. GPS) is not sufficient

Planning in the Cognition step requires more than only position as input

Sensor noise

Sensor aliasing

Effector noise

Odometric position estimation

Sensor Noise

Sensor noise is induced by the environment (e.g. surface, illumination …) or by the sensor itself (e.g. interference between ultrasonic sensors).

The solution is to take multiple readings into account, employing temporal and/or multi-sensor fusion.

Sensor Aliasing

There is a many-to-one mapping from environmental states to the robot's perceptual inputs: a single reading is usually insufficient to identify the robot's position.

The robot's localization is therefore usually based on a series of readings; sufficient information is recovered by the robot over time.

Effector Noise: Odometry, Deduced Reckoning

Position update is based on proprioceptive sensors:

Odometry: wheel sensors only

Dead reckoning: also heading sensors

The movement of the robot, sensed with wheel encoders and/or heading sensors, is integrated to compute the position.

Pros: straightforward, easy

Cons: errors are integrated and therefore unbounded

Using additional heading sensors (e.g. a gyroscope) can help to reduce the cumulated errors, but the main problems remain the same.

Odometry: Error Sources

Errors are either deterministic (systematic) or non-deterministic (non-systematic). Non-deterministic errors have to be described by error models and will always lead to an uncertain position estimate.

Error sources:

- Limited resolution during integration (time increments, measurement resolution)
- Misalignment of the wheels (deterministic)
- Unequal wheel diameter (deterministic)
- Variation in the contact point of the wheel
- Unequal floor contact (slipping, not planar …)

Odometry: Classification of Integration Errors

Range error: integrated path length of the robot's movement (sum of the wheel movements)

Turn error: similar, but for turns (difference of the wheel motions)

Drift error: a difference in the error of the two wheels leads to an error in the robot's angular orientation

Over long periods of time, turn and drift errors far outweigh range errors!

Consider moving forward on a straight line along the x axis. The error in the y-position introduced by a move of d meters will have a component of d·sin(Δθ), which can be quite large as the angular error Δθ grows.
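To get a feel for how fast the d·sin(Δθ) term grows, here is a quick numeric check (the distance and heading error are illustrative values, not from the slide):

```python
import math

# Lateral error after driving d meters straight with heading error dtheta:
#   e_y ≈ d * sin(dtheta)
d = 10.0                    # distance driven [m] (illustrative)
dtheta = math.radians(5.0)  # accumulated heading error [rad] (illustrative)
e_y = d * math.sin(dtheta)
print(round(e_y, 3))        # -> 0.872: almost a meter of lateral error from 5 degrees of drift
```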

Odometry: The Differential Drive Robot (1)

p = [x, y, θ]ᵀ,    p′ = p + [Δx, Δy, Δθ]ᵀ

Odometry: The Differential Drive Robot (2)

Kinematics

Odometry: The Differential Drive Robot (3)

Error model

Odometry: Growth of Pose Uncertainty for Straight-Line Movement

Note: Errors perpendicular to the direction of movement are growing much faster!

Odometry: Growth of Pose Uncertainty for Movement on a Circle (1st hour, 7.4.2008)

Note: the error ellipse does not remain perpendicular to the direction of movement!

Odometry: Example of a Non-Gaussian Error Model

Odometry: Calibration of Errors I (Borenstein [5])

The unidirectional square path experiment


Odometry: Calibration of Errors II (Borenstein [5])

The bi-directional square path experiment

Odometry: Calibration of Errors III (Borenstein [5])

The deterministic and non-deterministic errors

To Localize or Not?

How to navigate without hitting obstacles, and how to detect the goal location?

This is possible without localization, e.g. by always following the left wall.

However, how do we detect that the goal is reached?

Behavior-Based Navigation

Model-Based Navigation

Belief Representation

a) Continuous map with single hypothesis

b) Continuous map with multiple hypotheses

c) Discretized map with probability distribution

d) Discretized topological map with probability distribution

Belief Representation: Characteristics

Continuous:
- typically single-hypothesis pose estimate
- lost when diverging (for single hypothesis)
- compact representation, typically reasonable in processing power

Discrete:
- discretisation of the pose space
- typically multiple-hypothesis pose estimate
- never lost (when diverging it converges to another cell)
- important memory and processing power needed (not the case for topological maps)

Bayesian Approach: A Taxonomy of Probabilistic Models

[Diagram: taxonomy of probabilistic models, from general to more specific - Bayesian Programs ⊃ Bayesian Networks ⊃ DBNs ⊃ Bayesian Filters, with S: state, O: observation, A: action. Bayesian Filters specialize into Markov Chains, Markov Localization (MCML), Kalman Filters, HMMs, MDPs and POMDPs.]

Single-Hypothesis Belief – Continuous Line Map

Single-Hypothesis Belief – Grid and Topological Map

Grid-Based Representation – Multi-Hypothesis

Grid size around 20 cm².

Courtesy of W. Burgard

Map Representation (2nd hour, 7.4.2008)

Continuous Representation

Decomposition (Discretisation)

Representation of the Environment

Environment Representation

Continuous Metric → x, y, θ

Discrete Metric → metric grid

Discrete Topological → topological grid

Environment Modeling

Raw sensor data, e.g. laser range data, grayscale images

• large volume of data, low distinctiveness on the level of individual values

• makes use of all acquired information

Low-level features, e.g. lines or other geometric features

• medium volume of data, average distinctiveness

• filters out the useful information, still ambiguities

High level features, e.g. doors, a car, the Eiffel tower

• low volume of data, high distinctiveness

• filters out the useful information, few/no ambiguities, not enough information

Map Representation: Continuous Line-Based

a) Architecture map

b) Representation with set of infinite lines

Map Representation: Decomposition (1)

Exact cell decomposition - Polygons

Map Representation: Decomposition (2)

Fixed cell decomposition

Narrow passages disappear

Map Representation: Decomposition (3)

Adaptive cell decomposition

Map Representation: Decomposition (4)

Fixed cell decomposition – example with very small cells

Courtesy of S. Thrun

Map Representation: Decomposition (5)

Topological Decomposition

Map Representation: Decomposition (6)

Topological Decomposition

node (location)

edge (connectivity)

Map Representation: Decomposition (7)

Topological Decomposition

State of the Art: Current Challenges in Map Representation

Real world is dynamic

Error prone

Extraction of useful information difficult

Sensor fusion

2D...3D

Sneak Preview: Particle Filter Localization

Monte Carlo method: "simulate" random errors, one for each sample of the robot state.

Propagate the samples (particles), adding and pruning as you go along.

Example movies from the Stanford campus.

Localization and Map Building

Noise and aliasing; odometric position estimation

To localize or not to localize

Belief representation

Map representation

Probabilistic map-based localization

Other examples of localization system

Autonomous map building


Localization, Where am I?

[Diagram: position update loop - Encoder → position prediction (e.g. odometry); Perception → observation (raw sensor data or extracted features); matching of predicted and actual observations against the map database; position update (estimation).]

• Localization based on external sensors, beacons or landmarks

• Probabilistic Map Based Localization

Probabilistic, Map-Based Localization (1)

As the robot starts to move, say from a precisely known location, it can keep track of its position using odometry.

However, after a certain amount of movement the robot will become very uncertain about its position.

→ It must update its position using an observation of its environment.

An observation also yields an estimate of the robot's position, which can then be fused with the odometric estimate to get the best possible update of the robot's actual position.

Probabilistic, Map-Based Localization (2)

Action update: action model ACT (increases uncertainty)

Perception update: perception model SEE (decreases uncertainty)

Improving the Belief State by Moving (1st hour, 21.4.2008)

Probabilistic, Map-Based Localization (3)

Given:

- the position estimate p̂(k|k) and its covariance Σ_p(k|k) for time k,
- the current control input u(k),
- the current set of observations Z(k+1), and
- the map M(k),

compute the new (a posteriori) position estimate p̂(k+1|k+1) and its covariance Σ_p(k+1|k+1).

The Five Steps for Map-Based Localization

[Diagram: Encoder (odometry) → position prediction; predicted features and observations are matched, via the map database, against observations (extracted features) from on-board sensors; matched predictions and observations feed the position estimate (fusion).]

1. Prediction based on previous estimate and odometry

2. Observation with on-board sensors

3. Measurement prediction based on prediction and map

4. Matching of observation and map

5. Estimation → position update (a posteriori position)

Markov ⇔ Kalman Filter Localization

Markov localization:
- localization starting from any unknown position
- recovers from ambiguous situations
- however, updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid); the required memory and calculation power can become very important if a fine grid is used

Kalman filter localization:
- tracks the robot and is inherently very precise and efficient
- however, if the uncertainty of the robot becomes too large (e.g. after a collision with an object), the Kalman filter will fail and the position is definitively lost

Markov Localization (1)

Markov localization uses an explicit, discrete representation of the probability of all positions in the state space, e.g. a grid or a topological graph with a finite number of possible states (positions).

During each update, the probability of each state (element) of the entire space is updated.

Markov Localization (2): Applying Bayes Theory to Robot Localization

P(A): probability that A is true,

e.g. p(r_t = l): probability that the robot r is at position l at time t.

We want to compute the probability of each position given the robot's actions and sensor measurements,

e.g. p(r_t = l | i_t): probability that the robot is at position l given the sensor input i_t.

Product rule: p(A ∧ B) = p(A|B) p(B)

Bayes rule: p(A|B) = p(B|A) p(A) / p(B)

Markov Localization (3): Applying Bayes Theory to Robot Localization

Bayes rule maps a belief state and a sensor input to a refined belief state (SEE):

p(l | i) = p(i | l) p(l) / p(i)

p(i | l): probability of getting measurement i when being at position l → consult the robot's map and identify the probability of that sensor reading for each possible position in the map

p(i): normalization factor, so that the sum over all l ∈ L equals 1.

Markov Localization (4): Applying Bayes Theory to Robot Localization

The theorem of total probability maps a belief state and an action to a new belief state (ACT):

p(l_t | o_t) = Σ_{l′} p(l_t | l′_{t−1}, o_t) p(l′_{t−1})

summing over all possible ways in which the robot may have reached l,

with l_t and l′_{t−1}: positions, and o_t: the odometric measurement.

Markov assumption: the update only depends on the previous state and its most recent actions and perception.
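The ACT (total probability) and SEE (Bayes) updates can be sketched on a small discrete world. This is a minimal illustration, not the lecture's code; the wrap-around 1D grid, the motion model and the measurement likelihoods are all assumed:

```python
# Hypothetical 1D discrete Markov localization step (ACT then SEE).
# p[l] is the belief over positions, motion[dl] the odometry model
# p(l | l', o), and meas[l] the measurement likelihood p(i | l).

def act_update(p, motion):
    """Total-probability step: p'(l) = sum over l' of p(l | l', o) p(l')."""
    n = len(p)
    q = [0.0] * n
    for l in range(n):
        for dl, prob in motion.items():
            q[l] += prob * p[(l - dl) % n]   # wrap-around world for simplicity
    return q

def see_update(p, meas):
    """Bayes step: p'(l) = p(i | l) p(l) / p(i)."""
    q = [m * b for m, b in zip(meas, p)]
    eta = sum(q)                              # normalizer p(i)
    return [v / eta for v in q]

belief = [1.0 / 5] * 5                                  # uniform prior
belief = act_update(belief, {1: 0.8, 0: 0.1, 2: 0.1})   # "move right by 1"
belief = see_update(belief, [0.1, 0.1, 0.9, 0.1, 0.1])  # sensor favors cell 2
```

Applying ACT before SEE in each cycle mirrors the action/perception alternation described above.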

Markov Localization: Case Study 1 - Topological Map (1)

The Dervish Robot

Topological Localization with Sonar

Markov Localization: Case Study 1 - Topological Map (2)

Topological map of office-type environment

Markov Localization: Case Study 1 - Topological Map (3)

Update of the belief state for position n given the percept-pair i:

p(n | i) ∝ p(i | n) p(n)

p(n): current belief state

p(i | n): probability of seeing i in n (see table)

No explicit action update! However, the robot is moving, and therefore we can apply a combination of action and perception update.

t−i is used instead of t−1 because the topological distance between n′ and n depends heavily on the specific topological map.

Markov Localization: Case Study 1 - Topological Map (4)

The calculation weights the belief of arriving at position n by the probability of having failed to generate perceptual event s at all nodes between n′ and n.

Markov Localization: Case Study 1 - Topological Map (5)

Example calculation

Assume that the robot has two nonzero belief states:
• p(1-2) = 1.0 ; p(2-3) = 0.2 *
and that it is facing east with certainty.

Perceptual event: open hallway on its left and open door on its right.

State 2-3 will potentially progress to states 3 and 3-4, and 3-4 to 4.

States 3 and 3-4 can be eliminated because the likelihood of detecting an open door there is zero.

The likelihood of reaching state 4 is the product of the initial likelihood p(2-3) = 0.2, (a) the likelihood of detecting nothing at node 3, and (b) the likelihood of detecting a hallway on the left and a door on the right at node 4. (For simplicity we assume that the likelihood of detecting nothing at node 3-4 is 1.0.)

(a) occurs only if Dervish fails to detect the door on its left at node 3 (either closed or open), [0.6 ⋅ 0.4 + (1 − 0.6) ⋅ 0.05], and correctly detects nothing on its right, 0.7.

(b) occurs if Dervish correctly identifies the open hallway on its left at node 4, 0.90, and mistakes the right hallway for an open door, 0.10.

This leads to:
• 0.2 ⋅ [0.6 ⋅ 0.4 + 0.4 ⋅ 0.05] ⋅ 0.7 ⋅ [0.9 ⋅ 0.1] → p(4) = 0.003.
• A similar calculation for the progress from 1-2 → p(2) = 0.3.

* Note that the probabilities do not sum to one. For simplicity, normalization was avoided in this example.
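The arithmetic of the example can be checked directly:

```python
# Reproducing the Dervish belief computation from the example above.
p_23 = 0.2                  # initial belief of state 2-3
a = 0.6 * 0.4 + 0.4 * 0.05  # (a) fails to detect the door on its left at node 3
nothing_right = 0.7         # ... and correctly detects nothing on its right
b = 0.9 * 0.1               # (b) hallway on the left, right hallway taken for a door
p_4 = p_23 * a * nothing_right * b
print(round(p_4, 3))        # -> 0.003
```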

Markov Localization: Case Study 1 - Topological Map (6)

2nd hour, 21.4.2008

Topological map of office-type environment

Markov Localization: Case Study 2 – Grid Map (3)

The 1D case

1. Start
   No knowledge at start, thus we have a uniform probability distribution.

2. Robot perceives the first pillar
   Seeing only one pillar, the probability of being at pillar 1, 2 or 3 is equal.

3. Robot moves
   The action model allows estimating the new probability distribution based on the previous one and the motion.

4. Robot perceives the second pillar
   Based on all prior knowledge, the probability of being at pillar 2 becomes dominant.
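The four steps can be sketched as a minimal 1D histogram filter. The world layout (12 cells, pillars at cells 0, 3 and 8), the exact-motion assumption and the sensor likelihoods are all illustrative assumptions, not values from the lecture:

```python
# A minimal 1D histogram-filter version of the pillar example.
N = 12
pillars = {0, 3, 8}
z_hit, z_miss = 0.9, 0.05          # assumed sensor likelihoods

def normalize(p):
    s = sum(p)
    return [v / s for v in p]

def sense_pillar(p):
    """Perception update: high likelihood in cells that contain a pillar."""
    return normalize([v * (z_hit if i in pillars else z_miss)
                      for i, v in enumerate(p)])

def move(p, d):
    """Action update for an (assumed) exact move of d cells to the right."""
    return [p[i - d] if 0 <= i - d < N else 0.0 for i in range(N)]

belief = [1.0 / N] * N              # 1. start: uniform, no knowledge
belief = sense_pillar(belief)       # 2. first pillar: cells 0, 3, 8 equally likely
belief = normalize(move(belief, 3)) # 3. robot moves 3 cells
belief = sense_pillar(belief)       # 4. second pillar: cell 3 (pillar 2) dominates
```

After the second perception update, the cell of pillar 2 carries the dominant probability mass, matching step 4.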

Markov Localization: Case Study 2 – Grid Map (1)

Fine fixed decomposition grid (x, y, θ), 15 cm x 15 cm x 1°

Action and perception update

Action update:

Sum over previous possible positions

and motion model

Perception update:

Given perception i, what is the probability of being at location l?

Courtesy of

W. Burgard

Markov Localization: Case Study 2 – Grid Map (2)

The number of possible sensor readings and geometric contexts is extremely large

p(i | l) is computed using a model of the robot's sensor behavior, its position l, and the local environment metric map around l.

Assumptions

• Measurement error can be described by a distribution with a mean

• Non-zero chance for any measurement

Courtesy of

W. Burgard

Markov Localization: Case Study 2 – Grid Map (3)

How to calculate p(i | l) - examples. [Figure: sensor model, with seen and modeled/updated readings.]

It depends largely on the sensor and the environment model.

E.g.: counting the number of sensor readings that lie in an occupied cell for a given pose (x, y, θ).

E.g.: learning p(i | l) from experiments → what is the probability of seeing a door if the robot stands in front of a door at a certain angle?

Markov Localization: Case Study 2 – Grid Map (4)

Example 1: Office Building

Courtesy of

W. Burgard

Position 5

Position 3

Position 4

Markov Localization: Case Study 2 – Grid Map (5)

Example 2: Museum, laser scan 1 (courtesy of W. Burgard)

Markov Localization: Case Study 2 – Grid Map (6)

Example 2: Museum, laser scan 2 (courtesy of W. Burgard)

Markov Localization: Case Study 2 – Grid Map (7)

Example 2: Museum, laser scan 3 (courtesy of W. Burgard)

Markov Localization: Case Study 2 – Grid Map (8)

Example 2: Museum, laser scan 13 (courtesy of W. Burgard)

Markov Localization: Case Study 2 – Grid Map (9)

Example 2: Museum, laser scan 21 (courtesy of W. Burgard)

Markov Localization: Case Study 2 – Grid Map (10)

Fine fixed-decomposition grids result in a huge state space:
- very high processing power needed
- large memory requirement

Reducing complexity

Various approaches have been proposed for reducing complexity. The main goal is to reduce the number of states that are updated in each step.

Randomized sampling / particle filter
- The belief state is approximated by representing only a 'representative' subset of all states (possible locations), e.g. updating only 10% of all possible locations.
- The sampling process is typically weighted, e.g. put more samples around the local peaks of the probability density function.
- However, you have to ensure that some less likely locations are still tracked, otherwise the robot might get lost.
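The weighted-sampling idea can be sketched as follows; the belief table, the sample counts and the "keep a few uniformly random samples" heuristic are illustrative assumptions:

```python
import random

# Sketch: approximate a dense belief over locations by a weighted sample
# subset, as in particle-filter (Monte Carlo) localization.

def sample_locations(belief, n_samples, n_random=2):
    """Draw samples concentrated on belief peaks, plus a few uniformly
    random ones so that unlikely locations are still tracked."""
    locations = list(belief)
    weights = list(belief.values())
    picked = random.choices(locations, weights=weights,
                            k=n_samples - n_random)
    picked += [random.choice(locations) for _ in range(n_random)]
    return picked

random.seed(0)
belief = {(x, 0): 0.001 for x in range(100)}   # flat background belief
belief[(42, 0)] = 0.5                          # one local peak
samples = sample_locations(belief, 20)         # most samples land near the peak
```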

Kalman Filter Localization

Introduction to Kalman Filter (1)

Two measurements

Weighted least-square
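The two-measurement, weighted least-squares fusion behind this slide can be written out in one dimension (the measurement values and variances below are illustrative):

```python
# Fusing two measurements q1, q2 of the same quantity with variances
# s1, s2 by weighted least squares (the 1D "static" Kalman filter):
#   x = q1 + (s1 / (s1 + s2)) * (q2 - q1)
#   s = s1 * s2 / (s1 + s2)    (equivalently 1/s = 1/s1 + 1/s2)

def fuse(q1, s1, q2, s2):
    k = s1 / (s1 + s2)               # gain: trust q2 more when s2 is small
    x = q1 + k * (q2 - q1)
    s = s1 * s2 / (s1 + s2)
    return x, s

x, s = fuse(10.0, 4.0, 12.0, 1.0)    # second sensor is 4x more certain
print(x, s)                          # -> 11.6 0.8
```

The fused variance is always smaller than either input variance, which is why the perception update decreases uncertainty.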

Introduction to Kalman Filter (2)

In Kalman Filter notation

Introduction to Kalman Filter (3)

1st hour, 28.4.2008

Dynamic Prediction (robot moving)

u = velocity

w = noise

Motion


Kalman Filter for Mobile Robot Localization: Robot Position Prediction

In a first step, the robot's position at time step k+1 is predicted based on its old location (time step k) and its movement due to the control input u(k):

p̂(k+1|k) = f(p̂(k|k), u(k))

Σ_p(k+1|k) = ∇_p f · Σ_p(k|k) · ∇_p fᵀ + ∇_u f · Σ_u(k) · ∇_u fᵀ

Kalman Filter for Mobile Robot Localization: Robot Position Prediction: Example

Odometry:

p̂(k+1|k) = p̂(k|k) + u(k)
         = [x̂(k), ŷ(k), θ̂(k)]ᵀ
           + [ ((Δs_r + Δs_l)/2)·cos(θ + (Δs_r − Δs_l)/(2b)),
               ((Δs_r + Δs_l)/2)·sin(θ + (Δs_r − Δs_l)/(2b)),
               (Δs_r − Δs_l)/b ]ᵀ

Σ_p(k+1|k) = ∇_p f · Σ_p(k|k) · ∇_p fᵀ + ∇_u f · Σ_u(k) · ∇_u fᵀ

Σ_u(k) = [ k_r·|Δs_r|      0
           0          k_l·|Δs_l| ]
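A sketch of this prediction step in code; the wheelbase b and the wheel error constants k_r, k_l are assumed values for illustration:

```python
import math

# Odometry prediction step for a differential-drive robot (a sketch).

def predict(p, P, ds_r, ds_l, b=0.5, kr=0.01, kl=0.01):
    x, y, th = p
    ds = (ds_r + ds_l) / 2.0
    dth = (ds_r - ds_l) / b
    a = th + dth / 2.0
    # pose update p(k+1|k) = p(k|k) + u(k)
    p_new = (x + ds * math.cos(a), y + ds * math.sin(a), th + dth)
    # Jacobians of f w.r.t. the pose and the wheel increments
    Fp = [[1, 0, -ds * math.sin(a)],
          [0, 1,  ds * math.cos(a)],
          [0, 0,  1]]
    Fu = [[0.5 * math.cos(a) - ds / (2 * b) * math.sin(a),
           0.5 * math.cos(a) + ds / (2 * b) * math.sin(a)],
          [0.5 * math.sin(a) + ds / (2 * b) * math.cos(a),
           0.5 * math.sin(a) - ds / (2 * b) * math.cos(a)],
          [1.0 / b, -1.0 / b]]
    U = [[kr * abs(ds_r), 0], [0, kl * abs(ds_l)]]

    def mat(A, B):  # plain matrix product
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(r) for r in zip(*A)]

    # Sigma_p(k+1|k) = Fp Sigma_p Fp^T + Fu Sigma_u Fu^T
    P_new = [[mat(mat(Fp, P), T(Fp))[i][j] + mat(mat(Fu, U), T(Fu))[i][j]
              for j in range(3)] for i in range(3)]
    return p_new, P_new

p, P = predict((0.0, 0.0, 0.0), [[0.0] * 3 for _ in range(3)], 0.11, 0.09)
```

Since Σ_u grows with |Δs|, the predicted covariance accumulates with distance driven, as in the error-growth plots above.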

Kalman Filter for Mobile Robot Localization: Observation

The second step is to obtain observations Z(k+1) from the robot's sensors at the new location at time k+1.

The observation usually consists of a set of n_0 single observations z_j(k+1) extracted from the different sensor signals. It can represent raw data scans as well as features like lines, doors, or any kind of landmarks.

The parameters of the targets are usually observed in the sensor frame {S}. Therefore the observations have to be transformed to the world frame {W}, or the measurement predictions have to be transformed to the sensor frame {S}. This transformation is specified in the function h_i (see later).

Kalman Filter for Mobile Robot Localization: Observation: Example

Laser scanner in model space: line j, with parameters (α_j, r_j) observed in the sensor (robot) frame:

z_j(k+1) = [α_j, r_j]ᵀ

Σ_R,j(k+1) = [ σ_αα   σ_αr
               σ_rα   σ_rr ]_j

Kalman Filter for Mobile Robot Localization: Measurement Prediction

In the next step we use the predicted robot position p̂(k+1|k) and the map M(k) to generate multiple predicted observations ẑ_i. They have to be transformed into the sensor frame.

We can now define the measurement prediction as the set containing all n_i predicted observations:

Ẑ(k+1) = { ẑ_i(k+1) | 1 ≤ i ≤ n_i }

The function h_i is mainly the coordinate transformation between the world frame and the sensor frame.

Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example

For prediction, only the walls that are in the field of view of the robot are selected. This is done by linking the individual lines to the nodes of the path.

Kalman Filter for Mobile Robot Localization: Measurement Prediction: Example

The generated measurement predictions have to be transformed to the robot frame {R}.

Kalman Filter for Mobile Robot Localization: Matching

The goal of matching is to assign the observations to the predicted targets (stored in the map).

For each measurement prediction for which a corresponding observation is found, we calculate the innovation v_ij(k+1) = z_j(k+1) − ẑ_i(k+1); its innovation covariance is found by applying the error propagation law.

The validity of the correspondence between measurement and prediction can e.g. be evaluated through the Mahalanobis distance.
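The gate test can be sketched for a 2D line feature (α, r); the numbers and the gate value g = 3 are illustrative:

```python
# Gating an (alpha, r) line observation against a prediction with the
# squared Mahalanobis distance  v^T S^-1 v <= g^2.

def mahalanobis2(v, S):
    """Squared Mahalanobis distance of a 2D innovation v with covariance S."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]
    a = v[0] * Sinv[0][0] + v[1] * Sinv[1][0]
    b = v[0] * Sinv[0][1] + v[1] * Sinv[1][1]
    return a * v[0] + b * v[1]

z = (0.52, 2.08)                        # observed line (alpha [rad], r [m])
z_hat = (0.50, 2.00)                    # predicted line
v = (z[0] - z_hat[0], z[1] - z_hat[1])  # innovation
S = [[0.01, 0.0], [0.0, 0.04]]          # innovation covariance
d2 = mahalanobis2(v, S)
accepted = d2 <= 3.0 ** 2               # validation gate g = 3
```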

Kalman Filter for Mobile Robot Localization: Matching: Example

Kalman Filter for Mobile Robot Localization: Matching: Example

To find correspondences (pairs) of predicted and observed features, we use the Mahalanobis distance.

Kalman Filter for Mobile Robot Localization: Estimation: Applying the Kalman Filter

Kalman filter gain:

Kalman Filter for Mobile Robot Localization: Estimation: 1D Case

It can be shown that the estimation corresponds to the one-dimensional Kalman filter presented earlier.

Kalman Filter for Mobile Robot Localization: Estimation: Example

By fusing the prediction of the robot position (magenta) with the innovation gained by the measurements (green), we get the updated estimate p̂(k|k) of the robot position (red).

Autonomous Indoor Navigation (Pygmalion EPFL)

- very robust on-the-fly localization
- one of the first systems with probabilistic sensor fusion
- 47 steps, 78 m length, realistic office environment, conducted 16 times: more than 1 km overall distance
- partially difficult surfaces (laser), partially few vertical edges (vision)

Other Localization Methods (not probabilistic): Positioning Beacon Systems: Triangulation

Autonomous Map Building (SLAM)

A mobile robot should be able to autonomously explore the environment with its on-board sensors, gain knowledge about it, interpret the scene, build an appropriate map, and localize itself relative to this map.

SLAM: the Simultaneous Localization and Mapping problem.

Map Building: The Problems

- changes in the environment (e.g. a moved cupboard)
- which environment features to use; reduction of uncertainty
- additional exploration strategies

General Map Building Schematics

Map Representation

M is a set of n probabilistic feature locations. Each feature is represented by the covariance matrix Σ_t and an associated credibility factor c_t that quantifies the belief in the existence of the feature in the environment.

a and b define the learning and forgetting rate, and n_s and n_u are the number of matched and unobserved predictions up to time k, respectively.

Autonomous Map Building: Stochastic Map Technique

Stacked system state vector:

Particle Filter SLAM 1

Overview

Simultaneous localization and mapping (SLAM) is a technique used by robots and autonomous vehicles to build up a map within an unknown environment while at the same time keeping track of their current position. This is not as straightforward as it might sound, due to inherent uncertainties in discerning the robot's relative movement from its various sensors. If at the next iteration of map building the measured distance and direction traveled has a slight inaccuracy, then any features being added to the map will contain corresponding errors. If unchecked, these positional errors build cumulatively, grossly distorting the map and therefore the robot's ability to know its precise location. There are various techniques to compensate for this, such as recognizing features that it has come across previously and re-skewing recent parts of the map to make sure the two instances of that feature become one. Some of the statistical techniques used in SLAM include Kalman filters, particle filters and scan matching of range data.

Sensor characteristics

A sensor is characterized principally by:

1. Noise

2. Dimensionality of input
   a. Planar laser range finder (2D points)
   b. 3D laser range finder (3D point cloud)
   c. Camera features

3. Frame of reference
   a. Laser/camera in robot frame
   b. GPS in Earth coordinate frame
   c. Accelerometer/gyros in inertial coordinate frame

Particle Filter SLAM 2

SLAM problem

The SLAM problem is addressed using probabilities; SLAM is usually explained by a conditional probability over the robot trajectory and the map, given the measurements and controls.

An online SLAM algorithm factorizes this formula to estimate the robot state at the current time t.

Particle Filter SLAM 3

FastSLAM approach

FastSLAM solves the SLAM problem using particle filters. Particle filters are mathematical models that represent a probability distribution as a set of discrete particles which occupy the state space.

In the update step, a new particle distribution is generated given the motion model and the controls applied.

a) For each particle:
   1. Compare the particle's prediction of the measurements with the actual measurements.
   2. Particles whose predictions match the measurements are given a high weight.

b) Filter resampling:
   Assign each particle a weight depending on how well its estimate of the state agrees with the measurements, and randomly draw particles from the previous distribution based on these weights, creating a new distribution.
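The weight-and-resample step can be sketched with toy 1D particles; the Gaussian measurement likelihood below is an assumed stand-in for the per-particle measurement prediction:

```python
import math
import random

def importance_weights(particles, z, sigma=1.0):
    """Weight each particle by how well it predicts the measurement z."""
    return [math.exp(-0.5 * ((p - z) / sigma) ** 2) for p in particles]

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(1)
particles = [0.0, 1.0, 2.0, 3.0, 4.0]    # toy 1D robot states
w = importance_weights(particles, z=2.1)  # particles near 2.1 get high weight
new_set = resample(particles, w)          # survivors cluster around 2.0
```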

Particle Filter SLAM 4

FastSLAM approach (continued)

Formulation

Particle filter SLAM (FastSLAM) decouples the map of features from the pose:
1. Each particle represents a robot pose.
2. Feature measurements are correlated through the robot pose.

If the robot pose were known, all of the features would be uncorrelated, so FastSLAM treats each pose particle as if it were the true pose, processing all of the feature measurements independently.

Particle Filter SLAM 5

FastSLAM approach (continued)

Particle Filter SLAM 6

FastSLAM approach (continued)

Particle set

The robot posterior is solved using Rao-Blackwellized particle filtering over landmarks. Each landmark estimate is represented by a 2×2 EKF (extended Kalman filter). Each particle is "independent" of the others (due to the factorization) and maintains the estimates of M landmark positions.

Autonomous Map Building: Example of Feature-Based Mapping (ETH)

Probabilistic Feature-Based 3D Maps

[Figure: photo of the scene; raw data and raw 3D scan of the same scene; processing pipeline: fill cells with data, find a plane for every cell using RANSAC, merge planes together, final segmentation.]

Probabilistic Feature-Based 3D SLAM

This time, planar segments are visualized and integrated into the estimation process.

Cyclic Environments

Courtesy of Sebastian Thrun

Small local errors accumulate to arbitrarily large global errors!

This is usually irrelevant for navigation; however, when closing loops, global error does matter.

Dynamic Environments

Dynamic changes require continuous mapping.

If a robot could distinguish dynamic from static parts of the scene (e.g. the difference between a human and a wall), mapping in dynamic environments would become significantly more straightforward.

Environment modeling is a key factor for robustness.

Map Building: Exploration and Graph Construction

[Diagram: exploration by graph construction; locations to explore are kept on a stack, others are already examined.]

- provides correct topology
- must recognize already-visited locations
- backtracking for unexplored openings

Metric-based: where features disappear or become visible

Autonomous Mobile Robots

• Andrew J. Davison, "Real-Time Simultaneous Localization and Mapping with a Single Camera", ICCV 2003
• Nicholas Molton, Andrew J. Davison and Ian Reid, "Locally Planar Patch Features for Real-Time Structure from Motion", BMVC 2004

Outline

Extended Kalman Filter for First-Order Uncertainty Propagation

Camera Motion Model

Matching of Existing Features

Initialization of new features in the map

Improving feature matching

Structure From Motion (SFM)

Structure from Motion: reconstruct both the camera motion and the 3D scene structure from features extracted from all frames and matched among them. The overall information is fused simultaneously and optimally, so the solution can be recovered up to a scale factor.

This is typically an offline batch computation, far from being Real-Time!


120 Visual SLAM

(From Davison’03)

Can be Real-Time !


121 SLAM versus SFM

Repeatable Localization: the ability to estimate the location of the camera after 10 minutes of motion with the same accuracy as was possible after 10 seconds.

Features must therefore be stable, long-term landmarks, not transient ones as in SFM.


122 State of the System

The state vector contains the camera state x_v (3D position, orientation, and linear and angular velocity) together with the 3D positions y_i of all map features.

In the SLAM approach the covariance matrix P is not diagonal; that is, the uncertainty of any feature affects the position estimates of all other features and of the camera.
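As a sketch of this structure (notation loosely following Davison's formulation):

```latex
x =
\begin{pmatrix} x_v \\ y_1 \\ \vdots \\ y_n \end{pmatrix},
\qquad
P =
\begin{pmatrix}
P_{xx}    & P_{x y_1}   & \cdots & P_{x y_n}   \\
P_{y_1 x} & P_{y_1 y_1} & \cdots & P_{y_1 y_n} \\
\vdots    &             & \ddots & \vdots      \\
P_{y_n x} & P_{y_n y_1} & \cdots & P_{y_n y_n}
\end{pmatrix}
```

Every off-diagonal block P_{x y_i} and P_{y_i y_j} is in general non-zero; these blocks carry exactly the camera-to-feature and feature-to-feature correlations described above.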


123 Extended Kalman Filter (EKF)

The state vector X and the Covariance matrix P are updated during camera

motion using an EKF:

Prediction: a motion model is used to predict where the camera will be at the next time step (P increases).

Observation: a new feature is measured (P decreases).
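A minimal sketch of the generic EKF predict/update cycle (not the exact camera model of the lecture; f, h and their Jacobians F, H are placeholders for the motion and measurement models):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Prediction step: propagate the state through the motion model f.
    The covariance grows by the process noise Q."""
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q        # F: Jacobian of f at x
    return x_pred, P_pred

def ekf_update(x, P, z, h, H, R):
    """Observation step: fuse a measurement z of h(x).
    The covariance shrinks."""
    y = z - h(x)                    # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Running predict without observations makes P grow without bound, which is exactly why matched features are needed at every frame.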


124 A Motion Model for Smoothly Moving Camera

Attempts to predict where the camera will be at the next time step. Assuming the camera is carried by a person, the unknown intentions of the person can be statistically modeled.

Most Structure from Motion approaches did not use any motion model!


125 Constant Velocity Model

The unknown intentions, and hence unknown accelerations, are taken into account by modeling the acceleration as a zero-mean Gaussian process.

By setting the covariance matrix of the noise n to small or large values, we define the smoothness or rapidity of the motion we expect. In practice these standard deviations were used: 4 m/s for the linear and 6 rad/s for the angular velocity impulse.
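The resulting constant-velocity transition can be written as follows (a sketch following Davison's formulation; V and Ω are the velocity impulses produced by the Gaussian accelerations a, α over one time step Δt, and q(θ) denotes the quaternion of the incremental rotation):

```latex
f_v:\;
\begin{pmatrix} r \\ q \\ v \\ \omega \end{pmatrix}
\longmapsto
\begin{pmatrix}
r + (v + V)\,\Delta t \\
q \times q\big((\omega + \Omega)\,\Delta t\big) \\
v + V \\
\omega + \Omega
\end{pmatrix},
\qquad
\begin{pmatrix} V \\ \Omega \end{pmatrix}
= \begin{pmatrix} a \\ \alpha \end{pmatrix} \Delta t
\;\sim\; \mathcal{N}(0, Q_n)
```

Small entries in Q_n encode an expectation of smooth motion; large entries allow rapid, jerky motion at the price of larger search regions.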


126 Selection of Features already in the Map

Using the motion model, a prediction is made of the image region in which each feature is likely to appear:


127 Selection of Features already in the Map

• At each frame, the features found at the previous step are searched for in the elliptic region where they are expected to be according to the motion model (Normalized Sum of Squared Differences is used for matching).

• Large ellipses mean that the feature is difficult to predict; thus, the feature inside will provide more information for camera motion estimation.

• Once the features are matched, the entire state of the system is updated according to the EKF.
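A minimal sketch of template matching with a normalized SSD score inside a search region (a rectangular box stands in for the elliptic region, and all names are illustrative):

```python
import numpy as np

def normalized_ssd(template, patch):
    """Normalized sum of squared differences between two equally sized
    patches; lower is a better match (0 = identical after mean/std
    normalization)."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    return float(np.sum((t - p) ** 2))

def match_in_region(image, template, center, radii):
    """Exhaustively scan a box of half-sizes `radii` around the predicted
    position `center` (cy, cx) and return the best-matching location."""
    h, w = template.shape
    cy, cx = center
    best, best_pos = np.inf, None
    for y in range(max(0, cy - radii[0]), min(image.shape[0] - h, cy + radii[0]) + 1):
        for x in range(max(0, cx - radii[1]), min(image.shape[1] - w, cx + radii[1]) + 1):
            score = normalized_ssd(template, image[y:y + h, x:x + w])
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Restricting the scan to the predicted region is what keeps per-frame matching cheap enough for real-time operation.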


128 Initialization of New Features in the Database

A newly observed feature defines a 3D ray from the camera through its image location. Along this line, 100 possible feature positions (depth hypotheses) are set uniformly in the range 0.5-5 m.

Each matching produces a likelihood for each hypothesis, and their probabilities are recomputed.
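The hypothesis reweighting can be sketched as a simple Bayes update over the depth particles (the values and the Gaussian likelihood shape below are illustrative, not from the lecture):

```python
import numpy as np

# uniform prior over 100 depth hypotheses along the feature's ray
depths = np.linspace(0.5, 5.0, 100)    # hypotheses in metres
probs = np.full(100, 1.0 / 100)

def update_depth_distribution(probs, likelihoods):
    """Bayes update: multiply the prior by the per-hypothesis matching
    likelihoods and renormalize."""
    posterior = probs * likelihoods
    return posterior / posterior.sum()

# e.g. a frame whose matches favour depths near 2 m (illustrative)
likelihoods = np.exp(-0.5 * ((depths - 2.0) / 0.3) ** 2)
probs = update_depth_distribution(probs, likelihoods)
```

After a few such frames the distribution typically collapses to a narrow peak, at which point the feature can be treated as a fully initialized 3D point.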


129 Map Management

The number of features required to be constantly visible in the image varies (in practice between 6 and 10) according to:

• localization accuracy

• available computing power

When a new feature needs to be added, a detected feature is accepted only if it is not expected to disappear from the next image.


130 Improved Feature Matching

Up to now, tracked features were treated as 2D templates in image space. Long-term tracking can be improved by treating each feature as a locally planar region on a 3D world surface, so the template can be warped to match the predicted viewpoint.
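Under this planar assumption the template warp is given by the standard plane-induced homography (notation is ours, not the lecture's: K is the camera intrinsic matrix, R and t the relative camera motion, n and d the patch plane's normal and distance):

```latex
H \;=\; K \left( R + \frac{t\, n^{\top}}{d} \right) K^{-1}
```

Warping the stored template by H before matching keeps it comparable to the current view even under large viewpoint changes, which is what makes the landmark stable over long periods.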


131 Locally Planar 3D Patch Features


132 References

A. J. Davison, Real-Time Simultaneous Localization and Mapping with a Single Camera, ICCV 2003

N. Molton, A. J. Davison and I. Reid, Locally Planar Patch Features for Real-Time Structure from Motion, BMVC 2004

A. J. Davison, Y. Gonzalez Cid, N. Kita, Real-Time 3D SLAM with Wide-Angle Vision, IFAC Symposium on Intelligent Autonomous Vehicles, 2004
