
Autonomous Mobile Robots

"Position"
Localization Cognition
Global Map

Environment Model Path


Local Map

Perception Real World Motion Control


Environment

SLAM:
Simultaneous Localization and Mapping

Zürich Autonomous Systems Lab


Today's lecture

Section 5.8 + some extras…


 One of the fundamental problems in robotics: SLAM

 EKF SLAM
 Particle filter SLAM
 GraphSLAM

 Look into MonoSLAM

Simultaneous Localization and Mapping

What is SLAM?

 The SLAM problem: how can a body navigate in a previously unknown environment while constantly building and updating a map of its workspace, using on-board sensors only?

 When is SLAM necessary?
 When a robot must be truly autonomous (no human input)
 When there is no prior knowledge about the environment
 When we cannot place beacons (also in GPS-denied environments)
 When the robot needs to know where it is

Simultaneous Localization and Mapping

 SLAM is significantly more difficult than all robotics problems discussed so far:

 More difficult than pure localization: the map is unknown and has to be estimated along the way.

 More difficult than mapping with known poses: the poses are unknown and have to be estimated along the way.

 Map building by hand: hard & costly (e.g. in a large environment). Automatic map building is more challenging, but:
 Automatic
 The robot learns its environment
 Can adapt to dynamic changes

 SLAM: one of the greatest challenges in probabilistic robotics

Features for SLAM
 Can we track the motion of a camera/robot while it is moving?

Courtesy of Andrew J. Davison


 Pick natural scene features to serve as landmarks
(in most modern SLAM systems)
 Range sensing (laser/sonar): line segments, 3D planes, corners
 Vision: point features, lines, textured surfaces.
 Key: features must be distinctive & recognizable from different viewpoints

Map Building

1. Map Maintenance: keeping track of changes in the environment
 e.g. a disappearing cupboard
 The map can become inconsistent due to erroneous measurements / motion drift
 Generally a static environment is assumed

2. Representing and Propagating Uncertainty
 position of robot -> position of wall
 position of wall -> position of robot
 probability densities for feature positions
 e.g. a measure of belief in each environment feature
How to do SLAM

 Use internal representations for:
 the positions of landmarks (i.e. the map)
 the camera parameters

 Assumption: the robot's uncertainty at the starting position is zero

Start: robot has zero uncertainty

How to do SLAM

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

First camera measurement of feature A

How to do SLAM

 The robot observes a feature, which is mapped with an uncertainty related to the measurement model

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

How to do SLAM

 As the robot moves, its pose uncertainty increases, obeying the robot's motion model

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot moves forwards: uncertainty grows

How to do SLAM

 The robot observes two new features

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot makes first measurements of B & C

How to do SLAM

 Their position uncertainty results from the combination of the measurement error with the robot pose uncertainty
  the map becomes correlated with the robot pose estimate

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot makes first measurements of B & C

How to do SLAM

 The robot moves again and its uncertainty increases (motion model)

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot moves again: uncertainty grows more

How to do SLAM

 The robot re-observes an old feature
  loop closure detection

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot re-measures A: "loop closure"

How to do SLAM

 The robot updates its position: the resulting position estimate becomes correlated with the feature location estimates
 The robot's uncertainty shrinks, and so does the uncertainty in the rest of the map

On every frame:
 Predict how the robot has moved
 Measure
 Update the internal representations

Robot re-measures A: "loop closure"  uncertainty shrinks

SLAM: Probabilistic Formulation

 Robot pose at time t: x_t. Robot path up to this time: {x_0, x_1, …, x_t}

 Robot motion between time t-1 and t: u_t (control inputs / proprioceptive sensor readings). Sequence of robot relative motions: {u_0, u_1, …, u_t}

 The true map of the environment: {m_0, m_1, …, m_N}

 At each time t the robot makes measurements z_i.
Set of all measurements (observations): {z_0, z_1, …, z_k}

 The Full SLAM problem: estimate the posterior
p(x_{0:t}, m_{0:n} | z_{0:k}, u_{0:t})

 The Online SLAM problem: estimate the posterior
p(x_t, m_{0:n} | z_{0:k}, u_{0:t})
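For reference, the online posterior can be computed recursively via the standard Bayes-filter factorization (a textbook identity, not spelled out on the slide): predict with the motion model, then correct with the measurement model.

$$
p(x_t, m \mid z_{0:t}, u_{0:t}) \;\propto\;
p(z_t \mid x_t, m) \int p(x_t \mid x_{t-1}, u_t)\,
p(x_{t-1}, m \mid z_{0:t-1}, u_{0:t-1})\,\mathrm{d}x_{t-1}
$$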

SLAM: graphical representation

[Figure: Bayes network of SLAM: map features m_0 … m_8 on top, observations z_0 … z_16 in the middle, and the robot poses x_0, x_1, x_2, x_3, … at the bottom, chained by the control inputs u_0 … u_3.]

Approaches to SLAM
 Full graph optimization (bundle adjustment)

[Figure: the full SLAM graph, with all poses x_0 … x_3 and all features m_0 … m_8 retained as nodes.]

 Eliminate observations & control-input nodes and solve for the constraints
between poses and landmarks.

 Globally consistent solution, but infeasible for large-scale SLAM


 If real-time is a requirement, we need to sparsify this graph

Approaches to SLAM
 Filtering

[Figure: filtering: the past poses are marginalized out, leaving only the current pose x_t and the features m_0 … m_8.]

 Eliminate all past poses: 'summarize' all experience with respect to the last pose, using a state vector and the associated covariance matrix

 We're going to look at filtering in more detail…

Approaches to SLAM
 Key-frames

[Figure: key-frames: only the most representative poses are retained as nodes, each linked to the features it observes.]

 Retain the most 'representative' poses (key-frames) and their dependency links  optimize the resulting graph

 Example: PTAM [Klein & Murray, ISMAR 2007]

PTAM: Parallel Tracking and Mapping
http://www.youtube.com/watch?v=Y9HMn6bd-v8

The Three SLAM paradigms

 Some of the most important approaches to SLAM:

 Extended Kalman Filter SLAM (EKF SLAM)

 Particle Filter SLAM (FastSLAM) – tomorrow's exercise

 GraphSLAM

EKF SLAM: overview
 Like the standard EKF for robot localization
 EKF SLAM summarizes all past experience in an extended state vector y_t, comprising the robot pose x_t and the positions of all the features m_i in the map, with an associated covariance matrix P_{y_t}:

$$
y_t = \begin{bmatrix} x_t \\ m_0 \\ \vdots \\ m_{n-1} \end{bmatrix},
\qquad
P_{y_t} = \begin{bmatrix}
P_{xx} & P_{x m_0} & \cdots & P_{x m_{n-1}} \\
P_{m_0 x} & P_{m_0 m_0} & \cdots & P_{m_0 m_{n-1}} \\
\vdots & \vdots & \ddots & \vdots \\
P_{m_{n-1} x} & P_{m_{n-1} m_0} & \cdots & P_{m_{n-1} m_{n-1}}
\end{bmatrix}
$$

 If we sense 2D line-landmarks, the size of y_t is 3 + 2n (and the size of P_{y_t} is (3+2n) × (3+2n)):
 3 variables to represent the robot pose, and
 2n variables for the n line-landmarks with state components (α_i, r_i).
Hence, y_t = [X_t, Y_t, θ_t, α_0, r_0, …, α_{n-1}, r_{n-1}]^T

 As the robot moves and makes measurements, y_t and P_{y_t} are updated using the standard EKF equations (a minimal sketch of the bookkeeping follows below).
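To make the bookkeeping concrete, here is a minimal sketch in Python/numpy of how such a state could be stored and grown. The names (EKFSLAMState, add_landmark) are illustrative, not from the lecture, and for simplicity a new landmark starts uncorrelated with the rest of the state; a full implementation would derive its cross-covariances from the (inverse) measurement model Jacobians.

```python
import numpy as np

# Minimal EKF-SLAM state container: pose [X, Y, theta] plus 2D line-landmarks
# (alpha_i, r_i), with one joint covariance matrix over everything.
class EKFSLAMState:
    def __init__(self):
        self.y = np.zeros(3)              # [X, Y, theta]
        self.P = np.zeros((3, 3))         # robot starts with zero uncertainty

    def add_landmark(self, alpha, r, meas_cov):
        """Append a newly observed line-landmark (alpha, r) to the state."""
        n_old = self.y.size
        self.y = np.concatenate([self.y, [alpha, r]])
        P_new = np.zeros((n_old + 2, n_old + 2))
        P_new[:n_old, :n_old] = self.P    # existing correlations kept
        P_new[n_old:, n_old:] = meas_cov  # 2x2 block for (alpha, r)
        self.P = P_new

state = EKFSLAMState()
state.add_landmark(alpha=0.3, r=2.0, meas_cov=0.01 * np.eye(2))
print(state.y.shape, state.P.shape)       # (5,) (5, 5): 3 + 2n with n = 1
```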

EKF SLAM: Prediction
 The predicted robot pose x̂_t at time-stamp t is computed from the estimated pose x_{t-1} at time-stamp t-1 and the odometric control input u_t = {Δs_l, Δs_r}, based on the example of Section 5.2.4 (Δs_l, Δs_r: distances travelled by the left and right wheels respectively; b: distance between the two robot wheels). The odometry function is:

$$
\hat{x}_t = f(x_{t-1}, u_t) =
\begin{bmatrix} \hat{X}_t \\ \hat{Y}_t \\ \hat{\theta}_t \end{bmatrix} =
\begin{bmatrix} X_{t-1} \\ Y_{t-1} \\ \theta_{t-1} \end{bmatrix} +
\begin{bmatrix}
\frac{\Delta s_r + \Delta s_l}{2}\cos\!\big(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r + \Delta s_l}{2}\sin\!\big(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r - \Delta s_l}{b}
\end{bmatrix}
$$

 During this step, the position of the features remains unchanged.

 EKF Prediction Equations (F_y, F_u: Jacobians of f; P_{y_{t-1}}: covariance at the previous time-stamp; Q_t: covariance of the noise associated to the motion):

$$
\hat{y}_t =
\begin{bmatrix} \hat{X}_t \\ \hat{Y}_t \\ \hat{\theta}_t \\ \hat{\alpha}_0 \\ \hat{r}_0 \\ \vdots \\ \hat{\alpha}_{n-1} \\ \hat{r}_{n-1} \end{bmatrix} =
\begin{bmatrix} X_{t-1} \\ Y_{t-1} \\ \theta_{t-1} \\ \alpha_0 \\ r_0 \\ \vdots \\ \alpha_{n-1} \\ r_{n-1} \end{bmatrix} +
\begin{bmatrix}
\frac{\Delta s_r + \Delta s_l}{2}\cos\!\big(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r + \Delta s_l}{2}\sin\!\big(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\big) \\
\frac{\Delta s_r - \Delta s_l}{b} \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0
\end{bmatrix},
\qquad
\hat{P}_{y_t} = F_y P_{y_{t-1}} F_y^T + F_u Q_t F_u^T
$$
EKF SLAM: Comparison with EKF localization

EKF LOCALIZATION

 The state x_t is ONLY the robot configuration:
x_t = [X_t, Y_t, θ_t]^T

 The prediction function x̂_t = f(x_{t-1}, u_t) is the pose-only odometry update given above.

 Covariance prediction:
$$ \hat{P}_{x_t} = F_x P_{x_{t-1}} F_x^T + F_u Q_t F_u^T $$

EKF SLAM

 The state y_t comprises the robot configuration x_t and that of each feature m_i:
y_t = [X_t, Y_t, θ_t, α_0, r_0, …, α_{n-1}, r_{n-1}]^T

 The prediction function ŷ_t = f(y_{t-1}, u_t) applies the same odometry update to the pose and leaves the features unchanged.

 Covariance prediction:
$$ \hat{P}_{y_t} = F_y P_{y_{t-1}} F_y^T + F_u Q_t F_u^T $$

EKF SLAM: Measurement prediction & Update
 The application of the measurement model is the same as in EKF localization. The predicted observation of each feature m_i is:

$$
\hat{z}_i = \begin{bmatrix} \hat{\alpha}_i \\ \hat{r}_i \end{bmatrix}
= h_i(\hat{x}_t, m_i)
$$

i.e. we need the predicted new pose to predict where each feature lies in measurement space.

 After obtaining the set of actual observations z_{0:n-1}, the EKF state gets updated:

$$
y_t = \hat{y}_t + K_t \big( z_{0:n-1} - h_{0:n-1}(\hat{x}_t, m_{0:n-1}) \big),
\qquad
P_{y_t} = \hat{P}_{y_t} - K_t \Sigma_{IN} K_t^T
$$

where, with H the Jacobian of h and R the measurement noise, the innovation covariance and the Kalman gain are:

$$
\Sigma_{IN} = H \hat{P}_{y_t} H^T + R,
\qquad
K_t = \hat{P}_{y_t} H^T \Sigma_{IN}^{-1}
$$
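A compact Python/numpy sketch of this update, assuming z stacks all feature observations and H is the corresponding stacked Jacobian evaluated at the prediction (the function name is illustrative):

```python
import numpy as np

def ekf_slam_update(y_pred, P_pred, z, z_pred, H, R):
    """One EKF-SLAM update step."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    y = y_pred + K @ (z - z_pred)         # correct state with the innovation
    P = P_pred - K @ S @ K.T              # uncertainty shrinks
    return y, P
```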

EKF SLAM: Correlations
 At start-up the robot makes the first measurements, and the covariance matrix is populated assuming that these (initial) features are uncorrelated  the off-diagonal elements are zero:

$$
P_{y_0} = \begin{bmatrix}
P_{xx} & 0 & 0 & \cdots & 0 \\
0 & P_{m_0 m_0} & 0 & \cdots & 0 \\
0 & 0 & P_{m_1 m_1} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & P_{m_{n-1} m_{n-1}}
\end{bmatrix}
$$

 When the robot starts moving and taking new measurements, both the robot pose and the features start becoming correlated:

$$ \hat{P}_{y_t} = F_y P_{y_{t-1}} F_y^T + F_u Q_t F_u^T $$

 Accordingly, the covariance matrix becomes dense.

EKF SLAM: Correlations
 Correlations arise as
 the uncertainty in the robot pose is used to obtain the uncertainty of the
observed features.
 the feature measurements are used to update the robot pose.

 Regularly co-visible features become correlated, and when their motion is coherent, their correlation is even stronger.

 Correlations are very important for convergence: the more observations are made, the more the correlations between the features grow, and the better the solution to SLAM.

Chli & Davison, ICRA 2009

Feature-Based SLAM (ETH - ASL)

 3D laser scanner. Corner features extracted from lines


 Connecting corners to structures

[Weingarten & Siegwart, IROS 2006]

Drawbacks of EKF SLAM
 The state vector in EKF SLAM is much larger than the state vector in EKF localization.

 Newly observed features are added to the state vector  the covariance matrix grows quadratically with the number of features  computationally expensive for large-scale SLAM.

 One approach to attack this: sparsify the structure of the covariance matrix (via approximations).

Chli & Davison, ICRA 2009

Particle Filter SLAM
 Particle filters: mathematical models that represent probability distributions as a set of discrete particles occupying the state space.
 Particle = a point estimate of the state with an associated weight, p_i = {y_t^i, w^i} (all weights should add up to 1)

 Steps in Particle Filtering (a sketch of one iteration follows the list):

 Predict: apply the motion prediction to each particle
 Make measurements
 Update: for each particle:
• Compare the particle's predictions of the measurements with the actual measurements
• Assign weights such that particles with good predictions have higher weight
 Normalize the weights of the particles to add up to 1
 Resample: generate a new set of M particles which all have equal weights 1/M, reflecting the probability density of the last particle set.
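A generic sketch of one such iteration in Python/numpy; motion_fn (sampling the motion model) and likelihood_fn (scoring a measurement given a particle) are placeholders for problem-specific models:

```python
import numpy as np

def particle_filter_step(particles, weights, motion_fn, likelihood_fn, z):
    """One predict-update-resample iteration of a generic particle filter."""
    M = len(particles)

    # Predict: propagate each particle through the (noisy) motion model.
    particles = [motion_fn(p) for p in particles]

    # Update: re-weight each particle by how well it predicts the measurement z.
    weights = np.array([w * likelihood_fn(p, z)
                        for p, w in zip(particles, weights)])
    weights /= weights.sum()                  # normalize weights to sum to 1

    # Resample: draw M particles in proportion to their weights; the new set
    # has equal weights 1/M but reflects the density of the old set.
    idx = np.random.choice(M, size=M, p=weights)
    particles = [particles[i] for i in idx]
    weights = np.full(M, 1.0 / M)
    return particles, weights
```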

Particle Filter SLAM
 FastSLAM approach [Montemerlo et al., 2002]

• Solve the state posterior using a Rao-Blackwellized Particle Filter
• Each landmark estimate is represented by a 2x2 EKF
• Each particle is "independent" (due to the factorization) of the others and maintains the estimates of M landmark positions (see the data-structure sketch below)
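A sketch of the per-particle bookkeeping this factorization implies (field names are illustrative): each particle carries its own pose hypothesis plus one small, independent EKF per landmark.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEKF:
    mean: np.ndarray        # 2D landmark position estimate
    cov: np.ndarray         # associated 2x2 covariance

@dataclass
class FastSLAMParticle:
    pose: np.ndarray        # sampled robot pose [x, y, theta]
    weight: float = 1.0
    landmarks: list = field(default_factory=list)  # one LandmarkEKF per landmark
```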

FastSLAM example

[Montemerlo et al., 2002]


GraphSLAM [Thrun and Montemerlo, IJRR 2006]

 GraphSLAM basic idea: SLAM can be interpreted as a sparse graph of nodes and constraints
between nodes.
 nodes: robot locations and map-feature locations
 edges: constraints between
 consecutive robot poses (given by the odometry input u) and
 robot poses and the features observed from these poses.

GraphSLAM [Thrun and Montemerlo, IJRR 2006]

 Key property: the constraints are not to be thought of as rigid constraints but as soft constraints  constraints acting like springs

 Solve full SLAM by relaxing these constraints  get the best estimate of the robot path and the environment map by computing the state of minimal energy of this network (a toy worked example follows below)
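A toy worked example of the "springs" view, under simplifying assumptions (1D poses, unit information, x_0 fixed at 0): two odometry constraints and one slightly inconsistent loop-closure constraint are relaxed jointly by linear least squares.

```python
import numpy as np

# Constraints on unknowns (x1, x2):  x1 - x0 = 1,  x2 - x1 = 1,  x2 - x0 = 1.8
A = np.array([[1.0, 0.0],     # odometry: x1 = 1
              [-1.0, 1.0],    # odometry: x2 - x1 = 1
              [0.0, 1.0]])    # loop closure: x2 = 1.8
b = np.array([1.0, 1.0, 1.8])

# Minimal "spring energy" = least-squares solution; the 0.2 disagreement is
# spread over all constraints instead of being absorbed by a single one.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                      # approx [0.933, 1.867]
```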

GraphSLAM vs. EKF SLAM

 In GraphSLAM-style techniques:
 the update time is constant, and
 the required memory is linear in the number of features

 … while in EKF SLAM, both the update time and the storage of the covariance matrix scale quadratically with the number of features.

 However, the final graph optimization can be computationally very costly if the robot path is long.

Autonomous Mobile Robots

Example of EKF SLAM

MonoSLAM:
Real-Time Single Camera SLAM

[Davison, Reid, Molton and Stasse. PAMI 2007]

Zürich Autonomous Systems Lab


Single Camera SLAM

Vision for SLAM:
• Images = information-rich snapshots of a scene
• Compactness + affordability of cameras
• HW advances

SLAM using a single, handheld camera:
• Hard, but … (e.g. depth cannot be recovered from a single image)
• very applicable, compact, affordable, …

Image courtesy of G. Klein


From SFM to SLAM

Structure from Motion (SFM):

 Take some images of the object/scene to reconstruct

 Features (points, lines, …) are extracted from all frames and matched among them

 Process all images simultaneously

 Optimisation to recover both:
 the camera motion and
 the 3D structure
up to a scale factor (similar to GraphSLAM)

 Not real-time

MonoSLAM
 Can we track the motion of a hand-held camera while it is moving, i.e. online?

[Video: scene view | camera view]

MonoSLAM
 SLAM using a single camera, grabbing frames at 30Hz
 Ellipses (in camera view) and Bubbles (in map view) represent uncertainty
Courtesy of Andrew J. Davison

[Video: camera view | internal SLAM map]

Representation of the world
 The belief about the state of the world x is approximated with a single, multivariate Gaussian distribution: a mean (the state vector) and a covariance matrix.

 Camera state [13 dim.] (symbols as in Davison et al., PAMI 2007):
 r: position [3 dim.]
 q: orientation using quaternions [4 dim.]
 v: linear velocity [3 dim.]
 ω: angular velocity [3 dim.]

 Landmark state: e.g. the 3D position for point-features

Motion & Probabilistic Prediction

 How has the camera moved from frame t-1 to frame t?

$$
\hat{x}_t = f(x_{t-1}, u_t), \qquad
\hat{P}_t = F_y P_{t-1} F_y^T + F_u Q_t F_u^T
$$

 The camera is hand-held  there is no odometry or control input data
 Use a motion model

 But how can we model the unknown intentions of a human carrier?

 Davison et al. use a constant velocity motion model:
"on average we expect undetermined accelerations occur with a Gaussian profile"

Constant Velocity Model

The constant velocity motion model imposes a certain smoothness on the expected camera motion: in each time step, the unknown linear and angular accelerations (characterized by zero-mean Gaussian distributions) cause an impulse of velocity. A minimal sketch of this prediction follows below.
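A minimal sketch of this prediction for the position/linear-velocity part of the state (MonoSLAM also propagates the quaternion orientation and angular velocity in the same way); the parameter names are illustrative.

```python
import numpy as np

def constant_velocity_predict(r, v, dt, accel_std):
    """Constant velocity model: an unknown zero-mean Gaussian acceleration
    over the time step dt acts as a velocity impulse on each axis."""
    V = np.random.normal(0.0, accel_std * dt, size=3)  # velocity impulse
    v_new = v + V                    # velocity constant in expectation
    r_new = r + v_new * dt           # integrate position
    return r_new, v_new

r, v = np.zeros(3), np.array([0.1, 0.0, 0.0])
r, v = constant_velocity_predict(r, v, dt=1 / 30, accel_std=4.0)
```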

Motion & Probabilistic Prediction
 Based on the predicted new camera pose  predict which known features will be visible and where they will lie in the image

 Use the measurement model h to identify the predicted location ẑ_i = h_i(x̂_t, y_i) of each feature and an associated search region, defined in the corresponding diagonal block of Σ_IN = H P̂_t H^T + R

 Essentially: project the 3D ellipsoids into image space
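A sketch of deriving one such elliptical search region, assuming H_i is the 2 x N Jacobian block of feature i's measurement model and R_i its 2x2 measurement noise (the function name is illustrative):

```python
import numpy as np

def search_ellipse(P_pred, H_i, R_i, n_sigma=3.0):
    """Image-space search ellipse from the feature's innovation covariance."""
    S_i = H_i @ P_pred @ H_i.T + R_i        # 2x2 innovation covariance block
    eigvals, eigvecs = np.linalg.eigh(S_i)  # principal axes of the ellipse
    semi_axes = n_sigma * np.sqrt(eigvals)  # semi-axis lengths (pixels)
    return semi_axes, eigvecs               # search the patch inside this region
```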

Measurement & EKF update
 Search for the known feature-patches inside their corresponding search regions to get the set of all observations

 Update the state using the EKF equations:

$$
x_t = \hat{x}_t + K_t \big( z_{0:n-1} - h_{0:n-1}(\hat{x}_t, y_{0:n-1}) \big),
\qquad
P_t = \hat{P}_t - K_t \Sigma_{IN} K_t^T
$$

where:

$$
\Sigma_{IN} = H \hat{P}_t H^T + R, \qquad
K_t = \hat{P}_t H^T \Sigma_{IN}^{-1}
$$

MonoSLAM for Augmented Reality

Courtesy of Andrew J. Davison

Application: HRP-2 Humanoid at JRL, AIST, Japan

• Small circular loop within a large room
• No re-observation of 'old' features until the closing of the large loop

Real-time Single Camera SLAM systems

MonoSLAM becomes computationally expensive as the number of features grows.

MonoSLAM [Davison et al. 2007] | PTAM [Klein, Murray 2008] | Graph-SLAM [Eade, Drummond 2007]

 … not yet ready to leave the lab & perform everyday tasks

SLAM challenges

• Fast motion
• Large scales
• Robustness
• Rich maps
• Low computation for embedded apps

• Handle larger amounts of data more effectively

• Competing goals: PRECISION vs. EFFICIENCY

Essential components of a scalable SLAM system
1. Robust local motion estimation
2. Mapping & loop-closure detection
3. Map management & optimisation

Large Scale Monocular SLAM using optimization
 Scale Drift-Aware Large Scale Monocular SLAM
[Strasdat et al. RSS 2010]

Live Dense Reconstruction
 During live camera tracking, perform dense per-pixel surface reconstruction
 Relies heavily on GPU processing for dense image matching

[Newcombe, Davison CVPR 2010]

