"Position"
Localization Cognition
Global Map
SLAM:
Simultaneous Localization and Mapping
EKF SLAM
Particle filter SLAM
GraphSLAM
SLAM is more difficult than mapping with known poses: the poses are unknown and have to be estimated along the way.
On every frame:
Predict how the robot has moved
Measure feature A (and, once visible, features B and C)
Update the internal representations

Start: the first camera measurement of feature A has zero uncertainty.
Robot moves again: uncertainty grows more.
Robot re-measures A: "loop closure".
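The grow-and-shrink behaviour of the uncertainty in this sequence can be illustrated with a 1D toy filter. All function names and noise values below are illustrative, not from the slides: each motion step adds process noise to the position variance, and re-measuring the feature shrinks it again.

```python
# 1D toy filter: motion grows the position variance, re-measuring shrinks it.
# All noise values are illustrative, not from the original slides.

def move(var, motion_noise):
    """Prediction: motion adds process noise to the variance."""
    return var + motion_noise

def measure(var, meas_noise):
    """Update: fusing a measurement shrinks the variance (1D Kalman update)."""
    k = var / (var + meas_noise)      # Kalman gain
    return (1.0 - k) * var

var = 0.0                  # start: first measurement of A has zero uncertainty
for _ in range(5):         # robot moves again and again: uncertainty grows
    var = move(var, 0.2)
var_after_motion = var
var = measure(var, 0.1)    # robot re-measures A (loop closure): it shrinks
```

After five motion steps the variance has grown to 1.0; a single re-measurement of A drops it to roughly 0.09.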
Robot pose at time t: x_t. Robot path up to this time: {x_0, x_1, ..., x_t}.
Robot motion between time t-1 and t: u_t (control inputs / proprioceptive sensor readings). Sequence of robot relative motions: {u_0, u_1, ..., u_t}.
[Graph: pose nodes x_0, x_1, x_2, x_3, ..., landmark nodes m_0 ... m_8, odometry edges u_0 ... u_3, and observation edges]
Eliminate observations & control-input nodes and solve for the constraints between poses and landmarks.
[Graph: pose and landmark nodes linked directly by pose-landmark constraints]
Eliminate all past poses: "summarize" all experience with respect to the last pose, using a state vector and the associated covariance matrix.
[Graph: last pose x_3 connected to all landmark nodes m_0 ... m_8]
EKF SLAM
As the robot moves and makes measurements, y_t and P_{y_t} are updated using the standard EKF equations:
x̂_t = f(x_{t-1}, u_t),   ŷ_t = f(y_{t-1}, u_t)

For a differential-drive robot, with Δs_r and Δs_l the distances travelled by the right and left wheel and b the distance between the wheels, the predicted pose is

X̂_t = X_{t-1} + [ ((Δs_r + Δs_l)/2) cos(θ_{t-1} + (Δs_r - Δs_l)/(2b))
                   ((Δs_r + Δs_l)/2) sin(θ_{t-1} + (Δs_r - Δs_l)/(2b))
                   (Δs_r - Δs_l)/b ]

The landmarks (here line features m_i = (α_i, r_i)) are static, so their predicted states are unchanged:

ŷ_t = [ X̂_t, α̂_0, r̂_0, ..., α̂_{n-1}, r̂_{n-1} ]ᵀ   with α̂_i = α_i, r̂_i = r_i
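The differential-drive pose prediction can be checked numerically with a small sketch (the function name and arguments are mine, not from the slides):

```python
import math

def predict_pose(x, y, theta, ds_r, ds_l, b):
    """Differential-drive odometry step: ds_r / ds_l are the distances
    travelled by the right / left wheel, b the distance between the wheels."""
    ds = (ds_r + ds_l) / 2.0        # distance travelled by the robot centre
    dtheta = (ds_r - ds_l) / b      # change of heading
    x_new = x + ds * math.cos(theta + dtheta / 2.0)
    y_new = y + ds * math.sin(theta + dtheta / 2.0)
    return x_new, y_new, theta + dtheta
```

Sanity checks: equal wheel travel gives straight motion along the heading; opposite wheel travel gives a pure rotation in place.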
After obtaining the set of actual observations z_{0:n-1}, the EKF state gets updated:

y_t = ŷ_t + K_t ( z_{0:n-1} - h_{0:n-1}(x̂_t, m_{0:n-1}) )
P_{y_t} = P̂_{y_t} - K_t Σ_IN K_tᵀ

with the innovation covariance Σ_IN and the Kalman gain K_t given by

Σ_IN = H P̂_{y_t} Hᵀ + R
K_t = P̂_{y_t} Hᵀ (Σ_IN)⁻¹
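The update equations above translate directly into a few lines of numpy; this is a generic sketch of the standard EKF correction (function and argument names are mine):

```python
import numpy as np

def ekf_update(y_hat, P_hat, z, h_pred, H, R):
    """EKF measurement update: y_hat / P_hat are the predicted state and
    covariance, z the stacked actual observations, h_pred = h(y_hat) the
    predicted observations, H the measurement Jacobian, R the sensor noise."""
    S = H @ P_hat @ H.T + R              # innovation covariance (Sigma_IN)
    K = P_hat @ H.T @ np.linalg.inv(S)   # Kalman gain
    y = y_hat + K @ (z - h_pred)         # correct the state by the innovation
    P = P_hat - K @ S @ K.T              # uncertainty shrinks after measuring
    return y, P
```

Note that only the observed component of the state moves toward the measurement, while its variance decreases; unobserved components keep their prior covariance unless they are correlated with observed ones.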
P_{y_0} = diag( P_xx, P_{m_0 m_0}, P_{m_1 m_1}, ..., P_{m_{n-2} m_{n-2}}, P_{m_{n-1} m_{n-1}} )

(block-diagonal: initially the robot pose and the features are uncorrelated)
When the robot starts moving & taking new measurements, both the robot pose
and features start becoming correlated.
P̂_{y_t} = F_y P_{y_{t-1}} F_yᵀ + F_u Q_t F_uᵀ
Newly observed features are added to the state vector. The covariance matrix grows quadratically with the number of features, which makes EKF SLAM computationally expensive for large-scale problems.
One approach to attack this: sparsify the structure of the covariance matrix (via approximations).
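The quadratic growth is easy to make concrete. Assuming a 3-dimensional pose and 2-dimensional landmarks as in the line-feature example above, the covariance matrix has (3 + 2n)² scalar entries for n landmarks:

```python
def cov_size(n_landmarks, pose_dim=3, lm_dim=2):
    """Number of scalar entries in the EKF SLAM covariance matrix:
    (pose_dim + lm_dim * n) squared, i.e. quadratic in the feature count."""
    d = pose_dim + lm_dim * n_landmarks
    return d * d
```

With no landmarks the matrix has 9 entries; with 100 landmarks it already has 203 x 203 = 41209, which is why storage and update cost blow up at large scale.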
Predict:
Apply motion prediction to each particle
Make measurements
Update:
for each particle:
• Compare the particle's predictions of measurements with the actual measurements
• Assign weights such that particles with good predictions have higher weight
Normalize the weights of the particles to add up to 1
Resample: generate a new set of M particles which all have equal weight 1/M, reflecting the probability density of the last particle set.
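A minimal skeleton for the predict-update-resample loop above, on a 1D toy state with Gaussian noise models (all names and noise parameters are illustrative, not from the slides):

```python
import math
import random

def pf_step(particles, u, z, motion_noise=0.1, meas_noise=0.2):
    """One particle-filter step on a 1D toy state: predict each particle with
    the motion u, weight it against the actual measurement z, then resample."""
    # Predict: apply motion prediction (with noise) to each particle
    particles = [p + u + random.gauss(0.0, motion_noise) for p in particles]
    # Update: particles whose predicted measurement matches z get high weight
    weights = [math.exp(-0.5 * ((z - p) / meas_noise) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]      # normalize to sum to 1
    # Resample: draw M new particles (all implicitly weighted 1/M)
    return random.choices(particles, weights=weights, k=len(particles))
```

Starting from particles at 0 with motion u = 1 and a measurement z = 1, the resampled cloud concentrates around 1, as expected.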
GraphSLAM basic idea: SLAM can be interpreted as a sparse graph of nodes and constraints between nodes.
nodes: robot locations and map-feature locations
edges: constraints between
consecutive robot poses (given by the odometry input u), and
robot poses and the features observed from these poses.
Key property: the constraints are not to be thought of as rigid constraints but as soft constraints, acting like springs.
Solve full SLAM by relaxing these constraints: get the best estimate of the robot path and the environment map by computing the minimal-energy state of this spring network.
The graph grows only linearly with the number of features, while in EKF SLAM both the update time and the storage of the covariance matrix scale quadratically with the number of features.
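The "springs" view can be sketched as a tiny 1D pose graph: each odometry edge prefers x_j - x_i = d, a loop-closure edge adds one more spring, and the minimal-energy configuration falls out of linear least squares. This is a toy illustration with made-up numbers, not the general nonlinear GraphSLAM solver:

```python
import numpy as np

def solve_pose_graph(edges, n_poses):
    """Minimal-energy 1D pose graph: each edge (i, j, d) is a spring that
    prefers x_j - x_i = d. Pose 0 is softly anchored at 0 to fix the gauge.
    Solved by linear least squares."""
    A, b = [], []
    for i, j, d in edges:
        row = np.zeros(n_poses)
        row[j], row[i] = 1.0, -1.0
        A.append(row)
        b.append(d)
    anchor = np.zeros(n_poses)
    anchor[0] = 1.0                 # anchor the first pose at the origin
    A.append(anchor)
    b.append(0.0)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x
```

With three odometry springs of length 1.0 and a loop-closure spring saying x_3 - x_0 = 2.7, the network settles at a compromise: x_3 = 2.775, with the 0.3 disagreement spread evenly over the four springs.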
MonoSLAM:
Real-Time Single Camera SLAM
The full state is represented by a mean (state vector) and a covariance matrix.
Landmark's state: e.g. the 3D position for point features.
Camera state (13 dim.):
position [3 dim.]
orientation, using a quaternion [4 dim.]
linear velocity [3 dim.]
angular velocity [3 dim.]
The constant-velocity motion model imposes a certain smoothness on the expected camera motion. In each time step, the unknown linear and angular accelerations a and α (characterized by zero-mean Gaussian distributions) cause an impulse of velocity:

n = (V, Ω) = (a Δt, α Δt)
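For the linear part of the camera state this prediction is a few lines; the sketch below omits the quaternion orientation and angular velocity, which are handled analogously, and all names are mine:

```python
import random

def predict_camera_state(r, v, dt, accel_std):
    """Constant-velocity prediction for the linear part of the camera state.
    An unknown acceleration a ~ N(0, accel_std^2) per axis causes a velocity
    impulse V = a * dt; the position then advances with the new velocity."""
    a = [random.gauss(0.0, accel_std) for _ in r]
    V = [ai * dt for ai in a]                              # velocity impulse
    v_new = [vi + Vi for vi, Vi in zip(v, V)]
    r_new = [ri + vn * dt for ri, vn in zip(r, v_new)]
    return r_new, v_new
```

With zero acceleration noise the model reduces to pure constant-velocity motion; a larger accel_std encodes the expectation of more agile camera movement.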
Use the measurement model h to identify the predicted location ẑ_i = h_i(x̂_t, y_i) of each feature and an associated search region (defined by the corresponding diagonal block of Σ_IN), where:

Σ_IN = H P̂_t Hᵀ + R
K_t = P̂_t Hᵀ (Σ_IN)⁻¹
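One common way to turn a feature's diagonal block of Σ_IN into a concrete search region is an n-sigma ellipse around the predicted image location; this is a sketch of that idea with illustrative names, not code from the original system:

```python
import numpy as np

def search_region(z_hat_i, S_ii, n_sigma=3.0):
    """Elliptical search region for one feature: centred on the predicted
    image location z_hat_i, with half-axes of n_sigma standard deviations
    along the eigenvectors of the feature's innovation-covariance block."""
    eigvals, eigvecs = np.linalg.eigh(S_ii)   # S_ii is symmetric
    half_axes = n_sigma * np.sqrt(eigvals)
    return z_hat_i, half_axes, eigvecs
```

The feature is then searched for only inside this ellipse, which is what makes the matching both efficient and robust: confident predictions give small regions.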
Visual SLAM is not ready yet to leave the lab & perform everyday tasks.
• Competing goals: EFFICIENCY vs. PRECISION
• Main components: robust local motion estimation, mapping & loop-closure detection, and map management & optimisation.