
KLT Tracker

16-385 Computer Vision
Feature-based tracking

How should we select features?

How should we track them from frame to frame?
Kanade-Lucas-Tomasi (KLT) Tracker

The original KLT algorithm:

Lucas and Kanade. An Iterative Image Registration Technique with an Application to Stereo Vision. (1981)

Tomasi and Kanade. Detection and Tracking of Point Features. (1991)

Shi and Tomasi. Good Features to Track. (1994)
Kanade-Lucas-Tomasi

How should we select features?
Tomasi-Kanade: a method for choosing the best feature (image patch) for tracking.

How should we track them from frame to frame?
Lucas-Kanade: a method for aligning (tracking) an image patch.
What are good features for tracking?

Intuitively, we want to avoid smooth regions and edges. But is there a more principled way to define good features?

They can be derived from the tracking algorithm itself: ‘A feature is good if it can be tracked well.’
Recall the Lucas-Kanade image alignment method:

error function (SSD):   $\sum_x \left[ I(W(x; p)) - T(x) \right]^2$

incremental update:   $\sum_x \left[ I(W(x; p + \Delta p)) - T(x) \right]^2$

linearize:   $\sum_x \left[ I(W(x; p)) + \nabla I \, \frac{\partial W}{\partial p} \, \Delta p - T(x) \right]^2$

Gradient update:   $\Delta p = H^{-1} \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^{\top} \left[ T(x) - I(W(x; p)) \right]$,
where   $H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^{\top} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]$

Update:   $p \leftarrow p + \Delta p$
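To make the update concrete, here is a minimal NumPy/SciPy sketch of one such iteration for a pure-translation warp. The function name, the bilinear interpolation via map_coordinates, and the coordinate conventions are illustrative assumptions, not part of the original slides.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lk_translation_step(I, T, p):
    """One Lucas-Kanade update for a translation-only warp W(x; p) = x + p.

    I : current image (2D array), T : template patch (2D array),
    p : current displacement estimate [px, py] in pixels. Returns updated p.
    """
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    # Warp the current image back onto the template grid: I(W(x; p))
    Iw = map_coordinates(I.astype(float), [ys + p[1], xs + p[0]], order=1)

    # Image gradients of the warped image (nabla I)
    Iy, Ix = np.gradient(Iw)

    # Error image: T(x) - I(W(x; p))
    err = T.astype(float) - Iw

    # Hessian H = sum [Ix Iy]^T [Ix Iy]  (dW/dp is the identity for translation)
    H = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

    # Right-hand side: sum [Ix Iy]^T * err
    b = np.array([np.sum(Ix * err), np.sum(Iy * err)])

    # Gradient update: delta_p = H^{-1} b, then p <- p + delta_p
    dp = np.linalg.solve(H, b)
    return p + dp
```

In practice this step is iterated until the update becomes small.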
Stability of the gradient descent iterations depends on inverting the Hessian:

$\Delta p = H^{-1} \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^{\top} \left[ T(x) - I(W(x; p)) \right]$

$H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^{\top} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]$

When does the inversion fail?

H is singular. But what does that mean?
Above the noise level: both eigenvalues are large, λ1 >> 0 and λ2 >> 0.

Well-conditioned: both eigenvalues have similar magnitude.
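A small sketch of how one might check these two conditions numerically on a 2x2 Hessian; the noise threshold and condition-number bound below are illustrative values, not prescribed by the slides.

```python
import numpy as np

def hessian_is_reliable(H, noise_level=1e-3, max_condition=1e3):
    """Check that the 2x2 Hessian H can be inverted stably:
    both eigenvalues above the noise level, and similar in magnitude."""
    lam_min, lam_max = np.linalg.eigvalsh(H)   # eigenvalues in ascending order
    above_noise = lam_min > noise_level
    well_conditioned = lam_max / max(lam_min, 1e-12) < max_condition
    return above_noise and well_conditioned
```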
Concrete example: Consider the translation model

$W(x; p) = \begin{pmatrix} x + p_1 \\ y + p_2 \end{pmatrix}, \qquad \frac{\partial W}{\partial p} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

Hessian:

$H = \sum_x \left[ \nabla I \, \frac{\partial W}{\partial p} \right]^{\top} \left[ \nabla I \, \frac{\partial W}{\partial p} \right]
= \sum_x \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} I_x \\ I_y \end{pmatrix} \begin{pmatrix} I_x & I_y \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
= \begin{pmatrix} \sum_x I_x I_x & \sum_x I_x I_y \\ \sum_x I_x I_y & \sum_x I_y I_y \end{pmatrix}$

How are the eigenvalues related to image content?
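This per-patch matrix is often called the structure tensor. As a minimal sketch, its entries can be computed at every pixel over a square window using only NumPy/SciPy; the window size and the use of uniform_filter for the window sum are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def structure_tensor(img, window=7):
    """Per-pixel 2x2 structure tensor (the Hessian of the translation model),
    with products of gradients accumulated over a square window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)   # image gradients

    # Window averages of Ix^2, Ix*Iy, Iy^2. Averaging instead of summing only
    # rescales the eigenvalues, so thresholds simply change scale.
    Sxx = uniform_filter(Ix * Ix, size=window)
    Sxy = uniform_filter(Ix * Iy, size=window)
    Syy = uniform_filter(Iy * Iy, size=window)
    return Sxx, Sxy, Syy
```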
interpreting eigenvalues

What kind of image patch does each region of the (λ1, λ2) plane represent?

λ1 ≈ 0 and λ2 ≈ 0: flat region
λ2 >> λ1: horizontal edge
λ1 >> λ2: vertical edge
λ1 ~ λ2, both large: corner
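In code, a toy classifier in the spirit of this diagram might look as follows, assuming the structure-tensor entries for a single patch; the thresholds and the eigenvalue-ratio test are illustrative.

```python
import numpy as np

def classify_patch(Sxx, Sxy, Syy, tau=1e-2, ratio=5.0):
    """Label a patch as 'flat', 'edge', or 'corner' from the eigenvalues of
    its 2x2 structure tensor. tau and ratio are illustrative thresholds."""
    H = np.array([[Sxx, Sxy], [Sxy, Syy]])
    lam2, lam1 = np.linalg.eigvalsh(H)   # ascending order: lam2 <= lam1

    if lam1 < tau:
        return "flat"      # both eigenvalues small
    if lam1 / max(lam2, 1e-12) > ratio:
        return "edge"      # one dominant gradient direction
    return "corner"        # both eigenvalues large and comparable
```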
What are good features for tracking?

min(λ1, λ2) > threshold
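One way to evaluate this criterion densely is with the closed-form smaller eigenvalue of the 2x2 structure tensor; a minimal sketch follows, and the relative threshold in the usage comment is an illustrative choice.

```python
import numpy as np

def min_eigenvalue_map(Sxx, Sxy, Syy):
    """Smaller eigenvalue of the per-pixel 2x2 structure tensor,
    using the closed form for symmetric 2x2 matrices."""
    trace = Sxx + Syy
    disc = np.sqrt((Sxx - Syy) ** 2 + 4.0 * Sxy ** 2)
    return 0.5 * (trace - disc)

# Good features to track: pixels whose smaller eigenvalue exceeds a threshold.
# lam_min = min_eigenvalue_map(Sxx, Sxy, Syy)
# good = lam_min > 0.05 * lam_min.max()   # relative threshold, illustrative
```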
KLT algorithm

1. Find corners satisfying min(λ1, λ2) > threshold
2. For each corner, compute the displacement to the next frame using the Lucas-Kanade method
3. Store the displacement of each corner and update the corner position
4. (optional) Add more corner points every M frames using step 1
5. Repeat steps 2 to 3 (and 4)
6. Return long trajectories for each corner point
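For reference, a minimal end-to-end sketch of these steps using off-the-shelf OpenCV routines: cv2.goodFeaturesToTrack for the Shi-Tomasi corner selection and cv2.calcOpticalFlowPyrLK for the pyramidal Lucas-Kanade step. The video path, parameter values, and re-detection interval are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")        # illustrative video path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Step 1: corners with a large minimum eigenvalue (Shi-Tomasi)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)
tracks = [[p.ravel()] for p in pts]        # one trajectory per corner

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Steps 2-3: Lucas-Kanade displacement to the next frame, update positions
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    keep = status.ravel() == 1
    pts = new_pts[keep].reshape(-1, 1, 2)
    kept_tracks = [t for t, k in zip(tracks, keep) if k]
    tracks = [t + [p.ravel()] for t, p in zip(kept_tracks, pts)]

    # Step 4 (optional): re-detect corners every M frames (M = 30 here)
    frame_idx += 1
    if frame_idx % 30 == 0:
        extra = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                        qualityLevel=0.01, minDistance=7)
        if extra is not None:
            pts = np.vstack([pts, extra])
            tracks += [[p.ravel()] for p in extra]

    prev_gray = gray

# Step 6: 'tracks' now holds one (possibly long) trajectory per corner point
```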
(Demo)