Feature Detection
and Matching
Connelly Barnes
Slides from Jason Lawrence, Fei Fei Li, Juan Carlos Niebles, Alexei Efros, Rick Szeliski, Fredo Durand, Kristin Grauman, James Hays
Outline
• Motivation for sparse features
• Harris corner detector
• Difference of Gaussian (blob) feature detector
• Sparse feature descriptor: SIFT
• Robust model fitting
• Hough transform
• RANSAC
• Application: panorama stitching
Motivation: Image Matching (Hard Problem)
Slide from Fei Fei Li, Juan Carlos Niebles, Steve Seitz
What is a Feature?
• Global features can only summarize the overall content of the image.
• Local (or sparse) features can allow us to match local regions with
more geometric accuracy.
• Increased robustness to: occlusion, clutter, and changes in viewpoint and illumination
Feature-based Panorama Stitching
• Find corresponding feature points
• Fit a model placing the two images in correspondence
• Blend / cut
Requirements for the Features
Outline
• Motivation for sparse features
• Harris corner detector
• Difference of Gaussian (blob) feature detector
• Sparse feature descriptor: SIFT
• Robust model fitting
• Hough transform
• RANSAC
• Application: panorama stitching
Harris Corner Detector: Basic Idea
• We should easily recognize the point by looking through a
small window
• Shifting a window in any direction should give a large
change in intensity
Harris Corner Detector: Basic Idea
Classification of image points using the eigenvalues of M:
• “Flat”: λ1 and λ2 are both small; E is nearly constant in all directions
• “Edge”: λ1 >> λ2 (or λ2 >> λ1)
• “Corner”: λ1 and λ2 are both large, λ1 ~ λ2; E increases in all directions
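The eigenvalue classification above is usually collapsed into a single response score R = det(M) − k·trace(M)², positive at corners, negative on edges, and near zero in flat regions. A minimal numpy/scipy sketch (the window σ = 1 and k = 0.05 are assumed values, not from the slides):

```python
# Minimal Harris response sketch (assumed sigma=1, k=0.05);
# a sketch of the idea, not a reference implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(img, sigma=1.0, k=0.05):
    """Return R = det(M) - k * trace(M)^2 per pixel."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # image gradients
    # Entries of the structure tensor M, smoothed over a Gaussian window.
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A synthetic image with one bright square: corners score high,
# edges score negative, flat regions score ~0.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

In practice one would follow this with non-maximum suppression on R to keep isolated corner points.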
Applications of Corner Detectors
• Augmented reality (video puppetry)
• 3D photography / light fields
Outline
• Motivation for sparse features
• Harris corner detector
• Difference of Gaussian (blob) feature detector
• Sparse feature descriptor: SIFT
• Robust model fitting
• Hough transform
• RANSAC
• Application: panorama stitching
Gaussian Pyramids
• Approach:
• Run linear filter (Difference of Gaussians)
• At different resolutions of image pyramid
Typical k = 1.6 (ratio between the standard deviations of adjacent Gaussian levels)
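The Difference-of-Gaussians filtering across scales can be sketched as below (numpy/scipy; the base σ = 1.6 and number of levels are assumed, and a real SIFT pyramid would also downsample by octaves):

```python
# Sketch of a Difference-of-Gaussians scale stack with ratio k = 1.6.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(img, sigma0=1.6, k=1.6, levels=4):
    """Return DoG responses G(k^(i+1)*sigma0) - G(k^i*sigma0) for each level."""
    img = img.astype(float)
    blurred = [gaussian_filter(img, sigma0 * k ** i) for i in range(levels + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(levels)]

# A single bright blob: extrema of the DoG stack localize it in space and scale.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
dogs = dog_stack(img)
```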
Non-maxima (non-minima) suppression
Point descriptor should be:
1. Invariant 2. Distinctive
Descriptors Invariant to Rotation
• Find local orientation
• Make histogram of 36 different angles (10 degree increments).
• Vote into histogram based on magnitude of gradient.
• Detect peaks from histogram.
[Figure: 36-bin orientation histogram; the dominant gradient direction appears as the highest peak.]
SIFT Descriptor (A Feature Vector)
• Often “SIFT” =
• Difference of Gaussian keypoint detector, plus
• SIFT descriptor
• But you can also use SIFT descriptor computed at other locations
(e.g. at Harris corners, at every pixel, etc)
• More details: Lowe 2004 (especially Sections 3-6)
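One concrete detail from Lowe 2004 (Sec. 6) is the final normalization of the 128-dimensional descriptor: normalize to unit length, clip components at 0.2 to limit the influence of large gradient magnitudes, then renormalize. A small numpy sketch:

```python
# Sketch of SIFT's final descriptor normalization (Lowe 2004, Sec. 6).
import numpy as np

def normalize_sift(desc, clip=0.2):
    desc = desc / max(np.linalg.norm(desc), 1e-12)  # unit length
    desc = np.minimum(desc, clip)                   # clip large components
    return desc / max(np.linalg.norm(desc), 1e-12)  # renormalize

# Example on a random non-negative 128-d vector standing in for a raw descriptor.
raw = np.abs(np.random.RandomState(0).randn(128))
d = normalize_sift(raw)
```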
Feature Matching
Feature Matching
• Exhaustive search
– for each feature in one image, look at all the other features in the other image(s)
• Hashing (see locality sensitive hashing)
– Project into a lower-dimensional space (e.g. by random projections) and use the result as
a key into a hash table, e.g. k = 5 dimensions.
• Nearest neighbor techniques
– kd-trees (available in libraries, e.g. SciPy, OpenCV, FLANN, Faiss).
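A common way to combine nearest-neighbor search with outlier rejection is Lowe's ratio test: accept a match only if the nearest neighbor is much closer than the second-nearest. A sketch using scipy's k-d tree (the 0.8 ratio is an assumed value):

```python
# Sketch of descriptor matching with a k-d tree plus the
# nearest/second-nearest distance ratio test.
import numpy as np
from scipy.spatial import cKDTree

def match_features(desc1, desc2, ratio=0.8):
    """Return (i, j) index pairs from desc1 -> desc2 passing the ratio test."""
    tree = cKDTree(desc2)
    dists, idx = tree.query(desc1, k=2)  # two nearest neighbors per query
    matches = []
    for i, ((d1, d2), (j, _)) in enumerate(zip(dists, idx)):
        if d1 < ratio * d2:  # distinctive: best match much closer than runner-up
            matches.append((i, j))
    return matches

# Noisy copies of the first 10 descriptors should match back to themselves.
rng = np.random.RandomState(0)
desc2 = rng.rand(50, 16)
desc1 = desc2[:10] + 0.001 * rng.randn(10, 16)
matches = match_features(desc1, desc2)
```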
What about outliers?
Feature-space outlier rejection
• Can we now compute an alignment from the blue points? (the ones
that survived the “feature space outlier rejection” test)
– No! Still too many outliers…
– What can we do?
Outline
• Motivation for sparse features
• Harris corner detector
• Difference of Gaussian (blob) feature detector
• Sparse feature descriptor: SIFT
• Robust model fitting
• Hough transform
• RANSAC
• Application: panorama stitching
Model fitting
• Fitting: find the parameters of a model that best fit the data
• For each point, vote in “Hough space” for all lines that the point may belong to.
A line y = mx + b in image space (x, y) corresponds to a single point (m, b) in Hough space.
Slide from S. Savarese
Hough transform
[Figure: each image point votes for its set of candidate lines in (m, b) Hough space; the accumulator cell with the most votes (here 11) identifies the line shared by the most points.]
Slide from S. Savarese
Hough transform
P.V.C. Hough, Machine Analysis of Bubble Chamber Pictures, Proc. Int. Conf. High Energy
Accelerators and Instrumentation, 1959
To avoid unbounded slopes, use the polar parameterization: each image point (x, y) votes for the curve ρ = x cos θ + y sin θ in (θ, ρ) Hough space.
Slide from S. Savarese
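The polar-parameterization voting above can be sketched in numpy (1° angular bins and 1-pixel ρ bins are assumed values):

```python
# Sketch of a polar Hough transform for lines: each point votes for
# all (theta, rho) with rho = x cos(theta) + y sin(theta).
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0, rho_max=100.0):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)  # 1-degree bins
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), r_idx] += 1  # one vote per theta bin
    return acc, thetas

# Points on the horizontal line y = 20 (theta = 90 deg, rho = 20).
pts = [(x, 20.0) for x in range(0, 50, 5)]
acc, thetas = hough_lines(pts)
t, r = np.unravel_index(np.argmax(acc), acc.shape)  # accumulator peak
```

The accumulator peak recovers θ = 90° and ρ = 20 (bin index 120 with ρ_max = 100), with all 10 points voting for it.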
Hough Transform: Effect of Noise
• Cons:
• Bin size has to be set carefully to trade off noise, precision, and memory
• Grid size grows exponentially in number of parameters
RANSAC
Algorithm:
1. Sample (randomly) the number of points required to fit the model (# = 2 for a line)
2. Solve for model parameters using the samples
3. Score by the fraction of inliers within a preset threshold of the model
Repeat 1-3 until the best model is found with high confidence
[Illustration by Savarese: successive random 2-point samples yield candidate lines with inlier counts N_I = 6 and N_I = 14; the sample with the largest inlier set is kept.]
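The loop above, specialized to line fitting (s = 2), can be sketched as follows (the iteration count and inlier threshold are assumed values; a real implementation would set the iteration count with the formula on the next slide):

```python
# Sketch of the RANSAC loop for fitting a 2D line from 2-point samples.
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.5, seed=0):
    rng = np.random.RandomState(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), 2, replace=False)  # step 1: sample
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        # Step 2-3: perpendicular distance of every point to the line p-q.
        dist = np.abs(d[0] * (points[:, 1] - p[1])
                      - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():  # keep the best-scoring model
            best_inliers = inliers
    return best_inliers

# 20 points on y = 2x + 1 plus 10 gross outliers.
rng = np.random.RandomState(1)
xs = np.linspace(0, 10, 20)
line_pts = np.stack([xs, 2 * xs + 1], axis=1)
outliers = rng.uniform(-20, 40, size=(10, 2))
pts = np.vstack([line_pts, outliers])
mask = ransac_line(pts)
```

A final step (as in the homography loop later in the deck) would re-fit the model in least squares on all inliers.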
Choosing the parameters
• Initial number of points s
– Minimum number needed to fit the model
• Distance threshold t
– Choose t so the probability that an inlier falls within t is p (e.g. p = 0.95)
– For zero-mean Gaussian noise with std. dev. σ: t = 1.96σ
• Number of iterations N
– Choose N so that, with probability p, at least one random sample is free from outliers
(e.g. p = 0.99), given the proportion of outliers e:
(1 − (1 − e)^s)^N = 1 − p  ⇒  N = log(1 − p) / log(1 − (1 − e)^s)

proportion of outliers e
s     5%   10%   20%   25%   30%   40%    50%
2      2    3     5     6     7    11     17
3      3    4     7     9    11    19     35
4      3    5     9    13    17    34     72
5      4    6    12    17    26    57    146
6      4    7    16    24    37    97    293
7      4    8    20    33    54   163    588
8      5    9    26    44    78   272   1177
Source: M. Pollefeys
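The iteration-count formula can be checked directly against the table:

```python
# Sketch of N = log(1 - p) / log(1 - (1 - e)^s), rounded up:
# samples needed so that, with probability p, at least one is outlier-free.
import math

def ransac_iterations(s, e, p=0.99):
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# Reproduces table entries, e.g. s = 2 with 50% outliers -> 17 samples.
n = ransac_iterations(2, 0.5)
```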
RANSAC Conclusions
• Pros:
• Robust to outliers
• Can use models with more parameters than Hough transform
• Cons:
• Computation time grows quickly with fraction of outliers and
number of model parameters
Feature-based Panorama Stitching
• Find corresponding feature points
• Fit a model placing the two images in correspondence
• Blend / cut
Aligning Images with Homographies
http://users.skynet.be/J.Beever/pave.htm
Homography
• Given point correspondences (x1, y1) ↔ (x1′, y1′), …, (xn, yn) ↔ (xn′, yn′), stack the
constraints into a linear system A p = y and solve for the homography parameters p in
least squares:
Matlab: p = A \ y;
Python: p = numpy.linalg.lstsq(A, y, rcond=None)[0]
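One common way to build that linear system is the direct linear transform (DLT). A sketch, using an SVD null-space solve rather than the slide's `A \ y` form (the two are equivalent up to how the scale of H is fixed):

```python
# Sketch of DLT homography estimation from point correspondences,
# solved in least squares via SVD.
import numpy as np

def fit_homography(src, dst):
    """Fit H so that dst ~ H [x, y, 1]^T (up to scale), from n >= 4 pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Least-squares null-space solution: last right singular vector of A.
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    pts_h = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return pts_h[:, :2] / pts_h[:, 2:]  # divide out the projective scale

# Recover a known homography (here a similarity) from four exact pairs.
H_true = np.array([[2.0, 0, 3], [0, 2.0, -1], [0, 0, 1]])
src = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])
dst = apply_homography(H_true, src)
H = fit_homography(src, dst)
err = np.abs(apply_homography(H, src) - dst).max()
```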
RANSAC for Homography Estimation
• RANSAC loop:
1. Select four feature pairs (at random)
2. Compute homography H (exact)
3. Compute inliers where SSD(pi′, H pi) < ε
4. Keep the largest set of inliers
5. Re-compute a least-squares estimate of H using all of the inliers