
EECE 5639 Computer Vision I

Lecture 1
Hough Transform, Model Fitting
HW3 has been posted
Next Class:
Planar Unwarping


More Image Features


(Grouping edges)
Hough Transform Technique
H.T. is a method for detecting straight lines (and curves) in images
Main idea:
Map a difficult pattern detection problem into a simple peak detection problem


Hough Transform Technique


Given an edge point, there is an infinite number of lines passing through it
(Vary m and n)
These lines can be represented as a line in parameter space.

[Figure: a point P(x, y) in image space maps to the line n = (-x) m + y in parameter space, where the axes are the slope m and the intercept n.]

Image vs. Parameter Spaces

Image Space              Parameter Space
Lines                    Points
Points                   Lines
Collinear points         Intersecting lines


Hough Transform Technique


Given a set of collinear edge points, each of them has an associated line in parameter space.
These lines intersect at the point (m, n) corresponding to the parameters of the line in the image space.


Hough Transform Technique


At each point of the (discrete) parameter space, count how many lines pass through it
Use an array of counters
Can be thought of as a “parameter image”
The higher the count, the more edges are collinear in the image space
Find a peak in the counter array
This is a “bright” point in the parameter image
It can be found by thresholding


Practical Issues
The slope of the line is -∞ < m < ∞
The parameter space is INFINITE
The representation y = mx + n cannot express lines of the form x = k


Solution:
Use the “Normal” equation of a line:

y = mx + n   →   ρ = x cos θ + y sin θ

[Figure: a point P(x, y) and a line through it; θ is the orientation of the normal to the line, and ρ is the distance between the origin and the line.]

New Parameter Space


Use the parameter space (ρ, θ)
The new space is FINITE
-D/2 < ρ < D/2, where D is the image diagonal
-90° ≤ θ ≤ 90°
The new space can represent all lines
y = k is represented with ρ = k, θ = 90°
x = k is represented with ρ = k, θ = 0°

[Figure: example lines illustrating the four sign combinations of ρ and θ.]


Consequence:
A point in image space is now represented as a SINUSOID:
ρ = x cos θ + y sin θ

Hough Transform Algorithm
Input is an edge image (E(i,j) = 1 for edgels)
1. Discretize θ and ρ in increments of dθ and dρ. Let A(R,T) be
an array of integer accumulators, initialized to 0
2. For each pixel with E(i,j) = 1 and for h = 1, 2, …, T do:
   1. ρ = i cos(h * dθ) + j sin(h * dθ)
   2. Find the closest integer k corresponding to ρ (i.e., k ≈ ρ / dρ)
   3. Increment counter A(h,k) by one
3. Find local maxima in A(R,T)
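A minimal NumPy sketch of steps 1-3, assuming the edge image is a 0/1 array E and that the image origin is at a corner (so ρ ranges over roughly [-D, D]); the function name and binning details are illustrative, not from the slides:

```python
import numpy as np

def hough_lines(E, d_theta=np.pi / 180, d_rho=1.0):
    """Accumulate (theta, rho) votes for an edge image E (E[i, j] = 1 for edgels)."""
    rows, cols = E.shape
    diag = np.hypot(rows, cols)                          # image diagonal D
    thetas = np.arange(-np.pi / 2, np.pi / 2, d_theta)   # discretized theta
    rhos = np.arange(-diag, diag, d_rho)                 # discretized rho
    A = np.zeros((len(thetas), len(rhos)), dtype=int)    # accumulator array, initialized to 0

    for i, j in zip(*np.nonzero(E)):                     # each edge pixel with E(i, j) = 1
        for h, theta in enumerate(thetas):
            rho = i * np.cos(theta) + j * np.sin(theta)
            k = min(int(round((rho + diag) / d_rho)), len(rhos) - 1)  # closest rho bin
            A[h, k] += 1                                 # increment counter A(h, k)
    return A, thetas, rhos

# Local maxima of A (e.g. cells above a threshold) correspond to detected lines.
```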


Hough Transform Speed Up


If we know the orientation of the edge – usually available from the edge detection step –
we can fix θ in the parameter space and increment only one counter!
We can allow for orientation uncertainty by incrementing a few counters around the “nominal” counter.
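A hedged variant of the previous sketch that uses this speed-up; the gradient-orientation input grad_theta (assumed already wrapped to [-π/2, π/2)) and the window half-width are assumptions:

```python
import numpy as np

def hough_lines_oriented(E, grad_theta, d_theta=np.pi / 180, d_rho=1.0, win=3):
    """Line HT that votes only in a small theta window around each edge's orientation."""
    rows, cols = E.shape
    diag = np.hypot(rows, cols)
    thetas = np.arange(-np.pi / 2, np.pi / 2, d_theta)
    rhos = np.arange(-diag, diag, d_rho)
    A = np.zeros((len(thetas), len(rhos)), dtype=int)

    for i, j in zip(*np.nonzero(E)):
        h0 = int(round((grad_theta[i, j] + np.pi / 2) / d_theta))      # nominal theta bin
        for h in range(max(0, h0 - win), min(len(thetas), h0 + win + 1)):
            rho = i * np.cos(thetas[h]) + j * np.sin(thetas[h])
            k = min(int(round((rho + diag) / d_rho)), len(rhos) - 1)
            A[h, k] += 1                                               # vote near the nominal bin only
    return A, thetas, rhos
```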


Hough Transf. & Noise


[Figures: example token sets and the corresponding votes in the accumulator array, showing how noise spreads the votes.]

Hough Transform for Curves
The H.T. can be generalized to detect any curve that can be expressed in parametric form:
y = f(x, a1, a2, …, ap)
a1, a2, …, ap are the parameters
The parameter space is p-dimensional
The accumulating array is LARGE!


HT for Circles
[Figure: a circle of radius R centered at (a, b) in image space; each edge point (x, y) satisfies (x - a)² + (y - b)² = R² and therefore votes along a circle of radius R in the (a, b) parameter space.]
HT for Circles
[Figure: the same circle, now with the gradient direction θ at an edge point; knowing θ restricts the center votes to points along the gradient direction, at distance R from the edge point.]
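A minimal sketch of circle voting for a known radius R, using the gradient direction at each edge point; the input arrays and the handling of the two possible center directions are assumptions:

```python
import numpy as np

def hough_circle_centers(E, grad_theta, R):
    """Vote for circle centers (a, b) for a fixed radius R.
    E[i, j] = 1 marks edge pixels; grad_theta[i, j] is the gradient direction there."""
    A = np.zeros(E.shape, dtype=int)                 # accumulator over center positions
    for y, x in zip(*np.nonzero(E)):
        t = grad_theta[y, x]
        # The center lies at distance R along the gradient; vote both ways since the
        # sign of the gradient direction is ambiguous.
        for sign in (+1, -1):
            a = int(round(x + sign * R * np.cos(t)))
            b = int(round(y + sign * R * np.sin(t)))
            if 0 <= a < E.shape[1] and 0 <= b < E.shape[0]:
                A[b, a] += 1                         # one vote per candidate center
    return A                                         # peaks of A are circle centers

# For an unknown radius, loop over candidate R values and add a third accumulator axis.
```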

Generalizing the H.T.

The H.T. can be used even if the curve does not have a simple analytic form!

1. Pick a reference point (xc, yc).
2. For i = 1, …, n:
   a. Draw a segment from (xc, yc) to Pi on the boundary.
   b. Measure its length ri and its orientation αi.
   c. Write the coordinates of (xc, yc) as a function of ri and αi:
      xc = xi + ri cos(αi)
      yc = yi + ri sin(αi)
   d. Record the gradient orientation φi at Pi.
3. Build a table with the data, indexed by φi.


Generalizing the H.T.


Suppose there were m different gradient orientations (m ≤ n):

φ1 : (r11, α11), (r12, α12), …, (r1n1, α1n1)
φ2 : (r21, α21), (r22, α22), …, (r2n2, α2n2)
 ⋮
φm : (rm1, αm1), (rm2, αm2), …, (rmnm, αmnm)

H.T. table

Generalized H.T. Algorithm:


Finds a rotated, scaled, and translated version of the curve:

1. Form an accumulator array A over the possible reference points (xc, yc), scaling factors S, and rotation angles θ.
2. For each edge point (x, y) in the image:
   a. Compute φ(x, y).
   b. For each (r, α) corresponding to φ(x, y), and for each S and θ:
      xc = x + r(φ) S cos[α(φ) + θ]
      yc = y + r(φ) S sin[α(φ) + θ]
      A(xc, yc, S, θ)++
3. Find maxima of A.
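A hedged sketch of the table construction and of the translation-only special case of the voting loop (S = 1, θ = 0); the function names, the φ binning, and the input format are assumptions:

```python
import numpy as np
from collections import defaultdict

N_PHI_BINS = 64                                    # assumed quantization of gradient orientation

def phi_bin(phi):
    return int((phi % (2 * np.pi)) / (2 * np.pi) * N_PHI_BINS) % N_PHI_BINS

def build_ht_table(boundary_pts, boundary_phis, xc, yc):
    """Index (r, alpha) pairs by quantized gradient orientation phi (the H.T. table)."""
    table = defaultdict(list)
    for (x, y), phi in zip(boundary_pts, boundary_phis):
        r = np.hypot(xc - x, yc - y)               # length of the segment to the reference point
        alpha = np.arctan2(yc - y, xc - x)         # its orientation
        table[phi_bin(phi)].append((r, alpha))
    return table

def ght_vote(edge_pts, edge_phis, table, shape):
    """Translation-only generalized Hough voting (no scaling or rotation)."""
    A = np.zeros(shape, dtype=int)
    for (x, y), phi in zip(edge_pts, edge_phis):
        for r, alpha in table[phi_bin(phi)]:
            xc = int(round(x + r * np.cos(alpha)))
            yc = int(round(y + r * np.sin(alpha)))
            if 0 <= xc < shape[1] and 0 <= yc < shape[0]:
                A[yc, xc] += 1                     # vote for the reference point (xc, yc)
    return A  # for the full version, loop over S and theta and use a 4-D accumulator
```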


H.T. Summary
H.T. is a “voting” scheme
points vote for a set of parameters describing a line or curve
The more votes for a particular set,
the more evidence that the corresponding curve is present in
the image
Can detect MULTIPLE curves in one shot
Computational cost increases with the number of parameters
describing the curve.


Segmentation by Fitting
Fitting Lines
Using Least Squares Fitting Error:
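The slide's equation is not reproduced in this text; in its standard form, the least squares fitting error for the line y = mx + n is the sum of squared vertical residuals:

```latex
E(m, n) = \sum_{i=1}^{N} \bigl( y_i - (m x_i + n) \bigr)^2
```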

Line fitting can be maximum likelihood - but the choice of model is important


Fitting Lines

Using Total Least Squares Fitting Error:

Eigenvalue Problem!
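A minimal sketch of the total least squares fit, writing the line as ax + by = d with a² + b² = 1 so that minimizing the sum of squared perpendicular distances becomes an eigenvalue problem; the variable names and the example points are illustrative:

```python
import numpy as np

def tls_line_fit(points):
    """Fit a line ax + by = d minimizing perpendicular distances (total least squares)."""
    mean = points.mean(axis=0)              # the TLS line passes through the centroid
    centered = points - mean
    scatter = centered.T @ centered         # 2x2 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(scatter)
    a, b = eigvecs[:, 0]                    # eigenvector with the SMALLEST eigenvalue = line normal
    d = a * mean[0] + b * mean[1]
    return a, b, d

pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.1], [3.0, 3.0]])
print(tls_line_fit(pts))   # roughly the line y = x, i.e. normal proportional to (1, -1)
```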


Who came from which line?


Assume we know how many lines there are - but which lines are they?
Easy, if we know who came from which line
Possible strategies:
Hough transform
Incremental line fitting
K-means


Slide by D.A. Forsyth

Slide by D.A. Forsyth

Inliers-Outliers

Loosely speaking, outliers are “bad data” points that do not fit the model. Points that fit the model are inliers.

Problems with Outliers
Least squares estimation is very sensitive to outliers:
A few outliers can GREATLY skew the result.


Outliers are not the only problem
Multiple structures can also skew results
The fitting procedure implicitly assumes ONE instance.


Robustness
As we have seen, squared error can be a source of bias in the presence of noise points
One fix is EM - we will not do this in this class
Another is an M-estimator
Square nearby, threshold far away
A third is RANSAC
Search for good points


Robust Estimation
Two steps:
Classify data into INLIERS and OUTLIERS
Use only INLIERS to fit the model

RANSAC is an example of this approach.


Robustness and M-estimators

LSE methods are very sensitive to outliers: one bad point can have a tremendous effect on the solution.

An M-estimator is used to give different weights to the errors:

Σi ρ( ri(xi, a); σ )
where ri(xi, a) is the residual error at point xi, a is the set of parameters being fitted, and σ is a parameter controlling the error's influence.

Issues:

1. The minimization objective is no longer linear:
   solutions must be found iteratively.

2. Need to choose the scale parameter σ.
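A hedged sketch of one common way these issues are handled in practice: iteratively reweighted least squares for a line fit, here with the robust function ρ(u; σ) = u² / (σ² + u²). The choice of ρ, the least-squares initialization, and the fixed iteration count are assumptions, not necessarily what the slide's equation uses:

```python
import numpy as np

def robust_line_fit(x, y, sigma=1.0, n_iters=20):
    """Fit y = m x + n by iteratively reweighted least squares with a robust weight."""
    m, n = np.polyfit(x, y, 1)                    # ordinary least squares initialization
    for _ in range(n_iters):
        r = y - (m * x + n)                       # residuals under the current parameters
        w = sigma**2 / (sigma**2 + r**2)**2       # weights proportional to rho'(r) / r
        A = np.vstack([x, np.ones_like(x)]).T     # design matrix for the weighted LS step
        W = np.diag(w)
        m, n = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return m, n
```

Too small a σ downweights almost everything; too large a σ behaves like ordinary least squares, which is the trade-off the next slides illustrate.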


Too small sigma

Too large sigma

Good sigma value:

RANSAC
RANdom SAmple Consensus
RANSAC procedure (line example)

[Figure sequence: step-by-step RANSAC iterations on the line example.]

RANSAC
Choose a small subset uniformly at random
Fit to that
Anything that is close to the result is signal; all others are noise
Refit
Do this many times and choose the best
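A minimal sketch of this loop for the line case; the inlier threshold, the iteration count, and the input format (an N×2 array of points) are assumptions:

```python
import numpy as np

def ransac_line(points, n_iters=100, inlier_thresh=1.0, rng=np.random.default_rng(0)):
    """Fit a line to 2-D points with RANSAC; returns the best inlier mask and sample pair."""
    best_inliers, best_pair = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)   # minimal random subset
        p1, p2 = points[i], points[j]
        d = p2 - p1
        norm = np.hypot(*d)
        if norm == 0:
            continue
        normal = np.array([-d[1], d[0]]) / norm                 # unit normal of the candidate line
        dist = np.abs((points - p1) @ normal)                   # point-to-line distances
        inliers = dist < inlier_thresh                          # "close" points count as signal
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_pair = inliers, (p1, p2)
    return best_inliers, best_pair          # refit (e.g. total least squares) on the inliers afterwards
```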


ISSUES

– How many times?


– Often enough that we are likely to have a good line
– How big a subset?
– Smallest possible
– What does close mean?
– Depends on the problem
– What is a good line?
– One where the number of nearby points is so big it is
unlikely to be all outliers


Forsyth & Ponce


How Many Samples to Choose?

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability of choosing one inlier: 1 − e


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability of choosing s inliers: (1 − e)^s


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that one or more points in the sample were outliers (contaminated sample): 1 − (1 − e)^s

Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that all N samples were contaminated: [1 − (1 − e)^s]^N


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that AT LEAST one of the N samples was NOT contaminated: 1 − [1 − (1 − e)^s]^N

How Many Samples?


Choose N so that, with probability p, at least one random
sample is free from outliers. E.g. p = 0.99
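Combining the probabilities from the previous slides (standard derivation; the slide's own equation is not reproduced in this text):

```latex
1 - \left[ 1 - (1 - e)^{s} \right]^{N} \geq p
\quad \Longrightarrow \quad
N \geq \frac{\log(1 - p)}{\log\left[ 1 - (1 - e)^{s} \right]}
```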

Line example

12 pts: n = 12
Sample size: s = 2
2 outliers: e = 2/12 = 1/6 ≈ 17%, rounded up to 20%
N = 5 gives 99% chance of getting a good sample
(trying every possible pair requires 66 trials!)
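A quick check of this example with the formula above, rounding the outlier ratio up to 20% as the slide does:

```python
import math

p, e, s = 0.99, 0.20, 2   # desired confidence, outlier ratio, sample size
N = math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))
print(N)                  # 5 -- versus C(12, 2) = 66 exhaustive pairs
```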


Acceptable Consensus Set

Typically, terminate when the inlier ratio reaches the expected ratio of inliers.

After RANSAC

•RANSAC divides the data into inliers and outliers


•But it computes the estimate from only the MINIMAL sample
•We can improve the result by estimating the model using all
inliers:
•After RANSAC: estimate once more!


Fitting curves other than lines


In principle, an easy generalization
The probability of obtaining a point, given a curve, is given by a negative exponential of distance squared

• In practice, rather hard

• It is generally difficult to compute the distance between a point and a curve

