
EECE 5639 Computer Vision I

Lecture 1
Hough Transform, Model Fitting, Planar Unwarping

HW3 has been posted
Next Class: Planar Unwarping


Generalizing the H.T.

The H.T. can be used even if the curve does not have a simple analytic form!

1. Pick a reference point (xc, yc) inside the curve.
2. For each boundary point Pi = (xi, yi), i = 1, …, n:
   a. Draw the segment from Pi to (xc, yc).
   b. Measure its length ri and its orientation αi.
   c. Write the coordinates of (xc, yc) as a function of ri and αi:
      xc = xi + ri cos(αi)
      yc = yi + ri sin(αi)
   d. Record the gradient orientation φi at Pi.
3. Build a table with the (ri, αi) data, indexed by φi (a sketch of this step follows below).
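A minimal sketch of how this table (often called the R-table) might be built, assuming the edge points, their gradient orientations, and a chosen reference point are already available; the names edge_points, gradients, and ref are hypothetical:

import numpy as np
from collections import defaultdict

def build_r_table(edge_points, gradients, ref, n_bins=64):
    """Index (r, alpha) pairs by quantized gradient orientation phi."""
    xc, yc = ref
    table = defaultdict(list)
    for (xi, yi), phi in zip(edge_points, gradients):
        dx, dy = xc - xi, yc - yi
        r_i = np.hypot(dx, dy)            # length of the segment Pi -> (xc, yc)
        alpha_i = np.arctan2(dy, dx)      # its orientation, so xc = xi + r_i*cos(alpha_i)
        b = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        table[b].append((r_i, alpha_i))
    return table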

Generalizing the H.T.


Suppose there were m different gradient orientations (m <= n). Grouping the (r, α) pairs by orientation gives:

φ1: (r11, α11), (r12, α12), …, (r1n1, α1n1)
φ2: (r21, α21), (r22, α22), …, (r2n2, α2n2)
 ⋮
φm: (rm1, αm1), (rm2, αm2), …, (rmnm, αmnm)

where each pair satisfies xc = xi + ri cos(αi), yc = yi + ri sin(αi).

This is the H.T. table.

Generalized H.T. Algorithm:


Finds a rotated, scaled, and translated version of the curve:

1. Form an accumulator array A over possible reference points (xc, yc), scaling factors S, and rotation angles θ.
2. For each edge point (x, y) in the image:
   a. Compute the gradient orientation φ(x, y).
   b. For each (r, α) stored in the table under φ(x, y), and for each S and θ:
      xc = x + r(φ) S cos[α(φ) + θ]
      yc = y + r(φ) S sin[α(φ) + θ]
      A(xc, yc, S, θ)++
3. Find the maxima of A (a sketch of this voting loop follows below).
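A minimal sketch of the voting loop, assuming the table built by build_r_table above and quantized candidate values for S and θ; all parameter names are hypothetical:

import numpy as np

def generalized_hough(edge_points, gradients, r_table, scales, thetas, acc_shape, n_bins=64):
    """Vote in A(xc, yc, S, theta); maxima give instances of the stored shape."""
    H, W = acc_shape
    A = np.zeros((H, W, len(scales), len(thetas)), dtype=np.int32)
    for (xi, yi), phi in zip(edge_points, gradients):
        b = int((phi % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for r, alpha in r_table.get(b, []):
            for si, S in enumerate(scales):
                for ti, theta in enumerate(thetas):
                    xc = int(round(xi + r * S * np.cos(alpha + theta)))
                    yc = int(round(yi + r * S * np.sin(alpha + theta)))
                    if 0 <= xc < W and 0 <= yc < H:
                        A[yc, xc, si, ti] += 1    # cast one vote
    return A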

H.T. Summary
H.T. is a “voting” scheme:
Points vote for a set of parameters describing a line or curve.
The more votes for a particular set, the more evidence that the corresponding curve is present in the image.
Can detect MULTIPLE curves in one shot.
Computational cost increases with the number of parameters describing the curve.


Segmentation by Fitting
Fitting Lines
Using Least Squares Fitting Error:
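The error expression itself is missing from the extracted slide; the standard vertical least squares error it refers to (notation assumed) is

E(m, b) = \sum_{i=1}^{n} \big( y_i - (m x_i + b) \big)^2

Setting the derivatives with respect to m and b to zero gives a linear system (the normal equations), so the minimizer has a closed form. Measuring only vertical distances breaks down for near-vertical lines, which motivates the total least squares error below.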

Line fitting can be maximum likelihood - but the choice of model is important.


Fitting Lines

Using Total Least Squares Fitting Error:

Eigenvalue Problem!
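The expressions are again missing; the standard total least squares formulation behind the "Eigenvalue Problem!" remark, with the line written as a x + b y = d and the normal (a, b) of unit length (notation assumed), is

E(a, b, d) = \sum_{i=1}^{n} (a x_i + b y_i - d)^2 \quad \text{subject to } a^2 + b^2 = 1

Setting \partial E / \partial d = 0 gives d = a \bar{x} + b \bar{y}; substituting back,

E = \begin{pmatrix} a & b \end{pmatrix}
    \begin{pmatrix} \sum_i (x_i - \bar{x})^2 & \sum_i (x_i - \bar{x})(y_i - \bar{y}) \\
                    \sum_i (x_i - \bar{x})(y_i - \bar{y}) & \sum_i (y_i - \bar{y})^2 \end{pmatrix}
    \begin{pmatrix} a \\ b \end{pmatrix}

which, under the unit-norm constraint, is minimized by the eigenvector of this 2x2 scatter matrix with the smallest eigenvalue.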

Who came from which line?


Assume we know how many lines there are - but which lines are they?
  Easy, if we know who came from which line.
Possible strategies:
  Hough transform
  Incremental line fitting
  K-means (a minimal sketch follows below)
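One way to read the K-means strategy is to alternate between assigning each point to its nearest line and refitting each line to its assigned points. A minimal sketch under that reading; all names are hypothetical, points is an (n, 2) array, and each refit uses total least squares:

import numpy as np

def kmeans_lines(points, k, n_iters=20, seed=0):
    """Alternate assigning points to the nearest line and refitting each line (TLS)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(points))      # random initial assignment
    lines = np.zeros((k, 3))                           # each line as (a, b, d): a*x + b*y = d
    for _ in range(n_iters):
        for j in range(k):                             # refit line j by total least squares
            pts = points[labels == j]
            if len(pts) < 2:
                lines[j] = 0.0, 0.0, np.inf            # empty cluster: make it unattractive
                continue
            mean = pts.mean(axis=0)
            w, v = np.linalg.eigh(np.cov((pts - mean).T))
            a, b = v[:, 0]                             # smallest-eigenvalue eigenvector = line normal
            lines[j] = a, b, a * mean[0] + b * mean[1]
        dists = np.abs(points @ lines[:, :2].T - lines[:, 2])   # perpendicular distances, n x k
        labels = dists.argmin(axis=1)                  # reassign points to the closest line
    return lines, labels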


Slide by D.A. Forsyth

Slide by D.A. Forsyth

Inliers-Outliers

Loosely speaking, outliers are “bad data” points that do not fit the model. Points that fit the model are inliers.

Problems with Outliers
Least squares estimation is very sensitive to outliers!
A few outliers can GREATLY skew the result.


Outliers are not the only problem
Multiple structures can also skew results.
The fitting procedure implicitly assumes ONE instance.


Robustness
As we have seen, squared error can be a source of bias in the presence of noise points.
One fix is EM - we'll not do this in this class.
Another is an M-estimator:
  Square nearby, threshold far away.
A third is RANSAC:
  Search for good points.


Robust Estimation
Two steps:
  Classify data into INLIERS and OUTLIERS.
  Use only the INLIERS to fit the model.

RANSAC is an example of this approach.


Robustness and M-estimators

LSE methods are very sensitive to outliers: one bad point can have a tremendous effect on the solution.

An M-estimator is used to give different weights to the errors:

[The formula itself did not survive extraction; its annotations were: a parameter controlling the error influence, the residual error at each data point xi, and the set of parameters being fitted.]

Issues:

1. The objective being minimized is no longer linear:
   Solutions must be found iteratively.

2. Need to decide the scale parameter sigma.


Too small sigma

Too large sigma

Good sigma value:

RANSAC
RANdom SAmple Consensus
RANSAC procedure (line example)

[Sequence of figures stepping through the procedure on a 2D point set: sample a minimal subset, fit a line, count the points close to it, and repeat, keeping the fit with the largest consensus set.]
RANSAC
Choose a small subset of points uniformly at random.
Fit a model to that subset.
Anything that is close to the result is signal; all others are noise.
Refit using the signal points.
Do this many times and choose the best fit (a minimal sketch follows below).
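A minimal sketch of this loop for the line case; the parameter names (n_samples, threshold) and the total least squares refit at the end are assumptions, and points is an (n, 2) array:

import numpy as np

def ransac_line(points, n_samples=50, threshold=1.0, seed=0):
    """Sample point pairs, count points close to each candidate line, keep the best."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_samples):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2                        # normal of the line through the pair
        norm = np.hypot(a, b)
        if norm == 0:
            continue                                   # degenerate pair, skip
        a, b = a / norm, b / norm
        d = a * x1 + b * y1
        dist = np.abs(points @ np.array([a, b]) - d)   # perpendicular distances
        inliers = dist < threshold                     # "close" points are the signal
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    pts = points[best_inliers]                         # refit on the consensus set (TLS)
    mean = pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((pts - mean).T))
    a, b = v[:, 0]
    return (a, b, a * mean[0] + b * mean[1]), best_inliers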


ISSUES

– How many times?
  – Often enough that we are likely to have a good line.
– How big a subset?
  – Smallest possible.
– What does "close" mean?
  – Depends on the problem.
– What is a good line?
  – One where the number of nearby points is so big that it is unlikely to be all outliers.


Forsyth & Ponce


How Many Samples to Choose?

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability of choosing one inlier


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability of choosing s inliers


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that one or more points in the sample were outliers (a contaminated sample)

Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that all N samples were contaminated


Let’s see why …

•Probability that a point is an outlier: e


•Number of points in a sample: s
•Number of samples (we want to compute this): N
•Desired probability that we get a good sample: p

Probability that AT LEAST one of the N samples was NOT contaminated

How Many Samples?


Choose N so that, with probability p, at least one random
sample is free from outliers. E.g. p = 0.99
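Collecting the steps from the previous slides (this is the standard derivation the missing formula expresses):

P(one draw is an inlier) = 1 - e
P(all s draws in a sample are inliers) = (1 - e)^s
P(a sample is contaminated) = 1 - (1 - e)^s
P(all N samples are contaminated) = (1 - (1 - e)^s)^N

Requiring 1 - (1 - (1 - e)^s)^N >= p gives

N \ge \frac{\log(1 - p)}{\log\big( 1 - (1 - e)^s \big)}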

Line example

12 pts: n = 12
Sample size: s = 2
Outliers: 2, so e = 2/12 ≈ 17% (round up to 20%)
N = 5 gives 99% chance of getting a good sample
(trying every possible pair requires 66 trials!)
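As a check with the formula above, using the rounded-up outlier ratio e = 0.2:

(1 - e)^s = 0.8^2 = 0.64, \qquad N \ge \frac{\log(1 - 0.99)}{\log(1 - 0.64)} = \frac{\log 0.01}{\log 0.36} \approx 4.5 \;\Rightarrow\; N = 5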


Acceptable Consensus Set

Typically, terminate when the inlier ratio reaches the expected ratio of inliers.

After RANSAC

•RANSAC divides the data into inliers and outliers
•But it computes the estimate using only the MINIMAL sample
•We can improve the result by estimating the model using all the inliers:
•After RANSAC: re-estimate once more!


Fitting curves other than lines


In principle, an easy generalization:
The probability of obtaining a point, given a curve, is given by a negative exponential of distance squared.

• In practice, rather hard:
• It is generally difficult to compute the distance between a point and a curve.


Planar Unwarping
Thanks: Several of these slides are from Bob Collins
Remove Perspective Distortion

from Hartley & Zisserman

Review: Forward Projection

World coords (U, V, W) --[Mext]--> Camera coords (X, Y, Z) --[Mproj]--> Film coords (x, y) --[Maff]--> Pixel coords (u, v)

Folding the last two steps into the intrinsic matrix Mint = Maff Mproj maps camera coords directly to pixel coords. Folding all three gives a single 3x4 matrix M = Maff Mproj Mext from world coords to pixel coords, in homogeneous form:

\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \cong
\begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\
                m_{21} & m_{22} & m_{23} & m_{24} \\
                m_{31} & m_{32} & m_{33} & m_{34} \end{pmatrix}
\begin{pmatrix} U \\ V \\ W \\ 1 \end{pmatrix}



Recall

from R.Szeliski
Summary of 2D Transformations

from R.Szeliski
Projection of Points on Planar Surface

[Figure: a point on a world plane is mapped into camera coordinates by a rotation + translation, and then by perspective projection onto film coordinates (x, y).]



Projection of Planar Points

Projection of Planar Points (cont)

Homography H (planar projective transformation)
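The derivation itself is on the (missing) figure; the standard form, writing K for the camera intrinsics and r1, r2, r3, t for the columns of the rotation and the translation (notation assumed here): for a point (X, Y, 0) on the world plane Z = 0,

\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} \cong
K \begin{pmatrix} r_1 & r_2 & r_3 & t \end{pmatrix} \begin{pmatrix} X \\ Y \\ 0 \\ 1 \end{pmatrix}
= K \begin{pmatrix} r_1 & r_2 & t \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
= H \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}

so H = K [r1 r2 t] is a 3x3 matrix defined only up to scale.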



Special Case: Frontal Plane
What if the planar surface is perpendicular to the optic axis (the Z axis of the camera coordinate system)?
Then the world rotation matrix simplifies:

Frontal Plane
So the homography for a frontal plane simplifies:

Similarity Transformation!
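A sketch of why, assuming for illustration the simple intrinsics K = diag(f, f, 1): for a frontal plane the only remaining rotation is by an angle θ about the optic axis, so r1 = (cos θ, sin θ, 0)ᵀ and r2 = (-sin θ, cos θ, 0)ᵀ, and

H = K \begin{pmatrix} r_1 & r_2 & t \end{pmatrix}
  = \begin{pmatrix} f\cos\theta & -f\sin\theta & f t_x \\ f\sin\theta & f\cos\theta & f t_y \\ 0 & 0 & t_z \end{pmatrix}
  \cong \begin{pmatrix} s\cos\theta & -s\sin\theta & t_x' \\ s\sin\theta & s\cos\theta & t_y' \\ 0 & 0 & 1 \end{pmatrix}, \qquad s = f / t_z

i.e. a rotation plus a uniform scaling plus a translation: a similarity transformation.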

