
Fundamentals of Photogrammetry

Niclas Börlin, Ph.D.
niclas.borlin@cs.umu.se
Department of Computing Science
Umeå University, Sweden

April 1, 2014

Presentation

- Ph.D. in Computing Science (2000).
- Numerical Linear Algebra.
- Non-linear least squares with non-linear equality constraints.
- X-ray photogrammetry — Radiostereometry (RSA).
- Post doc at Harvard Medical School, Boston, MA.

Introduction — Radiostereometric analysis (RSA)

- Developed by Hallert (1960), Selvik (1974), Kärrholm (1989), Börlin (2002, 2006), Valstar (2005).
- Procedure:
  - Dual X-ray setup
  - Calibration cage
  - Marker measurements
  - Reconstruction of projection geometry
  - Motion analysis
- The software UmRSA Digital Measure is running in Europe, North America, Australia, and Asia, and has been used to produce 150+ scientific papers.

Introduction — Definition

Photogrammetry — measuring from photographs:
- photos — "light"
- gramma — "that which is drawn or written"
- metron — "to measure"

Definition in Manual of Photogrammetry, 1st ed., 1944, American Society for Photogrammetry:

    Photogrammetry is the science or art of obtaining reliable measurement by means of photographs.
Introduction — Overview

- Principles
- History
- Mathematical models
- Processing
- Applications

Principles

- Non-contact measurements. (Passive sensor.)
- Collinearity.
- Triangulation.
Principles — Collinearity

The collinearity principle is the assumption that
- the object points Q,
- the projection center C, and
- the projected points q
are collinear, i.e. lie on a straight line.

[Figure: object point Q, camera center C, and projected point q on a straight line.]

Principles — Triangulation

One image coordinate measurement (x, y) is too little to determine the object point coordinates (X, Y, Z). We need at least two measurements of the same point.

[Figure: a single ray through C1 and q1 leaves the depth of Q undetermined; a measurement q2 from a second camera resolves it.]
Principles — Triangulation (2)

The positions of object points are calculated by triangulation, i.e. from angles, but without any range values.

[Figure: rays from C1 and C2 at angles α and β over the base B intersect at Q.]

Other techniques

- Trilateration — ranges but no angles (GPS).
- Tachymetry — angles and ranges (surveying, laser scanning).
History — Overview

- Principles
- History
- Mathematical models
- Processing
- Applications

History — Pre-history

Geometry, perspective, pinhole camera model — Euclid (300 BC).

Leonardo da Vinci (1480):

    Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind this glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by the said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear.
First generation — Plane table photogrammetry

- First photograph — Niépce, 1825. Required 8 hour exposure.
- Glass negative — Herschel, 1839.
- First use of terrestrial photographs for topographic maps — Laussedat, 1849, "Father of photogrammetry". City map of Paris (1851).
- Film — Eastman, 1884.
- Architectural photogrammetry — Meydenbauer, 1893, coined the word "photogrammetry".
- Measurements were made on a map on a table; photographs were used to extract angles.

Second generation — Analog photogrammetry

- Stereocomparator (Pulfrich, Fourcade, 1901). Required coplanar photographs. Measurements made by floating mark.
- Aeroplane (Wright 1903). First aerial imagery from an aeroplane in 1909.
- Aerial survey camera for overlapping vertical photos (Messter 1915).
Second generation — Analog photogrammetry (2)

- Opto-mechanical stereoplotters (von Orel, Thompson, 1908; Zeiss 1921; Wild 1926). Allowed non-coplanar photographs.
- Wild A8 Autograph (1950).
- Relative orientation determined by 6 points in overlapping images — von Gruber points (1924).
- Photogrammetry — "the art of avoiding computations".

Third generation — Analytical photogrammetry

- Finsterwalder (1899) — equations for analytical photogrammetry, intersection of rays, relative and absolute orientation, least squares theory.
- von Gruber (1924) — projective equations and their differentials.
- Computer (Zuse 1941; Turing, Flowers, 1943; Aiken 1944).
- Schmid, Brown — multi-station analytical photogrammetry, bundle block adjustment (1953), adjustment theory.

    The [Ballistic Research] laboratory had a virtual global monopoly on electronic computing power. This unique circumstance combined with Schmid set the stage for the rapid transition from classical photogrammetry to the analytic approach. (Brown)

- Ackermann — independent models (1966).
Third generation — Analytical photogrammetry (2)

- Analytical plotter (Helava 1957) — image-map coordinate transformation by electronic computation, servo control.
- Zeiss Planicomp P3.
- Camera calibration (Brown 1966, 1971).
- Direct Linear Transform (DLT) (Abdel-Aziz, Karara, 1971).

Digital photogrammetry

- Charge-Coupled Device (CCD) (Boyle, Smith 1969).
- Landsat (1972).
- Digital camera (Sasson (Eastman Kodak) 1975 — 0.01 Mpixels).
- Flash memory (Masuoka (Toshiba) 1980).
- Matching (Förstner 1986, Gruen 1985, Lowe 1999).
- Projective geometry (Klein 1939).
- 5-point relative orientation (Nistér 2004).
Mathematical models — Overview

- Principles
- History
- Mathematical models
- Processing
- Applications

Preliminaries — Matrix multiplication

In the product C = AB, the element c_ij is the inner product of row i of A and column j of B, e.g. for a 3×3 matrix A and a 3×2 matrix B:

    [ a11 a12 a13 ] [ b11 b12 ]   [ c11 c12 ]
    [ a21 a22 a23 ] [ b21 b22 ] = [ c21 c22 ] ,
    [ a31 a32 a33 ] [ b31 b32 ]   [ c31 c32 ]

where, for example, c22 = a21 b12 + a22 b22 + a23 b32.
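As a concrete check of the element rule above, here is a minimal numpy sketch (the matrix values are arbitrary examples, not from the slides):

```python
import numpy as np

# C = A B for a 3x3 A and a 3x2 B; numpy's @ operator implements
# exactly the inner-product rule described above.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

C = A @ B  # 3x2 result

# Check one element by hand: c22 = a21*b12 + a22*b22 + a23*b32
c22 = A[1, 0] * B[0, 1] + A[1, 1] * B[1, 1] + A[1, 2] * B[2, 1]
# c22 equals C[1, 1]
```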
Preliminaries — Image plane placement

The projected coordinates q will be identical
- if a (negative) sensor is placed behind the camera center, or
- if a (positive) sensor is mirrored and placed in front of the camera center.

The collinearity equations

The collinearity equations

    [ x − xp ]       [ X − X0 ]
    [ y − yp ] = k R [ Y − Y0 ]
    [   −c   ]       [ Z − Z0 ]

describe the relationship between the object point (X, Y, Z)^T, the position C = (X0, Y0, Z0)^T of the camera center, and the orientation R of the camera.
The collinearity equations (2)

- The distance c is known as the principal distance or camera constant.
- The point qp = (xp, yp)^T is called the principal point.
- The ray passing through the camera center C and the principal point qp is called the principal ray.

The collinearity equations (3)

From

    [ x − xp ]       [ X − X0 ]            [ r11 r12 r13 ]
    [ y − yp ] = k R [ Y − Y0 ] ,  and R = [ r21 r22 r23 ] ,
    [   −c   ]       [ Z − Z0 ]            [ r31 r32 r33 ]

we can solve for k and insert:

    x = xp − c (r11(X − X0) + r12(Y − Y0) + r13(Z − Z0)) / (r31(X − X0) + r32(Y − Y0) + r33(Z − Z0)),
    y = yp − c (r21(X − X0) + r22(Y − Y0) + r23(Z − Z0)) / (r31(X − X0) + r32(Y − Y0) + r33(Z − Z0)).
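The two scalar equations above can be evaluated directly. A small Python sketch, assuming numpy; the function name and the example camera (R = I, camera at the origin, c = 1) are my own illustration:

```python
import numpy as np

def collinearity(Q, C0, R, c, xp=0.0, yp=0.0):
    """Project object point Q = (X, Y, Z) with the collinearity equations.

    C0 is the camera center, R the world-to-camera rotation, c the
    principal distance and (xp, yp) the principal point."""
    d = R @ (np.asarray(Q, float) - np.asarray(C0, float))
    x = xp - c * d[0] / d[2]
    y = yp - c * d[1] / d[2]
    return x, y

# With R = I and the camera at the origin, Q = (2, 4, -4) projects to
# (-c*2/(-4), -c*4/(-4)) = (0.5, 1.0):
x, y = collinearity([2.0, 4.0, -4.0], [0.0, 0.0, 0.0], np.eye(3), c=1.0)
```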
Projective geometry — Homogenous coordinates

In projective geometry, points, lines, etc. are represented by homogenous coordinates.
Any cartesian coordinates (x, y) may be transformed to homogenous coordinates by adding a unit value as an extra coordinate:

    (x, y)^T ↦ (x, y, 1)^T.

All homogenous vectors multiplied by a non-zero scalar k belong to the same equivalence class and correspond to the same object. Thus,

    (x, y, 1)^T  and  k (x, y, 1)^T = (kx, ky, k)^T,  k ≠ 0,

all correspond to the same 2D point (x, y)^T.

Homogenous coordinates (2)

Any homogenous vector (x1, x2, x3)^T, x3 ≠ 0, may be transformed to cartesian coordinates by dividing by the last element:

    (x1, x2, x3)^T ↦ (x1/x3, x2/x3, 1)^T.

- A homogenous vector (x1, x2, x3)^T with x3 = 0 is called an ideal point and is "infinitely far away" in the direction of (x1, x2).
- The point (0, 0, 0)^T is undefined.
- The space R^3 \ {(0, 0, 0)^T} is called the projective plane P^2.
- A homogenous point in P^2 has 2 degrees of freedom.
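The two conversions and the scale invariance are easy to demonstrate; a minimal sketch (helper names are my own):

```python
import numpy as np

def to_homogeneous(p):
    """(x, y) -> (x, y, 1)."""
    return np.append(np.asarray(p, float), 1.0)

def to_cartesian(h):
    """(x1, x2, x3) -> (x1/x3, x2/x3); x3 = 0 would be an ideal point."""
    h = np.asarray(h, float)
    if h[2] == 0:
        raise ValueError("ideal point (point at infinity)")
    return h[:2] / h[2]

h = to_homogeneous([3.0, 4.0])   # (3, 4, 1)
same = to_cartesian(5.0 * h)     # scaling by k != 0 changes nothing
```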

Interpretation of the projective plane P^2

A homogenous vector x ∈ P^2 may be interpreted as a line through the origin in R^3. The intersection with the plane x3 = 1 gives the corresponding cartesian coordinates.

[Figure: the vector x as a line through the origin O in R^3; its intersection with the plane x3 = 1 gives the cartesian point, while ideal points correspond to lines in the plane x3 = 0.]

Transformations

Transformation of homogenous 2D points may be described by multiplication by a 3×3 matrix:

    [ u ]   [ a11 a12 a13 ] [ x ]
    [ v ] = [ a21 a22 a23 ] [ y ] ,
    [ 1 ]   [ a31 a32 a33 ] [ 1 ]

or q = Ap.
Basic transformations — Translation

A translation of points in R^2 may be described using homogenous coordinates as

    q = T(x0, y0) p = [ 1 0 x0 ] [ x ]   [ x + x0 ]
                      [ 0 1 y0 ] [ y ] = [ y + y0 ] .
                      [ 0 0 1  ] [ 1 ]   [   1    ]

Basic transformations — Rotation

A rotation may be described using homogenous coordinates as

    R(ϕ) p = [ cos ϕ  −sin ϕ  0 ] [ x ]   [ x cos ϕ − y sin ϕ ]
             [ sin ϕ   cos ϕ  0 ] [ y ] = [ x sin ϕ + y cos ϕ ] .
             [   0       0    1 ] [ 1 ]   [         1         ]

Basic transformations — Scaling

Scaling of points in R^2 along the coordinate axes may be described using homogenous coordinates as

    q = S(k, l) p = [ k 0 0 ] [ x ]   [ kx ]
                    [ 0 l 0 ] [ y ] = [ ly ] .
                    [ 0 0 1 ] [ 1 ]   [ 1  ]

Combination of transformations

Combinations of transformations are constructed by matrix multiplication, e.g. a rotation about the point (x0, y0):

    q = T(x0, y0) R(ϕ) T(−x0, −y0) p.
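The three basic transformations and their composition can be sketched in a few lines of numpy (the 90-degree example values are my own):

```python
import numpy as np

def T(x0, y0):  # translation
    return np.array([[1, 0, x0], [0, 1, y0], [0, 0, 1]], float)

def R(phi):     # rotation about the origin
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

def S(k, l):    # scaling along the axes
    return np.diag([k, l, 1.0])

# Rotation by 90 degrees about the point (1, 0), composed exactly as
# in the slide: q = T(x0, y0) R(phi) T(-x0, -y0) p.
M = T(1, 0) @ R(np.pi / 2) @ T(-1, 0)
p = np.array([2.0, 0.0, 1.0])   # homogenous point (2, 0)
q = M @ p                       # -> homogenous point (1, 1)
```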
Transformation classes

Transformations may be classified based on their properties. The most important classes are
- Similarity (rigid-body transformation).
- Affinity.
- Projectivity (homography).

Similarity

A similarity transformation consists of a combination of rotations, isotropic scalings, and translations:

    [ s cos θ  −s sin θ  tx ]
    [ s sin θ   s cos θ  ty ]     or    [ sR  t ]
    [   0         0      1  ]          [ 0   1 ] ,

where the scalar s is the scaling, R is a 2×2 rotation matrix and t is the translation vector.
- A 2D similarity has 4 degrees of freedom.
- A similarity preserves angles (and "shape").

Affinity

For an affine transformation, the rotation and scaling are replaced by any non-singular 2×2 matrix A:

    [ a11 a12 tx ]
    [ a21 a22 ty ]     or    [ A  t ]
    [  0   0  1  ]           [ 0  1 ] .

- A 2D affinity has 6 degrees of freedom.
- An affinity preserves parallelity but not angles.

Projectivity (Homography)

A projectivity or homography consists of any non-singular 3×3 matrix H:

    [ h11 h12 h13 ]
    [ h21 h22 h23 ] .
    [ h31 h32 h33 ]

- A 2D projectivity has 8 degrees of freedom.
- A projectivity preserves neither parallelity nor angles.
The effect of different transformations

[Figure: a square mapped by a similarity, an affinity, and a projectivity.]

Planar rectification

If the coordinates of 4 points pi and their mappings qi = H pi in the image are known, we may calculate the homography H.
From each point pair pi = (xi, yi, 1)^T, qi = (xi′, yi′, 1)^T we get the following equations:

    [ xi′ ]   [ u/w ]           [ u ]   [ h11 h12 h13 ] [ xi ]
    [ yi′ ] = [ v/w ] ,  where  [ v ] = [ h21 h22 h23 ] [ yi ]
    [  1  ]   [  1  ]           [ w ]   [ h31 h32 h33 ] [ 1  ]

or

    xi′ = u/w = (h11 xi + h12 yi + h13) / (h31 xi + h32 yi + h33),
    yi′ = v/w = (h21 xi + h22 yi + h23) / (h31 xi + h32 yi + h33).

Planar rectification (2)

Rearranging,

    xi′ (h31 xi + h32 yi + h33) = h11 xi + h12 yi + h13,
    yi′ (h31 xi + h32 yi + h33) = h21 xi + h22 yi + h23.

These equations are linear in hij.
Given 4 points we get 8 equations, enough to uniquely determine H (up to scale), assuming the points are in "standard position", i.e. no 3 points are collinear.

Planar rectification (3)

Given H we may apply H⁻¹ to remove the effect of the homography.
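The rearranged equations can be stacked, two rows per point pair, and solved for the 9 elements of H up to scale. One common way is to take the null vector of the stacked matrix via an SVD; a sketch, with a hypothetical example homography of my own:

```python
import numpy as np

def homography_dlt(p, q):
    """Estimate H (up to scale) from >= 4 point pairs q_i ~ H p_i.

    p, q: (n, 2) arrays of cartesian 2D points."""
    rows = []
    for (x, y), (xp, yp) in zip(p, q):
        # x' (h31 x + h32 y + h33) = h11 x + h12 y + h13, moved to one side:
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(A)        # null vector = last row of Vt
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the arbitrary scale

# Map the unit square with a known H, then recover it:
H_true = np.array([[1.0, 0.2, 1.0],
                   [0.1, 1.0, 2.0],
                   [0.001, 0.002, 1.0]])
p = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
q = np.array([(H_true @ np.append(pt, 1))[:2] / (H_true @ np.append(pt, 1))[2]
              for pt in p])
H = homography_dlt(p, q)               # close to H_true up to scale
```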
The pinhole camera model

The most commonly used camera model is called the pinhole camera. In the pinhole camera model:
- All object points Q are projected via a central projection through the same point C, called the camera center.
- The object point Q, the camera center C, and the projected point q are collinear.
- A pinhole camera is straight line-preserving.

The central projection

If the camera center is at the origin and the image plane is the plane Z = c, the world coordinate (X, Y, Z)^T is mapped to the point (cX/Z, cY/Z, c)^T in space, or (cX/Z, cY/Z) in the image plane, i.e.

    (X, Y, Z)^T ↦ (cX/Z, cY/Z)^T.

The central projection (2)

[Figure: similar triangles; a point at height Y and depth Z projects to height cY/Z on the image plane at distance c from the camera center O.]

The central projection (3)

The corresponding expression in homogenous coordinates may be written as

    [ X ]     [ cX ]   [ c 0 0 0 ] [ X ]
    [ Y ]  ↦  [ cY ] = [ 0 c 0 0 ] [ Y ]
    [ Z ]     [ Z  ]   [ 0 0 1 0 ] [ Z ]
    [ 1 ]                          [ 1 ]
      Q         q          P

The matrix P is called the camera matrix and maps the world point Q onto the image point q.
In more compact form P may be written as

    P = diag(c, c, 1) [ I | 0 ],

where diag(c, c, 1) is a diagonal matrix and I is the 3×3 identity matrix.
The principal point

If the principal point is not at the origin of the image coordinate system, the mapping becomes

    (X, Y, Z)^T ↦ (cX/Z + px, cY/Z + py)^T,

where (px, py)^T are the image coordinates of the principal point qp.

The principal point (2)

In homogenous coordinates,

    [ X ]     [ cX/Z + px ]
    [ Y ]  ↦  [ cY/Z + py ]
    [ Z ]

becomes

    [ X ]     [ cX + Z px ]   [ c 0 px 0 ] [ X ]
    [ Y ]  ↦  [ cY + Z py ] = [ 0 c py 0 ] [ Y ]
    [ Z ]     [     Z     ]   [ 0 0 1  0 ] [ Z ]
    [ 1 ]                                  [ 1 ]

The camera calibration matrix

If we write

    K = [ c 0 px ]
        [ 0 c py ] ,
        [ 0 0 1  ]

the projection may be written as

    q = K [ I | 0 ] Q.

The matrix K is known as the camera calibration matrix.

The camera position and orientation

Introduce

    Q′ = (X′, Y′, Z′, 1)^T  and  q = K [ I | 0 ] Q′

to describe coordinates in the camera coordinate system.
The camera and world coordinate systems are identical if the camera center is at the origin, the X and Y axes coincide with the sensor coordinate system, and the Z axis coincides with the principal ray.
The camera position and orientation (2)

In the general case, the transformation between the coordinate systems is usually described as

    Q′ = [ X′ ]     ( [ X ]   [ X0 ] )
         [ Y′ ] = R ( [ Y ] − [ Y0 ] ) ,
         [ Z′ ]     ( [ Z ]   [ Z0 ] )

where C = (X0, Y0, Z0)^T is the camera center in world coordinates and the rotation matrix R describes the rotation from world coordinates to camera coordinates.

The camera position and orientation (3)

In homogenous coordinates, this transformation becomes

    [ Q′ ]   [ R 0 ] [ I −C ]       [ R −RC ]
    [ 1  ] = [ 0 1 ] [ 0  1 ] Q  =  [ 0   1 ] Q.

The full projection is given by

    q = K R [ I | −C ] Q.

The equation

    q = P Q = K R [ I | −C ] Q

is sometimes referred to as the camera equation. The 3×4 matrix P is known as the camera matrix.
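The camera equation can be assembled and applied in a few lines; a sketch with example values of my own (identity rotation, a camera shifted along X):

```python
import numpy as np

def camera_matrix(K, R, C):
    """P = K R [ I | -C ], with C the camera center in world coordinates."""
    return K @ R @ np.hstack([np.eye(3), -np.asarray(C, float).reshape(3, 1)])

def project(P, Q):
    """Apply q = P Q and return cartesian image coordinates."""
    q = P @ np.append(np.asarray(Q, float), 1.0)
    return q[:2] / q[2]

K = np.array([[2.0, 0.0, 0.1],     # c = 2, principal point (0.1, 0.2)
              [0.0, 2.0, 0.2],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
C = np.array([1.0, 0.0, 0.0])

# Camera coordinates of (3, 4, 2) are (2, 4, 2), so the image point is
# (c*2/2 + 0.1, c*4/2 + 0.2) = (2.1, 4.2):
x, y = project(camera_matrix(K, R, C), [3.0, 4.0, 2.0])
```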

The camera position and orientation (4)

If the transformation from the world to the camera is written as

    Q′ = (X′, Y′, Z′)^T = R ((X, Y, Z)^T − (X0, Y0, Z0)^T),

how does the transformation from the camera to the world look?

[Figure: camera coordinate system (X′, Y′, Z′) at C and world coordinate system (X, Y, Z) at O.]

Camera coordinates

What are the (Z) coordinates of points in front of the camera?

[Figure: a camera with image plane at −c observing an object point Q.]
The collinearity equations (revisited)

Given

    K = [ −c  0  xp ]
        [  0 −c  yp ] ,
        [  0  0  1  ]

the camera equation

    (x, y, 1)^T = q = K R [ I | −C ] Q

becomes

    x = xp − c (r11(X − X0) + r12(Y − Y0) + r13(Z − Z0)) / (r31(X − X0) + r32(Y − Y0) + r33(Z − Z0)),
    y = yp − c (r21(X − X0) + r22(Y − Y0) + r23(Z − Z0)) / (r31(X − X0) + r32(Y − Y0) + r33(Z − Z0)).

Internal and external parameters

The camera equation

    q = K R [ I | −C ] Q

that describes the general projection for a pinhole camera has 9 degrees of freedom: 3 in K (the elements c, px, py), 3 in R (rotation angles), and 3 for C.
The elements of K describe properties internal to the camera, while the parameters of R and C describe the relation between the camera and the world. The parameters are therefore called one of:

    K                      R, C
    internal parameters    external parameters
    internal orientation   external orientation
    intrinsic parameters   extrinsic parameters
    sensor model           platform model

Aspect ratio

If we have different scales in the x and y directions, i.e. the pixels are not square, we have to include that deformation in the equation.
Let mx and my be the number of pixels per unit in the x and y directions of the image. Then the camera calibration matrix becomes

    K = [ mx       ] [ c 0 px ]   [ mx c   0    mx px ]   [ αx       x0 ]
        [    my    ] [ 0 c py ] = [  0    my c  my py ] = [     αy   y0 ] ,
        [        1 ] [ 0 0 1  ]   [  0     0      1   ]   [           1 ]

where αx = c mx and αy = c my are the camera constant in pixels in the x and y directions and (x0, y0)^T = (mx px, my py)^T is the principal point in pixels.
A camera with unknown aspect ratio has 10 degrees of freedom.

Skew

For an even more general camera model we can add a skew parameter s to describe any non-orthogonality between the image axes. Then the camera calibration matrix becomes

    K = [ αx  s   x0 ]
        [  0  αy  y0 ] .
        [  0  0   1  ]

The complete 3×4 camera matrix

    P = K R [ I | −C ]

has 11 degrees of freedom, the same as a 3×4 homogenous matrix.
Rotations in R^3

- A rotation in R^3 is usually described as a sequence of 3 elementary rotations, by the so-called Euler angles.
- Warning: there are many different Euler angles and Euler rotations!
- Each elementary rotation takes place about a cardinal axis: x, y, or z.
- The sequence of axes determines the actual rotation.
- A common example is the ω-ϕ-κ (omega-phi-kappa or x-y-z) convention that corresponds to sequential rotations about the x, y, and z axes, respectively.

Elementary rotations (1)

The first elementary rotation (ω, omega) is about the x-axis. The rotation matrix is defined as

    R1(ω) = [ 1    0       0   ]
            [ 0  cos ω  −sin ω ] .
            [ 0  sin ω   cos ω ]

Elementary rotations (2)

The second elementary rotation (ϕ, phi) is about the y-axis. The rotation matrix is defined as

    R2(ϕ) = [  cos ϕ  0  sin ϕ ]
            [    0    1    0   ] .
            [ −sin ϕ  0  cos ϕ ]

Elementary rotations (3)

The third elementary rotation (κ, kappa) is about the z-axis. The rotation matrix is defined as

    R3(κ) = [ cos κ  −sin κ  0 ]
            [ sin κ   cos κ  0 ] .
            [   0       0    1 ]
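The three elementary matrices are straightforward to code. The composed example below uses one possible ω-ϕ-κ multiplication order; as the warning above notes, conventions differ, so the order is an illustrative assumption:

```python
import numpy as np

def R1(omega):  # elementary rotation about the x-axis
    c, s = np.cos(omega), np.sin(omega)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def R2(phi):    # elementary rotation about the y-axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def R3(kappa):  # elementary rotation about the z-axis
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# A sequential rotation of 20 degrees about each axis (one possible
# omega-phi-kappa composition):
omega, phi, kappa = np.radians([20.0, 20.0, 20.0])
R = R1(omega) @ R2(phi) @ R3(kappa)
# Any such product is a proper rotation: R^T R = I and det R = +1.
```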
Combined rotations

The axes follow the rotated object, so the second rotation is about a once-rotated axis, the third about a twice-rotated axis.
A sequential rotation of 20 degrees about each of the axes is...

Combined rotations (2)

...first a rotation about the x-axis...

[Figure: the coordinate frame after the ω rotation about the x-axis.]

Combined rotations (3)

...followed by a rotation about the once-rotated y-axis...

[Figure: the frame after the ϕ rotation about the once-rotated y-axis.]

Combined rotations (4)

...followed by a final rotation about the twice-rotated z-axis...

[Figure: the frame after the κ rotation about the twice-rotated z-axis.]
Combined rotations (5)

...resulting in a total rotation looking like this.

[Figure: the fully rotated frame (X‴, Y‴, Z‴) relative to (X, Y, Z).]

Rotations in R^3 (2)

- The inverse rotation is about the same axes in reverse sequence with angles of opposite sign.
- This is sometimes called roll-pitch-yaw, where the κ angle is called the roll angle.
- Other rotations: azimuth-tilt-swing (z-x-z), axis-and-angle, etc.
- Every 3-parameter description of a rotation has some rotation without a unique representation:
  - x-y-z if the middle rotation is 90 degrees,
  - z-x-z if the middle rotation is 0 degrees,
  - axis-and-angle when the rotation is zero (axis undefined).
- However, the rotation itself is always well defined.

Lens distortion

A lens is designed to bend rays of light to construct a sharp image.
A side effect is that the collinearity between incoming and outgoing rays is destroyed.

Lens distortion (2)

[Figure: positive radial distortion (pin-cushion) and negative radial distortion (barrel).]
Lens distortion (3)

The effect of lens distortion is that the projected point is moved toward or away from a point of symmetry.
The most common distortion model is due to Brown (1966, 1971). The distortion is separated into a symmetric (radial) and an asymmetric (tangential) part about the principal point:

    [ xc ]   [ xm ]   [ xr ]   [ xt ]
    [ yc ] = [ ym ] + [ yr ] + [ yt ] .
   corrected measured  radial  tangential

Warning: someone's positive distortion is someone else's negative!

Lens distortion (4)

The radial distortion is formulated as

    [ xr ]                          [ ∆x ]
    [ yr ] = (K1 r² + K2 r⁴ + ...)  [ ∆y ] ,

for any number of coefficients (usually 1–2), where r is a function of the distance to the principal point:

    r² = ∆x² + ∆y²,  and  [ ∆x ]   [ xm − xp ]
                          [ ∆y ] = [ ym − yp ] .

The tangential distortion is formulated as

    [ xt ]   [ 2 P1 ∆x ∆y + P2 (r² + 2∆x²) ]
    [ yt ] = [ 2 P2 ∆x ∆y + P1 (r² + 2∆y²) ] .
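The radial and tangential formulas transcribe directly into Python; the function name, argument order, and sign convention below are my own choices (as the warning notes, sign conventions vary):

```python
def brown_distortion(xm, ym, xp, yp, K_coeffs, P1, P2):
    """Radial + tangential correction for a measured point (xm, ym).

    K_coeffs is the list [K1, K2, ...]; (xp, yp) is the principal point.
    Returns the total correction (xr + xt, yr + yt)."""
    dx, dy = xm - xp, ym - yp
    r2 = dx * dx + dy * dy
    # Radial part: (K1 r^2 + K2 r^4 + ...) applied to (dx, dy).
    radial = sum(K * r2 ** (i + 1) for i, K in enumerate(K_coeffs))
    xr, yr = radial * dx, radial * dy
    # Tangential (de-centering) part:
    xt = 2 * P1 * dx * dy + P2 * (r2 + 2 * dx * dx)
    yt = 2 * P2 * dx * dy + P1 * (r2 + 2 * dy * dy)
    return xr + xt, yr + yt

# Purely radial example: point at unit distance along x, K1 = 0.1.
corr = brown_distortion(1.0, 0.0, 0.0, 0.0, [0.1], 0.0, 0.0)  # -> (0.1, 0.0)
```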

Lens distortion (5)

- The radial distortion follows from the fact that the lens bends rays of light. It is negligible only for large focal lengths.
- Any tangential distortion is due to de-centering of the optical axes of the various lens components. It is negligible except for high precision measurements.
- The lens distortion parameters are usually determined at camera calibration.
- The lens distortion varies with the focal length. To use a calibrated camera, the focal length (and hence any zoom) must be the same as during calibration.
- Warning: some internal parameters are strongly correlated, e.g. the tangential coefficients P1, P2 and the principal point. Any calibration including P1, P2 must have multiple images at roll angles 0 and 90 degrees.

Processing — Overview

- Principles
- History
- Mathematical models
- Processing
- Applications
Processing

Two typical processing chains (with control points, and with relative/absolute orientation):

    1 Camera calibration       1 Camera calibration
    2 Image acquisition        2 Image acquisition
    3 Measurements             3 Measurements
    4 Spatial resection        4 Relative orientation
    5 Forward intersection     5 Forward intersection
    6 (Bundle adjustment)      6 (Bundle adjustment)
                               7 (Absolute orientation)

Camera calibration

- Special cameras may be calibrated by measuring the deviation between input/output rays.
- Most of the time, camera calibration is performed by imaging a calibration object or scene.
- A 3D scene is preferable, but may be expensive.
- A 2D object is easier to manufacture and transport.

Camera calibration (2)

- Ideally, the calibration situation should mimic the actual scene.
- With a 2D object, multiple images must be taken.
- Remember: use the same focal setting during calibration and image acquisition!
- If possible, include rolled images of the calibration object.

Image acquisition

Camera networks:
- Parallel (stereo)
- Convergent
- Aerial
- Other
Stereo images

- Simplified measurements.
- Simplified automation.
- May be viewed in "3D".

[Figure: two parallel cameras C1, C2 observing Q through q1, q2.]

Convergent networks

- Stronger geometry.
- More than 2 measurements per object point.
- Should ideally surround the object.

[Figure: three convergent cameras C1, C2, C3 observing Q through q1, q2, q3.]

Aerial networks

- Highly structured.
- Typically around 60% overlap (along-track) and 30% sidelap (cross-track).

Measurements

[Figure: examples of image measurements.]
Spatial resection

- Determine the external orientation C, R of the camera from measurements and (ground) control points.
- Direct method from 3 points — solve a 4th order polynomial (Grunert 1841, Haralick 1991, 1994). May have multiple solutions.

[Figure: a camera C observing three control points Q1, Q2, Q3 through q1, q2, q3.]

Forward intersection

- If the camera external orientations are known, an object point may be estimated from measurements in (at least) two images.
- Requires at least two observations.
- Linear estimation, robust.

[Figure: two cameras C1, C2 with rays through q1, q2 intersecting at Q.]

Forward intersection (2)

From the left camera we know that

    Q = C1 + (v1 − C1) α1

for some value of α1, where v1 are the 3D coordinates of q1. Similarly, for the right camera,

    Q = C2 + (v2 − C2) α2.

We have 3+3 equations and 5 unknowns (Q, α1, α2).
In theory, the point Q is at the intersection of the two lines, so we could drop 1 equation and solve the remaining 5 to get Q.

Forward intersection (3)

In reality, the lines may not intersect. In that case, we may choose to find the point that is closest to both lines at the same time, i.e. that solves the following minimization problem:

    min over Q, α1, α2 of  ‖Q − l1(α1)‖² + ‖Q − l2(α2)‖²,

or

    min over Q, α1, α2 of  ‖ [ Q − (C1 + t1 α1) ] ‖²
                           ‖ [ Q − (C2 + t2 α2) ] ‖  ,

where ti = vi − Ci.
Forward intersection (4)

This problem is linear in the unknowns and may be rewritten as

    min over x of  ‖ [ I3  −t1   0  ] [ Q  ]   [ C1 ] ‖²
                   ‖ [ I3   0  −t2  ] [ α1 ] − [ C2 ] ‖  ,
                      \______A_____/  [ α2 ]   \_b__/
                                      \__x_/

The solution is given by the normal equations

    AᵀA x = Aᵀb.

Forward intersection (5)

Given one more camera, we extend the equation system:

    min over x of  ‖ [ I3  −t1   0    0  ] [ Q  ]   [ C1 ] ‖²
                   ‖ [ I3   0  −t2   0  ] [ α1 ] − [ C2 ] ‖  ,
                     [ I3   0    0  −t3 ] [ α2 ]   [ C3 ]
                                          [ α3 ]

with the solution again given by the normal equations AᵀA x = Aᵀb.
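The block structure above generalizes to any number of cameras and maps directly onto code. A sketch (function and argument names are my own; the two example rays intersect exactly at (1, 1, 1)):

```python
import numpy as np

def forward_intersection(Cs, vs):
    """Least-squares intersection of the rays Q = C_i + (v_i - C_i) a_i.

    Cs: camera centers; vs: a 3D point on each ray (e.g. the image
    point in world coordinates). Returns the estimated object point Q."""
    n = len(Cs)
    A = np.zeros((3 * n, 3 + n))
    b = np.zeros(3 * n)
    for i, (Ci, vi) in enumerate(zip(Cs, vs)):
        t = np.asarray(vi, float) - np.asarray(Ci, float)   # t_i = v_i - C_i
        A[3 * i:3 * i + 3, :3] = np.eye(3)                  # I3 block
        A[3 * i:3 * i + 3, 3 + i] = -t                      # -t_i block
        b[3 * i:3 * i + 3] = Ci
    x = np.linalg.solve(A.T @ A, A.T @ b)                   # normal equations
    return x[:3]                                            # Q = first 3 elements

Q = forward_intersection([np.zeros(3), np.array([2.0, 0.0, 0.0])],
                         [np.array([0.5, 0.5, 0.5]),
                          np.array([1.5, 0.5, 0.5])])
```

For ill-conditioned configurations a QR or SVD solve of the over-determined system is numerically preferable to forming AᵀA explicitly.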

Forward intersection (6)

Stereorestitution ("normal case"): two cameras O1, O2 with parallel principal rays, separated by the base B, observing the point P = (PX, PZ).

In photo 1:

    X = Z x1/(−c),  Y = Z y1/(−c).

In photo 2:

    X = B + Z x2/(−c),  Y = Z y2/(−c).

Forward intersection (7)

If we have non-zero y parallax, i.e. py = y1 − y2 ≠ 0, we must approximate. Otherwise,

    Z x1/(−c) = B + Z x2/(−c),

so

    −Z = Bc/(x1 − x2) = Bc/px.

Error propagation (first order):

    σZ = (Bc/px²) σx = (Z/c)(Z/B) σx.

- The ratio B/Z is the base/object distance ratio.
- The ratio Z/c is the scale factor.
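The depth-from-parallax formula and its first-order error propagation can be checked numerically; a sketch with example values of my own (1 m base, 0.1 m principal distance):

```python
# Depth from x parallax in the stereo normal case, following the
# formulas above (sign convention: objects in front have negative Z).
def depth_from_parallax(x1, x2, B, c):
    px = x1 - x2          # x parallax
    return -B * c / px    # Z

def sigma_Z(Z, B, c, sigma_x):
    # First-order error propagation: sigma_Z = (Z/c)(Z/B) sigma_x.
    return (Z / c) * (Z / B) * sigma_x

B, c = 1.0, 0.1
Z = depth_from_parallax(0.02, 0.01, B, c)   # px = 0.01, Z is about -10 m
s = sigma_Z(Z, B, c, 1e-5)                  # about 0.01 m
```

Note how the depth error grows with the square of the object distance: both the scale factor Z/c and the distance/base ratio Z/B enter.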
Relative orientation

- One camera fixed, determine position and orientation of the second camera.
- Need 5 point pairs measured in both images.
- No 3D information is necessary.
- Direct method (Nistér 2004). Solve a 10th order polynomial. May have multiple solutions.

[Figure: point pairs (p1, q1), ..., (p5, q5) measured in the two images from C1 and C2.]

Absolute orientation

- A 3D similarity transformation.
- 7 degrees of freedom (3 translations, 3 rotations, 1 scale).
- Direct method based on singular value decomposition (Arun 1987) for isotropic errors.

Bundle adjustment

- Simultaneous estimation of camera external orientation and object points.
- Iterative method, needs initial values.
- May diverge.

Applications

- Architecture
- Forensics
- Maps
- Industrial
- Motion analysis
- Movie industry
- Orthopaedics
- Space science
- Microscopy
- GIS
Epipolar geometry

Let Q be an object point and q1 and q2 its projections in two images through the camera centers C1 and C2.
The point Q, the camera centers C1 and C2, and the (3D points corresponding to) the projected points q1 and q2 will lie in the same plane.
This plane is called the epipolar plane for C1, C2 and Q.

Epipolar lines

Given a point q1 in image 1, the epipolar plane is defined by the ray through q1 and C1 and the baseline through C1 and C2.
A corresponding point q2 thus has to lie on the intersection line l2 between the epipolar plane and image plane 2.
The line l2 is the projection of the ray through q1 and C1 in image 2 and is called the epipolar line to q1.

Epipoles

The intersection points between the baseline and the image planes are called epipoles.
The epipole e2 in image 2 is the mapping of the camera center C1.
The epipole e1 in image 1 is the mapping of the camera center C2.

Examples

[Figure: example image pairs with epipolar lines and epipoles.]
Robust estimation — RANSAC

The Random Sample Consensus (RANSAC) algorithm (Fischler and Bolles, 1981) is an algorithm for handling observations with large errors (outliers).
Given a model and a data set S containing outliers:
1. Randomly pick s data points from the set S and calculate the model from these points. For a line, pick 2 points.
2. Determine the consensus set Si of s, i.e. the set of points being within t units from the model. The set Si defines the inliers in S.
3. If the number of inliers is larger than a threshold T, recalculate the model based on all points in Si and terminate.
4. Otherwise repeat with a new random subset.
5. After N tries, choose the largest consensus set Si, recalculate the model based on all points in Si and terminate.

[Figure: line fitting with outliers; candidate lines through sampled point pairs and their consensus sets.]
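The steps above can be sketched for the line-fitting case. A simplified version (the final re-fit on the consensus set and the early-termination threshold T are omitted for brevity; all names and test data are my own):

```python
import random

def fit_line(p, q):
    """Line through two points as (a, b, c) with a*x + b*y + c = 0
    and a^2 + b^2 = 1, so |a*x + b*y + c| is the point-line distance."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    a, b = a / n, b / n
    return a, b, -(a * x1 + b * y1)

def ransac_line(points, t=0.1, N=100, seed=0):
    """Return the largest consensus set found in N random trials."""
    rng = random.Random(seed)
    best = []
    for _ in range(N):
        p, q = rng.sample(points, 2)        # s = 2 points define a line
        a, b, c = fit_line(p, q)
        inliers = [(x, y) for x, y in points
                   if abs(a * x + b * y + c) <= t]
        if len(inliers) > len(best):
            best = inliers
    return best

# 8 points on the line y = x plus 2 gross outliers:
pts = [(float(i), float(i)) for i in range(8)] + [(0.0, 5.0), (5.0, 0.0)]
inl = ransac_line(pts, t=0.1)   # recovers the 8 collinear points
```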
