
Shanghai Jiao Tong University

Instructor: Xu Zhao

Computer Vision Class No.: AU7005


Spring 2023

Xu Zhao @ Shanghai Jiao Tong University

Lecture 2: Camera and Image Formation


Contents
❖ Pinhole cameras
๏ Camera obscura

๏ Pinhole model

❖ Camera with lens


❖ Depth of focus

❖ Field of view

❖ Lens aberrations

❖ Digital cameras

The structure of ambient light

Slide from Antonio Torralba



The Plenoptic Function
Adelson & Bergen, 91

❖ The intensity P can be parameterized as:


P(θ, ϕ, λ, t, x, y, z)
Slide from Antonio Torralba
Measuring the Plenoptic function

❖ Why is there no picture appearing on the paper?

Slide from Antonio Torralba


Let’s design a camera

❖ Idea 1:  put a piece of film in front of an object


❖ Do we get a reasonable image?

Slide by Steve Seitz


Let’s design a camera

❖ Add a barrier to block off most of the rays


๏ This reduces blurring
๏ The opening is known as the aperture
Slide by Steve Seitz

Camera obscura

❖ Basic principle known


to Mozi (470-390 BCE),
Aristotle (384-322 BCE)
❖ Drawing aid for artists:
described by Leonardo
da Vinci (1452-1519)

Illustration of Camera Obscura

James Hays

Camera obscura

"Reinerus Gemma-Frisius,
observed an eclipse of the sun at
Louvain on January 24, 1544,
and later he used this illustration
of the event in his book De Radio
Astronomica et Geometrica,
1545. It is thought to be the first
published illustration of a camera
obscura..."
Hammond, John H., The Camera
Obscura, A Chronicle

http://www.acmi.net.au/AIC/CAMERA_OBSCURA.html

Camera obscura: dark room

An attraction in the late 19th century, around the 1870s

http://brightbytes.com/cosite/collection2.html Adapted from R. Duraiswami


First photograph
Oldest surviving photograph – took 8 hours of exposure on a pewter plate
Photograph of the first photograph

Joseph Niépce, 1826; stored at UT Austin

Niepce later teamed up with Daguerre, who eventually created Daguerrotypes

Slide from James Tompkin



Pinhole model

❖ Captures pencil of rays – all rays through a single point


❖ The point is called Center of Projection (focal point)
❖ The image is formed on the Image Plane

Forsyth and Ponce


Perspective effects
Distant objects are smaller

Forsyth and Ponce


Perspective effects

Kristen Grauman
Parallel lines meet
❖ Parallel lines in the
scene intersect in the
image
❖ Converge in image on
horizon line

Forsyth and Ponce


Vanishing points
❖ Each direction in space has its own
vanishing point
❖ All lines going in that direction
converge at that point
❖ Exception: directions parallel to the
image plane
❖ All directions in the same plane have
vanishing points on the same line.
❖ The line is called the horizon, or
vanishing line for that plane.
Railroad tracks "vanishing" into the distance

❖ Good ways to spot faked images

Forsyth and Ponce

Vanishing points and vanishing lines

[Figure: vanishing point and vanishing line construction]

Fig. 8.16. Vanishing line formation. (a) The two sets of parallel lines on the scene plane converge to the vanishing points v1 and v2 in the image. The line l through v1 and v2 is the vanishing line of the plane. (b) The vanishing line l of a plane π is obtained by intersecting the image plane with a plane through the camera centre C and parallel to π.

Hartley & Zisserman; Forsyth & Ponce
[Figure: photo annotated with the vertical vanishing point (at infinity), the vanishing line, and two vanishing points]

Slide from Efros, Photo from Criminisi
Forsyth and Ponce
Projection properties
❖ Many-to-one: any points along same visual ray map to same
point in image
❖ Points → points
๏ But projection of points on the focal plane is undefined
❖ Lines → lines (collinearity preserved)
๏ A line through the focal point projects to a point
❖ Planes → planes
๏ A plane through the focal point projects to a line

❖ Distances and angles are not preserved


Perspective distortion

Image copyright 2007 SharkD, licensed CC-BY-SA 3.0 Source: F. Durand


Perspective and art

Raphael; Dürer, 1525

❖ Use of correct perspective projection indicated in 1st century B.C. frescoes


❖ Skill resurfaces in Renaissance: artists develop systematic methods to
determine perspective projection (around 1480-1515)

Perspective projection equations

[Figure: image plane, world point, focal length f, optical center, optical axis, camera frame (x′, y′, z′)]

(x, y, z) → (f x/z, f y/z)

Scene point → Image coordinates

Source: J. Ponce, S. Seitz
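A minimal numeric sketch of this mapping (not from the slides; the function name, focal length, and test points are illustrative):

import numpy as np

def project_pinhole(points_xyz, f):
    # Perspective projection (x, y, z) -> (f*x/z, f*y/z) for points in the camera frame.
    p = np.asarray(points_xyz, dtype=float)
    return np.stack([f * p[:, 0] / p[:, 2], f * p[:, 1] / p[:, 2]], axis=1)

# Two scene points on the same visual ray map to the same image point (many-to-one).
pts = np.array([[1.0, 2.0, 4.0],
                [2.0, 4.0, 8.0]])
print(project_pinhole(pts, f=1.0))   # both rows give (0.25, 0.5)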



Models

[Figure: pinhole camera geometry — C is the camera centre and p the principal point; a point at depth Z maps to height fY/Z on the image plane]


Vanishing point formulation
Line in 3-space:
x(t) = x0 + a t
y(t) = y0 + b t
z(t) = z0 + c t

Perspective projection of that line:
x'(t) = f x(t) / z(t) = f (x0 + a t) / (z0 + c t)
y'(t) = f y(t) / z(t) = f (y0 + b t) / (z0 + c t)

As t → ∞ (for c ≠ 0):
x'(t) → f a / c
y'(t) → f b / c

This tells us that any set of parallel lines (same a, b, c parameters) projects to the same point, called the vanishing point.
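A quick numeric check of this limit (an illustrative sketch; the point values and names below are my own):

import numpy as np

def project(p, f=1.0):
    # Pinhole projection of a point in the camera frame.
    x, y, z = p
    return np.array([f * x / z, f * y / z])

a_b_c = np.array([2.0, 1.0, 3.0])                  # common direction (a, b, c), c != 0
for x0 in (np.array([0.0, 0.0, 5.0]), np.array([4.0, -2.0, 9.0])):
    far_point = x0 + 1e6 * a_b_c                   # go far along each parallel line
    print(project(far_point))                      # both approach the same image point
print(a_b_c[:2] / a_b_c[2])                        # predicted vanishing point (f*a/c, f*b/c), with f = 1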
Dimensionality Reduction Machine (3D to 2D)

Point of observation

Figures © Stephen E. Palmer, 2002


Homogeneous coordinates
❖ Is this a linear transformation?
❖ No, division by z is nonlinear

❖ Trick: add one more coordinate


(x, y) ⇒ (x, y, 1)          homogeneous image coordinates
(x, y, z) ⇒ (x, y, z, 1)    homogeneous scene coordinates

❖ Converting from homogeneous coordinates


(x, y, w) ⇒ (x/w, y/w)
(x, y, z, w) ⇒ (x/w, y/w, z/w)

Slide by Steve Seitz
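A short sketch of these conversions (illustrative helper names, not from the slides):

import numpy as np

def to_homogeneous(p):
    # (x, y) -> (x, y, 1); (x, y, z) -> (x, y, z, 1)
    return np.append(np.asarray(p, dtype=float), 1.0)

def from_homogeneous(p):
    # (x, y, w) -> (x/w, y/w); (x, y, z, w) -> (x/w, y/w, z/w)
    p = np.asarray(p, dtype=float)
    return p[:-1] / p[-1]

print(to_homogeneous([3.0, 4.0]))         # [3. 4. 1.]
print(from_homogeneous([6.0, 8.0, 2.0]))  # [3. 4.]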

Perspective projection matrix


❖ Projection is a matrix multiplication using homogeneous coordinates:

[ 1  0  0    0 ]   [x]   [ x   ]
[ 0  1  0    0 ] · [y] = [ y   ]   ⇒   (f x/z, f y/z)
[ 0  0  1/f  0 ]   [z]   [ z/f ]
                   [1]

divide by the third coordinate


❖ In practice, split into lots of different coordinate transformations

2D point  =  Camera-to-pixel coord.  ×  Perspective projection  ×  World-to-camera coord.  ×  3D point
(3×1)        trans. matrix (3×3)         matrix (3×4)               trans. matrix (4×4)         (4×1)
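A minimal numpy sketch of the canonical projection matrix above applied to a homogeneous scene point (the focal length and point values are made up):

import numpy as np

f = 2.0
P = np.array([[1.0, 0.0, 0.0,     0.0],
              [0.0, 1.0, 0.0,     0.0],
              [0.0, 0.0, 1.0 / f, 0.0]])   # canonical perspective projection matrix

X = np.array([3.0, 1.5, 6.0, 1.0])         # homogeneous scene point (x, y, z, 1)
x = P @ X                                   # homogeneous image point (x, y, z/f)
print(x[:2] / x[2])                         # divide by the third coordinate -> (1.0, 0.5)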







Calibration
❖ The camera’s intrinsic and extrinsic parameters are needed to calibrate the geometry.
❖ Extrinsic: world frame → camera frame
❖ Intrinsic: image coordinates relative to the camera → pixel coordinates

2D point  =  Camera-to-pixel coord.  ×  Perspective projection  ×  World-to-camera coord.  ×  3D point
(3×1)        trans. matrix (3×3)         matrix (3×4)               trans. matrix (4×4)         (4×1)
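The pipeline above is commonly assembled as P = K [R | t]; a hedged sketch with made-up parameter values (the matrix names K, R, t are conventional, the numbers are not from the slides):

import numpy as np

# Intrinsics: focal lengths in pixels and the principal point (made-up values).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Extrinsics: rotation R and translation t, world frame -> camera frame.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])

P = K @ np.hstack([R, t])                   # 3x4 projection matrix
X_world = np.array([0.5, -0.25, 5.0, 1.0])  # homogeneous world point
x = P @ X_world
print(x[:2] / x[2])                         # pixel coordinates (360., 220.)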





Weak perspective

[Figure: perspective vs weak perspective projection; camera centre C, focal length f, reference plane at Z = d0]

Fig. 6.8. Perspective vs weak perspective projection. The action of the weak perspective camera is equivalent to orthographic projection onto a plane (at Z = d0), followed by perspective projection from the plane. The difference between the perspective and the weak perspective image point depends both on the distance ∆ of the point X from the plane, and the distance of the point from the principal ray.

❖ Perspective effects, but not over the scale of individual objects
❖ Collect points into a group at about the same depth, then divide each point by the depth of its group
❖ Adv: easy; Disadv: only approximate

Orthographic projection
Image World

❖ Special case of perspective projection – distance from the center of projection to the image plane is infinite
❖ Also called “parallel projection”

Three camera projections

Perspective projection Parallel (orthographic) projection

Antonio Torralba
Three camera projections
3D point → 2D image point

Perspective:               (x, y, z) → (f x/z, f y/z)
Weak perspective:          (x, y, z) → (f x/z0, f y/z0)
Orthographic projection:   (x, y, z) → (x, y)
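An illustrative comparison of the three models on one point (values and variable names are my own):

import numpy as np

f, z0 = 1.0, 10.0                       # focal length and reference depth (made up)
p = np.array([2.0, 1.0, 12.0])          # one scene point (x, y, z)

perspective      = (f * p[0] / p[2], f * p[1] / p[2])   # divide by the point's own depth z
weak_perspective = (f * p[0] / z0,   f * p[1] / z0)     # divide by a single reference depth z0
orthographic     = (p[0], p[1])                         # drop depth entirely

print(perspective, weak_perspective, orthographic)
# The weak-perspective approximation error grows with |z - z0|.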
Discussions

❖ From the viewpoint of image formation, can you design a depth camera to extract depth information for every pixel in a single image?
Pinhole size / aperture
❖ How does the size of the aperture affect the image
we’d get?

larger

smaller
Kristen Grauman
Adding a lens

focal point

f
❖ A lens focuses light onto the film
❖ Rays passing through the center are not deviated
❖ All parallel rays converge to one point on a plane
located at the focal length f Slide by Steve Seitz

Pinhole vs. lens


Cameras with lenses

optical center
focal point
(Center Of Projection)

❖ A lens focuses parallel rays onto a single focal point


❖ Gather more light, while keeping focus; make pinhole
perspective projection practical

Thin lens

Thin lens
❖ Rays entering parallel on
one side go through
focus on other, and vice Left focus Right focus

versa.
❖ In ideal case – all rays
from P imaged at P’.

Lens diameter d Focal length f


Thin lens equation

1/f = 1/u + 1/v

❖ Any object point satisfying this equation is in focus
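A small sketch solving the thin lens equation for the image distance, assuming u is the object distance and v the image distance (the numbers are illustrative):

def image_distance(f, u):
    # From 1/f = 1/u + 1/v, solve for the image distance v.
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.05                        # 50 mm lens, in meters
for u in (0.5, 2.0, 100.0):     # object distances in meters
    print(u, image_distance(f, u))
# As u grows, v approaches f: distant objects come into focus near the focal plane.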


Focus and depth of field

Slide by A. Efros
Focus and depth of field
❖ Thin lens: scene points
at distinct depths come
in focus at different
image planes.
❖ (Real camera lens
systems have greater
depth of field.)
“circles of confusion”

❖ Depth of field: distance between image planes where blur is tolerable.

Varying the aperture

Large aperture  = small DOF Small aperture = large DOF


Slide by A. Efros
How can we control the depth of field?

❖ Changing the aperture size affects depth of field
❖ A smaller aperture
increases the range in
which the object is
approximately in focus
❖ But small aperture
reduces amount of
light – need to
increase exposure
Slide by A. Efros
Depth from focus

Images from the same point of view, different camera parameters

→ 3D shape / depth estimates

[Figs. from H. Jin and P. Favaro, 2002]


Field of view

Angular measure
of portion of 3d
space seen by
the camera

Images from http://en.wikipedia.org/wiki/Angle_of_view


Kristen Grauman

Field of view depends on focal length


❖ As f gets smaller, the image becomes more wide angle
๏ more world points project onto the finite image plane
❖ As f gets larger, the image becomes more telescopic
๏ a smaller part of the world projects onto the finite image plane

from R. Duraiswami

Field of view depends on focal length

[Figure: two cameras with different focal lengths f]

FOV depends on focal length f and size d of the camera retina:

φ = tan⁻¹( d / 2f )
Smaller FOV = larger Focal Length

Slide by A. Efros
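A quick sketch of this relation (the sensor width and focal lengths below are illustrative):

import math

def half_angle_phi(d, f):
    # phi = arctan(d / 2f), with d the retina (sensor) size and f the focal length.
    return math.degrees(math.atan(d / (2.0 * f)))

d = 36.0                         # sensor width in mm (illustrative full-frame value)
for f in (24.0, 50.0, 200.0):    # focal lengths in mm
    phi = half_angle_phi(d, f)
    print(f, phi, 2 * phi)       # half angle and full field of view, in degrees
# Larger focal length -> smaller field of view.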
Real lenses
Lens Flaws: Chromatic Aberration
Lens has different refractive indices
for different wavelengths: causes
color fringing

Near Lens Center Near Lens Outer Edge


slides from Rob Fergus

Lens flaws: Spherical aberration

❖ Spherical lenses don’t focus light perfectly


❖ Rays farther from the optical axis focus closer

slides from Rob Fergus


Lens flaws: Vignetting

slides from Rob Fergus


Radial Distortion

No distortion Pin cushion Barrel

slides from Rob Fergus


Digital cameras

[Diagram: camera (optics → CCD array) → frame grabber → computer]

❖ Film -> sensor array


❖ Often an array of charge coupled devices
❖ Each CCD is a light-sensitive diode that converts photons (light energy) to electrons

CCD vs. CMOS


❖ CCD: transports the charge across
the chip and reads it at one corner
of the array. An analog-to-digital
converter (ADC) then turns each
pixel's value into a digital value by
measuring the amount of charge at
each photosite and converting that
measurement to binary form
❖ CMOS: uses several transistors at each pixel to amplify and move the charge using more traditional wires. The CMOS signal is digital, so it needs no ADC.

http://www.dalsa.com/shared/content/pdfs/CCD_vs_CMOS_Litwiller_2005.pdf
http://electronics.howstuffworks.com/digital-camera.htm

Digital images
Think of images as matrices taken from the CCD array.

[Figure: a height 500 × width 520 image, indexed by row i and column j]

Intensity: [0, 255]
im[176][201] has value 164; im[194][203] has value 37

Kristen Grauman
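A tiny sketch of this matrix view (a made-up array; a real image would come from the sensor or a file):

import numpy as np

im = np.random.randint(0, 256, size=(500, 520), dtype=np.uint8)  # height x width, intensities in [0, 255]
print(im.shape)       # (500, 520): 500 rows (i), 520 columns (j)
print(im[176, 201])   # intensity at row i = 176, column j = 201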

Color sensing in camera: Color filter array

Estimate missing
components from
neighboring values
(demosaicing)

Why more green? The human luminance sensitivity function peaks near green.

Bayer grid

[Plot: human luminance sensitivity function]



Demosaicing
❖ Simple interpolation
❖ Simple “edge aware” interpolation
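A rough sketch of the simple-interpolation idea for the green channel of an RGGB Bayer mosaic (my own illustration, not a library routine; a real pipeline would also handle borders and the red/blue channels, or use the edge-aware variant):

import numpy as np

def interpolate_green(mosaic):
    # Fill missing green values by averaging the 4 neighbors (simple, not edge-aware).
    # Assumes an RGGB pattern: green samples sit where (row + col) is odd.
    h, w = mosaic.shape
    green = mosaic.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == 0:               # red or blue site: green is missing here
                green[i, j] = (mosaic[i-1, j] + mosaic[i+1, j] +
                               mosaic[i, j-1] + mosaic[i, j+1]) / 4.0
    return green

mosaic = np.random.randint(0, 256, size=(6, 6))
print(interpolate_green(mosaic))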
Digital camera artifacts
❖ Noise
❖ low light is where you most notice noise

❖ light sensitivity (ISO) / noise tradeoff

❖ stuck pixels

❖ In-camera processing
❖ oversharpening can produce halos

❖ Compression
❖ JPEG artifacts, blocking

❖ Blooming
❖ charge overflowing into neighboring pixels

❖ Color artifacts
❖ purple fringing from microlenses

❖ white balance

Slide by Steve Seitz


Historical context
❖ Pinhole model: Mozi (470-390 BCE), Aristotle
(384-322 BCE)
❖ Principles of optics (including lenses):
Alhacen (965-1039 CE) 
Alhacen’s notes
❖ Camera obscura: Leonardo da Vinci
(1452-1519), Johann Zahn (1631-1707)
❖ First photo: Joseph Nicephore Niepce (1822)
❖ Daguerréotypes (1839)
❖ Photographic film (Eastman, 1889)
Niepce, “La Table Servie,” 1822
❖ Cinema (Lumière Brothers, 1895)
❖ Color Photography (Lumière Brothers, 1908)
❖ Television (Baird, Farnsworth, Zworykin, 1920s)
❖ First consumer camera with CCD: Sony
Mavica (1981)
❖ First fully digital camera: Kodak DCS100 CCD chip
(1990)
Slide credit: L. Lazebnik
Summary
❖ Image formation affected by geometry, photometry, and optics.
❖ Projection equations express how world points are mapped to the 2D image.
❖ Homogeneous coordinates allow a linear system for the projection equations.
❖ Lenses make pinhole model practical.
❖ Parameters (focal length, aperture, lens diameter,…) affect
image obtained.
