Some slides from M. Agrawala, F. Durand, P. Debevec, A. Efros, R. Fergus, D. Forsyth, M. Levoy, and S. Seitz
The Plenoptic Function
Figure by Leonard McMillan
• Q: What is the set of all things that we can ever see? • A: The Plenoptic Function (Adelson & Bergen) • Let’s start with a stationary person and try to parameterize everything that she can see …
P(θ, φ) • is intensity of light – Seen from a single viewpoint – At a single time – Averaged over the wavelengths of the visible spectrum
P(θ, φ, λ)
• is intensity of light – Seen from a single viewpoint – At a single time – As a function of wavelength
P(θ, φ, λ, t)
• is intensity of light – Seen from a single viewpoint – Over time – As a function of wavelength
Holographic Movie P(θ, φ, λ, t, VX, VY, VZ) • is intensity of light – Seen from ANY viewpoint – Over time – As a function of wavelength
The Plenoptic Function P(θ, φ, λ, t, VX, VY, VZ) – Can reconstruct every possible view, at every moment, from every position, at every wavelength – Contains every photograph, every movie, everything that anyone has ever seen
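As a sketch of what the seven-parameter function means, here is a toy implementation; the scene (a single fuzzy light blob at the origin, flickering in time, peaking at a hypothetical 550 nm) is invented purely for illustration and is not part of the original slides:

```python
import math

def plenoptic(theta, phi, lam, t, vx, vy, vz):
    """Toy plenoptic function: radiance seen from viewpoint (vx, vy, vz),
    in direction (theta, phi), at wavelength lam (nm) and time t.
    The scene is a hypothetical glowing blob at the origin."""
    # Direction as a unit vector (spherical coordinates).
    dx = math.sin(theta) * math.cos(phi)
    dy = math.sin(theta) * math.sin(phi)
    dz = math.cos(theta)
    # Parameter of the closest approach of the ray to the origin.
    s = -(vx * dx + vy * dy + vz * dz)
    if s < 0:                    # the blob is behind the viewer
        return 0.0
    cx, cy, cz = vx + s * dx, vy + s * dy, vz + s * dz
    dist2 = cx * cx + cy * cy + cz * cz
    # Radiance falls off with miss distance, flickers over time,
    # and peaks at an (assumed) wavelength of 550 nm.
    return (math.exp(-dist2)
            * (0.5 + 0.5 * math.cos(t))
            * math.exp(-((lam - 550.0) / 100.0) ** 2))
```

Looking straight at the blob from (0, 0, 5) at t = 0 and 550 nm returns full radiance; looking away returns zero.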
Sampling the Plenoptic Function
Camera • A camera is a device for capturing and storing samples of the Plenoptic Function [Figure: lighting, surface, camera]
Building Better Cameras • Capture more rays! – Higher-density sensor arrays – Color cameras, multi-spectral cameras – Video cameras
Modify Optics: Wide-Angle Imaging • Multiple Cameras – Examples: Disney 55, McCutchen 91, Nalwa 96, Swaminathan & Nayar 99, Cutler et al. 02 • Catadioptric Imaging – Examples: Rees 70, Charles 87, Nayar 88, Yagi 90, Hong 91, Bogner 95, Yamazawa 95, Nalwa 96, Nayar 97, Chahl & Srinivasan 97
Catadioptric Cameras for 360° Imaging
Omnidirectional Image
Femto Photography • FemtoFlash • UltraFast Detector • Serious Sync • Computational Optics – A trillion frame per second camera – Kirmani, Hutchison, Davis, Raskar, ICCV 2009 – http://www.youtube.com/watch?v=9xjlck6W020
How Much Light is Really in a Scene? • Light transported throughout scene along rays – Anchor: any point in 3D space (3 coordinates) – Direction: any 3D unit vector (2 angles) – Total of 5 dimensions • Radiance remains constant along a ray as long as it is in empty space – Removes one dimension – Total of 4 dimensions [Figure: patches dA1 and dA2 exchanging radiance L1 = L2 through solid angles dω1 and dω2]
Ray • Ignoring time and color, one sample: P(θ, φ, VX, VY, VZ) • 5D – 3D position – 2D direction Slide by Rick Szeliski and Michael Cohen
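The 5D sample on this slide (3D position plus 2D direction) can be sketched as a small helper class; the names `Ray`, `direction`, and `point_at` are illustrative, not from any particular codebase:

```python
import math

class Ray:
    """A 5D plenoptic sample: a 3D anchor position and a 2D direction."""

    def __init__(self, x, y, z, theta, phi):
        self.origin = (x, y, z)
        self.theta, self.phi = theta, phi

    def direction(self):
        """Unit 3-vector corresponding to the (theta, phi) direction."""
        return (math.sin(self.theta) * math.cos(self.phi),
                math.sin(self.theta) * math.sin(self.phi),
                math.cos(self.theta))

    def point_at(self, s):
        """Point reached after travelling distance s along the ray."""
        d = self.direction()
        return tuple(o + s * di for o, di in zip(self.origin, d))
```

A ray with theta = π/2, phi = 0 points along the +x axis, so `point_at(2)` lands at (2, 0, 0).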
How can we use this? [Figure: lighting, surface, camera — no change in radiance along the ray from surface to camera]
Ray Reuse • Infinite line – Assume light is constant (vacuum) • 4D – 2D direction – 2D position – non-dispersive medium Slide by Rick Szeliski and Michael Cohen
Only need Plenoptic Surface
Light Field - Organization • 2D position • 2D direction (s, θ) Slide by Rick Szeliski and Michael Cohen
Light Field - Organization • 2D position • 2D position (s, u) • 2-plane parameterization Slide by Rick Szeliski and Michael Cohen
Light Field - Organization • 2D position (s, t) • 2D position (u, v) • 2-plane parameterization Slide by Rick Szeliski and Michael Cohen
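A minimal sketch of the two-plane parameterization: intersect the ray with two parallel planes and record the intersection coordinates. Placing the uv- and st-planes at z = 0 and z = 1 is an arbitrary choice made here for illustration:

```python
def two_plane_coords(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to 4D light-field coordinates (u, v, s, t): the points
    where it crosses the uv-plane (z = z_uv) and the st-plane (z = z_st)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray is parallel to the parameterization planes")
    a = (z_uv - oz) / dz          # parameter at the uv-plane crossing
    b = (z_st - oz) / dz          # parameter at the st-plane crossing
    u, v = ox + a * dx, oy + a * dy
    s, t = ox + b * dx, oy + b * dy
    return (u, v, s, t)
```

A ray straight down the z-axis maps to (0, 0, 0, 0); tilting it shifts s, t relative to u, v, which is exactly how the parameterization encodes direction.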
Light Field - Organization • Hold s, t constant • Let u, v vary • An image Slide by Rick Szeliski and Michael Cohen
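Once the light field is stored as a 4D array, the idea above (fix s, t; let u, v vary) is a plain array slice. The toy array below uses random values as stand-in data:

```python
import numpy as np

# A toy 4D light field L[s, t, u, v]: a 3x3 grid of viewpoints,
# each seeing a 4x4 image (synthetic values for illustration).
rng = np.random.default_rng(0)
lf = rng.random((3, 3, 4, 4))

# Holding (s, t) fixed and letting (u, v) vary yields one image:
image = lf[1, 2]          # the view from grid position s=1, t=2
```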
Light Field / Lumigraph
Light Field - Capture • Idea 1 – Move camera carefully over the s, t plane – Gantry Slide by Rick Szeliski and Michael Cohen
Gantry • Lazy Susan – Manually rotated • XY Positioner • Lights turn with Lazy Susan • Correctness by construction
Light Field . t.Rendering q For each output pixel • determine s. u. v • either • use closest discrete RGB • interpolate near values s u Slide by Rick Szeliski and Michael Cohen .
Light Field - Rendering • Nearest – closest s, t – closest u, v – draw it • Blend 16 nearest – quadrilinear interpolation Slide by Rick Szeliski and Michael Cohen
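The "blend 16 nearest" option above can be sketched as a quadrilinear (4D linear) interpolator over a light-field array; this is an illustrative implementation, not code from any rendering system:

```python
import numpy as np

def quadrilinear(lf, s, t, u, v):
    """Blend the 16 nearest samples of the 4D array lf[s, t, u, v]
    with quadrilinear interpolation weights."""
    coords = (s, t, u, v)
    lo = [int(np.floor(c)) for c in coords]     # lower grid corner
    fr = [c - l for c, l in zip(coords, lo)]    # fractional offsets
    out = 0.0
    for bits in range(16):                      # the 2^4 corners
        idx, w = [], 1.0
        for d in range(4):
            hi = (bits >> d) & 1
            i = min(max(lo[d] + hi, 0), lf.shape[d] - 1)  # clamp to grid
            idx.append(i)
            w *= fr[d] if hi else 1.0 - fr[d]
        out += w * lf[tuple(idx)]
    return out
```

At integer coordinates it reduces to the "nearest" option, returning the stored sample exactly; at fractional coordinates it blends all 16 neighbors.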
Devices for Capturing Light Fields (using geometrical optics), from big baseline to small baseline: • handheld camera [Buehler 2001] • camera gantry [Stanford 2002] • array of cameras [Wilburn 2005] • plenoptic camera [Ng 2005] • light field microscope [Levoy 2006]
Multi-Camera Arrays • Stanford’s 640 × 480 pixels × 30 fps × 128 cameras • synchronized timing • continuous streaming • flexible arrangement
Stanford Tiled Camera Array
What’s a Light Field Good For? • Synthetic aperture photography – Seeing through occlusions • Refocusing • Changing Depth of Field • Synthesizing images from novel viewpoints
Synthetic Aperture Photography [Vaish CVPR 2004] 45 cameras aimed at bushes
Synthetic Aperture Photography One image of people behind bushes Reconstructed synthetic aperture image
Synthetic Aperture Photography
Light Field Photography using a Handheld Light Field Camera — Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, and Pat Hanrahan, Proc. SIGGRAPH 2005 Source: M. Levoy
Lytro Light Field Camera • www.lytro.com • $400 (Nov. 2011) • 8× optical zoom, f/2 lens • 8 GB memory (350 1080 × 1080 images)
Conventional vs Light Field Camera Source: M. Levoy
Conventional vs Light Field Camera uv-plane st-plane Source: M. Levoy
Conventional vs Light Field Camera st-plane uv-plane Source: M. Levoy
Prototype Camera • Contax medium format camera • Kodak 16-megapixel sensor • Adaptive Optics microlens array – 125μ square-sided microlenses • 4000 × 4000 pixels ÷ 292 × 292 lenses = 14 × 14 pixels per lens
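The slide's resolution arithmetic can be checked directly (292 microlenses spanning 4000 pixels leaves roughly 14 pixels behind each lens):

```python
# Slide arithmetic: sensor pixels per side divided by microlenses per side.
sensor_px = 4000        # Kodak sensor, pixels per side
lenses = 292            # microlenses per side
px_per_lens = sensor_px / lenses   # ~13.7, i.e. ~14 x 14 pixels per lens
assert round(px_per_lens) == 14
```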
Mechanical Design • microlenses float 500μ above sensor • focused using 3 precision screws Source: M. Levoy
(b) illustrates microlenses at depths further than the focal plane.c b a a b c (a) illustrates microlenses at depths closer than the focal plane. The microlenses at this depth are constant in color because all the rays arriving at the microlens originate from the same point on the fingers. as it appears in the macroscopic image. the woman’s cheek appears on the left. This effect is due to inversion of the microlens’ rays as they pass through the world focal plane before arriving at the main lens. which reflect light diffusely. In these inverted microlens images. In these right-side up microlens images. opposite the macroscopic world. In contrast. the man’s cheek appears on the right. (c) illustrates microlenses on edges at the focal plane (the fingers that are clasped together). Finally. .
Digitally Stopping Down • stopping down = summing only the central portion of each microlens Source: M. Levoy
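Digital stopping-down as described above (summing only a central window of directional samples under each microlens) can be sketched like this; the `[y, x, v, u]` array layout is an assumption made for illustration, not any camera's actual data format:

```python
import numpy as np

def stop_down(lf, radius):
    """Digitally stop down: sum only the central (2*radius+1)^2 directional
    samples under each microlens.  lf is indexed [y, x, v, u], where (v, u)
    are the pixels behind one microlens."""
    c_v, c_u = lf.shape[2] // 2, lf.shape[3] // 2
    window = lf[:, :, c_v - radius:c_v + radius + 1,
                      c_u - radius:c_u + radius + 1]
    return window.sum(axis=(2, 3))
```

With a 5 × 5 patch per microlens, `radius=1` sums 9 directional samples (a small synthetic aperture) and `radius=2` sums all 25 (the full aperture).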
Digital Refocusing • refocusing = summing windows extracted from several microlenses Source: M. Levoy
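A shift-and-add sketch of digital refocusing: each sub-aperture image is shifted in proportion to its position in the aperture before summing. The `[v, u, y, x]` layout, integer shifts via `np.roll`, and the `alpha` scaling are simplifying assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing over a 4D light field lf[v, u, y, x]:
    shift each sub-aperture image by alpha times its (u, v) offset from
    the aperture center, then average.  alpha controls the new focal depth."""
    nv, nu, h, w = lf.shape
    cv, cu = nv // 2, nu // 2
    out = np.zeros((h, w))
    for v in range(nv):
        for u in range(nu):
            dy = int(round(alpha * (v - cv)))
            dx = int(round(alpha * (u - cu)))
            out += np.roll(lf[v, u], (dy, dx), axis=(0, 1))
    return out / (nv * nu)
```

For a constant scene the refocused image is unchanged regardless of `alpha`; for a real light field, varying `alpha` sweeps the plane of sharp focus.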
Example of Digital Refocusing Source: M. Levoy
Extending the Depth of Field • conventional photograph, main lens at f/4 • conventional photograph, main lens at f/22 • light field, main lens at f/4, after all-focus algorithm [Agarwala 2004]
Digitally Moving the Observer • moving the observer = moving the window we extract from the microlenses Source: M. Levoy
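Moving the observer — extracting the same off-center directional sample from under every microlens — reduces to a single slice; again the `[y, x, v, u]` layout is an illustrative assumption:

```python
import numpy as np

def move_observer(lf, du, dv):
    """Extract the directional sample offset (dv, du) from the center of
    every microlens, giving a view from a shifted position on the main-lens
    aperture.  lf is indexed [y, x, v, u]."""
    c_v, c_u = lf.shape[2] // 2, lf.shape[3] // 2
    return lf[:, :, c_v + dv, c_u + du]
```

Sweeping `(du, dv)` across the microlens window pans the virtual viewpoint across the aperture.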
Example of Moving the Observer Source: M. Levoy
Moving Backward and Forward Source: M. Levoy
Light Field Demos (Stanford)
Implications • Cuts the unwanted link between exposure (due to the aperture) and depth of field • Trades off (excess) spatial resolution for ability to refocus and adjust the perspective • Sensor pixels should be made even smaller, subject to the diffraction limit – 36mm × 24mm ÷ 2μ pixels = 216 megapixels – 18K × 12K pixels – 1800 × 1200 pixels × 10 × 10 rays per pixel Source: M. Levoy
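The pixel-budget arithmetic on this slide can be verified directly:

```python
# 36mm x 24mm sensor at 2-micron pixels:
width_px  = 36 * 1000 // 2    # 18,000 pixels across
height_px = 24 * 1000 // 2    # 12,000 pixels down
assert (width_px, height_px) == (18000, 12000)      # 18K x 12K
assert width_px * height_px == 216_000_000          # 216 megapixels
# Spent as 1800 x 1200 spatial samples with 10 x 10 rays per pixel:
assert (1800 * 10, 1200 * 10) == (width_px, height_px)
```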
Other Ways to Sample the Plenoptic Function • Moving in time: – Spatio-temporal volume: P(θ, φ, t) – Useful to study temporal changes – Long an interest of artists (Claude Monet, Haystacks studies)
Space-Time Images • Other ways to slice the plenoptic function: the x–y–t volume