
Dialogue Concerning Two Imaging Systems

Galileo Galilei
November 24, 2009
Sagredo: HDR is definitely a cool technique, i just don’t get why
HDR images all look so HDR-y (instead of just looking correctly
exposed everywhere)

Salviati: @ Sagredo: good observation.
This is probably because the wide dynamic range captured with
the technique must ultimately be compressed to a much narrower
range for storage and display. (E.g. an 8-bit-per-channel JPEG.) Since this is done
differently for different parts of the image, the end result looks unre-
alistic.
The best thing to do would be to somehow capture all of the dy-
namic range in the original image in one exposure and present it on
some kind of projectable transparency.
#grandmaglance: that’s basically what slide film does.
Salviati: Also, HDR needs to get beyond its "look at me, I’m HDR"
aesthetic. It’s really just one more tool in the ever-growing digital
photography toolbox.
Sagredo: "This is probably because the wide dynamic range cap-
tured with the technique must ultimately be compressed to a much
narrower range for storage and display." – i don’t get this concept
take [Figure 1] and [Figure 2] for example.
the first correctly exposes the sky and underexposes the sign,
and the second overexposes the sky and correctly exposes the sign.
intuitively it seems like an HDR image could take the sky from the
first and the sign from the second without compressing anything into
a narrower range or whatever

[Figure 1: Tom’s image correctly exposing the sign.]

Salviati: "Intuitively it seems like an HDR image could take the
sky from the first and the sign from the second."
That’s what it does. And that’s why it looks unnatural! We’re
probably pretty good at understanding light and shade in visual
perception, so we can quickly recognize an artificial scene. This is
probably why photographs taken in lighting conditions remote from
our experience look fake, e.g. those taken on the Moon.

[Figure 2: Tom’s image correctly exposing the building.]
Sagredo: but i don’t understand why JPEG would have any prob-
lem storing this kind of dynamic range
Salviati: Because it stores only 8 bits (values 0-255) per R, G, and B
channel. That’s not nearly enough range to realistically represent a
scene where the darkest point might be many stops dimmer than the
brightest point without compressing it "into a narrow range or whatever."
A good digital camera sensor, for example, should capture 12 or more
bits per channel.
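If it helps to see the arithmetic, here is a rough back-of-the-envelope
sketch in Python (my own illustration; it treats the encoding as linear
and ignores the gamma curve that real JPEGs use):

# Each extra bit doubles the number of representable levels, so a
# linear encoding holds roughly one stop of dynamic range per bit.
for bits in (8, 12, 14):
    print(f"{bits}-bit linear encoding: {2 ** bits} levels, ~{bits} stops")

# A scene spanning 16 stops has a contrast ratio of 2**16 = 65,536:1,
# far beyond what 256 linear levels per channel can represent.
print(f"16-stop scene: {2 ** 16}:1 contrast ratio")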

Sagredo: so what’s going on [in Figure 3]?
(i copied the sign from the image in which it was correctly ex-
posed)
Sagredo: PUT ANOTHER WAY: i can increase the exposure on
dark areas without blowing out light areas in lightroom by moving
the "fill light" slider right. however, this introduces artifacts when you
move it too far right.

[Figure 3: Tom’s image correctly exposing the building.]
isn’t HDR just extreme fill light without the artifacts? (even if it
isn’t, couldn’t it be? i.e., if the JPEG dynamic range supports fill light,
why can’t it support the full dynamic range of an HDR?)
Salviati: I don’t have Aperture, so I’m going to have to guess what
the "fill light" slider does. It sounds like it is selectively brightening
the dark areas. HDR tools selectively merge bright and dark areas
from different images, so the results can be similar.
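Here is a minimal sketch of that merging step (my own illustration,
assuming pixel values are linear in light; real cameras need a
response-curve calibration first, as in Debevec and Malik’s method):

import numpy as np

def merge_exposures(images, exposure_times):
    # Weighted average of per-exposure radiance estimates.
    # Mid-tone pixels are trusted most (a simple "hat" weight).
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)  # 0 at the extremes, 1 at mid-gray
        acc += w * (z / t)               # divide by exposure time -> radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)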
The HDR look doesn’t come from this process but from the "tone
mapping" necessary to make the merged file viewable on displays
with limited dynamic range. This is the process I alluded to above.
(http://en.wikipedia.org/wiki/Tone_mapping)
Using your sign example, suppose the real scene has light val-
ues on some arbitrary scale of "EV," where each EV differs by one
aperture stop. (Remember each stop represents 2X more light than
the stop before.) Perhaps the sign has values from 16 to 20 EV and
the darkest part of the scene has values from 1 to 4 EV. You capture
correctly exposed images of each so you now have two digital files.
Assume for simplicity’s sake that your digital files have a range of
values from 0-255. A value of "255" in the first image might corre-
spond to 20 EV of real light in the first exposure, while it might mean
4 EV of real light in the second exposure. That means the same stored
value can represent up to 65,536x (16 stops, i.e. 2^16) more real light
in the first image than in the second.
The tone mapping algorithm has to merge these values that repre-
sent radically different real light ranges into the same image. It could
do this by mapping the first (bright) image onto a range like 250-255
and the second onto a range of say 1-2. But that would be unsatisfactory:
those values would appear only about 250x brighter rather than 65,536x
brighter, and lots of tonal information would have to be discarded (the
256 levels of each source image would have to be squeezed into just a
few output values).
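To see why the trade-off is unavoidable, here is a toy calculation
(my own numbers, following the EV example above) showing what a
strictly linear mapping of real light onto 0-255 would do:

def ev_to_8bit(ev, ev_max=20):
    # Linear in light: each EV doubles the radiance; pin 20 EV to 255.
    return round(255 * 2 ** (ev - ev_max))

print([ev_to_8bit(ev) for ev in range(16, 21)])  # sign (16-20 EV): 16..255
print([ev_to_8bit(ev) for ev in range(1, 5)])    # shadows (1-4 EV): all 0

The brightest four stops get nearly the whole 0-255 range while all
the shadow detail collapses to a single value; any purely global
mapping faces some version of this squeeze.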
Instead, tone mapping algorithms typically combine the values
into one image (so that 255 might represent 20 EV in one part and 4
EV in another part) and then use perceptual tricks to suggest great
contrast between the values. For example, to make a building pho-
tographed against a bright sky appear darker than the sky, the algo-
rithm might brighten the sky immediately surrounding the building,
creating local contrast. Those tricks are what make the final file have
such a strange appearance.
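A toy version of that local trick (my own illustration, not any
particular tool’s algorithm) splits the image into a blurred "base"
layer and a "detail" layer, then compresses only the base:

import numpy as np
from scipy.ndimage import gaussian_filter

def local_tone_map(luminance, sigma=30.0, strength=0.5):
    log_l = np.log2(np.maximum(luminance, 1e-6))
    base = gaussian_filter(log_l, sigma)  # large-scale illumination
    detail = log_l - base                 # local contrast, kept intact
    return 2.0 ** (strength * base + detail)

Because the Gaussian blur bleeds across edges, a dark building against
a bright sky ends up ringed by a brightened band: the halo that gives
tone-mapped images their telltale look.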
Personally, I think most images produced with perceptual tone
mapping are pretty ugly. And the required multiple exposures rule
out street photography, wildlife photography, and anything else with
potentially moving subjects.
You might ask why we tone map down to 256 values in my example
when there are file formats that support many more bits per channel.
Such formats do exist, but you will ultimately need to collapse your
image down to approximately this range for printing or display.
(Unless you are using transparency film or you have a high dynamic
range monitor. See http://www.bit-tech.net/hardware/2005/10/03/brightside_hdr_edr/1)
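For completeness, that final collapse to the display range can be as
simple as the classic Reinhard global operator (a published formula;
the scaling choices below are my own):

import numpy as np

def reinhard_to_8bit(lum, key=0.18):
    lum = np.maximum(lum, 0.0)
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # log-average luminance
    scaled = key * lum / log_avg                   # anchor the average at mid-gray
    mapped = scaled / (1.0 + scaled)               # compress highlights toward 1
    return np.clip(255.0 * mapped, 0, 255).astype(np.uint8)

Global operators like this avoid halos, but they also give up the local
contrast that perceptual methods try to preserve, which is exactly the
trade-off this whole dialogue circles around.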