
1. EXPOSURE CONTROLS: Shutters and F-Stops


EXPOSURE

Film exposure is determined by four factors:


(a) light sensitivity of the emulsion or camera sensor setting — the film speed (ISO),
(b) brightness of the subject,
(c) brightness of the image falling on the emulsion,
(d) length of time the image strikes the emulsion.
Whatever (a) and (b) may be, the photographer can control (c) by choice of f-stop, and (d) by choice of shutter
speed.

FILM SPEED
Doubling or halving the film speed setting on a meter or camera system (e.g., 200 to 400, or 100 to 50) produces a
one-stop exposure change. Standard ISO settings include:

25, 32, 40, 50, 64, 80, 100, 125, 160, 200, 250, 320, 400, 500, 640, 800, 1000, 1250, 1600, 2000, 2500, 3200

Note that adjacent ISO settings in the list above are not one full stop apart. For example, the difference between ISO 25 and ISO 50 is a full stop, to a speed twice as fast (25 x 2 = 50); ISOs 32 and 40 in between represent 1/3- and 2/3-stop increases respectively.

F-STOPS

Image brightness is controlled by setting the iris diaphragm in a camera lens to a larger or smaller diameter. The
settings, called f-stops, are marked by a series of f-numbers, which are calculated:

f-number = Lens focal length ÷ diameter of iris opening (as seen from the front of the lens)

The bigger the diameter, the lower the f-number and the more light transmitted.

The diameters are chosen so that each higher f-number setting reduces the amount of light the lens transmits by one
half (and each lower f-number setting doubles the amount of light).

Using the ratio of focal length to iris diameter has a very practical result: All lenses set to the same f-stop transmit
the same amount of light, regardless of their different focal lengths.

The standard f-number series is the same for all lenses:


f/1  f/1.4  f/2  f/2.8  f/4  f/5.6  f/8  f/11  f/16  f/22  f/32  f/45  f/64  f/90  f/128 …

Notice that the numbers in the series double at every other step (f/1, f/2, f/4 … and f/1.4, f/2.8, f/5.6 …). So, if you can remember
any two adjacent f-numbers, you can construct the entire series. You can also construct the series by this method:

Any f-number x 1.4 = next higher f-number. Any f-number x 0.7 = next lower f-number.
Examples: f/4 x 1.4 = f/5.6 and f/4 x 0.7 = f/2.8.
[Note that there is some rounding off in this method to get the standard numbers. For example, f/2.8 x 1.4 (= 3.92) = f/4, and f/8 x 1.4 (= 11.2) = f/11.]
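As a quick illustration (this sketch is mine, not part of the original handout), the multiply-by-1.4 rule is just repeated multiplication by the square root of 2, with rounding producing the marked standard values:

    import math

    def f_stop_series(stops=15):
        """Full-stop f-numbers from f/1: each stop multiplies by sqrt(2)."""
        series = []
        for i in range(stops):
            exact = math.sqrt(2) ** i          # exact geometric series
            series.append(round(exact, 1) if exact < 10 else round(exact))
        return series

    print(f_stop_series())
    # -> [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11, 16, 23, 32, 45, 64, 91, 128]
    # The marked standard values round some of these differently (5.6, 22, 90).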

Lens speed is the lowest f-number, calculated from the wide-open diameter of the iris. It is marked on the lens
along with the focal length, for example: 50 mm, f/2. It is also marked as the first number in the f-stop settings. In
some cases this is a number not in the standard series—such as f/1.8—and so is not a full f-stop difference from the
next higher number, where the standard series begins.
THE APERTURE AS A CONTROLLER OF LIGHT

Changing the size of the aperture, the lens opening through which light enters the camera, can change the
exposure, the amount of light that reaches the film. The shutter speed changes the length of time that light strikes
the film; the aperture changes the brightness of the light. The aperture works like the pupil of an eye; it can be
enlarged or contracted to admit more light or less. In a camera this is done with a diaphragm, a ring of thin,
overlapping metal leaves located inside the lens. The leaves are movable: they can be swung out of the way so that
most of the light reaching the surface of the lens passes through. They can be closed so that the aperture becomes
very small and allows little light to pass.

On the lens to the right, the aperture settings or f-stops go from f/2.8 to f/22, with the actual size of the apertures shown in the seven circles. To select an f-stop on this lens, a movable ring on the lens is turned until the desired setting is opposite a white mark. You can set the ring exactly on an f-stop or partway in between. Turning the ring adjusts the diaphragm inside the lens. Here the camera is set at f/8. Each f-stop setting lets in half (or double) the light of the next setting. The effect of decreasing the light by stopping down is shown in the diagrams at left: it takes four circles the size of an f/5.6 aperture to equal the area of an f/2.8 aperture. Notice that the lowest f-stop numeral (2.8) lets in the most light. As the numerals get larger (4, 5.6, 8), the aperture size and light admitted decrease.

The size of an aperture is indicated by its f-number or f-stop. On early cameras, aperture was adjusted by individual
metal stop plates that had holes of different diameters. The term “stop” is still used to refer to the aperture size, and
a lens is said to be “stopped down” when the size of the aperture is decreased.

f/1.4 f/2 f/2.8 f/4 f/5.6 f/8 f/11 f/16 f/22 f/32 f/45 f/64

Exposure doubles/halves each time you change one stop. The largest of these, f/1.4, admits the most light. Each
f-stop after that admits half the light of the previous one. A lens that is set at f/4 admits half as much light as one set
at f/2.8 and only a quarter as much as one set at f/2. (Notice that f-stops have the same half or double relationship
that shutter-speed settings do.) The change in light over the full range of f-stops is large; a lens whose aperture is
stopped down to f/64 admits less than 1/2000 of the light that comes through a lens set at f/1.4.

No lens is built to use the whole range of apertures. A general-purpose lens for a 35mm camera, for example, might
run from f/1.4 to f/22. A camera lens designed for a large view camera might stop down to f/64 but open up only to
f/5.6. The widest possible aperture at which a particular lens design will function well may not be a full stop from a
standard setting, so a lens’s f-stops may begin with a setting such as f/1.8, f/4.5, or f/7.7, then proceed in the standard
sequence.

Lenses are often described as fast or slow. These terms refer to how wide the maximum aperture is. A lens that
opens to f/1.4 opens wider and is said to be faster than one that opens only to f/2. (In the same way, the faucet on
the left in the diagram below is said to be running faster than the faucet on the right.)

The term stop is used to refer to a change in exposure, whether the aperture or the shutter speed is changed. To give
one stop more exposure means to double the amount of light reaching the film (either by opening up to the next
larger aperture setting or by doubling the exposure time). To give one stop less exposure means to cut the light
reaching the film in half (stopping down to the next smaller aperture setting or halving the exposure time).

The flow of light into a camera can be controlled by aperture size, just as the flow of water into a glass can be controlled by the faucet setting. Here a faucet running wide open for 2 seconds fills a glass. If it runs half shut, it fills only half a glass in that time period. The same is true for light. In a given length of time, an aperture opened to any f-stop admits half as much light as one opened to the next larger f-stop. Thus the aperture setting controls the rate at which light enters the camera, in contrast to the shutter setting, which controls how long the light flow continues.

The quantity of light that reaches a piece of film inside a camera depends on a combination of aperture size (f-stop) and length of exposure (shutter speed). In the same way, the water that flows from a faucet depends on how wide the valve is open and how long the water flows. If a 2-sec. flow from a wide-open faucet fills a glass, then the same glass will be filled in 4 sec. from a half-open faucet. If the correct exposure for a scene is 1/30 second at f/8, you get the same total amount of exposure with twice the length of time (next slower shutter speed) and half the amount of light (next smaller aperture): 1/15 sec. at f/11.

SHUTTERS
There are two kinds of shutters:
(1) those built into a lens—called leaf shutters
(2) those located just in front of the film in the camera—called focal plane shutters.

Most exposures need to be only a fraction of a second, so shutter opening and closing is a matter of setting the
shutter mechanism to a particular speed. Many auto-exposure cameras with fully electronic shutter control can
select any fraction of a second required by the built-in metering system. But manual speed selection is often
necessary (as with view cameras) or preferable (for example, to get a slow speed for desired blur, or to use a small
f-stop for greater depth of field).

For uniformity, there is a standard series of speed settings, marked by a “T” or “B (bulb)” (for time exposures) and
the numbers 1, 2, 4, 8…8000. The number 1 stands for one full second of exposure; all the other numbers stand for
fractions of a second.

Each faster speed is half as long as the preceding speed. That is, the settings make exposure changes of 1/2X or 2X,
just as f-stop settings do.

The standard shutter speed series is:

T or B, 16s, 8s, 4s, 2s, 1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000, 1/2000, 1/4000, 1/8000 second

To get the next faster speed, multiply the bottom number of the speed fraction by 2; to get the next slower speed,
divide by 2. For full seconds, multiply by 2 for the next longer exposure.

Speeds from 1/15 through 1/125 are very accurate in virtually all shutters; the slower and especially the fastest
speeds may be somewhat inaccurate.
SHUTTER SPEEDS TO STOP MOVING SUBJECTS
1/125 Slow Action: People walking, children playing, wriggling babies.
1/250 Moderately Fast Action: Joggers, swimmers, medium speed bicycles, distant running horses, parades, running children,
sailboats, medium baseball/football action, skaters, slow moving vehicles, golf (putting).
1/500 Fast Action: Fast runners, fast baseball/football action, basketball, divers, cars in traffic, fast bicycles, running horses at medium
distance.
1/1000 Very Fast Action: Race cars, motorcycles, low-flying aircraft, speed boats, track and field events, tennis, skiing, golf (driving).

USING SHUTTER AND APERTURE TOGETHER

Both shutter speed and aperture affect the amount of light that enters the camera. To get a correctly exposed
negative (one that is neither too light nor too dark), you have to find a combination of shutter speed and aperture
that will let in the right amount of light for a particular scene and film. (Chapter 5, Exposure, tells in detail how to
do this.) But shutter speed and aperture also affect sharpness, and in this respect they act quite differently: shutter
speed affects the sharpness of moving objects; aperture affects the depth of field, the sharpness from near to far.

Once you know any single combination of shutter speed and aperture that will let in the right amount of light, you
can change one setting as long as you change the other in the opposite way. Since each aperture setting lets in twice
as much light as the next smaller size, and each shutter speed lets in half as much light as the next slower speed, you
can use a larger aperture if you use a faster shutter speed, or you can use a smaller aperture if you use a slower
shutter speed. The same amount of light will be let in by an f/22 aperture at a 1-second shutter speed, f/16 at 1/2
second, f/11 at 1/4 second, and so on.
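The trade-off can be expressed in code. The sketch below (function and variable names are mine, not from the text) starts from one correct combination and derives the equivalent ones by doubling the time for each step toward a smaller aperture:

    STOPS = [1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11, 16, 22, 32]

    def equivalent_exposures(f_number, shutter_seconds):
        """From one correct f-stop/shutter pair, list the equivalent pairs."""
        i = STOPS.index(f_number)
        # Each step to a higher f-number halves the light, so the time doubles.
        return [(f, shutter_seconds * 2 ** (j - i)) for j, f in enumerate(STOPS)]

    for f, t in equivalent_exposures(16, 0.5):
        print(f"f/{f} at {t} sec")
    # Includes f/22 at 1.0 sec, f/16 at 0.5 sec, f/11 at 0.25 sec, as in the text.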

The effects of such combinations are shown in the three photographs below. In each, the lens was focused on the
same point, and shutter and aperture settings were balanced to admit the same total amount of light into the camera.
But the three equivalent exposures resulted in three very different photographs.

In the first picture, a small aperture produced considerable depth of field that rendered background details sharply;
but the shutter speed needed to compensate for this tiny aperture had to be so slow that the rapidly moving flock of
pigeons appears only as indistinct ghosts. In the center photograph, the aperture was wider and the shutter speed
faster; the background is less sharp but the pigeons are visible, though still blurred. At far right, a still larger
aperture and faster shutter speed sacrificed almost all background detail, but the birds are now very clear, with only
a few wing tips still blurred.

A small aperture (f/16) produces great depth of field; in this scene even distant trees are sharp. But to admit enough light, a slow shutter speed (1/4 sec) was needed; it was too slow to capture the pigeons in flight.

A medium aperture (f/4) and shutter speed (1/100 sec) sacrifices some background detail to produce recognizable images of the birds. But the exposure is still too long to freeze the motion of the birds’ wings.

A fast shutter speed (1/500 sec) stops the motion of the pigeons so completely that the flapping wings are frozen. But the wide aperture (f/2) needed gives so little depth of field that the background is now out of focus.

2. Determining Exposure

EXPOSURE METERS

Exposure meters are calibrated in terms of an “average subject,” one that reflects 18–20% of the light falling on it.
That is a step V subject brightness, and step V "middle gray" on the gray scale. (The standard gray card reflects 18–
20% of the light and is itself a neutral middle gray.)
The meter should be set to the true speed of the film.

*A Reflected-light meter is aimed at the subject and measures the light bouncing off the subject. With in-camera meters, take meter readings from as close to the subject as possible; if necessary, use a long focal-length (telephoto) lens or the longest zoom-lens setting to give the smallest reading area.

*An Incident meter is faced away from the subject and toward the camera to measure the general light falling on
the subject from the direction of the camera.

METERING METHODS with REFLECTED LIGHT meters

There are three methods of taking reflected-light meter readings to determine exposure—overall average; key tone;
and brightness range.

1. OVERALL AVERAGE Point the meter at the subject from a distance that takes in the entire scene; usually
this is done from the camera position. (Outdoors, point the meter down a bit to reduce the amount of sky in the
reading.) If the subject is fairly average (a mixture of light and dark colors/areas, in roughly equal proportions) and
is in full sunlight, use the indicated exposure. This is adequate with the great majority of subjects, and is equivalent
to the f/16 rule of estimating exposure, which says, to get an acceptable exposure when shooting in full sunlight set
the aperture to f/16 and set the shutter speed to 1/ISO of the film (rounded off), i.e. with 400 ISO film use 1/500
sec., 200 ISO film use 1/250 sec., and with 100 ISO film use 1/125 sec.

2. KEY TONE Aim the meter at the most important subject area/tone, from a distance that takes in only that
area. The meter will indicate an exposure that will print as step V middle gray. To get it to print lighter or darker
than step V, expose to place it wherever you wish on the gray scale. To make it print lighter (move higher on the
scale), give 1 stop more exposure for each step up the scale. To make it print darker (move lower on the scale),
give 1 stop less exposure for each step down the scale.

3. BRIGHTNESS RANGE Take a meter reading of the darkest area in which you want to record and print in
full detail. Take a second reading of the lightest area in which detail (a sense of texture) is important (called the
"diffuse highlight area"). Count the number of f-stops between them (at the same shutter speed). This is the subject
brightness range.

Example Meter readings are: Dark subject area, f/2 1/60. Light subject area, f/11 1/60.
Brightness range = 5 stops (counting from f/2: f/2.8, f/4, f/5.6, f/8, f/11).
If you expose at either one of the meter readings, that places the area on step V of the gray scale, and it will print as
a step V gray. The other area will fall on a different step of the scale. It will be as many steps lighter or darker as the
number of f-stops in the brightness range. To get different results, you can figure an exposure that will place either
the dark area or the light area on any step of the gray scale you choose. There are three methods for doing this, each
with a different application.
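Because transmitted light varies with the square of the f-number, the brightness range can also be computed directly rather than counted. A small sketch (mine, not the handout's):

    import math

    def brightness_range_stops(dark_f, light_f):
        """Stops between two readings taken at the same shutter speed."""
        return round(2 * math.log2(light_f / dark_f))

    print(brightness_range_stops(2, 11))   # -> 5, matching the example above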

3a. BRIGHTNESS RANGE METHOD 1 — DETERMINING EXPOSURE FOR STEP PLACEMENT


To place the dark area, working with the dark area meter reading: reduce exposure 1 stop for each step you want
to move it down the scale. Example: Dark area reading, f/2 1/60 (step V exposure). To place the area on step III
(two steps darker), reduce exposure two stops to f/4.
To place the light area, working with its meter reading: increase exposure 1 stop for each step you want to move
it up the scale. Example: Light area reading, f/11 1/60 (step V exposure). To place this area on step VIII (three
steps lighter), increase exposure three stops to f/4.
DO EITHER OF THE ABOVE, NOT BOTH. You can place only one area. The other area will then fall farther up
the gray scale (if it is lighter than the placed area) or farther down the scale (if it is darker than the placed area). The
number of gray scale steps difference between the two tones in the print will be the same as the number of f-stops
difference in the subject brightness range.

Examples: Using the above meter readings, the brightness range is 5 stops. If the dark area is placed on
step II, the light area will fall 5 steps up the scale, on step VII. OR: If the light area is placed on step VIII,
the dark area will fall 5 steps down the scale, on step III.

3b. BRIGHTNESS RANGE METHOD 2 — DETERMINING EXPOSURE FROM STEP PLACEMENT

Method 1 (above) lets you determine how much to increase or decrease the exposure from the meter reading to
place a metered area on a particular step of the gray scale. Brightness range Method 2 determines the exposure by
using a diagram of the gray scale as a kind of f-stop ruler.

1. Take meter readings of the most important detailed light area and most important detailed dark area. Decide
which area you want to place.
2. To PLACE the area, write the f# of its meter reading over the gray scale step you want it on.
3. Write the next higher f# above the next higher gray scale step. Continue the f# sequence up the scale to
the end.
4. Return to the scale step where you placed the meter reading. Write the next lower f# above the next lower
gray scale step. Continue the sequence down to the end.
5. To find where the other area FALLS, look for the f# of its meter reading. It will be over the gray scale step
where the area falls.
6. To determine ACTUAL CAMERA EXPOSURE, look at the f# over step V. This is the exposure to give, at
the same shutter speed as in the meter reading you placed in (2) above. Of course, you can change to any other f#
and shutter speed combination that will give the same exposure.
Example 1: Placing Dark Area Reading
Meter readings are f/2.8 1/60 and f/16 1/60, and you want to place the dark area on step III. Write the dark-area f#
above III and fill in the rest of the scale ruler as described above (and shown below).

In the ruler below, the dark area is placed on step III (f/2.8), the camera EXPOSURE is read over step V, and the light area falls on step VIII.

f#      1    1.4   2    2.8   4    5.6   8    11   16   22   32
Step #  0    I     II   III   IV   V     VI   VII  VIII IX   X

So, an exposure of f/5.6 1/60 (or equivalent) will place the dark area on III, and the light area will fall on VIII.

Example 2: Placing Light Area Reading Meter readings are f/5.6 1/125 and f/22 1/125; you want to place the
light area on step VIII. Write the light area f# above VIII and fill in the rest of the scale ruler as above/shown
below.

In the ruler below, the light area is placed on step VIII (f/22), the camera EXPOSURE is read over step V, and the dark area falls on step IV.

f#      1.4   2    2.8   4    5.6   8    11   16   22   32   45
Step #  0     I    II    III  IV    V    VI   VII  VIII IX   X

So, an exposure of f/8 1/125 (or equivalent) will place the light area on step VIII, and the dark area will fall on IV.
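The f-stop ruler lends itself to a short program. The sketch below (names and structure are mine; the method is the handout's Method 2) places one reading on a chosen step, fills in the standard sequence, and reads off the camera exposure over step V and the step where the other area falls:

    STEPS = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]
    F_SEQ = [0.7, 1.0, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11, 16, 22, 32, 45, 64]

    def ruler(placed_f, placed_step, other_f):
        """Return (exposure f-number over step V, step where the other area falls)."""
        base = F_SEQ.index(placed_f) - STEPS.index(placed_step)
        scale = {step: F_SEQ[base + k] for k, step in enumerate(STEPS)}
        falls_on = next(s for s, f in scale.items() if f == other_f)
        return scale["V"], falls_on

    # Example 1: readings f/2.8 and f/16, dark area placed on step III.
    print(ruler(2.8, "III", 16))   # -> (5.6, 'VIII')
    # Example 2: readings f/5.6 and f/22, light area placed on step VIII.
    print(ruler(22, "VIII", 5.6))  # -> (8.0, 'IV')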

If either area falls out of the acceptable gray scale range, you must make an appropriate adjustment:

If the detailed light area falls below VIII, develop the film more or print on a higher contrast grade of paper. If it
falls above VIII, develop less or use a lower grade of paper.


If the detailed dark area falls below III (outdoors) or II (studio), add light to the dark area or change the
placement of the light area. Development or paper grade changes cannot compensate for underexposure of dark
areas.

3c. BRIGHTNESS RANGE METHOD 3 — “AFTER-THE-FACT” EVALUATION

When you know the light and dark area meter readings, and the actual exposure given, you can figure out what
steps those two areas were recorded on. Write the exposure f# over step V and fill in the other f#s up and down the
scale, as in method 2, above. Then look for the two meter reading f#s. Each one will be above the gray scale steps
on which its area was recorded.

Note: The brightness range method is the basis of the ZONE SYSTEM. In outline, the Zone System is:
1. Make tests to determine the true film speed and normal development.
2. Measure the subject brightness range (meter the desired detailed dark and light areas).
3. Determine the exposure to put the detailed dark area on step III or (indoors only) step II.
4. If necessary, adjust development to keep the light area printable on step VIII.

10-STEP GRAY SCALE

Step Negative Image Print Image and Example Subject Details

0 No printable density Deepest black the paper can produce. Unlit room seen
through door or window.
I First printable density First visible tone above total black. Shapes, but no texture or
details. When next to a light tone, sensed as total black.
Twilight shadows.
II Traces of printable detail Very dark gray. Sense of space, volume. Visible texture and
details, but only with low- or no-flare exposure (e.g.,
studio lighting).
III Fully printable detail Dark gray; rich dark texture and details. Average dark
materials: black hair, fur, clothes. Darkest tone that
gives details with flare exposure (outdoors, sunlight)
IV Fully printable detail Moderately dark gray. Dark foliage, stone, open shadows in
landscape. Shadow value for Caucasian skin in sunlight.
V Fully printable detail Middle gray; the gray tone meters are calibrated to produce.
Most black and dark skin; gray stone; grass in sunlight.
VI Fully printable detail Moderately light gray. Average Caucasian skin in sunlight;
light stone; shadows on sunlit snow.
VII Fully printable detail Light gray. Very light skin; side-lighted snow.
VIII Lightest printable detail Very light, near-white gray; last visible texture and details.
Textured snow; highlights on Caucasian skin.
IX Dense detail; usually not printable White with no detail; last possible white tone with Step X
 papers; paper-base white with standard papers.
(X) No printable detail (Papers with "whiteners" or "brighteners" add an extra partial
 step, so that IX is a bare trace of very light, almost white
 tone, and X is pure paper-base white.)


When an overall exposure reading was made of this scene, a large expanse of light sky was included in the area metered, so the reading indicated a relatively high level of light. But the figure of the man was much darker than the sky; he did not receive enough exposure and came out very dark.

To meter for a subject against a much lighter background, come close enough so that the meter reads mostly the subject, but not so close that you cast a shadow on the area you are metering.

Having set the correct exposure, return to the original position to make the photograph. The face was more accurately rendered by this method. A camera that automatically sets f-stops or aperture must sometimes be manually overridden, as it was here, if a correct exposure is to be made.

In a landscape or cityscape, so much light can be metered from the sky that the reading produces too little exposure for the land elements in the scene. Here the sky is properly exposed but the buildings are too dark and lack detail.

For a proper exposure for the buildings, light reflected from them should be dominant when the reading is made. This is done by pointing the camera or separate meter slightly down so that the meter’s cells “see” less of the sky and more of the buildings.

Having set the correct exposure by measuring the light reflected from the buildings, tilt the camera up once again, returning to the original composition. This time the buildings are lighter and reveal more detail. The sky is lighter also, but there is no significant detail there at either setting. Light areas, such as the sky, can easily be darkened during printing.


Using a meter built into a camera

1) Set the film speed dial of the camera to the ISO rating of the film you are using.
2) Set the exposure mode of the camera, if you have a choice: (a) aperture-priority automatic, (b) shutter-priority automatic, (c) programmed automatic, or (d) manual.
3) Look at the scene through the camera’s viewfinder and activate the meter.
4a) In aperture-priority automatic operation, select an aperture small enough to give the desired depth of field. The camera will adjust the shutter speed. If your subject is moving or if you are hand holding the camera, check the viewfinder readout to make sure that the shutter speed is fast enough to prevent blur. The wider the aperture you select, the faster the shutter speed that the camera will set. (Remember that a wide aperture is a small number: f/2 is wider than f/4.)
4b) In shutter-priority automatic operation, select a shutter speed fast enough to prevent blur. The camera will adjust the aperture. Check that the aperture is small enough to produce the desired depth of field. The slower the shutter speed you select, the smaller the aperture will be.
4c) In programmed automatic operation, the camera will adjust both shutter speed and aperture.
4d) In manual operation, you select both shutter speed and aperture. You can use the camera’s built-in meter or a hand-held meter to calculate the settings.

Using a hand-held, reflected-light meter

1) Set the meter to the ISO/ASA rating of the film you are using.
2) Point the meter at the subject from the direction of the camera. Activate the meter.
3) Line up the number registered by the meter’s indicator needle with the arrow on the calculator dial (not necessary with meters that automatically provide shutter-speed and f-stop combinations).
4) Choose one of the combinations of shutter speed and f-stop shown on the calculator dial or provided as a digital readout, and set the camera accordingly. Any combination shown by the meter lets in the same amount of light and produces the same exposure.

Using an incident-light meter

1) Set the meter to the ISO/ASA rating of the film you are using.
2) Point the meter away from the subject, in the opposite direction from the camera lens. Activate the meter. You want to measure the amount of light falling on the subject, so make sure that the meter is in the same light as the subject. For example, don’t shade the meter if your subject is sunlit.
3) Line up the number registered by the meter’s indicator needle with the arrow on the calculator dial (not necessary with meters that automatically provide shutter-speed and f-stop combinations).
4) Choose one of the combinations of shutter speed and f-stop shown on the calculator dial or provided as a digital readout, and set the camera accordingly. Any combination shown by the meter lets in the same amount of light and produces the same exposure.


ESTIMATING EXPOSURE (the Sunny 16 Rule)

When your meter is not working or you cannot get close enough to the subject to get accurate readings, for
AVERAGE SUBJECTS use:

Front lighted, bright sun f/16 @ 1/film speed [BASIC RULE]


Bright surroundings (sand, snow) 1 stop less
Side lighted 1 stop more
Back lighted, close-up 2 stops more
Back lighted, with some BG included 1 stop more
Weak, hazy sun (soft shadows) 1 stop more
Cloudy bright (no shadows) 2 stops more
Heavy overcast, or Open shade 3 stops more

BRACKETING: Vary exposure in one-stop intervals for negative films. Vary exposure in half-stop intervals
for slide films.
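The table above translates directly into a small calculator. This sketch is mine (the names and structure are assumptions; the stop adjustments are the handout's); it applies the compensation to the shutter speed, though you could equally open the aperture:

    ADJUSTMENTS = {  # stops MORE exposure than the basic f/16 rule
        "front lighted, bright sun": 0,
        "bright surroundings (sand, snow)": -1,
        "side lighted": 1,
        "back lighted, close-up": 2,
        "back lighted, with background": 1,
        "weak, hazy sun": 1,
        "cloudy bright": 2,
        "heavy overcast or open shade": 3,
    }

    def sunny_16(iso, condition):
        """Return (f-number, shutter seconds) for an average subject."""
        return 16, (1.0 / iso) * 2 ** ADJUSTMENTS[condition]

    f, t = sunny_16(400, "cloudy bright")
    print(f"f/{f} at 1/{round(1 / t)} sec")   # -> f/16 at 1/100 sec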

HARD TO METER SCENES

If there is light enough to see by, there is probably light enough to make a photograph, but the light level may be so low that you may not be able to get a reading from your exposure meter. Try metering a white surface such as a white paper or handkerchief; then give about two stops more exposure than indicated by the meter. If metering is not practical, the chart below lists some starting points for exposures at ISO 400. Since the intensity of light can vary widely, bracket your exposures by making at least three pictures:

(1) using the recommended exposure,
(2) giving one to two stops more exposure, and
(3) giving one to two stops less exposure.

SITUATION                                                  APPROXIMATE EXPOSURE FOR ISO/ASA 400 FILM

Stage scene, sports arena, circus event                    1/60 sec f/2.8
Brightly lighted street at night, lighted store window     1/60 sec f/4
City skyline at night                                      1 sec f/2.8
Skyline just after sunset                                  1/60 sec f/5.6
Candlelit scene                                            1/8 sec f/2.8
Campfire scene, burning building at night                  1/60 sec f/4
Fireworks against dark sky                                 1 sec f/16 (or keep shutter open for more than one display)
Fireworks on ground                                        1/60 sec f/4
Television image (focal-plane shutter must be 1/8 sec or
  slower to prevent dark raster streaks in photographs
  of the screen)                                           1/8 sec f/11
Television image (leaf shutter must be 1/30 sec or slower
  to prevent streaks)                                      1/30 sec f/5.6

One of the exposures should be printable, and the range of exposures will bring out details in different parts of the
scene. If the exposure time is one second or longer, you may find that the film does not respond the same way that
it does in ordinary lighting situations. In theory, a long exposure in very dim light should give the same result as a
short exposure in very bright light. According to the photographic law of reciprocity, light intensity and exposure
time are reciprocal; an increase in one will be balanced by a decrease in the other. But in practice the law does not
hold for very long or very short exposures.

3. EXPOSURE CONTROLS: brightness, contrast, gray scale

SUBJECT BRIGHTNESS AND CONTRAST


Overall contrast:
The various areas of a photographic subject reflect different amounts of light. The difference in brightness from the
darkest area (least amount of light reflected) to the lightest area (greatest amount of light reflected) is called the subject
brightness range, or the overall contrast.

Local contrast:
The difference between adjacent areas within the brightness range is called local brightness or local contrast.
Brightness measurements are made with a reflected-light exposure meter, which is pointed at the subject.

Normal contrast, high contrast, low contrast:


Overall and local brightness ranges are measured in the number of f-stops between two selected areas.

The average or "normal-contrast" subject in full sunlight has an overall brightness range of about 7-1/2 f-stops.

Subjects with more than a 7-1/2–stop range are considered to be "contrasty" or high-contrast;
subjects with less than a 7-1/2–stop range are considered to be "flat" or low-contrast.

GRAY SCALE
In order to visualize how a subject can be printed in B&W, it is useful to divide the range of tones from black to
white into a scale of 10 steps, numbered 0 to IX.
In a normal-contrast (Grade 2) print paper, step 0 is the deepest black the paper can produce, and step IX is pure
paper white.
The gray scale can be divided into many more, smaller, steps from black to white, and of course papers can print a
continuous range of grays, with no step divisions. But 10 steps are most easily visualized and easily related to film/print
factors.

GRAY SCALE AND SUBJECT BRIGHTNESS


Each 1-stop difference in subject brightness is equal to 1 step on the gray scale (see page 2). So, if two areas are 3
f-stops different in brightness, with normal exposure and development their print tones will be 3 gray-scale steps
different.
The B&W photographic problem is: (1) to determine the brightness range of those parts of the subject you want to
record, and (2) to determine the exposure and development that will record that range in a way that you can print richly
and expressively.

Reading Histograms

A histogram is a graphical representation of the tonal distribution in a digital image. It displays in graph form the
number of pixels for each tonal value. On digital cameras, they can be used as an aid to show the distribution of
tones captured, and whether image detail has been lost to blown-out highlights or blacked-out shadows. It’s a
simple graph that displays where all of the brightness levels contained in the scene are found, from the darkest to
the brightest. These values are arrayed across the bottom of the graph from left (darkest) to right (brightest). The
vertical axis (the height of points on the graph) shows how much of the image is found at any particular
brightness level.

The horizontal axis of the histogram represents tonal variation, with the left side for black and dark areas, the
middle for medium grey and midtones, the right side for light and pure white areas. The vertical axis represents
the number of pixels in a particular tonal area, or the “size” of the area which is captured in each one of these
zones.

Tonal range – the area where most of the pixel values are present – can vary greatly from image to image. There
is no ideal histogram, and caution should be used when using histograms to judge exposure. Histograms are more
useful for determining the tonal characteristics of an image, such as high-key, low-key, high contrast, normal
contrast and for assessing the impact of “clipping”. Clipping occurs when the color value of a pixel has either
been pushed to pure black (Red = 0, Green = 0, Blue = 0) or pure white (Red = 255, Green = 255, Blue = 255).
When a large area of pixels is clipped, it contains no detail.
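As an illustration (not from the original text), a histogram and a clipping check can be computed in a few lines with NumPy; the synthetic frame, sizes, and names here are assumptions for the demo:

    import numpy as np

    def histogram_report(image):
        """image: 2-D uint8 array of brightness values (0-255)."""
        counts, _ = np.histogram(image, bins=256, range=(0, 256))
        black_clip = counts[0] / image.size     # fraction at pure black
        white_clip = counts[255] / image.size   # fraction at pure white
        return counts, black_clip, white_clip

    # A synthetic bright frame, deliberately pushed into the highlights:
    frame = np.clip(np.random.normal(200, 60, (400, 600)), 0, 255).astype(np.uint8)
    counts, black, white = histogram_report(frame)
    print(f"clipped black: {black:.1%}, clipped white: {white:.1%}")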

ADJUSTMENT FOR B&W LONG-EXPOSURE RECIPROCITY EFFECT

Reciprocity effect occurs at exposures of one second or longer (and at exposures faster than 1/10,000 second); you get a
decrease in effective film speed and consequently underexposure. To make up for the decrease in film speed, you must
increase the exposure. The exact increase needed varies with different types of film, but the chart (bottom) gives
approximate amounts. Bracketing is a good idea with very long exposures. Make at least two exposures: one as indicated
by the chart, plus one more giving at least a stop additional exposure. Some meters are designed to calculate exposures of
several minutes’ duration or even longer, but they do not allow for reciprocity effect. The indicated exposure should be
increased according to the chart.

Very long exposure times cause an increase in contrast since the prolonged exposure has more effect on highlights than
on shadow areas. This is not a problem in most photographs, since contrast can be controlled when the negative is
printed. But where contrast is a critical factor, it can also be decreased by decreasing the film development time. The
amount varies with the film; exact information is available from the film manufacturer or see the chart below. Kodak’s
T-Max films require no development change. The reciprocity effect in color film is more complicated since each color
layer responds differently, changing the color balance as well as the overall exposure.

Use the large graph below to determine the proper exposure for conventional films. Then use the table on the bottom left
to determine how to reduce development so that highlight areas do not produce excess density. (Kodak T-Max films do
not require reduced development when adjusted for reciprocity effect; see data at bottom of page.)

Average Adjustments for Most General-Purpose B&W Films

1. Find the meter-indicated or calculated long exposure time across the bottom of the graph (use the insert at the
upper left for times from 1 to 10 seconds).
2. Trace straight upward to the curve on the graph, then
trace to the right (or left) to determine the actual
required (adjusted) exposure time in either minutes or
seconds.
3. Use this table to determine how to reduce
development so that highlight areas do not produce
excess density. (Kodak T-Max films do not require
reduced development when adjusted for reciprocity
effect; see data at bottom of page.)

Adjusted exposure Change normal development


2–35 sec –10%
35–700 sec –20%
700–1200 sec –30%

FOR KODAK T-MAX FILMS, do not use the above graph. Instead, use the following data:
Calculated Exposure Adjusted Exposure
1–7 sec +1/3 stop (open up one stop and use ND 0.2 filter)
8–65 sec Calculated time x 1.5 (50% increase)
65–200+ sec T-Max 100 film: Calculated time x 2 (100% increase)
T-Max 400 film: Calculated time x 3 (150% increase)
T-Max P3200 film: Calculated time x 4 (200% increase)
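The T-Max data above can be encoded as a function. In this sketch (the names are mine), the +1/3-stop row is expressed as a time multiplier of 2^(1/3), although the table itself prescribes opening the aperture and adding an ND 0.2 filter for that range:

    def tmax_adjusted_time(calc_sec, film="T-Max 100"):
        """Adjusted exposure time (seconds) per the T-Max table above."""
        if calc_sec < 1:
            return calc_sec                   # no correction needed below 1 sec
        if calc_sec <= 7:
            return calc_sec * 2 ** (1 / 3)    # +1/3 stop, expressed as time
        if calc_sec <= 65:
            return calc_sec * 1.5             # 50% increase
        multiplier = {"T-Max 100": 2, "T-Max 400": 3, "T-Max P3200": 4}[film]
        return calc_sec * multiplier

    print(tmax_adjusted_time(30))                 # -> 45.0 sec
    print(tmax_adjusted_time(120, "T-Max 400"))   # -> 360 sec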
4. FILM AND SENSOR CHARACTERISTICS

BLACK-AND-WHITE FILMS
Emulsion: a layer of gelatin containing silver-halide crystals. Clear anti-abrasion (antiscratch) coating on top,
clear gelatin sublayer below to adhere emulsion to film base.
Base: strong, clear plastic support for the emulsion. Thicker in 35mm and sheet films than in 120/220 films,
which must curl onto a very thin spool. Has an anti-halation layer, undercoating, or dye to prevent unwanted
exposure from light reflected from back of camera or fog from ”light piping” along film edges. Gray dye is
permanent; other dyes dissolve out during processing, leaving faint blue or purple tinge; neither affects printing
quality of image. Thin base may also have anti-curl outside layer.
Spectral (color) sensitivity: all silver halides are blue- and ultraviolet-sensitive. To get panchromatic
sensitivity, colorless sensitizers are added to make halides also respond to green and red wavelengths.
Orthochromatic or “red blind” emulsions (graphic arts, “lith” films) have green but no red sensitizers added; used
primarily to copy B&W images, stencils, etc. Blue-sensitive emulsions (no sensitizers added) are for technical and
special purposes.

COLOR FILMS
Emulsion: anti-abrasion coating on top of three-layer stack of gelatin with silver halide crystals. Two layers
contain color sensitizers, as in B&W; all three also contain colorless color coupler molecules that form dyes during
processing (except Kodachrome: couplers contained in developer for each layer). Thin yellow-dye separation layer
below top emulsion layer prevents blue wavelengths from reaching lower layers; dye dissolves out in processing.
Thin clear separation layer between lower two emulsion layers. The separation layers prevent image dyes from
migrating, mixing at borders. Sublayer at bottom of stack adheres to film base.
Base: plastic, with anti-halation prevention as in B&W films (Kodachrome: opaque REM-jet [removable jet-
black] coating on back, is eliminated during processing).
Sensitivity: Top layer, blue sensitive (no sensitizers); middle, green sensitive (no red sensitizers);
bottom, red sensitive (no green sensitizers). (Yellow dye layer blocks blue from lower two layers.)

Color Film Characteristics

Negative film. Produces an image that is the opposite in colors and density of the original scene. It is printed, usually onto
paper, to make a positive color print. Color negative film often has “color” in its name (Kodacolor, Fujicolor).

Reversal film. The film exposed in the camera is processed so that the negative image is reversed back to a positive
transparency with the same colors and density of the scene. Positive transparencies can be projected or viewed directly and
can also be printed onto reversal paper to make a positive print.
Color reversal film often has “chrome” in its name (Ektachrome, Fujichrome), and professional photographers often refer to a
color transparency, especially a large-format one, as a chrome. A slide is a transparency mounted in a cardboard holder so it
can be inserted in a projector for viewing.

Professional films. The word “professional” in the name of a color film (Fujichrome 64 Professional Film) means that the
film is designed for professional photographers, who have exacting standards, especially for color balance, and who tend to
buy film in large quantities, use it up quickly, and process it soon after exposure. Kodak manufactures its professional color
films so that they are at optimum color balance when they reach the retailer, where (if the retailer is competent) the film will
be refrigerated. Kodak makes its nonprofessional color films for a typical “amateur” or “consumer,” who would be more
likely to accept some variation in color balance, buy one to two rolls at a time, and keep a roll of film in the camera for days,
weeks, or longer, making exposures intermittently. Thus Kodak nonprofessional color films do not have to be refrigerated
unless room temperatures are high. In fact, they are manufactured to age to their optimum color balance sometime after they
reach the retailer and after room-temperature storage.

Color balance. Daylight films produce the best results when exposed in the relatively bluish light of daylight or electronic
flash. Tungsten-balanced films should be used in the relatively reddish light of incandescent bulbs.

Type S / Type B films. A few films are designed to produce the best results within certain ranges of exposure times, as well
as with specific light sources. Type S films (S for “short exposure”) are balanced for daylight or electronic flash at exposure
times of 1/10 second or shorter. Type B films are balanced for short exposures under the warmer light of tungsten 3200K
lamps.

EXPOSURE
Color films cannot record full color and detail in both dark and highlight areas over the same brightness range (7
stops) as B&W film. The brightness range limit is about 2 stops for full detail with color transparency films, and
about 3 stops for negative films.

FILM SPEED
Light sensitivity; depends primarily on surface area of halide crystals—large crystals catch more light photons,
therefore respond faster. Speed is rated in arithmetic or logarithmic numbers. Manufacturers’ ISO speeds are
established by standard test methods. Any other speed rating (e.g., established by personal test) is an exposure
index (EI). A difference of 2X or 0.5X in arithmetic speed or +3 or –3 in logarithmic speed (written with a °
symbol) indicates a one-stop change in sensitivity.

ISO FILM SPEEDS


(Full-stop speeds fall at every third entry, e.g., 3200, 1600, 800, …; the intervening numbers are 1/3-stop differences)
Arith./Log. Arith./Log. Arith./Log.

3200/36° 200/24° 12/12°


2500/35° 160/23° 10/11°
2000/34° 125/22° 8/10°

1600/33° 100/21° 6/9°


1250/32° 80/20° 5/8°
1000/31° 64/19° 4/7°

800/30° 50/18° 3/6°


650/29° 40/17° 2.5/5°
500/28° 32/16° 2.0/4°

400/27° 25/15° 1.6/3°


320/26° 20/14° 1.2/2°
250/25° 16/13° 1.0/1°
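The arithmetic and logarithmic columns are linked by a fixed formula: the degree value rises by 1 per 1/3 stop, following log speed = 10 x log10(arithmetic speed) + 1. A small sketch (mine) of the conversion:

    import math

    def to_logarithmic(arith_speed):
        """Arithmetic ISO to logarithmic degrees: 10 * log10(S) + 1."""
        return round(10 * math.log10(arith_speed) + 1)

    print(to_logarithmic(100))   # -> 21, i.e., 100/21 deg as in the table
    print(to_logarithmic(400))   # -> 27
    print(to_logarithmic(3200))  # -> 36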

FILM SIZES
Roll Film
Size No.   Width             Format/Image Size               No. of Images   Notes

35 mm      1 3/8" (35 mm)    1" x 1 1/2" (24 x 36 mm)        ---------       In 30 m (100') long rolls
135        same as 35 mm     same as 35 mm                   24 or 36        In cassettes
120        2.5" (62 mm)      1 3/4" x 2 1/4" (4.5 x 6 cm)    16              Rolls 80 cm (32") long
                             2 1/4" x 2 1/4" (6 x 6 cm)      12              with full-length backing
                             2 1/4" x 2 3/4" (6 x 7 cm)      10              paper (cm x 10 = mm)
                             2 1/4" x 3 1/2" (6 x 9 cm)      8
220        same as 120       same as 120, but twice as many exposures        Rolls with paper leader at beginning and end

Sheet Film Sizes: 4" x 5" (100 x 125 mm) and 8" x 10" (200 x 250 mm). Image area is about 3/8" (10 mm) less in each dimension.
Certain key image characteristics are determined by the combination of the physical characteristics of the film and
the exposure and processing the film receives.

Graininess/Granularity – RMS Granularity Value


The pattern of image-forming bits (“grains”) of silver in a processed black-and-white film. The grains of silver
are too small to be seen by the eye, even with an optical magnifier. In modern emulsions actual grain size is classed
as fine, extremely fine, and micro fine. The grains are so very small that so-called fine grain developers are not
necessary for excellent quality. Any visible image-structure pattern, called graininess, is actually the pattern of
clumps of many grains of silver. When measured by laboratory instruments it is called granularity. Overexposure
and/or overdevelopment produce larger clumps of grains and therefore more visible graininess. Film speed is
roughly related to granularity, the size of the grains of silver halide in the emulsion, since larger grains give film a
greater sensitivity to light.
Granularity, or RMS granularity, is a numerical quantification of film grain noise - the random optical texture
of processed photographic film due to the presence of small particles of metallic silver, or dye clouds, developed
from exposed silver halide. Each successive RMS number represents a doubling of the graininess. For example, a
film with an RMS 5 granularity rating is twice as grainy as a film with an RMS 4 rating. RMS granularity ratings of
print films and slide films are not directly comparable. As a rule of thumb, you should multiply a print film's RMS
number by 2.5 to approximate its graininess compared to a slide film's RMS rating.

Resolution/Resolving Power – Lines Per Millimeter (LPMM)


The ability of an emulsion to make clearly distinguishable images of tiny, closely spaced details. It is a basic
emulsion characteristic that is related to grain size. Resolving power ratings are given in lines per millimeter, the
number of alternating dark lines and light spaces, all the same width, in a test pattern that can be distinguished
when examining the film with a magnifier. The test pattern is on glass and is contact-exposed on the emulsion.
When the test pattern is black lines and white spaces, with a contrast of about 1000:1 (about 10 f-stops difference in
brightness), resolving power is very high. However, a more meaningful rating for purposes of making typical
photographic images is given by a low-contrast (1.6:1, or 1/2-stop) test pattern. Both high- and low-contrast
resolving power ratings are given in film data sheets. They are useful in comparing films to make a choice in terms
of a particular kind of subject to be photographed. However, camera images are made by lenses, which have their
own resolving power characteristics. Even a lens of the highest optical quality will produce a test pattern image
with somewhat less resolving power on the emulsion than contact exposure. And, any overexposure and/or
overdevelopment will also reduce resolving power.

Sharpness
The visual impression of clearly defined edges of details. It is affected by the combination of film–camera
lens–enlarger lens resolving power, focusing, camera steadiness, exposure, development, and contrast. The same
image with a variety of details seems sharper in a more contrasty print. Psychological associations also affect the
impression of sharpness. Images of spiky details seem sharper than rounded contours or areas of continuous tone,
although they have the same measured degree of sharpness. The objective measurement of edge sharpness in an
image is called acutance.

Digital Images

Raster images have an absolute resolution defined by the total number of pixels in the image. The points
can be distributed a variety of ways to accommodate various display requirements, but the total number of
pixels is fixed by the camera or scanner’s hardware specifications and the user’s settings. The optical
resolution of a camera is usually described in megapixels – the total number of pixels generated by the
camera sensor in millions. A camera with a 6 megapixel (Mp) sensor creates an image with a maximum of
6 million individual dots (pixels). In a typical small format SLR, such an image has 3000 pixels on the long
edge (columns), and 2000 pixels on the short edge (rows): 2000 x 3000 = 6,000,000 or 6 million.

Print resolution describes the way an image’s pixels are arrayed for output or display. If a six megapixel
image is 3000 pixels on the long edge, you can calculate the number of pixels appearing in each unit of
measure for a given length. If you distribute 3000 pixels along 10 inches of print, there will be 300 pixels
(dots) in each inch. Thus, your print resolution is 300 dots-per-inch, or dpi. If you decide to make a print 20
inches long, then your 3000 pixels will extend across 20 inches of print: 3000 / 20 = 150 pixels per inch
(ppi). Along 5 inches: 3000 / 5 = 600 ppi (or dpi). In this context, pixels-per-inch (ppi) and dots-per-inch
(dpi) can be used interchangeably.
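The arithmetic above fits in a few lines. A sketch (the function names are mine):

    def megapixels(cols, rows):
        """Total pixels in millions."""
        return cols * rows / 1_000_000

    def print_ppi(pixels_on_edge, print_inches):
        """Pixels per inch when an edge is spread over a print length."""
        return pixels_on_edge / print_inches

    print(megapixels(3000, 2000))   # -> 6.0, the 6 Mp sensor in the example
    print(print_ppi(3000, 10))      # -> 300.0 ppi
    print(print_ppi(3000, 20))      # -> 150.0 ppi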

Interpolated resolution happens when pixels are added or subtracted to alter an image’s absolute
resolution. This is most commonly done to reduce image size to accommodate a specific display size that is
smaller than the camera or scanner’s hardware (optical) resolution. In some cases, pixels are added to
increase resolution for display sizes that exceed hardware resolution. This is done through software that
examines each pixel’s color values, and creates new pixels with color values based on a mathematical
average of neighboring pixels. Using this method, the software attempts to “guess” at what color values the
new pixels would have had if the camera’s hardware had been capable of the target resolution.

There are a variety of methods for employing this technique in practice, using the built-in features of
various imaging programs (e.g., Photoshop) or special add-on programs (plug-ins) that use more
sophisticated methods to achieve better results. In all cases, interpolated resolution does not produce results
equal to hardware designed to achieve higher resolution. In some cases the difference in quality is not
perceivable in the finished product, and in some cases it’s very obvious.
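A toy example (mine, and much simpler than real bicubic resamplers) of the averaging idea: upsampling a row of pixel values by linear interpolation, where each new pixel is a weighted average of its two original neighbors:

    import numpy as np

    def upsample_linear(row, factor):
        """Linearly interpolate a 1-D array of pixel values to factor x its length."""
        old_x = np.arange(len(row))
        new_x = np.linspace(0, len(row) - 1, len(row) * factor)
        return np.interp(new_x, old_x, row)

    row = np.array([0.0, 100.0, 200.0])
    print(upsample_linear(row, 2))
    # -> [  0.  40.  80. 120. 160. 200.]  (new values fall between neighbors)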

Photographic digital images are constructed as an array of individual points called a bitmap. Each point –
or pixel (short for picture element) – is described by three binary numbers that define the point’s row,
column, and color value. Bitmap images are also described as raster images, and can be stored in a variety
of file types. Some common raster file types are listed below.

PSD – Photoshop Document. Adobe Photoshop's native format stores images with support for most
imaging options available in Photoshop. These include layers with masks, color spaces, ICC profiles,
transparency, text, alpha channels and spot colors, clipping paths, and duotone settings. This contrasts with
many other file formats (e.g. .EPS or .GIF) that restrict content to provide streamlined, predictable
functionality.

TIFF – Tagged Image File Format. Created by Aldus Corporation, later acquired and currently owned by
Adobe Systems. TIFF is a flexible, adaptable file format for handling images and data within a single file.

JPEG – Joint Photographic Experts Group. JPEG/Exif is the most common image format used by digital
cameras and other photographic image capture devices. It’s a lossy compressed format where the degree of
compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG
typically achieves 10:1 compression with little perceptible loss in image quality.

NEF – Nikon Electronic Format. NEF is Nikon's RAW file format. RAW image files, sometimes referred
to as digital negatives, contain all the image information captured by the camera's sensor, along with the
image's metadata (the camera's identification and its settings, the lens used and other information). NEF
files are written to the memory card in either an uncompressed or "lossless" compressed form.

CRW – Canon RAW. Canon uses two different RAW formats, and some camera models produce CR2
instead of CRW files. CR2 files use a TIFF-based format.

Targa (TGA) – Truevision Advanced Raster Graphics Adapter. The native format of Truevision Inc.'s
(now AVID) TARGA and VISTA products, which were the first graphic cards for IBM-compatible PCs to
support highcolor/truecolor display. TGA has become the common format for storing textures and
screenshots due to its ease of implementation and lack of encumbering patents. Supports 24 bit color and
one alpha channel.

BMP – Windows bitmap, sometimes called bitmap or DIB file format (for device-independent bitmap), is
an image file format used primarily to store bitmap images used in graphical user interfaces, especially on
Microsoft Windows and OS/2 operating systems. The simplicity of the BMP file format, its widespread
familiarity, as well as the fact that this format is relatively well documented and free of patents, makes it a
very common format.

PNG – Portable Network Graphic. PNG was created to improve upon and replace GIF (Graphics
Interchange Format) as an image-file format not requiring a patent license. It employs lossless data
compression, 24 bit RGB color, grayscale, channel layers, and transparency. PNG was designed for
transferring images on the Internet, not professional graphics, and so does not support other color spaces
(such as CMYK).
GIF – Compuserve Graphics Interchange Format. Introduced by CompuServe in 1987 and has since come
into widespread use on the World Wide Web due to its wide support and portability. Supports up to 8 bits
of color per pixel, allowing a single image to reference a palette of up to 256 distinct colors. Also supports
animations and allows a separate palette of 256 colors for each frame. The color limitation makes the GIF
format unsuitable for reproducing color photographs and other images with continuous color, but it is well-
suited for simpler images such as graphics or logos with solid areas of color.
DNG – Digital Negative format. A royalty free (but patent encumbered) raw image format designed by
Adobe Systems ("royalty free" refers to the format itself, not the rights associated with images stored in
this format). According to Adobe, Digital Negative was a response to demand for a unifying, non-
proprietary camera raw file format. Digital Negative is based on the TIFF/EP format, and mandates use of
metadata. Hasselblad, Leica, Casio, Ricoh, and Samsung have introduced cameras that provide direct DNG
support.

Camera Raw formats

Other than standard image types such as JPEG and TIFF, most digital SLRs also allow images to be
captured in a proprietary raw image format. These files types vary from manufacturer to manufacturer. In a
small number of cases, manufacturers will use an independent standard for their raw format, like Adobe’s
Digital Negative (DNG) format*. Images captured in raw format retain the maximum amount of image
data the camera is capable of creating, especially in the area of bit depth. Camera raw formats usually
support bit depth levels up to the limit of the camera’s hardware, sometimes in amounts not supported by
JPEG and TIFF files.

* Pentax and Samsung both make cameras that directly support DNG format.

Color data stored in camera raw format has not been interpreted through a fixed white point, and can be
thought of as raw numbers taken from the camera sensor. These numbers don’t become color values until
the file is opened in an image editing program, so their color values can be interpreted differently each time
the image is opened without affecting the stored values of the raw sensor data. As a result, settings like
white point (light source color temperature) and sharpening can be changed with very fine control when the
image is opened.

Shooting in raw format can obviate the need for light balancing filters to correct discrepancies between the
color temperature of a light source and the camera white balance (light source) setting. Other file formats
(like JPEG or TIFF) translate the sensor values and map the data to specific fixed colors based on the
camera light source setting. As a result, an image shot at a light source setting different than the actual light
source will create a pronounced and predictable color shift that is very difficult to correct.

Raw sensor data is not mapped to specific color values in the file. When the file is opened in an image
editing program (like Photoshop), a light source setting is assigned. The values for each pixel are translated
to color values by interpreting them through the characteristics of the assigned light source (color
temperature). In this way, an image shot with an incorrect light source setting can be corrected with results
identical to having shot it at the correct setting.
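
To make this concrete, here is a minimal Python sketch of the idea (not any camera maker's actual
processing pipeline; the sensor values and white balance multipliers are invented for illustration). The
stored raw numbers never change; only the interpretation chosen at open time changes:

# Minimal sketch: raw sensor values stay fixed; a white balance is
# applied only when the file is opened and interpreted.
RAW_SENSOR_VALUES = [812, 1450, 990]   # hypothetical R, G, B photosite readings

# Hypothetical per-channel multipliers for two light source settings.
WB_MULTIPLIERS = {
    "daylight": (2.0, 1.0, 1.5),       # boost R and B relative to G
    "tungsten": (1.3, 1.0, 2.4),       # boost B strongly to offset reddish light
}

def interpret(raw_rgb, light_source):
    """Map raw values to display color values for the chosen light source.
    The stored raw numbers are never modified, which is why the setting
    can be revised every time the file is opened."""
    mults = WB_MULTIPLIERS[light_source]
    return tuple(round(v * m) for v, m in zip(raw_rgb, mults))

print(interpret(RAW_SENSOR_VALUES, "daylight"))   # one rendering
print(interpret(RAW_SENSOR_VALUES, "tungsten"))   # same data, different rendering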

Following is a list of various cameras and software with their associated raw file extensions:

Adobe - .dng
Canon - .crw .cr2
Casio - .bay
Epson - .erf
Fuji - .raf
Hasselblad - .3fr
Imacon - .fff
Kodak - .dcs .dcr .drf .k25 .kdc
Leaf - .mos
Leica - .raw .rwl
Logitech - .pxn
Mamiya - .mef
Minolta - .mrw
Nikon - .nef .nrw
Olympus - .orf
Panasonic - .raw .rw2
Pentax - .ptx .pef
Phase One - .cap .iiq .eip
Rawzor - .rwz
Red - .r3d
Sigma - .x3f
Sony - .arw .srf .sr2

Digital images created by drawing programs (like Illustrator, Freehand, or CorelDraw) are typically stored
in vector formats. Vector formats are not usually useful for working with photographs, since they don’t
readily include the fine color detail required for describing subtle continuous tone in individual points in
the image. Vector formats (fonts, eps files) primarily describe the elements of an image as lines, points,
curves, and shapes. The images can be scaled without aliasing (the “staircase” effect) by adjusting the
numerical values of the equation that describes each shape.

Bit Depth

Color depth or bit depth describes the number of binary digits (bits) used to represent the color of a single
pixel in a raster image. This concept is also known as bits per pixel (bpp), particularly when specified along
with the number of bits used. Higher color depth gives a broader range of distinct colors.

Color depth is only one aspect of color representation. It describes how finely levels of color can be
distinguished. A separate aspect, gamut, describes how broad a range of colors can be expressed - which
colors a device or color space can display at all. The RGB color model cannot express many visible colors,
notably highly saturated colors such as pure spectral yellow. The issue of color representation is not simply
"sufficient color depth" but also "broad enough gamut".

An image’s bit depth is described by the number of binary digits used to define a pixel’s individual red,
green, and blue color values. The three values are sometimes combined to describe the combined color
depth of all three primaries, or left separate for each color. In photographic images, the value for each
individual primary color is usually 8 bits or greater, or 24 bits or greater when describing the total.

Red    8 bits
Green  8 bits
Blue   8 bits

Total  24 bits

A 24 bit binary number can have a decimal value between zero and 16,777,215, so 24 bit color images (8
bits per channel) can represent up to 16,777,216 (nearly 16.8 million) distinct colors for each
pixel. Images with larger bit depths can display a larger number of simultaneous colors:

Red    12 bits
Green  12 bits
Blue   12 bits

Total  36 bits

A 36 bit binary number can have a decimal value between zero and 68,719,476,735, so nearly 68.7 billion
colors can be represented in 36 bit images. Some imaging devices and cameras are capable of even greater
bit depths:

Red    14 bits
Green  14 bits
Blue   14 bits

Total  42 bits = zero to 4,398,046,511,103 (over four trillion)

Red    16 bits
Green  16 bits
Blue   16 bits

Total  48 bits = zero to 281,474,976,710,655 (over 281 trillion)

These bit depths are sometimes described as 14 or 16 bits per channel (one channel for each of the three
primary colors). Sometimes they’re described as 42 or 48 bit color, where the color depths of each
individual color are combined to make one long binary number.
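
The counts above follow directly from powers of two. A short Python sketch reproducing them:

# Sketch: distinct colors representable at each bit depth discussed above.
for bits_per_channel in (8, 12, 14, 16):
    total_bits = bits_per_channel * 3          # R + G + B combined
    colors = 2 ** total_bits                   # 2^n distinct binary values
    print(f"{bits_per_channel} bits/channel = {total_bits} bit color: "
          f"{colors:,} colors")
# 8 bits/channel = 24 bit color: 16,777,216 colors
# 12 bits/channel = 36 bit color: 68,719,476,736 colors, and so on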

Increased bit depth creates a proportional increase in image size (and file size), but does not affect an
image’s pixel resolution. For a given number of pixels, uncompressed image size doubles as the
number of bits per channel doubles: images in 16 bit per channel (primary color) format are twice as large
as 8 bit per channel images. For a 6 megapixel image of 2000 x 3000 pixels, the 8 bit per channel version
occupies 2000 x 3000 pixels x 3 bytes per pixel = 18,000,000 bytes (roughly 17.2 megabytes), while the
16 bit per channel version occupies twice that (roughly 34.3 megabytes). Both versions remain 2000 x 3000
pixels, so their resolution is the same.
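
A small Python sketch of the size arithmetic above (sizes reported in binary megabytes of
1,048,576 bytes):

# Sketch: uncompressed image size for the 2000 x 3000 pixel example above.
def image_size_bytes(width, height, bits_per_channel, channels=3):
    return width * height * channels * bits_per_channel // 8

for bpc in (8, 16):
    size = image_size_bytes(2000, 3000, bpc)
    print(f"{bpc} bits/channel: {size:,} bytes ({size / 2**20:.1f} megabytes)")
# 8 bits/channel: 18,000,000 bytes (17.2 megabytes)
# 16 bits/channel: 36,000,000 bytes (34.3 megabytes)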

Bit depth is one of several factors that affect overall color representation; a related but distinct factor is color gamut.

In color reproduction, color gamut is a certain complete subset of colors. The most common usage refers to
the subset of colors which can be accurately represented in a given color space or by a certain output
device. Another sense, less frequently used but not less correct, refers to the complete set of colors found in
an image. Digitizing a photograph, converting a digitized image to a different color space, or outputting it
to printer generally alters its gamut, since some of the colors in the original are typically lost in the process.

The gamut of a device or process is that portion of the color space that can be represented, or reproduced.
When certain colors cannot be displayed within a particular color model, those colors are said to be out of
gamut. For example, pure red, which is contained in the RGB color model gamut, is out of gamut in the
CMYK model.

Image Sensors

Digital images are created by converting light to voltages that are then interpreted as binary numbers.
Image sensors (camera or scanner) are arrayed as a very fine grid of light sensitive points typically called
photosites. Each of these points will ultimately correspond to a pixel in the stored image.

When light strikes a photosite on the sensor, an electrical charge accumulates at the site. The camera or
scanner then reads that charge, and converts the analogue voltage level to a binary integer through a
process called analogue-to-digital (A/D) conversion. The resulting value will define that pixel’s color in the
final image.

A digital camera sensor is made up of three basic layers.



1. Substrate - This is the silicon material which measures the light intensity. The sensor is not actually flat,
but has tiny cavities, like wells, that trap the incoming light and allow it to be measured. Each of
these wells is a photosite.

2. Bayer filter - This is a color filter that is bonded to the substrate to allow color to be recorded. The sensor
on its own can only measure the number of light photons it collects. It has no way of determining the color
of those photons. The Bayer filter - also called the Color Filter Array (CFA) - acts as a screen, allowing
only light photons of a certain color onto each photosite on the sensor. The Bayer filter is made up of alternating
rows of Red/Green and Blue/Green filters.

When a photosite measures the number of light photons it has captured, it knows that every photon is of a
certain color. For example, if a photosite that has a red filter above it has captured 5000 photons, it knows
that they are all photons of red light, and it can therefore begin to calculate the brightness of red light at that
point. The Bayer mask has twice as many green filters as red or blue, because the human eye is more
sensitive to green light, and has a greater resolving power in that range.

The camera treats each 2x2 set of photosites as a single unit. This provides one red, one blue, and two green
values. The camera then records the actual color based on the photon levels in each of these four photosites
as a single pixel in the stored image (a simplified sketch of this grouping follows the list of layers below).

3. Microlens – A tiny lens sits above the Bayer filter and helps each photosite capture as much light as
possible. The photosites do not sit precisely next to each other. There is a tiny gap between them. Any light
falling into this gap is wasted, and will not be used for the exposure. The microlens eliminates this waste
by directing light falling between photosites into the nearest one available.
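
A deliberately simplified Python sketch of the 2x2 grouping described above (real cameras use more
sophisticated demosaicing that also interpolates neighboring photosites; the photon counts here are
hypothetical):

# Simplified sketch of one 2x2 Bayer block, following the text's model.
bayer_block = {
    "R":  5000,     # photosite under the red filter
    "G1": 7000,     # first photosite under a green filter
    "G2": 7200,     # second photosite under a green filter
    "B":  3000,     # photosite under the blue filter
}

def block_to_pixel(block):
    """Combine one 2x2 Bayer block into a single R, G, B pixel value.
    The two green readings are averaged, reflecting the doubled green count."""
    return (block["R"], (block["G1"] + block["G2"]) / 2, block["B"])

print(block_to_pixel(bayer_block))   # -> (5000, 7100.0, 3000)
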
The overall quality of a digital camera image is a function of many factors including the depth of the
camera’s analogue-to-digital conversion (bit depth), dynamic range, signal noise, photosite size and pitch,
and the degree of file encoding artifacts.

Bit Depth. A camera’s analogue-to-digital conversion ability directly affects the maximum color bit depth.
A camera capable of 8-bit A/D conversion will represent each primary color as an 8 bit binary number, or 8
bits per channel – 24 bit color in total, roughly 16.8 million total possible colors. A camera with 14 bit A/D
conversion uses a 14 bit binary number for each color channel, totaling 42 bits, or about 4.4 trillion possible
colors. This greatly affects the ability to capture subtle transitions of tone along with shadow and highlight
detail at the extreme ends of exposure.

Dynamic Range is the maximum range of light levels that an image sensor can capture. Dynamic range
measures the difference between the lightest and darkest tones that can be captured, while bit depth
measures the number of intermediate tones between lightest and darkest that can be rendered. Greater bit
depths and larger photosites on the sensor both contribute to improved dynamic range.

Signal noise is sometimes referred to as “digital grain”, and usually appears as random specks of color in
an otherwise smooth area of tone. Signal noise varies among camera models and increases with ISO
setting, exposure time, and sensor temperature.

A small number of electrons are generated in the sensor’s photosites, even when no light is present. When
the sensor signal is greatly amplified (such as at high ISO), these randomly generated electrons (noise) are
amplified along with the rest of the image (signal), and become more visible in the final image. Signal
noise is usually more apparent in darker areas of the image, where the ratio of signal to
noise favors the noise. If the amount of noise generated is 10 electrons, and a photon
of light striking that photosite generates 1 electron, the image created by that photon will be lost in the
noise.

Background noise exists in any electrical system, so some signal noise exists in every image. Under
optimal conditions, this noise is not visible, because it’s overwhelmed by the far greater amount of signal
(image) present. The relationship between the amount of signal (image) versus the amount of noise is
called signal-to-noise ratio.
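
A tiny Python sketch of the electron-count example above, showing how a fixed noise floor swamps a
weak signal but is negligible next to a strong one (all values hypothetical):

# Sketch: a fixed noise floor of 10 electrons against signals of varying strength.
NOISE_ELECTRONS = 10

for signal_electrons in (1, 50, 5000):     # dark, mid, and bright photosites
    snr = signal_electrons / NOISE_ELECTRONS
    verdict = "noise dominates" if snr < 1 else "signal dominates"
    print(f"signal {signal_electrons:>5} e-: SNR = {snr:g} ({verdict})")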

The size and pitch (distance between photosites) of a sensor’s photosites have an impact on signal noise. It is
typically the case that physically larger sensors (with proportionally larger photosites) generate less signal
noise. Larger photosites are inherently more sensitive to light. Their individual surface areas are larger, and
capable of capturing more photons of light and storing a larger number of electrons. As a result, a greater
amount of signal is generated for a given amount of light, requiring less amplification to generate an image,
and creating a more favorable signal-to-noise ratio.

It’s worth noting that of two sensors of identical physical size but different resolutions, the lower
resolution sensor can generate less signal noise. If two sensors have the same physical size,
increasing the number of photosites to create more pixels also forces the manufacturer to reduce the size of
the photosites. These smaller photosites are less sensitive to light, and require more amplification to
generate an image, also amplifying the noise along the way. In practice, this effect is rarely seen, since
camera makers incorporate increasingly improved techniques for removing the appearance of signal noise
by processing the image after it’s captured from the sensor.

File encoding artifacts usually refer to compression artifacts, and are most commonly seen in highly
compressed images. The lossy compression algorithm used in JPEG (and a few other) file types strips away
increasing amounts of image data depending on the compression level desired. Unfortunately, the effect of
mapping the image’s pixel values to a smaller subset of values (quantization) can create some or all of the
following artifacts:
* Ringing (jaggedness and random speckles at sharp tonal transitions, particularly visible in text)
* Contouring
* Posterizing (continuous gradation of tone replaced with an abrupt change from one tone to another)
* Aliasing along curving edges
* Blockiness in "busy" regions (sometimes called quilting or checkerboarding)

Camera Sensor Types

There are several types of sensors in common (and uncommon) use in digital cameras and scanners.

CCD – Charge-Coupled Device. A CCD is an analog device. When light strikes the chip it is held as a
small electrical charge at each photosite. The charges are converted to voltage one pixel at a time as they
are read from the chip. Additional circuitry in the camera performs A/D conversion to turn the voltage into
a color value and create image pixels.
CCD sensors create high-quality, low-noise images, but the requirements of their implementation introduce
increased noise levels in the circuits used to operate them. Commonly found in digital cameras, desktop
scanning equipment, and video cameras. Many point-and-shoot and small format SLRs use these sensors in
the following sizes:

24 x 16 mm (half frame or APS) – used in SLR cameras
8.8 x 6.6 mm (2/3 in.) – used in point-and-shoot cameras
7.2 x 5.3 mm (1/1.8 in.) – used in point-and-shoot cameras
5.3 x 4.0 mm (1/2.7 in.) – used in point-and-shoot cameras

All currently manufactured medium format cameras and digital backs use CCD image sensors.

CMOS – Complementary metal–oxide–semiconductor. A CMOS chip is made using the same techniques
as computer microprocessors. Extra circuitry next to each photosite converts light energy to voltage.

Additional circuitry on the chip converts the voltage to digital data. CMOS image sensors are characterized
by low power consumption, reduced component count for implementation, and on-chip image optimization
circuitry. Imaging problems like signal noise are addressed on the sensor itself, reducing the opportunity
for increasing amounts of noise to be introduced in the amplification and processing stages of the camera.
These sensors are commonly found in small format digital SLR cameras in APS (16x24mm) or full frame
(24x36mm) sizes.

Foveon – used in a limited number of cameras by Sigma, the Foveon sensor is basically a CMOS sensor
where the photosites are in three layers, instead of implemented on a single plane with a Bayer mask. The
Foveon X3 sensor creates its color output for each photosite by combining the outputs of stacked
photodiodes at each of its photosites. Each photodiode layer has a different spectral sensitivity curve, and
responds only to the corresponding wavelengths of light for that layer. The benefits of this technique are:
* Color artifacts normally associated with Bayer masks are eliminated.
* Light sensitivity is increased (although color noise may also be increased in low light situations).

Super CCD SR – Used by Fuji, this sensor is basically a CCD with two interlocking arrays of different sized
photosites referred to as S-pixels and R-pixels. Paired photodiodes are employed for each photosite: a
larger photodiode for high sensitivity, with a smaller photodiode for lower sensitivity. When exposure
exceeds the ability of the larger diode to capture highlight detail, its value is replaced by the light level
captured by the smaller, less sensitive photodiode. This creates a dramatic increase in dynamic range by
extending highlight detail. This sensor includes 6 million of each type of photosite, creating 12 million
total, of which only 6 million are active for a given photo. Since there are 6 million active photosites for
each captured image, the optical resolution is technically 6 megapixels, although Fuji cameras employ
sophisticated interpolation features to achieve their target resolution of 12 megapixels.

Most still cameras use either a CCD or CMOS sensor. Neither technology has a clear advantage in image
quality. CMOS can potentially be implemented with fewer components, use less power and provide faster
image capture than CCDs. CCD is a more mature technology and is in most respects the equal of CMOS.
8. COLOR AND FILTER PRINCIPLES FOR B&W

COLOR AND LIGHT

Color is the brain's interpretation of the wavelength sensations experienced by the eyes. With normal vision,
under white light, seeing color is unavoidable, so we tend to think of color as an attribute of objects rather
than as a sensation.

The visible portion of the electromagnetic energy spectrum—light—extends from about 700 nm
(nanometers; 1 nm = 1 billionth of a meter) to 400 nm. When white light is dispersed by a prism or in a
rainbow, we can see bands of individual colors, each produced by a narrow group of wavelengths. From the
long to the short end of the range, these colors include Red, Orange, Yellow, Green, Blue, Indigo, and
Violet.
[Wavelengths just longer than visible red (700 nm) are called infrared—below red. Those just shorter than
visible violet (400 nm) are called ultraviolet— above violet.] For simplicity, we group the visible
wavelengths into three major bands: Red, Green, and Blue. These are the primary colors of light.

SEEING COLOR
When the eye sees all the visible wavelengths at the same time (or equal-strength samples from the R, G, B
bands), it sees the color white. Thus, white light contains all the visible wavelengths. In photography,
"white" daylight is the wavelength mixture from direct sunlight and light from open blue sky. Electronic
flash produces light with about the same wavelength mixture (some flash tubes produce a bit more blue).
Tungsten bulbs produce similar light, but are weak in the blue band—they produce some blue, but not as
much as the amounts of red and green they produce.

Objects have color because of selective reflection, or selective transmission. These terms refer to how an
object affects white light that falls on it. The object absorbs some wavelengths and either reflects (if it is
opaque) or transmits (if it is transparent) the remaining wavelengths. The eye–brain perception and
interpretation of the wavelength mix that is reflected or transmitted "makes" the object a particular color.
Some objects —e.g., light bulbs, neon tubes, hot metal—emit (rather than reflect or absorb) wavelengths. If
they emit all wavelengths equally, they look white. If they emit a partial mix —selective emission—it is
colored light.

In order for us to see the true color of an object, it must be illuminated by white light, so that whatever
wavelengths it can reflect or transmit are actually present. If they are not, the object color will be distorted or
in some cases lost entirely. For example, if a green object is seen under red or blue light, it will look very
dark gray or black (colorless) because it can reflect only green wavelengths, but neither light source supplies
green wavelengths.

COLOR PRINCIPLES
White light is composed of many different wavelengths of energy. When separated into groups of closely
neighboring wavelengths, they are seen as individual colors: red, yellow, orange, green, blue, etc. A rainbow
and a prism separate white light this way. Opaque objects have color because they reflect some wavelengths
from white light and absorb (do not reflect) others. (Transparent colored objects transmit some wavelengths
and absorb others. Color light sources emit only certain wavelengths and not others.)
Few colors are pure. We see the color of the dominant wavelength(s), but other wavelengths are
mixed in with them. Red bricks also reflect some green, some blue, and other wavelengths. A green leaf both
reflects and (if thin enough) transmits green wavelengths, but also some blue, red, and other wavelengths.
The dominant wavelengths determine what we call the color of an object. We may be unaware of the other
wavelengths, or we may recognize that they are present because they modify the dominant color to a
noticeable degree. For example, magenta is dominantly red, but has a noticeable quantity of blue
wavelengths as well. Aqua is dominantly blue, but has a noticeable quantity of green wavelengths. Orange is
a mix of red and yellow or green wavelengths. And so on. (All these colors may also contain other
wavelengths, but not in quantities noticeable to the eye.)
COLOR SYNTHESIS
Working with light, color can be created or synthesized by adding primary colors together, or by subtracting
colors from white light.

Additive Color Synthesis


Color is created by combining (adding) primary-color light. This can be done with white light sources and
primary-color filters; or by taking a small sample group of wavelengths from each of the major spectrum
bands, R, G, B; or by creating closely spaced glowing dots of R, G, B, as on a video screen.

The primary colors of light are Red, Green, Blue.

Secondary colors are created by adding equal amounts of two primaries:


G + B = Cyan
B + R = Magenta
R + G = Yellow
White light is composed of equal amounts of all three: R + B + G = White

Other colors (e.g., orange, lime, brown, etc.) are created by adding unequal amounts of two or all three
primaries. Additive color synthesis corresponds to the three-color theory of human vision: The retina of the
eye has cells (called cones) that are separately sensitive to either R, G, or B. (Other, non–color-sensitive
cells, called rods, can respond to very low levels of light; they make it possible to see at night or in dark
conditions.) We see various colors because different combinations of cones are stimulated by the various
wavelength mixes reflected from, transmitted by, or emitted by the objects we are looking at.

Subtractive Color Synthesis


Color can also be created by beginning with white light and removing (subtracting) those wavelengths that
are not needed to make the eye see a particular color. That is what is done by filters, pigments, and the dyes
in color photographic emulsions.

The key subtractive colors are the secondary colors: Cyan, Magenta, Yellow. Their action is:

Cyan subtracts (absorbs) Red, and vice versa
Magenta subtracts (absorbs) Green, and vice versa
Yellow subtracts (absorbs) Blue, and vice versa

Each pair of a secondary (subtractive) color and the primary color that is not part of its makeup is called a
complementary pair because together they make up a complete set of the primaries:

Cyan—Red [= G + B + R]
Magenta—Green [= B + R + G]
Yellow—Blue [= R + G + B]

The color triangle places complementary colors directly opposite one another, and places each secondary
color between the two primaries that compose it.
FILTER PRINCIPLES

A filter subtracts (absorbs) wavelengths from white light, or from the mixture of wavelengths reflected from
objects and scenes illuminated by white light as follows:
1. A filter subtracts its complementary color(s) and any related colors alongside the complementary
on the color triangle
2. A filter transmits its own color(s) and any related colors alongside itself on the triangle.
3. R, G, B filters transmit only their own colors. (They do transmit their own part of secondary or
other colors. For example, a red filter transmits the red part of yellow or orange, but not the green part.)
4. C, M, Y filters transmit two colors equally, the two primaries on their immediate left and right on
the color triangle. Thus, these filters absorb/subtract/control a single primary color—the one that is their
complement.
5. The amount of color absorption depends on filter strength, or color density. Few filters have 100%
strength. Most reduce the amount of a particular color passing through but do not absorb it completely.
6. Any two primary (R, G, B) filters used together absorb all three primary colors.
7. All three secondary (C, M, Y) filters used together absorb all colors.
In (6) and (7) above, if the filters are of equal but less than 100% strength (density), their effect is like that of
a neutral density filter: they reduce the intensity of the light, but do not change its color balance (its
wavelength composition).

Spectral Sensitivity

Spectral Sensitivity means color sensitivity. The most commonly used black-and-white films are
panchromatic – they are sensitive to all colors of light.

The illustration below shows the wavelengths of visible light. Energy made up of wavelengths shorter than
400 nanometers is ultraviolet light. Although ultraviolet is outside the range of visible light, nearly all films
are sensitive to it. The glass in most camera lenses absorbs UV of wavelengths shorter than 350 nm, so even
though films are sensitive to it, you can’t use light in this range to make photographs.

The term panchromatic refers to film that is sensitive to all colors of light, but not necessarily equally
sensitive to all colors of light. Since films tend to be more sensitive in the blue and UV portion of the
spectrum, and less sensitive to green, yellow, orange, and red, areas of a B&W image representing blue tones
typically render disproportionately lighter than red and green tones.

PHOTOGRAPHING COLOR IN B&W

In order for all colors to register on a film, the emulsion must have panchromatic (all-color) sensitivity. The
total exposure of an object on the film depends on all its wavelengths affecting the silver halide crystals in
the area in which its image falls. If some of its wavelengths—even those not noticeable to the eye—do not
register in the emulsion, the exposure will be reduced in that area. As a result, there will be less density in the
image of that object (thinner in the negative), and therefore it will appear darker in a print because more
printing light can pass through the negative at that point.

The local contrast in a B&W print—the difference in gray tones among the various parts of the
image—depends on how much exposure each part gets. We can affect the contrast by preventing all or some
percentage of some wavelengths from exposing the emulsion. As a result, the images of objects in which
those wavelengths are the dominant color will be darker in a print. And by comparison—or in contrast—the
images of other objects will seem to be lighter in gray tone. (The apparent lightness of a tone depends on the
surrounding tones. The same gray tone looks lighter when surrounded by black and dark grays than when
surrounded by white and light grays.)

We can selectively control the wavelengths or colors that reach the emulsion by using filters.

COLOR FILTERS FOR B&W PHOTOGRAPHY

A color filter is made of transparent material that absorbs some colors and transmits others without
disturbing the optical path (the image pattern) of the light. The color a filter transmits is the color you see
when you look through it at a white surface. Filters are designated by color and by a number, for
example, a red No. 25 filter.

While it is often the photographer’s intent to record colors in gray-tone equivalents of their visual
brightnesses, there are other occasions where the intent is to deliberately distort gray-tone rendering
(contrast) in order to accomplish special visual effects. The basis for choosing a filter for B&W use is this:

A filter transmits its own and closely related colors; it absorbs opposite colors.
Therefore it will make objects of its own color look lighter in a print and objects
of the opposite (or complementary) color look darker.

In general, if you want to darken the gray-tone rendering (positive print rendering) of a subject color, you
select a filter of a color complementary to that of the subject. For example, consider the sky as a blue subject.
A yellow (No. 8), orange (No. 21), or red (No. 25) filter will give a darker rendering of the blue sky in the
photograph. To lighten a subject color, use a filter of similar color; thus a blue filter (No. 47) lightens the
gray tone rendering of the blue sky.

Compensating Exposure for filter use - FILTER FACTORS

Filters absorb part of the light that passes through them, so you need an exposure increase when using a filter
or the film will be underexposed. When you use a filter to make an object darker, it absorbs part of the light
from that object so its image registers less strongly on the film. However, many other objects will also be
reflecting some of the wavelengths that the filter absorbs, mixed in with their overall color. Therefore they
will also appear somewhat darker unless extra exposure is given to make up for the portion of light that the
filter absorbs. The required amount of extra exposure is indicated by the filter factor. Correct exposure is
achieved by multiplying the exposure time (shutter speed) by the factor, or by opening the lens aperture an
equivalent number of stops.

Filter Factor Example: Proper exposure without a filter is f/8 @ 1/60 sec. Proper exposure using a filter
with a factor of 4 (often written 4X) would be:

Either: a. Change shutter speed to: 1/60 x 4 = 4/60 = 1/15 sec @ f/8

Or: b. Change the f-stop to: f/4 @ 1/60 sec (opening one stop, from f/8 to f/5.6, gives
2X as much exposure; opening another stop, to f/4, doubles that, for 4X as much exposure)
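
The same arithmetic in a short Python sketch: a factor of 4 either multiplies the exposure time by 4, or
opens the aperture log2(4) = 2 stops (each stop divides the f-number by the square root of 2):

import math

# Sketch of the filter factor compensation above.
def compensate(shutter_sec, f_stop, factor):
    new_shutter = shutter_sec * factor                 # option a: longer exposure
    stops_to_open = math.log2(factor)                  # option b: wider aperture
    new_f_stop = f_stop / (2 ** (stops_to_open / 2))   # one stop = f-number / sqrt(2)
    return new_shutter, stops_to_open, new_f_stop

shutter, stops, f_no = compensate(1/60, 8, factor=4)
print(f"Either 1/{round(1/shutter)} sec @ f/8, or open {stops:g} stops to f/{f_no:g}")
# Either 1/15 sec @ f/8, or open 2 stops to f/4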

Filter factors may differ for daylight/electronic flash and tungsten illumination because these light sources
have different wavelength compositions. Tungsten light is deficient in blue wavelengths, compared to
daylight/electronic flash. More accurate factors are given in the individual data sheets for specific films.

If you have a camera that meters through the lens, it would be convenient if you could simply meter the
scene through the filter and let the camera set the exposure. But some types of meters do not respond to all
colors, so you may get an incorrect reading if you meter through a filter, particularly through a dark filter. A
#29 deep red filter, for example, requires 4 stops additional exposure, but some cameras only produce a
2½ stop increase when they meter through that filter, under-exposing the film by 1½ stops.

You can meter through a filter if you run a simple test first. Select a scene with a variety of colors and tones.
Meter the scene without the filter and note the shutter speed and aperture. Then meter the same scene with
the filter over the lens. Compare the number of stops the camera’s settings changed with the number of stops
that they should have changed according to the filter factor. Adjust the settings as needed whenever you use that
filter.
NEUTRAL DENSITY FILTERS
A special kind of filter does not change the color composition of the light passing through it because it
absorbs the same amount of all wavelengths; as a result it looks gray. In B&W its effect is to reduce
exposure without affecting contrast. Because it is neutral in color but comes in various strengths (densities),
it is called a neutral density (ND) filter. Such filters are identified by the letters ND and the actual density,
for example: ND 0.3.

The density indicates the exposure-reduction effect of the filter: 0.3 density reduces light transmission by one
stop (2X factor). Therefore: ND 0.1 reduces exposure 1/3 stop; ND 0.2 reduces exposure 2/3 stop; ND 0.6
reduces exposure 2 stops (4X factor), and so on.
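
The density-to-stops arithmetic in a short Python sketch (0.3 density equals one stop because 0.3 is
approximately the base-10 logarithm of 2):

# Sketch: each 0.3 of neutral density is one stop of exposure reduction.
for density in (0.1, 0.2, 0.3, 0.6, 0.9):
    stops = density / 0.3            # 0.3 density ~ log10(2), i.e. one stop
    factor = 2 ** stops              # exposure factor: ND 0.3 -> 2X, ND 0.6 -> 4X
    print(f"ND {density}: {stops:.2g} stop(s), {factor:.2g}X factor")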

ND filters make it possible to reduce exposure without changing shutter speed (for motion-stopping or blur
reasons) or without changing f-stop setting (for depth of field reasons).
Examples
a. You want to shoot at a speed of 1/15 sec. in order to blur movement. The meter reading
of the subject is f/11 @ 1/125 or the equivalent f/32 @ 1/15. However, your lens only closes down to f/22,
which would give one stop overexposure. Solution: Shoot at f/22 @ 1/15 with an ND 0.3 filter, which
reduces the light one stop, just as closing to f/32 would do.
b. You want to shoot at f/2.8 to get very shallow depth of field, so only one face in a group
will be sharp. The meter indicates 1/250 @ f/11 or the equivalent, 1/4000 sec. @ f/2.8. However, your
camera’s fastest shutter speed is 1/1000, which is two speeds too slow. That would give two stops
overexposure. Solution: Shoot at f/2.8 @ 1/1000 with an ND 0.6 filter, which reduces the light by two stops.

ND filters are also the most accurate way to make exposure changes of 1/3 and 2/3 stop, which
are sometimes needed in color photography. (Some lenses have 1/3-stop click settings between full
f-numbers, but most have either 1/2-stop positions or only full-stop positions.)
To reduce exposure 1/3 stop, use an ND 0.1 filter; to reduce 2/3 stop use ND 0.2.
To increase exposure, open up one full f-stop and use an ND 0.2 filter for an actual increase of only
1/3 stop, or ND 0.1 for an actual increase of 2/3 stop.
9. Color Temperature and Color Correcting Filters
COLOR TEMPERATURE

The concept of color temperature is used to describe the color characteristics of a light source, or the sensitivity
range of a color film or paper emulsion. This concept derives from the phenomenon that as a solid material is heated,
at some point it begins to glow—emit visible wavelengths—and as the temperature rises, the color of the glow
changes.
The object that is heated to establish reference color temperatures is a black body—material that absorbs all
wavelengths of energy falling on it and reflects none of them. However, when it is heated, it begins to radiate energy.
At some point the heat can be sensed by touch; then as the temperature of the blackbody rises, it begins to change
appearance. First it glows a dull red, then as it gets hotter, cherry red, then orange, yellow-orange, and progressively
brighter and whiter, eventually reaching the stage we call "white hot."
These color changes occur because the object is adding increasingly shorter wavelengths to those it is already
radiating. Each color change corresponds to a certain temperature measured on the Kelvin or thermodynamic scale.
The unit of measurement, called a Kelvin (K), represents the same temperature interval as a Celsius degree. However, the Kelvin scale
starts at absolute zero, the temperature at which even molecular movement stops. [0 K is -273.15°C or -459.67°F. The
freezing point of water is 0°C (32°F), or 273.15 K.]
Light that has the same wavelength mixture as that emitted by the blackbody at a particular temperature is said to have
that color temperature. Thus, a 3200 K light bulb gives off light with the same wavelength mix as that given off by a
blackbody heated to 3200 K. The lower the color temperature, the redder the light looks, because it lacks blue
wavelengths. The higher the color temperature, the whiter or even bluer the light looks because blue wavelengths are
being added to the output. A photographic emulsion is identified as having a particular color balance. That means its
color sensitivity has been adjusted to produce normal looking colors when exposed to light of that color temperature.
Color balance is more important with reversal films than with negative films. With a color reversal film,
transparencies (such as slides) are made directly from the film that was in the camera, and they have a distinct color
cast if the film was not shot in the light for which it was balanced or if it was not shot with a filter over the lens to
balance the light. With a color negative film, color balance can be adjusted when prints are made.

Film Color Balances


Film Type Balanced for
Type S 5500 K
Type A † 3400 K
Type B 3200 K
† The last Type A film was Kodachrome 40.

LIGHT SOURCES AND THEIR APPROXIMATE COLOR TEMPERATURES


Skylight                          12000 to 18000 K
Overcast sky                      7000 K
Photographic daylight*            5500 K
Electronic flash                  5500 to 6500 K
Sunlight (average noon)           5400 K
500-watt 3400 K photolamp         3400 K
500-watt 3200 K tungsten lamp     3200 K
200-watt general service†         2980 K
100-watt general service          2900 K
75-watt general service           2820 K
40-watt general service           2650 K
Candlelight                       1900 to 1950 K

* Photographic daylight is the average mix of open blue skylight and direct sun falling on the front of a subject between mid-morning and mid- to late afternoon.
† General service bulbs are common household screw-base bulbs.
NOTE: Fluorescent tubes do not emit a continuous spectrum, and so cannot be assigned a color temperature rating.
Mired and Mired Shift Values

A mired is a unit of measurement used to convey a specific color temperature. A mired value is the reciprocal of the
corresponding Kelvin color temperature, multiplied by 1,000,000. This provides simpler numbers that, unlike Kelvin
values, are additive, which makes it easy to determine the strength of a correction filter or gel.

Mired = 1,000,000 / Color Temp in Kelvin

Using mired values, one can tell how much correction gel is necessary to 'shift' a light source from its color temperature to a
new one of choice. Two mired values are needed to perform the calculation: the mired value of the desired color
temperature, and the mired value of the light you are going to change with correction gel. The difference between them is
the required mired shift:

Mired shift = (1,000,000 / desired color temp in K) - (1,000,000 / source color temp in K)
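
A short Python sketch of the mired shift calculation, using the daylight-to-tungsten conversion as an
example (the roughly +131 mired result corresponds to a strong warming, i.e. amber, gel):

# Sketch of the mired arithmetic above. Positive shift = warming, negative = cooling.
def mired(kelvin):
    return 1_000_000 / kelvin

def mired_shift(source_k, desired_k):
    return mired(desired_k) - mired(source_k)

# Example: shifting 5500 K daylight to 3200 K tungsten balance.
shift = mired_shift(5500, 3200)
print(f"{shift:+.0f} mireds")   # +131 mireds, a strong warming gel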

FILTERS FOR COLOR PHOTOGRAPHY


There are three major classes of filters specifically used in color photography: conversion filters, light-balancing filters, and color
compensating filters. Others—such as neutral density and special-purpose filters—may also be used. Their function is to change
the light entering the lens so that it matches the color balance or color response of the film in use.
Conversion filters make gross changes in the light. They are used to change the overall color balance of the illumination to
match the color balance of the film in use. Thus, they change daylight or electronic flash to match tungsten or type A film
balance, or change 3200 K or 3400 K illumination to match daylight type film balance.
Light-balancing filters make moderate changes in tungsten illumination to make it somewhat "warmer" (more reddish) or
"cooler" (less reddish). They are particularly useful for matching the light from household (general service) bulbs with tungsten
film balance.

Color compensating (CC) filters make small changes in the light. They are used to fine-tune results in camera exposure, in
printing color negatives and slides, and in copying (duplicating) color slides. The most useful CC filters are C, M, Y because
each controls just one primary color. However, there are also R, G, and B CC filters. When taking pictures, usually only one
color of CC filtration is used. But in color printing, different densities of two secondary colors may be required (but never all
three—that simply produces neutral density).
A similar series, color printing (CP) filters, are made of heat-resistant plastic so they can be used in the light head of a B&W
enlarger. They do not have the same high optical quality as CC filters and so cannot be used in front of a camera or enlarger
lens because they will degrade the image.

FILTERS AND ARTIFICIAL LIGHT SOURCES

It is possible to use a single filter, or a pair of closely matched filters, to adjust the light from sources that produce light by
means of heated material—such as the filament in a tungsten bulb or the wick of a candle. Such sources emit wavelengths in
roughly equal quantities (strengths) over a continuous portion of the visible spectrum. Their output drops off in the blue region,
but because it is continuous in the other regions, it can be adjusted ("corrected" to a different color temperature) with a single
color of filtration.
Fluorescent tubes are coated with phosphors on the interior surface. When excited by an electrical current flowing through gas
in the tube, the phosphors glow, emitting light. However, they do not emit continuous spectra. Wavelengths are emitted at
various, separated parts of the spectrum, and in very unequal strengths—some in insignificant amounts, others with spikes of
high intensity, especially in the green region.
The output pattern differs among the various kinds of fluorescent tubes. For this reason, filtration to adjust the light output to a
specific color temperature is different for various tubes and various films. There are three filters that attempt to deal with the
problem in an overall way. They and some CC filter corrections for common fluorescent illumination are listed in the table
Dealing with Fluorescent Light below, along with some additional information.
CONVERSION FILTERS
No. Color Converts Exp. Increase
80A Blue 3200 K to 5500 K 2 stops
85B Amber 5500 K to 3200 K 2/3 stop
LIGHT-BALANCING FILTERS

TO GET 3200 K from    USE Filter No.    Exp. Increase in stops
2490 K                82C + 82C         1-1/3
2570 K                82C + 82B         1-1/3
2650 K                82C + 82A         1
2720 K                82C + 82          1
2800 K                82C               2/3
2900 K                82B               2/3
3000 K                82A               1/3
3100 K                82                1/3
3300 K                81                1/3
3400 K                81A               1/3
3500 K                81B               1/3
3600 K                81C               2/3
3700 K                81D               2/3
3850 K                81EF              2/3

(The 82-series filters are bluish; the 81-series filters are yellowish.)
DEALING WITH FLUORESCENT LIGHT
Use daylight type color film. Use of negative film is preferred, to be able to correct in printing as well as shooting.
Use ISO 400 film if possible, and a shutter speed of 1/60 sec. or slower to minimize fluorescent flickering.
General Correction Filters for Daylight Type Film
USE Filter    WITH
FL-D          Daylight Film
FL-T          Tungsten Film

Typical Correction with CC (Color Compensating) Filters
(Refer to the individual color film data sheet for greater accuracy)

Fluorescent Tube Type    Daylight Film    Tungsten Film
Unknown                  30M              50R
Daylight                 50R              85B + 30R + 10M
White                    40M              50R + 10M
Warm white               20B + 20M        40R + 10M
Warm white deluxe        30C + 30B        10R
Cool white               30M + 10R        60R
Cool white deluxe        10B + 10C        20R + 20Y
NOTE: Some Fuji color films with “4th Layer Technology” block greenish overexposure from fluorescent light
and do not need corrective filtration.
FILTERS FOR USE WITH HIGH-INTENSITY DISCHARGE LAMPS
Lamp                     Type S Negative          Type S Transparency         Type B

LUCALOX                  70B + 50C + 3 stops      80B + 20C + 2 1/3 stops     50M + 20C + 1 stop
MULTI-VAPOR              30M + 10Y + 1 stop       40M + 20Y + 1 stop          60R + 20Y + 1 2/3 stops
Deluxe White Mercury     40M + 20Y + 1 stop       60M + 30Y + 1 1/3 stops     70R + 10Y + 1 2/3 stops
Clear Mercury            80R + 1 2/3 stops        70R + 1 1/3 stops           90R + 40Y + 2 stops
Sodium vapor lamps are not recommended for critical use. Daylight film gives realistic yellow-amber appearance,
tungsten film more neutral appearance.
Note: Increase calculated exposure by the amount indicated in the table. If necessary, make corrections for film
reciprocity failure, both in exposure and filtration. With transparency films, make a picture test series using filters that
vary +/- CC20 from the filters suggested in the table. Usually test filter series ranging from magenta to green and yellow
to blue are most useful.
Color Temperature Balance Dial

1) Set the red film balance arrow on the rotating dial opposite the type of film you're using
2) On the stationary multicolored scale, find the color temperature of the light source
3) On the filter scale - the outer multicolored scale on the rotating dial, find the filter(s) opposite the color
temperature for the light source. Use this filter on your camera.
4) Calculate exposure by using the normal film speed.
5) Increase the exposure by the number of f-stops shown next to the filter, or change the film speed by the
equivalent amount.
6) Correct for reciprocity law failure if the exposure time requires it.
The filter corrections indicated by the dial should produce color reproduction close to the reproduction given by the film when exposed
with the light source it's balanced for. While this is usually the intended result, there are exceptions. When you want to capture the mood of
a scene that's out of the ordinary, such as a sunset, do not use a correction filter because the usual intention is to capture the natural golden-
orange quality of the setting sun.
For a less extreme color reproduction, you can use a filter of about half the full correction. On the dial, this is halfway between no filter and
the full correction filter, or for example, a blue No. 80C filter for subjects illuminated by the light at sunset and photographed on daylight
color film.
10. Color Compensating Filters & Color Management

COLOR COMPENSATING FILTERS


Color compensating (CC) filters are the real control filters for color photography. Filters made of gelatin or optical plastic will
not interfere with the sharpness or other optical characteristics of the image when used in front of a camera or enlarger lens. A
similar series of filters, designated CP (color printing), is made of heat-resistant plastic suitable for use in the light path above
the negative in an enlarger, but not in front of the lens. Enlargers with built-in filtration have made CP filters obsolescent.
A color compensating filter is identified by the letters CC, followed by the value or density and the initial of the filter color. For
example:
CC20Y is a yellow filter with 0.20 density to blue light (wavelengths)
CC40M is a magenta filter with 0.40 density to green light
CC05R is a red filter with 0.05 density to cyan (G + B) light
Note that the density is written without the 0 and decimal point at the beginning—a density of 0.20 is given simply as 20.
Densities higher than 1.0 are also written without the decimal point; a value of 1.20 is written 120. CC and CP filters are not
made in densities greater than 1.0, but two or more can be combined for higher densities, and high values can be dial-set in
color enlargers equipped with glass-wedge dichroic filters in the light head.

The filter designation does not indicate what color the filter affects. You must know that a filter absorbs its opposite
(complementary) color, and you must know what the complementary is in order to use the filter correctly.
Unlike a neutral density filter, the density number of a CC filter refers only to the specific color(s) that it absorbs, not to the
total amount of white light. A 0.3 ND filter reduces exposure 1 stop because it affects R, G, and B equally. But a CC30C
filter reduces only the R content of the light; it has no effect on G or B and therefore requires less than a full stop of exposure
compensation. The color triangle shows complementary color relationships.
The table below gives the exposure increase required by various CC filters. C, M, and Y filters are most widely used in taking
pictures, and are the only colors used in color printing. However, there are also CC R, G, and B filters. These may be used
when two primary colors need to be reduced equally.
For example, if the situation calls for CC40M to absorb green and CC40Y to absorb blue, a CC40R filter can be used instead,
because it absorbs both G and B. This would put only one filter instead of two in front of the camera lens, reducing the
number of surfaces that might collect dust or cause accidental reflections.

CC FILTER DATA
Exposure Increase in Stops

Filter Value    Yellow    Magenta    Cyan    Red    Green    Blue
05              None      1/3        1/3     1/3    1/3      1/3
10              1/3       1/3        1/3     1/3    1/3      1/3
20              1/3       1/3        1/3     1/3    1/3      2/3
30              1/3       2/3        2/3     2/3    2/3      2/3
40              1/3       2/3        2/3     2/3    2/3      1
50              2/3       2/3        1       1      1        1-1/3

CHOOSING FILTERS

To choose a filter for taking or printing color pictures, identify the color you want to remove—for example, excess blue
in an open-shade picture with illumination from the blue sky, or excess magenta in a print. Or, identify the color you
want to add—for example, adding "warmth," which usually means adding reddishness. You can identify the color by
looking at the subject or print, or from data sheets that accompany the film or paper.
In taking pictures, you must add filtration to get a
desired effect. In printing, you will have a basic amount of filtration (a "filter pack") to which you can add or subtract
filtration.

In printing it usually is easiest to identify what color must be removed from the image, because you can see the
excess color. It is harder to judge what must be added to correct the color balance, because you can't directly see that
color.
The following table shows how to change filtration to add or subtract colors in both photography and printing. In
printing, it is usually better to make a correction by subtracting rather than adding filtration. However, note that you
can achieve the same color change by subtracting one color or by adding its complement(s) to the filtration.

CHOOSING FILTER COLORS FOR SHOOTING OR PRINTING


IF THE SUBJECT     For All Camera Photography           For Printing Negatives
OR PRINT IS TOO    and Printing Transparencies
                   ADD            or SUBTRACT           ADD            or SUBTRACT
Yellow             M + C (= B)    Y                     Y              M + C (= B)
Magenta            Y + C (= G)    M                     M              Y + C (= G)
Cyan               Y + M (= R)    C                     C              Y + M (= R)
Blue               Y              M + C (= B)           M + C (= B)    Y
Green              M              Y + C (= G)           Y + C (= G)    M
Red                C              Y + M (= R)           Y + M (= R)    C

SOME SPECIFIC CORRECTIONS FOR PRINTING COLOR NEGATIVES


IF PRINT IS TOO    Slight Correction        Moderate Correction        Greater Correction
Yellow             +05Y                     +10Y                       +20Y
Magenta            +05M                     +10M                       +20M
Cyan               -05M -05Y or +05C        -10M -10Y or +10C          -20M -20Y or +20C
Blue               -05Y or +05M +05C        -10Y or +10M +10C          -20Y or +20M +20C
Green              -05M or +05C +05Y        -10M or +10C +10Y          -20M or +20C +20Y
Red                +05M +05Y                +10M +10Y                  +20M +20Y

FILTER ARITHMETIC

When using CC filtration it often is necessary to add two or more densities together to reach the required amount.
Similarly, it sometimes is necessary to subtract density to adjust the color balance of a print, or to print a different
negative. The basic rules of filter arithmetic are as follows:

Adding densities. Densities of the same color can be added together:


10Y + 25Y = 35Y
05B + 40B = 45B
Densities of two or three primaries cannot be added. Equal densities of two secondary colors can be added; they
equal the same density of the primary color they both contain:
10Y (G+R) + 10M (B+R) = 10R
30M (B+R) + 30C (B+G) = 30B
45C (B+G) + 45Y (R+G) = 45G
(Also see Expressing Primary Colors as Secondaries, below.) All three secondary colors are never used together.
Their combined effect is neutral density, which reduces exposure without making any color adjustment.

Subtracting densities. Densities of the same color can be subtracted:


50C - 05C = 45C
25R - 10R = 15R
To subtract a secondary color from a primary color, (a) first express the primary as a secondary combination (see
below), then (b) subtract:
(a) 25B – 10M = ? secondaries
    25B = 25C + 25M
(b) 25C + 25M – 10M = 25C + 15M
When subtraction results in a negative value of a secondary color (example: 10Y – 25Y = –15Y), cancel it by adding
equal amounts of all three secondaries:

–15Y    0M      0C     (negative value)
+15Y   +15M    +15C    (add equal amounts of all three)
  0Y   +15M    +15C    (negative value canceled)

Expressing Primary Colors as Secondaries.


Some data sheets and instructions include recommendations for primary-color (R,G,B) filters. It is not possible to do
filter arithmetic with a mixture of primary and secondary color densities. Everything should be expressed as secondary
colors (C,M,Y). To do this, express each primary density as an equal amount of density of both secondaries that
contain it:
Express    As
Blue       Magenta + Cyan
Green      Cyan + Yellow
Red        Yellow + Magenta
One way to determine the common color in two secondaries is to look at the color triangle. The primary color at the
triangle point between the two secondaries is their common color. If the secondaries are filters (or the dyes in emulsion
layers) that is the color they will transmit.

Example. Instructions call for 10B filtration. To include it in calculations with other colors, convert 10B to its
equivalent in secondaries: 10M + 10C.
EXPLANATION
Magenta is a combination of equal amounts of blue and red: M = B + R. Cyan is a combination of equal amounts of
blue and green: C = B + G. When they are used together, the M filter stops the G wavelengths that the C filter passes
but does not stop the B. Similarly, the C filter stops the R wavelengths that the M filter passes but does not stop the
blue. So the combination passes only the color that is common to both filters, blue. In diagram form:
White      M filter     C filter
light      (B + R)      (B + G)

R  ----->  passes ----->  absorbed (X)
G  ----->  absorbed (X)
B  ----->  passes ----->  passes -----> B

The amount or density of the transmitted color is equal only to the common density of the two secondaries. That is,
10M + 10C = 10B, not 20B. This is because only half of the magenta is blue, and only half of the cyan is blue—in
effect, 5B + 5B = 10B. The other halves, 5R and 5G, are each blocked by one of the two filters.
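
The filter arithmetic rules above lend themselves to a short Python sketch (the C/M/Y pack
representation and function names are invented for illustration). Note that stripping the component
shared by all three secondaries implements both the neutral density rule and the negative-value
cancellation:

# Sketch of the filter arithmetic rules. Packs hold C/M/Y densities in the
# 05 = 0.05 convention; primaries are converted to secondaries before use.
PRIMARY_TO_SECONDARIES = {
    "B": ("M", "C"),   # Blue  = Magenta + Cyan
    "G": ("C", "Y"),   # Green = Cyan + Yellow
    "R": ("Y", "M"),   # Red   = Yellow + Magenta
}

def to_pack(color, value):
    """Express one filter as a C/M/Y pack, e.g. ('B', 25) -> 25C + 25M.
    A negative value represents a density being subtracted."""
    pack = {"C": 0, "M": 0, "Y": 0}
    if color in PRIMARY_TO_SECONDARIES:
        for sec in PRIMARY_TO_SECONDARIES[color]:
            pack[sec] = value
    else:
        pack[color] = value
    return pack

def combine(*packs):
    """Add packs, then remove the shared component of all three colors.
    Equal parts of C, M, and Y are neutral density; when one value is
    negative, this same step cancels it by raising all three equally."""
    total = {"C": 0, "M": 0, "Y": 0}
    for p in packs:
        for c in total:
            total[c] += p[c]
    nd = min(total.values())
    return {c: v - nd for c, v in total.items()}

# The 25B - 10M worked example: 25B = 25C + 25M, minus 10M leaves 25C + 15M.
print(combine(to_pack("B", 25), to_pack("M", -10)))   # {'C': 25, 'M': 15, 'Y': 0}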

COLOR GAMUT

In color reproduction, color gamut is a certain complete subset of colors. The most common usage refers to the subset of
colors which can be accurately represented in a given color space or by a certain output device. Another sense, less
frequently used but not less correct, refers to the complete set of colors found in an image. Digitizing a photograph,
converting a digitized image to a different color space, or outputting it to printer generally alters its gamut, since some
of the colors in the original are typically lost in the process.

The gamut of a device or process is that portion of the color space that can be represented, or reproduced. When certain
colors cannot be displayed within a particular color model, those colors are said to be out of gamut. For example, pure
red, which is contained in the RGB color model gamut, is out of gamut in the CMYK model.

When processing a digital image, the most convenient color model used is the RGB model. Printing the image requires
transforming the image from the original RGB color space to the printer's CMYK color space. During this process, the
colors from the RGB space which are out of gamut must be somehow converted to approximate values within the
CMYK space gamut. There are several algorithms for effecting this transformation, but none of them are perfect, since
those colors are simply out of the target device's capabilities. Identifying colors in an image which are out of gamut in
the target color space is critical for the quality of the final product.
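
As a minimal illustration (not one of the production algorithms), the crudest mapping strategy simply
clips each out-of-range component to the nearest in-gamut value, discarding detail in the clipped
channels; the Python sketch below assumes a normalized 0.0 to 1.0 component range:

# Sketch of gamut clipping, the simplest (and crudest) mapping strategy.
def clip_to_gamut(color, lo=0.0, hi=1.0):
    clipped = tuple(min(max(c, lo), hi) for c in color)
    return clipped, clipped != tuple(color)   # flag colors that were out of gamut

# A hypothetical saturated color that falls outside the target space:
color = (1.2, -0.05, 0.4)
in_gamut, was_clipped = clip_to_gamut(color)
print(in_gamut, "clipped" if was_clipped else "in gamut")
# (1.0, 0.0, 0.4) clipped -- detail in the clipped channels is lost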

COLOR MODELS AND SPACES

A color model is an abstract mathematical construct describing the way colors can be represented as numbers, typically
as three or four values or color components (e.g. RGB and CMYK are color models). The RGB model uses additive color
mixing, because it describes what kind of light needs to be emitted to produce a given color. CMYK uses subtractive
color mixing, because it describes what kind of inks need to be applied so that light reflected from the paper and
through the inks produces a given color.

Mapping values between a color model and a color space results in a "footprint" within the color space. This "footprint"
is the gamut, and in combination with the color model, defines a new color space. For example, Adobe RGB and sRGB
are two different color spaces, both based on the RGB model.

A color model (i.e., RGB or CMYK) is a set of numbers that describes all the possible colors it contains. A color space is a
subset of those values that defines all the colors within that model that can be captured or displayed by a device. A color
space also maps the values in its model that are out of range to the nearest corresponding values that are inside its range.

Color spaces can be defined without the use of a color model. These spaces, such as Pantone, consist of names or
numbers which are defined by the existence of a corresponding set of physical color swatches.

Color Space is a term for a specific combination of a color model plus a color mapping function. The term "color space"
tends to be used to identify color models, since identifying a color space automatically identifies the associated color
model. The two terms are often used interchangeably, though this is not strictly correct. For example, although several
color spaces are based on the RGB model, there is no such thing as the RGB color space.

Color spaces define colors as a function of an absolute frame of reference. Color spaces, along with device profiling,
allow reproducible representations of color, in both analogue and digital devices.

There are a wide variety of color spaces. Some common color spaces are listed below:

CIELAB - (or L*a*b* or Lab) produces a color space that is perceptually linear - a change of the same amount in a
color value should produce a change of about the same visual importance. This space is commonly used for surface
colors, but not for mixtures of (transmitted) light.

sRGB - or standard RGB (Red Green Blue) color space, was created cooperatively by Hewlett-Packard and Microsoft
Corporation for use on the Internet. sRGB is intended as a common color space for the creation of images for viewing
on the Internet and World Wide Web (WWW). sRGB's color gamut encompasses 35% of the visible colors specified by
the International Commission on Illumination (CIE). Although sRGB results in one of the narrowest gamuts of any
working space, it’s still considered broad enough for most color applications. It’s commonly used in point-and-shoot
digital cameras, and for printing on consumer level commercial printing equipment.

Adobe RGB - Adobe RGB color space was developed by Adobe Systems in 1998. It was designed to encompass most
of the colors achievable on CMYK color printers, but by using RGB primary colors on a device such as a computer
display. The Adobe RGB color space encompasses roughly 50% of the visible colors specified by the CIE, improving
on the gamut of sRGB color space primarily in cyan-greens.

Adobe Wide Gamut RGB - Adobe Wide Gamut RGB color space is an RGB color space developed by Adobe Systems
as an alternative to sRGB color space. It is able to store a wider range of color values than sRGB. The Wide Gamut
color space is an expanded version of the Adobe RGB color space. The Adobe Wide Gamut RGB color space
encompasses 77.6% of the visible colors specified by the CIE, while the standard Adobe RGB color space covers just
50%. Color accuracy in this space is compromised because 8% of its colors are imaginary colors that do not exist and
are not reproducible in any medium.

ProPhoto RGB - also known as ROMM RGB, was developed by Kodak to offer an especially large gamut for use with
photographic output. This color space encompasses over 90% of possible visible colors described by the CIE, and 100%
of likely occurring real world surface colors, making ProPhoto even larger than Adobe Wide Gamut RGB.
Approximately 13% of the defined colors are imaginary colors that do not exist and are not visible, potentially
compromising color accuracy.

HSB (hue, saturation, brightness) or HSV (hue, saturation, value) - often used by artists because it is more natural to
think in terms of hue and saturation than in terms of additive or subtractive color components.

CMYK (Cyan, Magenta, Yellow, Black) - used in mechanical printing, because it describes the inks that need to be
applied so that light reflected from the paper and through the inks produces a given color. Inks subtract color from the
white page to create an image. CMYK includes ink values for cyan, magenta, yellow and black. There are many CMYK
color spaces for different sets of inks, substrates (paper), and device characteristics.

COLOR MANAGEMENT

Color management is the conversion between the color representations of various devices, such as scanners, digital
cameras and printers. The goal of color management is to achieve the same appearance on different color devices.
In order to describe the behavior of a color imaging device, it must be compared (measured) in relation to a standard
color space. Instruments for measuring device colors include colorimeters (for measuring devices that radiate light, like
monitors) and spectrophotometers (for measuring reflected light, as from prints). These measurements, in combination
with special software, are used to create a color description of the device called a profile.

Calibration is like profiling, except that it can also include the adjustment of the device, and not just the measurement of
the device.

Typically calibration is used to adjust monitors, scanners, cameras and printers for photographic reproduction, so that a
printed photograph appears identical to the original or a source file on a computer display. Three independent
calibrations need to be performed:

* The scanner or camera needs a device-specific calibration to reproduce the original's colors.
* The computer display needs to represent the colors of the image color space.
* The printer needs to match the computer display.

Scanners are profiled using an IT8 target, a transparency or print with many small color fields measured by the
manufacturer. The scanned target’s color values are compared to the reference values provided by the target’s
manufacturer. The differences in these values are used to create an ICC profile, which precisely describes the scanner’s
color characteristics. When subsequent images are scanned, the created profile is assigned to the scans, thus correcting
color and exposure discrepancies between the scanner’s hardware and the target color space.

Cameras are profiled similarly to scanners, except that instead of IT8 targets, color checker cards are used. The color
checker card is similar to an IT-8 target, being composed of many patches of different colors. A photograph is taken of
the card at the beginning of a shoot, and this image is loaded into software that creates an ICC profile specific to the
camera, settings, and lighting conditions present at the time the photo was taken.

Monitors are calibrated by attaching a colorimeter to the display's surface. Special software then sends various colors to
the display and compares the values that were sent against the readings from the colorimeter. This establishes a table of
the differences between the colors sent and the colors received that is used to create an ICC profile for the monitor. This
profile is then loaded when the computer boots, adjusting the display to accurately reproduce color.

Printer profiles are created using a spectrophotometer to read the color values of individual color swatches in a test
print, and comparing the results with the color values sent to the printer. As with monitor calibration, a table of
differences between the colors sent to the printer by the computer and received by the spectrophotometer is used to
create an ICC profile. This profile is then used by ICC aware applications (like Photoshop) when printing to correct
errors between the display and the specific hardware peculiarities of the printer. A calibration profile is necessary for
each printer, paper, and ink combination.
5. Flash
COLOR BALANCE

Electronic flash produces light with a 5500-6000K color balance. This is photographic daylight
color balance, so no filter is required with daylight type color films or with B&W films.

If your color pictures with electronic flash are consistently slightly too bluish, use a no. 81B
(yellowish) filter in front of the camera lens. This filter requires a 1/3-stop exposure increase. A
simple way to automatically include this in your exposure figuring is to multiply your film ISO
speed by 0.80 and set this speed on your flash meter or on the flash/camera auto exposure control
system. If you have a manual flash unit, or want to use your automatic flash in manual operation,
multiply the guide number (GN, explained below) by 0.93 and use that new GN to figure exposure.

If you want to use tungsten (type B, 3200 K) color film with electronic flash, use an 85B filter in
front of the camera lens or flash. Over your lens, this requires a 2/3 stop exposure increase: multiply
the ISO by 0.64, or multiply the GN by 0.80.
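The filter arithmetic above is easy to wrap in a small helper. A minimal sketch in Python (the multipliers are the ones given above; the function and example values are mine):

    # Filter factors from the text: (ISO multiplier, GN multiplier).
    FILTER_FACTORS = {
        "81B": (0.80, 0.93),  # 1/3-stop exposure increase
        "85B": (0.64, 0.80),  # 2/3-stop exposure increase
    }

    def compensate(iso, guide_number, filter_name):
        # Return the adjusted ISO and GN to use with the given filter.
        iso_mult, gn_mult = FILTER_FACTORS[filter_name]
        return iso * iso_mult, guide_number * gn_mult

    # Example: ISO 100 film and a GN 110 flash shot through an 81B filter:
    # compensate(100, 110, "81B") -> roughly (80, 102)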

LIGHT OUTPUT

Self-contained portable flash units of the type used on the camera are rated using a relative
measure of their light output called the Guide Number, which is in turn derived from an absolute
measure of their light output called beam candlepower-seconds (BCPS). Flash manufacturers
measure the BCPS of a flash unit and calculate its Guide Number, a light output rating for the flash
when it’s used at a given film speed, in a specific unit of measure (meters or feet). The more
powerful the flash, the higher its Guide Number. Manufacturers often list two Guide Numbers for
each film speed, one for meters and the other for feet.

Studio flash equipment is rated in watt-seconds (ws), which is a unit of total stored energy and which
cannot be used to determine exposure directly.

USING GUIDE NUMBERS

A GN is useful with a non-auto flash, or with an automatic flash used in manual mode. Flash
exposure is determined by flash-to-subject distance and f-stop. (The camera shutter speed is set to
whatever is required for proper flash sync. It does not control exposure, because the flash duration is
much shorter than the shutter speed.) The GN lets you determine what distance or what f-stop to use
(you decide on one in the picture situation, and must figure out the other) as follows:

f-stop = GN / Flash-to-subject distance

Flash-to-subject distance = GN / f stop to be used
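Each relation is a single division; in code (the GN 110 example is illustrative, not from the text):

    def f_stop_for(guide_number, distance):
        # f-stop = GN / flash-to-subject distance (GN and distance in the same units).
        return guide_number / distance

    def distance_for(guide_number, f_stop):
        # flash-to-subject distance = GN / f-stop.
        return guide_number / f_stop

    # Example: a flash with GN 110 (feet) and a subject 10 ft away:
    # f_stop_for(110, 10) -> 11.0, so shoot at f/11.
    # distance_for(110, 8) -> 13.75, so place the flash 13.75 ft away for f/8.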

INVERSE SQUARE LAW

The farther a subject is from the flash, the less light it will receive, and the larger the aperture (lower f-number)
needed to keep exposure constant. The level of light drops rapidly as the distance between light and subject increases.
At double a given distance from a light source, an object receives ¼ as much light. This is the Inverse Square
Law: intensity of illumination is inversely proportional to the square of the distance from the light to the
subject.
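In code form the law is one expression (function name mine):

    def relative_illumination(d_near, d_far):
        # Inverse Square Law: illumination varies with 1 / distance^2.
        return (d_near / d_far) ** 2

    # relative_illumination(5, 10) -> 0.25: at double the distance the subject
    # receives 1/4 the light, a loss of two full stops.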

FILL FLASH

Flash can be used as a fill light outdoors. A sunny day is a pleasant time to photograph, but direct
sunlight does not provide the most flattering light for portraits. Facing someone directly into the sun
will light the face overall but often causes the person to squint. Turning someone away from the light
can put too much of the face in dark shadow. Flash used as an addition to the basic exposure can
open up dark shadows so they show detail. It is better not to overpower the sunlight with the flash,
but to add just enough fill so that the shadows are still somewhat darker than the highlights: for
portraits, about one stop less than a correct ambient exposure.

Color reversal film, in particular, benefits from using flash for fill light. The final transparency is
made directly from the film in the camera, so it is not possible to lighten shadows by dodging
them during printing.

In much the same way as it is used to lighten shadows on a partly shaded subject, flash can increase
the light on a fully shaded subject that is against a brighter background. Without the flash, the
photographer could get a good exposure for the brighter part of the scene or for the shaded
part, but not for both. Flash reduces the difference in brightness between the two areas.

Ordinarily, flash outdoors during the day is used simply to lighten shadows so they won’t be overly
dark. But you can also combine flash with existing light for more unusual results.

Flash-Plus-Daylight Exposures

You can use flash outdoors during the day to decrease the contrast between shaded and brightly lit areas.

1) Set both camera and flash to manual exposure operation. Set your camera shutter speed to the correct speed
to synchronize with flash. Focus on your subject.

2) Meter the lighter part of the scene. Set your lens to the f-stop that combines with your shutter
synchronization speed to produce a correct exposure for the existing light. For a natural looking fill
light, you need to adjust the light from the flash so that it is about one stop less than that on the scene
overall. There are several ways to do this depending on the equipment you have.

If your flash has adjustable power settings

After doing steps 1 and 2 above, let’s assume that your film speed is ISO/ASA 100, your basic exposure is
1/60 sec at f/16, and the subject is 6 feet away (the distance can be read on the camera’s lens barrel after
focusing).

3) Set the film speed (100 in this example) on the flash calculator dial.
4) Line up on the dial the flash-to-subject distance (6 ft) with the camera f-stop (16).
5) Note the power setting that the dial indicates (such as full power, 1/2 power, 1/4
power). This setting will make the shaded area as bright as the lit area, a rather flat looking light. To get the
flash-filled shadows one stop darker than sunlit areas, set the flash to the next lower power setting (for
example from full power to 1/2 power).

If your flash does not have adjustable power settings

Do steps 1, 2, and 3. Locate the f-stop (from step 2) on the flash calculator dial and find the distance that is
opposite it. If you position the flash that distance from the subject, the light from the flash will equal the
sunlight. To decrease the intensity of the light, drape one or two layers of white handkerchief over the flash
head. Sometimes it’s feasible to move the flash farther from the subject to decrease the amount of light
reaching it (multiply the original distance by 1.4 for a one-stop difference between lit and shaded areas,
multiply by 2 for a two-stop difference).
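The 1.4 and 2 multipliers are the inverse square law again: each stop of reduction multiplies the flash-to-subject distance by the square root of 2 (about 1.4). A quick sketch (function name mine):

    import math

    def flash_distance_for_stops(equal_light_distance, stops_darker):
        # Each stop of reduction multiplies the flash distance by sqrt(2).
        return equal_light_distance * math.sqrt(2) ** stops_darker

    # flash_distance_for_stops(6, 1) -> about 8.5 ft (shadows one stop darker)
    # flash_distance_for_stops(6, 2) -> 12.0 ft (shadows two stops darker)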

Photo captions (left to right): No flash – exposed for shaded background. No flash – exposed for sunlit background.
Exposed for sunlit background plus flash to lighten shaded foreground.


Direct flash off camera. Compared to on-camera flash, this produces more of a three-dimensional feeling. The flash is
connected to the camera by a long synch cord and held at arm’s length above and to the side. Attaching the flash to a
light stand frees both hands for camera operation. Point the flash carefully at the most important part of the subject.
You can’t see the results as you shoot, so it’s easy to let your aim wander.

Flash bounced from above. Bounce light is softer and more natural-looking than direct flash. The flash can be left on
the camera if the flash has a head that swivels upward. In a room with a relatively low ceiling, the flash can be pointed
upward and bounced off the ceiling (preferably neutral in color if you are using color film). A bounce-flash accessory
simplifies bounce use: a card or mini-umbrella clips above the flash head and the light bounces off that.

Flash bounced from side. For soft lighting, with good modeling of features, the light can be bounced onto the subject
from a reflector or light-colored wall (as neutral as possible for color film). You can use a flash unit in automatic mode
if it has a sensor that points at the subject even when the flash head is swiveled up or to one side for bounce lighting.

Direct flash on camera. This is the simplest method, one that lets you move around and shoot quickly. But the light
tends to be flat, producing few of the shadows that model the subject to suggest volume and texture.

IMAGE FORMATION
Visible images are formed by light reflected from points on the surface of an object to a corresponding pattern of points
on a receiving medium such as the retina of the eye, film in a photographic camera, or the light-sensitive array in a video
or digital camera. The path any individual bit of light travels is called a ray. Light rays spread outward from every point
on an object surface, and to form images the rays must be directed to the image-receiving medium. Mirrors direct light
rays by reflection; lenses direct them by refraction.

The image pattern is composed of illuminated points. A pinhole forms an image by allowing only one small group of rays
traveling in the same direction to pass to the receiving medium; but this produces a dim image that is not very sharp. The
function of a lens is to collect rays spreading out from each point on a subject and refract them into corresponding points
on the image-receiving medium. This gives both a brighter and a sharper image.

LENS BASICS
A lens is defined/identified by three fundamental properties:

Focal Length - The distance from the "optical center" of a lens to the sharply focused image of an object at infinity. The
optical center of a lens may not be at its physical center or iris diaphragm position; this is the case with wide-angle and
telephoto lens designs, for complex optical reasons.
How far behind a lens a focused image is formed depends on the power of the lens and the distance from the lens
to the object that is being imaged (focused on). With any lens, as an object gets farther away, its image is focused closer
behind the lens. When an object is at infinity (∞) the image is formed at the closest possible distance to the lens. This is
the image position that determines the lens focal length.
In a symmetrical lens, the optical center is effectively where the iris diaphragm is located. In asymmetrical
telephoto and wide-angle lenses, the "optical center" for making focal-length measurements is at a reference point called
the rear nodal point or exit node. This node is well forward of the diaphragm in a telephoto lens, and in fact may be
located in front of the front surface of the lens. In a wide-angle lens the exit node is behind the diaphragm, and perhaps
even behind the rear surface of the lens.

Lens Speed - The “speed” of a lens refers to its light-passing power: the f/# of its maximum aperture. It is expressed by an
f-number, one of the series of numbers that mark the settings, or f-stops, of the iris diaphragm in a lens. Each f-number is
calculated:
Focal length ÷ diameter of iris opening (as seen from the front of the lens)

The bigger the diameter, the lower the f-number. In some cases the maximum aperture may not be a full f-stop in the
standard series (for example, f/1.8 and f/3.5 are not full stops). This is because the f/# is the ratio of the focal length to
the clear diameter of the iris opening as seen through the front of the lens, and the wide-open diameter may not produce
a ratio that equals a full f/#.

The lens speed is the f-number calculated from the wide-open diameter of the iris. The f-number series is the same for all
lenses, so the iris diameters differ with the focal length of the lens. For example, the diameter that gives a number of f/2
for a 50 mm (2”) lens is smaller than the diameter that gives f/2 for a 100 mm (4”) lens. But this has a very practical
result: All lenses set to the same f-stop transmit the same amount of light, regardless of their different focal lengths. The
standard f-number series is:
f/1 f/1.4 f/2 f/2.8 f/4 f/5.6 f/8 f/11 f/16 f/22 f/32 f/45 f/64 f/90 f/128
The numbers in the series double at every other step. So, if you can remember any
two adjacent f-numbers, you can construct the entire series.
You can also construct the series by this method: Any f-no. x 1.4 = next higher f-no. Any f-no. x 0.7 = next lower f-no.
Example: f/4 x 1.4 = f/5.6, and f/4 x 0.7 = f/2.8. [NOTE there is some rounding off in this method to get the standard
numbers. For example, f/2.8 x 1.4 (= 3.92) = f/4, and f/8 x 1.4 (= 11.2) = f/11.]
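The x 1.4 rule is easy to verify in code; a short sketch:

    import math

    def f_number_series(count=15):
        # Each full stop multiplies the f-number by sqrt(2) (about 1.4),
        # which is why the series doubles at every other step.
        return [math.sqrt(2) ** i for i in range(count)]

    # Rounded to the conventional markings, this reproduces the standard series:
    # 1, 1.4, 2, 2.8, 4, 5.7 (marked 5.6), 8, 11.3 (marked 11), 16,
    # 22.6 (marked 22), 32, 45.3 (marked 45), 64, 90.5 (marked 90), 128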

Coverage – the diameter of the image circle the lens is designed to project, which determines the film format (size) the
lens can completely cover without vignetting or cropping.
CAMERA FORMATS & LENSES
The rule of thumb for approximate calculation of format-to-lens relationships is:
Normal Focal Length = diagonal dimension of the format (use the Pythagorean Theorem: a² + b² = c²)
Wide Angle = short dimension of the format
Telephoto = 1.5 x long dimension of the format
35mm Format:
Normal: ~45mm lens (Diagonal Dimension = 43.3mm)
Wide Angle: 24mm lens (or less) (Short Dimension = 24mm)
Telephoto: ~55mm lens (or more) (1.5 x Long Dimension = 54mm)

2 1/4" Medium Format (~6x6cm):
Normal: 80mm lens (Diagonal Dimension = 80.7mm or 3.18")
Wide Angle: 60mm lens (or less) (Short Dimension = 60mm)
Telephoto: 90mm lens (or more) (1.5 x Long Dimension = 90mm)

4x5" Large Format:
Normal: ~165mm (6.25") lens (Diagonal Dimension = 6.4" or 163mm)
Wide Angle: ~100mm lens (or less) (Short Dimension = 102mm (4"))
Telephoto: ~200mm lens (or more) (1.5 x Long Dimension = 190.5mm (7.5"))

Whether a lens is a “normal,” wide-angle, or telephoto (long focus) focal length (FL) for its format can roughly be
determined from its relationship to the long (L) dimension, the short (S) dimension, and the diagonal (D) of the format
it covers. For example, for the 35mm format:

Long = 36mm (1.5")
Short = 24mm (1")
Diagonal = 44mm (1.75")
Normal = D = 41 to 55mm (1.6 to 2.2"; 50mm or 2" is standard)
Wide = S = 25mm (1") and shorter (moderate wide-angles are up to 35mm)
Telephoto = L x 1.5 = 60mm (2.5") and longer (in practice, 75mm [3"] and longer)
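The rules of thumb above reduce to a few lines of arithmetic. A sketch with format dimensions in mm (function name mine):

    import math

    def lens_rules_of_thumb(short_mm, long_mm):
        # Normal = format diagonal (Pythagorean theorem);
        # wide-angle <= short side; telephoto >= 1.5 x long side.
        diagonal = math.hypot(short_mm, long_mm)
        return {
            "normal_mm": diagonal,
            "wide_angle_max_mm": short_mm,
            "telephoto_min_mm": 1.5 * long_mm,
        }

    # 35mm format (24 x 36mm):
    # lens_rules_of_thumb(24, 36) ->
    #   {'normal_mm': 43.27, 'wide_angle_max_mm': 24, 'telephoto_min_mm': 54.0}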
MAGNIFICATION

The overall refractive power of a lens is called its magnification. The magnifications of two lenses are directly
proportional to their focal lengths. A long focal length lens has greater magnification than a shorter focal length
lens, and the difference is the ratio between their focal lengths (FL1:FL2, or FL1 ÷ FL2), i.e., how many times
one focal length is longer than the other. Examples: A 100 mm lens is twice the focal length of a 50 mm lens (100 ÷ 50 =
2), so it has twice (2X) as much magnification. A 400 mm lens has 8X the magnification of a 50 mm lens, and 4X the
magnification of a 100 mm lens. A 25 mm lens has only half the magnification of a 50 mm lens (25 ÷ 50 = 0.5 or 1/2).

Focal length determines the image size of the subject you focus on. You can change image size by changing focal
length or camera-to-subject distance, as follows:

Using the same focal length lens. Moving twice as far from the subject gives half the image size; moving three times
farther away gives one-third the image size; etc. (Moving from 10 to 20 feet will make the subject half as big in the
second picture; moving from 10 to 30 feet will make it one-third as big.) Similarly, moving closer makes the image
larger: moving to 1/2 the original distance (i.e., moving “twice as close”) makes the image twice as big; moving to 1/3 the
original distance makes the image three times as big; etc. (Moving from 15 to 5 feet gives a 3X larger image.)

From the same camera position. Using twice as much focal length doubles the image size, using three times as much
focal length (i.e., a 150mm lens instead of a 50mm lens) triples the image size, etc. [Of course part of the subject may be
cropped off by the limits of your film frame size, but the part that is included will be 2X, 3X, etc. larger than in the image
with the original focal length you were using.]

The diagram shows the angle of view of some of the lenses that can be used with a 35mm camera. The examples above
show the effect of increasing focal length while keeping the same lens-to-subject distance: a decrease in the angle of
view and an increase in magnification. Since the photographer has not changed position, the sizes of objects within the
frame remain the same in relation to each other.

FIELD SIZE
The width and height of the area you take in at the focused distance (i.e., the frame size around the subject) changes as
you change image size. The change is inverse; that is, as you make the image bigger, the field size gets smaller, and vice
versa. If you make the image 4X bigger (either by changing to a longer lens or moving closer) the field you take in
around the subject will be 1/4 as wide and 1/4 as tall as in the first case. Or if you make the image 1/2 as big (“two times
smaller”) by using a shorter focal length lens or by moving twice as far away, the field you take in will be 2X as wide and
2X as tall as before.

FOCAL LENGTH AND FIELD OF VIEW (Subject/scene area dimensions)

The field of view of a lens is how much of the scene or subject it takes in. It is determined by the lens focal length. A
short focal length has a wider field of view (takes in more) than a longer focal length. All lenses have circular fields of
view, but since film formats are rectangles or squares, in practical terms the field of view is the width and the height of
the area included at any given distance from the lens.

The dimensions of the field of view, W, are determined by three factors: N, the negative (film format) dimension; u, the
distance from the lens; and F, the focal length of the lens. The formula is:

W = uN ÷ F

That is, field dimension is equal to the distance multiplied by the negative dimension, divided by the lens focal length.
Use the negative width to get the width of the field, use the negative height to get the height of the field. The distance
depends on what you are concerned with: foreground, background, or coverage at the subject position. All factors must be
measured in the same units (e.g., inches, mm).

Example: Using 35 mm film (rough dimensions: 1.5” x 1”) in a horizontal format and a 50 mm (roughly 2”) lens, what is
the field of view at a distance of 10 feet?

Method: The distance u is 10 X 12 = 120". N is 1.5" wide, 1" tall. The focal length F = 2".
W [width] = (120 X 1.5) ÷ 2 = 180 ÷ 2 = 90" or 7'6".
W [height] = (120 X 1) ÷ 2 = 120 ÷ 2 = 60" or 5'. Answer: 7'6" x 5'.
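The same computation in code, reproducing the worked example (function name mine):

    def field_of_view(u, N, F):
        # W = uN / F: distance x negative dimension / focal length,
        # all three factors in the same units.
        return u * N / F

    # 35mm frame (1.5" x 1"), 2" lens, subject 10 ft (120") away:
    # field_of_view(120, 1.5, 2) -> 90.0 inches, or 7'6" wide
    # field_of_view(120, 1.0, 2) -> 60.0 inches, or 5' tall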
IMAGE SIZE

Actual image size (or simply image size) means how big an object within the field of view appears on the
camera viewing screen or the film. Relative image size means how big the object appears in one image
compared to its size in another image. Relative image size changes whenever either the focal length or the
distance from the lens to the object is changed.

RELATIVE IMAGE SIZE AND FOCAL LENGTH

Because lens power or magnification varies with focal length, so does image size. As focal length gets longer,
image size gets proportionately bigger. Using 2X more focal length increases image size 2X; using 6X more
focal length gives an image 6X larger; using 1/4X as much focal length gives 1/4X as much image size; and
so on.

To figure the change in relative image size produced by a change in focal length, divide the larger focal
length by the smaller focal length: FL ÷ FS. If you changed from a short to a longer focal length, the new
image size will be that many times bigger. If you changed from a long to a shorter focal length, put a 1 over
the answer to get a fraction. The new image size will be that fraction of the first image size.

Examples: Change from 50 mm to 150 mm lens. (Image size will be bigger.)
Relative image size = 150 ÷ 50 = 3X larger than before.

Change from 500 mm to 100 mm lens. (Image size will be smaller.)
500 ÷ 100 = 5. The change is to a shorter FL, so put 1 over the answer.
Relative image size = 1/5 previous image size.

RELATIVE IMAGE SIZE AND DISTANCE TO OBJECT (SUBJECT)

With any lens, the image size of an object varies with how far it is from the lens. If either the camera or the
subject moves, as the lens-to-object distance, u, (or simply object distance) gets smaller, the image size gets
larger; and as the distance gets larger, the image size of the object gets smaller.

To figure the change in relative image size produced by a change in object distance, divide
the larger distance by the smaller distance: uL ÷ uS. If the distance change brought the object closer, the new
image size will be that many times larger. If the change increased the lens-to-object distance, put a 1 over the
answer to get a fraction. The image size will be that fraction of the first image size.

Examples: Change lens-to-object distance from 40 feet to 10 feet. (Image size will be bigger.)
Relative image size = 40 ÷ 10 = 4X larger than before.
Change distance from 5 meters to 15 meters. (Image size will be smaller.)
15 ÷ 5 = 3. Put 1 over answer because the distance has increased.
Relative image size = 1/3 previous image size.
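Both rules reduce to one division each; a sketch (function names mine):

    def size_change_from_focal_length(fl_old, fl_new):
        # Image size scales directly with focal length.
        return fl_new / fl_old

    def size_change_from_distance(u_old, u_new):
        # Image size scales inversely with lens-to-object distance.
        return u_old / u_new

    # size_change_from_focal_length(50, 150) -> 3.0 (3X larger)
    # size_change_from_distance(40, 10) -> 4.0 (4X larger)
    # size_change_from_distance(5, 15) -> 0.33 (1/3 the previous size)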

DEPTH OF FIELD

Depth of Field is the near-to-far zone in the image in which details seem to be equally sharp. It begins at some
distance closer to the camera than the distance the lens is focused on, and extends beyond the focused distance
(perhaps to infinity). When a lens is focused at a given distance, some things closer to the camera and some
things farther away may also appear sharp in the final image. This zone of apparent sharpness is called depth of
field, DOF. The amount of DOF depends on how small the out-of-focus spots are on the negative. If they are
below a critical size—the size of the circle of confusion—they will look as sharp as sharply focused image
points.

When taking a picture, two things determine the size of the circle of confusion and therefore DOF: image
magnification (M) and aperture (f#).

For more DOF: “Think small”; Make the image size smaller or use a smaller aperture (higher f/#); or both.
*Reduce magnification (use shorter focal length; move farther away; or both)
*Use a smaller aperture (higher f#)
*Do both

For less DOF: “Think big”; Make the image size larger or use a larger aperture (lower f/#)
*Increase magnification (use longer focal length; move closer; or both)
*Use a larger aperture (lower f#)
*Do both

The idea that wide-angle (short focal length) lenses give more DOF and telephoto (long focal length) lenses give
less DOF than a normal lens is true only when they are used at the same lens-to-object distance, because they
have different degrees of magnification, and DOF varies with image size (at the same f#).

BUT: When used at different distances, so that the image size or magnification is equal, all lenses give the
same DoF when set at the same f#.

For example: If a 2" lens gives a 1" image at 10' from an object, a 4" lens will give a 1" image from 20' away, and
an 8" lens will give a 1" image from 40' away. If the lenses are all set to f/8, the DOF will be the same. At f/4, all
will have the same, lesser DOF; at f/22, all will have the same, greater DOF; and so on at all other f-stops.

Depth of Field Limits

Depth of field is generally defined by its near and far limits, measured from the lens. The near limit, DN, is the
distance at which things will begin to look sharp in the image. The far limit, DF, is the distance at which things
no longer will look sharp in the image.
The determining factors are lens focal length F, allowable size of circle of confusion C, aperture f#, and focused
distance u. The factors F, C, and f# can be taken care of by using the hyperfocal distance.
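The text stops short of the formulas, but the standard approximations are H = F²/(f# x C) + F for the hyperfocal distance, DN = Hu/(H + (u - F)), and DF = Hu/(H - (u - F)). A sketch in millimeters (the 0.03 mm circle of confusion is a common value for the 35mm format, assumed here):

    def hyperfocal(F, f_number, C=0.03):
        # Hyperfocal distance; F and C in mm, result in mm.
        return F * F / (f_number * C) + F

    def dof_limits(u, F, f_number, C=0.03):
        # Near and far limits of the depth of field for focused distance u (mm).
        H = hyperfocal(F, f_number, C)
        near = H * u / (H + (u - F))
        far = float("inf") if u >= H else H * u / (H - (u - F))
        return near, far

    # A 50 mm lens at f/8 focused at 5 m (5000 mm):
    # dof_limits(5000, 50, 8) -> about (3394, 9486), i.e., sharp from 3.4 m to 9.5 m.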

Zone Focusing

Zone focusing lets you focus and adjust the depth of field in advance of shooting. This is useful when you may
need to shoot rapidly and can predict approximately where, although not exactly when, action will take place; an
auto race would be one example. It is also useful if you want to be relatively inconspicuous so you don’t distract
your subjects, for example when photographing strangers on the street. To zone focus, use the depth-of-field
scale on your lens to set the limits of the depth of field; anything photographed within those limits would then be
sharp. The precise distance at which something happened would not be important since the general area would
be acceptably in focus.

Zone focusing is a way of using a lens’s depth-of-field scale to photograph a scene without focusing before
exposure. Suppose the nearest point you want sharp is 7 ft (2.1 m) away and the farthest is 15 ft (4.6 m). By setting
those distances opposite a pair of bracketing marks on the depth-of-field scale, you will know what f-stop to use so
that objects between those two distances are in focus. With this lens the aperture must be f/5.6 or smaller. The lens is
now set so that no further focusing need be done provided the action stays between the two preset distances.

Focusing on the hyperfocal distance will help you get maximum depth of field in a scene where you want
everything to be in focus from the near foreground to a far-distant background. When the lens is focused on
infinity (marked ∞ on the lens distance scale), everything at that distance from the lens or farther will be sharp.
But for maximum depth of field, don’t focus on infinity. Instead focus on a point closer to the lens, so that infinity
falls just within the farthest limits of the depth of field.

This point, the hyperfocal distance, is the distance to the nearest plane that will be sharp when the camera is
focused on infinity. If you focus on the hyperfocal distance, everything from half the hyperfocal distance to the
farthest visible object will be sharp, a considerable increase in depth of field.

You can find the hyperfocal distance by using the depth-of-field scale on the lens. First set the lens to infinity,
then find the nearest distance within the depth of field (the nearest distance that will be sharp) for the f-stop you
are using. Focus on this distance for maximum depth of field. Another method is simply to adjust the distance
scale so the infinity mark is opposite your
f-stop on the depth-of-field scale.

For objects relatively far from the lens, the depth-of-field divides so that about two-thirds is behind the point
focused on while one-third is in front. As you focus closer to the lens, the depth of field becomes more evenly
divided so that at a very close distance (twice the lens focal length) the depth of field is half in front and half
behind the point focused on.
In the picture on the left, the lens was focused on infinity.
Although everything in the far distance is sharp, the nearest
distance in focus is not close enough to include the
foreground, which the photographer also wanted to be sharp.
When the camera is focused on infinity, the nearest distance
that will be sharp (the nearest plane of the depth of field) is
called the hyperfocal distance.

On the right, the depth of field has been increased by focusing
on the hyperfocal distance (done simply by adjusting the
depth-of-field scale so the infinity mark was opposite the f-
stop number being used). The depth of field now begins
closer to the camera (the foreground is acceptably sharp) and
continues to the far distance. When the lens is set at the
hyperfocal distance, everything from half that distance to the
farthest visible object will be sharp.
